A Sense of Place: national survey fails to overcome apathy

Four out of five people are satisfied with their local area as a place to live, a new survey published today by the Department for Communities and Local Government (CLG) has found. But questions about how the survey was designed and carried out, together with embarrassingly low response rates in many areas, suggest we shouldn’t attach too much importance to the findings.

 The Place Survey aims to get a measure of people’s satisfaction with their immediate area – how well it is run, whether the dustbins are emptied, the streets cleaned, the teenagers rowdy, and the police caring and efficient. The results have been published several months later than scheduled, after a disagreement between statisticians over whether they were robust enough to be treated as National Statistics.  
 
While most people have a view on dustbin collections, the Place Survey went a lot further, including questions about a list of 18 “National Indicators”. These include people’s perceptions about whether the community they live in is cohesive - one in which people of different backgrounds get on well together. It is here that the ragged edges of the survey begin to show.
 
A pilot study was carried out to test these questions. It found that many respondents failed to understand what they were being asked, and that only a small percentage would be likely to reply. Despite these discouraging findings, the survey went ahead. Extra questions were added about issues such as satisfaction with refuse collection – the old Best Value indicators, which focused on satisfaction rather than “quality of life”.
 
The results went to the Audit Commission, which CLG had commissioned to administer the survey, but publication was delayed because of concerns about data quality. In the face of the disagreement, an independent reviewer was called in to look at the issue. The survey was carried out by post, not the best way to elicit accurate responses on delicate issues such as race relations.
 
The published results - which are described as Government rather than National Statistics - show that the response rate was very poor in many places. In Manchester and Liverpool, for example, only 28 per cent of the sample filled in the forms at all. Generally, areas where race relations have been an issue responded even less enthusiastically than the rest. In North West England, for example, where the British National Party won a seat at the recent European elections, response rates fell below 30 per cent in many places.
 
In Oldham, just 22 per cent responded to the survey, half of this tiny sample declaring that theirs was not an area in which people from different backgrounds got on well together. In Rochdale, 26 per cent responded; in Salford, 29 per cent; in St Helens, 28 per cent.
 
A reluctance to fill in the questionnaire was also evident in the south and south west. In Luton, 28 per cent filled in the forms; in Plymouth, 28 per cent. And in the London boroughs of Brent, Haringey, Islington, Camden, Hackney, Hammersmith and Fulham, Lambeth, Lewisham, Tower Hamlets, Waltham Forest, Newham, and Southwark response rates were all below 30 per cent. City of Westminster achieved a 23 per cent response rate, equalled by Kensington and Chelsea. Only a tiny fraction of local authorities across England achieved response rates above 50 per cent.
 
This, despite instructions to local authorities to send out two reminder letters, use pre-paid white (not brown) envelopes, offer incentives such as free admission to a local leisure centre as a reward for completing the forms, or even deliver and collect questionnaires by hand. The “worked example” showing how to calculate the response rate came up with an answer of 74 per cent. As if!
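
For anyone who wants to check the arithmetic, a response rate of this sort is simply valid returns divided by the eligible sample. A minimal sketch follows; the figures and the treatment of ineligible addresses are my own illustrative assumptions, not taken from the survey guidance.

```python
# Illustrative sketch only: field names and the handling of ineligible
# addresses are assumptions, not the Place Survey guidance's exact definition.

def response_rate(completed_returns, questionnaires_issued, ineligible=0):
    """Response rate as a percentage of the eligible sample."""
    eligible = questionnaires_issued - ineligible
    return 100.0 * completed_returns / eligible

# Hypothetical figures: 2,000 questionnaires issued, 50 ineligible addresses,
# 1,443 valid returns - which happens to give the guidance's headline 74 per cent.
print(round(response_rate(1443, 2000, 50)))  # -> 74
```

On those assumptions the sums are easy; the hard part, as the figures above show, is getting anywhere near 74 per cent in practice.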
 
The report admits that where response rates are below 30 per cent “some caution may be necessary when using the results to set performance targets (for example as part of a local area agreement) particularly where the target is linked to a financial reward”.
 
It also admits the results were massaged by capping the effect that any individual respondent could have on the overall result, and by applying an “inflation factor” to the confidence intervals “which enabled them to more accurately capture the impact of the survey design and non-response”. Neither of these appears to me to add any credibility to the findings.
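
To make concrete what capping and an inflation factor mean in practice, here is a minimal sketch of the general technique: extreme survey weights are limited so no one respondent dominates, and the confidence interval is widened to allow for the survey design and non-response. The cap of three times the mean weight and the 1.2 inflation factor are my own illustrative choices, not figures from the report.

```python
# Illustrative sketch, not the Audit Commission's actual method: the cap
# (3x the mean weight) and the inflation factor (1.2) are assumptions.
import math

def capped_weights(weights, cap_multiple=3.0):
    """Cap each weight so no single respondent dominates the estimate."""
    cap = cap_multiple * (sum(weights) / len(weights))
    return [min(w, cap) for w in weights]

def weighted_proportion_ci(responses, weights, inflation=1.2):
    """Weighted proportion with a 95% confidence interval widened by an inflation factor."""
    w = capped_weights(weights)
    total = sum(w)
    p = sum(wi for wi, r in zip(w, responses) if r) / total
    n_eff = total ** 2 / sum(wi ** 2 for wi in w)    # effective sample size
    se = math.sqrt(p * (1 - p) / n_eff) * inflation  # inflated standard error
    return p, (p - 1.96 * se, p + 1.96 * se)

# Toy data: 1 = agrees the area is cohesive, 0 = does not.
responses = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
weights = [1.0, 0.8, 1.2, 5.0, 0.9, 1.1, 1.0, 0.7, 1.3, 1.0]
print(weighted_proportion_ci(responses, weights))
```

Both adjustments are respectable enough in themselves; the point is that they widen the error bars rather than repair a sample that largely failed to reply.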
 
Given that this survey is meant to act as a benchmark for local authorities, and may in some cases determine how much money they receive, its inadequacies cannot be dismissed as unimportant.
 
As one disgruntled local authority manager put it: “All this tells you is what a certain segment of the population thinks – the segment who fill in questionnaires. To say it’s a useful performance management measure is ridiculous.”