Let the power of probability be with you

IS healthcare in Britain good, bad, or indifferent? We spend £100 billion a year on it, so it would be nice to know.

The Department of Health has recently launched a new drive to measure quality that will tell us precisely nothing. Why? Because hospital doctors and managers have been asked to choose the measures that will be used to rate the quality of the services they deliver.

That’s like asking students to choose the questions in finals. Or rating films on the quality of the popcorn.

NHS staff can choose from a list of 400-odd Clinical Quality Indicators, ranging from the pretty serious, such as how many patients die after operations, to the pretty banal, such as the percentage of written complaints resolved within 25 days. All of these are already measured, so doctors and managers know exactly how well they are doing on each of them.

What’s to stop them choosing a dozen measures on which they know they will do well? Actually, nothing. When challenged on this point, the Information Centre of the NHS, which is running the consultation, says we should take a “grown-up” view. I think that means please don’t ask awkward questions.

Will managers really have the nerve to choose the indicators that show them in a good light? Of course not. They are far cleverer than that. They know enough about statistics to pick indicators that show them in a bad light.

If you know that targets are being introduced, it really helps to have a bad record in advance. In the first year nobody is going to take any action, however bad the indicator is: it's just a benchmark. For the second and subsequent years, it's a safe bet that the figure will look a lot better simply as a result of reversion to the mean. An anomalously bad score is usually part genuine performance and part bad luck, and the bad luck is unlikely to repeat; so, without anyone doing anything at all, indicators that are anomalously high in year one will tend to fall back towards the average in subsequent years.
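
For the statistically minded, here is a minimal sketch of the effect, in Python. Everything in it is invented for illustration: each imaginary hospital has exactly the same true underlying score, differing only by random noise, yet the worst performers in year one reliably "improve" in year two.

```python
import random

random.seed(1)

N = 200          # hypothetical hospitals (invented for illustration)
TRUE_RATE = 50.0 # every hospital's true underlying indicator score
NOISE = 10.0     # year-to-year random variation

# Two years of scores: the same true rate plus independent noise each year.
year1 = [TRUE_RATE + random.gauss(0, NOISE) for _ in range(N)]
year2 = [TRUE_RATE + random.gauss(0, NOISE) for _ in range(N)]

# The "clever manager" move: pick the 20 hospitals whose year-1 scores
# were highest (i.e. worst), exactly as if choosing bad indicators.
worst = sorted(range(N), key=lambda i: year1[i], reverse=True)[:20]

avg1 = sum(year1[i] for i in worst) / len(worst)
avg2 = sum(year2[i] for i in worst) / len(worst)
print(f"Worst performers, year 1: {avg1:.1f}")
print(f"Same hospitals,   year 2: {avg2:.1f}  (no real change occurred)")
```

The year-two average falls back towards the true rate of 50 purely because the year-one extremes were partly luck. No hospital changed anything.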

That means the right choice of indicators more or less guarantees improvement year on year, without anyone lifting a finger. The analogy is with speed cameras, which are sited where accidents have just peaked. When accidents fall the following year, the local police force claims the cameras are responsible. They may be, to a limited extent. But it is more likely that the peak that triggered the camera placement was an anomaly, and that accidents would have fallen back towards the long-term average anyway.

So a clever NHS manager will have looked at the indicators on offer, and chosen ones where his hospital appears to be doing badly. Then he’ll win points for boldness, and at the same time be confident that the indicators are almost certain to improve, winning him points for good management as well. Give that man a bonus!

The pity is that the exercise will tell us nothing of any value about the quality of care. In its anxiety not to be seen to be imposing any more central targets on managers, the Department of Health has let them write their own rules. Any link between those rules and the actual quality of care is coincidental.

In fairness, it should be added that there will be national indicators as well, to which all hospitals will have to subscribe. But we’ve had those for years, without eliminating differences in quality between hospitals, or necessarily raising quality overall. Targets are there to be gamed. But it’s an even better game when you can pick your own targets and let the laws of chance work in your favour.