Hospital mortality – the genie is out of the bottle

The Health Secretary, Andrew Lansley, is very keen on measuring healthcare by its outcomes, and there’s no more unambiguous outcome than dying.
 
Hence the popularity of Hospital Standardised Mortality Ratios (HSMRs), which compare hospitals by how many patients die in their care, duly corrected for confounding factors such as severity, co-morbidity, and socio-economic status. The healthcare analysis company Dr Foster uses HSMRs as part of its annual Good Hospital Guide – though its conclusions don’t always match those of the Care Quality Commission.
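For anyone who wants the arithmetic, a standardised mortality ratio is simply observed deaths divided by the deaths a risk-adjustment model would have expected, scaled so that 100 means “as expected”. The sketch below uses invented patient data and a bare-bones logistic regression (nothing like the real Dr Foster or Department of Health models) purely to show the shape of the calculation.

```python
# Toy illustration of a standardised mortality ratio.
# Invented data and a minimal risk model, not any real HSMR/SHMI methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Made-up patient records: age, a co-morbidity count, a deprivation score,
# which hospital treated them, and whether they died in hospital.
n = 5000
X = np.column_stack([
    rng.normal(70, 10, n),   # age
    rng.poisson(2, n),       # co-morbidity count
    rng.uniform(0, 1, n),    # deprivation index
])
hospital = rng.integers(0, 3, n)
true_risk = 1 / (1 + np.exp(-(-6 + 0.05 * X[:, 0] + 0.3 * X[:, 1] + X[:, 2])))
died = rng.binomial(1, true_risk)

# Fit a risk model on all patients, then compare each hospital's observed
# deaths with the deaths the model expected given its case mix.
model = LogisticRegression(max_iter=1000).fit(X, died)
expected = model.predict_proba(X)[:, 1]

for h in range(3):
    mask = hospital == h
    smr = 100 * died[mask].sum() / expected[mask].sum()
    print(f"Hospital {h}: standardised mortality ratio = {smr:.0f}")
```

The fight, of course, is over what goes into the risk model and how the expected deaths are calculated – which is exactly where, as we shall see, the different vendors part company.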
 
Straight Statistics ran a lengthy piece this spring that attempted to get behind the mystery of how hospitals rated as poor by the CQC could be rated by Dr Foster as among the most improved. This discrepancy had so embarrassed the Department of Health that it had set up an expert review to try to devise a bullet-proof version of HSMRs that everybody could sign up to.
 
Health Service Journal reports today that the review did put forward a new methodology, to be called the Summary Hospital-level Mortality Indicator (SHMI). So RIP HSMRs, long live SHMIs.
 
How SHMIs differ from HSMRs we do not yet know. Maybe they’ve simply changed their name by deed poll.  
 
And we may not know for a while, because the review reported to the NHS National Quality Board that more work was needed to complete the development of SHMIs, including details of the methodology, statistical modelling, technical commentary and guidance. There will also be further discussions with ‘stakeholders’ to test any remaining concerns. This year’s Good Hospital Guide, due in November, won’t incorporate them.
 
A lot of statisticians believe that trying to attach a single number to a hospital’s performance is dodgy. Reading between the lines, that’s what the NHS review said, too, but concluded that there was no realistic chance of rebottling the genie.
 
In the US, HSJ also reports, the state of Massachusetts has just decided against publishing hospital-level mortality data. Dr JudyAnn Bigby, the state’s health and human services secretary, who chaired the panel that reviewed the issue, said the current methodology was so flawed that the panel did not believe it would be useful to hospitals and patients, and that publishing it could harm public trust in government.
 
The review compared software from four potential vendors. All were tested on Massachusetts data from 2004 to 2007, and all produced vastly different results.
 
“In every year there were at least a couple of hospitals ranked as having low mortality with one vendor and high mortality with another,” Dr Kenneth Sands, a member of the panel, told The Boston Globe. “That hospital could either be eviscerated or rewarded, depending on which vendor you choose.” Any of this sound familiar?
 
Another member of the panel, Deborah Wachenheim, said hospital-wide death rates were not ready for prime time. “You want information out there, but you want to make sure it’s good information.”
 
The Department of Health here said that, like the review panel in Massachusetts, its experts had given careful consideration to the usefulness of such a metric to the public. “Whilst our review raised similar points about how the public could meaningfully use this information, the department also recognises the ‘public interest’ nature of the indicator, which has been available for the NHS in England for a number of years.”
 
So while Massachusetts started with a clean slate, the DH didn’t. Withdrawing HSMRs would have been like giving a bear a bun through the bars of its cage, and then trying to get it back again (my words, not theirs). There would have been claims of a cover-up. And Dr Foster could have carried on supplying its own figures anyway.
 
Yet we know that the NHS has had exactly the same experience as the Massachusetts panel: attempts by another company to replicate the Dr Foster results failed.
 
The danger is that of being in the worst of all worlds, where the NHS validates a methodology, making it official and hence more credible, but without cast-iron evidence that it represents a true and reproducible test of hospital performance. Let’s hope the review panel has avoided that trap.
 
Whatever SHMIs turn out to be, we are bound to be told that they are only one measure, to be considered in the context of lots of others. But journalists don’t think like that. A single number that can be attached to a hospital and used to create a league table trumps any number of caveats. The genie is well and truly loose.