Hospital mortality – the genie is out of the bottle
The Health Secretary, Andrew Lansley, is very keen on measuring healthcare by its outcomes, and there’s no more unambiguous outcome than dying.
Hence the popularity of Hospital Standardised Mortality Ratios (HSMRs), which compare hospitals by how many patients die in their care, duly corrected for confounding factors such as severity, co-morbidity, and socio-economic status. The healthcare analysis company Dr Foster uses HSMRs as part of its annual Good Hospital Guide – though its conclusions don’t always match those of the Care Quality Commission.
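In essence, an HSMR is the ratio of the deaths a hospital actually records to the deaths a statistical risk model says it should expect, scaled so that 100 is the national average. Here is a minimal sketch of that arithmetic – invented figures, not Dr Foster’s actual model:

```python
# A minimal sketch of the HSMR arithmetic (invented figures, not Dr Foster's
# actual model). All the contention lives in the risk model that supplies each
# admission's predicted probability of death; here it is simply assumed as input.

def hsmr(observed_deaths: int, predicted_risks: list[float]) -> float:
    """Observed deaths / expected deaths, scaled so 100 = national average."""
    expected_deaths = sum(predicted_risks)  # deaths the risk model predicts
    return 100 * observed_deaths / expected_deaths

# Hypothetical hospital: 5,000 admissions, each with a modelled 2% risk of death
risks = [0.02] * 5000     # expected deaths = 100
print(hsmr(127, risks))   # -> 127.0, i.e. 27% more deaths than the model expects
```

Change the risk model and the expected figure changes, and the ratio with it – which, as we shall see, is exactly where rival vendors diverge.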
Straight Statistics ran a lengthy piece this spring that attempted to get behind the mystery of how hospitals rated by CQC as poor could be rated by Dr Foster as among the most improved. This discrepancy had so embarrassed the Department of Health that it had set up an expert review to try and devise a bullet-proof version of HSMRs that everybody could sign up to.
Health Service Journal reports today that the review did put forward a new methodology, to be called the Summary Hospital-level Mortality Indicator (SHMI). So RIP HSMRs, long live SHMIs.
How SHMIs differ from HSMRs we do not yet know. Maybe they’ve simply changed their name by deed poll.
And we may not know for a while, because the review reported to the NHS National Quality Board that more work was needed to complete the development of SHMIs, including details of the methodology, statistical modelling, technical commentary and guidance. There will also be further discussions with ‘stakeholders’ to test any remaining concerns. This year’s Good Hospital Guide, due in November, won’t incorporate them.
A lot of statisticians believe that trying to attach a single number to a hospital’s performance is dodgy. Reading between the lines, that’s what the NHS review said, too, but concluded that there was no realistic chance of rebottling the genie.
In the US, HSJ also reports, the state of Massachusetts has just decided against publishing hospital-level mortality data. Dr JudyAnn Bigby, the state’s health and human services secretary, who chaired the expert panel that examined the question, said that the current methodology is so flawed that the panel did not believe it would be useful to hospitals and patients, and could harm public trust in government.
The review compared software from four potential vendors. All were tested on Massachusetts data from 2004 to 2007 and all came out with vastly different results.
“In every year there were at least a couple of hospitals ranked as having low mortality with one vendor and high mortality with another,” Dr Kenneth Sands, a member of the panel, told The Boston Globe. “That hospital could either be eviscerated or rewarded, depending on which vendor you choose.” Any of this sound familiar?
Another member of the panel, Deborah Wachenheim, said hospital-wide death rates were not ready for prime time. “You want information out there, but you want to make sure it’s good information.”
The Department of Health here said that, like the review panel in Massachusetts, its experts had given careful consideration to the usefulness of such a metric to the public. “Whilst our review raised similar points about how the public could meaningfully use this information, the department also recognises the ‘public interest’ nature of the indicator, which has been available for the NHS in England for a number of years.”
So while Massachusetts started with a clean slate, the DH didn’t. Withdrawing HSMRs would have been like giving a bear a bun through the bars of its cage, and then trying to get it back again (my words, not theirs). There would have been claims of a cover-up. Dr Foster could still have continued supplying its own figures, anyway.
Yet we know that the NHS has had exactly the same experience as the Massachusetts panel, when attempts by another company to replicate the Dr Foster results failed.
The danger is that of being in the worst of all worlds, where the NHS validates a methodology, making it official and hence more credible, but without cast-iron evidence that it represents a true and reproducible test of hospital performance. Let’s hope the review panel has avoided that trap.
Whatever SHMIs turn out to be, we are bound to be told that they are only one measure, to be considered in the context of lots of others. But journalists don’t think like that. A single number that can be attached to a hospital and used to create a league table trumps any number of caveats. The genie is well and truly loose.
Anonymous (not verified) wrote,
Fri, 10/09/2010 - 10:00
Nigel
- so is your report above a) exposing bad practice or b) rewarding good? All you've done is report that this is difficult to measure reliably, not just in the UK but elsewhere; that there has been a methodological review that you've not seen; and that you hope it gets things right, while giving publicity to a particular private company. You seem to be implying it's taking too long, suggesting that the review may have attempted to just fiddle by changing the name of a current set of flawed indicators to another name, but without any apparent evidence to support your accusations. Don't you think that if further work is needed, as recommended by a [presumably expert] review panel, it should be done? Or that stakeholders - presumably including those hospitals who are to be measured but also users of data - should be consulted, i.e. involved in the development of new methodology?
If the review's recommendations and new proposals on methods don't get made public and consulted on, you would have a reasonable beef, but until then you seem just to be speculating negatively and accusing without evidence, and thereby damaging trust in official statistics. Or do you have evidence that the same officials have deliberately fiddled their indicators in the past?
It would also be better if you took on board the comments in response to your previous article, which make the very good point that individual indicators [as with other league tables] are never sufficient without an accompanying proper inspection regime. This is because those providing data on their own institution for public scrutiny [whether schools, hospitals, LAs, police forces, or others] will often have incentives to 'game' or fiddle their numbers to some extent.
anon
[I do not work in healthcare statistics, nor ever have done, but I can tell misleading journalism when I see it]
Richard Blogger (not verified) wrote,
Fri, 10/09/2010 - 10:35
HSMR is a hot topic because the Press do not have a clue what it means: they assume that if the HSMR is greater than 100 then there must be some form of euthanasia going on. The Francis inquiry on Mid Staffs has a whole section on this and debunks the whole "400-1200 patients were killed there" scare story. In fact it is interesting to read the relevant sections:
Section G para 179 says
"Mr Yeates noted that in 2007/08 concern had been raised about mortality and the Hospital Standardised Mortality Ratio (HSMR) of 127 for 2005/06. He said that reports from Dr Foster Intelligence and CHKS confirmed that overall mortality was within the national average: and therefore the focus of attention was related to data capture and coding."
An HSMR of 127 is regarded as high, but the last sentence is important: if you look at the actual figures, the mortality rates are within the statistical spread expected (all on the low side, but still statistically "average"). The problem was not wide-scale euthanasia at the hospital but poor measurement of statistics. But Section G para 50 is very important. The HSMR for 2005/06 was 127 and for 2008/09 was 89.6. The Francis inquiry asked how this improvement happened; the reply (from Mr Sumara) was:
"I think there are four elements in why Dr Foster is different… which I have no evidence for and I can’t give you any detail. One is that the coding is just better now. The second one is we don’t do strokes any more. The third one is we don’t do MIs [myocardial infarctions] any more and the fourth one is actually because we have improved that emergency care pathway, your chances are you will get to see the right doctor quickly if you are medically ill. I think that will make a big difference to outcomes eventually. But I have got no evidence to say that has done the trick. In many ways do I care because all I am interested in is can I get it right every time? It is a bit of reassurance."
Note: better coding (so they are recording more accurately why people died). But more significantly, the hospital no longer treats specific groups of people who are likely to die (“we don’t do strokes any more”, “we don’t do MIs any more”); those people become someone else’s problem. If HSMR were an accurate measurement, then not doing strokes and MIs would have no effect on the HSMR value. But that is not what this witness is saying.
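To make the point concrete, here is a toy calculation (all figures invented) of what should happen, and what the witness implies actually happened, when a hospital stops treating a high-risk group:

```python
# Toy illustration (invented numbers) of the case-mix point above. If the risk
# model priced every group correctly and the hospital performed uniformly,
# dropping strokes/MIs would remove observed and expected deaths in equal
# measure and leave the ratio untouched. The ratio only falls if that group
# carried an excess of deaths over what the model expected.

def hsmr(observed: float, expected: float) -> float:
    return 100 * observed / expected

# General admissions: 80 deaths observed vs 80 expected (on target)
# Strokes/MIs:        35 deaths observed vs 20 expected (excess over the model)
print(hsmr(80 + 35, 80 + 20))   # -> 115.0 with strokes/MIs included
print(hsmr(80, 80))             # -> 100.0 after "we don't do strokes any more"
```

On a perfectly calibrated measure those two figures would be identical; the witness's account suggests they were not.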