Desperate Hubris: the costly, ineffectual risk-sharing scheme for multiple sclerosis

Seven years ago, after strong media, clinical and patient pressure, the Government negotiated a “risk-sharing” deal over access to drugs to treat multiple sclerosis.

The National Institute for Health and Clinical Excellence (NICE) had decided not to recommend the drugs - the beta-interferons Avonex, Betaferon and Rebif, and glatiramer acetate (Copaxone) - for MS because they did not meet its criteria of cost-effectiveness. After a storm of protests, the Department of Health negotiated a deal by which patients gained access to the medicines in return for the reimbursement of costs if results fell below the manufacturers’ expectations.

This month in the British Medical Journal, Mike Boggild and colleagues reported the 2-year outcomes for 3,686 (78 per cent; the ‘per protocol analysis’ cohort) of the 4,749 patients with relapsing-remitting multiple sclerosis who entered the risk-sharing scheme between May 2002 and April 2005. At the time of analysis, only 1,479 of them (40 per cent) had a valid year-2 disability progression score.
 
The progression of disability was worse than was required for the drugs to show cost-effectiveness below £36,000 per quality-adjusted life year (QALY), let alone below £20,000. The department had set a liberal threshold of £36,000 per QALY in determining the price reductions that pharmaceutical companies would have to make.
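In code, that comparison is a single division. The Python sketch below uses invented cost and QALY figures, purely to illustrate how worse-than-expected progression, by shrinking the QALYs gained, pushes the cost per QALY above the thresholds quoted above; none of the numbers are the scheme's own.

    def cost_per_qaly(total_drug_cost, qalys_gained):
        # Cost-effectiveness ratio: total cost divided by QALYs gained.
        return total_drug_cost / qalys_gained

    total_cost = 100000.0  # hypothetical drug cost per patient (pounds)

    # Progression slowed as the model expected: more QALYs gained.
    print(cost_per_qaly(total_cost, qalys_gained=3.0))  # about 33,300 per QALY, below 36,000
    # Progression worse than expected: fewer QALYs gained.
    print(cost_per_qaly(total_cost, qalys_gained=2.0))  # 50,000 per QALY, above the threshold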
 
Sensitivity analyses around how disability progression scores had been manipulated to conform to (historical) model assumptions gave a bewildering range of answers - so much so that the authors do not say whether there has, indeed, been payback to the department for excess costs. Or has a new interpretation of ‘tolerance’ been invoked to let the pharmaceutical companies off?
 
The risk-sharing scheme was set up despite an exceptional intercalated study to establish why the cost-effectiveness estimates for multiple sclerosis drugs had ranged impossibly widely. After this, NICE’s Appraisal Committee, of which I was a member, was confident in its rejection of the drugs as not cost-effective.
 
The provision of follow-up data on disability scores was a condition of the patients’ receiving medication paid for by the NHS despite its rejection by NICE. Follow-up at 2 years is poor, at barely 50 per cent, even when 322 non-valid disability (EDSS) scores are added back in.
 
Moreover, the cohort’s EDSS scores frankly contradicted the London, Ontario (and modelled) assumption that an individual’s disability score can only worsen over time. Whether due to measurement error or for more fundamental reasons, the assumption did not hold in practice. Instead of either accessing the unsmoothed Canadian data or invoking formal statistical techniques to check whether the data were consistent with a patient’s underlying trajectory being progressive, ad hoc data fixes were invoked to make the model still work. Is this proper science?
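By way of illustration only, a formal check of that kind need not be elaborate. The Python sketch below simply counts visit-to-visit improvements larger than an assumed measurement-error allowance of half an EDSS step; the trajectories, the tolerance and the function are hypothetical, not the scheme's actual method.

    def reversals(edss_scores, tolerance=0.5):
        # Count visit-to-visit improvements larger than the allowed tolerance
        # (half an EDSS step is assumed here as the measurement-error margin).
        return sum(1 for earlier, later in zip(edss_scores, edss_scores[1:])
                   if earlier - later > tolerance)

    # Hypothetical annual EDSS readings for two patients, not real scheme data.
    patients = {
        "A": [2.0, 2.5, 3.0, 3.0],  # consistent with monotone worsening
        "B": [3.5, 3.0, 4.0, 3.0],  # one improvement exceeds the half-step margin
    }

    for patient, scores in patients.items():
        n = reversals(scores)
        verdict = ("consistent with monotone worsening" if n == 0
                   else f"{n} reversal(s) beyond the tolerance")
        print(f"Patient {patient}: {verdict}")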
 
The “deviation” of actual benefit from model-expected benefit is expressed as a percentage of expected benefit. This deviation measure is calculated every two years and used as the basis for possible price adjustments. If the shortfall between actual and expected benefit exceeds an agreed “tolerance margin” (20 per cent at year two, 10 per cent at subsequent review points), the department will renegotiate treatment prices to maintain cost-effectiveness at £36,000 per QALY.
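That arithmetic can be set out in a few lines. The Python sketch below encodes the deviation measure and tolerance margins as described above; the benefit figures are invented for illustration and are not scheme results.

    def shortfall_percentage(expected_benefit, actual_benefit):
        # Shortfall of actual from expected benefit, as a percentage of expected.
        return 100.0 * (expected_benefit - actual_benefit) / expected_benefit

    def price_renegotiation_due(expected_benefit, actual_benefit, review_year):
        # Tolerance margins as stated above: 20% at year two, 10% thereafter.
        margin = 20.0 if review_year == 2 else 10.0
        return shortfall_percentage(expected_benefit, actual_benefit) > margin

    # Hypothetical year-2 review: expected benefit 1.00 (arbitrary units),
    # actual benefit 0.70, i.e. a 30% shortfall, which exceeds the 20% margin.
    print(price_renegotiation_due(expected_benefit=1.0, actual_benefit=0.7, review_year=2))  # True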
 
With data gone, model gone, statistics gone and cohort gone, what price renegotiation has the department actually undertaken? How much did it invest in the research cohort and its analysis? Too little in experimental design, that’s for sure!
 
Had the Department of Health funded a national cost-effectiveness trial, in which patients with relapsing-remitting multiple sclerosis were randomized (once only) to NHS-funded drug treatment or no NHS-funded drug treatment, then the UK might by now have had robustly analysable data on nearly 5,000 patients with which to inform the decision to treat.

Sheila M. Bird is from the MRC Biostatistics Unit, Cambridge CB2 0SR.