£6 million Blueprint down the drain

In Saturday’s Guardian, Ben Goldacre dismembered a Home Office study designed to evaluate a drug education project in schools.

Blueprint – unfortunately named, as it allowed the article to be headlined “A Blueprint for how not to do research” – was an intervention project intended to help meet the Government’s target of reducing the use of Class A drugs in schools. It included drug education lessons for pupils aged 11 and 12, together with the involvement of parents, the media, health policy and the community.
 
Enthusiasts for Blueprint expected effects large enough that a modest improvement in the schools where it was trialled could be detected when compared with control schools. But this was over-optimistic. Plausible effect sizes were smaller, so the number of schools that would need to be randomized in an adequately powered cluster-randomized trial of the intervention was shown in 2002 to be too high to be affordable.
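To see why smaller, more plausible effect sizes make the required number of schools balloon, here is a minimal sketch of a standard sample-size calculation for a cluster-randomized trial, inflating an individually randomized sample size by the usual design effect. Every number in it – baseline prevalence, effect sizes, school size, within-school correlation – is an illustrative assumption of mine, not Blueprint’s actual figures.

    from statistics import NormalDist
    import math

    def clusters_per_arm(p_control, p_treated, icc, cluster_size,
                         alpha=0.05, power=0.8):
        """Schools (clusters) per arm for a two-arm cluster-randomized trial
        comparing two proportions, using the standard design-effect inflation."""
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)
        z_beta = z.inv_cdf(power)
        # Sample size per arm under individual randomization (two-proportion z-test)
        variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
        n_individual = (z_alpha + z_beta) ** 2 * variance / (p_control - p_treated) ** 2
        # Inflate by the design effect for clustering, then convert pupils to schools
        deff = 1 + (cluster_size - 1) * icc
        return math.ceil(n_individual * deff / cluster_size)

    # Illustrative numbers only: 15% baseline prevalence, 200 pupils per school,
    # within-school correlation 0.02.
    print(clusters_per_arm(0.15, 0.10, 0.02, 200))  # optimistic 5-point effect: ~18 schools per arm
    print(clusters_per_arm(0.15, 0.13, 0.02, 200))  # plausible 2-point effect: ~118 schools per arm

With the optimistic effect size a few dozen schools suffice; with the plausible one the trial needs hundreds – which is the affordability problem in a nutshell.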
 
The right response at the time would have been to recognize that the Blueprint intervention was unlikely ever to be cost-effective, as the Home Office advisers (including me) recommended. Then it would have been possible to direct researchers’ attention, and public money, at another sort of intervention.
 
This might, for example, have been one that focused resources on a subset of young people who, on the basis of a few characteristics knowable by schools, were at higher risk of future drug use. For them, we suggested, a suitably targeted intervention – with pay-offs in terms of health, attendance at school and reduced criminality – might have been cost-effective and could have been robustly evaluated.
 
Good research requires good ideas. Both a proposed intervention and the design for its evaluation need to work for there to be a practical advance.
 
As Ben Goldacre pointed out, knowledge is of course gained when a well-designed study demonstrates that an intervention which passed muster for evaluation ultimately fails to deliver in terms of outcomes that matter.
 
However, the smarter the science, the better it is at picking winners for evaluation. We can’t afford to evaluate everything. What we do evaluate should be selected well; and formal evaluations should be done properly in terms of their experimental design.
 
The Home Office, now the lead department across government for drugs science, deserves credit for at least acknowledging last week that Blueprint, taken on much earlier despite statistical advice to the contrary, had no hope of being a decent evaluation design.
 
But this acknowledgement also points the finger at researchers who undertake contracts to do the un-doable; at policy-makers who offer such contracts; and at ministers, too eager to be “doing something” and unwilling to acknowledge uncertainty prudently, who give the go-ahead for their favoured ideas to be piloted. The cumulative cost of such ministerial kite-flying could be considerable if Blueprint is any guide . . . which I sincerely hope it isn’t for drugs science in the future.
 
Conflict of interest: I was part of the trio who gave statistical advice in 2002 on Blueprint. I currently chair the Home Office’s Surveys, Design and Statistics Subcommittee.