Dog eats dog on The Guardian
An entertaining spat has broken out between Ben Goldacre, author of the Bad Science column in The Guardian, and James Randerson, environment and science news editor of The Guardian.
It relates to a study that Dr Goldacre and colleagues undertook of the accuracy of health claims made in British newspapers. Their paper, published in Public Understanding of Science, found that more than two thirds of the claims were based on poor evidence.
Dr Randerson took exception to this in a feature article, claiming the study gave a false impression. The sample was small – a single week’s output – and the week chosen was untypical, because it was the week Barack Obama was elected US President. Of the claims examined, a disproportionate number came from the Daily and Sunday Express.
The argument between the two rages on – read the reams of comments appended to Randerson’s article, including several by Goldacre – and I don’t propose to get involved. But there are plenty of examples on this website of poor reporting, and it’s undeniably true that many of the dietary claims made in newspapers fall short of the highest standards of evidence. That’s largely because nutrition is an inexact science: prospective randomised trials are hard or impossible to do, so a large part of the literature is based on case-control studies, with all their acknowledged defects. In their flawed stories, journalists are reflecting an imperfect science.
There is, in addition, the problem originally identified by the economist Steven Landsburg. He wrote: “If a prestigious journal publishes a theory, it’s probably wrong”. How so? Given two equally plausible theories from equally credible sources that have passed equally strict scrutiny, the one that gets published has a smaller chance of being right. That’s because editors like to publish theories they find surprising. And the best way to surprise an editor is to be wrong.
This isn’t because journal editors are dishonest. More papers survive peer review than journals have space to publish. Given a choice, editors prefer the ones with surprising conclusions, which means that on average they prefer the ones that are wrong.
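To see how that selection effect works, here is a minimal sketch in Python (not from Landsburg’s argument itself, and using made-up truth rates purely for illustration): surprising submissions are assumed to be true less often than unsurprising ones, and the editor favours the surprising ones, so the published record ends up less reliable than the submissions as a whole.

# Illustrative simulation of the selection argument.
# The 20%/60% truth rates and the editor's preferences are assumptions,
# chosen only to make the arithmetic visible.
import random

random.seed(1)

published_true = 0
published_total = 0

for _ in range(100_000):                      # hypothetical submissions
    surprising = random.random() < 0.5        # half the submissions are surprising
    true_claim = random.random() < (0.2 if surprising else 0.6)
    # The editor publishes every surprising paper, but only one in five dull ones.
    if surprising or random.random() < 0.2:
        published_total += 1
        published_true += true_claim

print(f"Truth rate among published claims: {published_true / published_total:.2f}")
# Roughly 0.27, against a truth rate of 0.40 across all submissions.

Nothing in the sketch requires dishonesty; the bias comes entirely from what gets selected.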
So what should journalists do if a journal publishes a study making an implausible claim? Act as gatekeepers to keep this disreputable information from the public? In an ideal world, perhaps so. But when I was a journalist on The Times, I found that the newsdesk’s complaint about missing a story could seldom be brushed aside by my assurance that the story was rubbish. That invited the retort: “What do you mean, it’s rubbish? It’s on page one of The Daily Telegraph.”
The structure of news stories militates against the conveyance of doubt. You don’t see many stories beginning: “A small, poorly designed case-control study has produced unconvincing evidence that eating rice pudding increases the risk of lung cancer” – though I did once read a study that precisely met this description.
There’s another issue that relates to the order in which evidence tends to be published. The randomised controlled trial (RCT) is usually the end-point of a process that begins with an ecological study (e.g. “There are many fewer deaths from heart disease in France than in the UK. Why?”) and is followed by a case-control study and finally, if you’re lucky, an RCT.
A classic example of this was the long-held belief that taking antioxidant vitamins staved off cancer. An important finding, if true. Unfortunately, when an RCT was done it was found not to be true, even though in the meantime a whole industry of food supplements had levered itself into existence on the back of the earlier studies.
What this means is that the first news of a finding is usually based on weak science. If it’s new, it probably isn’t true. If it’s true, it probably isn’t new. You simply can’t expect journalists to keep their mouths shut until an RCT is finally performed and gold-standard evidence is available. They would be failing their readers by not reporting the earlier studies, flawed or not.
However, as I’ve discovered on this website, it’s much easier to criticise from the outside than to do the job well on the inside. You have longer to reflect, no newsdesk to satisfy, no hostages to fortune. It’s easy to become a tiny bit sanctimonious about the failings of others, something I strive to avoid.
So in this row I have sympathy for both sides. Journalists should try harder to read the data, not only the abstract, and certainly not only the press release. They shouldn’t be so beholden to the journals, some of which specialise in publishing the implausible. They should include caveats. But they can’t be expected to generate perfect results from the banquet of uncertainty laid before them. Readers mostly understand this.
Declaration of interest: Ben Goldacre is a member of the Executive Committee of Straight Statistics.
christopher wrote,
Wed, 06/07/2011 - 08:50
Agree with all that. Trouble is most journalists are not schooled in statistics, and most "surveys" land on their desks in the form of ready-to-cut-and-paste press releases, often from sources in the business of talking up how important their own work is.
What's needed is a very simple set of interpretive guidelines to help editors and writers form a critical view of reports and findings before rushing into print – maybe reduced to an easy-to-recall mnemonic. Starting with "M" – for the Mandy Rice-Davies attitude to evidence! (they would say that...)
Maybe a challenge for your readers?