
Why Science Can't Be Trusted

Funnel graph for a meta-analysis of studies involving Cognitive Behavioral Therapy.
I was shocked yet greatly intrigued to encounter a paper in PLoS Medicine called "Why Most Published Research Findings Are False," by John P. A. Ioannidis, professor and chairman of the Department of Hygiene and Epidemiology, University of Ioannina School of Medicine. (It appears both John and the university are named after the city of Ioannina, Greece.)

Far from a kook or an outsider, Ioannidis is regarded as one of the world’s foremost experts on the credibility of medical research. Using a variety of accepted techniques, he and his team have shown that much of what biomedical researchers publish as factual scientific evidence is almost certainly misleading, exaggerated, and/or flat-out wrong.

Ioannidis alleges that as much as 90 percent of the medical literature that doctors rely on is seriously flawed. His work has been published in top journals (where it is heavily cited) and he is reportedly a big draw at conferences. Ioannidis's efforts were given very serious consideration in a 2010 Atlantic article called "Lies, Damned Lies, and Medical Science."

It turns out quite a bit of work has been done on statistical methods for detecting publication bias. (Some of the techniques are summarized here.) There are now such accepted methodologies as Begg and Mazumdar's rank correlation test, Egger's regression, Orwin's method, "Rosenthal's file drawer," and the now widely used "trim and fill" method of Duval and Tweedie. Amazingly, at least four major software packages are available to help researchers doing meta-analyses detect publication bias. Read about it all here.
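To make the idea concrete, here is a minimal sketch (in Python, with made-up numbers) of the rank-correlation idea behind the Begg and Mazumdar test: check whether effect sizes tend to rise as standard errors rise. The real test standardizes each effect against the pooled estimate first, so treat this only as an illustration of the core idea.

```python
# Simplified sketch of the Begg-Mazumdar rank-correlation idea, with
# made-up numbers: do bigger effects go hand in hand with bigger
# standard errors? A significant positive correlation hints at
# publication bias. (The full test standardizes each effect against
# the pooled estimate first; this shows only the core idea.)
from scipy.stats import kendalltau

effects  = [0.62, 0.48, 0.35, 0.30, 0.21, 0.18]   # hypothetical effect sizes
std_errs = [0.30, 0.25, 0.18, 0.15, 0.08, 0.05]   # their standard errors

tau, p_value = kendalltau(effects, std_errs)
print(f"Kendall's tau = {tau:.2f}, p = {p_value:.3f}")
```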

Some kind soul has put on Scribd a complete copy of the 2004 book Methods of Meta-Analysis: Correcting Error and Bias in Research Findings, by John E. Hunter and Frank L. Schmidt. Because the book is copyrighted, I don't expect the Scribd URL to stay valid for long; the book will probably be taken down at some point, but hopefully not before you've had a chance to go there.

There are many factors to consider when looking for publication bias. Consider trial size. People who do meta-analyses of the scientific literature have long wanted a reasonable way of compensating for the size of individual studies: if you give small studies (which often have large variances in their results) the same weight as larger, more statistically powerful studies, a handful of small studies with large effect sizes can unduly swing the overall average effect size. This is especially worrisome when small studies showing a negative result are simply withheld from publication; and the studies most likely to go unpublished do seem to be those with negative results.
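The usual fix is to weight each study by the inverse of its variance, so that noisy little studies cannot dominate the pooled estimate. Here is a minimal sketch with invented numbers showing how the weighted average differs from a naive mean when a couple of small studies report large effects.

```python
# Minimal sketch of inverse-variance weighting (a fixed-effect model):
# each study gets weight 1/SE^2, so small, noisy studies count for far
# less than they would in a naive unweighted mean. Numbers are invented.
import numpy as np

effects  = np.array([0.15, 0.18, 0.20, 0.75, 0.90])   # last two are small studies
std_errs = np.array([0.05, 0.06, 0.07, 0.30, 0.35])   # with big standard errors

weights = 1.0 / std_errs**2
pooled  = np.sum(weights * effects) / np.sum(weights)

print(f"unweighted mean effect:         {effects.mean():.3f}")  # pulled up by the small studies
print(f"inverse-variance pooled effect: {pooled:.3f}")
```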

If you do a meta-analysis of a large enough number of studies and plot the effect size on the x-axis and the standard error on the y-axis (giving rise to a "funnel graph"; see the above illustration), you expect to find a more-or-less symmetrical distribution of results around some average effect size, or failing that, at least a roughly equal number of data points on each side of the mean. For large studies, the standard error will tend to be small, and the data points will sit high on the graph (the y-axis is conventionally inverted, with large standard errors at the bottom and small ones at the top; see the illustration above). For small studies, the standard error tends, of course, to be large.
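Here is a rough sketch of what building such a plot looks like, using simulated data (no real meta-analysis behind these numbers):

```python
# Minimal sketch of a funnel plot: effect size on the x-axis, standard
# error on an inverted y-axis so the big, precise studies sit at the top.
# The data are simulated with no publication bias, so the funnel comes
# out roughly symmetrical around the true effect.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect = 0.3
std_errs = rng.uniform(0.02, 0.4, size=80)           # a spread of study sizes
effects  = true_effect + rng.normal(0.0, std_errs)   # each study's estimate

plt.scatter(effects, std_errs, alpha=0.6)
plt.axvline(true_effect, linestyle="--")
plt.gca().invert_yaxis()        # small standard errors (large studies) at the top
plt.xlabel("effect size")
plt.ylabel("standard error")
plt.title("Simulated funnel plot, no publication bias")
plt.show()
```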

What meta-analysis experts have found is that quite often, the higher the standard error (which is to say, the smaller the study), the more likely the study in question is to report a strongly positive result. So instead of a funnel graph with roughly equal numbers of data points on each side (which is what you expect statistically), you get a graph that's visibly lopsided to the right, suggesting publication bias from non-publication of "bad results." Otherwise, how do you account for the points mysteriously missing from the left side of a graph that should, by statistical odds, be roughly symmetrical?
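One common way to quantify that lopsidedness is Egger's regression test. A rough sketch, again with invented numbers: regress each study's standardized effect (effect divided by its standard error) on its precision (one over the standard error); an intercept far from zero signals asymmetry.

```python
# Rough sketch of Egger's regression test, with invented numbers:
# regress each study's standardized effect (effect / SE) on its
# precision (1 / SE). In an unbiased funnel the intercept should be
# near zero; an intercept significantly different from zero points
# to asymmetry.
import numpy as np
import statsmodels.api as sm

effects  = np.array([0.62, 0.48, 0.35, 0.30, 0.25, 0.21, 0.19, 0.18])
std_errs = np.array([0.30, 0.25, 0.18, 0.15, 0.10, 0.08, 0.06, 0.05])

y = effects / std_errs                 # standard normal deviates
X = sm.add_constant(1.0 / std_errs)    # precision, plus an intercept column

fit = sm.OLS(y, X).fit()
print(fit.summary())                   # check the intercept ("const") and its p-value
```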

Small studies aren't always the culprits. Some meta-analyses, in some research fields, show funnel-graph asymmetry at the top of the funnel as well as the bottom; in other words, across all study sizes. Data points are missing on the left side of the funnel, which is hard to account for in a distribution that should show points on both sides in roughly equal numbers. The only realistic explanation is publication bias.

So is Dr. Ioannidis right? Are most published research findings false? I don't think we have to go that far. I think it's reasonable to say that most papers are probably showing real data, obtained legitimately. But we also have to admit there is a substantial phantom literature of unpublished data out there; and we should definitely be concerned by that. Suppression of data can cost lives. So this is far from a trivial matter.

I've seen publication bias with my own eyes, in graduate school. A postdoc in my lab at U.C. Davis was trying to get a paper published in the prestigious Journal of Biological Chemistry. The paper this guy wanted to write revolved around a certain graph. I watched this person repeat his experiment numerous times in order to get a data set that exactly matched the results he wanted to show on the graph. He must have done the experiment nine times, and only once did it come out "perfect." (And that's what he submitted to JBC.) That's eight suppressed data sets. One published data set.

I saw enough of this as a grad student to turn me off to science altogether. I quit the program about six months after being advanced to candidacy for a Ph.D.; gave up a Regents Fellowship, walked away from NIH money. I was young and impetuous. "Real-world science is bullshit," I said to myself. "It's fraudulent, front to back."

I never thought anyone would blow the whistle on how science really operates. I'm gratified to see that people like John Ioannidis are now busy doing just that.
