As a health journalist, I’m a big fan of scientific studies. They are an endless source of good article fodder — in part because they involve a goodly amount of controversy.
One study supports X. The next subverts X. One interpretation of a study on Y asserts that Y is the next great hope. Another interpretation of the same study decries Y’s dangers.
So what’s true? It depends on which studies you think have the most merit and in which expert interpretations you put the most faith.
“But this is about science, not faith!” you say. Fair enough. But studies can be biased and befuddled, poorly designed, or unfairly maligned. They can be misconstrued and misreported. There’s a lot of money and power involved, and a lot of reputations at stake. And that’s why a lot of potentially compelling studies (including long-term dietary studies with low profit potential) never get funded, and why many others (like clinical trials that report negative results about blockbuster drugs) simply never see the light of day.
A new study from the Cochrane Collaboration confirms that the phenomenon known as “publication bias” may be having a significant (and scary) impact on which studies make it into major journals, and which don’t. The Cochrane researchers reviewed and applied a statistical analysis to five studies of publication bias in clinical trials from 1992 to 1998. They found that only about 41 percent of negative results were published, compared with about 73 percent of positive results. The researchers also found a lag effect in the reporting of negative results: Positive findings were typically published in journals within four to five years, while negative results (when they did get published) often took as long as eight years to make it to print.
This means that the studies we are most likely to hear about are not likely to be telling the whole story.
The Need to Dig Deeper into Research Studies
One good example of this concerns antidepressants. In January 2008, the New England Journal of Medicine published a study looking at selective publication of antidepressant trials. Researchers found that among studies of newly approved antidepressants conducted between 1987 and 2004, 94 percent of those published reported positive results. Yet when the researchers reviewed all of the research registered with the U.S. Food and Drug Administration, only 51 percent of the results were positive.
A study published in Public Library of Science Medicine in November 2008 found similar results across all kinds of new-drug-application trials submitted to the FDA between 2001 and 2002. Researchers in that study found evidence of non-publication of negative results. They also found selective reporting of data favoring positive results.
A similar area of controversy is now brewing in the realm of cholesterol, statins and heart disease. We first touched on this topic in 2004 (“Cholesterol Myths,” October 2004), reprised it in December 2005 (when we republished our original article with an update), and in this issue, we once again pick up the story’s main thread. But I suspect that in this area — as in many other areas of scientific investigation where diet, drugs and chronic disease intermingle — a great deal more has yet to be revealed.
What I know for sure is that trying to take all your healthy-living cues from published studies will almost certainly make you crazy, and will probably also make you sick.
The drug studies prove that drugs work (except when they don’t, or when their side effects make you sicker). The diet studies prove that diets work (except that no one wants to stay on them for long, and in virtually all of the diets studied, it could very well be the incidental increased-vegetable and decreased-sugar intake — not the reduced fat or whatever’s officially being studied — that’s doing the trick).
To the best of my knowledge, the longitudinal study I’d like to see — of thousands of people exercising regularly, getting enough rest, moderating stress, living mindfully, getting plenty of fresh air and sunshine, avoiding trans fats and refined flours and sugars, and eating mostly plants, but enjoying healthy servings of delicious natural oils, meats, fish, eggs, butter, and even the occasional dessert or glass of wine when they want to — has never been done. And it probably never will be.
And so I study the statistics carefully, but I also depend heavily on my own powers of observation. I look around at my friends, family, colleagues and acquaintances, and I note that all of them who are observing most or all of the above healthy habits are, in fact, very healthy. They are fit. They are happy. They are not dependent on a slew of prescriptions. And most of them are not inclined to change what they are doing based on the latest statistics or scientific headlines. Because they know from experience that what they are doing works.
Studies and statistics will always have a place in my heart, but when it comes to designing a life of optimal vitality for my own body, I’ll choose living proof over warmed-over statistics every single time.