In 2005, an epidemiologist and medical statistician named John P. Ioannidis (say "yo-NEE-deez") published two papers that had a similar effect in medical research. A math prodigy who went on to study rare diseases, he became interested in the research that supposedly backed almost all of the taken-for-granted truths that your doctor and most medical researchers were working with. At first he thought he would take the rather paltry and shoddy studies that formed the basis of standard medical practice and perform his speciality, a critical meta-analysis of the founding papers, in order to shore up and legitimize a body of evidence that had been a bit non-rigorous and sloppy for something so basic to our understanding of the best modes of treatment.
This was in the 1990s. Instead, Ioannidis found that about 90% of what doctors believe, what they tell their patients, and what drugs they prescribe rested on seriously flawed research. The studies that supposedly legitimized the dogma in most of our doctors' heads were riddled with mistakes.
I admit it: I read probably 100 articles a week on little scientific findings that sound..."interesting." I love Science Daily! There are six or eight other science sites I read regularly, and I'm interested in physics, chemistry, cosmology, medical findings, neurobiology, social psychology, genetics, archaeology, evolutionary studies, on and on, und so weiter. And I link to those in the OG, as for instance, earlier articles on the obesity problem, which I take seriously. But given what John P. Ioannidis has found, there's reason to read those studies with far more than a grain of salt. Read them, rather, with something in mind between a dull news article about some crime that was committed...and fiction. Why? Read on.
Ioannidis and His Two Papers of 2005
One was published in PLoS Medicine and had a wonderfully provocative title: "Why Most Published Research Findings Are False." It was a rigorous mathematical argument, summarized by David H. Freedman, who wrote the best journalistic piece I've seen on Ioannidis (HERE, and I highly recommend you read it three times over the course of a year), like so: "Simply put, if you're attracted to ideas that have a good chance of being wrong, and if you're motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you'll probably succeed in proving wrong theories right." This applies to medical research, but many researchers in other fields have taken note.
Another way of summarizing Ioannidis's meta-research here (the PLoS paper is their most downloaded one ever, and heavily cited by others) could go like this:
1. Assume modest levels of researcher bias and add to that:
2. typically imperfect research techniques, and then add to that:
3. the human tendency (well-known) to focus on exciting rather than highly plausible theories, and it all adds up to:
4. VOILA!: researchers will come up with wrong findings most of the time
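For the mathematically inclined, the PLoS paper's argument runs through the "positive predictive value" (PPV) of a claimed finding: the probability that a statistically significant result is actually true, given the prior odds R that the tested relationship is real, the significance threshold alpha, the study's power, and a bias term u. Here's a minimal sketch of that formula; the example parameter values are illustrative assumptions of mine, not numbers from the paper:

```python
def ppv(R, alpha=0.05, power=0.80, u=0.0):
    """Positive predictive value of a claimed research finding,
    per the formula in Ioannidis's 2005 PLoS Medicine paper.

    R     -- prior odds that the tested relationship is true
    alpha -- significance threshold (Type I error rate)
    power -- 1 - beta, the chance of detecting a true effect
    u     -- bias: fraction of would-be-negative analyses that get
             reported as positive anyway
    """
    beta = 1.0 - power
    # True relationships reported as positive (including biased reporting)
    true_positives = (1.0 - beta) * R + u * beta * R
    # All relationships reported as positive, true or not
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

# Illustrative numbers: 1-in-10 prior odds, 80% power.
print(round(ppv(R=0.1), 2))         # no bias: PPV comes out around 0.62
print(round(ppv(R=0.1, u=0.3), 2))  # modest bias: PPV falls to roughly 0.2
```

With plausibly low prior odds and even modest bias, the PPV drops below one half, which is exactly the sense in which "most published research findings are false."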
Isn't this sorta galling, freaky stuff? Or is it just me?
[I consider David H. Freedman one of our very best Expertologists, and I just now remembered that I wrote about him and his book HERE.]
Also in 2005, Ioannidis published, in the Journal of the American Medical Association (JAMA), a paper that tested the idea that, sure, there are a lot of bad studies, but the larger social world of doctors and medical researchers will read these flawed studies, identify them, and weed the flawed stuff out. As you may have guessed, that's not what he found.
Ioannidis picked 49 of the most highly regarded research findings of the previous 13 years: the most widely cited articles in the best peer-reviewed journals. These papers had to do with hormone replacement therapy for menopausal women, vitamin E to stave off heart disease, the use of coronary stents, daily low-dose aspirin for keeping the arteries unclogged, and so on.
41% of these findings were later shown to be wrong or significantly exaggerated. Physicians and researchers do not know how to see through dubious research findings. Or there's a herd mentality among medical professionals. Or some would rather not know. Or possibly all of the above and Something Else.
A Few Upshots From All of This
Ioannidis shows there are routinely problems at every step of the research process: conception and hypotheses, research techniques and other methodological choices, conclusions, conflicts of interest, the human desire to make a breakthrough, the desire to publish to gain academic standing, the need to feel part of a larger research group by supporting its findings, and the politics of peer review and how journals decide which papers to accept or reject. Among other things. Like the common obsession with winning a big research grant, often from Big Pharma.
On top of all that, the field seems resistant to changing what it already "knows." Sorta like the tuberculosis out there now that's resistant to antibiotics because those antibiotics were over-prescribed? (Hold on, that may be a Bad Analogy...)
Regarding obesity studies, Ioannidis says "ignore them all." Why? Because they're seriously flawed. Some studies found fat people lived longer than expected and were just as healthy as the non-fat. Why? The problem of biomarkers: they tell us something, but not the entire picture, and they don't take into account the many other interactions within the body. We may be spending far too much in the wrong places. What about taking Vitamins A, D, and E? Forget those studies too, says Ioannidis. What about whether we should eat more fat or carbs? Forget it, says John P. Read the David Freedman article from The Atlantic that I linked to above for Ioannidis's reasons why these studies are to be ignored.
Then...then...then...if there's so much bad research done in the medical field, what about our health, as a society? Freedman summarizes Ioannidis's answer: "That we're not routinely made seriously ill by this shortfall [of truly evidence-based medicine], he argues, is due largely to the fact that most medical interventions and advice don't address life-and-death situations, but rather aim to leave us marginally healthier or less healthy."
Ioannidis says bad ideas spread like an epidemic: "They're spreading it to other researchers through journals." (Freedman, by the way, says Ioannidis looks "like Giancarlo Giannini with a bit of Mr. Bean.")
How were Ioannidis's two bombshell papers received by the medical community? Very well. People seemed relieved. He was attacking general mistakes, and so every researcher could say to himself, maybe something like, "Yep, this stuff goes on a lot, and it's a problem. My colleagues need to be very careful..." It's not you who conducts bad research; it's other people.
Interestingly, in 1989, two economists, Kevin Lang and Brad DeLong, published a meta-study of economic dogma and research, and what they found was very much like what Ioannidis found: basic assumptions that formed the working models of almost all economists were seriously flawed. However, the field of Economics has not embraced their research. There seems to be something profoundly significant here about who we are, but I will leave that to my Dear Readers.
Here's Jonah Lehrer on video, for 30 seconds, explaining why he became a journalist:
Jonah Lehrer on This Stuff
Although I consider David H. Freedman's piece on Ioannidis most excellent journalism, Jonah Lehrer published a pretty terrific piece around the same time, on roughly similar problems in epistemology and scientific methodology, for The New Yorker. It's HERE, and I was enchanted by the many examples of fascinating research Lehrer strings together. It's very entertaining, or at least I thought so. Lehrer is trying to tease out a problem in science: that "the truth wears off," the name of his piece. The idea of verbal overshadowing and how it related to J.B. Rhine's studies in ESP was fascinating. One researcher's term, "cosmic habituation," reminded me of a fascinating outlaw biologist, Rupert Sheldrake, whose outré ideas about morphogenetic fields have not exactly been embraced by the larger Biology community.
Jonah quotes Ioannidis here, and many others. Irony-lovers: note the last paragraph of Lehrer's piece, where he writes, "Because these ideas seem true..." But I don't want to jump on Jonah here. He's gotta be reeling right now, the 31-year-old fallen genius who fucked up royally. I am not one of those given to schadenfreude in this instance.
But read Lehrer's piece. Are you suspicious of his research, now that you know he's been outed as a fabricator of sorts? I find that I mostly believe what he's writing here. I particularly like what Jonah has to say about the "decline effect" in science, and I'll quote him here before giving Ioannidis the last say:
Jonah Lehrer: "The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that's often not the case. Just because an idea is true doesn't mean it can be proved. And just because an idea can be proved doesn't mean it's true. When experiments are done, we still have to choose what to believe."
Or, as Paul Feyerabend might've said, "Anything goes." Keep delving into the inner machinations of scientific research and it looks more and more like anarchy to us (well, I'll speak for myself and Feyerabend), and a lot less like the way Francis Bacon envisioned it.
I leave you with this quote from Ioannidis, which I find poignant, hard-nosed, and revelatory of how rare and truly special the findings have been, in any scientific field, that have provided major breakthroughs, research - like Einstein's - that led to a new paradigm:
"Science is a noble endeavor, but it's also a low-yield endeavor. I'm not sure that more than a small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be comfortable with this fact."
Here's Jonah Lehrer on Charlie Rose, talking about Bob Dylan and creativity. 2 mins:
Here's a 15-minute video responding to Ioannidis's "Why Most Published Research Findings Are False," and I think it does a pretty good job of explaining the statistical methodology for determining good research. The narrator, who to me sounds very erudite and funny about this stuff, seems to want to stave off the reactionaries who already have problems with "science," or who are, as he says, "on the road to science denialism." Sorry for taking so much of your time!