A rule of thumb among biotechnology venture capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. ... In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.

But the killer comment comes from AndyN in the comments to Borepatch's post, who links to a similar story on Reason.com asking "Can Most Cancer Research Be Trusted?"
My favorite quote: "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

I've told the story that Mrs. Graybeard had a bone marrow transplant for breast cancer in 1997, and that the paper that treatment was based on turned out to have been falsified. Furthermore, with the passing of our friend I talked about in that post, she is the only survivor out of a group of 8 who were all given a 75% chance of survival based on cancer staging studies. Was her survival pure dumb luck, or was it due to factors that had absolutely nothing to do with the therapy? If the research can't be trusted, we're not much more advanced than bleeding people or applying leeches. Anybody remember when Steve Martin played Theodoric of York, the medieval barber, on SNL in the '70s?

The really ironic part of this is that academic researchers are usually held up as more ethical than the drug companies, who are "only in it for the money" - and certainly the way they've turned cholesterol-lowering drugs from a useless curiosity into a multi-billion dollar industry (cf. here or here) shows they're not completely innocent of bad research themselves. But in this case, the drug companies, by trying to duplicate the studies, are performing arguably the most important part of science: independent verification. From Reason:
These results strongly suggest that the current biomedical research and publication system is wasting scads of money and talent. What can be done to improve the situation? Perhaps, as some Nature online commenters have bitterly suggested, researchers should submit their work directly to Bayer and Amgen for peer review?

The experiences of my wife and her group underline that bad medical research isn't a victimless academic vice. Real people were subjected to really awful treatments (doctors call it "the most grueling ordeal in medicine") that provably did nothing for their survival chances. And it's not just medical science that's filled with bad research and outright (intentional or not) fraud. A popular psychological journal paper on "priming" has largely been disproven (9 separate studies failed to replicate the results), while the idea has already "made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace". (If need be, search this blog for mentions of Cass Sunstein.) Particle physics, held out as the "hardest of the hard sciences," has been victimized by studies that weren't "blinded" properly:
But maximising a single figure of merit, such as statistical significance, is never enough: witness the “pentaquark” saga. Quarks are normally seen only two or three at a time, but in the mid-2000s various labs found evidence of bizarre five-quark composites. The analyses met the five-sigma test. But the data were not “blinded” properly; the analysts knew a lot about where the numbers were coming from. When an experiment is not blinded, the chances that the experimenters will see what they “should” see rise. This is why people analysing clinical-trials data should be blinded to whether data come from the “study group” or the control group. When looked for with proper blinding, the previously ubiquitous pentaquarks disappeared.

And:
Other data-heavy disciplines face similar challenges. Models which can be “tuned” in many different ways give researchers more scope to perceive a pattern where none exists. According to some estimates, three-quarters of published scientific papers in the field of machine learning are bunk because of this “overfitting”, says Sandy Pentland, a computer scientist at the Massachusetts Institute of Technology.

In August of 2005, John Ioannidis published one of the most downloaded papers ever, "Why Most Published Research Findings Are False". In it, he presents a long list of factors that are associated with results being false. One that leapt out at me was this:
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.

Which explains Climate Science in one sentence.
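Corollary 6 isn't mysterious; it's just arithmetic. If an effect doesn't exist, each independent team (or each re-run of the same experiment, like the "did it six times, got this result once" lab above) still has roughly a 5% chance of clearing the standard p < 0.05 bar by luck. A minimal sketch of my own (not from any of the quoted articles; the function name is mine) showing how fast that compounds:

```python
# Illustration of Ioannidis's Corollary 6: under a null effect, with a
# standard 5% significance threshold, more teams chasing the same hot
# question means a higher chance SOMEBODY publishes a fluke.

def p_some_team_finds_effect(n_teams, alpha=0.05):
    """Probability that at least one of n independent tries yields a
    'significant' result purely by chance (no real effect exists)."""
    return 1 - (1 - alpha) ** n_teams

for n in (1, 6, 20, 50):
    print(f"{n:2d} tries -> {p_some_team_finds_effect(n):.1%} chance of a false 'discovery'")
```

With six tries, the chance of at least one spurious "best story" result is already about 26%; with fifty teams in a hot field it's over 90% - and the fluke is the result that gets published.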
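The "overfitting" Pentland describes is just as easy to demonstrate. Here's a sketch of my own (pure standard library, not from the article): give a model one tunable parameter per data point and it will "explain" pure noise perfectly, then fall apart on fresh data, because the pattern it found was never there.

```python
import random

random.seed(0)  # make the illustration repeatable

def lagrange_fit(xs, ys):
    """Return a polynomial passing exactly through every point - a
    maximally 'tuned' model with one parameter per observation."""
    def model(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return model

# The "truth" is pure noise around zero: there is no pattern to find.
train_x = [i / 10 for i in range(10)]
train_y = [random.gauss(0, 1) for _ in train_x]
model = lagrange_fit(train_x, train_y)

# In-sample error: exactly zero - the model memorized the noise.
train_err = sum((model(x) - y) ** 2 for x, y in zip(train_x, train_y)) / len(train_x)

# Out-of-sample error on fresh noise, evaluated between the training points.
test_x = [x + 0.05 for x in train_x[:-1]]
test_y = [random.gauss(0, 1) for _ in test_x]
test_err = sum((model(x) - y) ** 2 for x, y in zip(test_x, test_y)) / len(test_x)

print("training error:", train_err)   # essentially zero
print("test error:", test_err)        # far larger - the 'pattern' wasn't real
```

A paper reporting only the training fit would look like a stunning discovery. Only the attempt to replicate on new data - exactly what Amgen and Bayer did - exposes it.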
Naturally, it wouldn't be this blog if I didn't take a shot at how much bigger a problem this becomes with a government as huge as the one we have now.
We spent a while yesterday looking at the "Man on the Street" interviews that Mark Dice has on YouTube, and it doesn't take long to convince you that these people shouldn't be left alone with scissors, let alone in a voting booth or making important decisions. That thought leads to the idea that we should be led only by "Philosopher Kings" - technocrats who are experts in their fields and will choose the right course of action for us based on Science. That's an exceedingly dangerous course in politics: it's generally a feature of command governments, and those tend to be accompanied by millions dead. What this research into how well science is working says is that the consensus is almost always wrong, and the scientists really aren't any more qualified to make important decisions than the people who think Lee Harvey Oswald killed Jesus in the 1300s with a stolen gun.