Brian Wansink, the Cornell nutrition researcher who was world-renowned for his massively popular, commonsense-style dieting studies before ultimately going down in flames in a beefy statistics scandal, has now resigned—with a considerably slimmer publication record.

The important part of the story, going back to last February's article, is what ultimately cost Wansink his job: he wasn't doing science, he was trying to find things that would catch public attention and go viral. The way it's supposed to work is that a researcher comes up with a hypothesis and then does an experiment to test it; more precisely, they evaluate the null hypothesis that the experimental results arose by chance and not from the effect they hypothesized. Wansink worked backwards: he would collect gobs of data and then hunt through it for hypotheses the data appeared to support.
JAMA’s editorial board retracted six studies co-authored by Wansink from its network of prestigious publications on Wednesday, September 19. The latest retractions bring Wansink’s total retraction count to 13, [Note: that page shows 35 papers retracted at this time - SiG] according to a database compiled by watchdog publication Retraction Watch. Fifteen of Wansink’s other studies have also been formally corrected.
Amid this latest course in the scandal, Cornell reported today, September 20, that Wansink has resigned from his position, effective at the end of the current academic year. In a statement emailed to Ars, Cornell Provost Michael Kotlikoff said that an internal investigation by a faculty committee found that “Professor Wansink committed academic misconduct in his research and scholarship, including misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results, and inappropriate authorship.”
But, in a November 2016 blog post, Wansink inadvertently sank his own fame by noting that he encouraged his graduate students to go on statistical fishing trips, pushing them to net unintended conclusions from otherwise null nutrition experiment results. This is a huge red flag to researchers because such statistical fishing is a well-established method for reeling in false positives and meaningless statistical blips, like finding a link between cabbage and innie belly buttons. Moreover, many researchers see the dubious approach as fueling a crisis in the social sciences in which findings from key studies—like Wansink's—are not reproducible by other researchers, calling into question their original validity.

As I've talked about in these pages before, there are several serious crises going on in science these days. The biggest is reflected in John Ioannidis' August 2005 paper, "Why Most Published Research Findings Are False," one of the most downloaded papers ever. Ioannidis points out that the majority of published scientific papers are wrong—as much as 70% of published science—and not just in biomedical research but in hard sciences like particle physics.
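To see why fishing expeditions reel in false positives, consider a toy simulation (my own illustration, not Wansink's actual data): generate a purely random "outcome" and 100 purely random "food" variables, then test every food against the outcome. At the conventional p < 0.05 threshold, roughly five foods will look "significant" even though nothing real is there. The p-value below uses the Fisher z-approximation for a Pearson correlation, which is accurate enough for this demonstration.

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def p_value(r, n):
    """Approximate two-sided p-value for r under the null (no correlation).

    Fisher's z-transform: atanh(r) * sqrt(n - 3) is ~N(0, 1) when the
    true correlation is zero, so we can use the normal CDF via erf.
    """
    z = math.atanh(r) * math.sqrt(n - 3)
    return 1 - math.erf(abs(z) / math.sqrt(2))

random.seed(42)
n_subjects, n_foods = 50, 100

# Everything here is pure noise: no food is actually related to the outcome.
outcome = [random.gauss(0, 1) for _ in range(n_subjects)]
foods = [[random.gauss(0, 1) for _ in range(n_subjects)] for _ in range(n_foods)]

hits = [i for i, f in enumerate(foods)
        if p_value(pearson_r(f, outcome), n_subjects) < 0.05]
print(f"{len(hits)} of {n_foods} random 'foods' look significant at p < 0.05")
```

Run enough hypothesis tests and about 5% of pure-noise comparisons clear the bar; report only those and you have a publishable "finding" that no one can reproduce.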
The blogged confession led to several other researchers sifting through Wansink’s studies and stats. Prime among those researchers is education researcher and blogger Tim van der Zee of Leiden University in the Netherlands. By last year, van der Zee and colleagues had identified at least 42 Wansink studies with alleged issues ranging from minor to severe. Those studies had collectively been cited by other researchers 3,700 times, been published in over 25 journals and eight books, and spanned 20 years of research, van der Zee noted.
But maximising a single figure of merit, such as statistical significance, is never enough: witness the “pentaquark” saga. Quarks are normally seen only two or three at a time, but in the mid-2000s various labs found evidence of bizarre five-quark composites. The analyses met the five-sigma test. But the data were not “blinded” properly; the analysts knew a lot about where the numbers were coming from. When an experiment is not blinded, the chances that the experimenters will see what they “should” see rise. This is why people analysing clinical-trials data should be blinded to whether data come from the “study group” or the control group. When looked for with proper blinding, the previously ubiquitous pentaquarks disappeared.

Simply put, the peer review process is broken - perhaps irreparably.
Science itself, as it currently works, may well also be badly broken. In the Spring/Summer 2016 issue of the new journal The New Atlantis, Daniel Sarewitz brought up some important points, which I excerpted here in August of 2016.
As WWII came to a close, there was an acknowledgement of how much scientific teams had contributed to the victory and a deliberate effort to keep those teams together. Vannevar Bush, the MIT engineer called the “General of Physics” by Time Magazine, was the public face behind this push. He pushed a vision so appealing in its imagery that everyone bought into it.

The typical academic scientist in a university lab may bristle at the thought of being given an assignment by a boss somewhere and being held accountable for results. Nevertheless, a persuasive argument can be made that this might be the way to fix science.
In Bush's words: “Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.”

Through example after example, Sarewitz demonstrates that the progress of the late 20th century was virtually never “free play of free intellects, working on subjects of their own choice,” but instead was almost always managed science, driven on specific topics for specific applications. Scientific knowledge advances most rapidly, and is of most value to society, when it is steered to solve problems, especially those related to technological innovation. Could it be that the War on Cancer has foundered because there's nobody in charge; nobody driving toward a goal and asking specific people specific questions?
FiveThirtyEight ran an experiment to show the kinds of spurious correlations that arise from the typical tools of dietary studies: food frequency questionnaires and recall studies. Their study demonstrated that eating egg rolls was strongly associated with dog ownership, and that eating cabbage was strongly associated with having an "innie bellybutton". That's some real Brian Wansink-quality science there!
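The same effect is easy to reproduce in a few lines. This sketch (the variable names like "dog ownership" are illustrative stand-ins, not FiveThirtyEight's actual data) shows that with a long questionnaire and few respondents, the single strongest correlation in pure noise can look genuinely striking:

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
n_subjects, n_items = 60, 200   # long food-frequency questionnaire, few respondents

# Pure noise: the "trait" (say, dog ownership) has no real link to any item.
trait = [random.gauss(0, 1) for _ in range(n_subjects)]
items = [[random.gauss(0, 1) for _ in range(n_subjects)] for _ in range(n_items)]

best = max(abs(pearson_r(it, trait)) for it in items)
print(f"Strongest correlation among {n_items} pure-noise items: r = {best:.2f}")
```

Cherry-pick the winner from 200 comparisons and you get an "egg rolls predict dog ownership" headline; an honest analysis would correct for multiple comparisons (e.g., a Bonferroni threshold of 0.05/200 instead of 0.05), which makes these noise peaks vanish.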
Brian Wansink in a publicity photo. AP Photo by Mike Groll - from Buzz Feed