WEBLOG
February 28th, 2018
Junk Food Science
A scientific scandal is rocking Cornell University's Food and Brand Lab, and both BuzzFeed News[1] and Slate[2] have run recent articles on it. One source of the scandal seems to be that the head of the lab, Brian Wansink, is more of an activist than a scientist, and the lab's work more advocacy research than science. Multiple comparisons[3] would appear to be the lab's main research sin, but there are multiple failures here, not all of which can be laid at the door of the lab:
- Multiple comparisons:
…Wansink coached [a visiting researcher] to knead the pizza data. First, he wrote, she should break up the diners into all kinds of groups: “males, females, lunch goers, dinner goers, people sitting alone, people eating with groups of 2, people eating in groups of 2+, people who order alcohol, people who order soft drinks, people who sit close to buffet, people who sit far away, and so on…”. Then she should dig for statistical relationships between those groups and the rest of the data: “# pieces of pizza, # trips, fill level of plate, did they get dessert, did they order a drink, and so on...”.[1]
This seems to have been the lab's usual procedure: search through data until a statistically significant relationship between two variables is found, then invent a post hoc hypothesis to explain the relationship. This would be fine in exploratory research, but there needs to be a follow-up study to see whether the relationship is real or just a statistical fluke. However, the lab seems to have neglected to follow up.
Worse, it also seems to have failed to report this procedure in its published articles. In all likelihood, many of the lab's papers would not have been accepted for publication if it had been open about all of the comparisons made, together with the apparent fact that no statistical adjustments were made for those comparisons. Statistical significance at the usual .05 level is meaningless when twenty or more unadjusted comparisons are made: with twenty independent tests of true null hypotheses, the chance of at least one false positive is 1 − .95^20, or about 64%. And many more than twenty comparisons are suggested in the quote above, as the simulation sketched below illustrates.
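To make the arithmetic concrete, here is a minimal simulation in Python (my own illustration, using made-up null data rather than anything from the lab): each simulated "study" runs twenty comparisons on pure noise, first with the unadjusted .05 threshold and then with a simple Bonferroni correction.

```python
# A minimal simulation of the multiple-comparisons problem. The data
# here are pure noise: under a true null hypothesis, p-values are
# uniformly distributed on [0, 1], so each unadjusted test at the .05
# level has a 5% false-positive rate.
import random

random.seed(42)

ALPHA = 0.05
TESTS_PER_STUDY = 20      # e.g., subgroups crossed with outcome measures
STUDIES = 10_000

unadjusted_hits = 0
bonferroni_hits = 0
for _ in range(STUDIES):
    pvals = [random.random() for _ in range(TESTS_PER_STUDY)]
    if any(p < ALPHA for p in pvals):
        unadjusted_hits += 1              # at least one false positive
    if any(p < ALPHA / TESTS_PER_STUDY for p in pvals):
        bonferroni_hits += 1              # same data, Bonferroni-adjusted

print(f"Unadjusted: {unadjusted_hits / STUDIES:.1%} of null studies "
      f"yield a 'significant' finding")       # roughly 64%
print(f"Bonferroni: {bonferroni_hits / STUDIES:.1%}")   # roughly 5%
```

The unadjusted procedure "finds" something in roughly two-thirds of studies of pure noise; dividing the threshold by the number of tests (the Bonferroni correction) brings the false-positive rate back down to about 5%.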
- Poor peer review:
Scientists often rely on peer reviewers―anonymous experts―to weed out errors in papers before they go to press. But journals didn’t, or couldn’t, catch every inaccuracy from the Food and Brand Lab. For example, reviewers were generally positive about what ended up being a controversial 2012 study in Preventive Medicine. It reported that schoolchildren ate more vegetables at lunch when they had catchy names like “X-Ray Vision Carrots.” … Last fall, Wansink admitted that the lunchtime observation part of the carrots study had actually been done on preschoolers, not the reported 8- to 11-year-olds.[1]
This is really the researchers' fault, and not that of the peer reviewers, as it appears that the latter were misled. It's unreasonable to expect reviewers to catch misstatements or missing information about how an experiment was conducted, since they have to rely upon the experimenters to accurately and fully report what they did.
Clearly, peer review failed to prevent the publication of a large number of bad articles, but I don't see any evidence that the problem was with the reviewers rather than the researchers. This is not to say that peer review isn't often superficial―it is―just that in this particular case it doesn't seem that it can be faulted.
- Publish and perish: "When their work was rejected, the members of the Food and Brand Lab would often try increasingly lower-quality journals until they succeeded."[1]
You can't blame the researchers for trying to get their papers published somewhere; the fault here is in the "lower-quality journals" being willing to publish junk papers, as well as in the entire academic ethos of "publish or perish". The lab is now in danger of perishing from publishing so much junk, but up until recently it was an extremely successful research organization. If the lab survives the current scandal, it may have to stop publishing junk research, but it would be nice if the "lower-quality journals" improved their standards. Unfortunately, as long as the pressure to publish is so strong, there will continue to be a demand for these journals―not from readers, of course, since nobody wants or needs to read this junk, but from the scholars who must publish it somewhere, anywhere.
- Replication is not optional: "This practice [publishing in 'lower-quality' journals] is in part responsible for the sheer volume of scientific findings that cannot be replicated."[1]
These articles say little about the extent to which other scientists attempted to replicate the lab's work, but given that most of the supposed effects discovered by the lab were probably statistical artifacts of its flawed research procedure, I would expect them not to replicate; the short simulation below suggests why. As with "publish or perish", this is one of the perverse incentives towards junk research built into the current academic environment. Because there is usually so little reward for attempting to replicate the work of others, much bad research is never discovered or exposed. Labs like this can crank out hundreds of worthless studies for years because nobody bothers to try to replicate them.
Moreover, I suspect that many people in the field were aware that much of this research was shoddy, but just ignored it. How could they not be aware of the amount of silly stuff published in their own field? I'm not even in the field and I'm aware of it! All you have to do is read the popular press to see that much of the research about food, nutrition, weight loss, etc., is hokum.
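Here is a companion sketch to the earlier simulation (again my own illustration, with hypothetical null data): an "exploratory" study dredges twenty comparisons and publishes its best result, and a pre-registered replication then re-runs only that one comparison on fresh data.

```python
# A sketch of why dredged findings fail to replicate (hypothetical
# numbers, not any lab's data). Each "exploratory" study tests twenty
# null comparisons and publishes if any beats .05; a replication then
# re-runs only the published comparison on fresh data, where the
# effect is still nonexistent.
import random

random.seed(7)

ALPHA, TESTS, RUNS = 0.05, 20, 10_000
published = 0
replicated = 0
for _ in range(RUNS):
    best_p = min(random.random() for _ in range(TESTS))
    if best_p < ALPHA:                    # the "finding" gets published
        published += 1
        if random.random() < ALPHA:       # one fresh test of the same null
            replicated += 1

print(f"{published / RUNS:.0%} of null studies produce a publishable result,")
print(f"but only {replicated / max(published, 1):.0%} of those replicate.")
```

About two-thirds of the dredged studies produce something publishable, but only about one in twenty of those "findings" survives a single honest replication―exactly the false-positive rate you would expect from noise.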
Notes:
1. Stephanie M. Lee, "The Inside Story Of How An Ivy League Food Scientist Turned Shoddy Data Into Viral Studies", BuzzFeed News, 2/25/2018.
2. Daniel Engber, "Death of a Veggie Salesman", Slate, 2/28/2018.
3. The Multiple Comparisons Fallacy
Update (9/23/2018): After an investigation, Cornell University has found Wansink guilty of academic misconduct[1] and he has announced that he is retiring next year[2]. This comes a day after six more of Wansink's papers were retracted[3].
Notes:
1. Ivan Oransky, "Cornell finds that food marketing researcher Brian Wansink committed misconduct, as he announces retirement", Retraction Watch, 9/20/2018.
2. Ivan Oransky, "Wansink admits mistakes, but says there was 'no fraud, no intentional misreporting'", Retraction Watch, 9/21/2018.
3. Ivan Oransky, "JAMA journals retract six papers by food marketing researcher Brian Wansink", Retraction Watch, 9/19/2018.
February 20th, 2018
The Puzzle of the Five Suspects
Who shot Victor Timm? The only witness to the shooting saw a short blond running away.
There are five suspects who each had the means, motive, and opportunity to commit the crime: Avery, Bailey, Casey, Davey, and Eddy.
Three of the suspects are short (less than six feet tall) and two are tall (more than six feet). Three of the suspects have brown hair, while the other two are blond. At least one of the suspects is a tall blond.
Avery and Casey are the same height, whereas Davey and Eddy are different heights―that is, one is short and the other is tall. Bailey and Eddy have the same hair color, while Casey and Davey have different colors.
Which of the five suspects shot Timm?
February 1st, 2018
What's New?
I have a new email course called "Logic Basics: Understanding Arguments"*, which consists of ten short lessons delivered daily by a company called Highbrow. It's a very brief introduction to the fundamental concepts of logic needed to analyze and evaluate reasoning―and, I might add, to understand fallacies.
The course is an extensive revision, especially in the later lessons, of a series posted here many years ago. Moreover, the last few lessons include new material not covered in those posts. It's a Premium course, which means it's not free, but you can try out Highbrow's Premium service free for a month, and the course takes only ten days! Check it out.
* Gary Curtis, Logic Basics: Understanding Arguments, Highbrow
Update: You can also use the following coupon to receive a 25% discount on an Annual Premium membership: Coupon, Highbrow, 3/17/2018.
Solution to the Puzzle of the Five Suspects: Avery and Casey are the same height, so either both are short or both are tall. However, either Davey or Eddy is tall, so if Avery and Casey were both tall then there would be three tall suspects. But we know that there are only two tall suspects. Therefore, Avery and Casey must both be short.
Since either Davey or Eddy is also short, that accounts for all three of the short suspects, so Bailey must be tall. Moreover, the witness saw a short person running away, so we can eliminate Bailey as a suspect, which leaves four.
Casey and Davey have different hair colors, while Bailey and Eddy are either both brown-haired or both blond. If Bailey and Eddy were both blond then, since one of Casey and Davey must also be blond, there would be three blond suspects, whereas we know that there are only two. Therefore, Bailey and Eddy both have brown hair, eliminating Eddy as a suspect and leaving three.
Since Bailey, Eddy, and either Casey or Davey have brown hair, Avery must have blond hair. We also know that at least one suspect is a tall blond; since Avery is short, the other blond suspect must be tall, which rules out Casey, who is short. Therefore, Casey must have brown hair and can be eliminated as a suspect. Three down and two to go.
So, Davey must be the tall blond suspect, but the shooter was described as a short blond. Davey is out, leaving one final suspect, who is the only one to fit the description of the fleeing shooter: Avery, a short blond.
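For readers who like to check such deductions mechanically, here is a short brute-force search in Python (my own illustration; the names and clues are from the puzzle above). It enumerates every possible assignment of heights and hair colors, discards those inconsistent with the clues, and reports who fits the witness's description in each surviving scenario.

```python
# Brute-force check of the puzzle: enumerate all assignments of
# heights and hair colors to the five suspects, keep only those
# consistent with the clues, and find the short blonds in each.
from itertools import product

SUSPECTS = ["Avery", "Bailey", "Casey", "Davey", "Eddy"]

consistent = []
for heights in product(["short", "tall"], repeat=5):
    for hair in product(["brown", "blond"], repeat=5):
        h = dict(zip(SUSPECTS, heights))
        c = dict(zip(SUSPECTS, hair))
        if heights.count("short") != 3:        # three short, two tall
            continue
        if hair.count("brown") != 3:           # three brown-haired, two blond
            continue
        if not any(h[s] == "tall" and c[s] == "blond" for s in SUSPECTS):
            continue                           # at least one tall blond
        if h["Avery"] != h["Casey"]:           # Avery and Casey: same height
            continue
        if h["Davey"] == h["Eddy"]:            # Davey and Eddy: different heights
            continue
        if c["Bailey"] != c["Eddy"]:           # Bailey and Eddy: same hair color
            continue
        if c["Casey"] == c["Davey"]:           # Casey and Davey: different colors
            continue
        # The witness saw a short blond running away.
        shooters = [s for s in SUSPECTS
                    if h[s] == "short" and c[s] == "blond"]
        consistent.append(shooters)

print(consistent)   # [['Avery']] -- one consistent scenario, one short blond
```

Only one assignment satisfies all of the clues, and its sole short blond is Avery, confirming the step-by-step deduction.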