Michael Shermer got his first clue that things were changing at Scientific American in late 2018. The author had been writing his "Skeptic" column for the magazine since 2001. … In the twenty-first century, …American scientific media, including Scientific American, began to slip into lockstep with progressive beliefs. Suddenly, certain orthodoxies…couldn't be questioned.
"I started to see the writing on the wall toward the end of my run there," Shermer told me. "I saw I was being slowly nudged away from certain topics." One month, he submitted a column about the "fallacy of excluded exceptions,1" a common logical error in which people perceive a pattern of causal links between factors but ignore counterexamples that don't fit the pattern.
As far as I can tell, this is Shermer's name for the fallacy of one-sidedness2, which is the logical mistake of considering only the evidence for a particular hypothesis and ignoring that against it. The fallacy itself has been known for a long time under a plethora of names, and I don't think we really need another, though it's standard practice to rediscover old fallacies and give them new names.
In the story, Shermer debunked the myth of the "horror-film curse," which asserts that bad luck tends to haunt actors who appear in scary movies. (The actors in most horror films survive unscathed, he noted, while bad luck sometimes strikes the casts of non-scary movies as well.) Shermer also wanted to include a serious example: the common belief that sexually abused children grow up to become abusers in turn. He cited evidence that "most sexually abused children do not grow up to abuse their own children" and that "most abusive parents were not abused as children." And he observed how damaging this stereotype could be to abuse survivors; statistical clarity is all the more vital in such delicate cases, he argued. But Shermer's editor at the magazine wasn't having it. To the editor, Shermer's effort to correct a common misconception might be read as downplaying the seriousness of abuse. Even raising the topic might be too traumatic for victims.
The following month, Shermer submitted a column discussing ways that discrimination against racial minorities, gays, and other groups has diminished (while acknowledging the need for continued progress). … For progressives, admitting that any problem―racism, pollution, poverty―has improved means surrendering the rhetorical high ground. "They are committed to the idea that there is no cumulative progress," Shermer says, and they angrily resist efforts to track the true prevalence, or the "base rate,3" of a problem. Saying that "everything is wonderful and everyone should stop whining doesn't really work," his editor objected. …
Speaking of logical fallacies, this is such a glaring straw man4 that an editor of a science magazine should be ashamed to use it: logic is a science, after all.
American journalism has never been very good at covering science. In fact, the mainstream press is generally a cheap date when it comes to stories about alternative medicine, UFO sightings, pop psychology, or various forms of junk science. For many years, that was one factor that made Scientific American's rigorous reporting so vital. … But over the past decade or so, the quality of science journalism―even at the top publications―has declined in a new and alarming way. Today's journalistic failings don't owe simply to lazy reporting or a weakness for sensationalism but to a sweeping and increasingly pervasive worldview. …
Several prominent scientists took note of SciAm's shift. "Scientific American is changing from a popular-science magazine into a social-justice-in-science magazine," Jerry Coyne, a University of Chicago emeritus professor of ecology and evolution, wrote…. He asked why the magazine had "changed its mission from publishing decent science pieces to flawed bits of ideology." …
Scientific American's increasing engagement in politics drew national attention in late 2020, when the magazine, for the first time in its 175-year history, endorsed a presidential candidate. "The evidence and the science show that Donald Trump has badly damaged the U.S. and its people," the editors wrote. "That is why we urge you to vote for Joe Biden." …
Vinay Prasad, the prominent oncologist and public-health expert, recently lampooned the endorsement trend…, asking whether science journals will tell him who to vote for again in 2024. "Here is an idea! Call it crazy," he wrote: "Why don't scientists focus on science, and let politics decide the election?" When scientists insert themselves into politics, he added, "the only result is we are forfeiting our credibility."
But what does it mean to "focus on science"? Many of us learned the standard model of the scientific method in high school. We understand that science attempts…to shield the search for truth from political interference, religious dogmas, or personal emotions and biases. But that model of science has been under attack for half a century. …
Longer, actually. Scientific objectivity has always been attacked by dogmatists, whether religious or political.
Shermer believes that the new style of science journalism "is being defined by this postmodern worldview, the idea that all facts are relative or culturally determined." Of course, if scientific facts are just products of a particular cultural milieu, he says, "then everything is a narrative that has to reflect some political side." Without an agreed-upon framework to separate valid from invalid claims―without science, in other words―people fall back on their hunches and in-group biases, the "my-side bias."
Traditionally, science reporting was mostly descriptive―writers strove to explain new discoveries in a particular field. The new style of science journalism takes the form of advocacy―writers seek to nudge readers toward a politically approved opinion.
"Lately journalists have been behaving more like lawyers," Shermer says, "marshaling evidence in favor of their own view and ignoring anything that doesn't help their argument." This isn't just the case in science journalism, of course. Even before the Trump era, the mainstream press boosted stories that support left-leaning viewpoints and carefully avoided topics that might offer ammunition to the Right. Most readers understand, of course, that stories about politics are likely to be shaped by a media outlet's ideological slant. But science is theoretically supposed to be insulated from political influence. Sadly, the new woke style of science journalism reframes factual scientific debates as ideological battles, with one side presumed to be morally superior. Not surprisingly, the crisis in science journalism is most obvious in the fields where public opinion is most polarized. …
As Shermer observed, many science journalists see their role not as neutral reporters but as advocates for noble causes. This is especially true in reporting about the climate. Many publications now have reporters on a permanent "climate beat," and several nonprofit organizations offer grants to help fund climate coverage.
This is a conflict of interest if the organizations have a position on the issue, so responsible news organizations should reject such grants5.
Climate science is an important field, worthy of thoughtful, balanced coverage. Unfortunately, too many climate reporters seem especially prone to common fallacies, including base-rate neglect3, and to hyping tenuous data.
The mainstream science press never misses an opportunity to ratchet up climate angst. No hurricane passes without articles warning of "climate disasters." And every major wildfire seemingly generates a "climate apocalypse" headline. For example, when a cluster of Quebec wildfires smothered the eastern U.S. in smoke last summer, the New York Times called it "a season of climate extremes." It's likely that a warming planet will result in more wildfires and stronger hurricanes. But eager to convince the public that climate-linked disasters are rapidly trending upward, journalists tend to neglect the base rate. In the case of the Quebec wildfires, for example, 2023 was a fluky outlier. During the previous eight years, Quebec wildfires burned fewer acres than average, so there was no upward trend―and no articles discussing the paucity of fires. By the same token, according to the U.S. National Hurricane Center, a lower-than-average number of major hurricanes struck the U.S. between 2011 and 2020. But there were no headlines suggesting, say, "Calm Hurricane Seasons Cast Doubt on Climate Predictions."
Most climate journalists wouldn't dream of drawing attention to data that challenge the climate consensus. They see their role as alerting the public to an urgent problem that will be solved only through political change. …
Scientists and journalists aren't known for being shrinking violets. What makes them tolerate this enforced conformity? …[I]ntimidation…is one factor. Academia and journalism are both notoriously insecure fields; a single accusation of racism or anti-trans bias can be a career ender. In many organizations, this gives the youngest, most radical members of the community disproportionate power to set ideological agendas. …
Whether motivated by good intentions, conformity, or fear of ostracization, scientific censorship undermines both the scientific process and public trust. … When scientists claim to represent a consensus about ideas that remain in dispute―or avoid certain topics entirely―those decisions filter down through the journalistic food chain. Findings that support the social-justice worldview get amplified in the media, while disapproved topics are excoriated as disinformation. Not only do scientists lose the opportunity to form a clearer picture of the world; the public does, too. At the same time, the public notices when claims made by health officials and other experts prove to be based more on politics than on science. A new Pew Research poll finds that the percentage of Americans who say that they have a "great deal" of trust in scientists has fallen from 39 percent in 2020 to 23 percent today. …
Unfortunately, progressive activists today begin with their preferred policy outcomes or ideological conclusions and then try to force scientists and journalists to fall in line. Their worldview insists that, rather than challenging the progressive orthodoxy, science must serve as its handmaiden. This pre-Enlightenment style of thinking used to hold sway only in radical political subcultures and arcane corners of academia. Today it is reflected even in our leading institutions and science publications. Without a return to the core principles of science―and the broader tradition of fact-based discourse and debate―our society risks drifting onto the rocks of irrationality.
The following video covers the same ground as the above article:
Notes:
Disclaimer: I don't necessarily agree with everything in the articles and video, but I think they're worth reading and watching as a whole.
In the previous entry1, I pointed out that a study of the supposed health benefits of coffee that was funded by the coffee industry should be regarded with healthy skepticism. However, it's not just industry funding of studies that should arouse skepticism, but also funding by advocacy groups with a stake in the outcome of the research.
In October, a study was released claiming that products made of black plastic, such as spatulas and food containers, might contain harmful levels of chemical fire retardants2. News articles were soon advising people to discard any such items that could come into contact with food3.
It turned out that much of the alarm about such plastics was literally exaggerated by an order of magnitude (OOM). Here's a description of the mistake by the authors of the study:
The authors regret that our original manuscript was printed with an error when calculating the…reference dose for a 60 kg adult. We compared the estimated daily intake of 34,700 ng/day of BDE-209 from the use of contaminated utensils to the…reference dose of 7000 ng/kg bw/day. However, we miscalculated the reference dose for a 60 kg adult, initially estimating it at 42,000 ng/day instead of the correct value of 420,000 ng/day. As a result, we revised our statement from "the calculated daily intake would approach the U.S. BDE-209 reference dose" to "the calculated daily intake remains an order of magnitude lower than the…reference dose."4
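The correction is easy to verify with the figures quoted in it. Here's a quick sketch (in Python; the variable names are mine, the numbers are the study's):

```python
# Figures quoted in the authors' correction.
reference_dose_per_kg = 7_000   # ng per kg of body weight per day (BDE-209)
body_weight_kg = 60             # the adult body weight assumed in the paper
estimated_intake = 34_700       # ng/day from use of contaminated utensils

# The reference dose for a 60 kg adult; the paper originally gave 42,000.
reference_dose = reference_dose_per_kg * body_weight_kg
print(reference_dose)           # 420000 ng/day

# The estimated intake is roughly an order of magnitude below that dose.
print(round(reference_dose / estimated_intake, 1))  # 12.1
```

Dropping that factor of ten is what turned "an order of magnitude lower than the reference dose" into "approaching the reference dose."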
My purpose in this entry is not to highlight the multiplication error, which has already received considerable publicity, or to subject the authors to further public humiliation. Instead, I want to consider the question: How did such a mistake happen? The research was not only funded, but conducted by a group called "Toxic-Free Future" (TFF). Unlike the neutral-sounding names of other advocacy groups, at least the name of this one reveals its agenda. As with the case of industry-funded research, we have a right to be skeptical of research from such groups, and for the same reason: both types of group have a conflict of interest.
As I asked in the previous entry, if the error had gone in the other direction, would we have ever heard of the study? If the researchers for TFF had not made the math mistake, would they have bothered publishing the results? Even if they had, would we have seen stories telling us that we should immediately dispose of black plastic utensils? The scare was premised on the amount of fire retardant found in the plastic items being close to the maximum level permitted by regulators, but it was actually an OOM below that level.
The first line of defense against such mistakes is the researchers themselves, but the second line is peer review. Chemosphere, the peer-reviewed journal that published the study, has since been removed from the Web of Science index for failing to meet the index's editorial standards5. This is not the journal's first problem: earlier this year, it released a statement revealing that it was "reviewing potential breaches of Chemosphere's publishing policies, including unusual authorship changes, compromised or manipulated peer-review processes, and undisclosed conflicts of interest.6" In the case of the black plastic article, it appears that the peer review was at least deficient if not manipulated.
The third line of defense is the scientific community at large, and that's what happened in this case:
The error was discovered by Joe Schwarcz, director of McGill University's Office for Science and Society in Canada. In a blog post, Schwarcz explained that the Toxin-Free Future scientists miscalculated the lower end of what the EPA considered a health risk through a multiplication error. Instead of humans being potentially exposed to a dose of toxic chemicals in black plastic utensils near the minimum level that the EPA deems a health risk, it's actually about one-tenth of that. Though Schwarcz said the risks outlined in the study aren't enough for him to discard his black plastic kitchen items if he had them, he agreed with the authors that flame retardants shouldn't be in these products in the first place.7
The fourth and final line of defense against the pollution of science by advocacy research is a healthy skepticism in the general public.
Notes:
I was just sitting down with my morning cup of coffee and the newspaper the other day when I saw the following headline:
When I was about to go get a refill I saw another:
It was the same study as that reported in the first headline! What would you expect from an "industry-funded study"? I skipped the refill.
It might be too much to ask that every health news article mention in its headline when the study reported on is funded by an industry, though it might save a lot of reading time. However, I'd settle for the source of funding at least being mentioned in the body of the article. The article beneath the first headline above never mentions industry funding, though in a "Paper Summary" at its bottom, in the section on "Funding & Disclosures", it discloses that the study "was supported by the Institute for Scientific Information of [sic] Coffee"3. But what is that? The name sounds neutral, but it's typical of advocacy groups to adopt such names; for instance, the tobacco industry's Center for Indoor Air Research4. As a matter of fact, the ISIC is a non-profit group created by European coffee companies5, among which the best-known to Americans are Nestlé and Peet's.
Now, I don't mean to suggest that just because the study was funded by the coffee industry it must be wrong. In fact, so far as I know, it's a perfectly fine study. The main concern I have about industry funding is this: what would have happened if the results of the study had gone the opposite way? What if the study had shown that coffee abstainers lived two years longer than coffee drinkers? Would we have seen the following headline?
I doubt it.
In addition to its industry funding, the study in question is a review study6, that is, a study of studies. While the review itself may be unaffected by its funding, it is necessarily limited to past studies that have been published, and how many of those were funded by the coffee companies? So, if there is a systematic bias towards publishing studies with results that favor coffee, the review itself may be biased by such studies. It is not unheard of for funders to bury studies with negative results4, though I've no evidence of that in this particular case. Nonetheless, it is one reason to worry about such funding.
Finally, I'm getting tired of preaching that observational studies cannot establish causation, and regular readers are probably getting tired of hearing me preach it7, but the fact remains. Similarly, a review of studies that cannot establish causation cannot itself establish causation, because a hundred observational studies do not add up to one experimental study.
Notes:
Disclosures and disclaimers: I am not a physician though some people call me "doctor". The opening autobiographical remarks in this entry are fictional, but I do drink coffee.
To get the most out of this entry, give the following puzzle a shot. This is a difficult puzzle, so don't be too disappointed if you can't solve it completely, but don't give up too easily. When you've tried the puzzle and read the solution, check out my comments below.
Puzzle: Cards on the Table
Instructions: You are sitting at a large table and spread out on the tabletop in front of you is a deck of cards. You know that forty-four of the cards are face down and the remaining eight are face up. However, you are blindfolded and cannot see any of the cards; nor can you tell which side of a card is up by touching it or in any other way. Your task is to separate the cards into two groups with the same number of face-up cards in each group. How do you do it?
Did you assume that the two groups must have an equal number of cards?
Did you assume that you can't turn over any cards?
Try contraction! If you're unfamiliar with contraction or need a refresher, see the first entry in this series*.
Solution: Separate eight of the cards from the remaining forty-four, then turn over each of the eight cards1. The two resulting groups will have the same number of face-up cards in each.
Explanation: Perhaps the best way to see that the above solution will actually work is to use contraction―see the first entry in this series. Suppose that, instead of eight face-up cards there were only two. Then, you would separate out two of the fifty-two cards, leaving fifty in the remaining group. It's unlikely that you would happen to pick the two face-up cards, so let's see what happens if both cards are face down. You turn both cards over, which means that both would then be face up. Since there are two face-up cards in the remaining group, that means there are the same number of face-up cards in each group: success! Did you assume that the number of face-up cards had to remain the same?
What if one of the two cards you pick out is face up, you may ask. Then, when you turn over the two cards selected one will still be face up and one face down. Since only one face-up card is left in the main group, both groups have the same number of face-up cards: success again!
Finally, suppose you just happened to choose both the face-up cards: then, when you turn them over both of the selected cards will be face down. Since there are no face-up cards left in the main group, both groups have the same number of face-up cards, namely, zero: success yet again!
Therefore, no matter how many face-up cards are among the two you select, if you follow the above procedure you'll always end up with the same number of face-up cards in both groups. You can carry out this same line of thinking for four, six, and finally eight cards, and get the same result, though the reasoning is longer and more tedious.
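If the case-by-case reasoning gets tedious, a brute-force check is quick. The sketch below (a Python illustration of my own, not part of the original puzzle) represents cards as booleans and verifies all nine possible cases for the full deck: zero through eight face-up cards among the eight you pick.

```python
def split_and_flip(deck, n):
    """Split off the first n cards and flip them; deck holds booleans, True = face up."""
    picked = [not card for card in deck[:n]]   # turning a card over negates it
    rest = deck[n:]
    return sum(picked), sum(rest)              # face-up counts in each group

# Check every possible case: k of the 8 cards you pick happen to be face up.
for k in range(9):
    # Build a 52-card deck with 8 face-up cards in total:
    deck = ([True] * k + [False] * (8 - k)             # the 8 you pick
            + [True] * (8 - k) + [False] * (36 + k))   # the remaining 44
    up_picked, up_rest = split_and_flip(deck, 8)
    assert up_picked == up_rest == 8 - k

print("Equal face-up counts in all nine cases.")
```

The pattern behind the check is the same as the contraction argument: flipping the picked group turns its k face-up cards face down and its 8 - k face-down cards face up, leaving 8 - k face-up cards in each group.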
History: The above puzzle is adapted from one that usually involves a hundred coins instead of a deck of cards, with ninety of the coins heads up on the table and ten tails up, or vice versa. The solution is logically identical. The puzzle has apparently been used as a job interview question by Apple2, but I haven't been able to find out who invented it or when.
Notes:
Comments: The first step in problem solving is to understand the nature of the problem. If you get stuck on a problem or it seems impossible to solve, go back and re-check what it is you have to do. It may be that you made a false assumption that rendered the problem unsolvable.
I selected the puzzle above as an example for this entry on problem-solving because it illustrates the danger of making false assumptions. When you began to think about how to solve it, did you assume that you had to split the deck into two equal groups? I did! But you can't solve it that way except by pure luck. Notice that the instructions place no restrictions on how many cards are in each group.
Did you assume that you couldn't turn over the cards? I did! Again, the instructions do not forbid turning the cards over, and you're unlikely to solve the puzzle without doing so.
If you make either of these assumptions, you'll find yourself stuck and the puzzle will appear to be impossible to solve. That's a sign that you should re-read the instructions to check whether you are making any false assumptions.
The title of this entry is based on the old saying that when you assume "you make an ass of you and me", that is, "ass" + "u" + "me" = "assume". That said, it is often useful in reasoning to make assumptions, and some standard proof methods, such as conditional proof and indirect proof, do so. However, false assumptions can render a problem unsolvable, so be careful what you assume.
* Previous entries in the "How to Solve a Problem" series:
A recent article on the restoration of Notre Dame cathedral in Paris includes the following sentence: "Louise Bausiere, who spent the last two years working on the cathedral's knave, told NBC News Wednesday said [sic] she hoped people would admire what the team of craftspeople had done.1" "Knave" is an old-fashioned word for a dishonest or untrustworthy man2, as well as an alternative name for a Jack in a deck of cards. So, the Knave of Hearts, who stole the tarts in the nursery rhyme, is a Jack of Hearts3. Puzzle fans may be familiar with the type of logic puzzle in which knights always tell the truth and knaves always lie.
So, what could be meant by Bausiere "working on the cathedral's knave"? Does Notre Dame have its own designated rascal? Obviously not; instead, what Bausiere spent a couple of years working on was the nave4. "Nave" and "knave" are pronounced exactly the same, and the only difference in spelling is the silent "k" on "knave", but they mean very different things. A nave is the central part of the floorplan of a church running its length from the front entrance to the transepts5.
Since both "nave" and "knave" are English words, a pure spell-checking program won't catch the substitution of one for the other. I checked the example sentence in several online spelling or grammar checkers and some found nothing wrong with it, others at least noticed the ungrammatical "said", and a couple actually suggested dropping the "k" on "knave".
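The limitation is easy to see in miniature. The sketch below is a toy illustration of my own, with a hypothetical, tiny word list rather than any real checker's dictionary: a pure spell checker flags only words missing from its dictionary, so a real word in the wrong place sails through.

```python
# A toy word list; any real spell checker's dictionary contains both words too.
DICTIONARY = {"she", "worked", "on", "the", "cathedral's", "nave", "knave"}

def misspellings(sentence):
    """Return the words not found in the dictionary (punctuation-free input assumed)."""
    return [word for word in sentence.lower().split() if word not in DICTIONARY]

print(misspellings("she worked on the cathedral's knave"))  # [] : nothing flagged
print(misspellings("she worked on the cathedral's knaev"))  # ['knaev'] : caught
```

Catching "knave" where "nave" was meant requires looking at context rather than individual words, which is presumably why only some of the grammar checkers I tried suggested dropping the "k".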
There's an old proverb that "knaves and fools divide the world"6, but naves and transepts divide a cathedral.7
Notes: