Against Gatekeeping & "Democracy Dies in Darkness"
- Jeffrey A. Singer, "Against Scientific Gatekeeping", Reason, 5/2022
As you read this, keep in mind that "gatekeeping" is the new doublespeak for "censorship".
Most people prefer experts, of course, especially when it comes to health care. … But a problem arises when some of those experts exert outsized influence over the opinions of other experts and thereby establish an orthodoxy enforced by a priesthood. If anyone, expert or otherwise, questions the orthodoxy, they commit heresy. The result is groupthink, which undermines the scientific process.
The COVID-19 pandemic provided many examples. Most medical scientists, for instance, uncritically accepted the epidemiological pronouncements of government-affiliated physicians who were not epidemiologists. At the same time, they dismissed epidemiologists as "fringe" when those specialists dared to question the conventional wisdom. … The medical science priesthood has a long history of treating outside-the-box thinkers harshly.
I've omitted here some interesting and important history concerning Edward Jenner, Ignaz Semmelweis, and John Snow―if you're not familiar with their stories, be sure to read the whole thing. We would like to believe that such stories are remnants of a blinkered past and could never happen in the 21st century, but see below.
…[E]ven health care practitioners who recognize the value of unconventional thinking tend to bridle when they face challenges from nonexperts. Today the internet gives everyone access to information that previously was shared only among medical professionals. Many lay people engage in freelance hypothesizing and theorizing, a development turbocharged by the COVID-19 pandemic. Every physician can tell stories about patients who ask questions because of what they've read on the internet. Sometimes those questions are misguided, as when they ask if superfoods or special diets can substitute for surgically removing cancers. But sometimes patients' internet-inspired concerns are valid, as when they ask whether using surgical mesh to repair hernias can cause life-threatening complications. … Health care professionals who see only the costs of their patients' self-guided journeys through the medical literature tend to view this phenomenon as a threat to the scientific order, fueling a backlash. Their reaction risks throwing the baby out with the bathwater.
It is easy to understand why the scientific priesthood views the democratization of health care opinions as a threat to its authority and influence. In response, medical experts typically wave the flag of credentialism: If you don't have an M.D. or another relevant advanced degree, they suggest, you should shut up and do as you're told. But credentials are not always proof of competence, and relying on them can lead to the automatic rejection of valuable insights. …
Still, it is certainly true that lacking a background in a specific discipline can impede critical analysis of scientific studies by laypeople, making them more vulnerable to quacks and charlatans. Training in the discipline can make it easier to detect "cherry picking" of data and anticipate alternative interpretations of the evidence. Experts are experts for a reason. The question is how we can maximize the benefits of scientific democratization while minimizing its costs. …
The politicization of COVID-19 science was…apparent in the reaction to a prominent skeptic of lockdowns. In early March 2020, John P.A. Ioannidis, a professor of epidemiology and biostatistics at Stanford and an icon of the movement for evidence-based medicine, published an essay in STAT titled "A Fiasco in the Making?" The subhead warned that "as the coronavirus pandemic takes hold, we are making decisions without reliable data." Ioannidis argued that school closures and other lockdown measures could inflict great harm. Before imposing unprecedented restrictions, he said, public health officials should wait for more data.
Ioannidis' political views are unknown. But his essay jibed with the skepticism expressed by the president and many of his supporters. The heretofore revered epidemiologist therefore was pilloried by the medical science priesthood and its supporters in the media. The Nation published an article calling Ioannidis' work a "black mark" on Stanford and implying it was influenced by corporate sponsors.
This is the typical ad hominem attack you'd expect from The Nation.
Or consider the reaction to the Great Barrington Declaration, published on October 4, 2020, by Martin Kulldorff, then a professor of epidemiology at Harvard; Sunetra Gupta, a professor of epidemiology and immunology at Oxford; and Jay Bhattacharya, a Stanford professor of medicine with a Ph.D. in economics. The statement, which was eventually endorsed by thousands of medical and public health scientists, including the recipient of the 2013 Nobel Prize in chemistry, noted that broad lockdowns entail large costs and advocated a more focused approach that would let those least vulnerable to COVID-19 resume normal life as much as possible.
The authors of the Great Barrington Declaration represent a range of political ideologies. But because they opposed the policies favored by the public health establishment and received applause from people aligned with Trump, they were vilified. An editorial in the journal Science-Based Medicine said they were "following the path laid down by creationists, HIV/AIDS denialists, and climate science deniers."
This is a pathetic guilt-by-association smear, and a journal supposedly in support of science-based medicine should be ashamed of itself for resorting to it.
One cannot ignore the role of social media in all this. Platforms such as Facebook, Twitter, and YouTube are private property, and the owners have the right to decide what sort of content they will allow. But the major platforms, like the mainstream news media, tend to align themselves with the science priesthoods. They therefore are inclined to suppress scientific heterodoxy—a tendency encouraged by the Biden administration's explicit demands that they eliminate COVID-19 "misinformation," including content that is deemed "misleading" even if it is not verifiably false.
Cultural and ideological affinity with the priesthood might partially explain this alignment. While many of the tech entrepreneurs didn't acquire academic credentials, they see themselves as new members of the intellectual elite. In addition, both Republicans and Democrats in Congress have spoken of the digital media as "the wild west," each seeking to regulate it to their own advantage. By forging an alliance with the scientific priesthood and academic elite, tech entrepreneurs might strengthen their position against political assaults.
While they remain nominally private property, these platforms have become virtual monopolies, since none of them faces substantial competition. They are then threatened with regulation by members of Congress, as well as by the White House, if they don't follow the government's line. As a result, the government now has powers of censorship it lacked as little as ten years ago, and it won't willingly give up that power. This explains the recent furor over Elon Musk's purchase of Twitter, since he has expressed the intent to free it from government control.
This helps explain why Facebook used "fact checkers," Twitter applied warning labels, and YouTube removed posts that questioned the lockdown policies advocated by the public health establishment in the pandemic's early days. Yet, in recent months it has become acceptable in polite society to criticize school and business closures and other lockdown measures. …
Perhaps the most egregious example of digital media doing the dirty work for the priesthood is the suppression of talk about the potentially embarrassing source of the COVID-19 virus. Efforts to suggest the source was a leak at the Wuhan Institute of Virology were dismissed as a "conspiracy theory" by pundits and suppressed by social media gatekeepers. After The Wall Street Journal reported in May 2021 that intelligence sources believed a lab leak is a plausible explanation that deserves further investigation, Facebook lifted its ban on posts that mentioned the theory. Twitter, on the other hand, refused to commit to what it would censor on the subject. By summer 2021, a consensus emerged among scientists in the academy and the media that the lab leak theory was at least plausible and should be explored.
During the last two years, public health officials got a lot of things wrong, although it remains to be seen if they will ever admit it. Multiple studies, for example, have concluded that there is little or no evidence that shelter-in-place orders and other lockdown strategies had an important impact on COVID-19 infections or deaths. Other research has shown that such restrictions disproportionately harmed the young and the poor.
Public health officials criticized Kulldorff et al. for stressing natural immunity's role in protecting against infection. On January 19, 2022, the CDC publicly acknowledged that during the delta wave, natural immunity had offered better protection than vaccination. The authors of the Great Barrington Declaration had been largely correct. …
Because the internet has democratized science, the academy no longer has a monopoly on specialized information. Based on their own assessments of that information, lay people can chime in and may even end up driving the scientific narrative, for good or ill. Meanwhile, the internet is developing its own would-be gatekeepers. Those who oversee the major social media platforms can filter information and discourse on their platforms. Pleasing the priesthood enhances their credibility with elites and might protect them from criticism and calls for regulatory intervention, but they risk being captured in the process.
Challenges to the priesthoods that claim to represent the "scientific consensus" have made them increasingly intolerant of new ideas. But academic scientists must come to terms with the fact that search engines and the digitization of scientific literature have forever eroded their authority as gatekeepers of knowledge, a development that presents opportunities as well as dangers.
…[T]he science priesthood must adapt to a world where specialized knowledge has been democratized. For scientific knowledge to advance, scientists must reach a rapprochement with the uncredentialed. They must not dismiss lay hypotheses or observations out of hand. They must fight against the understandable desire to avoid any hypothesis that might upset the health bureaucrats who control billions of research grant dollars. It is always useful to challenge and reassess long-held premises and dogmas. People outside of a field might provide valuable perspectives that can be missed by those within it.
It would help if fewer billions of dollars were in the control of a handful of entrenched bureaucrats.
Openness to unconventional ideas has its limits. We don't take flat-earthers seriously. Nor should we lend credence to outlandish claims that COVID-19 vaccines cause infertility, implant people with microchips, or change their DNA. There are not enough hours in the day to fully address every question or hypothesis. But a little tolerance and respect for outsiders can go a long way. If those habits become the new norm, people will be more likely to see rejection of challenges to the conventional wisdom as the objective assessment of specialists rather than the defensive reaction of self-interested elites. Science should be a profession, not a priesthood.
I'm not sure what Singer is suggesting here. Up until the last few years, no one would have thought it necessary to do anything about flat-earthers. The solution to such claims is not to suppress them, but to debunk them, and to explain why we know the earth is round and so on. Attempts at suppression backfire, anyway, since conspiracy theorists thrive on censorship. They like nothing better than to claim that the government or some "scientific priesthood" is trying to hide the truth.
My main disagreement with Singer is with his notion that the "gatekeepers" are a "scientific priesthood". Instead, the "gatekeepers" are mostly a small group of bureaucrats, some of whom have scientific credentials, but they are bureaucrats first and scientists second, if at all. Other "gatekeepers" are the giant corporations that run so-called social media, which are certainly not controlled by some scientific priesthood.
- Cristiano Lima, "Hunter Biden laptop findings renew scrutiny of Twitter, Facebook crackdowns", The Washington Post, 3/31/2022
Just weeks before the 2020 election, Twitter and Facebook took rare steps to limit the circulation of an article by the New York Post detailing emails supposedly from the laptop of Hunter Biden, son of President Biden. Facebook limited the reach of the story while it attempted to look into the veracity of the report. Twitter went a step further, locking the New York Post account and blocking users from posting links to the story over concerns it was based on hacked materials. …
Amid new reporting by The Washington Post, verifying the authenticity of thousands of emails purportedly from the laptop of Hunter Biden, and the New York Times, which authenticated some messages in the cache, the social media giants are facing fresh scrutiny. …[M]y colleagues Craig Timberg, Matt Viser and Tom Hamburger wrote that “thousands of emails purportedly from the laptop computer of Hunter Biden” are “authentic communications that can be verified through cryptographic signatures from Google and other technology companies,” based on an analysis two security experts conducted for The Post. …
…[T]he findings highlight the fact that technology companies appear to have acted both forcefully and preemptively against a perceived threat that to this day has not been publicly corroborated. Twitter and Facebook declined to comment on the new findings. The findings are also resurfacing broader debates within civil society about where social media obligations to curtail suspected misinformation begin and end.
This last sentence contains a false presupposition: since when are gigantic corporations obligated to act as censors? I don't trust Twitter or Facebook to censor what I write or read, which is one reason I don't use them. More important is the fact that such corporations are subject to capture and control by the government.
A crucial question is what role, if any, should they play in restricting articles by prominent news outlets, particularly when dealing with unverified or dubious sourcing? “A company like Twitter should not be trying to make a determination on the veracity of information when it is impossible for them to have the type of information they would need to do so,” Evan Greer, director of digital activist group Fight for the Future, told The Technology 202. …
Even if they had that information, what business is it of theirs to be "restricting articles by prominent news outlets"?
Former Facebook security chief Alex Stamos told my colleague Will Oremus that, in retrospect, the platforms “overreacted” against the New York Post coverage of Hunter Biden….
It takes a former security chief to say this? When will the current chief publicly apologize?
Twitter, which took more aggressive action, took the brunt of the heat in Washington. While the company reversed its initial ruling that the article violated its policies against hacked materials, it continued to block users from posting it under a separate rule against publishing private user information. Twitter changed course again, saying it would allow users to share the link because the information was widely available in publications beyond the New York Post. Ultimately, the episode underscores the risks social media platforms take when deciding whether to limit content where the risk of harm is unclear.
It appears that Twitter was looking for a policy―any policy―that would allow it to "limit content" until the election was over. "Limit content" is a nice euphemism for "censor", and the risk of harm from censorship is not at all unclear. What else does "Democracy Dies in Darkness" mean?
It's nice to see The Washington Post acknowledging that "social" media censorship is a problem, even in this half-hearted way. Imagine the outrage if it had been The Washington Post that had been locked out of Twitter rather than The New York Post. It would have been more justifiable for Twitter to have limited the former's content rather than the latter's if the real intent was to limit the spread of misinformation.
Disclaimer: I don't necessarily agree with everything in these articles, but I think they're worth reading as a whole. In abridging them, I have sometimes changed the paragraphing and rearranged the order of the excerpts in order to emphasize points.
The following very long article could use a good edit, but it's still worth reading as a whole. Below, I've excerpted only a small fraction of it, including some points on which I wanted to comment.
April 21st, 2022 (Corrected: 4/30/2022)
Credibility Checking, Part 4: Ballpark Estimation
In the previous installment of this series on how to check claims for credibility[1], we needed to know how many teenaged drivers there were in the United States in 2007, which is the sort of very specific statistical fact that you are unlikely to be able to find quickly or at all. Rather than giving up, we estimated the number based on a couple of other facts that we knew or were able to look up quickly: the population of the country and the average life span in that year. Thankfully, in order to check a claim for credibility, we seldom if ever need precise numbers, and ballpark estimates will serve.
What is a "ballpark" estimate? It is an estimate that is "in the ballpark", as we say, that is, it's not spot-on but it's close enough. The use of "ballpark" as a name for this type of estimation appears to have come from rough estimates of the number of spectators in a ballpark watching a baseball game[2]. We frequently don't need an exact number, which is fortunate since an exact number is often not available. In such situations, we can make do with a ballpark estimate, which I define as one that has the same order of magnitude as the number estimated.
What are orders of magnitude? They are tens, hundreds, thousands, tens of thousands, and so on, but also tenths, hundredths, thousandths, and so forth. For an estimate to be "in the ballpark" means that the estimate and the number estimated both belong to the same order of magnitude.
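In code, this definition can be cashed out by comparing the integer parts of the base-10 logarithms of the two numbers. Here is a minimal sketch in Python; the function name is my own invention:

```python
from math import floor, log10

def same_ballpark(estimate: float, actual: float) -> bool:
    """True when two positive numbers have the same order of magnitude,
    i.e., fall between the same consecutive powers of ten."""
    return floor(log10(estimate)) == floor(log10(actual))

print(same_ballpark(35_000, 38_680))   # True: both in the tens of thousands
print(same_ballpark(35_000, 380_000))  # False: off by an order of magnitude
```

Note that `floor` (rather than truncation) makes this work for fractional orders of magnitude as well: 0.002 and 0.009 are both thousandths, and the function agrees.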
The best way to learn how to successfully guesstimate numbers is to see examples of how to do so, and then try your own hand at it. You may be surprised at how easy it is―and fun!
What percentage of the American population dies in automobile accidents?[3] This is a statistic that you're unlikely to just happen to know. However, you may well know enough to make a good estimate. If you'd like to try answering this question yourself, stop reading here and do so; then come back and read on to see how I estimated it.
What is it that you need to know to make such an estimate? As discussed in the previous part, a percentage is a type of ratio. To figure the ratio, you need two numbers: a numerator and a denominator. In this case, the numerator is the number of Americans who die in car crashes and the denominator is the total number of Americans who die―since everyone dies, this is simply the total population. Since we're doing a guesstimate, we don't need precise statistics for either of these numbers.
As I've mentioned in previous entries[4], the population of the United States is a good landmark number to remember, and it's easy: the current population is approximately a third of a billion. So, that's our denominator.
The numerator is trickier: how many people out of that 330 million population die in traffic accidents? I've also mentioned in previous entries that the number of Americans who die in traffic accidents per year tends to be around 30-40K[5]; let's use the midpoint, 35K. However, this is not our numerator, since that ratio gives the risk of being killed in a car accident in a year. Luckily, the average American lives to be about 80 years old[6], so the numerator we want is 35K × 80, which is approximately three million. Now, the math is so simple that you can do it in your head: three million is about 1/100th of 330 million, so the percentage of Americans who die in automobile crashes is about 1%.
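The whole estimate fits in a few lines of code. A minimal sketch in Python, using the rounded landmark numbers from the text:

```python
# Ballpark estimate: what percentage of Americans die in car crashes?
us_population = 330_000_000     # landmark: about a third of a billion
annual_traffic_deaths = 35_000  # midpoint of the usual 30-40K range
life_span = 80                  # rough average American life span, in years

# Annual deaths times a lifetime's worth of years gives the number of
# Americans alive today who will eventually die on the road.
lifetime_traffic_deaths = annual_traffic_deaths * life_span  # 2.8 million

percentage = 100 * lifetime_traffic_deaths / us_population
print(round(percentage, 1))  # prints 0.8, i.e., in the same ballpark as 1%
```

The point of the sketch is that every input is a round number you can remember or look up in seconds; the precision of the answer is no better than the precision of the inputs, which is exactly what a ballpark estimate requires.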
Does the Estimate Hold Up?
Now let's compare our estimate with the one given by Weinstein & Adam in Guesstimation[3]. They actually formulated the question as: "What fraction of American deaths are caused by automobiles?", but any fraction can be easily turned into a percentage[1].
So, for the numerator they used 40K Americans "killed on the roads each year", which they multiplied by 75, life expectancy in 2008[7], to get three million, which represents the total number of Americans who die in a car crash at some age. That's our numerator.
The denominator for the fraction is 300 million―which was the approximate population of America in 2008 when the book was published[8]. Again, you can do the math in your head, and the result is the same as our estimate above, which is evidence that it is a good estimate.
Another way to test a ballpark estimate is to compare it to your own experience. If one percent of us die in car accidents, what is the chance that you have known someone who died that way? How many people have you known who have died? Obviously, this depends upon how old you are and how many people you have known. Have you known as many as a hundred people who have died? If so, then you probably knew at least one automotive fatality.
I don't know about you, but I have a hard time remembering all of my relatives, friends, and acquaintances who are gone. Still, it doesn't seem to be close to a hundred, but perhaps around half that many. Nonetheless, one close relative of mine died in a car crash, and a friend of a friend also perished that way. So, if anything, it seems as though our estimate may be on the low side.
Finally, how does our estimate compare to official statistics? The Centers for Disease Control and Prevention (CDC) collect statistics on "leading causes of death", among them "Accidents (unintentional injuries)". According to the most recent data, a total of 3,383,729 Americans died in 2020[9]. However, the CDC does not break out the numbers for accidents involving cars.
Thankfully, another alphabet agency, the National Highway Traffic Safety Administration (NHTSA), collects statistics on automotive deaths. According to an NHTSA press release, 38,680 Americans are estimated to have died in automobile accidents in 2020[10]. This, of course, is close to our midpoint estimate of 35K. Again, you can do the math at a glance: 39K is about 1% of 3.4 million; more precisely, it's 1.1%.
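The check against the official figures is the same one-line calculation, this time with the exact numbers:

```python
# Check the ballpark estimate against the official 2020 figures cited above.
total_us_deaths_2020 = 3_383_729  # all U.S. deaths in 2020, per the CDC
traffic_deaths_2020 = 38_680      # estimated traffic deaths, per the NHTSA

share = 100 * traffic_deaths_2020 / total_us_deaths_2020
print(round(share, 1))  # prints 1.1 (percent), agreeing with the estimate
```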
Clearly, by these various measures, our estimate did not just hit the dartboard, it hit the bullseye.
One surprise of this exercise is just how many Americans die in automobiles. Why are we not more alarmed by this fact? Why don't we hear more about driving safety? I suspect that at least part of the reason is that we have grandfathered automotive accidents into our risk assessments. It's almost as if such accidents are simply "acts of God" that we have no control over, so we just accept that one percent of us will die in a car accident, even though most are preventable.
Finally, this exercise shows that by combining what we know with what we can research quickly―using tools such as Wolfram Alpha―and using only simple math, we can make surprisingly accurate estimates. Such estimates are useful, as we have seen in previous installments, for checking the credibility of claims made by activists, advertisers, and politicians.
1. Credibility Checking, Part 3: Ratios, Rates & Percentages, 3/27/2022.
2. Webb Garrison, Why You Say It (1992), p. 175.
3. This example is based on the second question in section 11.1 of Lawrence Weinstein & John A. Adam's Guesstimation: Solving the World's Problems on the Back of a Cocktail Napkin (2008). This is a useful book for practicing ballpark estimates, though it's heavily skewed toward ones involving physics, perhaps because one of the authors is a physicist.
4. For instance: A trillion here, a trillion there, and pretty soon you're talking about real money, 9/29/2021.
5. Credibility Checking, Part 2: Divide & Conquer, 2/4/2022.
6. "What is life expectancy in the United States?", Wolfram Alpha, accessed: 4/19/2022.
7. "What was life expectancy in the United States in 2008?", Wolfram Alpha, accessed: 4/20/2022.
8. "What was the population of the United States in 2008?", Wolfram Alpha, accessed: 4/20/2022.
9. "Deaths and Mortality", Centers for Disease Control and Prevention, accessed: 4/21/2022.
10. "2020 Fatality Data Show Increased Traffic Fatalities During Pandemic", National Highway Traffic Safety Administration, 6/3/2021.
April 8th, 2022 (Corrected: 4/9/2022)
When the Sleepers Wake
The Agency for Counter-Terrorism (ACT) has discovered the existence of sleeper agents of a foreign power. In addition to discovering the existence of the sleepers, the ACT recovered a secret document outlining the strict rules used by this power in its sleeper agent operations.
Each of the sleeper agents is assigned to two cells―that is, groups of agents that are known to each other. Any two cells have one, but only one, agent in common. In order to protect the sleeper operation, the agents are kept in the dark about the existence or membership of any cells to which they don't belong. In this way, if an agent is discovered by the enemy, only two cells will be in danger of being lost.
In addition, the secret document revealed that there are currently four sleeper cells. However, what ACT would most like to know is just how many sleeper agents they should look for. Can you help? Can you determine from the above information the number of sleeper agents?
How many pairs of sleeper cells are there?
There are six sleeper agents.
Explanation: Given the rules that any two cells have a single agent in common and that each agent is assigned to exactly two cells, it follows that the number of agents is equal to the number of pairs of distinct cells. Let's call the four cells A, B, C and D. Then there are six pairs of cells: AB, AC, AD, BC, BD and CD. Thus, the number of sleeper agents has to be at least six: one for each of the distinct pairs. That there can't be more than six can be seen by supposing that there were a seventh agent; then that agent would have to be assigned to two distinct cells, which would mean that the seventh agent would belong to one of the six pairs of cells. However, that would mean that two of the cells would have two agents in common, which violates the first rule. Therefore, there are exactly six sleeper agents.
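The counting argument is easy to verify by brute force. Here is a minimal sketch in Python, identifying each agent with the unordered pair of cells to which he or she belongs:

```python
from itertools import combinations

cells = ["A", "B", "C", "D"]

# One agent per unordered pair of distinct cells: C(4, 2) pairs in all.
agents = list(combinations(cells, 2))
print(len(agents))  # prints 6

# First rule: any two cells have exactly one agent in common.
for x, y in combinations(cells, 2):
    shared = [agent for agent in agents if x in agent and y in agent]
    assert len(shared) == 1

# Second rule: each agent is assigned to exactly two cells.
# This holds by construction, since each agent just is a pair of cells.
```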
Disclosure: This puzzle was based on puzzle 38 from Jaime & Lea Poniachik's book Hard-to-Solve Brainteasers (1998). If you liked this puzzle and would like to solve a similar but somewhat harder one, see: The Puzzle of the Sleeper Cells, 2/29/2016.
Acknowledgment: Thanks to Lawrence Mayes for pointing out an error in the original wording of the puzzle.
Infer or Imply?
Imply, Infer. Not interchangeable.[1]
While researching the previous entry[2] in this series on confusible words, I found a news story with the following headline:
Browns refute tanking claims by former coach Hue Jackson[3]
This is, of course, an example of the mistake of using "refute" to mean "deny" discussed in that entry, but the same article has the following sentence: "Jackson, who is now coaching at Grambling, made several posts on Twitter inferring that he received bonus payments from Browns owner Jimmy Haslam during his two-plus seasons with the team."
To infer is to draw a conclusion based on some kind of evidence, whereas to imply is to state something that either entails or suggests a conclusion. A claim is usually said to imply something when it does not state it outright but it can be inferred from what is stated. People can imply through the statements they make, because the statements themselves imply, but only people can infer.
Is it Jackson's posts that are supposed to have inferred that he received the payments, or Jackson himself? Jackson's posts may have implied that he received such payments, but they cannot infer it. Moreover, Jackson himself surely knew directly whether he had been paid and would not need to infer it. Instead, either Jackson, his posts, or both implied that he received the payments.
Unlike the mistake of using "refute" to mean "deny", all of the books on usage that I've checked that take a position condemn using "infer" to mean "imply"[4]. As Bill Bryson writes after giving an example:
According to nearly all authorities, on both sides of the Atlantic, the word there should be implied, not inferred. Imply means to suggest…. Infer means to deduce…. A speaker implies, a listener infers. The distinction is useful and, in careful writing nowadays, expected.[5]
In the previous entry[2], I discussed the fact that some dictionaries are now treating to deny as a standard meaning of "refute", apparently because the word is now used in this sense so often that it's no longer a mistake. However, this doesn't appear to be the case with "infer" used to mean to imply. For instance, the Cambridge Dictionary, which is my usual online source for word meanings, defines "infer" in the traditional way[6], and includes a link to a usage note describing the "imply" sense as a "typical error"[7].
When I checked it earlier this year, the online Merriam-Webster Dictionary gave "imply" as a second "essential meaning" of "infer"[8]. However, it labeled it "informal", which I guess was its euphemism for "incorrect". The entry has since been revised to remove "imply", replacing it with "hint, suggest" as a fourth possible meaning, and the "informal" warning has disappeared[9]. I don't know whether this is progress or not.
In addition, the entry includes a very confusing "Usage Guide"[10]: how this brief note is supposed to guide usage is unclear since it gives no guidance. Instead, it claims that the use of "infer" to mean "imply" began centuries ago and continued until some time after World War I. It suggests that the distinction between the two words that developed in the last century was based on a confusion. However, given that Merriam-Webster seems to jump on every linguistic change bandwagon that comes along, why does it resist one that has been around for a century? Even if the usage note is correct that the change was based on a mistake, why should that matter to a descriptivist dictionary, since a lot of linguistic changes are based on mistakes?
All of this shows how capricious these lexicographical decisions are. If the dictionary makers were just frankly prescriptive in their definitions, you could disagree with their recommended usage and argue with it. Instead, the bogus claim that they are just describing the way the language is used is supposed to end all disagreement and prevent you from rejecting bad linguistic advice.
As is the case for many of these confusible pairs, the misuse usually goes only one way, that is, one word is used when the other should be. Typically, "infer" takes the place of "imply", though I recently came across an example going in the opposite direction in a book discussing an allegedly libelous claim: "It would have remained for the court to decide whether any such allusion could have reasonably been implied by any reader of NOW! magazine…."[11] It must have been the reader who would have inferred the libelous allusion, and the magazine that implied it.
I'm sure that readers of The Fallacy Files are smart enough to infer what my position is on this issue without my having to imply it.
1. William Strunk, Jr. & E. B. White, The Elements of Style (4th Edition, 1999), p. 49.
2. Refute or Deny?, 3/2/2022.
3. Tom Withers, "Browns refute tanking claims by former coach Hue Jackson", Associated Press, 2/2/2022.
4. Omitting Strunk & White and Bryson, they are:
- Harry Blamires, The Cassell Guide to Common Errors in English (1997): "To 'infer' something is a matter of concluding, not of conveying. The speaker implies something, the listener infers something."
- Michael Dummett, Grammar & Style for Examination Candidates and Others (1993), p. 93: "Only a person can infer; one statement can imply another, which the speaker also implied by making the first statement. To infer something is to draw a conclusion; to say something from which that conclusion follows or which the speaker means his hearers to take as following is to imply the conclusion, not to infer it."
- Mignon Fogarty, Grammar Girl's 101 Misused Words You'll Never Confuse Again (2011), under "Imply". This is the most recent book I've checked. Fogarty writes: "The incorrect use of infer to mean imply is so common that in a decade or so it may be considered standard, but for now, careful writers and speakers continue to make a distinction." It's been slightly over a decade since this was written, and if careful users of the language continue to make the distinction it may last for another decade.
- Robert J. Gula, Precision: A Reference Handbook for Writers (1980), p. 216: "You imply when you suggest something. You infer when you draw a conclusion from someone else's words."
- Adrian Room, The Penguin Dictionary of Confusibles (1980): "In fact the two are indeed often used interchangeably, but strictly speaking: 'imply' means 'express indirectly'…and 'infer' means 'derive by reasoning', 'deduce'. … So if I 'imply' that you are deceitful, I say so indirectly; if I 'infer' that you are deceitful, I gather that you are from what you say or do, or from what I hear about you."
- Harry Shaw, Dictionary of Problem Words and Expressions (Revised edition, 1987): "To imply is to suggest a meaning only hinted at, not explicitly stated. To infer is to draw a conclusion from statements, evidence, or circumstances."
5. Bill Bryson, Bryson's Dictionary of Troublesome Words: A Writer's Guide to Getting it Right (2002).
6. "Infer", Cambridge Dictionary, accessed: 4/1/2022.
7. "Imply or infer?", Cambridge Dictionary, accessed: 4/1/2022.
8. "Infer", Merriam-Webster Dictionary; the Internet Archive Wayback Machine's cache of the page for 1/22/2022.
9. "Infer", Merriam-Webster Dictionary, accessed: 4/1/2022.
10. "Infer vs. Imply: Usage Guide", Merriam-Webster Dictionary, accessed: 4/1/2022.
11. Chapman Pincher, The Secret Offensive (1986), p. 45.