Previous Month | RSS/XML | Current


March 21st, 2023 (Permalink)

What's New?

The Independence Fallacy, that's what!

Poll Watch
March 8th, 2023 (Permalink)

Little Big MOE

Cartoonist Scott Adams got himself into trouble recently1. I think everything sensible that can be said about it has already been said, together with a lot of things that aren't sensible, so I don't have anything to add. Instead, I want to comment on the poll results that seem to have set Adams off on his rant. So, after this paragraph I will say no more about Adams.

The poll, which was conducted by the Rasmussen polling organization, asked the following question among others: "Do you agree or disagree with this statement: 'It's OK to be white.'"2 Of the black respondents to the poll, 53% either "strongly agreed" (42%) or "somewhat agreed" (11%)―whatever that means―with the statement. Of the remaining 47% of black respondents, 18% "strongly disagreed" with the statement, and 8% "somewhat disagreed", whereas the remaining 21% answered "not sure".3

The entire sample was supposedly of a thousand American adults, which means that blacks were a subsample. However, according to a video put out by Rasmussen, the actual sample size was 1,182 respondents of whom only 117 were black. What is the margin of error (MOE) for a sample of that size? Approximately nine percentage points, which means that the 95% confidence interval for those blacks who agreed with the statement is 44%-62%, and similarly for all the other results for this subsample.4
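Those figures are easy to check with the standard formula for the 95% margin of error, z·√(p(1−p)/n), evaluated at p = 0.5 as pollsters conventionally do. Here's a quick sketch in Python (the sample sizes are from the Rasmussen video cited above):

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(moe(117)))   # the black subsample: about 9 points
print(round(moe(1182)))  # the full sample: about 3 points
```

So the 53% who agreed carry a confidence interval of roughly 44%-62%, as stated, while the full sample's MOE is the familiar ±3 points.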

So, this is another example of the statistical fact I've been emphasizing over the last few months5: subsamples have larger MOEs than full samples, and very small samples have very large MOEs. The Rasmussen poll had such a small sample of black respondents that the results are extremely imprecise and, as a consequence, generalizing from such a small sample to all black American adults is problematic.

Unsurprisingly, the news media did their usual poor job of interpreting the poll results for their readers. For example, a Newsweek article reported: "The poll was conducted on February 13-15 and had a margin of error of +/- 3 percentage points." Then, in the next sentence: "[Mark] Mitchell [head pollster at Rasmussen] told Newsweek that the percentage of Black respondents in the poll correspond with the Black population in the United States, which is around 13 percent."1 The article does not mention the MOE of the subsample, nor even the fact that it must be larger than ±3 percentage points. The juxtaposition of these two sentences will lead most readers to think that the MOE mentioned applies to the "Black" subsample.

Mitchell deserves credit for giving the MOE for the subsample in the video previously referred to, but fewer than 1,500 people have seen it. Inconsistently, he goes on to say that Rasmussen refuses to do national polls with samples of fewer than 750, yet he defends the results for a subsample of only 117! In addition, he claims that other pollsters do polling that is as bad or even worse. He's probably right about that, but "if you think our product is bad, our competitors' products are just as bad or even worse" is not a very strong defense. Moreover, he tries to shift the burden of proof onto the poll's critics, challenging us to prove the results wrong. However, it's not our job to prove the poll wrong; it's Rasmussen's job to show that it's probably right.


  1. Jon Jackson, "A Poll Involving Black People Has Set the Internet on Fire", Newsweek, 2/27/2023.
  2. "Questions - Okay To Be White - February 13-15, 2023", Rasmussen Reports, accessed: 3/8/2023.
  3. The percentages in this paragraph are based on the following "tweet": "National Survey of 1,000 American Adults", Rasmussen Reports, 2/26/2023.
  4. "Rasmussen Polls: We Answer Your Questions About our Jaw-Dropping Racism Poll", Rasmussen Reports, 2/28/2023.
  5. See:

March 4th, 2023 (Permalink)

Inflict or Afflict?

A book that I was reading last year inflicted the following sentence on me: "The main reason that they [cancer rates] are increasing is that we are not dying of the other diseases that have inflicted man throughout his existence."1 Can you see what's wrong with it?

"Afflict" and "inflict" are obviously sibling words that differ only in their prefixes since they both have the same stem, "-flict". That stem comes from the Latin verb "fligere", to beat, strike, or dash, which is also found in "conflict" and, surprisingly, "profligate"2. The difference in meaning among these words comes from their different prefixes: "con-" means "with" or "together", so that "conflict" means to fight with. The prefix of "inflict" is obviously "in-", which means much the same as the English word "in", so that "inflict" is to strike into; whereas, the prefix of "afflict", "af-", is a form of "ad-", "to" or "upon", so that to afflict is to strike upon3.

The distinction in meaning between "afflict" and "inflict" is subtle, so it's no surprise that Adrian Room writes that "[t]he two are quite often confused."4 Both mean to be struck with something bad, but infliction is typically done by people, whereas affliction is usually a natural occurrence. To return to the example sentence, the other diseases afflicted "man throughout his existence", but were not inflicted upon him.

I tried the example sentence in several online spelling or grammar checkers and only one flagged "inflicted" as a possible mistake, but it suggested "affected" rather than "afflicted" as a correction―close, but no cigar! So, it's unlikely that your spell-checking program, assuming that you use one, will catch such an error. File the distinction in your mental spell-checker, and don't inflict this confusion on others.


  1. John Brignell, Sorry, Wrong Number!: The Abuse of Measurement (2000), p. 202.
  2. John Ayto, Dictionary of Word Origins (1991).
  3. Joseph T. Shipley, Dictionary of Word Origins (1945).
  4. Adrian Room, The Penguin Dictionary of Confusibles (1980), under "inflicted/afflicted".

March 1st, 2023 (Permalink)

Coffee, Tea, or Both?

I love coffee, I love tea
I love the java jive and it loves me
Coffee and tea and the java and me
A cup, a cup, a cup, a cup, a cup!*

A restaurant that I frequent conducted a survey of its customers and it showed that 75% like coffee and 60% like tea. Knowing that I'm a logician, the owner approached me one day and showed me the results of the survey.

"What I'd really like to know is what percentage of my customers like both tea and coffee," she told me, "but I didn't think to ask. Can you figure that out?"

"I'm sorry," I replied, "but there's not enough information here to do that. However, I could tell you what the minimum and maximum percentages are."

"I'm not sure I understand that."

"The minimum is the lowest possible percentage, based on your survey data, that like both tea and coffee, and the maximum is the highest possible percentage that like both. The actual percentage that likes both will be between those two extremes."

"I see! That might help."

Can you help the restaurant owner? What are the minimum and maximum possible percentages of customers who like both coffee and tea?

Extra Credit: Assuming the maximum possible percentage of customers like both coffee and tea, what percentage likes neither? What about if the minimum likes both?

*Milton Drake & Ben Oakland, "Java Jive". You can hear the Manhattan Transfer's rendition of this song here: "The Manhattan Transfer―Java Jive" (1975), YouTube


February 16th, 2023 (Permalink)

Depressing Headline

The following headline appeared recently in a medical newsletter:

1 in 4 Docs Experiencing Clinical Depression1

This would be alarming news if true, but thankfully it's not. The article the headline linked to has a less alarming headline:

More Physicians Are Experiencing Burnout and Depression

Given recent events, that's not surprising. The article beneath the headline begins: "More than half of physicians reported feeling burned out this year and nearly 1 in 4 doctors reported feeling depressed…". However, "feeling depressed" is not the same thing as "clinical depression", and the newsletter headline dropped the word "nearly". A more accurate if less scary headline would be: Nearly 1 in 4 Docs Report Feeling Depressed.

Later in the article, we read: "Among the 23% of physicians who said they were depressed, about two thirds said they had 'colloquial depression' (sad, blue, feeling down) compared to about 1 in 4 who said they were clinically depressed." Obviously, the editor who wrote the headline for the newsletter saw the first sentence, then later read that "about 1 in 4…said they were clinically depressed", and then thought that these were the same 1 in 4. Instead, the latter is a fourth of the earlier fourth.

More precisely, we learn from a slideshow about the survey2 that of the 23% who said they were "feeling depressed", 24% said that they were "clinically depressed". This is only about 5.5% of the physicians surveyed, which is close to 1 in 18 rather than "1 in 4".
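The "1 in 18" figure is just the product of the two proportions, a fourth of a fourth. In Python:

```python
felt_depressed = 0.23   # share of all physicians who reported "feeling depressed"
clinical = 0.24         # share of those who said they were "clinically depressed"

share = felt_depressed * clinical   # share of ALL physicians surveyed
print(round(share * 100, 1))        # about 5.5 percent
print(round(1 / share))             # roughly 1 in 18
```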

It's hard to know whether this statistic is worrisome without knowing what percentage of the general public is clinically depressed, which the article does not mention. According to the National Institute of Mental Health, in 2020 6% of American adults had a "major depressive episode with severe impairment"3―which seems to correspond to "clinical depression"4. If so, then it doesn't appear that the physicians surveyed were unusually depressed.

Since the article was reporting a survey, there are some questions that should always be asked, for instance: What was the size of the sample? According to the slideshow5, the sample size was 9,175―a very large sample compared to most public opinion polls, which usually have about a thousand respondents.

Another important question is: How was the sample acquired? According to the slideshow: "Physicians were invited to participate in a 10-minute online survey.5" How were the physicians "invited"? How were they chosen to receive an invitation? The slideshow doesn't say. If the "invitation" was by e-mail or by a notice on Medscape's website, and only those physicians who volunteered to answer the online survey did so, then this was not a "scientific survey"6. The slideshow claims a margin of sampling error of slightly more than one percentage point, yet the mathematical theory of sampling error is based on the assumption that the sample was selected randomly from the population7―in this case, physicians.
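The slideshow's claimed margin of error is at least arithmetically consistent with the textbook formula, as this sketch shows; but, as noted, the formula presupposes a random sample, which is exactly what is in doubt here:

```python
import math

# 95% margin of sampling error for the full sample of 9,175,
# assuming (dubiously) a simple random sample of physicians.
n = 9175
moe_points = 1.96 * math.sqrt(0.25 / n) * 100
print(round(moe_points, 1))  # about 1.0 percentage point
```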

Without more information about who received an invitation and how the participants were selected, we can't be sure that this sample was representative of physicians in general. The physicians who were invited and accepted the invitation to participate in this survey may have differed from those who either were not invited or turned down an invitation. Perhaps those who were depressed were more likely to participate in such a survey, or maybe they were less likely to do so because they were too depressed.

For these reasons, it doesn't appear that this survey is anything to get depressed about.


  1. The headline, from an undated Medscape psychiatry newsletter, links to the following article: Christine Lehmann, "More Physicians Are Experiencing Burnout and Depression", Medscape, 2/1/2023. Thanks to Lawrence Mayes for calling the problem with the headline to my attention.
  2. Leslie Kane, "'I Cry but No One Cares': Physician Burnout & Depression Report 2023", Medscape, 1/27/2023, p. 21.
  3. "Major Depression", National Institute of Mental Health, 1/2022.
  4. Daniel K. Hall-Flavin, "Clinical depression: What does that mean?", Mayo Clinic, 5/13/2017.
  5. Leslie Kane, "'I Cry but No One Cares': Physician Burnout & Depression Report 2023", Medscape, 1/27/2023, p. 29.
  6. How to Read a Poll: Scientific Versus Self-Selected, 12/6/2022.
  7. Charles H. Backstrom & Gerald D. Hursh, Survey Research (1963), chapter 2. The reality is that no survey of human beings ever has a completely random sample since people can always refuse to participate, which is a problem currently bedeviling public opinion polling.

February 14th, 2023 (Permalink)

Who's My Baby?

The song "Everybody Loves My Baby", which will be one hundred years old next year, has been performed by everybody from Louis Armstrong to, oddly enough, Brigitte Bardot*. The song's lyrics tend to differ depending on the performer, but one thing that stays the same is the chorus:

Everybody loves my baby
But my baby don't love nobody but me
Nobody but me!

Everybody wants my baby
But my baby don't want nobody but me
That's plain to see!

Relying entirely on the above lyrics, can you figure out who my baby is? This is not a trivia puzzle, so looking up the rest of the lyrics would only lead you astray.

* You can check out B.B.'s surprisingly good performance here, though it's obvious that English is not her native tongue: "Brigitte Bardot―Everybody Loves My Baby" (1968), YouTube.
