Previous Month | RSS/XML | Current | Next Month


September 29th, 2020 (Permalink)

What would we do without experts?

"It is or it isn't": Constitutional law expert weighs legality of Knox County bar and restaurant curfew*

*Madisen Keavy, "'It is or it isn't': Constitutional law expert weighs legality of Knox County bar and restaurant curfew", WATE, 9/18/2020

Recommended Reading
September 27th, 2020 (Permalink)

Survey Says & How to Read Scientific Research

* Survey Says, 8/23/2020.

Poll Watch
September 23rd, 2020 (Permalink)

"It ain't over till it's over."1

I have good news and I have bad news. The good news first.

It's widely believed that public opinion polls were wildly wrong in 2016[2]. If so, why should we pay attention to them this year? In fact, as I pointed out at the time[3], the polls were not that far off. As you may recall, Hillary Clinton won the popular vote but lost in the Electoral College (EC), which happened because a lot of those votes were in states, such as California, whose electoral votes she easily won. Trump's voters, while numerically fewer, were spread more evenly across the country, allowing him to win more electoral votes, though at times by thin margins. National public opinion polls by themselves tell us nothing about the distribution of votes, and most of them correctly told us that Clinton would win the popular vote, though they may have slightly over-estimated by how much.

State polls can provide information on the distribution of the popular vote, and may help forecast who will win the electoral votes of a state. However, most state polls are based on smaller samples than national ones, because smaller polls are cheaper. Since the margin of error (MoE) is inversely related to the square root of the sample size, the smaller the sample the larger the MoE[4]. So, the MoEs of state polls are usually larger than those of national ones.
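To illustrate the relationship between sample size and margin of error, here is a minimal sketch, not from the original entry, using the standard 95%-confidence margin for a simple random sample at the worst case of an even split:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of
    size n, at the worst case p = 0.5 (an even split)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of 1,000 respondents vs. a smaller state poll of 400:
for n in (1000, 400):
    print(f"n = {n}: MoE = {100 * margin_of_error(n):.1f} points")
```

With 1,000 respondents the MoE comes out to about three percentage points, the figure usually cited for national polls; quartering the sample to 400, a size more typical of state polls, pushes it to about five points.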

In 2016, Trump won the popular vote in some states by such razor-thin margins that no poll could have predicted it. For instance, Trump won Michigan and all of its EC votes by only slightly more than 10,000 votes, which amounts to two-tenths of a percent of the votes cast[5].

At best, polls can reveal how citizens would vote if the election were held at the time of the poll, and national polls have lead times of at least a few days, often closer to a week. Election day is still a month-and-a-third away and, as this year has demonstrated more than once, a lot can happen in that amount of time. A close election, such as 2016's, is like a horse race with the two leaders running neck-and-neck, and a slight stumble at the last second can determine which horse wins.

There are three reasons for the false impression that the polls got it wrong in 2016:

  1. There were months of polls leading up to November, and Clinton led Trump in most of them, sometimes by a large amount, which may have given the impression of an inevitable Clinton landslide. However, Clinton's lead narrowed as election day approached and, when the ballots were all counted, she had won the popular vote by only 2.1 percentage points[5], which is within the MoE of most national public opinion polls, namely, three percentage points.
  2. According to a Gallup poll, only half of voters in 1948 understood what the EC was[6]. I don't know of any more recent polling on this topic, and the situation may well have changed since. However, given the deteriorating state of civic education in the U.S., I suspect that it has not improved but has probably worsened. As a result, many Americans may have expected a Clinton win based on the polls alone, neglecting to consider the role played by the EC.
  3. What really did fail miserably in 2016 were not the polls themselves, but the election forecasting computer models based on polling[7]. Just as in the case of the epidemiological models that were used early this year to forecast the progress of the coronavirus epidemic[8], the models failed spectacularly. For example, Nate Silver's model was the closest to getting the election result right, but it still gave Clinton a 71% chance of winning[9]!

    So, both types of model need to prove themselves before we rely on them for predicting what's going to happen. No doubt the modelers have made changes in an attempt to fix whatever went wrong in 2016, which is certainly a good thing. However, we need to take them out for a spin before we buy them, and this election will be our first chance to do so.

Now, the bad news. Even though the polls were not wildly wrong in 2016, that doesn't mean that they were of any use in predicting who would win. The best bet for doing so is to average the results of all the polls taken within ten days to two weeks of the election. The best known of such averages is that of Real Clear Politics, which had Clinton winning the popular vote by 3.2 percentage points, only 1.1 percentage points too high[10]. This was an excellent result, but it told us nothing about how the EC was going to break. Even if the average had been on the nose, it wouldn't have told us that Trump would win.

If this turns out to be a close election like the last time―and it certainly appears to be headed that way―we're likely to see a replay. I don't mean to predict that Trump will be re-elected; rather, I'm claiming that at this point we do not and cannot know who will win. We'll just have to wait until all the votes―the Electoral College ones!―are counted to find out.

  1. This is a "Berra-ism", that is, something supposedly said by Yogi Berra; see: "Widely quoted philosophy of Yogi Berra: 'It ain't over 'till [sic] it's over'", Chicago Tribune, 9/23/2015.
  2. For instance: Sam Hill, "The Pollsters Got It Wrong When Trump Took on Hillary in 2016. Can You Trust Them This Time?", Newsweek, 4/13/2020. Despite the headline, the article includes the following:
    Some elections are simply too close to call. It's possible the problem wasn't with how pollsters poll, but rather with what we expected from them. Roughly 129 million votes were cast in the 2016 election, but the election was decided by 78,000 voters in three states―Michigan, Wisconsin and Pennsylvania. That's 0.6 percent of the votes cast in those states. Even the best polls typically have an average absolute margin of error of one percentage point. In other words, we asked pollsters to predict heads or tails and got angry when they couldn't.

    What the Sam Hill!

  3. Dewey Defeats Truman, Again, 11/13/2016. Others have recently been making the same point, for instance: Grace Panetta, "The presidential polls in 2016 weren't as wrong as you think. Here's why you can trust them in 2020.", Business Insider, 7/20/2020.
  4. Ethan Siegel, "The Science Of Error: How Polling Botched The 2016 Election", Forbes, 11/9/2016.
  5. "2016 National Popular Vote Tracker: Overall Vote%s", accessed: 9/23/2020.
  6. Lydia Saad, "Gallup Vault: Rejecting the Electoral College", Gallup, 7/14/2016.
  7. David Smith, "How did the election forecasts get it so wrong?", Revolutions, 11/9/2016.
  8. Phillip W. Magness, "How Wrong Were the Models and Why?", American Institute for Economic Research, 4/23/2020.
  9. Alexandra Hutzler, "How the Pollsters Changed Their Game After Getting the 2016 Election Wrong", Newsweek, 9/18/2020.
  10. "General Election: Trump vs. Clinton", Real Clear Politics, accessed: 9/23/2020.

September 8th, 2020 (Permalink)

Why You Need to be Able to Check Facts

This is the first entry in a new occasional series on how to check facts. In this introduction, I explain why such a series may be useful.

You might wonder why such a series is needed, given the existence of fact-checkers in the media: Don't they make sure that the facts are right before publishing? In addition, there are independent fact-checking groups: Won't they check anything that gets past the news media fact-checkers?

There are two types of institutional fact-checking: pre-publication and post-publication. Pre-publication checking is done by checkers hired by an author or publication before a work is published or broadcast. Post-publication fact-checking is primarily the job of the groups mentioned in the previous paragraph.

Assuming that I've now convinced you that learning to check facts for yourself is a valuable intellectual self-defense skill, you might think of getting ahold of one or both of the fact-checking books I've already cited in the notes, below. I don't mean to discourage you from doing so, but both were written by professional fact-checkers for fellow professionals, rather than for us amateurs. While there are some useful pointers in them, they are disappointingly unhelpful for the amateur, which is why some guidance specifically aimed at non-professionals may be useful.

I am not, nor have I ever been, a professional fact-checker, but this series is not intended for the pros, though perhaps even they could benefit from it. Rather, it is by an amateur, for amateurs. Amateur fact-checking differs from the professional kind in four main ways:

  1. Trivia: The professionals are often concerned with minutiae, such as exactly how people's names are spelled, and a great deal of their time and effort is devoted to phoning people to check such things. This is because one sure way to make people angry is to misspell their names, and the only sure way to check the spelling is to ask them. As an amateur, you have no reason to be concerned with such trivia, and checking facts will seldom involve using the phone. The facts we amateurs need to check are the important ones.
  2. Libel: An important concern of fact-checkers who work for authors or periodicals is the possibility of libel. As a result, the pros need to check claims that may be libelous, and one fact-checking guide has a whole chapter on libel law[8]. As an amateur, you're neither the author nor the publisher of the claims you are checking, so you needn't worry about whether they are libelous. What you want to know is whether they are true.
  3. Plagiarism: Another worry for the professional but not the amateur is the possibility of plagiarism. There have been several prominent scandals involving plagiarism in the past few decades, and publishers are therefore worried about it. As a consequence, works for the professional will discuss what plagiarism is and how to detect it[9]. However, the amateur's concern is not who made a claim first, but whether the claim is true or false.
  4. Tact: An important aspect of professional fact-checking is getting along with the authors and editors one has to deal with, and this may involve a lot of negotiation[10]. Thankfully, we amateurs don't need to worry so much about stepping on people's toes, except perhaps when trying to get an author or publication to correct or retract an article.

For the above reasons, publications aimed at professional fact-checkers are usually of limited value for the amateur, hence this proposed series of entries. Questions that future entries will attempt to answer include the following: Just what is a fact? What's the difference between a fact and an opinion? What's the difference between a fact and a value? Which facts are checkable and which are not? How can you tell when a supposed "fact" needs checking? Why should you develop your own plausibility detector, and how do you do it?

Fact-checking is too important a task to leave to the professionals.


  1. Sarah Harrison Smith, The Fact Checker's Bible: A Guide to Getting it Right (2004), pp. 29-32; hereinafter "Smith".
  2. Brooke Borel, The Chicago Guide to Fact-Checking (2016), p. 6; hereinafter "Borel".
  3. Emma Copley Eisenberg, "Fact Checking Is the Core of Nonfiction Writing. Why Do So Many Publishers Refuse to Do It?", Esquire, 8/26/2020.
  4. Stephanie Fairyington, "In the era of fake news, where have all the fact-checkers gone?", Columbia Journalism Review, 2/23/2018.
  5. Sarah Harrison Smith, author of The Fact Checker's Bible, was a fact-checker at The New Yorker; see Smith, p. i.
  6. Who Will Fact-Check the Fact-Checkers?, 7/30/2020
  7. "Snopes is the internet's definitive fact-checking resource.", Snopes, accessed: 9/8/2020.
  8. Smith, chapter 6.
  9. Smith, pp. 87-95 & Borel, pp. 88-89.
  10. Smith has an entire chapter on this: chapter 3; but Borel has only a short section: pp. 57-58.

September 1st, 2020 (Permalink)

Prussian Roulette, Game 2*

If you survive a game of Prussian roulette consisting of two rounds, those wily Prussians will suggest that you play a second game, also consisting of two rounds. However, this time, the two bullets will be inserted into non-adjacent chambers in the revolver.

Your chance of surviving the first round of this game is unchanged from the previous one. Just as in the first game, if you survive the first round you'll be offered the choice of spinning or not spinning the cylinder for the second round. Should you accept a spin in this game, should you decline a spin, or doesn't it matter?

Extra Credit: What are your chances of surviving a game of this version of Prussian roulette if you choose to spin the cylinder on the second round, and if you choose not to spin it?
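If you want to check your answers, the game can be enumerated exactly. The sketch below assumes the setup of the first game: a six-chamber revolver, the cylinder spun before the first round, and each trigger pull advancing the cylinder one chamber when it isn't spun. It tries both non-adjacent arrangements of the bullets (separated by one empty chamber, or directly opposite). Running it reveals the answers, so wait until you've worked them out for yourself.

```python
from fractions import Fraction

def survival(bullets, spin_second):
    """Exact probability of surviving two rounds of the six-chamber game,
    with bullets in the given chambers. The cylinder is spun before round
    one; before round two it is either spun again or advanced one chamber."""
    safe_first = [c for c in range(6) if c not in bullets]
    p_first = Fraction(len(safe_first), 6)
    if spin_second:
        # A fresh spin makes round two independent of round one.
        p_second = Fraction(len(safe_first), 6)
    else:
        # Without a spin, the next chamber up is fired.
        survivors = [c for c in safe_first if (c + 1) % 6 not in bullets]
        p_second = Fraction(len(survivors), len(safe_first))
    return p_first * p_second

# Bullets one chamber apart (0, 2) and directly opposite (0, 3):
for bullets in [(0, 2), (0, 3)]:
    print(bullets, "spin:", survival(bullets, True),
          "no spin:", survival(bullets, False))
```

By symmetry the direction in which the cylinder rotates doesn't matter, and, as the script shows, both non-adjacent arrangements give the same answer.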

* For the first game, see: Prussian Roulette, 8/3/2020
