WEBLOG
September 29th, 2020 (Permalink)
What would we do without experts?
"It is or it isn't": Constitutional law expert weighs legality of Knox County bar and restaurant curfew*
*Madisen Keavy, "'It is or it isn't': Constitutional law expert weighs legality of Knox County bar and restaurant curfew", WATE, 9/18/2020
September 27th, 2020 (Permalink)
Survey Says & How to Read Scientific Research
- Sonal Desai, "On My Mind: They Blinded Us From Science", Franklin Templeton, 7/29/2020.
Six months into this pandemic, Americans still dramatically misunderstand the risk of dying from COVID-19:
- On average, Americans believe that people aged 55 and older account for just over half of total COVID-19 deaths; the actual figure is 92%.
- Americans believe that people aged 44 and younger account for about 30% of total deaths; the actual figure is 2.7%.
- Americans overestimate the risk of death from COVID-19 for people aged 24 and younger by a factor of 50; and they think the risk for people aged 65 and older is half of what it actually is (40% vs 80%).
These results are nothing short of stunning. Mortality data have shown from the very beginning that the COVID-19 virus age-discriminates, with deaths overwhelmingly concentrated in people who are older and suffer comorbidities. This is perhaps the only uncontroversial piece of evidence we have about this virus. Nearly all US fatalities have been among people older than 55; and yet a large number of Americans are still convinced that the risk to those younger than 55 is almost the same as to those who are older. …
This misperception translates directly into a degree of fear for one's health that for most people vastly exceeds the actual risk: we find that the share of people who are very worried or somewhat worried of suffering serious health consequences should they contract COVID-19 is almost identical across all age brackets between 25 and 64 years old, and it's not far below the share for people 65 and older.
An important caution is that this survey was not a scientific one because it was based on an internet panel, albeit a very large one of over 10,000 American adults. Since it was an internet panel, it's possible that the sample was biased towards those who get information from the internet, and perhaps that medium is more likely to transmit misinformation than other news media, but this is just speculation.
However, the results of this survey are supported by those of a previous survey, discussed here last month, that showed similar misunderstanding*. Moreover, the errors revealed by the survey are so large that even sampling bias would probably not account for all of them. Finally, I would expect such results, given the poor performance of government agencies, health officials, and the news media.
- Sara Harrison, "How to Read Covid-19 Research (and Actually Understand It)", Wired, 7/8/2020.
Evaluating the quality of Covid-19 research is challenging, even for the scientists who study it. Studies are rapidly pouring out of labs and hospitals, but not all of that information is rigorously vetted before it makes its way into the world. Some studies are small and anecdotal. Others are based on bad data or misplaced assumptions. Many are released as preprints without peer review. Others are hyped up with big press releases that overstate the results―but when scientists are finally able to dive into the research, sometimes the study isn't as groundbreaking as it seemed.
…Keeping up with this flood of information about coronavirus therapies and reviewing all the studies coming out is a daunting task, especially for readers without a research or medical background who just want to know what's going on and how to stay healthy. "We can't expect everyone to be able to pick up any research paper and know that it's high quality," says Elizabeth Stuart, a professor at the Johns Hopkins Bloomberg School of Public Health.
Stuart is part of a team of colleagues at Johns Hopkins who run the Novel Coronavirus Research Compendium (NCRC). The team includes statisticians, epidemiologists, and experts on vaccines, clinical research, and disease modeling; together they rapidly review new studies and make reliable information accessible to the public. For those of us who don't have advanced degrees in these areas, distinguishing an inflated headline from a genuinely important discovery can feel impossible. But by looking at where a study was published, what data it uses, and how it fits into the larger body of scientific research, even the armchair experts among us can start to be more savvy science information consumers.
There is good advice in this article on reading any scientific research, not just research on COVID-19. It's elementary and won't take you very far, but it's a good starting place for the novice.
* Survey Says, 8/23/2020.
September 23rd, 2020 (Permalink)
"It ain't over till it's over."1
I have good news and I have bad news. The good news first.
It's widely believed that public opinion polls were wildly wrong in 20162. If so, why should we pay attention to them this year? In fact, as I pointed out at the time3, the polls were not that far off. As you may recall, Hillary Clinton won the popular vote but lost in the Electoral College (EC), which happened because a lot of those votes were in states, such as California, whose electoral votes she easily won. Trump's voters, while numerically fewer, were spread more evenly across the country, allowing him to win more electoral votes, though at times by thin margins. National public opinion polls by themselves tell us nothing about the distribution of votes, and most of them correctly told us that Clinton would win the popular vote, though they may have slightly over-estimated by how much.
State polls can provide information on the distribution of the popular vote, and may help forecast who will win the electoral votes of a state. However, most state polls are based on smaller samples than national ones, because smaller polls are cheaper. The margin of error (MoE) shrinks only with the square root of the sample size, so the smaller the sample the larger the MoE4. As a result, the MoEs of state polls are usually larger than those of national ones.
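To make the relationship concrete, here is a minimal sketch in Python of the usual 95% margin-of-error formula for a simple random sample, assuming the worst case of an even split; the sample sizes are illustrative and not taken from any particular poll:

import math

def margin_of_error(n, p=0.5, z=1.96):
    # Approximate 95% margin of error for a proportion from n respondents.
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 600, 1000, 1500):
    print(f"n = {n}: MoE = {margin_of_error(n) * 100:.1f} percentage points")

# Prints roughly 4.9, 4.0, 3.1, and 2.5 points.

Notice that halving the margin of error requires roughly quadrupling the sample size, which is why small, cheap state polls carry noticeably larger MoEs than the big national ones.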
In 2016, Trump won the popular vote in some states by such razor-thin margins that no poll could have predicted it. For instance, Trump won Michigan and all of its EC votes by only slightly more than 10,000 votes, which amounts to two-tenths of a percent of the votes cast5.
At best, polls can reveal how citizens would have voted had the election been held when the poll was taken, and national polls typically have a lead time between polling and publication of a few days to a week. Election day is still a month-and-a-third away and, as this year has demonstrated more than once, a lot can happen in that amount of time. A close election, such as 2016, is like a horse race with the two leaders running neck-and-neck, and a slight stumble at the last second can determine which horse wins.
There are three reasons for the false impression that the polls got it wrong in 2016:
- There were months of polls leading up to November, and Clinton led Trump in most of them, sometimes by a large amount, which may have given the impression of an inevitable Clinton landslide. However, Clinton's lead narrowed as election day approached and, when the ballots were all counted, she had won the popular vote by only 2.1 percentage points5, which is within the MoE of most national public opinion polls, namely, three percentage points.
- According to a Gallup poll, only half of voters in 1948 understood what the EC was6. I don't know of any more recent polling on this topic, and the situation has no doubt changed since; however, given the deteriorating state of civic education in the U.S., I suspect that it has worsened rather than improved. As a result, many Americans may have expected a Clinton win based on the polls alone, neglecting to consider the role played by the EC.
- What really did fail miserably in 2016 were not the polls themselves, but the election forecasting computer models based on polling7. Just as in the case of the epidemiological models that were used early this year to forecast the progress of the coronavirus epidemic8, the models failed spectacularly. For example, Nate Silver's model was the closest to getting the election result right, but it still gave Clinton a 71% chance of winning9!
So, both types of model need to prove themselves before we rely on them for predicting what's going to happen. No doubt the modelers have made changes in an attempt to fix whatever went wrong in 2016, which is certainly a good thing. However, we need to take them out for a spin before we buy them, and this election will be our first chance to do so.
Now, the bad news. Even though the polls were not wildly wrong in 2016, that doesn't mean that they were of any use in predicting who would win. The best bet for doing that is to average the results of all the polls taken within ten days to two weeks of the election. The best known of such averages is that of Real Clear Politics, and it had Clinton winning the popular vote by 3.2 percentage points, which is only 1.1 percentage points too high10. This was an excellent result, but it told us nothing about how the EC was going to break. Even if the average had been on the nose, it wouldn't have told us that Trump would win.
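As a small illustration of the averaging idea, here is a sketch in Python; the individual poll margins listed are hypothetical, made up for illustration only, and the 2.1-point actual margin comes from the popular-vote tracker cited in the notes:

# Hypothetical final-poll margins (Clinton lead, in percentage points);
# these numbers are invented for illustration, not real 2016 polls.
hypothetical_final_polls = [4.0, 3.0, 2.0, 5.0, 2.0, 3.0]
actual_margin = 2.1  # Clinton's actual popular-vote margin (see note 5)

average = sum(hypothetical_final_polls) / len(hypothetical_final_polls)
print(f"Polling average: Clinton +{average:.1f}")
print(f"Actual margin:   Clinton +{actual_margin:.1f}")
print(f"Error:           {average - actual_margin:.1f} points")

# A national average can land close to the national popular vote and
# still say nothing about how a few close states, and hence the
# Electoral College, will break.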
If this turns out to be a close election like the last time―and it certainly appears to be headed that way―we're likely to see a replay. I don't mean to predict that Trump will be re-elected; rather, I'm claiming that at this point we do not and cannot know who will win. We'll just have to wait until all the votes―the Electoral College ones!―are counted to find out.
- This is a "Berra-ism", that is, something supposedly said by Yogi Berra; see: "Widely quoted philosophy of Yogi Berra: 'It ain't over 'till [sic] it's over'", Chicago Tribune, 9/23/2015.
- For instance: Sam Hill, "The Pollsters Got It Wrong When Trump Took on Hillary in 2016. Can You Trust Them This Time?", Newsweek, 4/13/2020. Despite the headline, the article includes the following:
Some elections are simply too close to call. It's possible the problem wasn't with how pollsters poll, but rather with what we expected from them. Roughly 129 million votes were cast in the 2016 election, but the election was decided by 78,000 voters in three states―Michigan, Wisconsin and Pennsylvania. That's 0.6 percent of the votes cast in those states. Even the best polls typically have an average absolute margin of error of one percentage point. In other words, we asked pollsters to predict heads or tails and got angry when they couldn't.
What the Sam Hill!
- Dewey Defeats Truman, Again, 11/13/2016. Others have recently been making the same point, for instance: Grace Panetta, "The presidential polls in 2016 weren't as wrong as you think. Here's why you can trust them in 2020.", Business Insider, 7/20/2020.
- Ethan Siegel, "The Science Of Error: How Polling Botched The 2016 Election", Forbes, 11/9/2016.
- "2016 National Popular Vote Tracker: Overall Vote%s", accessed: 9/23/2020.
- Lydia Saad, "Gallup Vault: Rejecting the Electoral College", Gallup, 7/14/2016.
- David Smith, "How did the election forecasts get it so wrong?", Revolutions, 11/9/2016.
- Phillip W. Magness, "How Wrong Were the Models and Why?", American Institute for Economic Research, 4/23/2020.
- Alexandra Hutzler, "How the Pollsters Changed Their Game After Getting the 2016 Election Wrong", Newsweek, 9/18/2020.
- "General Election: Trump vs. Clinton", Real Clear Politics, accessed: 9/23/2020.
September 8th, 2020 (Permalink)
Why You Need to be Able to Check Facts
This is the first entry in a new occasional series on how to check facts. In this introduction, I explain why such a series may be useful.
You might wonder why such a series is needed, given the existence of fact-checkers in the media: Don't they make sure that the facts are right before publishing? In addition, there are independent fact-checking groups: Won't they check anything that gets past the news media fact-checkers?
There are two types of institutional fact-checking: pre-publication and post-publication. Pre-publication checking is done by checkers hired by an author or publication before a work is published or broadcast. Post-publication fact-checking is primarily the job of the groups mentioned in the previous paragraph.
- Pre-Publication Fact-Checking: This is not as widespread an activity as you may think. For instance, newspapers usually do not employ designated fact checkers, though reporters and editors sometimes do the job1. Similarly―and more surprisingly, at least to me―most book publishers do not employ checkers2. In fact, most book publishing contracts place the legal burden on the author to ensure that the book is factually correct. So, if a book is fact-checked, it's usually because the author employs a checker. As a result, only the authors of potential best-sellers or those who receive large advances are likely to hire one3.
Pre-pub checking, in the form of one or more individuals labelled "fact checkers" or "researchers", seems to be mostly limited to magazines. One reason for the difference between newspapers and magazines is that there is pressure on newspapers for scoops, whereas magazines are not usually expected to break news. This leaves enough time between the writing of a magazine article and its publication for it to be checked.
Professional fact-checkers do an often thankless job for low pay, at least in comparison to what the writers whose work they check are paid. However, their numbers, and perhaps the quality of their work, seem to be declining4 as the periodicals that employ them lose readers and advertisers. Even the New Yorker, once celebrated for its rigorous fact-checking5, recently had an egregious factual failure6.
Given the declining state of pre-publication fact-checking, and its absence from most newspapers and traditional book publishers, more of the burden of checking facts is falling upon the post-publication checkers, including you the reader.
- Post-Publication Fact-Checking: Several groups have formed in the last decade or three that engage in fact-checking claims in the news media. Snopes appears to be the oldest of these groups, having begun as a website in 19947, though it was originally limited to debunking urban legends. As pre-publication fact-checking has diminished, the post-publication kind has grown. However, such fact-checking groups usually concentrate on political claims, especially those made by politicians. Such claims are certainly important, but they're not the only ones in need of checking. Moreover, there are so many claims made that there's no way that any one, or even all of these organizations together, can manage to check them all. Ultimately, the reader is the last line of post-publication defense against falsehood.
As an amateur fact-checker, your first step in checking a claim in the media should be to see whether any of the post-pub groups has already checked it. However, what if the claim you're interested in hasn't been checked? What do you do then? This series will try to answer that question.
Assuming that I've now convinced you that learning to check facts for yourself is a valuable intellectual self-defense skill, you might think of getting ahold of one or both of the fact-checking books I've already cited in the notes, below. I don't mean to discourage you from doing so, but both were written by professional fact-checkers for fellow professionals, rather than for us amateurs. While there are some useful pointers in them, they are disappointingly unhelpful for the amateur, which is why some guidance specifically aimed at non-professionals may be useful.
I am not, nor have I ever been, a professional fact-checker, but this series is not intended for the pros, though perhaps even they could benefit from it. Rather, it is by an amateur, for amateurs. Amateur fact-checking differs from the professional kind in four main ways:
- Trivia: The professionals are often concerned with minutiae, such as exactly how people's names are spelled, and a great deal of their time and effort is devoted to phoning people to check such things. This is because one sure way to make people angry is to misspell their names, and the only sure way to check the spelling is to ask them. As an amateur, you have no reason to be concerned with such trivia, and checking facts will seldom involve using the phone. The facts we amateurs need to check are the important ones.
- Libel: An important concern of fact-checkers who work for authors or periodicals is the possibility of libel. As a result, the pros need to check claims that may be libelous, and one fact-checking guide has a whole chapter on libel law8. As an amateur, you're neither the author nor the publisher of the claims you are checking, so you needn't worry about whether they are libelous. What you want to know is whether they are true.
- Plagiarism: Another worry for the professional but not the amateur is the possibility of plagiarism. There have been several prominent scandals in the past few decades involving plagiarism, and publishers are therefore worried about it. As a consequence, works for professionals discuss plagiarism: what it is and how to detect it9. However, the amateur's concern is not who made a claim first, but whether the claim is true or false.
- Tact: An important aspect of professional fact-checking is getting along with the authors and editors one has to deal with, and this may involve a lot of negotiation10. Thankfully, we amateurs don't need to worry so much about stepping on people's toes, except perhaps when we want to get an author or publication to correct or retract an article.
For the above reasons, publications aimed at professional fact-checkers are usually of limited value for the amateur, hence this proposed series of entries. Questions that future entries will attempt to answer include the following: Just what is a fact? What's the difference between a fact and an opinion? What's the difference between a fact and a value? Which facts are checkable and which are not? How can you tell when a supposed "fact" needs checking? Why should you develop your own plausibility detector, and how do you do it?
Fact-checking is too important a task to leave to the professionals.
Notes:
- Sarah Harrison Smith, The Fact Checker's Bible: A Guide to Getting it Right (2004), pp. 29-32; hereinafter "Smith".
- Brooke Borel, The Chicago Guide to Fact-Checking (2016), p. 6; hereinafter "Borel".
- Emma Copley Eisenberg, "Fact Checking Is the Core of Nonfiction Writing. Why Do So Many Publishers Refuse to Do It?", Esquire, 8/26/2020.
- Stephanie Fairyington, "In the era of fake news, where have all the fact-checkers gone?", Columbia Journalism Review, 2/23/2018.
- Sarah Harrison Smith, author of The Fact Checker's Bible, was a fact-checker at The New Yorker; see Smith, p. i.
- Who Will Fact-Check the Fact-Checkers?, 7/30/2020
- "Snopes is the internet's definitive fact-checking resource.", Snopes, accessed: 9/8/2020.
- Smith, chapter 6.
- Smith, pp. 87-95 & Borel, pp. 88-89.
- Smith has an entire chapter on this: chapter 3; but Borel has only a short section: pp. 57-58.
September 1st, 2020 (Permalink)
Prussian Roulette, Game 2*
If you survive a game of Prussian roulette consisting of two rounds, those wily Prussians will suggest that you play a second game, also consisting of two rounds. However, this time, the two bullets will be inserted into non-adjacent chambers in the revolver.
Your chance of surviving the first round of this game is unchanged from the previous one. Just as in the first game, if you survive the first round you'll be offered the choice of spinning or not spinning the cylinder for the second round. Should you accept a spin in this game, should you decline a spin, or doesn't it matter?
Extra Credit: What are your chances of surviving a game of this version of Prussian roulette if you choose to spin the cylinder on the second round, and if you choose not to spin it?
Solution: You should accept a second spin. Since the two bullets in the cylinder are no longer in adjacent chambers, there's at least one empty chamber between them. As a result, of the four empty chambers in the cylinder, two are followed immediately by a chamber with a bullet in it, instead of only one as in the first game. So, the chance that the chamber following the empty one selected in the first round contains a bullet is two out of four, or one-half. Thus, your odds of surviving the second round are only one in two if you don't spin, but two in three if you do, so you should choose to spin.
Extra Credit Solution: As we saw in the previous puzzle, your chance of surviving both rounds of the game if you spin both times is two-thirds squared, which equals four out of nine, or about 44%.
As we have seen in the Solution, above, if you refrain from spinning the cylinder, your chance of surviving the second round decreases to two out of four, or one-half. So, your chance of surviving both rounds is one-half of two-thirds, that is, one-third. This is an even worse chance of survival than if you spin.
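For readers who want to double-check these figures, here is a quick Monte Carlo sketch in Python, assuming a six-chamber cylinder with the bullets placed in chambers 0 and 2 (any non-adjacent pair gives the same result):

import random

CHAMBERS = 6
BULLETS = {0, 2}  # two bullets in non-adjacent chambers

def survival_rate(spin_again, trials=1_000_000):
    # Estimate the chance of surviving both rounds of game 2.
    survived = 0
    for _ in range(trials):
        pos = random.randrange(CHAMBERS)      # first spin
        if pos in BULLETS:
            continue                          # lost in round one
        if spin_again:
            pos = random.randrange(CHAMBERS)  # re-spin before round two
        else:
            pos = (pos + 1) % CHAMBERS        # advance to the next chamber
        if pos not in BULLETS:
            survived += 1
    return survived / trials

print("Spin again: ", survival_rate(True))   # approaches 4/9, about 0.44
print("Don't spin: ", survival_rate(False))  # approaches 1/3, about 0.33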
* For the first game, see: Prussian Roulette, 8/3/2020