December 2nd, 2021 (Permalink)
Getting Polling Wrong
Quote: "Over the past eighty years or so, polls and poll-based forecasts have misfired in many ways in the U.S. presidential elections, leaving pollsters, journalists, and pundits baffled or humiliated, and often without immediate explanation as to what went wrong. Polls in presidential elections do not always go wrong. Or dramatically wrong. But they have been wrong often enough to invite skepticism and wariness. Indeed, it is a rare election that does not produce polling controversies of some sort."1
Title: Lost in a Gallup
Comment: Clever title!
Subtitle: Polling Failure in U.S. Presidential Elections
Comment: Nice subtitle: it tells you exactly what the book is about.
Author: W. Joseph Campbell
Comment: Campbell is the author of the previous book, Getting it Wrong: Ten of the Greatest Misreported Stories in American Journalism, which I've read and recommend if you're interested in that sort of thing. He also writes the website Media Myth Alert, which mostly tracks recrudescences of the myths debunked in the book. Campbell is a professor of communication studies, whatever that is―I guess it's either a fancy name for what used to be called "rhetoric" or perhaps journalism, or some combination of the two. Based on his previous book, he's a careful and reliable historian, which is important for a debunker of pseudohistory. Since he's not a pollster or statistician, the book doesn't focus on the technical aspects of polling, but is a history of polling failures in American presidential elections instead.
Summary: This book was written and published before last year's polling failure, so it's perhaps even more timely now. Judging from the Introduction and chapter titles, its chapters discuss the following elections:
- 1936: The year of the famous Literary Digest "debacle", to use Campbell's word for it.
- 1948: When Dewey did not defeat Truman, despite the polls.
- 1952: Republican candidate Eisenhower wins in a landslide unforeseen by the polls.
- 1980: Republican candidate Reagan wins in another landslide unforeseen by the polls.
- 2000: This election was really too close to call, so it's not surprising that the polls got it wrong.
- 2004: Exit polls falsely showed Kerry winning.
- 2016: This time it was the polling models that falsely showed Clinton winning.
Excerpt: "In a way, polling failure in presidential elections is not especially surprising. Indeed, it is almost extraordinary that election polls do not flop more often than they do, given the many and intangible ways that error can creep into surveys. And these variables may be difficult or impossible to measure or quantify. … Opinion polls can never flawlessly reflect the views of the entire population. It's a statistical fact of life that some amount of error resides in every poll taken of some portion of a target group. This is true even when rigorous and reliable polling techniques are applied…. The inevitable distortion in sample surveys is called the margin of error (or, more precisely, the margin of sampling error).
"…[P]ollsters have developed various ways of estimating who among the respondents to pre-election polls are most likely to vote. Weeding out nonvoters is essential because many poll respondents fail to follow through on assurances that they will vote. … Even now, after many years of testing, pollsters do not agree on the best method for deciding who will vote. 'There are as many "likely voter" models as there are pollsters,' Kyley McGeeney, a polling expert in Washington, DC, noted…. The distorting effects of screening for likely voters were strikingly demonstrated in 2016, when Nate Cohn of the New York Times arranged for four well-regarded pollsters to calculate the results of a pre-election poll in Florida in which voters pronounced themselves in favor of Trump or Clinton for president. The pollsters were given the same raw data to analyze, and their results differed markedly, ranging from a four-point advantage for Clinton to a one-point lead for Trump."2
Comment: Likely-voter models are the suspects in the polling problems in both last year's and this year's elections, as just about every other source of bias seems to have been ruled out.
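The excerpt's "margin of sampling error" has a standard textbook formula, which may help readers see why even a rigorously conducted poll has built-in imprecision. The sketch below is my own illustration, not from the book, and it assumes a simple random sample; real polls use weighting and likely-voter screens, so their true error is usually larger than the formula suggests.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of sampling error for a proportion p estimated from a
    simple random sample of size n, at the confidence level implied
    by z (1.96 corresponds to roughly 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of 1,000 respondents, candidate polling at 50%:
moe = margin_of_error(0.5, 1000)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 3.1 points
```

Note that quadrupling the sample size only halves the margin, and that nonresponse and likely-voter modeling add error this formula cannot capture, which is the excerpt's point.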
Fact Check and Copy-Editing: I didn't notice any factual errors in the parts of the book that I was able to read, namely, the introduction, the first chapter, and part of the second. The copy-editing is also first-rate as I noticed only a few typographical errors.
The Blurbs: The book is positively blurbed by Joel Best, whose books on the sociology of statistical errors I highly recommend.
Disclaimer: This book is from last year, so it's not brand new, but I just heard of it. I haven't read all of it yet, though I intend to, so I can't review or recommend it. However, its topic interests me, and may also interest Fallacy Files readers.
November 30th, 2021 (Permalink)
Acknowledging Reality & Living by Lies
- Bari Weiss, "The Media's Verdict on Kyle Rittenhouse", Common Sense with Bari Weiss, 11/17/2021. WARNING: Contains the f-word.
Here is what I thought was true about Kyle Rittenhouse during the last days of August 2020 based on mainstream media accounts: The 17-year-old was a racist vigilante. I thought he drove across state lines, to Kenosha, Wisc., with an illegally acquired semi-automatic rifle to a town to which he had no connection. I thought he went there because he knew there were Black Lives Matter protests and he wanted to start a fight. And I thought that by the end of the evening of August 25, 2020, he had done just that, killing two peaceful protestors and injuring a third. It turns out that account was mostly wrong.
Also, some people, if not Weiss, seem to have come away from these accounts with the impression that the men who were shot were black.
Unless you’re a regular reader of independent reporting…you would have been served a pack of lies about what happened during those terrible days in Kenosha. And you would have been shocked over the past two weeks as the trial unfolded in Wisconsin as every core claim was undermined by the evidence of what actually happened that night.
This wasn’t a disinformation campaign waged by Reddit trolls or anonymous Twitter accounts. It was one pushed by the mainstream media and sitting members of Congress for the sake of an expedient political narrative—a narrative that asked people to believe, among other unrealities, that blocks of burning buildings somehow constituted peaceful protests. … And it didn’t help our understanding of what transpired on August 25 that we were told repeatedly by national media outlets that there weren’t riots, and there wasn’t violence in Kenosha that night until Kyle Rittenhouse discharged his weapon. We could all see the blocks of burning buildings with our own eyes.
As a result of the "unrest", over a hundred businesses were damaged and forty were destroyed at a cost estimated at $50 million1.
But just as in the cases of Covington Catholic’s Nick Sandmann or Jussie Smollet or the “Russia-collusion” narrative, almost none of the details holding up that politically convenient position…were true. … In the aftermath of the media frenzy around the Covington Catholic story at least there were some mea culpas from the mainstream press, some sense of shame, some desire to get the egg off their faces. But with rare exception it doesn’t look like we’ll be getting that here.
To acknowledge the facts of what happened that night is not political. It is simply to acknowledge reality. It is to say that facts are still facts and that lies are lies. It is to insist that mob justice is not justice. It is to say that media consensus is not the equivalent of due process.
Acknowledging reality is political. What else was 1984 all about?
All the news that's fit to print after the election:
- Nellie Bowles, "TGIF: Inflation Rises. Russiagate Falls Apart. And J.K. Rowling Is Erased.", Bari Weiss, 11/19/2021
A note on Kenosha in light of the Kyle Rittenhouse trial: Until quite recently, the mainstream liberal argument was that burning down businesses for racial justice was both good and healthy. Burnings allowed for the expression of righteous rage, and the businesses all had insurance to rebuild.
When I was at the New York Times, I went to Kenosha to see about this, and it turned out to be not true. The part of Kenosha that people burned in the riots was the poor, multi-racial commercial district, full of small, underinsured cell phone shops and car lots. It was very sad to see and to hear from people who had suffered. Beyond the financial loss, small storefronts are quite meaningful to their owners and communities, which continuously baffles the Zoom-class.
Something odd happened with that story after I filed it. It didn’t run. It sat and sat. … A few weeks after I filed, an editor told me: The Times wouldn’t be able to run my Kenosha insurance debacle piece until after the 2020 election, so sorry. There were a variety of reasons given—space, timing, tweaks here or there.
Eventually the election passed. Biden was in the White House. And my Kenosha story ran. Whatever the reason for holding the piece, covering the suffering after the riots was not a priority. The reality that brought Kyle Rittenhouse into the streets was one we reporters were meant to ignore. The old man who tried to put out a blaze at a Kenosha store had his jaw broken. The top editor of the Philadelphia Inquirer had to resign in June 2020 amid staff outcry for publishing a piece with the headline, “Buildings Matter, Too.”
Actually, the suppressed article2 ran six days after the election, so Biden was not yet in the White House, but the danger that acknowledging reality might hurt his chance of getting there had passed.
- John McWhorter, "Here’s a Fact: We’re Routinely Asked to Use Leftist Fictions", The New York Times, 11/19/2021
These days, an aroma of delusion lingers, with ideas presented to us from a supposedly brave new world that is, in reality, patently nonsensical. Yet we are expected to pretend otherwise. To point out the nakedness of the emperor is the height of impropriety, and I suspect that the sheer degree to which we are asked to engage in this dissimulation will go down as a hallmark of the era….
Do you believe that being “diverse” does not make an applicant to a selective college or university more likely to be admitted? … In some circles these days, you are supposed to say you do. … And if the price for questioning that notion is to be seen as sitting somewhere on a spectrum ranging from retrogressive to racist, it’s a price few are willing to pay. One is, rather, to pretend. … That selective schools regularly admit black students with adjusted standards is undeniable. … My point here isn’t to debate the pros and cons of affirmative action. There are legitimate arguments on both sides of that debate. My point is that the existence of various forms of affirmative action in admissions is a fact, and saying otherwise is fiction. …[I]t is often suggested that it is disingenuous, if not racist, to surmise that a black student was admitted to a school via racial preferences. But this leaves the question as to just what we are to assume the aim of these policies has been, when the educational establishment so vociferously defends them. … That this is not to be mentioned is a kind of politesse requiring that we prevaricate about a subject already difficult enough to discuss and adjudicate.
All of this typifies a strand running through our times, a thicker one than always, where we think of it as ordinary to not give voice to our questions about things that clearly merit them, terrified by the response that objectors often receive. History teaches us that this is never a good thing.
Ann Althouse, through whose weblog I found the above article, makes a good point in response:
- Ann Althouse, "'Asked'?! That's putting it mildly.", Althouse, 11/19/2021. Warning: Contains a barnyard epithet.
McWhorter is underplaying the problem. We don't just think it's ordinary to refrain from saying certain things (such as, to name the example he stresses, the existence of race-preferences in higher education admissions). We think it's abnormal to the point of toxicity not to refrain. We (as a culture) are deeply engaged in teaching young people that they must lie. The "white lie" is no longer merely permissible. It's required. I wonder if young people have retained any of the old-fashioned commitment to truth. It's obviously not the highest value anymore. … We're living in a culture where lying—or at least shutting up—is the higher value. Notice that McWhorter doesn't use the word "lie." He says "fiction." … McWhorter also uses the word "prevaricate"….
Another reality that we're not supposed to acknowledge is that one reason why McWhorter can get away with mentioning these facts in The New York Times, even in this masterpiece of understatement, is that he's "Black". It's doubtful that a "white" person could do so.
- Chris Conley, "Kenosha damage estimate: $50-million", WHBL, 9/10/2020
- Here's the suppressed article: Nellie Bowles, "Businesses Trying to Rebound After Unrest Face a Challenge: Not Enough Insurance", The New York Times, 11/9/2020
Disclaimer: I don't necessarily agree with everything in these articles, but I think they're worth reading as a whole. In abridging them, I have sometimes changed the paragraphing and rearranged the order of the excerpts in order to emphasize points. I have also de-capitalized the word "black" in the excerpts from McWhorter's article, since this is itself racist newspeak given that "white" is not also capitalized. Either both should be capitalized or neither; I choose neither, since that's the way it's been done until very recently. I wonder whether this was McWhorter's choice or that of the editors of The New York Times. If the latter, it's further evidence for his thesis.
November 25th, 2021 (Permalink)
Thanksgiving Dinner at the New Logicians' Club
For Thanksgiving, the New Logicians' Club held a dinner party for its members. During dinner, the club played its usual truth-tellers and liars game in which every member was randomly assigned the role of a liar or a truth-teller, and was required to answer every direct question accordingly throughout the evening*.
I was seated at a table with three other members. The name tag of the one to my immediate left read "Euler". I always like to know whether I can trust what the other members say, so I asked Euler what the status of the three members was, that is, whether the three were liars or truth-tellers. However, he mumbled something inaudible because his mouth was full of turkey.
I turned to the first member to his left, whose name was "Frege", and repeated the question.
"At least one of the three of us is a liar", he replied.
I asked the same question of the last member, whose name tag read "Gödel".
"At least one of the three of us is a truth-teller", he answered.
Was what Euler said true or false?
Extra Credit: What were the first names of the three logicians?
What Euler said was false.
Explanation: According to Frege, at least one of the three logicians was a liar. If Frege himself were a liar, then his claim would be true, which is impossible. Therefore, Frege was a truth-teller and his claim was, indeed, true. So, at least one of the two other logicians was a liar.
According to Gödel, at least one of the three logicians was a truth-teller. This was a true statement since we already know that Frege was a truth-teller. So, Gödel was also a truth-teller.
Therefore, Euler must have been the liar, since we know that there was at least one liar among the three, and the other two were truth-tellers. We can conclude that whatever Euler said was false.
Extra Credit Solution: The three logicians were named Leonhard Euler, Gottlob Frege, and Kurt Gödel.
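For readers who prefer brute force to deduction, the solution can also be verified by enumerating all eight possible assignments of roles. This short sketch is my own check, not part of the puzzle; a member is consistent only if the truth of his statement matches his role.

```python
from itertools import product

# Map (Euler, Frege, Goedel) to True (truth-teller) or False (liar).
# Truth-tellers must make true statements; liars must make false ones.
consistent = []
for euler, frege, goedel in product([True, False], repeat=3):
    roles = (euler, frege, goedel)
    frege_claim = not all(roles)   # "At least one of the three of us is a liar"
    goedel_claim = any(roles)      # "At least one of us is a truth-teller"
    if frege == frege_claim and goedel == goedel_claim:
        consistent.append(roles)

print(consistent)  # only (False, True, True): Euler lies, the others don't
```

Only one assignment survives, matching the deduction above: Euler is the lone liar.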
Disclaimer & Disclosure: This puzzle is a work of fiction. Names, characters, places, and incidents are the product of the author's fervid imagination or are used fictitiously. Any resemblance to actual logicians, living or dead, events, or locales is entirely coincidental, so please don't sue me.
* For previous meetings of the club, see:
- A Meeting of the New Logicians' Club, 5/30/2021
- A Second Meeting of the New Logicians' Club, 7/4/2021
- Halloween at the New Logicians' Club, 10/31/2021
November 23rd, 2021 (Permalink)
"Everyone is entitled to his own opinion, but not his own facts."1
As is the case for most words of philosophical importance, "fact" is both vague and ambiguous. I argued in a previous entry in this series on fact-checking2 that a fact is not a belief or a type of statement, but a situation in the world. Yet, in an earlier entry, I defined it as a "true factual statement"3. Both are possible meanings of "fact", as well as simply "true statement"4. For the rest of this entry, I'll use the word "fact" in its "state-of-affairs" sense.
"Opinion" is not so ambiguous as "fact", but it is vague. An opinion is, of course, a belief, but opinions are usually distinguished from knowledge, and the borderline between opinion and knowledge is notoriously blurry.
Given that both "fact" and "opinion" are vague and ambiguous, it's no wonder that the distinction between them is fraught. It's generally agreed that fact checkers are not expected to check statements of opinion; for instance, in a section on checking opinion pieces, The Chicago Guide to Fact-Checking states: "…[I]n general, fact-checking the writer's opinion in a piece isn't necessary as long as the opinion is based on facts.5" Nonetheless, the guide has nothing to say about the difference between an opinion and the facts that it is based on. Similarly, The Fact Checker's Bible, discussing how to check an author's work, warns: "Be careful not to check the author's opinion6", which assumes that the checker already knows how to tell the difference between opinions and what should be checked. "Deciding what to leave unchecked and deciding what to check require equal care6," it concludes, but offers little guidance as to how to do so.
Perhaps the authors of both books assumed that the difference is obvious, or that the checker will have learned it elsewhere. I don't blame them for avoiding the task of explaining it, since it's difficult, but the distinction is crucial to fact checking and, in this entry, I'll try to clarify it.
As I'm using the words in this entry, facts and opinions belong to different categories of thing: the former to the objective mind-independent world, and the latter to the subjective mental world. This is why Moynihan's statement, used as the title of this entry, is true. We all have our own opinions, in that they belong to our private mental worlds, but nobody owns the facts, which are part of the physical world we share. This is also why we are able, at least some of the time, to agree on the facts despite differences of opinion on religion, ethics, politics, and the like. These days it seems to be getting harder to reach such agreement due, in my opinion, partly to the loss of an understanding of the difference between fact and opinion, which is why this is an important issue for all of us, not just fact checkers. So, what fact-checkers need to distinguish is not facts and opinions, which are deeply different, but statements of fact and statements of opinion, which are superficially similar.
What makes a claim a statement of fact rather than one of opinion is that there is objective evidence that it is true or that it is false. For instance, if I say that peanuts are legumes, I make a factual claim; but if I say that cashews taste better than peanuts, I state an opinion. That peanuts are legumes is true because of certain objective facts about them7, but the only evidence available that cashews taste better than peanuts is the subjective evidence of taste. Cashews may taste better to me, but worse to you. If you do not like the taste of cashews but do like that of peanuts, then there is no evidence that would convince you that the former actually do taste better than the latter. So, the claim itself is an expression of opinion, and not of fact.
Both factual statements and statements of opinion can be classified into different subtypes, and the way that I've come to understand the differences between the two types of statement is by thinking about those different subtypes. So, let's look at some of them, starting with factual statements. I don't claim that the following classification is exhaustive, but these are at least some important types of statement of fact―keep in mind that such statements are not necessarily true:
- Mathematical and Logical Statements: Statements in logic and mathematics are factual because they are subject to proof or disproof8. For instance, evidence for the Pythagorean Theorem can be given in such a way that anyone who checks it can determine that it is true, which is what is meant by "proof". In some cases, such statements can even be proven by a computer. Mathematical conjectures that have not been proven or disproven, such as Goldbach's Conjecture9, are still factual statements, though ones whose truth hasn't yet been determined.
Example: 524 + 732 = 1,256
- Observational Statements: An "observational statement" is one that can be verified or disconfirmed by direct observation. If anyone doubts such a statement, one can answer: "Go see for yourself!" Of course, sight is not the only sense available for checking such statements; sometimes one can say: "Listen for yourself!" or "Smell it yourself!" Observational statements are, of course, not provable in the way that logico-mathematical ones are, so that there will always be some room for doubt about them―for instance, it's possible though highly unlikely that two people would have the same hallucination.
Example: "There's a zebra in the back yard."
- Scientifically Reproducible Statements: Not all scientific statements are factual statements in the relevant sense, but those that are observationally or experimentally reproducible are. There is a sense in which the theory of evolution is a fact, namely, that it is a true statement that life on Earth has evolved. However, it's not a factual statement in the sense relevant to fact checking. While it's unlikely that most fact checkers will want or be able to reproduce a scientific experiment, they will be able to check whether other scientists have done so. In contrast, high-level scientific theories are not factual in this sense, since there's no single type of observation or experiment that can confirm or refute them.
Example: "Water boils at 100°C at sea level."
- Verifiable Historical Statements: As is true of scientific statements, not all statements about history are verifiable―by which I mean that there is sufficient evidence on the issue to settle the matter for all reasonable investigators. Moreover, such statements range on a continuum from ones for which there is simply no evidence to those for which there is an abundance of evidence. Many historical claims, especially ones about ancient history, are somewhere in the middle. Sometimes there is only one source of evidence and that, perhaps, a dubious one. Unfortunately, you often don't know in advance whether a statement is verifiable until you try checking it and discover that there is a dearth of evidence. Because the past is not directly observable, even the best verified historical statements are open to some minor degree of doubt.
Example: "President John F. Kennedy was assassinated in Dallas, Texas."
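Returning to the mathematical category above: Goldbach's Conjecture is a nice case of a factual statement whose truth is checkable in any particular instance even though no general proof exists. The following sketch is my own illustration of such instance-checking, not anything from the entry, and the toy range is far smaller than the enormous ranges over which the conjecture has actually been verified.

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n, or None if none exists.
    Goldbach's Conjecture asserts that every even n > 2 has such a
    pair; it has been confirmed for vast ranges but never proven."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 to 1,000 checks out:
print(all(goldbach_pair(n) is not None for n in range(4, 1001, 2)))  # True
```

No amount of such checking settles the conjecture, of course, which is exactly why it remains a factual statement whose truth hasn't yet been determined.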
You can see that what these different types of statement have in common is that there is some way in which their truth or falsity can be objectively established to the satisfaction of all reasonable inquirers. In contrast to factual statements, there is no way to establish the truth or falsity of statements of opinion to the satisfaction of all, which is part of what makes them opinions. Here are a few prominent types for comparison:
- Expressions of Taste: We've already seen the example of peanuts and cashews.
Example: "This candy is too sweet."
- Value Judgments: Most aesthetic judgments on whether a particular work of art is good or bad, or whether one work is superior to another, are subjective judgments that cannot be proven by appeal to objective facts. If I find a particular painting beautiful but you think it's ugly, there is nothing about the painting that I can point at to show your opinion mistaken. Instead, we might each conclude that the other has bad taste in art. "Beauty is in the eye of the beholder", as we say. Other types of value judgment, such as ethical ones, are more controversial. Philosophers dispute about whether there is an unbridgeable divide between facts and values, but we needn't get into such deep waters here. Instead, it's enough to know that moral disagreements are often unresolvable by appeals to the facts.
Example: "The Mona Lisa is the most beautiful painting in the world."
- Predictions: "It's difficult to make predictions, especially about the future", as Yogi Berra didn't say10. Predictions are the opposite of historical judgments: the past is fixed but the future has yet to happen, so there are no objective facts to check predictions against. The only way to check a prediction is to wait until it is supposed to come true, then check it against the present facts.
Example: "People will return to the moon in a few years' time."
As is the case for most important philosophical distinctions, the difference between these two types of statement is not an absolute one, but one of degree. There is a continuum with logico-mathematical statements at one end and expressions of taste at the other, and all other statements fall somewhere in between.
The failure of professional fact checkers to understand this difference has led some to try checking statements of opinion with which they disagree. This is not just a waste of time, it's also a source of reputational damage. One of the charges made against them is that they are just pundits in disguise, and checking opinions is a sure way to confirm the charge. I predict that I will write a future entry in this series critiquing an example of a professional fact checker doing so, but that's just my opinion.11
- This statement has a long history, but this was the wording used by Daniel Patrick Moynihan to whom it is usually credited. For the full history, see: Garson O'Toole, "People Are Entitled To Their Own Opinions But Not To Their Own Facts", Quote Investigator, 3/17/2020.
- What is a Fact?, 4/29/2021.
- Fact Vs. Opinion, 6/22/2018.
- Monroe Beardsley used "fact" in this sense, see his: Thinking Straight: A Guide for Readers & Writers (1950), p. 5.
- Brooke Borel, The Chicago Guide to Fact-Checking (2016), p. 56.
- Sarah Harrison Smith, The Fact Checker's Bible: A Guide to Getting it Right (2004), p. 54.
- Editors, "Peanut", Encyclopedia Britannica, accessed: 11/22/2021.
- Gödel's Incompleteness Theorem proved that there are arithmetical statements that are undecidable, that is, they can be neither proven nor disproven. These are rare types of exception to the factuality of mathematical claims. See: Melvin Henriksen, "What is Godel's Theorem?", Scientific American, 1/25/1999.
- Eric W. Weisstein, "Goldbach Conjecture", Wolfram MathWorld, accessed: 11/22/2021.
- See: "It’s Difficult to Make Predictions, Especially About the Future", Quote Investigator, 10/20/2013.
- I found the following article very helpful when researching this entry: John Corvino, "The Fact/Opinion Distinction", The Philosopher's Magazine, 3/4/2015. Of course, Corvino's opinion at the end is not acceptable to fact checkers, who must maintain the distinction between checkable factual statements and uncheckable statements of opinion.
November 12th, 2021 (Permalink)
First, the Bad News
Quote: "Bad News is a populist critique of American journalism. But I write this book from the Left, from a deep-seated dismay with rising inequality and the way the global economy has decimated the American working class, depriving them of the dignity of good jobs in a culture that sneers at them and their values. … Still, this is an optimistic book. Although I am deeply critical of the direction American journalism has taken, I am also convinced that it's not too late to change course. That's why the book is so tightly focused on the media powerhouses that have seized upon and further capitalized on this trend; it is they that set the tone for the rest of the industry."1
Title: Bad News
Subtitle: How Woke Media Is Undermining Democracy
Comment: I don't like the word "woke". I assume that it started out as propaganda promoting the idea that the "woke" folks were somehow awake while the rest of us were sleeping, when the truth is the opposite. It's reminiscent of the slogan in 1984, "freedom is slavery"2, which is almost literally what the woke are trying to make the rest of us believe.
Like all euphemisms, however, "woke" seems to be losing its ability to conceal the reality it stands for. People are waking up to the fact that what is hidden behind the benign label is poison. For that reason, some of the wokeys are now declaring it the latest taboo word, at least for the rest of us3.
While in ordinary circumstances my distaste for the word and dislike of doublespeak would lead me to avoid using it, I'm going to use it at least for the rest of this entry. It seems to have lost its power to fool people, and it may irritate those trying to fool them.
Author: Batya Ungar-Sargon
Comment: I assume that this is the same Batya Ungar-Sargon whose article I recommended last month4. That article may have been an excerpt from the book, though it didn't say so. It described the ravages of the woke invasion of The New York Times (NYT), specifically, but the book appears to be more general in its treatment of the subject. In addition to writing the article and book, she's a journalist and deputy opinion editor for Newsweek. Other than the article and what I've read of the book, I'm not familiar with her work.
Summary: I've only been able to read the introduction, first chapter, part of the second, and the brief epilogue. Moreover, the titles of subsequent chapters are not very revealing about their subjects. For this reason, I can't really summarize the book. However, I get the impression that its theme is how reporting went from a working-class job in the 19th and early 20th century to the profession of an upper-class, college-educated elite in the late 20th and current century. Specifically, after a mostly theoretical Introduction, the book starts with an historical chapter on the rise of the penny press in the nineteenth century, and the career of newspaperman Joseph Pulitzer, for whom the most prestigious journalism prizes are named.
This history is relevant because wokeness is mostly a disease of upper-class whites, though Ungar-Sargon didn't seem to be so clear on this fact in the article I recommended last month. My one complaint about that article was that she seemed to believe that the moral panic she described at the NYT had created a "social consensus" in favor of wokeness. However, as I showed, there is no such consensus in the country as a whole. Perhaps she meant only that there is now a consensus at the NYT, which may be true, but if so it's due at least partly to the purging of dissenters and the intimidation into silence of those who remain.
Comment: My main concern about wokeness is what it is doing to the institutions that serve an essential function in a democracy. Democracy is impossible without a largely free press, and practically every day now brings a new assault on that freedom. Moreover, the press needs to be free from both censorship and propaganda, since democracy cannot work even in theory if the public is kept ignorant and misinformed. This is how wokeness is "undermining democracy", to quote the book's subtitle. The NYT and other major newspapers, while never perfect, at least made an attempt in the past at reporting the news accurately, whereas they are increasingly becoming a "Ministry of Truth" for woke propaganda. Anti-social media such as Twitter, Facebook, and YouTube are also increasingly censoring users at the instigation of woke mobs.
One thing that Orwell did not foresee in 1984 is private businesses willingly offering their services as censors and propagandists to the woke mob and government bureaucrats. As you may remember, Winston Smith worked for the Ministry of Truth, largely as a censor, rewriting old newspapers to bring them into alignment with current propaganda. The ministry was, of course, a government agency, not a giant business like the NYT or Twitter. Nonetheless, there's now a small army of Winston Smiths laboring away to produce propaganda or to censor alternative sources of information. Moreover, many of them are amateurs, providing their "services" for free because they're woke.
The good news is that, while the NYT is pretty far gone, America still has a largely free press. The NYT is, of course, worrisome because it is the most prestigious American newspaper, and sets the example for much of the rest of the news media. However, awareness of the threat to freedom of thought and speech by wokeness is itself spreading. As mentioned above, the very fact that the word "woke" is changing from a euphemism to a dysphemism is a sign that people are catching on. So, like Ungar-Sargon, I am optimistic that we can still save democracy from wokeness.
The Blurbs: The book is favorably blurbed by, among others, Greg Lukianoff of the Foundation for Individual Rights in Education, and Jonathan Haidt.
Disclaimer: This is a new book and I haven't read it yet, so I can't review or recommend it. However, its topic interests me, and may also interest Fallacy Files readers.
- "Introduction", pp. 16-17.
- Irving Howe, editor, Orwell's Nineteen Eighty-Four: Text, Sources, Criticism (1963).
- See, for instance: Sam Sanders, "Opinion: It's Time To Put 'Woke' To Sleep", National Public Radio, 12/30/2018. This was written almost three years ago, and Sanders was unhappy that affluent white liberals―the very people who listen to NPR―were starting to use the word.
- Remembering the Sokal Hoax & Another Sign of The Times, 10/29/2021.
November 1st, 2021 (Updated: 11/3/2021 & 11/6/2021) (Permalink)
The Trump Effect
According to the headline of a recent article, American polling is broken. The article itself doesn't use the word "broken", but it does raise a serious problem:
Pollsters are nearly a year into battling the four-alarm fire set by their general-election disaster in 2020―the biggest national-level polling miss in nearly half a century. One year ago, Democrats rolled into Election Day confident that they would see a relatively easy Joe Biden victory―remember the closing-stretch Quinnipiac poll showing him up five in Florida and the CNN-SSRS survey with a margin of six in North Carolina, or the Morning Consult poll with Biden up nine in Pennsylvania? And, of course, there was the USC projection of a 12-point national gap. Trump, of course, won Florida and North Carolina and came perilously close in the Keystone State….1
This problem currently looms large because tomorrow is election day for the next governor of Virginia, and polling averages show the race tied: both the Five Thirty Eight (538)2 and the Real Clear Politics (RCP)3 averages show the Republican candidate Glenn Youngkin ahead of Democrat Terry McAuliffe, 538 by one percentage point, and RCP a tenth of a point less than that. Because they're not themselves sample-based polls, such averages don't have a margin of sampling error, but that doesn't mean that they're perfectly precise. Both averages correctly predicted that Biden would win last year, but each had him winning the popular vote by a greater margin than he received, which was four-and-a-half percentage points4. The 538 average showed Biden winning by over eight percentage points5, and RCP by over seven6. So, a lead of a percentage point or slightly less is clearly within the error bars for either polling average, which means that the race is too close to call.
However, both averages were off in the same direction, namely, exaggerating the vote for Biden and short-changing Trump, which suggests a bias of three to four percentage points. As Gabriel Debenedetti, the author of the article, puts it: "Across the country, pollsters seemed to systematically undercount GOP support, despite the fact that they were trying very hard, after some issues in 2016, not to do that.1" Assuming that such a bias explains last year's "errors of unusual magnitude7", and that it applies to a state's gubernatorial election, then Youngkin is not one point ahead of McAuliffe but four to five points ahead. Is this outside the error bars?
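To make the bias arithmetic above concrete, here is a small Python sketch. The specific averages are rough approximations of the figures quoted in the text ("over eight" and "over seven" points), not exact values:

```python
# Rough check of the bias arithmetic discussed above.
# All figures are approximate, taken from the surrounding text.
avg_538 = 8.4   # 538's final 2020 average: Biden up "over eight" points
avg_rcp = 7.2   # RCP's final 2020 average: Biden up "over seven" points
actual = 4.5    # Biden's actual popular-vote margin, in points

# Bias = predicted margin minus actual margin.
bias_538 = avg_538 - actual
bias_rcp = avg_rcp - actual
print(f"538 bias: {bias_538:.1f}, RCP bias: {bias_rcp:.1f}")

# If the same average bias carried over to Virginia, Youngkin's
# one-point polling lead would understate his true lead:
youngkin_lead = 1.0
adjusted = youngkin_lead + (bias_538 + bias_rcp) / 2
print(f"Adjusted Youngkin lead: {adjusted:.1f} points")
```

This is only a back-of-the-envelope illustration; whether a national presidential-poll bias transfers to a state gubernatorial race is exactly the question at issue.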
I wrote previously about two different groups studying what happened last year8, but according to Debenedetti:
…[T]he pollsters…are offering a new round of projections without ever having quite figured out what went wrong last year. … The industry set out to resolve all the ugly questions around the disaster of 2020 in the standard manner: with a big autopsy. In July, the American Association for Public Opinion Research [AAPOR] released its eagerly awaited (in the biz) report with the cooperation of a range of political and academic researchers. It first concluded that that year's national-level polling error―4.5 percentage points, on average…. And 2020's mistakes were different from 2016's, it continued:… Now…something bigger and scarier had happened. They just still couldn't conclude what, exactly, that was.
That discomfiting research wasn't the only effort of its kind. …[F]ive rival Democratic Party polling firms―including Joe Biden's―secretly joined forces two weeks after the election to find a diagnosis and solution. But when they revealed the collaboration in April, they still had no solid answers. One pollster involved in the Democratic effort told me the coalition was still deep in its experiments and didn't expect to know much at all for at least a few more months, "at a minimum."1
The studies seem to have ruled out some possible suspects, though:
The AAPOR report preempted one obvious question: No kind of interview―phone, text, online―clearly outperformed the others in terms of accuracy in 2020. It also ruled out the notions that 2020's issues were caused by anything from late-deciding voters who skewed the numbers to mis-weighted demographics when pollsters tried projecting the makeup of the electorate. It couldn't even be poll respondents' hesitance to admit they liked Trump. The Democratic groups, for their part, said they'd fixed their (relatively small) 2016 error by increasing representation for white non-college voters in their samples, and that their polling results in 2017 and 2018 races looked good―at least before 2020 spoiled that fix and revealed that their numbers were especially bad in more Republican areas of the country.1
This certainly sounds like some kind of anti-Republican bias, and only one suspect seems to remain:
The likely problem, in short, is that they simply aren't reaching a significant number of voters activated by Trump, perhaps because they don't know how to find them, or maybe because those voters mistrust and therefore ignore polls. …
"We would argue nonresponse is, by far, above and beyond, the biggest issue here," Johannes Fischer, the survey methodology lead for progressive polling outfit Data for Progress, told me this month. If nonresponse has always been a problem, its sheer scale is what's new now.1
Nonresponse bias9 arises when those who do not respond to a poll are different in some relevant way from those who do respond, which makes samples unrepresentative of the population. If the people who voted for Trump were more likely to be suspicious of polls and, therefore, less likely to respond to them than those who voted for Biden, the result would have been a bias in favor of Biden and against Trump. Ironically, such suspicions may have been a self-fulfilling prophecy: some voters thought polls were biased, then refused to participate, which resulted in biased polls.
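The mechanism of nonresponse bias can be illustrated with a minimal simulation. All the numbers here are hypothetical, chosen only to show how unequal response rates distort an otherwise fair sample:

```python
import random

# Illustrative simulation of nonresponse bias (hypothetical numbers).
# Suppose the true electorate splits exactly 50/50, but one candidate's
# supporters respond to pollsters at a lower rate.
random.seed(0)

N = 100_000
responses = []
for _ in range(N):
    voter = random.choice(["Biden", "Trump"])   # true 50/50 split
    # Hypothetical response rates: Trump voters answer less often.
    rate = 0.50 if voter == "Biden" else 0.35
    if random.random() < rate:
        responses.append(voter)

biden_share = responses.count("Biden") / len(responses)
# With these rates the raw sample shows Biden near 59%, even though
# the true electorate is 50/50 -- a large pro-Biden bias from
# nothing but differential willingness to respond.
print(f"Sampled Biden share: {biden_share:.1%}")
```

Note that no amount of increased sample size fixes this: the sample is systematically, not randomly, unrepresentative.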
If the polls are "broken" due to nonresponse bias, did Trump break them? Did his public attacks on polls10 cause enough of his supporters to stop responding that the polls are now unreliable? If so, then there is a simple fix:
In talking to a range of nonpartisan and party-affiliated pollsters in recent weeks, I found that many dismissed, but laughed nervously about, the least scientifically sound idea of all, which unfortunately would have looked on the surface like a fix in 2020: just artificially slapping four extra points of support on Trump's side. …
Some pollsters, such as [Patrick] Murray, have argued that the nonresponse problem appears to be Trump-specific, since the errors were far more pronounced in 2016 and 2020 than in any of the intervening or proceeding years, including the 2018 midterms. "The evidence suggests that when Trump's name is not on the ballot itself, we don't have a problem with missing a portion of the electorate that doesn't want to talk to us," he said. "The question is: Do we treat the 2020 election as something entirely brand-new, so we have to add a four-point arbitrary margin on the model for Republicans? Or do we look at when Trump has not been on the ballot and our polling has basically been okay? My working hypothesis is that's probably the better path to take, which means our 2021 polling isn't that different than what it was in 2017."1
An alternative hypothesis is that the bias is not so Trump-specific that it can't transfer from Trump to other Republican candidates, especially those he supports. Trump is supporting Youngkin in Virginia11, and McAuliffe has been portraying his Republican opponent as a Trump "wannabe"12. Meanwhile, Youngkin has been trying to put distance between himself and Trump without alienating Trump supporters13. Could Trump's support, or McAuliffe's attacks, have linked Youngkin to Trump sufficiently to have affected the response rate to recent polls? We'll have to wait till tomorrow or thereafter to find out.
I'll update this entry after the election results from Virginia are in.
Update (11/3/2021): 95% of the ballots in Virginia have been counted and the results are Youngkin at 50.68% and McAuliffe at 48.55%14, which is a difference of just over two percentage points. The Virginia Department of Elections will continue to accept absentee ballots until two days from now, for some bizarre reason, and the official results won't be certified until the fifteenth of this month, but I'm not going to wait.
A two-point win for Youngkin is not quite a confirmation of the hypothetical Republican nonresponse effect, but it is in the right direction, though it suggests that the effect is smaller than 3-4 points. However, since it's only about one point from the aggregated poll results, which is surely within the error bars for such averages, it's just as much a confirmation of the polls, and evidence against the idea that they're "broken".
Surprisingly, the election for governor of New Jersey, which also took place yesterday, is turning into a more interesting test of the nonresponse effect than Virginia. The New Jersey election didn't receive as much attention as Virginia because it was widely assumed that it would be an easy win for the incumbent Democrat, Phil Murphy. The final RCP average for the state showed Murphy ahead by 7.8 percentage points15. I can't find a polling average from 538, but the last six polls it lists all showed Murphy ahead, with a lead ranging from four to eleven points16.
Despite all that, the current election results are too close to call, with Murphy at 49.66% and his Republican opponent, Jack Ciattarelli, at 49.59% with 88% of precincts reporting17. So, even if Murphy does win, the results will be such that the hypothesized nonresponse effect of 3-4 points will be too small to account for them.
So, what can we conclude from this exercise? The polls are not broken in Virginia, but they are in New Jersey? Either there is no nonresponse effect, or, if there is, it's much smaller than hypothesized, or perhaps much bigger?
I'll leave it to you to decide, because I'm stumped.
Update (11/6/2021): An unusual mea culpa has been issued by Patrick Murray, the director of the Monmouth University Polling Institute:
I blew it. The final Monmouth University Poll margin did not provide an accurate picture of the state of the governor’s race. … I owe an apology…because inaccurate public polling can have an impact on fundraising and voter mobilization efforts. But most of all I owe an apology to the voters of New Jersey for information that was at the very least misleading.18
I mentioned in the above Update that the polls showed the incumbent governor ahead by a margin ranging from four to eleven percentage points, the high end of which was from Monmouth19. Murray writes: "Monmouth’s conservative estimate in this year’s New Jersey race was an 8-point win for Murphy, which is still far from the final margin18." The poll predicted the outcome of the race correctly, since it appears that Murphy won by a little over two percentage points17, and I suspect that many pollsters would have simply defended that as a hit rather than apologizing. Murray's admirable forthrightness in admitting error is rare among pollsters.
Despite apologizing at the beginning of his opinion piece, Murray spends a large part of it defending himself and Monmouth's other polling results. However, I'm more interested in what he thinks caused Monmouth's large error, as well as why the polls in general were so wrong about the closeness of that race:
Election polling is…prone to its fair share of misses if you focus only on the margins. For example, Monmouth’s polls four years ago nailed the New Jersey gubernatorial race but significantly underestimated Democratic performance in the Virginia contest. This year, our final polls provided a reasonable assessment of where the Virginia race was headed but missed the spike in Republican turnout in New Jersey.
The difference between public interest polls and election polls is that the latter violates the basic principles of survey sampling. For an election poll, we do not know exactly who will vote until after Election Day, so we have to create models of what we think the electorate could look like. Those models are not perfect. They classify a sizable number of people who do not cast ballots as “likely voters” and others who actually do turn out as being “unlikely.” These models have tended to work, though, because the errors balance out into a reasonable projection of what the overall electorate eventually looks like.
Monmouth’s track record with these models…has been generally accurate within the range of error inherent in election polling. However, the growing perception that polling is broken cannot be easily dismissed.18
Murray here is raising a problem distinct from the nonresponse bias that I discussed above, namely, that the failure of Monmouth's poll may be due to its model of the "likely voter". So, it needn't be the case that Trump supporters, conservatives, or Republican voters are not responding to polls, but that the pollsters judge them to be unlikely to vote and weight the results accordingly. If this hypothesis is correct, then it ought to be possible to fix the pollsters' models, at least if there is a systematic change in likelihood of voting.
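How a likely-voter model can miss even with a perfectly representative sample can be shown with a toy sketch. The sample, the turnout probabilities, and the weighting scheme here are all hypothetical, chosen only to illustrate the mechanism Murray describes:

```python
# Toy sketch of likely-voter weighting (all numbers hypothetical).
# A pollster weights each respondent by an estimated probability of
# voting; if that model underrates one party's turnout, the weighted
# margin is off even if the raw sample mirrors the electorate.

sample = [
    # (candidate preference, modeled turnout probability)
    ("D", 0.80), ("D", 0.80), ("R", 0.60), ("R", 0.60), ("R", 0.60),
]

def weighted_margin(respondents):
    """Return the Democratic margin among modeled likely voters."""
    d = sum(p for c, p in respondents if c == "D")
    r = sum(p for c, p in respondents if c == "R")
    return (d - r) / (d + r)

# With Republican turnout modeled at 0.60, the poll shows a close race:
print(f"Modeled margin: {weighted_margin(sample):+.1%}")

# But if Republicans actually turn out at the same 0.80 rate as
# Democrats, the true margin is far larger for the Republican:
actual = [(c, 0.80) for c, _ in sample]
print(f"Actual margin:  {weighted_margin(actual):+.1%}")
```

The point is that the error lives entirely in the turnout model, not in who answered the phone, which is why Murray treats it as a separate problem from nonresponse.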
As surprising as Murray's apology is, he even more surprisingly suggests that there ought to be fewer election polls. Such a suggestion may be possible because he works for a university rather than a commercial polling company:
Some organizations have decided to opt-out of election polling altogether, including the venerable Gallup Poll and the highly regarded Pew Research Center, because it distracts from the contributions of their public interest polling. Other pollsters went AWOL this year. For instance, Quinnipiac has been a fixture during New Jersey and Virginia campaigns for decades but issued no polls in either state this year.
Perhaps that is a wise move. If we cannot be certain that these polling misses are anomalies then we have a responsibility to consider whether releasing horse race numbers in close proximity to an election is making a positive or negative contribution to the political discourse.
This is especially important now because the American republic is at an inflection point. Public trust in political institutions and our fundamental democratic processes is abysmal. Honest missteps get conflated with “fake news”—a charge that has hit election polls in recent years. … If election polling only serves to feed that cynicism, then it may be time to rethink the value of issuing horse race poll numbers as the electorate prepares to vote.18
Unlike Murray, I'm not particularly worried about the effects of polling failures on the public's trust in political institutions. Such institutions in general are currently doing a terrible job, and the public should recognize that fact. People should have less trust in polls than many seem to, partly because of those failures. One advantage of having so many polls is that it encourages skepticism about polling, since it's so easy to see how widely their results range. Of course, a healthy skepticism about polling is not the same as a dismissive cynicism, and I hope that the latter is not encouraged.
However, it would probably be better if there were fewer polls, because that might lead to less "horse race" coverage. The news media sponsor most of them because covering a campaign as if it were a race is dramatic and easy. No matter what a poll shows, it's considered newsworthy. So, it's unlikely that we'll see the end of election polling, since it allows the media to manufacture news rather than just sit around waiting for something to happen. A big benefit of fewer polls would be less of such lazy reporting, and perhaps more reporting on issues and checking of factual claims made by the candidates instead. One can always dream, anyway.
- Gabriel Debenedetti, "Polling in America Is Still Broken. So Who Is Really Winning in Virginia?", New York Magazine, 10/28/2021.
- "Who's ahead in the Virginia governor's race?", Five Thirty Eight, accessed: 11/1/2021.
- "Virginia Governor―Youngkin vs. McAuliffe", Real Clear Politics, accessed: 11/1/2021.
- "Winning margins in the electoral and popular votes in United States presidential elections from 1789 to 2020", Statista, accessed: 10/30/2021.
- "Who's ahead in the national polls?", Five Thirty Eight, accessed: 10/30/2021.
- "National General Election Polls", Real Clear Politics, accessed: 10/30/2021.
- See: Errors of Unusual Magnitude, 7/19/2021.
- See the previous note and: What Biased Last Year's Polls?, 4/27/2021.
- Sheldon R. Gawiser & G. Evans Witt, A Journalist's Guide to Public Opinion Polls (1994), pp. 92-95.
- Lindsey Ellefson, "Trump Admits He Calls Polls ‘Fake’ When They Don’t Favor Him (Video)", The Wrap, 7/12/2021.
- Jill Colvin, "Trump Plans Last Minute Tele-Rally for Virginia's Youngkin", Associated Press, 10/28/2021.
- Aila Slisco, "After Larry Elder's Defeat, Terry McAuliffe Tries to Paint Glenn Youngkin as 'Trump Wannabe'", Newsweek, 9/16/2021.
- Darragh Roche, "Glenn Youngkin Keeps Distance From Unpopular Donald Trump in Virginia", Newsweek, 10/15/2021.
- "2021 November General", Virginia Department of Elections, 11/3/2021.
- "New Jersey Governor―Ciattarelli vs. Murphy", Real Clear Politics, accessed: 11/3/2021.
- "Latest Polls", Five Thirty Eight, 11/2/2021.
- "New Jersey Election Results", The New York Times, accessed: 11/3/2021.
- Patrick Murray, "Pollster: ‘I blew it.’ Maybe it’s time to get rid of election polls.", NJ, 11/5/2021.
- "Murphy Maintains Lead", Monmouth University Polling Institute, 10/27/2021.