WEBLOG
October 31st, 2023 (Permalink)
Trick or Treat
On Halloween, a group of trick-or-treaters visited all the houses on the north side of the one-hundred block of South Sycamore Street. There were four houses on the block from corner to corner, one numbered 106. The children started at 102, the lowest-numbered house, and visited each in turn. Each house was a different color, including a wooden one painted white. In each house lived a different family, and each family gave out a different homemade treat to the visiting children.
Clues:
- The trick-or-treaters visited the yellow house immediately after the Allen house, which wasn't the one where they received candy apples, but before they went to the house that gave out chocolate chip cookies.
- Brownies were not given out at the first or last house on the block, 108, one of which was built out of red bricks.
- The Cohen family was not the one that gave out candy apples.
- The trick-or-treaters received brownies at the house built out of gray stone.
- The Drummond family does not live in one of the corner houses.
- The house numbered 104 is not painted yellow.
- The Baxter family, who gave out caramel popcorn, do not live in the red brick house.
Using the clues above, can you determine which family lived in each house, the color of the house, and the treats given?
Number | Family | Color | Treat |
---|---|---|---|
102 | Baxter | White | Caramel Popcorn |
104 | Allen | Gray | Brownies |
106 | Drummond | Yellow | Candy Apples |
108 | Cohen | Red | Cookies |
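If you'd rather let a computer do the drudgery, the puzzle is small enough to solve by brute force. Here is a minimal solver sketch in Python: it tries every assignment of families, colors, and treats to the four houses and prints each assignment that satisfies all seven clues.

```python
from itertools import permutations

HOUSES = [102, 104, 106, 108]  # visited in this order, lowest first
FAMILIES = ["Allen", "Baxter", "Cohen", "Drummond"]
COLORS = ["White", "Yellow", "Red", "Gray"]
TREATS = ["Candy Apples", "Cookies", "Brownies", "Caramel Popcorn"]

def satisfies_clues(fam, col, treat):
    allen, baxter, cohen, drummond = (fam.index(f) for f in FAMILIES)
    yellow, red, gray = col.index("Yellow"), col.index("Red"), col.index("Gray")
    apples, cookies = treat.index("Candy Apples"), treat.index("Cookies")
    brownies, popcorn = treat.index("Brownies"), treat.index("Caramel Popcorn")
    return (yellow == allen + 1           # 1: yellow right after the Allen house...
            and apples != allen           #    ...which didn't give candy apples...
            and yellow < cookies          #    ...and before the cookie house
            and brownies not in (0, 3)    # 2: no brownies at either corner house...
            and red in (0, 3)             #    ...one of which is red brick
            and apples != cohen           # 3: the Cohens didn't give candy apples
            and brownies == gray          # 4: brownies at the gray stone house
            and drummond in (1, 2)        # 5: the Drummonds aren't on a corner
            and col[1] != "Yellow"        # 6: 104 isn't yellow
            and popcorn == baxter         # 7: the Baxters gave caramel popcorn...
            and baxter != red)            #    ...and aren't in the red brick house

for fam in permutations(FAMILIES):
    for col in permutations(COLORS):
        for treat in permutations(TREATS):
            if satisfies_clues(fam, col, treat):
                for i, number in enumerate(HOUSES):
                    print(number, fam[i], col[i], treat[i])
```

Running it prints exactly one assignment: the solution in the table above.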
October 29th, 2023 (Permalink)
The Associated-with-Terrorists Press & Dishonest(y) Researchers
- Joseph Simonson, "Associated Press Won't Let Reporters Call Hamas a Terrorist Organization", The Washington Free Beacon, 10/23/2023
The Associated Press [AP] instructs reporters and organizations that rely on its style guide to avoid referring to Hamas as a terrorist organization…. The news outlet states in its "Israel-Hamas Topical Guide" that because "terrorism and terrorist have become politicized, and often are applied inconsistently…1 the AP is not using the terms for specific actions or groups, other than in direct quotations."
The guidance will affect how dozens of regional newspapers and national outlets…report on the ongoing war in Gaza. …Instead of referring to Hamas—which earlier this month killed hundreds of innocent civilians, including children in an unprompted attack on Israel—as a terrorist group, the outlet says journalists should call its members "militants." "Terms such as Hamas fighters, attackers or combatants are also acceptable depending on the context," the Associated Press style guide states. …
Bucking the "terrorist" label is one in a series of strange decisions by the Associated Press, which include once sharing office space in Gaza with Hamas. That office, which the AP used for 15 years, was destroyed by Israeli airstrikes in May 2021. The Israeli Defense Force stated the building contained Hamas operatives and weapons, as well as an office for Islamic Jihad, another terrorist organization based in Gaza.
"We are shocked and horrified that the Israeli military would target and destroy the building housing AP's bureau and other news organizations in Gaza," AP president and CEO Gary Pruitt said in a statement at the time.
But an Israeli Defense Force source said at the time that Associated Press reporters were aware who their neighbors were. And The Atlantic reported in 2014 that Associated Press staffers in the Gaza office could see terrorists stationed next to their building launch rockets into Israel2. Despite their proximity, the Associated Press never reported the rocket launches, which endangered the group’s staff and nearby civilians.
"Hamas fighters would burst into the AP’s Gaza bureau and threaten the staff—and the AP wouldn’t report it," according to The Atlantic. "Cameramen waiting outside Shifa Hospital in Gaza City would film the arrival of civilian casualties and then, at a signal from an official, turn off their cameras when wounded and dead fighters came in, helping Hamas maintain the illusion that only civilians were dying."
The following article is way too long, and I don't recommend reading the whole thing, but if you skim over the excessive biographical and legal details, there's much worth reading about what's wrong with a great deal of current scientific research.
- Noam Scheiber, "The Harvard Professor and the Bloggers", The New York Times, 9/30/2023
The day almost two years ago when Harvard Business School informed Francesca Gino, a prominent professor, that she was being investigated for data fraud[,]…the school told Dr. Gino it had received allegations that she manipulated data in four papers on topics in behavioral science, which straddles fields like psychology, marketing and economics.
Dr. Gino published the four papers under scrutiny from 2012 to 2020, and fellow academics had cited one of them more than 500 times. The paper found that asking people to attest to their truthfulness at the top of a tax or insurance form, rather than at the bottom, made their responses more accurate because it supposedly activated their ethical instincts before they provided information. …
Many behavioral scientists believe that, once we better understand how humans make decisions, we can find relatively simple techniques to, say, help them lose weight (by moving healthy foods closer to the front of a buffet) or become more generous (automatically enrolling people in organ donor programs).
The field enjoyed a heyday in the first decade of the 2000s,…[b]ut it has been fending off credibility questions for almost as long as it has been spinning off TED Talks. In recent years, scholars have struggled to reproduce a number of these findings, or discovered that the impact of these techniques was smaller than advertised.
There was a lengthy article thirteen years ago in The New Yorker, of all places3, discussing the latter under the name of the "decline effect". What happens is that the original researchers massage the data in various ways, usually short of outright fraud, in order to get results that are both statistically and practically significant. When later researchers try to replicate the experiments, the effect either disappears or is smaller than in the original study, which is exactly what you'd expect if the original researchers were exaggerating their results.
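For the statistically inclined, here is a minimal simulation of one way this can happen even without fraud: if only studies that happen to reach statistical significance get published, the published effect sizes will systematically overstate the true effect, and faithful replications will show the "decline". The true effect and sample size below are illustrative assumptions, not figures from any study discussed here.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
TRUE_EFFECT, N, ALPHA = 0.15, 30, 0.05  # assumed: a small true effect, small samples

def run_study():
    """Run one study; return its observed effect and whether it was 'significant'."""
    xs = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    m, s = mean(xs), stdev(xs)
    z = m / (s / N ** 0.5)              # one-sample z-test (normal approximation)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return m, p < ALPHA

studies = [run_study() for _ in range(20_000)]
published = [m for m, significant in studies if significant]
print(f"true effect:              {TRUE_EFFECT}")
print(f"mean effect, all studies: {mean(m for m, _ in studies):.3f}")  # close to the truth
print(f"mean 'published' effect:  {mean(published):.3f}")  # well above the truth
```

Replications, free of the selection filter, regress back toward the true effect, which looks like the effect "wearing off".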
Fraud, though, is something else entirely. Dozens of Dr. Gino’s co-authors are now scrambling to re-examine papers they wrote with her. Dan Ariely, one of the best-known figures in behavioral science and a frequent co-author of Dr. Gino’s, also stands accused of fabrication in at least one paper.
Though the evidence against Dr. Gino, 45, appears compelling, it remains circumstantial, and she denies having committed fraud, as does Dr. Ariely. … That has left colleagues, friends, former students and, well, armchair behavioral scientists to sift through her life in search of evidence that might explain what happened. Was it all a misunderstanding? A case of sloppy research assistants or rogue survey respondents? Or had we seen the darker side of human nature—a subject Dr. Gino has studied at length—poking through a meticulously fashioned facade? …
Among those co-authors was Dr. Ariely…. Dr. Gino and Dr. Ariely became frequent co-authors, writing more than 10 papers together over the next six years. The particular academic interest they shared was a relatively new one for Dr. Gino: dishonesty. …[T]he papers she wrote with Dr. Ariely…made a splash. One found that people tend to emulate cheating by other members of their social group—that cheating can, in effect, be contagious….
Unlike some of those discussed below, this is scarcely a surprising result, since it's common sense that people are influenced for good or ill by their peers.
…[T]hey seemed to share an ambition: to show the power of small interventions to elicit surprising changes in behavior: Counting to 10 before choosing what to eat can help people select healthier options (Dr. Gino); asking people to recall the Ten Commandments before a test encourages them to report their results more honestly (Dr. Ariely). …
Claims of this sort invite immediate skepticism. It's implausible that such small changes should have large effects, so in all likelihood the effect, if any, is also small. Obviously, small effects are difficult to measure and require large samples to reach significance. How large were the samples in these experiments? This article doesn't say, but it mentions later that small samples were common in this field, suggesting that the data may have been manipulated to exaggerate results. Of course, this isn't disproof, but it's reason for caution in accepting such claims.
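To put rough numbers on that, here's a back-of-the-envelope sample-size calculation using the standard normal approximation; the effect sizes are Cohen's conventional benchmarks, not figures from the article.

```python
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate subjects per group for a two-sample comparison
    (normal approximation; effect size in standard-deviation units)."""
    z = NormalDist().inv_cdf
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2

for d in (0.8, 0.5, 0.2, 0.1):  # large, medium, small, and tiny effects
    print(f"effect size {d}: about {n_per_group(d):.0f} subjects per group")
```

A "large" effect needs only about 25 subjects per group, but a small one needs nearly 400, and a tiny one over 1,500.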
It’s often difficult to identify the moment when an intellectual movement jumps the shark and becomes an intellectual fad—or, worse, self-parody. But in behavioral science, many scholars point to an article published in a mainstream psychology journal in 2011 claiming evidence of precognition—that is, the ability to sense the future. In one experiment, the paper’s author, an emeritus professor at Cornell, found that more than half the time participants correctly guessed where an erotic picture would show up on a computer screen before it appeared. He referred to the approach as “time-reversing” certain psychological effects.
The paper used methods that were common in the field at the time, like relying on relatively small samples. Increasingly, those methods looked like they were capturing statistical flukes, not reality.
“If some people have ESP, why don’t they go to Las Vegas and become rich?” Colin Camerer, a behavioral economist at the California Institute of Technology, told me. … Few scholars were more affronted by the turn their discipline was taking than Uri Simonsohn and Joseph Simmons, who were then at the University of Pennsylvania, and Leif Nelson of the University of California, Berkeley.
The three behavioral scientists soon wrote an influential 2011 paper showing how certain long-tolerated practices in their field, like cutting off a five-day study after three days if the data looked promising, could lead to a rash of false results. (As a matter of probability, the first three days could have lucky draws.) The paper shed light on why many scholars were having so much trouble replicating their colleagues’ findings, including some of their own. …
This is a "questionable research practice"4 under the name "optional stopping"5: an experiment is run until the results are statistically significant, then stopped. If you have enough patience, you can eventually get significant results this way by random chance.
Dr. Gino and Dr. Ariely…sometimes produced work that raised eyebrows, if not fraud accusations, among other scholars. In 2010, they and a third colleague published a paper that found that people cheated more when they wore counterfeit designer sunglasses. “We suggest that a product’s lack of authenticity may cause its owners to feel less authentic themselves,” they concluded, “and that these feelings then cause them to behave dishonestly.”
If this doesn't make you skeptical, you need to recalibrate your baloney detector.
This genre of study, loosely known as “priming,” goes back decades. The original, modest version is ironclad: A researcher shows a subject a picture of a cat, and the subject becomes much more likely to fill in the missing letter in D_G with an “O” to spell “DOG,” rather than, say, DIG or DUG.
But in recent decades, the priming approach has migrated from word associations to changes in more complex behaviors, like telling the truth—and many scientists have grown skeptical of it. That includes the Nobel laureate Daniel Kahneman, one of the pioneers of behavioral economics, who has said the effects of so-called social priming “cannot be as large and as robust” as he once assumed. …
In 2021, the Data Colada bloggers, citing the help of a team of researchers who chose to remain anonymous, posted evidence that a field experiment overseen by Dr. Ariely relied on fabricated data, which he denied. The experiment, which appeared in a paper co-written by Dr. Gino and three other colleagues, found that asking people to sign at the top of an insurance form, before they filled it out, improved the accuracy of the information they provided.
Dr. Gino posted a statement thanking the bloggers for unearthing “serious anomalies,” which she said “takes talent and courage and vastly improves our research field.”
Around the same time, the bloggers alerted Harvard to the suspicious data points in four of her own papers, including her portion of the same sign-at-the-top paper that led to questions about Dr. Ariely’s work.
The allegations prompted the investigation that culminated with her suspension from Harvard this June. Not long after, the bloggers publicly revealed their evidence: In the sign-at-the-top paper, a digital record in an Excel file posted by Dr. Gino indicated that data points were moved from one row to another in a way that reinforced the study’s result.
Dr. Gino now saw the blog in more sinister terms. She has cited examples of how Excel’s digital record is not a reliable guide to how data may have been moved. … She spends much of her time laboring over responses to the accusations, which can be hard to refute.
Just invite the critics to your lab to observe a replication of the original research. Allow independent statisticians to analyze the raw data. If you can't do that, why should we take any of your research seriously?
In a paper concluding that people have a greater desire for cleansing products when they feel inauthentic, the bloggers flagged 20 strange responses to a survey that Dr. Gino had conducted. In each case, the respondents listed their class year as “Harvard” rather than something more intuitive, like “sophomore.”
Though the “Harvard” respondents were only a small fraction of the nearly 500 responses in the survey, they suspiciously reinforced the study’s hypothesis.
Dr. Gino has argued that most of the suspicious responses were the work of a scammer who filled out her survey for the $10 gift cards she offered participants—the responses came in rapid succession, and from suspicious I.P. addresses.
But it’s strange that the scammer’s responses would line up so neatly with the findings of her paper. When I pointed out that she or someone else in her lab could be the scammer, she was unbowed.
“I appreciate that you’re being a skeptic,” she told me, “since I think I’m going to be more successful in proving my innocence if I hear all the possible questions that show up in the mind of a skeptic.”
More damningly, the bloggers recently posted evidence, culled from retraction notices that Harvard sent to journals where Dr. Gino’s disputed articles appeared, indicating that much more of the data collected for these studies was tampered with than they initially documented.
In one study, forensic consultants hired by Harvard wrote, more than half the responses “contained entries that were modified without apparent cause,” not just the handful that the bloggers initially flagged.
Dr. Gino said it wasn’t possible for Harvard’s forensics consultants to conclude that she had committed fraud in that instance because the consultants couldn’t examine the original data, which was collected on paper and no longer exists. …
The dog ate her homework! Bad dog!
Seriously, isn't it understood by scientists that all data should be preserved? To fail to do so should be automatically disqualifying. Also, when was this study done? Fifty years ago? Who does anything only on paper anymore?
In October, dozens of Dr. Gino’s co-authors will disclose their early efforts to review their work with her, part of what has become known as the Many Co-Authors project. Their hope is to try to replicate many of the papers eventually. …
Hopefully, if there's anything to this research, they will replicate it under conditions that make fraud impossible, or at least as impossible as possible. If it can't be replicated, then it should go into the round file.
In an interview, Dr. Kahneman, the Nobel Prize winner, suggested that while the efforts of scholars like the Data Colada bloggers had helped restore credibility to behavioral science, the field may be hard-pressed to recover entirely.
“When I see a surprising finding, my default is not to believe it,” he said of published papers. “Twelve years ago, my default was to believe anything that was surprising.”
I admire and respect Kahneman, but this was just as naive an attitude twelve years ago as it would be today. Such results are surprising because they go against common sense, that is, what we believe based on a large amount of personal and collective experience. While common sense occasionally gets things wrong, it usually gets them right. For this reason, my default was always not to believe.
Notes:
- Ellipsis in the original.
- Matti Friedman, "What the Media Gets Wrong About Israel", The Atlantic, 11/30/2014. Friedman is a former AP reporter. Though nine years old, this article is still relevant and also recommended reading.
- Jonah Lehrer, "The Truth Wears Off", The New Yorker, 12/5/2010.
- "Questionable Research Practices Surprisingly Common", Association for Psychological Science, 5/24/2012.
- "Optional Stopping", Framework for Open and Reproducible Research Training, 11/3/2021.
Disclaimer: I don't necessarily agree with everything in these articles, but I think they're worth reading as a whole. In abridging them, I have sometimes changed the paragraphing of the excerpts.
October 22nd, 2023 | Updated: 10/25/2023 (Permalink)
Fighters, Gunmen, and Militants?
[A newspaper's] primary office is the gathering of news. At the peril of its soul it must see that the supply is not tainted. Neither in what it gives, nor in what it does not give, nor in the mode of presentation must the unclouded face of truth suffer wrong. Comment is free, but facts are sacred. "Propaganda", so called, by this means is hateful.1
Many of the news stories about the pogrom in Israel on the seventh of this month referred to the murderers, rapists, and kidnappers who committed it as:
- "Fighters": "Hamas fighters' rampage through Israeli towns on Saturday was the deadliest such incursion since Egypt and Syria's attacks in the Yom Kippur war 50 years ago and has threatened to ignite another conflagration in the long-running conflict.2"
Who were they fighting? Unarmed civilians at a music festival?3 Elderly people? Babies?4 Also, "fighter" makes no distinction between those who were shooting civilians and those who were fighting to save them: both are "fighters".
- "Gunmen": "In southern Israel on Sunday, Hamas gunmen were still fighting Israeli security forces more than 24 hours after their surprise, multi-pronged assault of rocket barrages and bands of gunmen who overran army bases and invaded border towns.2"
No doubt they did have guns, and most of their victims did not, but the "Israeli security forces" trying to defend unarmed civilians were also "gunmen".
- "Militants": "According to both Netanyahu and relatives, the mother and daughter were abducted from Kibbutz Nahal Oz during the surprise assault on southern Israel carried out from Gaza by Iranian-backed Islamist militants of Hamas on Oct. 7.5"
Nick Cohen said it best eighteen years ago:
The leaders of the rail unions are 'militants' in the sense they will call out their members in the private rail companies whenever they can. They don't put bombs on trains.6
Nor do they gun down unarmed people, rape women7, murder babies, or kidnap children and elderly people8.
The first two of these three words fail to distinguish between the criminal and the victim, treating them as if they were morally equivalent. "Militant" is an adjective meaning aggressive, one so broad that "militant pacifist"9 is not an oxymoron; in this context, it's simply a euphemism for "terrorist".
The only occurrence in the first article above of the word "terrorist" as applied to the "fighters", "gunmen" and "militants" is in a quote from President Biden. These words are not casually chosen; the reporters, editors, and news outlets that use them know full well that Hamas is a terrorist organization. Moreover, this sort of evasive language is often enshrined in their stylebooks and policies10, so we can't excuse this as a one-time failing. Also, we can't necessarily blame the reporters or editors for being mealy-mouthed, though we can blame them for working for an organization that won't allow them to speak honestly.
Defining "terrorism" is not hard11: it is violence against civilians by non-governmental organizations with the purpose of terrorizing a populace for political ends. Supporters of terrorism, including some nations, have tried to obfuscate the meaning of the word for their own political purposes, but its meaning has been well-established for decades. United Nations documents on terrorism will often claim that there is no universally accepted definition of "terrorism", but this is only because state sponsors of terrorism―such as Iran, which funds Hamas―are members12. This would be like the League of Nations asking Nazi Germany to agree to a definition of "genocide".
Though there may be some vagueness in the meaning of "terrorism", the Hamas attacks are a paradigm case. The word "bald" is paradigmatically vague, but there's no doubt that Patrick Stewart is bald. Similarly, there's no reasonable doubt that the Hamas attacks were terrorism, and to pretend otherwise just pours ink upon the troubled waters.
Both of the news articles quoted above come from the Reuters news agency. Shortly after the terrorist attacks on September 11th, 2001, Reuters released a statement attempting to explain its policy against using the word "terrorism": "Throughout this difficult time we have strictly adhered to our 150-year-old tradition of factual, unbiased reporting and upheld our long-standing policy against the use of emotive terms, including the words 'terrorist' or 'freedom fighter'.13"
"Terrorism" is an emotive word because terrorism is frightening14, but so is "murder", yet Reuters doesn't avoid calling the intentional and unlawful killing of a human being a "murder" simply because the word is emotive15. Thus, the explanation it gives for its policy is bunk, and there must be some other reason why it chooses to avoid the word. Not only is the language it uses dishonest, so is its explanation of why it uses it.
I don't mean to pick on Reuters, as I've no reason to think that it is especially bad, or worse than any other major news outfit. I chose the first story quoted above simply because it's the first that popped up when I searched for news published within a day of the massacres. Moreover, I know that the BBC has been guilty of similar practices in the past14 and still is16.
If Reuters doesn't want to call murderers, rapists, and hostage-takers "terrorists" for whatever reason, I'd settle for it calling them "murderers", "rapists", and "hostage-takers", but none of these words occurs in either article. Does it have a policy against them, as well? Obviously not, so it seems that its policy must be to soft-pedal terrorism.
Update (10/25/2023): The BBC has now changed its policy of calling Hamas terrorists "militants" by default, and is going to start referring to Hamas itself as "a group proscribed as a terror organization by the UK Government and others"17. That's a mouthful! Also, the new policy doesn't mean that the Beeb won't use the term "militants", just that it won't do so "by default". It still won't be calling terrorists "terrorists" itself but helpfully informing us that some others do call them that.
This is a baby step in the right direction, but it appears that the BBC only acted under pressure. As recently as the 11th of this month―after the Hamas massacres―the BBC published a ludicrous defense of its previous policy by John Simpson, one of its editors18. Then, a little over a week later, Israel threatened to cut off access to the BBC if it kept referring to Hamas as "militants"19. Simpson must have felt the rug pulled out from under him when, the very next day, the new policy was announced.
There was nothing new in Simpson's defense of the indefensible, just the usual feeble arguments. He wrote: "Terrorism is a loaded word, which people use about an outfit they disapprove of morally." As I argued above, and elsewhere, "terrorism" is not, in fact, a loaded word. Of course, it's emotive, but so is the word "genocide"; would the BBC refuse to refer to the Holocaust as "genocide" because the word is emotive? Surely not. "War" is a frightening word, too, yet BBC News refers to the current situation as the "Israel-Gaza War" on its front page.
Simpson goes on to say, referring to the attacks on Israeli civilians: "It's perfectly reasonable to call the incidents that have occurred 'atrocities', because that's exactly what they are." I agree, but does he think that "atrocity" is a precise, clinical word? "Atrocity" is actually vaguer than "terrorism", and at least as emotive. If it's perfectly reasonable to call the attacks "atrocities" then it's just as reasonable to call them "terrorism".
In addition, the BBC doesn't even apply its own claimed policy consistently. The Telegraph reports that it "has discovered more than 20 instances of the BBC referring to individuals or groups as terrorists in recent years, further undermining its claim that it avoids using the word in order to maintain impartiality"20. It looks more like partiality in favor of Hamas.
I decided to check some recent BBC stories to see if I could tell whether the new policy was being carried out, so I visited the BBC's site. In the middle of an article on the front page in which reporters answered readers' questions about the Israel-Hamas conflict21 was a photograph with the caption: "An Israeli soldier displays military equipment and ammunition that Hamas and Palestinans [sic] militants used in their attack on Israel". Given that the policy doesn't actually ban the use of the word "militants" to describe Hamas terrorists, this is not a violation.
What about calling Hamas "a group proscribed as a terror organisation by the UK Government and others"? There's only one occurrence of a "terror"-related word in the article, and that's in the following sentence: "Those sources of funding became much more difficult after it [Hamas] was widely designated as a terrorist entity by the US, European Union and others."
A couple of other stories I checked lacked the word "militants", but they also lacked the words "terrorist" and "terrorists" except in direct quotes, which the previous policy always allowed. All of this raises the question: what exactly would count as a violation of the new policy? It's almost as bad as the old one, and harder to enforce, since it's difficult to tell what counts as a violation.
Notes:
- C. P. Scott, "A Hundred Years", The Guardian, 10/23/2017.
- Maayan Lubell & Nidal Al-Mughrabi, "Israel retaliates after Hamas attacks, deaths pass 1,100", Reuters, 10/8/2023. Emphasis added.
- Saranac Hale Spencer & D'Angelo Gore, "What We Know About Three Widespread Israel-Hamas War Claims", Fact Check, 10/20/2023.
- Nur Ibrahim, "Were Israeli Babies Beheaded by Hamas Militants During Attack on Kfar Aza?", Snopes, 10/16/2023.
- Nidal Al-Mughrabi & Eric Cox, "Released by Hamas, American mother, daughter reunited with family", Reuters, 10/21/2023. Emphasis added.
- Nick Cohen, "Stop castrating the language", The Guardian, 7/16/2005.
- "Survivors of Hamas assault on music fest describe horrors and how they made it out alive", PBS News Hour, 10/10/2023.
- "Hamas hostages: Who are the people taken from Israel?", BBC, 10/21/2023.
- Albert Einstein actually called himself a "militant pacifist"; see: The Expanded Quotable Einstein, collected & edited by Alice Calaprice (2000), p. 165.
- David Mikkelson, "Reuters Proscribes 'Terrorist'", Snopes, 10/9/2001. This is from about a month after the 9/11 terrorist attacks in the United States, but it appears that Reuters has not changed its policy.
- More specifically, this is "transnational" terrorism; Hamas is a transnational organization.
- See, for instance: "Human Rights, Terrorism and Counter-terrorism", Office of the United Nations High Commissioner for Human Rights, section I.B, 10/2007.
- "City Diary", The Telegraph, 9/28/2001.
- See: The Ministry of Truth, 7/14/2005.
- For a recent example, see: Jonathan Allen, "Dutch man confesses to 2005 killing of US teen Natalee Holloway in Aruba", Reuters, 10/18/2023. For more on emotive language, see: Loaded Words.
- Ian Price, "What Do Hamas ‘Militants’ Have to Do for the BBC to Call Them Terrorists?", The Daily Skeptic, 10/17/2023.
- Craig Simpson & Ben Riley-Smith, "BBC stops calling Hamas ‘militants’ by default after backlash", The Telegraph, 10/20/2023.
- John Simpson, "Why BBC doesn't call Hamas militants 'terrorists' - John Simpson", BBC, 10/11/2023.
- "Israeli president threatens to ban BBC over ‘atrocious’ policy of calling Hamas militants", The Telegraph, 10/19/2023.
- Gordon Rayner & Jamie Bullen, "BBC refusal to call Hamas terrorists ‘unsustainable and indefensible’, says MP", The Telegraph, 10/17/2023.
- "Week 3: BBC correspondents answer your questions on the conflict between Israel and Hamas", BBC, 10/23/2023.
October 16th, 2023 (Permalink)
What's New?
The new and improved Fallacy Files! Now with more fallacies!
New Fallacy: Dangling Comparative
Check it out!
October 11th, 2023 (Permalink)
Headline Hooey
Check out the following recent headline:
Study: Black women at highest risk for suicide1
Anyone who knows anything about the sociology of suicide in the United States ought to find the above headline suspicious. While women are more likely than men to attempt suicide, men are more likely to actually kill themselves2. And the difference is not insignificant: almost four times as many American men committed suicide as women in 2021-20223. Of course, none of this rules out the possibility that a subgroup of women have a higher suicide rate than men as a whole, but it is a reason for skepticism.
Based on this headline, you'd probably assume that black women have a higher rate of suicide than women in general, not to mention men, and people taken as a whole, along with comparable subgroups, such as black men, white women, and white men. After all, if black women are at the highest risk, then there shouldn't be any other group at a higher risk.
The first sentence of the article seems to confirm the impression made by its headline: "A new study out of Boston University released on Wednesday indicated that Black women age 18 to 65 have the highest risk for suicide irrespective of their socioeconomic status." However, the article goes on:
The findings of the study…revealed that Black women in the highest income bracket had a suicide rate 20% higher than white women…. "Our findings were surprising because most studies usually show that the rate of suicide was higher in White women in the U.S.," corresponding author Temitope Ogundare, a clinical instructor of psychiatry at Boston University.4
That sounds as though high-income black women were compared only to white women in general, which is an apples-to-oranges comparison, and the remainder of the report does not reveal exactly what groups the study compared. For that information, we will turn first to the press release on which this news report was based5.
Like much science and health reporting of new studies nowadays, this report is a cut-and-paste job of a press release, with almost every sentence copied and the order rearranged. Some sentences were lightly reworded, perhaps to conceal the plagiarism or from a desire for brevity, but not all the changes can be explained this way.
The following table aligns sentences taken from the news report with the corresponding ones from the press release to show the extent of the copying, and highlights passages in the release that were left out of the report. Omitted are the researchers' quotes in the news story, all of which were taken directly from the release, showing that no new reporting was done.
News Story | Press Release |
---|---|
A new study out of Boston University released on Wednesday indicated that Black women age 18 to 65 have the highest risk for suicide irrespective of their socioeconomic status. | In a new study, looking only at women, researchers from Boston University Chobanian & Avedisian School of Medicine and Howard University have identified Black women aged 18–65 years, to have the highest risk for suicide irrespective of their socioeconomic status. |
The findings of the study, which revealed that Black women in the highest income bracket had a suicide rate 20% higher than white women…. | The study found that Black women in the highest income strata had a 20% increase in the odds of suicide/self-inflicted injury compared to white women in the lowest socioeconomic strata. |
Researchers from Boston University Chobanian & Avedisian School of Medicine and Howard University reviewed the National Inpatient Sample database, the largest all-patient database in the U.S., from 2003-2015. | To determine the factors associated with suicide/self-inflicted injuries, the researchers reviewed the National Inpatient Sample database, the largest all-patient database in the U.S., from 2003-2015. |
They identified and collected demographic data―including insurance type, smoking status, and exposure to domestic violence―on women age 18-65 who were hospitalized with diagnoses of self-inflicted injuries or attempted suicide. | They identified and collected demographic data (insurance type, smoking status, exposure to domestic violence, etc.) on women age 18-65 years who were hospitalized with a diagnosis of self-inflicted injury or attempted suicide. |
The researchers then used a computer model to test how race and socioeconomic status interacted to determine suicide risk. | They then used a computer model to test how race and socioeconomic status interacted to determine the risk of suicide. |
The researchers said the findings allow healthcare experts to identify populations that have the greatest risk of suicide. | According to the researchers, these findings are important because it [sic] allows identification of which population has the greatest risk of suicide. |
Ogundare said that she believes interventions targeted at helping women who have experienced domestic violence, lack of universal health coverage, as well as addressing racial discrimination, must become part of an overall suicide prevention strategy. | Ogundare believes interventions targeted at helping women who have experienced domestic violence, lack of universal health coverage as well as addressing racial discrimination must become part of a comprehensive suicide prevention strategy. |
Suicide is the second leading cause of death in the U.S. among individuals aged 10-34 years of age and the fourth leading cause of death for individuals aged 35-44 years. | Suicide is the second leading cause of death in the U.S. among individuals aged 10-34 years of age and the fourth leading cause of death for individuals aged 35-44 years. |
As you can see, though the news story was copied from the press release, it carefully edited out the information that the study looked only at women, and that "highest risk" referred only to black women in the highest income bracket as compared to white women in the lowest. This is an odd apples-to-oranges comparison, though one might have expected the relationship to be reversed.
This is all that we can learn from the press release, though it's enough to show that the news report was misleading. It would be easy to program a computer to do this kind of paraphrasing of press releases, and it certainly couldn't do a worse job of reporting than was done in this case. Let's turn now to the paper itself.
In the "Results" section, the paper informs us:
Compared to Whites, Blacks…and Hispanics…had lower odds of suicide/self-inflicted injury. However, when we examined the interaction between race/ethnicity and income, Blacks in the highest income quartile had 20% increased odds of suicide/self-inflicted injury than Whites in the lowest income group….2
And, again, in the Discussion section: "Blacks and Hispanics had a lower risk of suicide/self-inflicted injury in this study compared to Whites…." The only racial/ethnic groups with a higher suicide rate than whites were American Indians and Alaska Natives. So, it's only the odd comparison of high-income black women with low-income white women that underlies, but certainly does not support, the headline claim.
Notes:
- Clyde Hughes, "Study: Black women at highest risk for suicide", United Press International, 10/4/2023.
- Oluwasegun Akinyemi, et al., "Factors associated with suicide/self-inflicted injuries among women aged 18–65 years in the United States: A 13-year retrospective analysis of the National Inpatient Sample database", Plos One, 10/3/2023.
- "Suicide Data and Statistics", Centers for Disease Control and Prevention, 8/10/2023.
- Notice the capitalizing of "Black" but the inconsistent capitalizing of "white". Interestingly, the press release from which this news story was cribbed, and the study itself, capitalize both. The UPI article follows the recent racist practice of capitalizing "black" but not "white", which appears to be standard now in major American news outlets.
- "BU Study: Black Women, Aged 18–65 Years, Have Highest Suicide Risk Among Women", Boston University Chobanian & Avedisian School of Medicine, 10/4/2023.
October 2nd, 2023 (Permalink)
Rock the Voting Statistics
In 2002, in an episode of the television show The West Wing, a fictional White House press secretary for a fictional American president claimed:
18 to 24 year-olds represent 33% of the population but only account for 7% of the voters. … Decisions are made by those who show up! You gotta rock the vote!1
While The West Wing was a fictional show, I expect that most people who watched it assumed that claims such as this were not just made up. Are these claims also fictional?
There are two related claims in this sentence: first, that 18-24 year-olds are 33% of the population and, second, that 18-24 year-olds are 7% of voters. The claims were made during a scene in which the press secretary was addressing an event encouraging young adults to vote. So, the larger point was that young Americans from this age range are eligible to vote, but actually vote at a much lower rate than their representation in the population.
Unless you're an expert on voting in the United States, it's unlikely that you'll know whether the second claim is correct. Moreover, I can't think of any way of checking it for plausibility, though a fact check is certainly possible. However, you don't have to be a demographer to have doubts about the first claim. Is it really true that those seven years represent a third of the U.S. population? Are a third of the people you know between the ages of 18 and 24?2 Your internal baloney detector should be tingling.
You also don't need to be a demographer to check the claim for credibility3. Without doing any actual research, how would you do so? Give it a try, then click the button below to see my own attempt.
If you think about it, the number of people of any given age must be approximately the same as the number of any other age. Of course, as people grow old they die, so we would expect there to be fewer people who are, say, eighty years old than eight years old. However, the number of people who are 24 will be roughly the same as the number who are 18, since few die at these young ages, and those who do may be replaced by immigrants. In the absence of large population movements or devastating epidemics, the number of people of any given age will tend to be fairly stable.
So, we can estimate the number of people who are a particular age in a given area if we have two facts: the population of the area and the average lifespan. Then we simply divide the total population by the lifespan. Both of these are landmark numbers for the United States, that is, numbers that will help you navigate the statistical landscape and are, therefore, worth committing to memory. We don't need to be precise about these numbers because we're credibility checking rather than fact checking, and our goal is to see if the claim is believable. Later, we can do a fact check, and I'll do that below.
The current population of the U.S. is approximately a third of a billion and the average lifespan is around eighty. However, the episode was broadcast in 2002, so we need the population and lifespan from that year. Now, you probably don't know the population in 2002 and neither do I, but we can estimate it. Has it increased since 2002, decreased, or stayed the same? Surely it has increased, so let's guesstimate it as 300 million.
The current life expectancy in the U.S. is somewhere around eighty years. Was it the same, less, or greater in 2002? I'm guessing that it was a little less, so let's guesstimate it at 75.
Putting these two guesstimates together―that is, dividing the population by the average lifespan―we find that there will be about four million Americans of any given age. Since there are seven ages from 18 to 24, inclusive, the number of Americans in that age group will be around 28 million. Now, remember that the claim we're checking is that this age group represents a third of the American population, but it's only a little over 9%, so the claim is implausibly high.
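Here's the same guesstimate as a few lines of Python, in case you want to check the arithmetic:

```python
population = 300_000_000  # guesstimated 2002 U.S. population
lifespan = 75             # guesstimated average lifespan in years
ages = 7                  # 18 through 24, inclusive

per_age = population / lifespan                   # people of any single age
age_group = per_age * ages                        # people aged 18-24
print(f"per age:  {per_age:,.0f}")                # 4,000,000
print(f"18-24:    {age_group:,.0f}")              # 28,000,000
print(f"share:    {age_group / population:.1%}")  # 9.3% -- nowhere near 33%
```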
I suggested above that the show's point in citing the two percentages was that young adults were voting at a rate much below their share of the population. However, assuming that the second part of the claim was correct, it appears that this group is voting at rates only slightly less than their representation in the population, that is, 7% instead of 9%. This is good news about youth voting, but bad news about the statistics in political television shows, which appear to be as fictional as the rest of the show.
Fact Check: You may wonder how accurate the above credibility check is, as well as how accurate the second claim made by the show is. So did I, and here's what I found.
The American population in 2002 was 287 million4, which is a bit less than we guessed, and the lifespan was almost exactly 775, which is a little longer. Would these differences have made a difference in the credibility test? No: the number of people of a given age is a quarter of a million less than we estimated, and the total for ages 18-24 about two million less, but the percentage of the population represented by those ages is still just over 9%. So, our credibility check was surprisingly accurate.
What about the second claim that this age group represents only 7% of voters? Given that the show was first aired in October of 2002, prior to the midterm election that year, the most recent election for which there would be statistics is 2000. A total of 110,826,000 Americans voted in that election, and 8,635,000 of those voters were 18-246, which is closer to 8% than 7% of the electorate. So, despite what the show wanted us to think, young people were represented among voters at almost the same percentage they represent in the population.
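The same arithmetic in the same form, using the actual figures from the sources in the notes:

```python
population = 287_000_000  # actual 2002 U.S. population (note 4)
lifespan = 77             # actual 2002 U.S. life expectancy (note 5)

age_group = population / lifespan * 7             # Americans aged 18-24
print(f"18-24 share of population: {age_group / population:.1%}")  # about 9.1%

voters_all = 110_826_000  # total votes cast in the 2000 election (note 6)
voters_18_24 = 8_635_000  # votes cast by 18-24 year-olds (note 6)
print(f"18-24 share of voters:     {voters_18_24 / voters_all:.1%}")  # about 7.8%
```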
Notes:
- Aaron Sorkin, "The West Wing: College Kids", West Wing Transcripts, original air date: 10/2/2002.
- This might be true if you yourself are within that age range, but then the people you know are not a representative sample of the population.
- For advice on how to check factual claims for credibility, see:
- Compare & Contrast, 1/7/2022
- Divide & Conquer, 2/4/2022
- Ratios, Rates & Percentages, 3/27/2022
- Ballpark Estimation, 4/21/2022
- "What was the population of the United States in 2002?", Wolfram Alpha, accessed: 9/30/2023.
- "What was the lifespan in the United States in 2002?", Wolfram Alpha, accessed: 9/30/2023.
- "Voter Registration and Turnout by Age, Gender & Race 2000", United States Election Assistance Commission, 3/20/2007.