WEBLOG
November 29th, 2006 (Permalink)
Check 'Em Out
- (12/4/2006) Carl "The Numbers Guy" Bialik's latest report concerns the changing definition of "autism". It is a case study in how redefining a concept can lead to the mistaken belief that its incidence is growing. Interest groups usually have strong incentives to exaggerate the extent of the social problem they are concerned with in order to attract support, and the numbers they publicize are often based on worst-case scenarios. Another effective way to attract support is to make the problem appear to be increasing, which will often happen as a result of a low redefinition of the problem, or of increased reporting of it. There may well be legitimate medical reasons for redefining "autism"―I'm in no position to judge otherwise, and I don't mean to question the change in definition itself. Rather, the problem is the deceptive way in which advocates use redefinition and its consequences to advance their interests. No doubt fighting autism is a good cause, but that's no excuse for deception.
Source: Carl Bialik, "How Many Kids Have Autism?", The Numbers Guy, 11/30/2006
- (12/2/2006) Ben Goldacre on graphology:
You don't need me to tell you that graphology is based on the same performance principles as psychic readings, or tarot: skills like "cold reading", which are worth picking up, especially if you work in the manipulation industries such as politics, or sales. You can pick up social cues, you can tailor your assessment, you can look at what words are written, or what the person coming to you is like…. You can rely on "confirmation bias", the well-researched flaw in our reasoning apparatus that leads us to electively attend to information that confirms our beliefs, and to ignore, or undervalue, information that contradicts our beliefs, perhaps because fitting new facts to a pre-existing explanatory framework requires less cognitive effort than devising a whole new one. You can rely on "subjective validation", and the "Barnum effect", named after PT Barnum, the circus man who had "something for everyone": our tendency to find personal meaning in statements that could apply to many people. You have a need for other people to like and admire you, and yet you tend to be critical of yourself. You had an accident when you were a child involving water. I'm available for expensive psychic consultations if this is working for you.
Source: Ben Goldacre, "What analysing Gordon Brown's writing tells us about the Tories", Bad Science, 12/2/2006
- (11/30/2006) Steven Milloy on this year's hurricane season:
The point here is not that the "below-normal" 2006 hurricane season disproves the global warming hypothesis. It doesn’t any more than the "above-normal" 2005 hurricane season proves the hypothesis.
Good for Milloy, who is skeptical about global warming, for not making the same logical mistake made by those who jumped to the conclusion last year that the bad hurricane season was caused by climate change.
Source: Steven Milloy, "What Hurricane Season?", Fox News, 11/30/2006
Resource: Post Hurricane Post Hoc, 9/14/2005
- The cover story of the current Time magazine discusses the mistakes that people make in evaluating risks, and speculates about why they do it. Coincidentally, this relates to probabilistic fallacies because the notion of risk is probabilistic, and mistakes in estimating probabilities lead people to exaggerate certain risks while underestimating others. Unfortunately, the article is not very informative, and not even very clear in places. The following section makes a good point, though:
There's…the art of the flawed comparison. Officials are fond of reassuring the public that they run a greater risk from, for example, drowning in the bathtub, which kills 320 Americans a year, than from a new peril like mad cow disease, which has so far killed no one in the U.S. That's pretty reassuring―and very misleading. The fact is that anyone over 6 and under 80―which is to say, the overwhelming majority of the U.S. population―faces almost no risk of perishing in the tub. For most of us, the apples of drowning and the oranges of mad cow disease don't line up in any useful way.
However, many of the article's own comparisons are nearly as flawed as this one. For instance: "We wring our hands over the mad cow pathogen that might be (but almost certainly isn't) in our hamburger and worry far less about the cholesterol that contributes to the heart disease that kills 700,000 of us annually." The suggestion here is that we should, therefore, worry more about cholesterol than mad cow disease. But this example makes the mistake of comparing something that kills mainly elderly people with something that could kill people of any age. That's not to say that mad cow disease is really as dangerous as cholesterol, or more so, only that the two make a poor comparison.
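To see why such aggregate comparisons mislead, here is a minimal sketch of the arithmetic. The 320 annual bathtub drownings come from the article and the roughly 300 million population is the 2006 U.S. total; the age split is purely hypothetical, chosen only to show how a risk concentrated in small subgroups nearly vanishes when averaged over everyone.
```python
# A hedged illustration, not from the article: why a crude population-wide
# rate can mislead. The 320 drownings/year figure is from the Time piece and
# the ~300 million population is the 2006 U.S. total; the age-group split
# below is HYPOTHETICAL, chosen only to illustrate the arithmetic.

annual_drownings = 320
us_population = 300_000_000

crude_rate = annual_drownings / us_population
print(f"Crude annual risk: about 1 in {round(1 / crude_rate):,}")
# -> about 1 in 937,500, which sounds negligible for everybody.

# Hypothetically, suppose nearly all of those deaths occur among the very
# young and the very old, who make up a small share of the population.
high_risk_population = 30_000_000   # hypothetical: those under 6 or over 80
high_risk_deaths = 300              # hypothetical: most of the 320

high_risk_rate = high_risk_deaths / high_risk_population
low_risk_rate = (annual_drownings - high_risk_deaths) / (us_population - high_risk_population)

print(f"High-risk group: about 1 in {round(1 / high_risk_rate):,}")
print(f"Everyone else:   about 1 in {round(1 / low_risk_rate):,}")
# The crude average hides the fact that, for most people, the risk is far
# smaller still -- which is the article's point about comparisons that
# average over very different groups.
```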
The article mentions in passing one important source of faulty risk assessment:
We also dread catastrophic risks, those that cause the deaths of a lot of people in a single stroke, as opposed to those that kill in a chronic, distributed way. "Terrorism lends itself to excessive reactions because it's vivid and there's an available incident," says Sunstein. "Compare that to climate change, which is gradual and abstract."
This seems to be an allusion to the anecdotal fallacy, though it's somewhat spoiled by the apples-to-oranges comparison at the end. Terrorism has actually killed people, and continues to do so on a daily basis, whereas it's doubtful whether climate change will ever kill anyone, though it may have serious economic effects. A better comparison is that between automobile and airplane travel: cars kill many more people than planes do, but they do so a few at a time, whereas an airliner crash may kill tens or hundreds of passengers at once. As a result, car accidents usually get covered only by the local news―unless a celebrity is involved―while airliner crashes are national news. The scale of such crashes, and the attention they receive, tend to lead people to overestimate their likelihood.
It's a good thing that a major news magazine realizes that there is a problem with how people evaluate risk, and calls public attention to it. However, the news media―including Time itself―are major contributors to the public's exaggerated fears of air travel, terrorism, crime, sharks, and much else. They could do a lot to lessen the problem by changing the way they report on these and other dangers.
Source: Jeffrey Kluger, "Why We Worry About the Things We Shouldn't…and Ignore the Things We Should", Time, 11/26/2006
November 26th, 2006 (Permalink)
What's New?
I've added a new fallacy to the files: Probabilistic Fallacy! This is a generic fallacy for any common type of mistake in reasoning involving probabilities. I expect to add at least two subfallacies of it in the near future.
November 15th, 2006 (Permalink)
A Game Show Puzzle
Imagine that you are on a game show: a curtain rises and the host shows you four doors behind it. The first two doors are closed, and each has a letter on its front: the first door shows the letter "A" and the second the letter "B". The third and fourth doors are open and you can see what's inside: behind the third door is a beautiful model holding a goat on a leash; behind the fourth door is another beautiful model, sensuously stroking a new car. Because the last two doors are open, you can't see what letters are on their fronts; don't assume that they are "C" and "D" simply because the other two doors start the alphabet!
The host of the game show says: "We have a rule on this show that if a door has a vowel on it then there is always a goat within. The question I have for you is whether the four doors that you see violate this rule. Of course, we could check the rule by opening every closed door and checking the front of every open door, but we would like to test the rule in the easiest way possible. If you can tell me the way to test the rule that checks the fewest doors, you will win the brand new car you see behind the last door!"
What will you say to the host? Keep in mind that checking either of the first two doors means opening it to see what's behind it, and checking either of the last two means closing it to see what letter is on its front. What is the smallest number of doors that you need to check, and which doors are they?
November 7th, 2006 (Permalink)
The Paradox Files: The Voting Paradox
John, Jim, and Jill are the three members of the Voters Club. The way the club works is that the members get together before an election and vote amongst themselves; each member then agrees to vote in the actual election for whichever candidate wins this mini-election. In this way, the members of the club pool their votes behind one candidate. In the current election, there are three candidates running. John belongs to the R party and prefers the R candidate to the others, but likes the I candidate least of all. Jim, in contrast, is in the D party, but prefers the I candidate to John's favorite. Jill is an I voter, but is willing to vote for the R candidate before the D one. The members of the club vote by listing their preferences in order. Here are their ballots:
| John        | Jim         | Jill        |
|-------------|-------------|-------------|
| Candidate R | Candidate D | Candidate I |
| Candidate D | Candidate I | Candidate R |
| Candidate I | Candidate R | Candidate D |
Now, the Club finds itself in a quandary. Based upon the three ballots, a majority of the Club prefers Candidate R to Candidate D, because both John and Jill prefer R to D, and only Jim prefers D to R. Similarly, a majority prefers D to I, because John and Jim share that preference, with Jill the odd woman out. However, a majority also prefers Candidate I to Candidate R, because both Jim and Jill prefer I to R, with only John preferring the reverse. So, based on majority rule, the Club prefers R to D, D to I, and I to R. However, since the Club prefers R to D and D to I, it also prefers R to I. Therefore, the Club prefers both I to R and R to I, which is impossible! What is wrong with this reasoning?
Preferences are transitive relations―that is, if you prefer A to B and B to C, then you also prefer A to C. It's this fact which justifies the inference above that the Club must prefer R to I. Since groups are made up of individuals who have preferences, it is tempting to think that such groups will thereby have preferences of their own. Surely we think that we can determine what the majority of a group wants, and that will represent the wants of the group as a whole. However, the voting paradox shows that it is a mistake to treat the results of an election by majority rule as if they represent the preferences of the group. The assumption that the majority vote of the Club represents its preferences leads to a contradiction, showing that that assumption is false.
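For readers who would like to see the cycle exhibited mechanically, here is a minimal Python sketch using the ballots from the table above; the helper function is my own and simply tallies pairwise majorities.
```python
# Ballots from the table above: each member ranks the candidates
# from most preferred to least preferred.
ballots = {
    "John": ["R", "D", "I"],
    "Jim":  ["D", "I", "R"],
    "Jill": ["I", "R", "D"],
}

def majority_prefers(a, b):
    """True if a strict majority of ballots ranks candidate a above candidate b."""
    wins = sum(1 for ranking in ballots.values()
               if ranking.index(a) < ranking.index(b))
    return wins > len(ballots) / 2

# Run the three pairwise contests discussed above.
for a, b in [("R", "D"), ("D", "I"), ("I", "R")]:
    winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
    print(f"A majority prefers {winner} to {loser}")

# Prints that R beats D, D beats I, and I beats R: the three verdicts form a
# cycle, so the "group preference" defined by majority rule is not transitive.
```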
We naturally tend to think of groups or organizations as if they were big people made up of a lot of little people, with all of the psychological qualities that people have, including preferences. However, this is at best a metaphor. Groups and organizations are not people; they lack minds, and therefore do not have preferences. The voting paradox confounds our expectations only because we have treated a metaphor as if it were a literal truth.
Source: Nicholas Falletta, The Paradoxicon (1983), pp. 181-186.
November 6th, 2006 (Permalink)
Check it Out
John Allen Paulos' latest Who's Counting article discusses Philip Tetlock's book Expert Political Judgment, which I mentioned here late last year―you saw it here first! With the elections tomorrow, now is a good time to remind ourselves that so-called experts in politics are no better―and may even be worse―at predicting the future, including the outcomes of elections, than non-experts. Of course, experts presumably know more about history and about the current situation than non-experts, but this knowledge apparently doesn't translate into improved forecasting.
Source: John Allen Paulos, "Which 'Experts' Make Better Political Predictions?", Who's Counting, 11/5/2006
Resource: The Limits of Expertise, 12/28/2005
November 3rd, 2006 (Permalink)
Letter to the Editor
John Congdon sends the following extract from a letter to the editor:
To the editor: As a Holocaust survivor who was involved in the creation of the Manhattan College Resource Center, I feel that I must comment on the unfortunate engagement of Tony Judt by the center's director for the Oct. 17 visiting scholar forum.
To quote the director, Dr Frederick M. Schweitzer, who wrote in a letter in March 2003, "The Manhattan College Holocaust Research Center was created to promote Catholic-Jewish dialog, set forth in the conciliar document 'Nostra Aetate' of 1965 and by the many subsequent papal actions and declarations. The center seeks to educate people about the Holocaust and its significance for the present, with primary emphasis on educating teachers and future teachers."
In the 10th year of its existence, with all the good work that the center has accomplished, it is a shame that Professor Judt was asked to speak at this forum, as his philosophy is contrary to everything that the center stands for. Because of Dr. Judt's high standing in the academic community, his dangerous ideas may be taken seriously. His review of Holocaust remembrance is undeniably negative, as is his hatred for the State of Israel as it is now.
Tony Judt's affiliations are questionable. Mark Weber, who is the director of the Institute for Historical Review, quoted Professor Judt's view of the Shoah (the Holocaust):
The Shoah is frequently exploited in America and Israel to deflect and forbid any criticism of Israel. The Holocaust of Europe's Jews is exploited thrice over. It gives American Jews in particular a unique retrospective "victim identity." It allows Israel to trump any other nation's sufferings (and justify its own excesses) with the claim that the Jewish catastrophe was unique and incomparable, and (in contradiction to the first two) it is adduced as an all-purpose metaphor for evil―anywhere, everywhere, and always―and taught to schoolchildren all over America and Europe without reference to context and cause. This modern instrumentalization of the Holocaust for political advantage is ethically disreputable and politically imprudent.
This statement is evidence of Tony Judt's personal agenda of bashing Israel and downplaying the significance of the Holocaust. Mr Weber's essay was written for the Mehr News agency in Tehran, Iran in 2005. The Institute for Historical Review is linked to neo-Nazi organizations and to Holocaust deniers. This is one example which shows how controversial Tony Judt is.
Source: Martin Spett, "Letters to the Editor", Riverdale Press, October 12, 2006
Here's John's comment on the letter:
The fact that Dr Judt is quoted by Mr Weber is offered as evidence of their "affiliation," which is made especially damning by the details about Mr Weber's organization, its links, and the publication of his article in a country with an avowedly anti-Israel foreign policy. This is clearly Guilt by Association.
I don't know if there was any Quoting Out of Context in either Mr Weber's use of Dr Judt's words or Mr Spett's quoting of Mr Weber's article (having neither the original writing nor the later article to hand), but it is interesting that Mr Spett should take such exception to Dr Judt's words being used in this way. Perhaps we could call this "Appeal to Misleading Context"!
Solution to the Game Show Puzzle: The fewest doors that must be checked is two: the door marked "A", and the last, open door with a car behind it. You must check behind the "A" door because "A" is a vowel, and if there is no goat behind the door then the rule is violated. You needn't check the door marked "B" because "B" is not a vowel, so it doesn't matter whether there is a goat behind it or not. You also don't need to check the open door showing a goat, because it doesn't matter whether there is a vowel on the door or not: if there is a vowel, then the door obeys the rule; if there is no vowel, the door does not violate the rule. Finally, you need to check what is on the front of the open door showing a car; if there is a vowel on the front of the door, then the door violates the rule since there is no goat within.
If you got the wrong answer, you are not alone. This puzzle is a variant of a psychological test known as "the Wason selection task". The test is usually administered with four cards that have letters on one side and numbers on the other. When it is presented in this standard way, a majority of people get the wrong answer; when it is presented in more concrete terms, people do better, but many still answer incorrectly. The puzzle above falls somewhere between the abstract card version and the more concrete ones, so I assume that people will do better on it than on the card test, but not as well as on the more commonsense versions.
Why do people tend to do poorly on this type of test? There are different theories, but it may be because people have difficulty understanding conditional statements, such as the rule which the puzzle asks you to test. Many traditional logical fallacies―such as affirming the consequent, converting a conditional, etc.―result from either getting mixed up about the direction of a conditional statement or confusing a conditional with a biconditional statement.
Many people who realize that you need to check behind the "A" door mistakenly believe that you also need to check what is on the door which is open to reveal a goat. They may confuse the rule "if a door has a vowel on it then there is a goat behind it" with the converse rule "if there is a goat behind a door then there is a vowel on it"; or, they may misunderstand the rule to say "a door has a vowel on it if and only if there is a goat behind it". The only way to falsify a rule of the form "if X then Y" is to find a case of X that is not a case of Y. Therefore, the only doors that we need to check are closed ones with vowels on them and open ones that do not show goats within, that is, the first and last doors in the puzzle.
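To make that test concrete, here is a minimal Python sketch, not from the original post: the door encoding and function name are mine, but it applies exactly the falsification criterion just described and reports which doors need checking.
```python
VOWELS = set("AEIOU")

# What is visible on each door: front=None means the door is open (its letter
# is hidden); back=None means the door is closed (its contents are hidden).
doors = {
    1: {"front": "A", "back": None},     # closed, "A" on the front
    2: {"front": "B", "back": None},     # closed, "B" on the front
    3: {"front": None, "back": "goat"},  # open, goat inside
    4: {"front": None, "back": "car"},   # open, car inside
}

def must_check(door):
    """A door needs checking only if its hidden side could still falsify the
    rule 'if a door has a vowel on it, then there is a goat behind it'."""
    if door["front"] is not None:        # closed door: the letter is visible
        return door["front"] in VOWELS   # a vowel might conceal a non-goat
    else:                                # open door: the contents are visible
        return door["back"] != "goat"    # a non-goat might conceal a vowel

print([number for number, door in doors.items() if must_check(door)])
# -> [1, 4]: only the "A" door and the open door with the car need checking.
```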
Resource: Robert Todd Carroll, "Critical Thinking Mini-Lesson 3: The Wason Card Problem", The Skeptic's Dictionary, 7/25/2004