WEBLOG


January 31st, 2011 (Permalink)

Book Club: Wrong, Chapter 2:
The Trouble with Scientists, Part 1

If we knew what we were doing, it wouldn't be called research, would it?―Albert Einstein[?]

Freedman heads chapter 2, on scientists getting it wrong, with the above supposed Einstein quote. Einstein is a quote magnet, that is, any quote about science will inevitably get attributed to him, in the same way that Mark Twain is a magnet for instances of folksy humor. Or, as Yogi Berra may have said: "I never said most of the things I said." There is no citation in the chapter endnotes, and I couldn't find it in any quotation reference book, such as Bartlett's. Moreover, it doesn't sound like Einstein to me: what does it even mean? It sounds like a putdown of scientists given weight by being put into the mouth of the most famous scientist of all. I think it's bogus.

Freedman begins the chapter by discussing how studies can go wrong through measurement errors, specifically, by measuring something other than the target of interest. He tells the story of "the drunkard's search":

An old joke: A police officer finds a drunk man late at night crawling on his hands and knees on a sidewalk under a streetlight. Questioned, the drunk man tells her he's looking for his wallet. When the officer asks if he's sure that he dropped the wallet here, the man replies that he actually dropped it across the street. "Then why are you looking here?" asks the befuddled officer. "Because the light's better here," explains the drunk man. (P. 40)

So, apparently, we're supposed to conclude that scientists are like the drunk who looks for his wallet where the light is better rather than where he dropped it. Rather than measuring the outcome that they're interested in, they measure something else―a "proxy"―that's easier to measure. For example, Freedman discusses the use of functional magnetic resonance imaging (fMRI) as a proxy for measuring the mind (pp. 43-44). However, sometimes the proxy is not a good stand-in for the target outcome, and Freedman gives a number of examples of how this practice has led to mistakes.

Now, this is a funny story, but the drunk's behavior is not necessarily as ridiculous as it may sound. It depends on how you tell it: Is there any chance that the wallet is under the streetlight? Can the drunk see at all across the street where he thinks that he dropped the wallet? If it's such a dark night that the drunk cannot see anything across the street, and there's at least some possibility that he actually dropped the wallet under the streetlight, then it makes sense to look there first. It's often a sensible strategy to look in the easy places first.

Scientists usually measure proxies because it's either impossible―it's pitch black across the street―or difficult to measure what they're looking for, and there's some reason to think that the proxy measurement will tell them what they need to know―it may be that the wallet was dropped under the streetlight. For instance, scientists use fMRI as a way of measuring what's going on in the mind, because how else are they going to do it? You can ask people what they're thinking but, unfortunately, such self-reports are notoriously unreliable. Freedman, of course, only gives examples of proxy measurements that turned out to be bad stand-ins, but there are probably just as many or more examples where the proxies worked well―it turned out that the drunk dropped his wallet under the streetlight after all!

It's a good point that Freedman is making, but his way of making it is not so good. He should have emphasized that one reason scientists make mistakes is that the things we are interested in are often hard or impossible to measure, and must therefore be measured indirectly, which inevitably leads to error. Instead, he makes it sound as though scientists make these mistakes out of stubborn perversity, like the drunk in the joke, insisting on looking where they know the wallet won't be found just because it's easier.

Later in the chapter, Freedman discusses how the use of animals in research can lead to mistakes (pp. 50-54), since animals are not perfect proxies for people. This is a variant of the drunkard's search, in that scientists choose to experiment on animals primarily because it's considered unethical to do so on people. Of course, this practice leads to errors, but that seems to be a price that we're willing to pay. Nonetheless, it's a good thing to keep in mind when reading reports of research in the press.

One last thing that I want to mention from this chapter is Freedman's ranking of types of scientific study by trustworthiness:

Observational study: Interesting but untrustworthy.
Epidemiological study: Somewhat trustworthy if it is large and done well.
Meta-analysis or review study: Trustworthy.
Randomized controlled trial [RCT]: Very trustworthy if large―the gold standard of evidence. (P. 59)

This is a useful exercise, but I think that RCTs are not necessarily more trustworthy than meta-analyses. Once again, Freedman is neglecting the importance of replication. Even a "gold standard" study should be replicated, and a meta-analysis will include such a study and any attempts at replication. Observational and epidemiological studies are mainly useful for generating hypotheses to test in RCTs, rather than for drawing trustworthy conclusions. Of course, meta-analyses are limited by the studies that are available to analyze, but if an RCT has been done then I would put more trust in a meta-analysis that includes it than in the RCT itself.

Previous Installments:

  1. Introduction
  2. Chapter 1: Some Expert Observations

Next Installment:
Chapter 3: The Certainty Principle


January 29th, 2011 (Permalink)

Blurb Watch: Phil Ochs: There but for Fortune


An ad for the new documentary about folk singer Phil Ochs, There but for Fortune, includes the following blurb:

Blurb:

"A MUST-SEE."
-ENTERTAINMENT WEEKLY

Context:

…[F]ilmmaker Kenneth Bowser…does an admirable job of conveying why Ochs’ music continues to mean so much to his fans. … Bowser has unearthed who knows how many hours of unseen footage…. These alone make the film a must-see for fans like me.

Sources:

  1. Ad for Phil Ochs: There but for Fortune, The New York Times, 1/28/2011, p. C18
  2. Simon Vozick-Levinson, "'Phil Ochs: There but for Fortune,' a great documentary about an underappreciated folk singer", Entertainment Weekly, 12/10/2010

So, it's a must-see for fans of Phil Ochs like Vozick-Levinson, which is kind of an important qualification.


January 19th, 2011 (Permalink)

Guns don't kill people. Chuck Norris kills people!

A new slogan seems to be popular: "If guns kill people, then pencils misspell words." This is obviously a variant of the old slogan: "Guns don't kill people. People kill people." My favorite variant is the Chuck Norris one I've used as a title for this post.

The new slogan differs from the old in being a conditional statement; more specifically, a conditional of the familiar "monkey's uncle" type. For instance: "If guns kill people, then I'm a monkey's uncle", or "if guns kill people, then I'm Marie of Romania." Such a statement is a rhetorical way of negating its antecedent: since pencils don't misspell words, it follows by Modus Tollens that guns don't kill people. Thus, the new slogan is equivalent to the first part of the old one.
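
For readers who like to see the machinery, here's a minimal sketch in Python―my illustration, not anything from the slogans themselves―that verifies the Modus Tollens schema by brute force over truth values: in every case where "if P then Q" and "not Q" are both true, "not P" is true as well.

    from itertools import product

    def implies(p, q):
        # Material conditional: "if p then q" is false only when p is true and q is false.
        return (not p) or q

    # P: "guns kill people"; Q: "pencils misspell words" (labels for illustration only).
    # Modus Tollens: from "if P then Q" and "not Q", infer "not P".
    valid = all(
        (not p)                       # conclusion: not P
        for p, q in product([True, False], repeat=2)
        if implies(p, q) and not q    # consider only rows where both premises hold
    )
    print("Modus Tollens is valid:", valid)  # prints: Modus Tollens is valid: True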

Of course, guns do kill people, as do bullets, knives, cars, trains, falling trees, meteorites, and many other inanimate objects. So, what is meant by denying that they do?

According to my dictionary, the word "kill" is ambiguous between a meaning that can apply to inanimate objects such as guns, and a meaning synonymous with "commit murder". I'm doubtful of my dictionary on this point: since the broad meaning alone covers both cases, I wonder whether "kill" is really ambiguous at all. However, such an ambiguity would explain the meaning of the slogan: it denies that guns kill people in the narrower sense of "kill". Clearly, guns cannot commit murder, just as pencils cannot misspell words.

However, why make a point of the obvious fact that guns don't commit murder? My sense is that the slogans are meant to advocate a policy of punishing people for committing crimes as opposed to attempting to prevent crime by restricting access to guns. Since a slogan is not an argument, it is, a fortiori, not a fallacious argument. Therefore, it cannot, strictly speaking, commit a logical fallacy. However, I think these slogans are logical boobytraps for at least two fallacies:

  1. The Black-or-White Fallacy: There is a tendency for American politics to break into two opposed factions, one of which opposes all "gun control" and advocates harsh penalties for crime, while the other supports lenient punishment and promotes restriction of access to guns. However, there is no necessary trade-off between punishment and "gun control", and one could well believe that crime should be prevented if at all possible, but punished severely when committed. However, these simplistic slogans encourage the belief that this third position is impossible, that we cannot both blame and punish criminals if we restrict their access to guns.
  2. Straw Man: Does anyone who supports "gun control" actually believe that guns murder people? I think not; thus, these slogans both tilt at a straw man and make it harder to understand the positions of those who do support legal restrictions on guns.

Acknowledgment: Thanks to Woody NaDobhar for raising the issue.


January 15th, 2011 (Permalink)

New Book: Sleights of Mind

Sleights of Mind, subtitled "What the Neuroscience of Magic Reveals about our Everyday Deceptions", is a new book by the neuroscientists Stephen Macknik and Susana Martinez-Conde―together with the science writer Sandra Blakeslee. I've said before that psychologists could learn a lot from magicians, so it's nice to see a couple of brain scientists doing so.

I haven't read the whole book yet, only parts of it, though I've looked through the rest. The first few chapters deal with attention, and how magicians use misdirection to keep people from seeing how a trick works―a more powerful effect than most people realize―and visual illusions are another early topic. While these parts have little bearing on logical fallacies or related matters, the later chapters get into cognitive illusions: they discuss "illusory correlations", which are false causal conclusions, as well as the gambler's fallacy and some other probabilistic errors.



January 6th, 2011 (Permalink)

Check it Out

John Allen Paulos' latest "Who's Counting" column deals with the same subject as our current book club, namely, why scientific studies so often turn out wrong. By the way, I hope to have the next, belated installment of the club later this month.

I agree with Paulos that we shouldn't be so surprised when a study is contradicted by a later one. According to Paulos, one reason why this happens is regression to the mean. Read the whole thing.

Source: John Allen Paulos, "Study vs. Study: The Decline Effect and Why Scientific 'Truth' So Often Turns Out Wrong", Who's Counting, 1/2/2011
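
To see how regression to the mean alone can produce a "decline effect", here's a minimal simulation sketch in Python; the effect size and noise level are made-up numbers for illustration, not anything from Paulos' column. Every simulated study measures the same modest effect with a lot of noise, only the most impressive results get "published", and replications of those studies come out much smaller on average.

    import random

    random.seed(42)

    TRUE_EFFECT = 0.2   # the same modest effect underlies every study
    NOISE = 0.5         # measurement error is large relative to the effect
    STUDIES = 10000

    # First round: each study reports the true effect plus random error.
    first_round = [TRUE_EFFECT + random.gauss(0, NOISE) for _ in range(STUDIES)]

    # "Publish" only the most impressive 5% of results.
    cutoff = sorted(first_round, reverse=True)[STUDIES // 20]
    published = [x for x in first_round if x >= cutoff]

    # Replicate each published study: same true effect, fresh random error.
    replications = [TRUE_EFFECT + random.gauss(0, NOISE) for _ in published]

    print("mean published effect:   %.2f" % (sum(published) / len(published)))
    print("mean replication effect: %.2f" % (sum(replications) / len(replications)))
    # The replications cluster near the true effect (about 0.2), well below the
    # published results (about 1.2): the "decline" is just regression to the mean.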


Update (1/8/2011): Also check out the article by Jonah Lehrer from The New Yorker that seems to have prompted Paulos' column. Unlike Freedman in Wrong, who writes as if one study on its own should be enough to establish a result, Lehrer appreciates the importance of replication:

Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.

However, Lehrer has his own problem:

For many scientists, the [decline] effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe?

Strictly speaking, theorems in logic and math are "proven", but empirical science never "proves" anything, except in a weak, everyday sense of the word. If we expect a medical study, even one that's been successfully replicated, to "prove" the effectiveness of a drug, then we're going to be shocked and disappointed when later studies show the drug to be less effective. At the conventional 5% significance level, one in twenty studies of an ineffective drug can be expected to show a statistically significant effect just by chance! And that's not even considering all the nonrandom ways that a study may go wrong.
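
To make that 5% figure concrete, here's a minimal simulation sketch in Python; it assumes the SciPy library for the significance test, and the sample sizes are arbitrary. It runs a thousand two-group trials of a "drug" that does nothing at all and counts how many nonetheless pass the conventional 5% significance test.

    import random

    from scipy import stats

    random.seed(1)

    TRIALS = 1000
    N = 50  # patients per group

    false_positives = 0
    for _ in range(TRIALS):
        # Treated and control groups are drawn from the same distribution,
        # i.e. the drug has no effect whatsoever.
        treated = [random.gauss(0, 1) for _ in range(N)]
        control = [random.gauss(0, 1) for _ in range(N)]
        _, p_value = stats.ttest_ind(treated, control)
        if p_value < 0.05:
            false_positives += 1

    print("'Significant' results for a useless drug: %d of %d (about 5%% expected)"
          % (false_positives, TRIALS))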

Lehrer writes: "The decline effect is troubling because it reminds us how difficult it is to prove anything." Difficult? Impossible is more like it, unless you're talking about math. At the end of the article, Lehrer goes off the deep end into skepticism, but that isn't warranted by the "decline effect". Yes, science is hard: get used to it!

While regression to the mean may play a role in some of these cases of the "decline effect"―as Paulos suggests, and as is mentioned in Lehrer's article―I think that the simplest explanation in many cases is that the effect being studied is unreal. For example, in the case of ESP, the most likely explanation for Rhine's failures to replicate his early experiments is that he tightened up his controls for the later ones, thus eliminating the supposed effect. It doesn't bode well for the status of the "decline effect" as some kind of real phenomenon that Rhine's research is considered a prominent example of it.

Source: Jonah Lehrer, "The Truth Wears Off", The New Yorker, 12/13/2010


January 3rd, 2011 (Permalink)

The Puzzle of the Hyena's Alias

The Agency for Counter-Terrorism (ACT) has received information that a European terrorist known as "the Hyena" has entered the United States under an assumed name. Unfortunately, ACT has also received conflicting reports of the Hyena's alias, making it hard to track him down, especially since each of the names is a common one. Four informants were questioned and gave the following information about the alias:

  1. John Wilson
  2. James Moore
  3. Denied that the first name was "John". Last name: "Taylor".
  4. Stated that the previous three informants were each right about one name and wrong about the other.

According to the ACT, the fourth informant is the most reliable. Assuming that the fourth informant is correct, what is the Hyena's alias?


Solution to the Puzzle of the Hyena's Alias: The Hyena's alias is "James Wilson". There are several ways to solve this puzzle, but perhaps the simplest is to realize that, if the fourth informant is correct, then the Hyena's alias must be either "John Moore" or "James Wilson". This is because each of the first two informants must be right about one of the names but not both, therefore one must be right about the first name and the other right about the surname. Then, the third informant's information allows us to eliminate "John Moore": since we've already ruled out "Taylor" as the last name of the alias, the third informant must be right that the first name is not "John".
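
For the mechanically minded, the puzzle can also be solved by brute force. Here's a minimal sketch in Python that checks every candidate alias against the fourth informant's statement; the "other" entries stand in for any name that none of the informants mentioned.

    from itertools import product

    first_names = ["John", "James", "other"]             # "other": any unmentioned name
    last_names = ["Wilson", "Moore", "Taylor", "other"]

    def exactly_one(claim1, claim2):
        # Per the fourth informant: right about one name, wrong about the other.
        return claim1 != claim2

    solutions = [
        (first, last)
        for first, last in product(first_names, last_names)
        # Informant 1 said "John Wilson".
        if exactly_one(first == "John", last == "Wilson")
        # Informant 2 said "James Moore".
        and exactly_one(first == "James", last == "Moore")
        # Informant 3 denied "John" and claimed "Taylor".
        and exactly_one(first != "John", last == "Taylor")
    ]
    print(solutions)  # prints: [('James', 'Wilson')]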

Source: J. A. H. Hunter & Joseph S. Madachy, Mathematical Diversions (1975). The puzzle is based on one from page 49.
