April 24th, 2015 (Permalink)

Wiki Watchee: The Persistence of Misinformation

One of the claims used in defense of the accuracy of Wikipedia is that misinformation inserted into the online "encyclopedia" is usually found and removed quickly, even in a matter of minutes. I've argued previously that this is an unjustified claim because the examples that can be pointed to are only those that have been found, which means that the sample we have is affected by survival bias―see the Wikipedia Watch for 12/21/2014, below.

Moreover, there is at least one known hoax that lasted nearly a decade before being discovered―see the watch for 3/16/2014, below. I suggested that what is needed to evaluate Wikipedia's reliability is a systematic study of randomly selected articles by appropriate experts.

Now, a study of Wikipedia's accuracy has been conducted, though not the kind of general evaluation that I suggested. Nonetheless, it's a clever approach to studying the specific question of how rapidly misinformation gets corrected―see the Sources, below. What Gregory Kohs did was to insert pieces of misinformation into various Wikipedia articles and keep track of them. Unfortunately, the length of the experiment, which is measured only in months, cannot tell us how long such misinformation might last. However, it does show that the notion that most misinformation will be speedily fixed is incorrect, at least if one measures speed in terms of weeks. Many claims about the rapid repair of incorrect claims put it in terms of minutes, rather than weeks or even months, as shown by some of the quotes cited by Kohs. Yet, about two-thirds of Kohs' misleading edits lasted weeks, and half remained in place by the end of the experiment, more than two months later.

The most amusing, and at the same time sad, aspect of this experiment is what happened when it ended and Kohs tried to correct the remaining misinformation himself:

The second craziest thing of all may be that when I sought to roll back the damage I had caused Wikipedia, after fixing eight of the thirty articles, my User account was blocked by a site administrator. The most bizarre thing is what happened next: another editor set himself to work restoring the falsehoods, following the theory that a blocked editor's edits must be reverted on sight.


Previous Wikipedia Watches: 12/21/2014, 3/16/2014

April 23rd, 2015 (Permalink)


Do doctors understand test results?

An old rule of journalism goes: headlines that end with a question mark can safely be answered "no". So, it probably won't come as a surprise to you that the article with the above headline gives evidence that doctors often do not understand test results―see the Source, below.

Of course, I would hope that if doctors were readers of this website they would understand the results of medical tests better, since we've examined all of the mistakes discussed in the article:

Check it out.
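The central confusion discussed in such articles is the base rate fallacy, and it's easy to reproduce with a little arithmetic. Here's a sketch in Python of the kind of calculation involved; the sensitivity, false-positive rate, and prevalence figures below are made up for illustration, not taken from the article:

```python
# Hypothetical screening test: 90% sensitivity, 9% false-positive
# rate, and a disease prevalence of 1%. What fraction of positive
# results actually indicates disease?
prevalence = 0.01
sensitivity = 0.90
false_positive_rate = 0.09

true_positives = prevalence * sensitivity                  # 0.009
false_positives = (1 - prevalence) * false_positive_rate   # 0.0891
ppv = true_positives / (true_positives + false_positives)

print(round(ppv, 3))  # 0.092
```

Even with a fairly accurate test, fewer than one positive result in ten indicates disease, because the disease is rare to begin with. That is the sort of counterintuitive result that trips up doctors.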

Source: William Kremer, "Do doctors understand test results?", BBC News, 7/7/2014

Acknowledgment: Thanks to Lawrence Mayes for calling this article to my attention.

April 19th, 2015 (Permalink)

Rolling Stone's Worst-Case Scenario

If you had no idea things were that bad, they probably aren't.―Joel Best

Columbia University's Graduate School of Journalism recently released its report on the now retracted Rolling Stone (RS) magazine story about an alleged campus gang rape―see Source 3, below. It's an important and interesting case study of how journalism can go wrong. The report itself is long, but well worth reading. There are also a number of shorter but excellent commentaries―see Sources 4 through 6, below.

The report claims that confirmation bias played a role in what went wrong:

The problem of confirmation bias―the tendency of people to be trapped by pre-existing assumptions and to select facts that support their own views while overlooking contradictory ones―is a well-established finding of social science. It seems to have been a factor here.

I like Megan McArdle's description of "classic" confirmation bias―see Source 5, below:

Classic confirmation bias means that you ask questions that would confirm your theory, rather than ones that would disconfirm it. Say I give you a set of numbers in a set: 2, 4, 6, 8. Now, I say, tell me what the rules for inclusion in this set are. You can ask me a number, and I'll tell you whether it's in the set. Almost invariably, the next numbers people suggest are "10" and "12," and when you agree they're in the set, they proudly announce that the set is "even numbers." False: The set is "all positive integers." Why did they fail? Because they only suggested numbers that would confirm their theory, which also happen to be in the set. What they didn't do is suggest an odd number to see if it might also qualify.
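McArdle's number game can be sketched in a few lines of code. The two rules below stand in for the ones in her example: the actual rule for the set, and the hypothesis that the initial numbers invite.

```python
# The hidden rule: the set is "all positive integers."
def in_set(n):
    return isinstance(n, int) and n > 0

# The hypothesis invited by 2, 4, 6, 8: "even numbers."
def hypothesis(n):
    return n % 2 == 0

# Confirming probes: both rules agree, so these teach us nothing.
print([in_set(n) for n in (10, 12)])   # [True, True]

# A disconfirming probe: an odd number is also in the set,
# refuting the "even numbers" hypothesis.
print(in_set(7), hypothesis(7))        # True False
```

The probes 10 and 12 cannot distinguish the two rules; a single odd number settles the matter. That's why testing only confirming cases fails.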

While it's a good thing that people are becoming aware of confirmation bias, I'm not so sure that it played much of a role in this case. Instead, the reporter, editor, and fact-checker seem not to have insisted even on finding evidence in support of the accusation, let alone against it. They simply seem to have accepted the accuser's account, and in lieu of seeking out evidence to support or refute it, the reporter wrote and her editor edited the story in such a way as to conceal that it was based entirely upon one woman's accusations. As Jean Kaufman writes―see Source 4, below:

[RS] appear[s] to have jettisoned those time-honored procedures [of journalism] for reasons that were most likely both ideological and self-serving: the story was a perfect fit for their pre-existing biases about campus rape and its perpetrators, and the tale was so sensational that it could practically guarantee them a record number of readers. In other words, it was far too good to fact check. …Rolling Stone set out to find a particular type of narrative and [it] got a sensational one. They then were willing to suspend the journalistic standards they profess to hold dear in order to protect it from too many questions. That's not journalism, it's activism. For reporters, the greater their initial bias in one direction or other, more care must be taken to overcome it with more due diligence, not less….

McArdle appears to agree:

What I see when I read through the…report is the story of journalists who had an incredible story, one that would get them readers and professional acclaim, and, perhaps most important, give them the opportunity to right a great wrong. Their excitement about the story, their determination to tell it, blinded them to the problems, so that the old joke about a story being "too good to check" actually came true, with terrible consequences. And that should be a lesson to every journalist out there: The better your story, the harder you need to work to disconfirm it. Because the odds are, your brain is sending you all the wrong signals. Of course, it's not exactly news that our emotions can mislead us. That's why we have professional rules, such as "always contact the other side for comment," in the first place. Rolling Stone got taken by a fabulist. But it was not the victim of fraud; it was a co-conspirator in self-deception.

There is one factor that I think played a role in this debacle that's not mentioned in the Columbia report. Moreover, it does not involve a failure to live up to journalistic standards, but instead a standard practice in journalism, namely, that of searching for a dramatic anecdote to build a story around. Jay Rosen is the only commenter on the case that I've noticed mention this problem―see Source 6, below:

The most consequential decision Rolling Stone made was made at the beginning: to settle on a narrative and go in search of the story that would work just right for that narrative. The key term is emblematic. The report has too little to say about that fateful decision, probably because it's not a breach of procedure but standard procedure in magazine-style journalism. (Should it be?) This is my primary criticism of the Columbia report: it has too little to say about the "emblem of…" problem. [Ellipsis in the original.]

Initially, RS's article was supposed to be about the general problem of rape on university campuses and how administrations tend to deal with it. The reporter then set out to find the most dramatic case she could to illustrate this general problem, and presumably settled on the gang rape story because it was the most extreme and horrifying one she found. However, there are problems with this approach to reporting:


  1. "Carl Sagan on Alien Abduction", NOVA, 2/27/1996
  2. Joel Best, Stat-Spotting: A Field Guide to Identifying Dubious Data (2008), pp. 11, 111-113
  3. Sheila Coronel, Steve Coll & Derek Kravitz, "Rolling Stone and UVA: The Columbia University Graduate School of Journalism Report", Rolling Stone, 4/5/2015
  4. Jean Kaufman, "Too Good to Fact Check", PJ Media, 4/7/2015
  5. Megan McArdle, "Rolling Stone Can't Even Apologize Right", Bloomberg View, 4/6/2015
  6. Jay Rosen, "Rolling Stone's 'A Rape on Campus.' Notes and comment on Columbia J-school's investigation.", Press Think, 4/6/2015

Fallacy: The Anecdotal Fallacy

April 17th, 2015 (Permalink)

New Version: Statistics Done Wrong

Alex Reinhart's Statistics Done Wrong, which was formerly only a website, is now a book in various formats, including paper! The new book is claimed to be three times the size of the web version. Unlike books on statistics aimed at a general audience, such as Darrell Huff's and Joel Best's, it's not about the kind of statistical errors made by journalists reporting on scientific studies, or by advertisers or advocates misreporting them. Rather, it describes the mistakes that scientists themselves make, and that lead to so many false and conflicting results. Reinhart discusses the following statistical mistakes that we've met here previously: the base rate fallacy, the multiple comparisons fallacy, and the regression fallacy. I haven't read the new book, but the web version is very clearly written, with a minimum of actual math if that sort of thing scares you. So, this is not an introduction to statistics, but it does what such introductions don't do: explain the logic and illogic of statistics in a way that even non-mathematicians can understand.

Source: Alex Reinhart, Statistics Done Wrong

April 15th, 2015 (Permalink)

Puzzle it Out

If you haven't racked your brain enough doing taxes, there's a clever puzzle making the rounds that you might be interested in. It's being called a "math" puzzle, perhaps because it was a problem in a math olympiad for high school students in Singapore. However, it's really just a logic puzzle, since no mathematics is required to solve it. The original version of the puzzle was controversial enough that The New York Times published an article about it―see the Source, below. The controversy seems to have been at least partly due to the original wording of the puzzle, which was ungrammatical and unclear because it was presumably written or translated by someone who was not a native speaker of English. Revised and unambiguous wording of the puzzle is given in the Times article, and the solution is also clearly explained. Check it out.
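For those who would rather let a computer do the deducing, the puzzle is also a nice exercise in mechanical elimination. The sketch below works through the widely circulated version, in which Cheryl tells Albert the month and Bernard the day of one of ten candidate dates; the date list is the one from the viral version of the puzzle, so consult the Times article for the full statement.

```python
from collections import Counter

# The ten candidate dates (month, day) from the viral version.
dates = [(5, 15), (5, 16), (5, 19), (6, 17), (6, 18),
         (7, 14), (7, 16), (8, 14), (8, 15), (8, 17)]

def unique(items):
    # Values that occur exactly once.
    return {x for x, n in Counter(items).items() if n == 1}

# 1. Albert (who knows the month) is sure Bernard (who knows the
#    day) doesn't know: the month contains no globally unique day.
unique_days = unique(d for _, d in dates)
step1 = [(m, d) for m, d in dates
         if not any(d2 in unique_days for m2, d2 in dates if m2 == m)]

# 2. Bernard now knows: his day is unique among the survivors.
step2 = [(m, d) for m, d in step1 if d in unique(d for _, d in step1)]

# 3. Albert now knows too: his month appears only once among what's left.
step3 = [(m, d) for m, d in step2 if m in unique(m for m, _ in step2)]

print(step3)  # [(7, 16)] — July 16
```

Each statement in the dialogue becomes a filter on the list of dates, which is all the "logic, not math" the puzzle requires.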

Source: Kenneth Chang, "A Math Problem From Singapore Goes Viral: When Is Cheryl's Birthday?", The New York Times, 4/14/2015

April 6th, 2015 (Permalink)

The Logical Problem of Evil

And God saw every thing that he had made, and, behold, it was very good.
And the evening and the morning were the sixth day.―Genesis 1:31, KJV

Chris Cox sends the following story that I expect many readers can sympathize with:

When I was in the 6th grade in Catholic school we learned about Lucifer and how he was God's most brilliant angel. And then, through pride, he fell from God's grace and became Satan and was cast into Hell. Thence he has tempted man to sin so he can gather their souls into Hell and deny them heaven.

One day a Monsignor came around to ask us questions about what we had learned and to allow us to ask him questions. I had been thinking about how God was all-powerful and how he was all-good, as taught in the first two pages of our catechism. So I asked the Monsignor why God didn't just snap his fingers and make the Devil disappear. As I remember he paused for a moment then said, "Well, there are some things God does without our understanding them. These things will be revealed to us when we join him in heaven." Or words to that effect.

Over the last thirty years or so I had been thinking of that classroom and why God would create Satan, Hell and all the other suffering, all because of Adam and Eve disobeying him (not to mention the fact he knew they were going to disobey him, and all the mental entanglements that gets you into). I also had been reading several books and articles in various publications. After some time I came to the conclusion that there was no god.

The argument I use comes from the first two or three pages of that first St. Joseph's catechism. In those pages we were taught that God was all-powerful, all-good, and all-knowing. Which brings me, finally, to my argument from evil that a perfectly good god can't create a universe with evil in it. Or as Paul Kurtz asked in his publication, Free Inquiry, "Why doesn't God abolish evil?

  1. He can't, and is therefore not Omnipotent, or
  2. He won't, and is therefore not Omnibeneficent."

My question concerns any fallacies in all of this: Is it sound?

You're raising a difficult problem that philosophers have written whole books about, but I'm not going to. In order not to write a whole book about it, I'll pass quickly over some complexities and carefully avoid distracting side issues. For lengthier but not book-length treatments, see the Sources, below. Also, as a logician and not a theologian or philosopher of religion, I will concentrate on a few logical points raised by your account:


Resource: Anthony Gottlieb, "Candide and Leibniz's garden", Voltaire Foundation, 2/3/2015. A brief discussion of the relationship between Candide and Leibniz. Contains some untranslated French.

Update (4/10/2015): A reader wrote to offer a version of the "free will" solution to the problem of evil, which I mentioned in the note to Source 1, above. This was one of the side issues that I was trying to avoid, but perhaps it's not obvious that it is a version of the "best of all possible worlds" defense.

The basic idea of the free will argument is that God created a world in which we have free will and that means we are free to do evil. With respect to free will and evil, there are four types of possible world that God could choose from:

  1. There is free will and there is evil. This seems to be the type of world we live in.
  2. There is free will but there is no evil. Some will claim that this type of world is not really possible, since if people have free will then it's possible that they will commit evil. However, free will does not necessitate that we commit evil, for if it did then in what sense would we be free? Therefore, it's possible that God could have created a world in which we have free will but have freely chosen not to commit evil. However, many philosophers have believed―wrongly in my opinion―that it is impossible for God to create people with free will who commit no evil, so let's put this possibility to one side.
  3. There is no free will but there is evil. This might also seem, at first glance, not to be possible but there is what's called "natural" evil, which is the evil resulting from earthquakes, volcanic eruptions, hurricanes, diseases, and so on. Thus, it's possible to have a world in which there is no "moral" evil―that is, the evil done by people with free will―but still have natural evil. However, it's arguable that a world in which we lacked free will would be one in which natural evil did not matter to us, since we would then be like "robots". For this reason, let's also put this possibility aside.
  4. There is no free will and no evil.

So, ignoring possibilities 2 and 3, the choice that God faced was between worlds of type 1 or 4, that is, between a world in which there was free will and evil (1) or one in which there is no free will and no evil (4).

Now, why would God choose 1 over 4? Presumably, because free will is of such great value that a world with both free will and evil is better than one with no free will but no evil. If that were not the case, then God would have chosen to create a worse world than He could have. Why would He do this? Only because he either could not help it, did not know any better, or did not wish to create the best world he could; in other words, only if he is not omnipotent, not omniscient, or not omnibenevolent.

Therefore, if God exists and is all three "omni"s, then Leibniz was right that this is the best of all possible worlds.

Source: Tim Holt, "The Free Will Defence", Philosophy of Religion, 2008

March 31st, 2015 (Permalink)

The Puzzle of the Mount Rushmore Four

Bank robbers seem to like wearing masks of presidents in order to conceal their identities―see the Resources, below, for a couple of previous examples. Perhaps it's because of all those pictures of presidents on the currency. Now, a new gang of four has robbed several banks while wearing masks representing presidents Washington, Jefferson, Theodore Roosevelt, and Lincoln. As a result, the police have nicknamed them "the Mount Rushmore gang".

After the gang's most recent robbery the police brought in four men―one of whom was named Cross―whom they suspect of being the Mount Rushmore thieves. Unfortunately, no physical evidence was found directly linking any of the four suspects with the robberies. They immediately lawyered up and have since refused to say a word.

The police put the four suspects in a line-up for witnesses from the bank, but because of the masks the witnesses had only been able to see the robbers' eyes. Luckily, each of the suspects had a different color of eyes―one had dark brown eyes.

Putting together all of the evidence they were able to gather, the police had only the following six clues:

  1. The suspect named Dawson had not worn the Roosevelt mask.
  2. The fourth man in the line-up was not the suspect named Ambrose.
  3. During the robbery, the suspect named Ballard entered the bank first, followed by the suspect who stood third in the line-up, then came the man who had worn the Jefferson mask, and finally the blue-eyed robber.
  4. The suspect with hazel eyes stood last in the line-up.
  5. In the line-up, Dawson stood to the immediate left of the man who had worn the Washington mask.
  6. When the second man in the line-up entered the bank he was immediately followed by the green-eyed robber.

Of course, the police are baffled by this evidence. Can you help them determine the masks each suspect wore, their eye colors, the order in which each entered the bank during the robbery, and the order in which they stood during the line-up?
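If you'd rather check your reasoning by brute force, the puzzle is small enough to solve by trying every assignment of masks, eye colors, entry order, and line-up position. Here's a Python sketch; the encoding of the six clues into code is mine, with line-up positions numbered 1 to 4 from left to right:

```python
from itertools import permutations

suspects = ["Ambrose", "Ballard", "Cross", "Dawson"]
masks = ["Washington", "Jefferson", "Roosevelt", "Lincoln"]
eye_colors = ["dark brown", "blue", "hazel", "green"]

def who(assign, value):
    # The suspect to whom `assign` gives `value`.
    return next(s for s in suspects if assign[s] == value)

solutions = []
for m in permutations(masks):
    mask = dict(zip(suspects, m))
    if mask["Dawson"] == "Roosevelt":                      # clue 1
        continue
    for e in permutations(eye_colors):
        eyes = dict(zip(suspects, e))
        for p in permutations([1, 2, 3, 4]):
            lineup = dict(zip(suspects, p))                # left to right
            if lineup["Ambrose"] == 4:                     # clue 2
                continue
            if eyes[who(lineup, 4)] != "hazel":            # clue 4
                continue
            if lineup["Dawson"] + 1 != lineup[who(mask, "Washington")]:
                continue                                   # clue 5
            for q in permutations([1, 2, 3, 4]):
                entry = dict(zip(suspects, q))             # order into the bank
                if (entry["Ballard"] == 1                  # clue 3
                        and entry[who(lineup, 3)] == 2
                        and entry[who(mask, "Jefferson")] == 3
                        and entry[who(eyes, "blue")] == 4
                        # clue 6: the green-eyed robber entered right
                        # after the man who stood second in the line-up
                        and entry[who(eyes, "green")] == entry[who(lineup, 2)] + 1):
                    solutions.append((mask, eyes, entry, lineup))

print(len(solutions))  # 1
```

Running it confirms that the six clues pin down exactly one solution, without giving that solution away here.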



March 27th, 2015 (Permalink)

Conspiracy Theorists and Other Bad Thinkers

…[N]one of us can deny that intellectual vices of one sort or another are at play in at least some of our thinking. Being alive to this possibility is the mark of a healthy mind.

That's philosopher Quassim Cassam from a very interesting article at Aeon magazine applying the theory of intellectual virtues and vices to conspiracy theorists (CTists)―see Source 2, below. If you're not familiar with intellectual virtues and vices, here's an introductory sketch―for a longer, more challenging introduction, see the Resource, below:

The theory of intellectual virtues and vices is an outgrowth of an approach to ethics that goes back as far as Plato and whose most influential exponent was Aristotle. Ethical virtues are such things as courage, honesty, self-control, and so on, while the vices are cowardice, dishonesty, self-indulgence, and the like. The intellectual virtues and vices are special types of the ethical ones, having to do with the formation and maintenance of beliefs, the evaluation of evidence, the willingness to change one's mind, and so on. Such virtues include intellectual humility, curiosity, and open-mindedness; corresponding vices are intellectual arrogance, willful ignorance, and closed-mindedness.

Here's why Cassam thinks that it's worthwhile applying the theory of intellectual virtues to conspiracy theorists:

The problem with conspiracy theorists is not…that they have little relevant information. The key to what they end up believing is how they interpret and respond to the vast quantities of relevant information at their disposal.

You can see in this short quote that Cassam's approach is to reject the idea that CTists are ignorant in favor of the notion that there is something wrong with the way they think. I agree with him that the problem with CTists is usually not a lack of information per se―typically, they have loads of information supporting their pet conspiracy theory (CT), though much of it will be false―but it may be that such false information isn't the kind of relevant information in question. Rather, what the CTists need is the information that would refute their favored CT. In my experience, CTists are frequently ignorant of exactly this kind of information, despite the masses of pseudo-evidence they can spew forth in support of their CT.

Moreover, the evidence refuting most CTs is readily available―"The truth is out there!" to quote an infamous conspiracy-mongering TV show―but CTists don't believe it. The CTist pays close attention to the evidence that supports his favorite CT, but ignores or downplays that which refutes it. He exercises little or no skepticism about evidence favorable to his theory, which is why he ends up believing so many falsehoods. In contrast, his skepticism immediately goes into overdrive when forced to confront counter-evidence, which he simply dismisses.

So, I generally agree with Cassam that CTists are bad thinkers. Nonetheless, such bad intellectual traits can lead to CTists lacking relevant information because they fail to seek it out (a lack of curiosity), fear the consequences to their pet beliefs (intellectual cowardice), and actively resist being educated (willful ignorance). For this reason, simply making correct information available will probably have little effect on CTists, since they are likely to dismiss or downplay it if they come across it. As Cassam puts it:

…[T]here remains the problem of what to do about such people as [the CTist]. If he is genuinely closed-minded then his mind will presumably be closed to the idea that he is closed-minded. Closed-mindedness is one of the toughest intellectual vices to tackle because it is in its nature to be concealed from those who have it. … What if [he] is too far gone and can't change his ways even if he wanted to? Like other bad habits, intellectual bad habits can be too deeply entrenched to change. This means living with their consequences. Trying to reason with people who are obstinately closed-minded, dogmatic or prejudiced is unlikely to be effective.

The philosopher Stephen Law has used the apt phrase "intellectual black hole" to refer to such things as CTs. Once someone has been sucked into the intellectual black hole of a CT, he may be forever unreachable. This is a discouraging fact, but it should also encourage us to do what we can to prevent people from falling into such holes. One thing we can do is to put up intellectual signposts along the way: "Caution: intellectual black hole ahead!", "Dead end", "No exit".

You might wonder what the value is of condemning CTists, pseudoscientists, and others as "bad thinkers", especially if the conclusion follows that there is little if anything we can do to make their thinking better. A negative point is that it may be a waste of time and effort to directly argue with CTists since they are often not receptive to evidence against their pet CTs. A positive point is that if we want to reduce the prevalence of conspiracy theories and pseudoscience, we need to educate people in the intellectual virtues. As Cassam writes: "If we care about the truth then we should care about equipping people with the intellectual means to arrive at the truth and avoid falsehood."


  1. "Make your own Road Construction Sign", Atom Smasher
  2. Quassim Cassam, "Bad Thinkers", Aeon, 3/13/2015

Resource: Rosalind Hursthouse, "Virtue Ethics", Stanford Encyclopedia of Philosophy

March 22nd, 2015 (Permalink)

What's New?

I've added a new contextomy to the "Familiar Contextomies" page―see the Source, below. This is another misleading quote used by 9/11 conspiracy theorists to suggest that something other than an airliner crashed into the Pentagon on September 11th, 2001.

Source: Familiar Contextomies: Danielle O'Brien

March 16th, 2015 (Permalink)

Wikipedia Watch

In previous watches, I've mentioned hoaxes that have been played on Wikipedia―see the list below. Unfortunately, hoaxing Wikipedia is not a new pastime, and some have even done it for college credit! The longest-lived hoax that I was aware of lasted for five years before it was discovered, but now a decade-long hoax has been uncovered―see the Source, below, for the details. This is just the latest record-holder, and it's likely that even longer-lived ones will eventually come to light. This is why I argued, in the most recent watch listed below, that we can't really know how fast hoaxes or other types of misinformation are exposed.

It's hard to imagine something like this happening to the Encyclopaedia Britannica, or to any other traditional encyclopedia for that matter. The fact that anyone can add something to Wikipedia would seem to make hoaxing unavoidable. This is one reason why it's not a trustworthy source of information; at the very least, you need to verify what it says with at least one independent source.

Source: Mason, "Jared Owens, God of Wikipedia", Wikipediocracy, 3/15/2015

Previous Wikipedia Watches: 5/16/2012, 1/9/2013, 12/21/2014

Reader Response (3/23/2015): Pat Heil writes:

Encyclopaedia Britannica (EB) and other "reliable" sources are at the mercy of the tendentious, the poor scholar, and the latter day scholar who accepts as authorities people whose material is out of date. I feel quite sure that editions of EB published between 1912 and 1953 treated Piltdown Man seriously as a forebear of humanity. So all authorities deserve to be questioned, not just Wikipedia.

I expect you're right about the Piltdown hoax, but that's not the sort of hoax that's been played repeatedly on Wikipedia. It is possible that a traditional encyclopedia such as the Britannica might run a hoax article, but it's far less likely to happen than to Wikipedia. I agree that it's a good idea to double-check every source, and to trust none completely, but some are more trustworthy than others. No source―not even Britannica―is 100% reliable, but some are more reliable than others, and many are more reliable than Wikipedia.

March 14th, 2015 (Permalink)

Logical Literacy: "You can't prove a negative."

I have some good news and some bad news. I'll save the good news till later. Here's the bad news: despite what you may have heard, you can too prove a negative!

One of the earliest proofs of a negative is what is sometimes referred to as "Euclid's Second Theorem", which is that there are an infinite number of prime numbers. Why is this a negative? Take a close look at that word "infinite". Do you see the "in" on the front? "In-finite", that is, not finite. Are you still not convinced? Specifically, what Euclid proved is that there is no greatest prime number; that there must be an infinite number of primes follows from this fact, because if there were a finite number of primes then there would be a greatest one.

How did Euclid prove that there is no greatest prime number? Without going into the details, he first assumed that there is a greatest prime, and then showed how to find an even bigger prime. This, of course, is a form of reductio ad absurdum (RAA), in which you prove something false by assuming that it's true and showing that a contradiction follows from that assumption, which is a common form of proof in logic and mathematics.
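Euclid's construction is concrete enough to run. The sketch below (the function name is mine) follows his recipe: given any finite list of primes, the number one more than their product must have a prime factor missing from the list, since dividing by any listed prime leaves remainder 1. If the list really contained every prime up to an alleged greatest, the new prime would have to be larger.

```python
from math import prod

def next_prime_outside(primes):
    # Euclid's recipe: any prime factor of (product of the list) + 1
    # cannot be on the list, since n leaves remainder 1 when divided
    # by each listed prime. Return the smallest such factor.
    n = prod(primes) + 1
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

print(next_prime_outside([2, 3, 5, 7]))  # 211
```

Here 2 × 3 × 5 × 7 + 1 = 211, which happens to be prime itself, so the "list of all primes" [2, 3, 5, 7] is refuted by exhibiting a prime it missed.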

Euclid's Second Theorem is just one of many negative theorems in logic and math. Other famous negative theorems are Fermat's Last Theorem and Gödel's Incompleteness Theorems―note the word "incompleteness": in-completeness, that is, not complete.

Moreover, as philosopher Steven Hales points out―see the Source, below―"you can't prove a negative" is itself a negative statement, so if it's true then you can't prove it! So, how would you know that "you can't prove a negative" is true?

Let's move on to the good news: while it's not true as a general matter that you can't prove a negative, there is something to the idea. However, getting at what that something amounts to is not easy, but that's what I'll try to do in the remainder of this entry.

First of all, I was speaking of "proof" in the strict, logical sense of the word, above. Unfortunately, in this strong sense of the word, the only things you can "prove" are propositions of logic and mathematics. However, there is a weaker, common use of the word "prove" to mean something like "establish beyond a reasonable doubt". This is the sort of "proof" standard used in criminal cases in American courts. In this weaker sense, it's possible to "prove" things that are not part of logic or math, such as that the defendant is guilty. For the rest of this entry, I'll use the word "establish" for this weaker sense of "prove".

So, instead of "you can't prove a negative", I'll be discussing "you can't establish a negative". However, it's still not true that you can't establish a negative. Consider the claim that all swans are white. It's easy to establish the negation of this claim, at least if you live in Australia: just produce a non-white swan, which has in fact been done since there are black swans in Australia.

In order to find any truth in the claim that you can't establish a negative we have to focus upon a specific type of negative statement, namely, a negative existential statement. An existential statement―or, more specifically, an affirmative existential statement―is just a claim that some particular thing or type of thing exists. For instance, "there is a Loch Ness monster" is an affirmative existential statement; "there are bigfeet [bigfoots?]" is another. A negative existential statement is just the negation of an affirmative one, so "there is no Loch Ness monster" and "there are no bigfeet" are both negative existentials.

Affirmative existential statements are comparatively easy to establish; at least, that is, if they are true. Consider the claim: "a black swan exists". To establish this statement, all that is necessary is to produce a black swan, which is easy if you live in Australia. In contrast, the negation of "a black swan exists"―namely, "it is not the case that a black swan exists" or "there are no black swans"―would be much harder to establish even if it were true. To establish that there were no black swans, you would have to examine every swan and see that none is black. While not completely impossible, this is for all practical purposes undoable. Thus, it can be much more difficult to establish a negative existential claim than an affirmative one.
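The asymmetry can be put in computational terms: confirming an affirmative existential is a search that can stop at the first witness, while establishing the negative requires examining every case. A toy sketch with a made-up population of swans:

```python
# Hypothetical population: a million white swans and one black one.
swans = ["white"] * 1_000_000 + ["black"]

# Affirmative existential: one witness suffices, so the search
# can stop as soon as a black swan turns up.
print(any(s == "black" for s in swans))   # True

# Negative existential: "there are no black swans" can only be
# established by checking every swan in the population.
print(all(s != "black" for s in swans))   # False
```

And of course the real difficulty is worse: the population of actual swans isn't a list you can iterate over.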

Thus, it is reasonable to place the burden of proof on those who make affirmative existential claims rather than on those who deny them. This is one reason why some skeptics have made the "you can't prove a negative" claim: the burden of proof is not on the skeptic to disprove the existence of the Loch Ness monster, bigfeet, flying saucers, etc., but on those who claim they exist.

To sum up, you can prove a negative, but it's much harder to establish a negative existential statement than an affirmative one. As a result, the burden of proof is on those who make affirmative existential claims rather than on those who deny them.



March 3rd, 2015 (Permalink)

Sobriety Check, Part 2

In part one, we saw that the statistical claim that underage drinkers spent $22.5 billion on alcoholic beverages in the United States in 2001 was implausible―see the Resource, below. For this claim to be true, underage drinkers would have had to spend an average of over $600 apiece. Though this is an implausibly high amount, it isn't impossible.

To perform this statistical check, all that we needed was the information included in a short New York Times article that reported the claim―see Source 2, below―as well as the statistical benchmark that approximately four million babies are born each year in the U.S. Unfortunately, the information contained in the Times report, together with statistical benchmarks, does not appear to be sufficient to show that the original paper that reported this statistic must be in error. In this sequel, we will turn to the paper itself―see Source 1, below―and use a different technique to check it.
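The benchmark arithmetic from part one can be sketched in a few lines (a sketch using the approximate four-million-births benchmark; the division is over everyone aged twelve to twenty, drinker or not):

```python
# Benchmark check of the $22.5 billion claim from part one.
births_per_year = 4_000_000              # approximate U.S. births per year
cohorts = 20 - 12 + 1                    # ages 12 through 20: nine cohorts
underage_population = births_per_year * cohorts   # roughly 36 million people

claimed_spending = 22.5e9                # the 2001 figure under scrutiny
per_person = claimed_spending / underage_population
print(round(per_person))                 # prints 625: over $600 apiece
```

That is where the "over $600 apiece" figure comes from: even spreading the claimed total over every person in the age group, drinker or not, yields about $625 each.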

As a general matter, not every dubious statistic that you come across in the news media can be detected by the use of statistical benchmarks. Sometimes the needed benchmarks won't be available, or a questionable number may survive a benchmark test. Luckily, there's an alternative test that sometimes works when the benchmark test fails.

Turning now to the paper itself, here's how the researchers arrived at the estimate of the amount spent by underage drinkers on alcoholic beverages:

  1. They used census data from 2000 to estimate the number of people between the ages of 12 and 20. Nowhere do they actually provide this figure, as far as I can tell, but based on the information they do give I estimate it as a conservative 40 million. See the Technical Appendix, below, if you want to know exactly how I arrived at this estimate.
  2. Using survey data, they estimated the proportion of those in this age group who are drinkers, and thus underage drinkers. Again, the precise number is not given, but based on the data I estimate it as approximately 19 million. Again, see the Technical Appendix, below, for the details of this estimate.
  3. Again using survey data, they estimated the mean number of drinks by underage drinkers in a month: 35.2.
  4. From the two previous numbers, they calculated the total number of drinks taken by underage drinkers in 2001, which they give as just over 20 billion.
  5. From this last number, together with data about the average price of alcoholic beverages, they estimated the amount of money spent on underage drinking.

Now, as I suggested above, it's unlikely that you can use benchmarks to check these statistics, but there's another way you can do it. You can check these numbers using only the information above. Moreover, you won't need any sophisticated math, though the use of a calculator would make the calculations less tedious. When you've done so, click on "Sobriety Check", below, to see the results of one such check.

Sobriety Check
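For the record, the five steps above can be replicated in a few lines (a sketch using my estimates of 40 million and 19 million, which are not figures given by the paper itself):

```python
# Replicating steps 1-5 with the estimates given above; the 40-million and
# 19-million figures are this entry's estimates, not the paper's own numbers.
age_group = 40_000_000       # step 1: people aged 12-20, from 2000 census data
drinkers = 19_000_000        # step 2: estimated underage drinkers
drinks_per_month = 35.2      # step 3: mean monthly drinks per underage drinker

# Step 4: total drinks taken by underage drinkers in a year.
total_drinks = drinkers * drinks_per_month * 12
print(f"{total_drinks:,.0f}")    # prints 8,025,600,000

# The paper gives the yearly total as just over 20 billion drinks, far more
# than its own per-drinker figures appear to support.
print(total_drinks < 20e9)       # prints True
```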

Sources:
  1. Susan E. Foster, Roger D. Vaughan, William H. Foster, Joseph A. Califano Jr., "Estimate of the Commercial Value of Underage Drinking and Adult Abusive and Dependent Drinking to the Alcohol Industry", Archives of Pediatrics & Adolescent Medicine, 5/2006
  2. Eric Nagourney, "Addiction: Sales Estimates Paint Portraits of Alcohol Abusers", The New York Times, 5/2/2006

Resource: Sobriety Check, 2/24/2015

Previous Entry
