Probabilistic Fallacy
Taxonomy: Logical Fallacy > Formal Fallacy > Probabilistic Fallacy
Subfallacies:
- The Base Rate Fallacy
- The Conjunction Fallacy
- The Gambler's Fallacy
- The Hot Hand Fallacy
- The Multiple Comparisons Fallacy
Exposition:
A probabilistic argument is one which concludes that something has some probability based upon information about probabilities given in its premisses. Such an argument is invalid when the inference from the premisses to the conclusion violates the laws of probability. Probabilistic fallacies are formal ones because they involve reasoning which violates the formal rules of probability theory. Thus, understanding probabilistic fallacies requires a knowledge of probability theory.
A Short Introduction to Probability Theory:
In the following laws of probability, the probability of a proposition, s, is represented as: P(s).
1. P(s) ≥ 0.
The probability of a proposition is a real number greater than or equal to 0. In other words, zero is the lowest probability, and there are no negative probabilities. It follows from axioms 2 and 3, below, that P(s) ≤ 1: since "s or not-s" is a tautology, P(s or not-s) = 1 by axiom 2, and since s and not-s are contraries, P(s) + P(not-s) = 1 by axiom 3; as P(not-s) ≥ 0, P(s) cannot exceed 1. Thus, a probability is a real number between 0 and 1, inclusive.
2. P(t) = 1, if t is a tautology.
The probability of any tautology is equal to 1. This is because a tautology is necessarily true, and 1 is the value in probability theory that represents truth.[1]
3. If s and t are contrary propositions, then P(s or t) = P(s) + P(t).[2]
The probability of a disjunction of contrary propositions is equal to the sum of the probabilities of its disjuncts, where a "disjunction" is a proposition of the "s or t" form, s and t are its "disjuncts", and propositions are "contrary" when they cannot both be true.[3]
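These first three axioms can be illustrated numerically. Here is a minimal sketch in Python, using a single roll of a fair die as the sample space; the helper function p is purely illustrative:

```python
from fractions import Fraction

# Sample space: one roll of a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]

def p(event):
    """Probability of an event, given as a predicate over outcomes."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

# Axiom 1: no probability is negative.
assert p(lambda o: o == 1) >= 0

# Axiom 2: a tautology ("the roll is between 1 and 6") has probability 1.
assert p(lambda o: 1 <= o <= 6) == 1

# Axiom 3: "the roll is 1" and "the roll is 2" are contraries, so the
# probability of their disjunction is the sum of their probabilities:
# 1/6 + 1/6 = 1/3.
assert p(lambda o: o in (1, 2)) == p(lambda o: o == 1) + p(lambda o: o == 2)
```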
A conditional probability is the probability of a proposition on the condition that some proposition is true. For instance, the probability of getting lung cancer is an unconditional probability, whereas the probability of getting lung cancer given that you smoke cigarettes is a conditional probability, as is the probability of getting lung cancer if you don't smoke. Each of these probabilities is distinct: The probability of getting lung cancer if you smoke is higher than the unconditional probability of getting lung cancer, which is higher than the probability of getting lung cancer if you don't smoke. The conditional probability of s given t is represented as: P(s | t).
There is one final axiom that governs conditional probabilities:
4. P(s | t) = P(s & t)/P(t), if P(t) ≠ 0.[4]
The conditional probability of s given t is equal to the probability of the conjunction of s and t divided by the probability of t. Here, P(s & t) is the probability that the conjunction of s and t is true, where a "conjunction" is a proposition of the "s and t" form.
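To make axiom 4 concrete, here is a small sketch continuing the dice illustration above; the two propositions chosen are arbitrary examples:

```python
from fractions import Fraction
from itertools import product

# Sample space: all 36 equally likely rolls of two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def p(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def p_given(s, t):
    """Conditional probability P(s | t) = P(s & t) / P(t), per axiom 4."""
    return p(lambda o: s(o) and t(o)) / p(t)

doubles = lambda o: o[0] == o[1]         # s: the two dice match
total_four = lambda o: o[0] + o[1] == 4  # t: the faces sum to 4

print(p(doubles))                    # unconditional: 6/36 = 1/6
print(p_given(doubles, total_four))  # conditional: (1/36) / (3/36) = 1/3
```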
The above laws are logically sufficient to prove every fact within probability theory, including a theorem that is important for explaining probabilistic fallacies:
Bayes' Theorem: P(s | t) = P(t | s)P(s) / [P(t | s)P(s) + P(t | not-s)P(not-s)].[5]
Proof: From axiom 4, we know that P(s | t) = P(s & t)/P(t). Since "s & t" is logically equivalent to "t & s", P(s & t) = P(t | s)P(s), again by axiom 4; this gives the numerator of the fraction in Bayes' Theorem.[6] To get the denominator, note that "t" is logically equivalent to "(t & s) or (t & not-s)", so P(t) = P[(t & s) or (t & not-s)]. Since "(t & s)" and "(t & not-s)" are contraries, it follows by axiom 3 that P[(t & s) or (t & not-s)] = P(t & s) + P(t & not-s). By applying axiom 4 once more to each term, we have P(t) = P(t | s)P(s) + P(t | not-s)P(not-s), which is the denominator.
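The theorem can be turned directly into a short computation. The sketch below uses made-up numbers for a hypothetical screening test, chosen only to illustrate the formula; it also shows the sort of result that the base rate fallacy, listed above, overlooks:

```python
def bayes(p_t_given_s, p_t_given_not_s, p_s):
    """Bayes' Theorem in the form proved above:
    P(s | t) = P(t | s)P(s) / [P(t | s)P(s) + P(t | not-s)P(not-s)]."""
    p_not_s = 1 - p_s
    numerator = p_t_given_s * p_s
    denominator = numerator + p_t_given_not_s * p_not_s
    return numerator / denominator

# Hypothetical numbers, for illustration only:
# s = "the patient has the disease", t = "the test is positive".
# P(s) = 0.01, P(t | s) = 0.95, P(t | not-s) = 0.05.
posterior = bayes(p_t_given_s=0.95, p_t_given_not_s=0.05, p_s=0.01)
print(round(posterior, 3))  # ≈ 0.161: despite the positive test, the
# probability of disease stays far below 0.95, because of the low base rate.
```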
Exposure:
Mistakes in reasoning about probabilities are typically not treated as formal fallacies by logicians. This is presumably because logicians usually do not make a study of probability theory, while the mathematicians who do generally do not study logical fallacies. However, in recent decades, psychologists have discovered through observation and experiment that people are prone to certain types of error when reasoning about probabilities. As a consequence, there is now much more empirical evidence for the existence of certain fallacies about probabilities than there is for most traditional fallacies. Again, logicians are often unaware of this evidence, and they usually do not discuss it in works on logical fallacies. It is about time that logicians broadened their intellectual horizons and began to take note of discoveries in the psychology of reasoning.[7]
Resource: Amir D. Aczel, Chance: A Guide to Gambling, Love, the Stock Market, & Just About Everything Else (2004). About as untechnical an introduction to probability theory as you will find.
Notes:
1. As you might suspect, 0 represents falsehood, so the probability of a contradiction is equal to 0, as can easily be proven from this axiom taken together with the next one.
2. This is sometimes called "the addition law" or "the addition rule".
3. It follows from the axioms that for any two propositions, s and t, contrary or not: P(s or t) = P(s) + P(t) - P(s & t). (This is checked numerically in the sketch following these notes.)
4. Equivalently, P(s & t) = P(s | t)P(t). This is often called "the multiplication rule" or "the multiplication law". If s and t are probabilistically independent (that is, if P(s | t) = P(s) and P(t | s) = P(t)), then the rule simplifies to: P(s & t) = P(s)P(t). (Also checked in the sketch following these notes.)
5. There are several forms of Bayes' Theorem; this is not the simplest form (see the next note), but it is the most useful one for my purposes.
6. At this point in the proof, we have proved that P(s | t) = P(t | s)P(s)/P(t), which is the simplest version of Bayes' Theorem. The rest of the proof is devoted to showing that P(t) = P(t | s)P(s) + P(t | not-s)P(not-s).
7. The locus classicus is: Daniel Kahneman, Paul Slovic & Amos Tversky, editors, Judgment Under Uncertainty: Heuristics and Biases (1985).
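The formulas in notes 3 and 4 can be checked numerically. A minimal sketch, again using two fair dice as an illustrative sample space:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))

def p(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

s = lambda o: o[0] == 6            # s: the first die shows 6
t = lambda o: o[1] == 6            # t: the second die shows 6
s_and_t = lambda o: s(o) and t(o)
s_or_t = lambda o: s(o) or t(o)

# Note 3: the general addition rule, for propositions contrary or not.
assert p(s_or_t) == p(s) + p(t) - p(s_and_t)  # 11/36 = 6/36 + 6/36 - 1/36

# Note 4: s and t are independent here, so P(s & t) = P(s)P(t).
assert p(s_and_t) == p(s) * p(t)              # 1/36 = (1/6)(1/6)
```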
Acknowledgment: Thanks to Emil William Kirkegaard for pointing out a problem, which has subsequently been fixed, with the wording of the informal description of the first axiom of probability.