
WEBLOG

October 1st, 2025 (Permalink)

Turn Down the Heat

Yuval Levin, "Have an Argument", The Free Press, 9/14/2025

… Even most politically engaged people don't actually spend much time in active disagreement with people who have different views. We spend most of our time cocooned away with people we agree with, talking about those terrible people on the other side, but rarely actually talk to those people.

This feeds the common misimpression that disagreement is a mark of civic failure, and that the very existence of people who don't share our goals and priorities is a problem to be solved. The distinctly 21st-century institutions of our civic life―not only social media but the polarized political press, the one-party university, the one-party church, and an increasingly performative political culture―are all grounded in that misimpression. They are built to let us avoid exposure to conflicting views. …

This is a perverse distortion of the American political tradition. Our Constitution is premised on the assumption that our neighbors aren't always going to share our views, and that dealing with each other through those differences is what politics is for.

"As long as the reason of man continues fallible, and he is at liberty to exercise it, different opinions will be formed," as James Madison bluntly put it.* … The older, more traditional institutions of our politics exist to facilitate disagreement in that light. Legislatures and courtrooms are places to argue with each other. So are universities, properly understood, and newspaper opinion pages. The forms and rules of those institutions are designed to make the arguments that happen there constructive.

The same cannot be said of our digital partisan cocoons. They are not there to facilitate disagreement but to facilitate division. They separate us into distinct subcultures which they then do their best to keep from mixing. They want us surrounded by people we agree with, but obsessed with people we disagree with.

… Politically active people are at war with caricatures of their opponents, but they are not forced to actually confront those opponents as human beings with priorities of their own, or to acknowledge the possibility that what the two sides want might be the starting point for a negotiation toward an outcome they could both tolerate.

And the cultural gravity of these technologies is remaking our traditional civic spaces in their image. The culture of Congress, and of many college campuses, increasingly resembles that of social media. It fosters not disagreement (which inevitably involves mixing with the other side) but division. …

But above all, lowering the temperature will require us to recognize that the people we disagree with are not the problem to be solved. … Our politics does not consist of friends and enemies. It consists of fellow citizens who share a future in common and disagree about how best to shape that future. Those disagreements are serious. But no resolution to them could be absolute or permanent. Our political adversaries will still be here tomorrow; they will be part of any future we build. Any politics not premised in that reality will be dangerously delusional and can only point us down.

The American political system is firmly rooted in that reality. It advances the counterintuitive notion that we can turn down the temperature of our politics by disagreeing with each other more directly and concretely. Its forms, and its history, can teach us how.


* James Madison, "The Federalist Papers 10", Bill of Rights Institute (1787)


Disclaimer: I don't necessarily agree with everything in this article, but I think it's worth reading in its entirety.


September 29th, 2025 (Permalink)

The Zebra Fallacy?

There's an old saying that when you hear hoofbeats, you should expect a horse, not a zebra1. This aphorism appears to have come from the context of disease diagnosis, where it draws an analogy between diagnosing a disease and inferring what kind of hoofed animal made the sounds you hear: the symptoms of the disease are likened to the sounds, and the disease itself to the kind of animal making them. The moral to be drawn is that when the same symptoms―hoofbeats―could be explained by more than one disease―a horse or a zebra―the diagnostician should diagnose the more common and familiar disease―the horse. Here is how a textbook on pediatric health care explains it:

The analogy of hearing hoofbeats and looking for a zebra versus a horse holds true for all medical signs and symptoms. Both zebras and horses may cause similar-sounding hoofbeats, but a look out the window is more likely to reveal a horse than a zebra.2

This maxim probably originated here in North America, since it wouldn't make sense in Africa, where the zebra may be more common than the horse and doctors should therefore expect a zebra. Horses, in contrast, are common in North America, while zebras are found only in zoos; so, if you hear hoofbeats here, it could be a zebra escaped from a zoo, but it's far more likely to be a horse.

Now, the maxim is not just for medical diagnosis and hoofbeat identification, but can be applied to many other situations. For instance, suppose that you get a quick glimpse of a large woodpecker in the woods of Arkansas: is it a pileated woodpecker or an ivory-billed woodpecker? The pileated is common in forested areas of the eastern part of the continent3, whereas the ivory-billed is probably extinct4. While it's not impossible that you saw an ivory-billed, it's far more likely that you saw the similar-looking pileated.

The zebra maxim instructs us to expect the more common cause when the evidence does not favor the less common one, but what happens when we violate it? I suggest naming the error of violating the maxim "the zebra fallacy" after this adage. This name should help to remind us of the nature of the mistake, namely, adopting a less probable hypothesis to explain evidence that can be explained equally well by a more probable one.

The maxim is not just a rule of thumb, but an application of a theorem of the calculus of probabilities―see the Technical Appendix, below. In English, the theorem says that if two hypotheses explain the same evidence equally well, then that evidence has no effect on their relative probabilities, that is, the more probable hypothesis remains more probable.
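
To make the theorem concrete, here is a minimal numerical sketch in Python; the priors and likelihoods are invented for illustration, not drawn from any real data, and the two hypotheses are treated as exhaustive:

  from fractions import Fraction as F

  # Invented priors: horses are assumed far more common here than zebras.
  prior = {"horse": F(99, 100), "zebra": F(1, 100)}

  # The hoofbeat evidence is explained equally well by either hypothesis.
  likelihood = {"horse": F(9, 10), "zebra": F(9, 10)}

  # Total probability of the evidence.
  p_e = sum(prior[h] * likelihood[h] for h in prior)

  # Posteriors by Bayes' Theorem: P(h|e) = P(h)P(e|h)/P(e).
  posterior = {h: prior[h] * likelihood[h] / p_e for h in prior}

  print(posterior)  # The horse hypothesis remains more probable than the zebra.

Since both likelihoods are equal, each prior gets multiplied by the same factor, so the evidence leaves the ordering of the hypotheses untouched.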

Pseudoscience and conspiracy theories often commit the zebra fallacy, since they tend to select the less likely hypothesis to explain a phenomenon. For instance, consider the notorious case of the Cottingley fairies: I won't rehearse the historical details since I've done so elsewhere5, but the two competing hypotheses were, first, that two young girls repeatedly photographed fairies in their garden that no one else saw and, second, that the girls had faked the photos and lied about it. Both hypotheses explain the evidence6, and the second is far more likely a priori than the first.

I'm seriously considering adding an entry for this fallacy to the files and Taxonomy, but I'd like to have at least a few explicit examples in addition to the woodpeckers and the fairies before I do so.


Technical Appendix: For those familiar with probability theory, here's the theorem underlying and supporting the zebra maxim, together with a proof given in the axiom system for logical probability calculus7.

The Zebra Theorem: If P(h1) > P(h2) & P(e|h1) = P(e|h2) & P(e) > 0, then P(h1|e) > P(h2|e)8.

Proof: Assume the hypothesis of the theorem. We need to show that P(h1|e) > P(h2|e).

P(h1|e) = P(h1)P(e|h1)/P(e) (by Bayes' Theorem) = P(h1)P(e|h2)/P(e) (by hypothesis) = P(h1) × P(e|h2)/P(e) (by algebra).

P(h2|e) = P(h2)P(e|h2)/P(e) (by Bayes' Theorem) = P(h2) × P(e|h2)/P(e) (by algebra).

Let c = P(e|h2)/P(e); then we have shown that P(h1|e) = P(h1)c and P(h2|e) = P(h2)c.

∴ P(h1|e) > P(h2|e), by hypothesis and the following fact about inequalities9:
If a > b then ac > bc, where all of a, b, and c > 0.⊣

Note that the appeal to this fact assumes that P(h2) and c are both positive: the former holds for any live hypothesis, and the latter holds so long as h2 explains the evidence at all, that is, so long as P(e|h2) > 0.

This theorem can be easily generalized to any finite number of competing hypotheses that explain the evidence equally well by applying it pairwise to every pair of hypotheses. It also can be generalized to any relationship of equality or inequality between the hypotheses. Such a generalized theorem says that evidence that is equally well explained by competing hypotheses does not change the relationship between those hypotheses' probabilities, whether of equality or inequality.
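
The generalized claim can be checked the same way: each posterior is just the corresponding prior multiplied by the same constant c = P(e|h)/P(e). Here is a small sketch, again with invented numbers (the common likelihood and P(e) are simply assumed):

  from fractions import Fraction as F

  # Invented priors for four competing hypotheses (not assumed exhaustive).
  priors = [F(1, 2), F(1, 5), F(1, 5), F(1, 100)]

  likelihood = F(3, 4)   # assumed common value of P(e|h) for every hypothesis
  p_e = F(4, 5)          # assumed positive probability of the evidence

  c = likelihood / p_e
  posteriors = [p * c for p in priors]

  # Every prior is scaled by the same positive constant, so equalities and
  # inequalities among the priors carry over unchanged to the posteriors.
  print(posteriors)

In this example, the two equal priors stay equal and the largest prior stays largest, illustrating both halves of the generalized claim.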


Notes:

  1. For a thorough discussion of what is known about the origin of the saying, see: "Quote Origin: When You Hear Hoofbeats Look for Horses Not Zebras", Quote Investigator, 11/26/2017. To make a long story short: no one knows for sure who created it.
  2. Catherine DeAngelis, Basic Pediatrics for the Primary Health Care Provider (1975), p. 172. Found via Google Books.
  3. "Pileated Woodpecker", All About Birds, accessed: 9/28/2025.
  4. "Ivory-billed Woodpecker", All About Birds, accessed: 9/28/2025.
  5. See: Fairy Tale, 2/6/2013.
  6. Actually, I'm giving the fairy hypothesis more credit than it deserves since the photos look fake, which means that the evidence actually supports the more likely hypothesis.
  7. For the axioms, see: Probabilistic Fallacy.
  8. In this theorem, h1 and h2 are the two hypotheses being compared, P(h1) and P(h2) represent the "prior" probabilities of the hypotheses, and the first conjunct of the antecedent of the theorem tells us that the prior probability of h1 is greater than that of h2. "e" stands for the evidence, and P(e|h) is the probability of the evidence given the hypothesis h, which represents the degree to which h explains e. So, the second conjunct of the antecedent tells us that h1 and h2 explain e equally well. The third conjunct of the antecedent is included simply because we will need in the proof to divide by the probability of the evidence, P(e), so it can't be zero. Moreover, if the probability of e were zero then it would be false and, thus, worthless as evidence of anything. Finally, the consequent says that the posterior probability of h1 given the evidence is greater than that of h2.
  9. Paul Sanders, Elementary Mathematics: A Logical Approach (1963), p. 100, Postulate XIb.

September 17th, 2025 (Permalink)

Lesson in Logic 22: Polysyllogisms and Euler Diagrams*

As discussed in lesson 20, the technique of turning a polysyllogism into a chain of categorical syllogisms can show the validity of an argument that a single standard Venn diagram could not handle. However, the circles of Leonhard Euler, introduced in the previous lesson, can be used to show validity in a single diagram. To see how this method works, let's apply it to the polysyllogism used as an example in lesson 19:

  1. All sapsuckers are woodpeckers.
  2. All woodpeckers are birds.
  3. All birds are animals.
  4. Therefore, all sapsuckers are animals.

There are three premisses, all of which are A-type statements, that need to be represented in our diagram. Recall from the previous lesson that Euler represented such statements by drawing a circle for the subject class inside a circle for the predicate class. In this case, it doesn't matter which premiss you start with, so let's begin at the beginning with the first premiss. "Sapsuckers" is the subject class and "woodpeckers" is the predicate class, so we represent the first premiss as shown above.

Turning now to the second premiss, remember that in diagramming arguments the premisses are all represented on a single diagram, whether Venn or Euler. So, we need to show on the same diagram that the class of woodpeckers is contained within the class of birds. Since we already have a circle for woodpeckers, all that we need is a new circle for birds, and the former should be inside the latter as shown above.

Notice that the second diagram shows that the class of sapsuckers is a subclass of the class of birds―in other words, all sapsuckers are birds―which was the intermediate conclusion in the chain argument given to show this polysyllogism valid in lesson 19. The final step is to diagram the third and last premiss, which means placing the "Birds" circle within a circle representing all animals, as shown above.
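
If it helps, the same containment reasoning can be mirrored with sets, where "one circle inside another" becomes the subset relation. Here is a toy sketch in Python; the particular members are made up merely to populate the classes, and it only checks this one toy model, whereas the diagram shows that any classes so related must satisfy the conclusion:

  # Made-up finite stand-ins for the four classes.
  sapsuckers = {"yellow-bellied sapsucker"}
  woodpeckers = sapsuckers | {"pileated woodpecker"}
  birds = woodpeckers | {"robin"}
  animals = birds | {"horse"}

  # Each A-type premiss corresponds to a subset relation (circle inside circle).
  assert sapsuckers <= woodpeckers   # All sapsuckers are woodpeckers.
  assert woodpeckers <= birds        # All woodpeckers are birds.
  assert birds <= animals            # All birds are animals.

  # Subset relations chain together, just as the nested circles do.
  assert sapsuckers <= animals       # Therefore, all sapsuckers are animals.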

The finished diagram clearly shows the logical relationships between the four classes, and you can see that, given the premisses, the conclusion must be true; therefore, the argument is valid. In my opinion, this is far easier and more perspicuous than the chain argument of lesson 19. However, that's just one example, so let's look at another example from that lesson, this one including an E-type statement:

  1. All flickers are woodpeckers.
  2. No birds are mammals.
  3. All woodpeckers are birds.
  4. Therefore, no flickers are mammals.

Since we already know how to diagram A statements, let's consider premisses 1 and 3 first. The result of diagramming both will look like the second diagram above but with "flickers" in place of "sapsuckers". Now, to diagram the second premiss, we must add a circle representing mammals that is disjoint from the circle for birds; the result looks as shown. Again, you can see from the diagram that the classes of flickers and mammals are disjoint―in other words, given the premisses, no flickers can be mammals―and, thus, that the argument is valid.
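
The E-type premiss can be mirrored in the same set-based way, with disjointness in place of containment; again, the members below are invented stand-ins for the classes:

  # Made-up finite stand-ins for the classes in the second example.
  flickers = {"northern flicker"}
  woodpeckers = flickers | {"pileated woodpecker"}
  birds = woodpeckers | {"robin"}
  mammals = {"horse", "zebra"}

  assert flickers <= woodpeckers       # All flickers are woodpeckers.
  assert birds.isdisjoint(mammals)     # No birds are mammals.
  assert woodpeckers <= birds          # All woodpeckers are birds.

  # A subclass of a class that is disjoint from mammals is itself disjoint.
  assert flickers.isdisjoint(mammals)  # Therefore, no flickers are mammals.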

As I mentioned in the previous lesson, Euler's diagrams for particular statements―that is, I- and O-type statements―are what led to Venn's different approach to using circles to represent classes. In the next lesson, we'll see how to combine Venn's technique with Euler's to diagram polysyllogisms with particular premisses. In the meantime, here's a polysyllogism to practice diagramming:

Exercise: Use an Euler diagram to show the following polysyllogism valid:

  1. All rimshaks are sleestaks.
  2. No sleestaks are triktraks.
  3. All flipjaks are triktraks.
  4. Therefore, no flipjaks are rimshaks.

* For previous lessons in this series, see the navigation panel to your right.


Puzzle
September 6th, 2025 (Permalink)

Crack the Combination XI*

The combination of a lock is four digits long and each digit is unique, that is, each occurs only once in the combination. The following are some incorrect combinations.

  1. 8 1 7 4: One digit is correct but is in the wrong position.
  2. 5 4 9 6: No digits are correct.
  3. 7 4 3 1: Two digits are correct but neither is in the right position.
  4. 4 1 9 0: One digit is correct and in the right position.

Can you determine the correct combination from the above clues?
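
For those who would rather let a computer do the drudgery, here's a small brute-force sketch in Python: it tries every combination of four distinct digits and keeps those consistent with all four clues. (Running it will, of course, spoil the puzzle.)

  from itertools import permutations

  # Each clue pairs a guess with how many of its digits are in the combination
  # and how many of those are also in the correct position.
  def fits(combo, guess, correct, in_place):
      shared = len(set(combo) & set(guess))
      placed = sum(c == g for c, g in zip(combo, guess))
      return shared == correct and placed == in_place

  clues = [
      ((8, 1, 7, 4), 1, 0),  # one digit correct, wrong position
      ((5, 4, 9, 6), 0, 0),  # no digits correct
      ((7, 4, 3, 1), 2, 0),  # two digits correct, neither in position
      ((4, 1, 9, 0), 1, 1),  # one digit correct, in the right position
  ]

  for combo in permutations(range(10), 4):
      if all(fits(combo, g, c, p) for g, c, p in clues):
          print(combo)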


* Previous "Crack the Combination" puzzles: I, II, III, IV, V, VI, VII, VIII, IX, X


September 4th, 2025 (Permalink)

What's New?

The Fallacy Files Taxonomy of logical fallacies is what's new―that is, there's a brand new version of it: just click on "Taxonomy" to your upper right. In case you're interested, the old versions are still available from the following page, where you can also read about how you might make use of the taxonomy: The History of the Taxonomy. Check it out!


Recommended Reading
September 1st, 2025 (Permalink)

To err is human but to really foul things up requires artificial intelligence



Disclaimer: I don't necessarily agree with everything in the above articles, but I think they are worth reading. I have sometimes suppressed paragraphing or rearranged the paragraphs in the excerpts to make a point.

