How We Decide: the Processes Behind Decision Making

This is part 2 of a 3-part guest post by long-time commenter Richard S. Russell. Read part 1 here.
All these inputs—definitions, axioms, assumptions, genetics, sensations, memories, and testimony—get fed into the hopper at one end of our factory. Then they pass through one of eight gates to undergo some kind of process that produces decisions at the other end. This brings us to the meat of the matter, the processes in the middle. There are eight of them. I’ll discuss them in order from most to least reliable.
Processes
1. Logic. Formal logic dates back to ancient Greece and is based on the concept of the syllogism. A syllogism has two explicitly stated premises (or “if” statements) which are combined using rules of logic to produce a conclusion (a “then” statement).
Example #1: If all men are fallible, and if Aristotle is a man, then Aristotle is fallible.
Logic tells us absolutely nothing about the truth of the two premises. For example, consider:
Example #2: If all squares are round, and if this triangle is a square, then this triangle is round.
Example #2 follows exactly the same structure as Example #1 and thus leads to a conclusion which is equally valid (that is, reliable according to the rules of logic). The conclusion is, of course, ludicrous, but that’s because the premises are ludicrous. Logic will yield true conclusions from true premises, but it does not and cannot make any a priori statements about whether the premises are true.
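To see why validity is about form rather than content, here is a minimal sketch in Python: it checks the syllogism’s bare form against every possible truth assignment, which is all that “valid” means. Whether the premises are actually true is a separate question that logic alone can’t answer.

```python
from itertools import product

def implies(p, q):
    """Material implication: 'if p then q' fails only when p is true and q is false."""
    return (not p) or q

# The syllogism's form, modeled for a single individual x:
# from "all A are B" (here, A implies B) and "x is A", conclude "x is B".
# Check that the form holds under every possible truth assignment.
valid = all(
    implies(implies(a, b) and a, b)   # (premise 1 and premise 2) implies conclusion
    for a, b in product([True, False], repeat=2)
)
print(valid)  # True: the form is valid whether the premises concern men or squares
```

Feed it the premises about Aristotle or the ones about round squares; the machinery is identical, and ludicrous premises still yield a validly derived (but ludicrous) conclusion.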
But, as processes go (and it’s processes we’re considering here—assumptions were covered when we looked at inputs), logic is absolutely cracker-jack—the most robust and reliable process of them all.
2. Reason. This is the standard used in law: “Under the circumstances, what would a reasonable person do?” It’s similar to logic but is heavily larded with considerations of the practical. You take a “reasonable” amount of time to gather evidence from “reasonable” sources, then use logic or a “reasonable” facsimile thereof to arrive at “reasonable” conclusions. You don’t spend an undue amount of effort on minutiae, because in real life there are always other demands on your time. A crucial element in reason is the acceptance of your own fallibility. You should admit to the possibility of error and keep an open mind toward new evidence. Having an open mind, however, does not mean you need a hole in your head. If something clearly makes no sense, you should say so and waste no more time on it.
3. Confidence. You are confident of something on the basis of its track record or the history of similar situations, without taking the time to examine the particulars of the current situation. “My car started last winter when it was 15° below zero, and I’m confident it will do so again this year.” We overtly recognize that there are degrees of confidence: “My car started easily enough last winter when it was 15° below zero, so I’m pretty sure it will do so now that it’s 30° below.”
4. Trust. This is similar to confidence, except that the way the language has evolved, we usually express confidence in things and trust in people. Additionally, because we’re social creatures and want to like other people and have them like us, we sometimes trust people who don’t necessarily have a track record of good decisions. Little children usually give parents a free ride when it comes to trust, because they’re in no position to know any better. Adults tend to trust authority figures more than is warranted. And so on. Thus I rate it lower on the scale of reliability than confidence.
5. Chance. The roll of the dice, the flip of the coin, the turn of the card—seems like a hell of a way to make up your mind, doesn’t it? However, I advance the premise that, in real-world terms, it’s not so bad. You usually don’t resort to chance as a decision-making technique unless you’ve reduced your alternatives down to two or three options, no one of which is clearly better than any of the others. In short, your time might well be more efficiently spent making some sort of decision than in further dithering over exactly which one will give you that extra one percent of benefit. At least the concept of chance implies a choice between alternatives, which means you haven’t just blindly accepted the first idea that popped into your head. And, of course, chance has a real role to play in experimentation, where you want to be sure that the choice of, say, which patients get real drugs and which get placebos isn’t governed by hidden assumptions or prejudices.
6. Obedience. This is like a corrupted form of trust. The implication here is that you are doing what someone else tells you to, but for reasons of pressure, coercion, immaturity, feelings of inadequacy, low self-esteem, threat, etc., rather than as a free-will decision. If you choose not to obey, you may be opening yourself up to consequences worse than any bad decision which would result from obedience. In a perverse way, this means that obedience may actually produce decisions which are good from a holistic viewpoint, since giving up your lunch money (normally a bad decision) may mean that you won’t get the crap beaten out of you. Still, you may recall Lord Acton’s observation that “power tends to corrupt, and absolute power corrupts absolutely.” Over the long haul, giving any other person this kind of power over you is likely to lead to decisions being made on your behalf which are progressively worse for you. If you can get away with it, question authority.
7. Hope. We now start getting pretty tenuous in terms of whether these processes are at the level of conscious thought. If I were to say “My car started last winter when it was 15° below zero, and I think it will do so now that it’s 60° below,” I’ve passed out of the realm of confidence and into the domain of wishful thinking, or, in a word, hope. Pop psychology has a different buzzword for it: “denial.” Another phrase is “magical thinking.” (Rationalist: When you hear hoofbeats, don’t think zebras, think horses. Fantasist: When you hear hoofbeats, don’t think horses, think unicorns.) These are used in decision-making when the evidence points toward a conclusion that we really don’t want, so we base our decisions—for no particularly good reason—on the hope that what we do want will come true.
8. Faith. Last and least we come to faith, the decision-making tool of last resort (and thus the one most favored by the priesthood). Faith is when you want to believe something but there’s not a shred of evidence for it and quite often lots of evidence against it. Whenever you have evidence to support a conclusion, you’d use one of the seven previous decision-making methods and would have no hesitation in saying so. Faith gets hauled out only to support conclusions for which there is no reason to believe in their truth, validity, efficacy, or efficiency.
Indeed, faith is listed among these other processes only in a kind of honorary fashion, because arguably there’s no “process” involved at all: the assumptions at the input end (like “God exists”) essentially go straight thru, unmodified, and come out the other end looking not a whole lot different than when they went in (kind of like creamed corn when you’ve got the flu).
That final process, faith, is examined in detail in the final part of this essay.

They say, “In the end, everything will make sense.”
They never seem to mention that it could start making way more sense now,
if you would just question some of your fundamental assumptions.
CrustaceanSingles.com

Image credit: Derek Finch, flickr, CC

How We Decide: an Analysis of an Essential Process We Take for Granted

The Mariner IV spacecraft was built with four solar panels. The panels were folded during launch and were to spring out once the spacecraft was in space, but the fragile panels had to deploy without being damaged by snapping abruptly into place. Previous spacecraft had used dampers to control the panels’ motion, but these and other improved dampers were unsatisfactory: they contained oil that could leak, or they were too heavy or unreliable. The damping problem became a great concern as the launch date drew nearer. Only after investigating what would happen after a complete failure of the damper was it discovered that dampers were in fact unnecessary.
Mariner IV launched for Mars in 1964 without solar panel dampers. The lightest and most reliable damper was no damper at all.
We constantly make decisions in everyday life, but how can we best approach them? Can we improve this familiar process? There are different ways of going about it, some excellent, some atrocious. And that’s what this essay is about.
This is a guest post by long-time commenter Richard S. Russell. Richard is a retired research analyst (Wisconsin Department of Public Instruction); long-time activist in the realms of atheism, science fiction, and liberal politics; ballroom dancer; database developer; and generally highly opinionated person. He blogs irregularly at richardsrussell.blogspot.com.
Everybody makes hundreds of decisions every day. Some of those are good decisions, some bad, but the great majority are simply routine. You walk into the bathroom early in the morning and, without spending any noticeable time on it, decide to flip on the light switch, something you’ve done so often it’s habitual.
But somewhere along the line, perhaps on the very first morning you woke up in that house, you had to make that light-switch decision for the first time ever. At that point, you invested a little bit of thought in it. This essay is about decisions like that, the kind where you consciously work through the question before arriving at a conclusion.
It’s not about the sort of process that Malcolm Gladwell discussed in detail in his book Blink, wherein people arrive at judgments—often correct ones—in the blink of an eye but can’t articulate how or what led them to their conclusions. What Gladwell glosses over is that his chief examples are of experts in their fields—art critics looking at what purports to be an ancient Greek statue but is actually a fake, or a tennis champion who went on to a long career as a coach and TV color commentator being able to predict with uncanny accuracy which of a player’s soft second serves would turn out to be double faults. These are people who have internalized their expertise into their subconscious minds. But I’ll be discussing normal de novo decisions by normal, non-expert people.
The Idea Factory
Let’s think of decision-making as being akin to an industrial plant—an idea factory, if you will—where inputs go thru a process to get turned into outputs. Because I’m mainly interested in the middle part, let’s quickly dispose of the two ends.
Outputs
The results of decision-making are things we do, say, think, and believe. Outputs also include several things which occur below (or perhaps outside of) the level of conscious thought: emotions, esthetics, and habits. These are, of course, a vital part of human existence, but they don’t involve conscious thought, so I’m going to skip them.
Inputs
There are several types of inputs into decisions, each with its own problems.
1. Definitions. Words are labels for concepts. We use words to make it easier to comprehend and manipulate the concepts. That makes it essential that we know which word goes with which concept, and that’s sometimes harder than it sounds. For example, consider the word “light.” It’s perhaps the most versatile word in the English language, with over a hundred meanings, in every possible part of speech. Just in the sciences, it can mean:

  • the opposite of heavy (an adjective, used in mechanics),
  • what a bird does on a limb (an intransitive verb, used in ornithology),
  • a visible form of electromagnetic radiation (a noun, used in optics), or
  • ignite, as with a Bunsen burner (a transitive verb, used in chemistry).

Often the meaning of the word will be clear from context, but it’s always a good idea, when getting into complex, knotty issues with people you might disagree with, to be sure you’ve agreed on a common set of definitions right from the outset.
2. Axioms. These are glorified assumptions. The glory comes from several different sources. Axioms are:

  • universal—they apply everywhere, and everyone agrees on them
  • reliable—no exceptions have ever been observed
  • fundamental—they can’t be explained in terms of anything simpler

Perhaps the best-known set of axioms is the five axioms of Giuseppe Peano, from which the entire theory of natural numbers can be derived. Euclid’s axioms and postulates also form the basis for a complete understanding of plane geometry.
However, outside of abstract fields like mathematics and symbolic logic, axioms are very difficult to come by.
3. Ordinary Assumptions. Into absolutely every decision goes at least one assumption (usually many more). Assumptions are notoriously unreliable. You may have heard the old joke about the word “assume,” which is derived from a process which makes an “ass” of “u” and “me.” Yet they are also unavoidable. For example, into every decision you personally make about what course of action you intend to pursue is the assumption that you will be alive to pursue it. In the discussion of processes which follows, we will see that formal logic tries diligently to state its assumptions explicitly, as premises. This is by far the exception; most assumptions are unstated or implicit.
4. Genetics. “You can do anything if you want it bad enough. That is why we see so many people who can fly.” (Elden Carnahan) But you have no wings, so you can’t fly. You have no gills, so you can’t breathe underwater. It’s also likely that your brain is wired in such a way that there are some thoughts that you simply are unable to think. These probably vary from one individual to another, and they’re almost impossible to measure. To some extent, the limitations of genetics can be overcome by diligent training, but some limits imposed on us by nature we can never overcome.
5. Sensations. Sensory input is extremely valuable but also fallible. As part of a demo I use in my database classes, I hold up three pieces of paper, stapled together with a plastic overlay, and ask the class what color each piece of paper is. The top piece appears to be yellow, the middle one looks orange, and the lowest one is green.
At least, that’s the way it looks until I flip up the piece of yellow plastic in front of them, and they’re revealed to be white, pink, and blue.
This is just one of many ways in which you might be misled by your senses. Of course, there’s a “but wait” part to the story as well, which is that it was also our senses which gave us the true picture behind the misleading façade. So rigorous attention to detail can produce a substantial improvement on our original casual sensory input. Can we ever be sure it’s perfectly accurate? No.
6. Memories. Your recollections of your own life experiences get hauled out when something trips the trigger of association in your brain that says “Hey, this new thing is like that old thing.” Memories, too, are sometimes unreliable but provide an unavoidable context for your decisions.
7. Testimony, also known as “other people’s memories.” Personal testimony is one of those curious things that has a great reputation which is completely unwarranted. Yet, despite its being colored by the testifier’s expectations and biases, it too serves as a form of input to the decision-making process—just one that we need to be cautious about.
In part 2, we consider the eight ways we process these inputs. Some are good, and some … aren’t.

I used to think I was indecisive,
but now I’m not so sure.
— seen on the internet

Bad decisions make good stories.
— seen on the internet

Image credit: Kosala Bandara, flickr, CC

13 More Puzzles

I recently explored a specific kind of puzzle in “Counterintuitive Puzzles that Should Be Easy.” I’ve explored other puzzles to shed some light on the religion question: the Monty Hall problem and the Puzzle of the Hidden Dots. There is more to be said about the odd ways the human brain works, but let’s postpone that and simply enjoy a few more puzzles for their own sake this time.
Write your answers to the puzzles that were new to you and check them with the answers below.
Got any good puzzles that you use to stump your friends? Tell us about them in the comments.
Quick ones
Let’s start with some quick ones like those in the previous post. See if the intuitive answer is correct.

  1. If fence posts are put in every 7 feet, how many posts are needed to make a fence 77 feet long?
  2. If it takes a chiming clock 3 seconds to strike 6:00, how long does it take to strike midnight? Ignore the duration of the sound of each chime. (h/t commenter Richard S. Russell)

Word sense

  1. Google’s new parser can make sense out of the following sentences. Can you? Here’s an example of a confusing sentence: “While Anna dressed the baby played in the crib.” That probably sounds odd until you mentally punctuate it like this: “While Anna dressed, the baby played in the crib.” Now try these:
  • The old man the boat.
  • While the man hunted the deer ran into the woods.
  • I convinced her children are noisy.
  • The coach smiled at the player tossed the Frisbee.
  • The cotton clothes are made up of grows in Mississippi.
  • The horse raced past the barn fell.

Easy physics puzzles
These are physics versions of the puzzles that should be easy to answer.

  1. Where does the length of a year come from?
  2. Why is it colder in the winter?
  3. A rowboat is floating in a swimming pool. Inside the rowboat is a cannonball. Take the cannonball and drop it overboard. Does the water level on the side of the pool rise, fall, or stay the same?

Something must be wrong here

  1. A friend of mine was from Iowa, and he said that there was quite a rivalry with the neighboring state of Missouri. Jokes were told in Iowa about how stupid Missourians were. They claimed that if Iowa gave the counties that bordered on Missouri to Missouri, it would raise the IQ of both states. But wait a minute—there has to be something wrong with that. Both states can’t improve, right?
  2. Proof that 1 = 2
  • Let a = b
  • Multiply both sides by a:

a² = ab

  • Subtract b² from each side:

a² – b² = ab – b²

  • Factor both sides:

(a – b)(a + b) = b(a – b)

  • Cancel (a – b) from both sides:

a + b = b

  • Substitute (remember that a = b):

a + a = a

  • Collect:

2a = a

  • Divide by a:

2 = 1 (But something has to be wrong here—what is it?)

Increasingly difficult puzzles

  1. You’re in the middle of an island covered uniformly with a dense, dry forest. Lightning sets the north end of the forest on fire, and the wind is blowing to the south. All the coast is cliff, so you can’t jump into the water to wait out the fire. The fire will reach you in an hour, and all you have is a backpack with things typically taken on a hike. What can you do to save yourself?
  2. You and I are going to meet at a cafe. The server delivers a coffee with milk on the side just as I get a text from you saying you will be 15 minutes late. Being the polite person that I am, I want to wait for you before drinking my coffee. If I want it to be as hot as possible, do I pour the milk in now or wait until you get here?
  3. Suppose we have 6-sided dice that don’t have the usual numbers 1 through 6 on them. If my die has a 6 on every face and yours has a 5 on every face, we could roll our respective die and I would beat you every time. Now suppose I change to a die with faces {6, 6, 6, 6, 1, 1}. My die is still the better one, but now I would beat your 5-faced die only 2/3 of the time. It’s easy to imagine die A being better than B, and B being better than C, but the puzzle is to make this loop around. That is, create dice such that A beats B, B beats C, and C beats A.
  4. Your company makes the metal numerals used by homeowners to identify their house number. How many of each should you make?
  5. Does the balance tip to the right, tip to the left, or remain unchanged?

Click on the Continue below for hints and then answers.
[Image: balls and beakers]

Everyone is entitled to their own opinion
but not their own facts.
— Daniel Patrick Moynihan

Image credit: stevepb, Pixabay, CC

Insights into Bible Miracles from Magician Uri Geller

Remember Uri Geller? He was the psychic (or entertainer) who bent spoons and performed similar demonstrations in the 1970s and later. He claimed that extraterrestrials gave him paranormal powers, but wet blankets like magician James Randi stated that all of Geller’s claimed paranormal demonstrations were done with conventional stage magic. Randi showed this by publicly duplicating all of Geller’s tricks.
Geller responded: “Sure, there are magicians who can duplicate [my performances] through trickery.” In other words, just because others can do these things as tricks doesn’t mean that he’s not doing them using paranormal powers. But Randi observed that if Geller was actually doing what he claims, “He is doing it the hard way.”
Miraculous Bible claims
I was reminded of Geller when I recently read a defense of one of the Bible’s miracle claims. Maybe it was the Genesis flood story (which looks a lot like the prior Gilgamesh epic). Or the creation story (which looks a lot like the prior Sumerian creation story). Or the Jesus virgin birth story (which looks a lot like prior virgin birth claims of other great men). Or the Jesus resurrection story (which looks a lot like prior dying-and-rising stories of other gods from cultures in the eastern Mediterranean).
It doesn’t much matter which Bible story the apologist was trying to shore up—the defense is the same. It’s the Uri Geller Defense. Geller would say that just because they did his stunts through tricks doesn’t mean that he’s not doing them for real. And the Bible apologist says that just because other cultures anticipated some of the Bible’s fundamental miracle claims long before the Bible story was written doesn’t mean that that Bible story isn’t for real.
Granted. But if Randi can duplicate Geller’s demonstrations as tricks, that makes the starting hypothesis that Geller did the same, and his paranormal claims are fraudulent. Geller has the burden of proof to show that this simple and obvious natural explanation is wrong. And if we have precedents for many of the Bible’s miracle stories, that makes the null hypothesis that these are just ancient Jewish versions of well-known supernatural stories. The apologist has the burden of proof to show that, while the other stories are just myths and legends, the Bible miracles actually happened.
Yes, but those earlier stories don’t count!
Some apologists try to dismiss the earlier stories, but early church father Justin Martyr tried to spin the similarities between Jesus’s virgin birth claim and those of other gods to his advantage. He turned the tables. Why should the Greeks dismiss this miracle claim of Jesus, Justin asked, when they make similar claims about their own gods?
About the Jesus resurrection story, Justin speculated that the similar Dionysus story was planted in history by the devil himself. (I give Justin’s arguments in more detail here.)
Another angle is to emphasize that each Bible story is different from its precedents. Of course it’s different—if the Jesus story were identical to that of Dionysus (say), we’d call him Dionysus instead of Jesus. The question is: how can we trust a Jewish or Christian story as history when it came out of a culture swimming with older (false) stories with the same supernatural claims?
How we deal with similar claims
Psychics might really be talking to the dead. Though that’s not where the evidence points, desperate customers want the psychics’ story to be true.
Crop circles might really be made by extraterrestrials. Though that’s not where the evidence points (creators have documented how they do it), crop circle enthusiasts dismiss the mundane explanation.
Uri Geller might be for real, though that’s not where the evidence points. Similarly, the Bible miracle stories might be true, but similar miracle stories in nearby cultures make copying by Bible authors the best explanation.
We can’t prove that the Bible’s miracle claims aren’t for real, despite all the precedents, but that’s the way to bet. The plausible natural explanation makes the supernatural explanation unappealing and unnecessary. Only someone with a desire to support a preconception wouldn’t follow the evidence where it leads.

I don’t want to argue 
with people who believe the world is flat.
— Episcopal bishop John Shelby Spong

Photo credit: Wikimedia
(This is an update of a post that originally appeared 6/5/13.)

An Honest Way for Christians to Separate Fact From Fiction

How do you separate fact from fiction? What procedure do you go through when confronted with a truth claim? This procedure should be practical rather than cumbersome, and it must be objective and fair rather than being biased toward a particular worldview. The goal should be finding the truth, not supporting a predefined conclusion.
I’ll go first. I only have two principles.

  • Accept the scientific or historical consensus, where there is one. In fields where there’s a high barrier to entry (becoming a physicist, for example), a layman is unqualified to evaluate evidence in that field and must rely on experts. That doesn’t mean that the experts are right, but the consensus is the best provisional approximation of the truth that we have at the moment.
  • Use the Principle of Analogy. This is the common-sense observation that we have much experience already with things that really exist (rocks, planets) and things that don’t (unicorns, legends). Let’s use this experience to find the best fit for any new claims. (I discussed the Principle of Analogy in detail here.)

Let’s take this procedure for a test drive with 15 categories of claims. (As we go through these, see what your procedure would make of them.)
Mythical animals such as unicorns, fairies, and leprechauns: fiction. Science tells us that these don’t exist and that there is no precedent for the magical powers attributed to some of these animals.
Animal surprises like the coelacanth, gorilla, Komodo dragon, and okapi: fact. Conclusive evidence for these animals was discovered only in the twentieth century.
Cryptozoological claims such as the Loch Ness Monster, Yeti, and Bigfoot: fiction. Science says that the arguments for these animals are insufficient. However, since these are (typically) claimed to have no special powers, there is a chance that evidence simply hasn’t been found. Science has been surprised before by new animals.
Curious life forms like the giraffe, blue whale, bacteria, volcanic vent life, and carnivorous plants: fact. Science tells us that they exist, even if we haven’t seen any personally.
Mythology like the Iliad, Gilgamesh, and Beowulf: fiction. Myths are sacred narratives that explain some aspect of reality (for example, the myth of Prometheus explains why we have fire, and the Genesis creation myths explain where everything came from). History notes many examples of supernatural tales like these for which there is no evidence.
Legends like Merlin, Lady Godiva, and the Choking Doberman urban legend: fiction. Though they can include miracles, legends are otherwise plausible events that (unlike myths) are grounded in history. History tells us that there is insufficient evidence.
History like Alexander the Great, Hannibal, and Julius Caesar: fact. These generals won some remarkable battles. Though supernatural stories arose around these men (not surprising in a pre-scientific culture), history rejects the supernatural elements.
Novels like The Wizard of Oz and Harry Potter: fiction. Most are clearly labeled as fiction, though the label is occasionally lost or ignored—Orson Welles’ Halloween 1938 War of the Worlds radiocast is a famous example.
Individual claims of supernatural events like miracles, ghosts, and demonic possession: fiction. Though these are widespread and customized to each culture, history and science reject these for lack of evidence.
Individual claims of extraterrestrials like seeing a UFO or having an alien encounter: fiction. Like claims for cryptids, these don’t rely on the supernatural, but science has insufficient evidence to conclude that they exist, particularly when other explanations (hoaxes, misidentification, etc.) are available. “But no one has debunked case X!” may be true but is hardly proof of an alien claim.
Pseudoscience like ESP; telepathy; Ouija boards; Nostradamus, Edgar Cayce, and other predictors of the future; and speaking to the dead: fiction. Insufficient evidence.
Fringe medicine like homeopathy, crystals, and Kinoki foot pads: fiction. With the stakes so high, fringe medical claims are common, whether by charlatans or people who honestly think they’ve found a new cure. But when alternative medicine provides the evidence that it works, it’s simply called “medicine.”
Science like black holes, undersea volcanoes, planets around distant stars, quantum physics, and the Big Bang: fact. Science has a remarkable track record, though, as stated above, its claims are provisional.
Conspiracy theories like the moon hoax or 9/11 as an inside job: fiction. Some conspiracies are accurate history, but many, like these examples, do not have the evidence.
Books from the other guy’s religion like Hinduism, Buddhism, Scientology, Christian Science, and Mormonism: fiction. Historians discard the supernatural. That supernatural claims were made is often history, of course, but not that the claims are accurate.
That’s 15 categories, some of which are fact and some fiction. What are your criteria for separating fact from fiction, and what do they make of these examples?
The challenge for the Christian is to have an objective list with no special pleading of the “except for my religion” sort. When Christians throw their net of truth into the water, a winnowing procedure that’s fine-meshed will pull up Christianity but also a lot of other religions. A procedure that’s coarser (like mine) will reject all religions.
What objective procedure can Christians have that will show Christianity as the only valid religion? I can imagine none.

Very few established institutions, governments and constitutions … 
are ever destroyed by their enemies 
until they have been corrupted and weakened 
by their friends. 
— Walter Lippmann

(This is an update of a post that originally appeared 3/6/13.)
Photo credit: Hasin Hayder

Using Bayes’ Theorem to Decide How Likely the Jesus Miracle Stories Are

Christianity makes some fanciful claims: Mary was a virgin when she gave birth to Jesus. Jesus turned water into wine. He raised Lazarus from the dead and was resurrected from the dead himself. He is God, one with the creator of the universe.

One response to these claims is simply to dismiss them. We could put them in the same bin as the tall tales from other mythologies. Let’s instead see how probability can be applied to questions like this. Christians and non-Christians won’t easily agree because they won’t agree on the individual probabilities. Still, an understanding of conditional probability will give us a powerful analytical tool to at least better understand these claims.

Richard Carrier (whose Skepticon video was helpful in my understanding of this material) says that conditional probability “is the mathematical model for all correct reasoning about empirical claims.”


An example: medical test

Let’s imagine a test for a disease that is 95% accurate. That is, it is positive 95% of the time for someone who has the disease and negative 95% of the time for someone who doesn’t. Now imagine a common disease—10% of the population has it, so in a thousand people, 100 have it and 900 don’t.

Now give those thousand people the test. For the 100 sick people, the test gives 95 positives and 5 (false) negatives. And for the 900 healthy people, it gives 855 (900 × 0.95) negatives and 45 (false) positives.

Suppose the test says that you have the disease. How worrisome is that?

You must be in either of the two groups of people with positive test results. You’re either one of the 95 who actually do have it or one of the 45 who don’t but got a false positive. The chance that you’re sick is the number of sick people who test positive divided by the total number of positives: 95/(95 + 45) = 0.68.

The probability is 68% that you have the disease.

Let’s recap: what’s the probability that you (or any random person) has the disease? 10%. But what’s the probability given that you have a positive test result? It’s 68%. That’s conditional probability—the likelihood of something given (conditional upon) something else, some additional information.

Make the test ten times more accurate and a positive test result means a 96% chance that you have the disease. Instead—and here’s where it gets interesting—make the disease one tenth as common and your likelihood of having the disease given a positive test result is 16%. Make it very rare—one in a million—and that likelihood becomes just 0.002%.*
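If you’d like to check those numbers yourself, here is a minimal sketch in Python that simply counts the expected true and false positives and takes the ratio:

```python
def p_sick_given_positive(prevalence, accuracy, population=1_000_000):
    """Fraction of positive test results that come from people who are actually sick."""
    sick = population * prevalence
    well = population - sick
    true_positives = sick * accuracy          # sick people correctly flagged
    false_positives = well * (1 - accuracy)   # healthy people wrongly flagged
    return true_positives / (true_positives + false_positives)

print(p_sick_given_positive(0.10, 0.95))    # ~0.68    (the original example)
print(p_sick_given_positive(0.10, 0.995))   # ~0.96    (test ten times more accurate)
print(p_sick_given_positive(0.01, 0.95))    # ~0.16    (disease one tenth as common)
print(p_sick_given_positive(1e-6, 0.95))    # ~0.00002 (a one-in-a-million disease)
```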

Visual approach to the same problem

Let’s explore the original problem but visually this time.

[Tree diagram: 1000 people split into 100 sick and 900 well, then split by test result: 95 true positives and 5 false negatives on the sick branch, 45 false positives and 855 true negatives on the well branch]

This tree is just a recap of the previous problem: we start with 1000 people, then divide them into two groups based on what we know initially (the probability of a person being sick is 0.1), and finally process this with new information, the test whose probability of a right answer is 0.95.

Applying probability to the God question

Let’s move on to the God question (I’m using an example from Richard Carrier’s video).

[Tree diagram: 1000 universes split 50/50 into 500 with God and 500 without, then updated with P(e | G) = 10⁻⁶ on the God branch and P(e | ~G) = 1 on the godless branch]

We start with 1000 universes, places where we imagine God to exist or not. In step 2, our initial assumption about the God claim is to be generous: knowing nothing about this “God” guy, let’s start by saying that the likelihood of his existence is 50% (P(G) means “probability of God”). This gives us two possibilities, with 500 universes in each.

In step 3, we add our new evidence. In the medical example, the new evidence was the result of a test, and here it’s the existence of evil in our world—birth defects, natural disasters that kill thousands, slavery and other immoral institutions, and so on. This evil exists, and yet no god is doing anything about it. What is the likelihood that a benevolent God could exist but still accept the evil in our world?

We have plenty of examples of benevolent beings: the noblest humans. They’re not perfect, but we could assume that a perfectly benevolent being would be at least as benevolent as a good human. Try to imagine a benevolent human (1) who could prevent bad from happening, (2) wouldn’t be harmed for taking this action, but (3) didn’t do anything. That’s pretty inconceivable. Let’s say that the probability of this happening is one in a million. Let’s be conservative and assign the same probability of standing by and doing nothing to a perfectly benevolent god.

That’s the P(e | G) = 10⁻⁶ in the diagram above: the probability (P) of the evidence of evil (e) given (|) the existence of God (G) is one in a million (10⁻⁶).

The 500 universes on the left side of the tree now get split by the probability that evil would exist given such a god. Only about one in a million of them make it through (~0 means “almost zero”).

It’s easier on the right side of the tree. The likelihood of evil existing in a godless universe is 1.

Conclusion: the existence of evil makes God very improbable.
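Here is the same arithmetic as the tree, written as a minimal sketch in Python using the numbers above:

```python
# Prior: a deliberately generous 50/50 on God's existence.
p_G = 0.5
# Likelihood of the evidence (the evil we observe) under each hypothesis.
p_evil_given_G = 1e-6     # a benevolent god standing by and doing nothing: one in a million
p_evil_given_not_G = 1.0  # evil in a godless universe: completely unsurprising

# Conditional probability: P(God | evil)
p_G_given_evil = (p_evil_given_G * p_G) / (
    p_evil_given_G * p_G + p_evil_given_not_G * (1 - p_G)
)
print(p_G_given_evil)  # ~0.000001, down from the generous 0.5 we started with
```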

But … but God could have an excuse

In response, the Christian may say that God has an excuse for not acting. Yes, he’s benevolent, but he’s also omniscient, and our finite minds must simply be unable to understand the justification for his inaction. (This is the Hypothetical God Fallacy—starting with a presupposition of God’s existence—but let’s ignore that for now.)

“God works in mysterious ways” doesn’t help the Christian position, and the tree shows why.

[Tree diagram: the same tree, but with the conditional probability in step 3 set to 1, so the one-in-a-million burden shifts to the prior for G′ in step 2]

Consider step 3. The conditional probability is now 1. The apologist assumes some unspecified, inconceivable (by our finite brains) reason why God has his own justifiable reasons for allowing evil. But this means we’re looking for something else. We’ve gone from searching for God (G) to searching for “God who has unspecified, inconceivable reasons to allow evil” (G′).

As you can see from step 2, this simply moves the problem around. We had nothing to go on before, so we just assigned a generous 0.5 probability for God (P(G) = 0.5). But now we have a more refined goal that can be evaluated. Now, we’re looking for a very particular God (G′), a very unlikely God, a one-in-a-million God.

Conclusion: making excuses for God makes him less likely. First you must imagine (despite the lack of evidence) supernatural beings, then those with sufficient power to create the universe (deities), then assume that there are benevolent ones that interact with us, then imagine this one-in-a-million deity who has this inconceivable excuse to allow evil, even gratuitous evil like agonizing birth defects in animals.

The mathematics of conditional probability has been applied here to the question, How likely is God given the existence of evil? We could also ask, How likely is the virgin birth given the existence of other virgin birth stories that preceded Jesus that would’ve been known in Palestine? Or, How likely is the resurrection given the existence of stories of other dying-and-rising gods?

(I respond to the book The Probability of God here. That discussion looks at the many other reasons why the Christian god claim is unlikely besides the Problem of Evil.)

This approach will probably never resolve a debate between a Christian and a non-Christian because they won’t be able to agree on probabilities. However, it does give structure to the argument and highlights the unknowns.

Oh, I know He works in mysterious ways, 
but if I worked that mysteriously I’d get fired.
— caption for Bob Mankoff cartoon

(This is an update of a post that originally appeared 1/14/13.)

Appendix: Bayes’ Theorem

We have been using Bayes’ Theorem, though it is more commonly expressed as an equation. To see that this tree-structured approach is equivalent to (though more intuitive than) the equation, let’s convert the medical test example above into equation form.

In that example, we first imagine a population of 1000 people and then (step 2) use the likelihood of the disease (10%) to divide that population into sick and well and then (step 3) further divide those populations into those who got positive and negative test results.

Our goal is P(s | p), the probability (P) of being sick (s) given (|) a positive test result (p). Bayes’ Theorem says that this is computed as follows:

P(s | p) = P(p | s) × P(s) / [P(p | s) × P(s) + P(p | ~s) × P(~s)]

where ~s = the probability of not being sick.

This may look imposing, but you’re already familiar with these terms. Look at the numerator first, a measure of how likely s (being sick) is:

  • P(p | s) = the probability of a positive result given that you’re sick = 0.95 (that is, a likelihood of 95%)
  • P(s) = the probability of a random person being sick = 0.1 (the incidence is 10% in the population)

The denominator measures all possible results, your being sick and your being well. It’s the sum of the numerator (the sick likelihood) and its opposite (the not-sick likelihood), which is composed of:

  • P(p | ~s) = the probability of a positive test result given that you’re not sick (that is, a false positive), which is 0.05 (our example was simple, with false positives and false negatives both at 5%, but in the general case they could be different)
  • P(~s) = the probability of not being sick = 0.9. This isn’t an independent input, since P(~s) = 1 – P(s).

Put these values into the equation: 0.95×0.1/(0.95×0.1 + 0.05×0.9) = 0.67857. This is what we got above with the simpler and more intuitive 95/(95 + 45).
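The same plugging-in can be wrapped up as a small Python function, a minimal sketch that covers both the medical example and the God-and-evil example from earlier:

```python
def bayes(p_b_given_a, p_a, p_b_given_not_a):
    """P(A | B) by Bayes' Theorem, from P(B | A), the prior P(A), and P(B | ~A)."""
    numerator = p_b_given_a * p_a
    return numerator / (numerator + p_b_given_not_a * (1 - p_a))

# Medical test: P(sick | positive) with a 95%-accurate test and 10% prevalence.
print(bayes(0.95, 0.1, 0.05))   # 0.67857...

# God and evil: P(God | evil) with a 50% prior and the one-in-a-million likelihood.
print(bayes(1e-6, 0.5, 1.0))    # ~0.000001
```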

* Here is the math behind those probabilities:
99.5/(99.5 + 4.5) = 0.957
9.5/(9.5 + 49.5) = 0.161
0.95/(0.95 + 50,000) ≈ 0.00002

Photo credit: Wikimedia