I’ve written before about the discouraging studies that illustrate the Backfire Effect. If someone has a belief that is objectively wrong—that is, a belief that an unbiased observer equipped with all relevant facts would judge as false—giving the correct information isn’t likely to get them to change their mind.
But it feels so right! The other guy has come to the wrong conclusion, and once I give him the correct facts, he’ll cheerfully thank me and switch to the correct opinion, right? That sounds reasonable, but no: he will instead very likely double down on the false belief. Changing one’s opinion is painful, and correcting him head-on only makes the problem worse.
In that post I explored approaches that minimize the Backfire Effect, but a recent article, “Why Facts Don’t Change Our Minds” by Elizabeth Kolbert, suggests an approach that should be more productive. More on that shortly; getting there is an interesting journey.
Study 1: biased weighing of data
The article cites a number of studies that reveal the embarrassingly inept way our minds sometimes work. In one, half of the participants favored capital punishment and half opposed it. Each participant read two studies, one arguing each side of the issue. The studies were actually fabricated, but the data they presented were equally compelling. Participants nevertheless reported that the study supporting their own opinion was far more compelling than the other (this is confirmation bias). Afterwards, they were asked about their views and, unsurprisingly, were more entrenched than they’d been at the start. This is the Backfire Effect.
Why are we susceptible to poor thinking?
This human failing enables America’s new vogue of “alternative facts.” But since this thinking isn’t logical, why do people do it? Why are they biased toward confirming evidence, and why does presenting disconfirming evidence prompt them to double down?
Since this failing is pretty much universal, it’s presumably an evolved trait, but what value could it have that outweighs the downsides? Some researchers say that it developed in a society where humans had to work together. A cooperating society wants to encourage members who contribute, but it must punish freeloaders, possibly even to the point of exile. That’s a substantial punishment because, in a primitive society, living on your own is much harder than being a contributing member of a tribe.
Human reason didn’t evolve to weigh economic policy options or evaluate social safety nets but, according to this theory, to defend one’s social status. Winning arguments is important, and self-confidence helps. Doubting your position is not a good thing. The thoughtful tribal member who says, “Well, that’s a good point—maybe my contribution to the group has been sub-par” risks exile.
(Another area of thought where we are surprisingly poor is probability—surprising because we seem to bump into simple probability questions all the time. I’ve written about the Monty Hall Problem here and about simple puzzles that reveal our imperfect thought process here.)
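As a quick illustration of my own (not from the article) of how counterintuitive even simple probability can be, here is a minimal simulation sketch of the Monty Hall Problem in Python, with an arbitrary trial count: staying with your first door wins about one-third of the time, while switching wins about two-thirds.

```python
import random

def monty_hall_trial(switch):
    """Play one round of the Monty Hall game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)           # the prize is behind one door
    first_pick = random.choice(doors)    # the contestant guesses at random
    # The host opens a door that hides a goat and isn't the contestant's pick.
    opened = random.choice([d for d in doors if d != first_pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        final_pick = next(d for d in doors if d != first_pick and d != opened)
    else:
        final_pick = first_pick
    return final_pick == car

trials = 100_000  # arbitrary; larger counts give tighter estimates
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
# Staying wins about 1/3 of the time; switching wins about 2/3.
```

Most people’s intuition says the two closed doors should be a 50/50 split, which is exactly the kind of confident-but-wrong thinking the studies above describe.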
Study 2: explain your answer
In this study, graduate students were first asked to rate their understanding of everyday devices: toilets, zippers, cylinder locks, and so on. Next, they were asked to write a detailed explanation of how the devices worked. Finally, they rated their understanding of these devices again. Being confronted with their incompetence caused them to lower their self-ratings.
It’s easy to focus on the user interface alone and overestimate our understanding of what’s going on inside. This encapsulation is important for progress: you may not understand how a calculator works, but you know how to operate it. The same is true (for most of us) for a car, a computer, a cell phone, or the internet. We know how to buy hamburger or a suit, but we don’t understand the particulars of how they got to the store. This encapsulation extends into public policy: we (usually) don’t understand the intricacies of policy proposals like cap and trade or trade deals like NAFTA or TPP. Instead, we rely on trusted politicians and domain experts to convince us of the rightness of one side of the issue.
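To put that software analogy in concrete terms (my illustration, not the article’s), encapsulation means exposing a simple interface while hiding the machinery behind it. A minimal Python sketch with a hypothetical Calculator class: the user relies on add() without knowing, or needing to know, how the sum is actually computed.

```python
class Calculator:
    """A simple interface that hides its internal machinery from the user."""

    def add(self, a, b):
        # The user only needs to know that add() returns a sum.
        # Internally it could be bitwise arithmetic, a lookup table, or a
        # remote service; the interface hides that detail.
        return self._binary_add(a, b)

    def _binary_add(self, a, b):
        # One possible hidden implementation: bitwise addition
        # (works for non-negative integers).
        while b != 0:
            carry = a & b
            a = a ^ b
            b = carry << 1
        return a

print(Calculator().add(2, 3))  # the user sees 5 and never peeks inside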
Study 3: policy questions
That brings us to one final study, modeled on the previous one. Participants were asked their opinions on policy questions like single-payer health care or merit-based pay for teachers and then rated their confidence in their answers. Next, they were asked to explain in detail the impact of implementing each proposal. Finally, they reevaluated their stance. Having just struggled to explain the details of their favored proposal, they dialed back their confidence.
This finding may be relevant to our interactions with people arguing for scientific or historical claims like Creationism or the Resurrection, or for social policies like making abortions illegal or “natural marriage.” Instead of pushing back, ask them to explain their position. Let them marinate in their own confusion. Avoid the snarky retort (tempting, I know), which would trigger the Backfire Effect.
This research applies equally to ourselves. Find or create opportunities to explain how your favored policy, if implemented, would work, and then ask yourself how this exercise changes your opinion. Is it still a no-brainer? Or have you uncovered obstacles that might make success more elusive?
Let me end with one final cautionary observation. When you ask someone, “Do you accept evolution?” you may see this as a straightforward question about opinion or knowledge. For some, however, you’re asking about who they are. “I am a Christian,” they think, “and my kind of Christian rejects evolution.” Your straightforward question becomes, in their mind, “Do you reject Jesus Christ as Lord and savior?” to which the answer is, obviously, No. Other personal questions potentially fall into the same trap: questions about abortion or same-sex marriage or even climate change.
There is security in obscurity.
Precision invites refutation.
— Walter Kaufmann
Image credit: Wesley Eller, flickr, CC