By JOHN M. GROHOL, PSYD
Founder & Editor-in-Chief
‘The key to successful decision making is knowing when to trust your intuition and when to be wary of it. And that’s a message that has been drowned out in the recent celebration of intuition, gut feelings, and rapid cognition.’
Six years ago, Malcolm Gladwell released a book entitled Blink: The Power of Thinking Without Thinking. In his usual style, Gladwell weaves stories in between descriptions of scientific research that support his hypothesis that our intuition can be surprisingly accurate.
One year ago, authors Daniel J. Simons and Christopher F. Chabris, writing in The Chronicle of Higher Education, not only had some choice words for Gladwell’s cherry-picking of the research, but also showed that intuition probably works best only in certain situations — those where there is no clear science or logical decision-making process for arriving at the “right” answer. For instance, when choosing which ice cream is “best.”
Reasoned analysis, however, works best in virtually every other situation. Which, as it turns out, is most situations where big life decisions come into play.
Gladwell also argues that intuition is not always right. But it’s an argument that employs circular reasoning, as exemplified in the last chapter, “Listening with Your Eyes.” In it, he describes how orchestra auditions moved from being un-blinded (meaning the judges watched the musicians perform their pieces) to blinded (meaning the judges could not see who was playing).
The argument Gladwell makes from this example is that the judge’s intuition was influenced by previously-unrecognized factors — the gender of the performer, what type of musical instrument they were playing, even their race. But that intuition was eventually corrected, because we can change what our intuition tells us:
Too often we are resigned to what happens in the blink of an eye. It doesn’t seem like we have much control over whatever bubbles to the surface from our unconscious. But we do, and if we can control the environment in which rapid cognition takes place, then we can control rapid cognition.
But this is circular reasoning. We often don’t know our intuition is wrong until long after the fact, or unless we conduct a scientific experiment that shows how truly wrong it is. For hundreds of years, conductors and other judges trusted their intuition about how to choose their orchestra players, and for hundreds of years, they were horribly wrong. It was only through a freak accident of chance that they learned how wrong they were, as Gladwell describes it.
We don’t know when to trust our intuition in the future, because we have only hindsight in which to see whether we were right or not.
This hardly seems like something you can hang your hat on: the idea that you can always (or even ever) reasonably “control the environment” in which you’re making intuitive judgments.
As Simons and Chabris — authors of the book The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us — note, trusting your intuition can have serious consequences and even put other people’s lives in jeopardy:
Flawed intuitions about the mind extend to virtually every other domain of cognition. Consider eyewitness memory. In the vast majority of cases in which DNA evidence exonerated a death-row inmate, the original conviction was based largely on the testimony of a confident eyewitness with a vivid memory of the crime. Jurors (and everyone else) tend to intuitively trust that when people are certain, they are likely to be right.
Eyewitnesses consistently trust their own judgment and memory of events they witness. Scientific research, and now efforts like the Innocence Project, show how flawed that intuition is.
Here’s another example:
Consider talking or texting on a cellphone while driving. Most people who do this believe, or act as though they believe, that as long as they keep their eyes on the road, they will notice anything important that happens, like a car suddenly braking or a child chasing a ball into the street. Cellphones, however, impair our driving not because holding one takes a hand off the wheel, but because holding a conversation with someone we can’t see—and often can’t even hear well—uses up a considerable amount of our finite capacity for paying attention.
That is a key point, one missed by virtually everyone who insists they can text or talk on their cellphone. Their intuition tells them that it’s safe as long as they act like they’re paying attention. But they’re not. Their attention is clearly divided, using up precious and limited cognitive resources.
It’s like trying to take the SAT at a rock concert of your favorite band. You may complete the SAT, but chances are you’re either going to do badly on it, or not be able to remember the playlist, much less many of the concert’s most memorable moments.
Intuition is like that — we can’t trust it instinctually, as Gladwell suggests, because it is so often just plain wrong. And we can’t know ahead of time when it’s likely to be wrong in a really, really bad way.
Most students and professors have long believed that, when in doubt, test-takers should stick with their first answers and “go with their gut.” But data show that test-takers are more than twice as likely to change an incorrect answer to a correct one than vice versa.
In other words, reasoned analysis — not intuition — often works best. The exact opposite of Gladwell’s assertion.
As the authors note, “Gladwell (knowingly or not) exploits one of the greatest weaknesses of intuition—our tendency to blithely infer cause from anecdotes—in making his case for intuition’s extraordinary power.”
Indeed, nowhere do we see this better than in politics, so it has special importance with campaign season almost here. Politicians will make outrageous claims that have no basis in actual evidence or the facts. The most common claim that will be made in the upcoming presidential election, for example, will be that the Federal government can have a direct influence or impact on the economy. Short of actually spending Federal dollars to create jobs (e.g., the federal works programs of the 1930s during the Great Depression), the government has a much more limited ability to influence the economy than most people understand.
Part of this is because even economists — the scientists who study the complexities of modern economies — are at odds over how economies and recessions really work. If the experts can’t agree, what makes anyone think any particular government action actually produces results? And without hard data, as Simons and Chabris note, we have no idea whether government interventions actually make the recovery worse:
In a recent issue of The New Yorker, John Cassidy writes about U.S. Treasury Secretary Timothy Geithner’s efforts to combat the financial crisis. “It is inarguable,” writes Cassidy, “that Geithner’s stabilization plan has proved more effective than many observers expected, this one included.”
It’s easy for even a highly educated reader to pass over a sentence like that one and miss its unjustified inference about causation. The problem lies with the word “effective.” How do we know what effect Geithner’s plan had? History gives us a sample size of only one—in essence, a very long anecdote. We know what financial conditions were before the plan and what they are now (in each case, only to the extent that we can measure them reliably—another pitfall in assessing causality), but how do we know that things wouldn’t have improved on their own had the plan never been adopted? Perhaps they would have improved even more without Geithner’s intervention, or much less.
Anecdotes are great illustrators and help us connect with boring scientific data. But using anecdotes to illustrate only one side of the story — the story you want to sell us — is intellectually dishonest. That’s what I find authors like Gladwell doing, time and time again.
Intuition has its place in the world. But believing it is a reliable cognitive device that we should trust in most situations is sure to get you into trouble. Relying more often on intuition than on reasoning is not something that I believe is supported by our current psychological understanding and research.
Read the full Chronicle article now (it’s lengthy, but makes for a good read): The Trouble With Intuition
Gerd Gigerenzer, director of the Max Planck Institute for Human Development and author of Gut Feelings: The Intelligence of the Unconscious (Viking, 2007), takes a more benign view of intuition: Intuitive heuristics are often well adapted to the environments in which the human mind evolved, and they yield surprisingly good results even in the modern world. For example, he argues, choosing to invest in companies based on whether you recognize their names can produce reasonably good returns. The same holds for picking which tennis player is likely to win a match. Recognition is a prime example of intuitive, rapid, effortless cognition. Gigerenzer’s book jacket describes his research as a “major source for Malcolm Gladwell’s Blink,” but the popular veneration of intuitive decision-making that sprang from Blink and similar works lacks the nuance of Gigerenzer’s claims or those of other experimental psychologists who have studied the strengths and limits of intuition.
We filmed the basketball-passing game with a single camera and, like Neisser, we had a female research assistant stroll through the game with an open umbrella. We also made a version in which we replaced the umbrella woman with a woman in a full-body gorilla suit, even having her stop in the middle of the game, turn toward the camera, thump her chest, and exit on the other side of the display nine seconds later. People might miss a woman, we thought, but they would definitely see a gorilla.
We were wrong. Fifty percent of the subjects in our study failed to notice the gorilla! Later research by others, with equipment that tracks subjects’ eye movements, showed that people can miss the gorilla even when they look right at it. We were stunned, and so were the subjects themselves. When they viewed the video a second time without counting the passes, they often expressed shock: “I missed that?!” A few even accused us of sneakily replacing the “first tape” with a “second tape” that had a gorilla added in.
The finding that people fail to notice unexpected events when their attention is otherwise engaged is interesting. What is doubly intriguing is the mismatch between what we notice and what we think we will notice. In a separate study, Daniel Levin, of Vanderbilt University, and Bonnie Angelone, of Rowan University, read subjects a brief description of the gorilla experiment and asked them whether they would see the gorilla. Ninety percent said yes. Intuition told those research subjects (and us) that unexpected and distinctive events should draw attention, but our gorilla experiment revealed that intuition to be wrong. There are many cases in which this type of intuition — a strong belief about how our own minds work — can be consistently, persistently, and even dangerously wrong.