Saturday, April 14, 2012

The Uncertainty of Intuition

"The first principle is that you must not fool yourself—and you are the easiest person to fool."  —Richard Feynman
I'm only a novice when it comes to philosophy, but I think I've noticed a general trend within the field. First, someone comes up with a philosophical framework for explaining a certain phenomenon. Then someone else comes up with a counterexample that intuitively appears to falsify that framework. Philosophers are then faced with a couple of options: they can follow their intuitions and either modify the framework or reject it entirely, or they can continue to accept the framework and claim that it's in fact our intuition that's faulty.

Let me give a couple of examples, starting with one in the field of ethics. Utilitarianism, generally speaking, is the ethical theory that one ought to maximize the overall amount of happiness that exists. It seems like a perfectly sensible way of approaching the subject, but some versions of this concept are vulnerable to what Derek Parfit calls the Repugnant Conclusion. In the diagram below, each box represents a population; width measures group size and height measures average happiness. The Repugnant Conclusion is that according to some forms of utilitarianism, Z is preferable to A because Z's total area is greater than A's. In other words, having a massive number of people whose lives are barely worth living is preferable to having a (relatively) small number of people whose lives are extremely happy.

[Diagram: boxes representing populations A and Z; width = group size, height = average happiness]

Intuitively, this conclusion does seem repugnant—but is it our ethical theory or our intuition that we should modify in response? Perhaps we look at Z and imagine throngs of people toiling away in a wretched struggle to survive, when what we should realize is that a life "barely worth living" is still worth living. If these people looked back at their lives in their golden years, they could honestly say they were glad to have lived. Hmm... maybe such a world wouldn't be as bad as we think.
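
To see the arithmetic behind those boxes concretely, here's a minimal sketch; the population sizes and happiness levels are numbers I've made up purely for illustration:

```python
# Total utilitarianism scores a world by total welfare:
# total = population size ("width") * average happiness ("height").
# These numbers are invented purely for illustration.

world_A = {"population": 10_000_000, "avg_happiness": 100.0}     # few, very happy
world_Z = {"population": 100_000_000_000, "avg_happiness": 0.1}  # many, barely happy

def total_welfare(world):
    """The area of the box in the diagram: width times height."""
    return world["population"] * world["avg_happiness"]

print(total_welfare(world_A))  # 1e9
print(total_welfare(world_Z))  # 1e10, so Z beats A: the Repugnant Conclusion
```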

In the previous case it was pretty easy to imagine our intuition being wrong. But now let's take on a tougher example, this time from philosophy of mind. The leading philosophical framework for understanding what constitutes a mind is called functionalism (see also the SEP). Basically, it says that what makes a mind is not any particular material (e.g. neurons) but a way of functioning: it must receive inputs that alter its internal state and produce outputs. It could be made of neurons, silicon, or anything else, as long as it's properly organized and functional.
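
As a loose illustration of that claim (my own toy example, not anything from the functionalist literature), the idea is that a mind is characterized the way a state machine is, by its input-state-output profile rather than by its substrate:

```python
# A toy "functional system": what defines it is the mapping from
# (current state, input) to (next state, output), not what it's made of.
# The same transition table could be realized in neurons, silicon chips,
# or a billion people with walkie-talkies.

TRANSITIONS = {
    # (state, input): (next state, output)
    ("calm", "insult"):     ("annoyed", "frown"),
    ("calm", "apology"):    ("calm",    "smile"),
    ("annoyed", "insult"):  ("angry",   "glare"),
    ("annoyed", "apology"): ("calm",    "smile"),
    ("angry", "insult"):    ("angry",   "shout"),
    ("angry", "apology"):   ("annoyed", "sigh"),
}

state = "calm"
for stimulus in ["insult", "insult", "apology"]:
    state, output = TRANSITIONS[(state, stimulus)]
    print(f"{stimulus} -> state: {state}, output: {output}")
```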

Enter the China Brain. Ned Block asks us to imagine the entire population of China hooked up to one another in some way (walkie-talkies, for example), with each person corresponding to a neuron. The individuals then communicate in a rudimentary manner that mimics the firing of interconnected neural pathways. The result is sometimes known as a Blockhead.

Haha. Blockhead. Because his last name's Block.
Can this vast collection of people buzzing at each other on walkie-talkies really have mental states? Can it experience sadness or the color red? Block wants us to intuitively conclude that such possibilities are ridiculous, and certainly they seem to be. But how much of this intuition is due to the fact that we normally think of minds as embodied and centralized?

Imagine that we could somehow shrink this crowd of a billion, put them inside a human skull and attach them to the appropriate sensory inputs and motor outputs. If you had a conversation with this entity, who looks and acts exactly like a normal person, would it really be so hard to think of them as having a mind? Conversely, imagine that we could take someone's still-living brain out of their head and stretch its neurons out across hundreds of square miles. If you walked into the middle of this silky net of microscopic axons, would it seem any more like a thinking, feeling, experiencing mind than the China Brain does? Suddenly, the obvious conclusion may not be so obvious anymore.

This post is partly an excuse to share some really cool thought experiments, but I do have a point to make as well: We need to be careful about accepting intuitive philosophical arguments, because they can be engineered (intentionally or not) to push us toward an unwarranted conclusion. Daniel Dennett coined the term "intuition pump" to describe such cases. Often these arguments employ sophisticated misdirection to make us ignore factors that would dramatically change our judgment if properly understood.

Sometimes, too, an argument has at its core a subject that we as fallible humans are just flat-out bad at making judgments about, or even one that lies completely outside our realm of experience. I'm referring specifically to the cosmological argument, which I hope to eventually delve into more deeply. In arguing for the Kalam version, William Lane Craig proclaims that the temporal universe cannot always have existed because actual infinites cannot exist. He uses Hilbert's Hotel paradox as a demonstration of this, but all he's really demonstrated is that the math of infinity is incredibly unintuitive. He also asserts that whatever begins to exist has a cause, and it again seems staggeringly unintuitive to think that the universe could have sprung up uncaused out of absolute nothingness. But a complete lack of everything (space, time, even physical laws) is in such opposition to our everyday experience that making any definitive pronouncements about its properties would be pure folly.
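
For the curious, the Hilbert's Hotel trick is easy to state even though it defies intuition: a completely full infinite hotel can still take a new guest. A minimal sketch:

```python
# Hilbert's Hotel: infinitely many rooms 1, 2, 3, ..., every one occupied.
# Send the guest in room n to room n + 1. No one is evicted, yet room 1
# opens up for a new arrival. (Only a finite prefix can be printed.)

def new_room(n):
    return n + 1  # a bijection from rooms {1, 2, 3, ...} onto {2, 3, 4, ...}

for n in range(1, 6):
    print(f"guest in room {n} moves to room {new_room(n)}")
print("room 1 is now vacant: the completely full hotel just took a new guest")
```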

So here's the moral of the story: In all aspects of life, theological and ordinary alike, be skeptical about relying on intuition to solve problems. Your mind is better suited to some tasks than others, and it's beset with biases at every turn. It's easy for subtle yet crucial details to escape your notice, drastically skewing your judgment. Consider a given issue from many perspectives and try to think of what variables you may be leaving out, even when the answer seems clear-cut. Because as satisfying as it is to debunk pseudoscientists and expose charlatans, the most important part of being a skeptic isn't questioning other people. It's questioning yourself.

1 comment:

  1. Intuition is good at certain things, yet bad at others. Learning this is essential because the tool we use to think and philosophize is the brain, and if we don't know how our tool works, we'll use it poorly.

    This is why I'm skeptical of relying on intuition as a good guide to navigating complex arguments, or even for establishing philosophical axioms. Take WLC's cosmological argument: sure, his necessary god intuitively makes sense of our universe, but thinking probabilistically, the universe actually favors either a non-all-powerful god or atheism. From a probability perspective, our universe is actually a pretty strong refutation of WLC's all-powerful god.

    Intuition is not suited to thinking probabilistically. There are so many classic problems demonstrating how poorly we reason about probability (the Monty Hall Problem, Oliver's Blood, the case of "Linda", etc.) that to not learn about it is to potentially fall prey to fallacious religious thinking. (A quick Monty Hall simulation is sketched below.)
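
    A minimal sketch of that Monty Hall point, assuming the standard rules (the host always opens a non-chosen door that he knows hides a goat):

    ```python
    # Monty Hall: a car is behind one of three doors. After you pick,
    # the host opens a different door that he knows hides a goat.
    # Simulation shows switching wins ~2/3 of the time, staying ~1/3.
    import random

    def win_rate(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)
            pick = random.randrange(3)
            # Host opens some door that is neither your pick nor the car.
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print("stay:  ", win_rate(switch=False))  # ~0.33
    print("switch:", win_rate(switch=True))   # ~0.67
    ```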

    For example, this is a common religious argument:

    "If I had some disease that had a 1 in a million chance of survival and I survived it, it's not because I was that one in a million, it's because God did it"

    This is a Base Rate Fallacy.

    Yet this makes intuitive sense, because there is a high probability that you would survive the disease given that god wanted it. But this ignores the base rate, or prior probability, of god's existence, which has to be so close to zero that it might as well be zero. And even if you assume that it's not, it's either a condemnation of god (he can only save 1 out of a million people?) or the actual chance of surviving the disease (the total probability) is not one in a million but zero, which refutes the original logic of the argument. (A worked Bayes calculation is sketched below.)
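
    To make the base-rate point concrete, here's a minimal Bayes calculation; every number in it is invented purely for illustration:

    ```python
    # Base rate (prior) at work. H = "god intervened",
    # E = "patient survived the 1-in-a-million disease".
    # All numbers are made up for illustration.

    p_h = 1e-9              # prior probability of intervention: assumed tiny
    p_e_given_h = 1.0       # grant, for the sake of argument, guaranteed survival
    p_e_given_not_h = 1e-6  # the quoted one-in-a-million survival rate

    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    posterior = p_e_given_h * p_h / p_e

    print(posterior)  # ~0.001: even a millionfold update leaves H improbable
    ```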
