"The first principle is that you must not fool yourself—and you are the easiest person to fool." —Richard Feynman

I'm only a novice when it comes to philosophy, but I think I've noticed a general trend within the field. First, someone comes up with a philosophical framework for explaining a certain phenomenon. Then someone else comes up with a counterexample that intuitively appears to falsify that framework. Philosophers are then faced with a couple of options: they can follow their intuitions and either modify the framework or reject it entirely, or they can continue to accept the framework and claim that it's in fact our intuition that's faulty.
Let me give a couple of examples, starting with one in the field of ethics. Utilitarianism, generally speaking, is the ethical theory that one ought to maximize the overall amount of happiness that exists. It seems like a perfectly sensible way of approaching the subject, but some versions of this concept are vulnerable to what Derek Parfit calls the Repugnant Conclusion. In the diagram below, each box represents a population; width measures group size and height measures average happiness. The Repugnant Conclusion is that according to some forms of utilitarianism, Z is preferable to A because Z's total area is greater than A's. In other words, having a massive number of people whose lives are barely worth living is preferable to having a (relatively) small number of people whose lives are extremely happy.
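The "area" comparison behind the Repugnant Conclusion is simple arithmetic: under total utilitarianism, a world's score is population size times average happiness. A minimal sketch, using made-up numbers for worlds A and Z (the specific figures are illustrative, not from Parfit):

```python
def total_utility(population_size, average_happiness):
    """Total utilitarianism scores a world as size x average happiness:
    the 'area' of the box in Parfit's diagram."""
    return population_size * average_happiness

# World A: a small population of very happy people.
a = total_utility(population_size=10_000, average_happiness=100)

# World Z: an enormous population whose lives are barely worth living
# (happiness just above zero, i.e. just barely worth having).
z = total_utility(population_size=10_000_000, average_happiness=1)

print(a)      # 1000000
print(z)      # 10000000
print(z > a)  # True: by total area, Z beats A
```

As long as each life in Z counts for anything positive at all, a large enough population makes Z's total exceed A's, no matter how happy A's inhabitants are. That's the whole engine of the paradox.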
Intuitively, this conclusion does seem repugnant—but is it our ethical theory or our intuition that we should modify in response? Perhaps we look at Z and imagine throngs of people toiling away in a wretched struggle to survive, when what we should realize is that a life "barely worth living" is still worth living. If these people looked back at their lives in their golden years, they could honestly say they were glad to have lived. Hmm... maybe such a world wouldn't be as bad as we think.
In the previous case it was pretty easy to imagine our intuition being wrong. But now let's take on a tougher example, this time from philosophy of mind. The leading philosophical framework for understanding what constitutes a mind is called functionalism (see also the SEP). Basically, it says that what makes a mind is not any particular material (e.g. neurons), but a way of functioning: it must receive inputs which alter its internal state and produce outputs. It could be made of neurons, silicon or anything else as long as it's properly organized and functional.
Enter the China Brain. Ned Block asks us to imagine the entire population of China hooked up to one another in some way (walkie-talkies, for example), with each person corresponding to a neuron. The individuals then communicate in a rudimentary manner that mimics the firing of interconnected neural pathways. The result is sometimes known as a Blockhead.
(Haha. Blockhead. Because his last name's Block.)
Imagine that we could somehow shrink this crowd of a billion, put them inside a human skull and attach them to the appropriate sensory inputs and motor outputs. If you had a conversation with this entity, who looks and acts exactly like a normal person, would it really be so hard to think of them as having a mind? Conversely, imagine that we could take someone's still-living brain out of their head and stretch its neurons out across hundreds of square miles. If you walked into the middle of this silky net of microscopic axons, would it seem any more like a thinking, feeling, experiencing mind than the China Brain does? Suddenly, the obvious conclusion may not be so obvious anymore.
This post is partly an excuse to share some really cool thought experiments, but I do have a point to make as well: We need to be careful about accepting intuitive philosophical arguments, because they can be engineered (intentionally or not) to push us toward an unwarranted conclusion. Daniel Dennett coined the term "intuition pump" to describe such cases. Often these arguments employ sophisticated misdirection to make us ignore factors that would dramatically change our judgment if properly understood.
Sometimes, too, an argument has at its core a subject that we as fallible humans are just flat-out bad at making judgments about, or even one that lies completely outside our realm of experience. I'm referring specifically to the cosmological argument, which I hope to eventually delve into more deeply. In arguing for Kalam, William Lane Craig proclaims that the temporal universe cannot always have existed because actual infinites cannot exist. He uses Hilbert's Hotel paradox as a demonstration of this, but all he's really demonstrated is that the math of infinity is incredibly unintuitive. He also asserts that whatever begins to exist has a cause, and it again seems staggeringly unintuitive to think that the universe could have sprung up uncaused out of absolute nothingness. But a complete lack of everything—space, time, even physical laws—is in such opposition to our everyday experience that making any definitive pronouncements about its properties would be pure folly.
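To see just how unintuitive that math is, here's a minimal sketch of the core move in Hilbert's Hotel: a hotel with infinitely many rooms, all occupied, can still take a new guest by shifting the guest in room n to room n + 1. Code can only check a finite prefix of the rooms, so treat this as an illustration of the reassignment rule, not a proof:

```python
def reassign(room):
    """Hilbert's Hotel move: shift the guest in room n to room n + 1.
    Rooms are numbered 1, 2, 3, ..."""
    return room + 1

N = 1000  # a finite prefix of the infinitely many occupied rooms
old_rooms = range(1, N + 1)
new_rooms = {reassign(r) for r in old_rooms}

print(1 in new_rooms)       # False: room 1 is now free for the new guest
print(len(new_rooms) == N)  # True: no two guests end up sharing a room
```

In the genuinely infinite case, every guest still has a room after the shift and room 1 is vacant, so "completely full" and "can accommodate more" are compatible. That's strange, but strangeness isn't the same as impossibility.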
So here's the moral of the story: In all aspects of life, theological and ordinary alike, be skeptical about relying on intuition to solve problems. Your mind is better suited to some tasks than others, and it's beset with biases at every turn. It's easy for subtle yet crucial details to escape your notice, drastically skewing your judgment. Consider a given issue from many perspectives and try to think of what variables you may be leaving out—even when the answer seems clear-cut. Because as satisfying as it is to debunk pseudoscientists and expose charlatans, the most important part of being a skeptic isn't questioning other people. It's questioning yourself.