CHAPTER 6: KNOWLEDGE
From Great Issues
in Philosophy, by James Fieser
Home:
www.utm.edu/staff/jfieser/120
Copyright 2008,
updated 4/1/2011
CONTENTS
A. Skepticism
Radical Skepticism
Criticisms of Radical Skepticism
B. Sources of Knowledge
Experiential Knowledge
Non-Experiential Knowledge
Rationalism and Empiricism
C. The Definition of Knowledge
Justified True Belief
The Gettier Problem
D. Truth, Justification and Relativism
Theories of Truth
Theories of Justification
What’s so Bad about Relativism?
E. Scientific Knowledge
Confirming Theories
Scientific Revolutions
For Reflection
1. What reasons might you have for doubting that an object
in front of you, such as a table, actually exists?
2. What are the common ways in which you acquire knowledge,
and which is most reliable?
3. If you claim to know something, do you need to have
evidence to back up your claim?
4. What does it mean for a statement to be true?
5. Everyone believes that George Washington existed; what
other beliefs is that belief based upon, and what, in turn, are those based upon?
6. Are scientific theories merely the collective opinions of
scientists, or do such theories give us genuine knowledge of the real world?
Some years ago, 39 members of an organization called
Heaven's Gate committed suicide in the belief that they were shedding their earthly
bodies to join an alien spaceship that was following the path of a comet. The
centerpiece of the cult's belief system was that there are superior beings out
there in the universe that exist on a higher level than we do on earth. They
have perfect bodies, roam the galaxy in spacecraft, and have mastered time
travel. Occasionally these aliens send an away team to earth, who temporarily
take the form of human beings, and assist willing human students in transforming
to this higher level. Jesus and his disciples, they believed, were an earlier
away team of alien teachers. Another away team appeared in the late 20th
century. After the right training, human students would need to kill their
physical bodies, which would release their spirits into the atmosphere. Nearby
alien spacecraft would then retrieve the spirits and provide them with perfect
bodies.
It's one thing to have an interesting idea about
how the universe runs. It is quite another thing to know that the idea
is true. Heaven's Gate members indeed claimed to know the truth of their views.
That knowledge, they explained, begins when superior aliens implant special
wisdom in the minds of select human students; this knowledge is further refined
as they study under their alien teachers. But does this count as genuine
knowledge? One of the central concerns of philosophy is to understand the
concept of knowledge, which might help us distinguish between the convictions
of a Heaven's Gate believer and the convictions of, say, a scientist. We want
to see what precisely it means to "know" something, and what the
legitimate avenues are for gaining knowledge. We'd also like to know how to
respond to skeptics who say that all knowledge claims – including scientific
ones – are just as uncertain as the views of Heaven's Gate believers. These
are the primary concerns in the philosophical study of the concept of
knowledge, which goes by the name epistemology – from the Greek words episteme
(knowledge) and logos (study).
There are two main ways that we normally use the
term "knowledge." First I might say that I know how to do some task,
like fix a flat tire on my car or run a program on my computer. This is procedural
knowledge, which involves skills that we have to perform specific chores.
Second, I might say that I know some proposition, such as that "Paris is the capital of France." This is propositional knowledge – knowledge
about some fact or state of affairs in the world. As important as procedural
knowledge is in our daily lives, it is propositional knowledge that interests
philosophers and will be the focus of this chapter.
A. SKEPTICISM
According to Heaven's Gate believers, their knowledge about
the superior alien race came principally from the aliens themselves who took
the form of human teachers. However, once they became human, the alien teachers
were stripped of their previous memories and knowledge. All that remained for
them was a hazy image of the higher level, which they struggled to convey to
their human students. Ironically, they explained, the aliens purposely imposed
this knowledge restriction on themselves since "too much knowledge too
soon could potentially be an interference and liability to their plan."
Immediately we should be suspicious about the belief system of the Heaven's
Gate cult since the sources of their knowledge are so shaky. Not only must the
human students blindly trust the statements of their supposed alien teachers,
but the alien teachers themselves have no clear memories of their previous
alien lifestyle. Genuine knowledge must have some evidence to back it up, which
we don't see here. It's thus pretty natural for us to be skeptical about cults
like Heaven's Gate that make extravagant claims with little concrete evidence.
If we didn't have this built-in suspicion we'd be suckered into every
harebrained scheme that came along.
But how far should our skepticism go? As long as
there have been philosophers on this planet, there have been skeptics who have
cast doubt on even our most natural beliefs, such as my belief that the table
in front of me actually exists. One ancient philosopher, for example, believed
that everything in the world changed so rapidly that, when someone spoke to him,
he couldn't trust that the words meant the same thing by the time they reached
his ears. He thus wouldn't verbally reply to anyone, but would only wiggle his
finger indicating that he heard something. While this is quite an extreme
reaction, it vividly illustrates the notion of philosophical skepticism
-- the view that there are grounds for doubting claims that we typically take
for granted.
Radical Skepticism. There are many
kinds of philosophical skepticism, and one distinguishing factor involves the
extent of the skeptic's doubt. Local skepticism focuses on a particular
claim, such as the belief that God exists, or that there is a universal
standard of morality, or that there is intelligent life elsewhere in the
universe. In each case, the skeptic would argue that we should doubt the specific
claim in question. Many of us are local skeptics about at least some beliefs
that others hold, and while we are skeptical about some issues we might be full
believers on others. For example, I might be a religious skeptic about God's
existence, but not be a moral skeptic about a universal standard of morality.
And then there is radical skepticism, which maintains that all of
our beliefs are subject to doubt. For any belief that we propose, we cannot
know with certainty whether that belief is true or false. This is the type of
skepticism that has attracted the most interest among philosophers. A couple
centuries ago traditional thinkers argued that this kind of skepticism is a
danger to everything that we hold sacred and it threatens to set civilization
adrift on an ocean of chaos. The foremost task of philosophers, they argued,
should be to combat radical skepticism and establish the certainty of our most
important beliefs. Time has shown, though, that this was a false alarm: radical
skepticism has not done any apparent damage to society. In fact, radical
skeptics have maintained that there is a special benefit to skepticism: it can
make us more tolerant of others when we realize that we ourselves can't claim
to have superior knowledge.
There are three general strategies for defending
radical skepticism, each named after its originator. The first is Pyrrhonian
skepticism, which was inspired by the ancient Greek philosopher Pyrrho (c.365-c.275
BCE). While Pyrrho wrote nothing, through his teachings he started a skeptical
tradition whose aim was to suspend belief on every possible issue. The
Pyrrhonian position is this: for any so-called fact about the world, there are
countless ways of interpreting it, none of which we can prefer above another;
we should thus suspend belief about the nature of that thing. Take, for example,
a red ball that's in front of me. My eyes tell me one thing about it, but my
sense of touch tells me an entirely different thing. To someone else who is
color blind or has chapped hands, it will have a different set of features. To
a dog it will appear more different still. Suppose that someone was shrunk to
the size of a molecule and sitting on the ball: the ball's surface would seem
flat, not round. Suppose someone else expanded to the size of a mountain and
was looking down on the ball: it would appear to be a speck with no
recognizable features at all. We get used to the way that we perceive things
like a red ball, and we assume that the ball actually has the features that we
perceive. According to the Pyrrhonian skeptic, there's no basis for preferring
our individual perspective over any other one. Arguments supporting any claim
to knowledge will always be counter-balanced by opposing arguments, thus
forcing the suspension of judgment on the original knowledge claim. Views
of the physical world, God, morality and everything else are therefore all merely a
matter of perspective, and the wisest course of action for us is to abstain
from believing those views. Doubting everything, Pyrrhonians argue, will give
us a sense of peace since we'll no longer be pulled back and forth in
controversies about science, God, morality, politics, or anything else.
The second approach to radical doubt is Humean
skepticism, defended by the Scottish philosopher David Hume (1711-1776).
According to this view, the human reasoning process is inherently flawed and
this undermines all claims to know something. The problem is that when we list
the reasons for our various beliefs about the world, we'll find that many of
the explanations are contradictory. For example, if I follow one course of
reasoning, I'll come to the conclusion that the ball in front of me really is
round. But if I reason in another way, I'll conclude that the ball's roundness
is just a matter of my perspective. Maybe the ball really is round; then again,
maybe it's not. It makes no difference what the truth of the matter is since we
now can't trust anything that human reason tells us. It's like being tested on
a math problem: it makes no difference if you accidentally come up with the
right answer. Once you've made a mistake in your calculations, your solution to
the math problem is wrong, and you get no partial credit. Similarly, human
reasoning is defective, and it's irrelevant whether it accidentally leads us to
the truth. After exposing a series of contradictions within the human
reasoning process, Hume makes this dismal assessment:
The intense
view of these manifold contradictions and imperfections in human reason has so
wrought upon me, and heated my brain, that I am ready to reject all belief and
reasoning, and can look upon no opinion even as more probable or likely than
another. [Treatise of Human Nature, 1.4.7.8]
Thus, for Hume, everything that we reason about is based on
faulty mental programming, and we need to regularly remind ourselves of this
before we get too confident about what we claim to be true.
The third approach to radical doubt is Cartesian
skepticism, named after French philosopher René Descartes (1596-1650). On
this view, our entire understanding of the world may just be an illusion, and
this possibility casts doubt on any knowledge claim that we might make.
Descartes himself was not a skeptic, but he tentatively used a compelling argument
for radical skepticism as a tool for developing a non-skeptical philosophical
system. Descartes speculates: what if he was just a mind without any body,
bobbing around in the spirit realm, and everything he perceived about the world
was implanted in his mind by a powerful evil demon? Everything he assumes about
the world, then, would be false. He describes this scenario here:
I will suppose that ... some evil demon
with extreme power and cunning has used all his energies to deceive me. I will
consider that the sky, the air, the earth, colors, shapes, sound, and all other
external things are nothing but deluded dreams, which this genius has used as
traps for my judgment. I will consider myself as having no hands, no eyes, no
flesh, no blood, nor any senses, but falsely believing that I have all these things.
[Meditations, 1]
Descartes didn't actually believe that he was being
manipulated by an evil demon. His point is that this is a theoretical
possibility that undermines all of our knowledge claims. I look at a ball in
front of me; while it seems to really be there, I can't know this for sure
since my experience might be an illusion imposed on me by the evil demon.
Of the three approaches to radical skepticism,
the Cartesian version has captured people's imagination the most. Science
fiction movies galore play off this theme. For example, in the film The
Matrix, people's bodies are suspended in tubs of goo and their brains are
wired into a massive computer that generates an artificial reality. Similarly,
a commonly used example in contemporary philosophy is that of the brain in a
vat: a mad scientist puts a person's brain into a glass jar and wires it to
a supercomputer that creates an artificial reality. Whether the mechanism is an
evil demon, the Matrix, or a mad scientist, the victims' experiences are so
convincing that, from their perspective, it's impossible to tell that the
perceived reality is fake.
Criticisms of Radical Skepticism.
With arguments as shocking as these, traditional philosophers wasted no time
trying to stamp out the fire of radical skepticism. Four arguments were
commonly used. First, even if there are ample reasons for me to doubt
everything, there is still one truth that is irrefutable: my own existence.
For, even if I say "I doubt that I exist," I must still be present to
do the doubting. The act of doubting itself requires a doubter, and so my own
existence will always be immune to skeptical doubts. This was the criticism
that Descartes himself made of radical skepticism, which he encapsulated in the
expression "I think, therefore I am." But radical skeptics have not
been impressed by this maneuver. The problem with Descartes' solution is that
it assumes too many things about what the "I" is behind all those
doubts. Most importantly, it takes for granted that the "I" is a unified,
conscious thing that continues intact as time moves on. But, according to the
skeptic, this conception of the "I" relies too heavily on memory. I
assume that I'm the same person now that I was a few moments ago because that's
how it seems in my memory. And memory is a very easy target of doubt. Imagine
that, every half second, an evil demon wiped clean all of my memories and gave
me entirely new ones. One moment I think I'm a farmer, a half-second later a
caveman, a half-second later a frog. For all I know, says the skeptic, that's
what's actually happening to me right now and in that situation it would seem
pretty meaningless for me to assert that "I exist".
A second common attack on radical skepticism is
that we can't live as skeptics in our normal lives. Sure, there is the
occasional oddball, like the finger-wiggling ancient philosopher described
earlier. But if we persistently doubted everything, then we wouldn't eat when
hungry, move from the path of speeding cars, or a thousand other things that we
do during a typical day. We'd hesitate and question everything, but never act.
Radical skeptics have not been impressed with this argument either. According
to Hume, we have natural beliefs that direct our normal behavior and override
our skeptical doubts. As legitimate as radical skepticism is, nature doesn't
give us the option to act on it. He makes this point here:
Most fortunately it happens, that
since reason is incapable of dispelling these clouds [of skepticism], nature
herself suffices to that purpose, and cures me of this philosophical melancholy.
. . . I dine, I play a game of backgammon, I converse, and am merry with my
friends; and when, after three or four hours' amusement, I would return to
these speculations, they appear so cold, and strained, and ridiculous, that I
cannot find in my heart to enter into them any further. [Treatise,
1.4.7]
Thus, according to Hume, we waver back and forth between
skepticism and natural beliefs. When we realize how philosophically unjustified
natural beliefs are, we are led down the path of skepticism. The doorbell then
rings, and we're snapped out of our philosophical speculations and back to our
normal routines and natural beliefs.
A third attack on radical skepticism is that the
skeptic's position is logically self-refuting. The skeptic's main point is
this:
- We cannot know any belief with certainty.
Let's call this "the skeptic's thesis." However,
if I put forward the skeptic's thesis, then I am implying that I know it with
certainty. It is like saying this:
- We know with certainty that we cannot know any belief
with certainty.
The skeptic's thesis itself seems to be an exception to the
very point that it is making. Thus, the skeptic's thesis is logically
inconsistent with itself and we should reject it. But skeptics have a response
to this criticism, which they sometimes explain using the metaphor of a
digestive laxative. We take laxatives to rid our digestive system of unwanted
stuff. But as the laxative takes effect, the laxative itself is expelled from
the digestive system along with everything else. The skeptic's thesis, then, is
like a laxative: we take it to rid our minds of all unjustified beliefs, and in
the process we expel the skeptic's thesis itself. It's a higher level of
skepticism in which we set aside everything, including the skeptic’s thesis.
A fourth criticism of radical skepticism is that
it rests on an unrealistically high standard of evidence. There are two basic
levels of evidence: complete and partial. The skeptic assumes that genuine
knowledge requires complete evidence, but complete evidence is not achievable. Try
as we might, says the skeptic, we can never prove any assertion with absolute
certainty; some skeptical argument will always cast doubt on that assertion. The
solution to this skeptical challenge is to reduce the qualifications for
knowledge and be content with partial evidence. To illustrate, suppose I want
to gather enough evidence to support the claim that "I know that there is
a ball in front of me." I first get evidence through my senses: I perceive
the ball with my eyes. I could then get supporting evidence by having others
stand in front of the ball and report whether they see it too. I could get even
stronger evidence by using scientific equipment that would measure the ball's
density and detect the light spectrum reflected off the ball. This may seem
extreme, but even then the evidence is still not complete. I could bring in a
team of physicists to study the ball and write up an exhaustive report. I could
hire a second team to do more tests. But even this is not complete since there
are always more tests that I could run. Complete evidence is not possible, and
the radical skeptic knows this. What, though, if we lower the requirements for
what counts as knowledge? We could allow partial evidence, but not require the
evidence to be complete. In the case of the ball, it might be enough to simply
rely on the evidence that I gain about it through my senses, as incomplete as
it is. This would put radical skepticism to rest. The problem with this
solution, though, is that it doesn't refute radical skepticism, but surrenders
to it. It concedes the impossibility of ever having genuine knowledge with
absolute certainty. What we're left with is a version of knowledge that's so diluted
that it doesn't count for much more than a personal conviction. After viewing
the ball with my eyes, I may as well say "I have a partially supported
belief that a ball is in front of me." Inserting the word
"knowledge" here would add nothing.
While radical skepticism seems excessive, it
nevertheless poses a challenge to genuine knowledge that can't be easily
combated. It may well be impossible to ever refute radical skepticism and so it
might forever remain the archenemy of knowledge. While attempts to destroy this
villain may ultimately fail, struggling with the issue helps illuminate the
nature of knowledge itself. It's much like research into seemingly incurable
diseases: even if scientists can't discover a cure for cancer, the
investigation still gives them a greater insight into human physiology. As we
move on to explore the concept of knowledge in more detail, skepticism will
always be lurking in the background, often forcing us to reject some theories
and revise others.
B. SOURCES OF KNOWLEDGE
We claim to know a lot of facts, for example, that fire is
hot, that George Washington was the first U.S. president, and, in the case of
Heaven's Gate believers, that superior aliens are roaming the galaxy. Our knowledge
claims vary dramatically, and frequently we claim to know something that we
really don't know. One way of understanding the concept of knowledge is to look
at the different ways in which we acquire knowledge.
Philosophers have traditionally maintained that
there are two types of knowledge from two entirely different sources. First,
there is knowledge through experience: seeing something, hearing about
something, feeling something. This goes by the Latin term a posteriori, which
literally means knowledge that is posterior to – that is, after – experience.
Second, there is knowledge that does not come from experience, but perhaps
instead from reason itself, such as logical and mathematical truths. This is
called a priori knowledge, which, from Latin, literally means knowledge
that is prior to experience.
Experiential Knowledge. Experiential
(a posteriori) knowledge is of many types, the most obvious of which
involves perception. Each of our five senses is like a door to the
outside world; when we throw them open, we are flooded with an endless variety
of sights, sounds, textures, smells and tastes. When I look at a cow in front
of me and say "I know that it is brown," the source of this knowledge
rests upon my visual perception of the brown cow. While perception is perhaps
the dominant source of experiential knowledge, it can also be
misleading. There are optical illusions, such as a stick which appears bent
when in water; there are mirages, such as the appearance of water puddles on
hot roads. Skeptics, as we’ve seen, have exposed endless problems with
perceptual knowledge such as these.
A second source of experiential knowledge is introspection,
which involves directly experiencing our own mental states. Introspection is
like a sixth sense that looks into the most intimate parts of our minds, which
allows us to inspect how we are feeling and how our thoughts are operating. If
I go to my doctor complaining of an aching back, she'll ask me to describe my
pain. Through introspection I then might report, "Well, it’s a sharp pain
that starts right here and stops right here." The doctor herself cannot
directly experience what I do and must rely on my introspective description.
Like perception, introspection is not always reliable. When surveying my mental
states, I may easily misdescribe feelings, such as mistaking a feeling of
disappointment for a feeling of frustration. Other mental states seem to defy
any clear descriptions at all, such as feelings of love or happiness.
A third source of experiential knowledge is memory.
My memory is like a recording device that captures events that I experience
more or less in the order that they occur. I remember my trip to the doctor and
the pain that I described to her at the time. This recollection itself
constitutes a new experience. Again, experiential knowledge through memory is
not always reliable. For example, I might wrongly recollect that there's pizza
in the refrigerator, completely forgetting that I ate it all last night. Also,
sometimes overbearing people like police investigators can make us think that
we remember something that never happened. And then there's the phenomenon of déjà
vu, the feeling that we've encountered something before when we really
haven't.
A fourth source of experiential knowledge is the testimony
of other people. Take, for example, my knowledge that George Washington was the
first U.S. president. Since Washington died centuries before I was born, I
couldn't know this through direct perception. Instead, I rely on the statements
in history books. The authors of those books, in turn, rely on accounts from
earlier records, and eventually it traces back to the direct experience of
eyewitnesses who personally knew George Washington. A large portion of our
knowledge rests on testimony – facts about people we've never seen or places
we've never been to. While it's convenient for us to trust the testimony of
others, there is often a high likelihood of error. This is particularly so with
word-of-mouth testimonies: talk is cheap, and we're often sloppy in the
accounts that we convey to others. Testimonies from written sources are usually
more reliable than oral sources, but much depends on the integrity of the
author, publisher, and the methods of fact-gathering. With oral or written
sources, the longer the chain of testimony is, the greater the chance is of
error creeping in.
Perception, introspection, memory, and testimony:
these are the four main ways of acquiring knowledge through experience. Did we
leave any out? There are a few contenders, one of which is extrasensory
perception, or ESP. For example, you might telepathically access my mind
and know what I'm thinking. Or, through clairvoyance, you might be aware of an
event taking place far away without seeing it or hearing about it. If ESP
actually worked, we might indeed classify it among the other sources of
experiential knowledge. But does it? Typical studies into ESP involve subjects
guessing symbols on cards that are hidden from view. If the subject does better
than chance would predict, this is presumed to be evidence of ESP. However, the most
scientifically rigorous experiments of this sort have failed to produce
results better than chance. While we regularly hear rumors of
people having ESP, we have little reason to take them seriously. The safe
route, then, would be to leave ESP off the list of sources of experiential
knowledge.
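The chance baseline in such card-guessing studies can be made concrete with a little probability. The sketch below is illustrative, not part of the original text; it assumes a Zener-style deck of five symbols, so a pure guess succeeds with probability 1/5, and it uses the binomial distribution to compute how likely a given number of correct guesses is by chance alone.

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.2) -> float:
    """Probability of getting at least k guesses right out of n trials,
    assuming each guess succeeds with probability p by pure chance
    (p = 0.2 models a five-symbol Zener-style deck)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Over 25 trials, chance alone predicts about 5 correct guesses;
# scoring 10 or more correct is already quite unlikely without ESP.
print(p_at_least(5, 25))   # better than even odds by chance alone
print(p_at_least(10, 25))  # a rare event under the chance hypothesis
```

A subject would have to beat thresholds like these consistently, across rigorously controlled trials, before the results counted as evidence of anything beyond guessing.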
Consider next religious experiences. Believers
sometimes say that they receive prophecies from God, or are guided by him, or
know something through faith. Christian theologian John Calvin even spoke of a
sense of the divine that we all have, which informs us that God exists. Might
any of this count as experiential knowledge? The question is a complex one
considering the wide range of religious experiences that believers report.
Let's narrow the question to two representative types: knowledge through faith
and prophetic knowledge. Regarding faith, as typically understood, faith
involves belief without evidence, such as faith that God exists, or that the
bodies of the dead will be resurrected in the future, or that our souls will be
reincarnated in different bodies. These faith beliefs may be important in our
personal religious lives, but there is a problem when we claim to know
something through faith. One of the chief requirements for something to count
as "knowledge" is that there is evidence to support it – as we'll see more
clearly in the next section. But since faith is belief without evidence, then, technically
speaking, faith wouldn't qualify as knowledge. Prophetic knowledge faces the
same challenge as ESP: are prophecies any more successful than educated
guesses? Imagine an experiment that we might conduct in which half of the
subjects were prophets, and the other half non-prophets. We then asked both
groups to make predictions about the upcoming year; at the end of the year we
then checked the results. How would the prophets do? The odds are slim that we
could even conduct the experiment since prophets would say that they can't
prophesy on demand: it's a unique and unpredictable revelatory experience.
They might also say that their revelations from God are not the sort of things
that can be confirmed in the newspaper. If prophetic experiences are genuine
sources of knowledge, the burden of proof seems to be on the believer. In the
meantime, it would be premature to include them among the normal sources of
experiential knowledge.
Non-Experiential Knowledge. Turning
next to non-experiential (a priori) knowledge, this source of
information is much more difficult to describe. Some philosophers depict it as
knowledge that flows from human reason itself, unpolluted by experience. We
presumably gain access to this knowledge through rational insight. Usual
examples of non-experiential knowledge are mathematics and logic. Take, for
example, 2+2=4. Indeed, I might learn from experience that two apples plus two
more apples will give me four apples. Nevertheless, I can grasp the concept
itself without relying on any apples; I can also expand on the notion in ways
that I could never experience, such as with the equation 2,000,000 + 2,000,000
= 4,000,000. Logic is similar; take for example the following argument:
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.
When we strip this argument of all its empirical parts –
men, mortality, Socrates – the following structure is revealed:
All X are Y
Z is an X
Therefore Z is a Y
This logical structure is something that we know
independently of experience. In addition to math and logic, there are other
truths that we know non-experientially, such as these:
· All bachelors are unmarried men.
· A sister is a female sibling.
· Red is a color.
In each of the above cases, the truth depends entirely on
the concepts within these statements. In the first, "unmarried men"
is part of the definition of "bachelor"; the statement is thus true
by definition, irrespective of our experiences.
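The syllogistic pattern above ("All X are Y; Z is an X; therefore Z is a Y") can also be modeled computationally. The sketch below is an illustration added here, not part of the original text: it treats "All X are Y" as a subset relation and "Z is an X" as set membership, with hypothetical example sets, and shows that once the premises hold, the conclusion cannot fail.

```python
# Hypothetical extensions of the categories in the Socrates syllogism.
mortals = {"Socrates", "Plato", "Fido"}  # the Ys: everything mortal
men = {"Socrates", "Plato"}              # the Xs: a subset of the mortals

def valid_instance(X: set, Y: set, z) -> bool:
    """Model 'All X are Y' as X <= Y and 'z is an X' as z in X.
    Given those premises, the conclusion 'z is a Y' is guaranteed
    by the structure alone, whatever X, Y, and z happen to be."""
    assert X <= Y and z in X, "premises must hold"
    return z in Y

print(valid_instance(men, mortals, "Socrates"))  # True
```

The point matches the text: we can verify the inference without knowing anything empirical about men or mortality, because validity depends only on the subset-and-membership structure.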
Two concepts have been important in fleshing out
the notion of non-experiential knowledge. First is necessity:
non-experiential truths are necessary in that they could never be false,
regardless of how differently the world was constructed. 2+2 would equal 4 in
every conceivable science fiction scenario of the universe. Even if no human
being ever existed, it would still be true that "All bachelors are
unmarried men" based on the meaning of the words themselves. Experiential
knowledge, though, is different in that it is contingent, as opposed to
necessary: it could be false if the world had unfolded differently. Take the
statement "George Washington was the first U.S. president," which is
an item of experiential knowledge. It is of course true as things stand now.
But we can imagine a thousand different things that might have prevented
Washington from becoming president. What if he was sent to an orphanage
for chopping down the family cherry tree? What if he choked to death on his
wooden teeth prior to his inauguration? The truth of all experiential
knowledge hinges on the precise construction of the world as it currently is.
The other concept embedded in the notion of
non-experiential knowledge is that of an analytic statement: a statement
that becomes self-contradictory if we deny it. Take, for example, the statement
"All bachelors are unmarried men." Its denial would be this:
It is not the case that all
bachelors are unmarried men.
This is clearly self-contradictory since it would be like
claiming that there exists some bachelor who is married, which is impossible.
Many traditional philosophers have held that non-experiential knowledge is
analytic in the above sense. Denying math or logic would produce a
self-contradiction. Experiential knowledge, on the other hand, is synthetic:
denying it won't produce a self-contradiction. Take again the statement
"George Washington was the first U.S. president," which we know is
true from experience. Its denial would be this:
It is not the case that George
Washington was the first U.S. president.
While this statement is false as things actually
stand, it
isn't self-contradictory since, if the world had unfolded differently,
the U.S. might well have had a different first president.
Rationalism and Empiricism. An
important philosophical war took place in the 17th and 18th
centuries between two schools of thought. Briefly, on one side were rationalists
from continental Europe who were critical of sense experience and felt that
genuine knowledge was acquired non-experientially through reason. The leaders
on this side were René Descartes, Benedict Spinoza, and Gottfried Leibniz.
On the other side were empiricists from the British Isles who felt that
non-experiential reasoning would give us nothing, and experience was the only
path to knowledge. John Locke, George Berkeley and David Hume were the leaders
on this side. The war finally ended when Immanuel Kant proposed a compromise:
true knowledge depends on a mixture of experiential and non-experiential
knowledge. We need both, Kant argued, otherwise our whole mental system will
not operate properly.
Let's return to the rationalist position,
particularly the version championed by Descartes. Sense experience, he argued,
is seriously flawed and cannot be the source of important ideas that we have.
Take, for example, the idea of a triangle. Look around the world and you'll
never see a perfect triangle, whether it's a shape that we draw on a piece of
paper or the side of a pyramid in Egypt. On close inspection, they'll all have
irregular lines. The fact remains, though, that we do have conceptions of
perfectly-shaped triangles. Rationalism, according to Descartes, offers the
best explanation of how we get those perfect ideas. There are two central
components to the rationalist position: innate ideas and deductive reasoning. Innate
ideas, according to Descartes, are concepts that we have from birth that
serve as a foundation for all of our other ideas. While they are inborn, we
only become aware of them later in life – when we reach the "age of
reason" as one philosopher called it. Innate ideas are in a special class
of their own: we know them with absolute certainty, and it's impossible for us
to acquire them through experience. While rationalists were reluctant to offer
a complete list of innate ideas, the most important ones include the ideas of
God, infinity, substance and causality. Regarding deductive reasoning,
Descartes held that from our innate ideas we deduce other ideas. It's like in
geometry where we begin with foundational concepts of points and lines, and
deduce elaborate propositions from these about all kinds of geometrical shapes.
Descartes was in fact inspired by the deductive method of geometry and
maintained that we deduce ideas in the same way. Through deduction, the
certainty that we have of innate ideas transfers to the other ideas that we
derive from these. Mistakes creep in only when our deductions become so long
that they rest on memory. All knowledge, he argued, including scientific
knowledge, proceeds from innate ideas and deductive demonstration.
Turn now to empiricism, particularly Locke's
version. Locke's first task was to challenge the theory of innate ideas: none
of our concepts, he argued, are inborn. Our mind is from birth like a blank
sheet of paper, and it is only through experience that we write anything on it.
One problem with innate ideas is that we can explain the origin of each one of
them through experience. The idea of God, for example, is not innate as
Descartes supposed, but comes from our perceptions of the world around us.
There's thus no reason to put forward the theory of innate ideas when
experience explains these notions just fine. Locke also found fault with the
rationalist position that we don't become aware of innate ideas until later in
life. It's not clear how such ideas can linger in our minds for so many years
before we can be conscious of them. And by that time our minds have been
flooded with experience, and a late-blooming innate idea wouldn't contribute
anything to our knowledge of the world. Empiricists also challenged the
rationalists' emphasis on deductive demonstration. We don't expand our
knowledge by deducing new concepts from foundational ones, as mathematicians
do. Geometry is the wrong role model to follow. Instead, we acquire new
knowledge through induction, such as making generalizations from our
experiences. I hit ten light bulbs with a hammer and each breaks; I generalize
from this that all similar light bulbs that I hit with a hammer will also
break. We first perceive, then we generalize. We perceive some more, then
generalize some more. That's how we push knowledge forward.
And then along comes Kant, the great mediator in
the rationalism-empiricism debate. Kant was sympathetic with empiricism but
thought that it suffered from a serious problem: it doesn't offer a good
explanation for how we acquire non-experiential knowledge, such as mathematics
and logic. Complex mathematical formulas in particular could not come from
sense perception. There is a quality of self-evidence and certainty that they
have, which fallible experience could never produce. Kant's solution was not
to resurrect the old theory of innate ideas. Instead, he argued that there are
innate organizing structures in our minds that automatically systematize our
raw experiences – sort of like a skeleton that gives shape to flesh. For
example, as I watch someone hit a light bulb with a hammer, raw sensory
information rushes in through my eyes. My mind immediately reconstructs this
information into a three-dimensional image and puts it on a timeline. My mind
then imposes other organizational schemes on the sensory information. It makes
me see the hammer and light bulb as separate things, rather than just a single
blob of stuff. It then makes me see the hammer as the cause of the light
bulb breaking. My experience of the world, then, is a fusion of innate
structures and raw experience. The innate part is a concession to rationalism, and
the experience part a concession to empiricism.
Rationalism and empiricism in their original
forms are outdated theories today, in part because of Kant’s insights.
Nevertheless, they still are useful for depicting two fundamentally different
ways in which we assess the sources of knowledge. Rationalism will continue to
be attractive whenever we have knowledge that cannot be easily explained by
experience. Empiricism will be attractive whenever the claims of innateness
look fishy.
C. THE DEFINITION OF KNOWLEDGE
Throughout our discussion of knowledge so far, certain
concepts have appeared again and again. There's the question of the truth
of a claim. There is also the matter of our personal belief conviction
for a claim. There are also issues about the evidence or justification
that we have for a claim. Tradition has it that these are the three key
elements to knowledge: truth, belief and justification. For example, when I say
"I know that Paris is the capital of France", this means
It is true that Paris is the capital of France.
I believe that Paris is the capital of France.
I am justified in believing
that Paris is the capital of France.
For short, contemporary philosophers call this definition of
knowledge justified true belief – often abbreviating it "JTB".
The crucial point about this definition is that all three components must be
present: if any one of the three is absent, then it doesn't count as knowledge.
Justified True Belief. To better
understand the JTB definition of knowledge, let's go through each of the three
elements. First is that the statement must be true. I can't claim to
know that Elvis Presley is alive, for example, if he is in fact dead. Knowledge
goes beyond my personal feelings on the matter and involves the truth of things
as they actually are. Some critics of the JTB definition of knowledge question
whether truth is always necessary in our claim to know something. For example,
based on the available evidence of the time, scientists in the Middle Ages claimed
to know that the earth was flat. Even though we understand now that it isn't,
at the time they had knowledge of something that was false. Didn't they? In
response, it may have been reasonable for scientists back then to believe the
world was flat, but they really didn't know that it was. Their knowledge
claims were premature in spite of how strong their convictions were. This is a
trap that we fall into all the time. While talking with someone I may say
insistently, "I know that Joe's car is blue!" When it turns
out that Joe's car is in fact red, I have to apologize for overstating my
conviction. Truth, then, is an indispensable component of knowledge.
Second, I must believe the statement in
order to know it. For example, it's true that Elvis Presley is dead, and there is
enormous evidence to back this up. But if I still believe that he is alive, I
couldn't sincerely say that I know that he is dead. Part of the concept
of knowledge involves our personal belief convictions about some fact,
irrespective of what the truth of the matter is. Critics of the JTB definition
of knowledge sometimes think that belief isn't always required for our claims
to know something. For example, I might say "I know I'm growing old, but I
don't believe it!" In this case, I have knowledge of a particular fact
without believing that fact. In response, if I say the previous sentence, what
I actually mean is that I'm not capable of imagining myself getting old or I
haven't yet emotionally accepted that fact. I just make my point more
dramatically by saying "I don't believe it!" Instead I really do
believe it, but I don't like it.
Third, I must be justified in believing
the statement insofar as there must be good evidence in support of it. Suppose
that I randomly pick a card out of a deck without seeing it. I believe it is
the Queen of Hearts, and it actually is that card. In this case I couldn't
claim to know that I've picked the Queen of Hearts; I've only made a
lucky guess. Critics question whether evidence is really needed for knowledge.
For example, a store owner might say "I know that my employees are
stealing from me, but I can't prove it!" Here the store owner has
knowledge of a particular fact without any evidence for it. In response, the
store owner is really saying that he strongly believes that his employees are
stealing from him, but doesn't have enough evidence to press charges. Evidence,
then, is indeed an integral part of knowledge.
The Gettier Problem. For centuries
philosophers took it for granted that knowledge consists of justified true
belief. In 1963 a young philosophy professor named Edmund Gettier published a
three-page paper challenging this traditional view. He argued that there are
some situations in which we have justified true belief, but which do not count
as knowledge. This was dubbed "The Gettier Problem" and discussions
of it quickly dominated philosophical accounts of knowledge. Gettier's actual
illustrations of the problem are rather complex, but a simpler one makes
the same point.
Suppose that a ball in front of me appears to be
red. First, I believe it is red. Second, I'm justified in this
belief since that's how the ball appears to me. Third, it's also true
that the ball is red. I thus have a justified true belief that the ball is red.
However, it turns out that the ball is illuminated by a red light which casts a
red tint over it – a fact that I'm unaware of. Although the ball in reality is
red, under the light it would appear red to me even if the ball were a different
color. Consequently, I can't claim to know that the ball is red even
though I have a justified true belief that it is. I was fooled by the effects
of the red light, but made a lucky guess anyway. Again, the point of this
counterexample is to show that some instances of justified true belief do not
count as genuine knowledge. This suggests that the traditional JTB definition
of knowledge is seriously flawed.
What can we do to rescue the JTB account of
knowledge from the Gettier problem? A common response is to add a stipulation
to the definition of knowledge that would weed out counterexamples like the red
ball. Most of the Gettier-type counterexamples involve a case of mistaken
identity. In our current example, I mistake the appearance of a red-illuminated
ball for an actual red ball. Perhaps, then, we can stipulate that knowledge is
justified true belief except in cases of mistaken identity. More precisely, we
can add a fourth condition to the definition of knowledge in this way:
I know that the ball is red when,
(1) It is true that the ball
is red;
(2) I believe that the ball
is red;
(3) I am justified in my
belief that the ball is red;
(4) There is no additional fact
that would make my belief unjustified (for example, a fact about a red light).
According to the above, my belief about the red ball would
not count as knowledge since it wouldn't pass the fourth condition. That is,
there is indeed an additional fact regarding the red light that would make my
initial belief about the ball unjustified. That additional fact undermines –
or defeats – my original justification. We've thus saved the JTB definition of
knowledge, although cluttering it a little with a fourth condition. This
strategy is called the no-defeater theory (also called the indefeasibility
theory). A problem with this strategy, though, is that there are
possible counterexamples even to this – that is, situations in which we have
undefeated justified true belief that don’t count as true knowledge. This, in
fact, is a problem with most proposed solutions to the Gettier problem: if we
get creative enough, we will likely find a new counterexample that defies the
solution.
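Though the chapter itself is informal, the four-condition definition above has the structure of a simple boolean test, and the red-ball case can be run through it. The following is a purely illustrative sketch (the `Claim` class and function names are my own, not standard epistemological notation):

```python
# Toy model of the JTB and no-defeater definitions of knowledge.
# Illustrative only; the names and structure here are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Claim:
    is_true: bool        # condition (1): the statement is true
    believed: bool       # condition (2): the subject believes it
    justified: bool      # condition (3): the belief is justified
    has_defeater: bool   # condition (4): an additional fact undermines the justification

def is_jtb(c: Claim) -> bool:
    """Classical justified-true-belief definition."""
    return c.is_true and c.believed and c.justified

def is_knowledge(c: Claim) -> bool:
    """JTB plus the no-defeater (indefeasibility) condition."""
    return is_jtb(c) and not c.has_defeater

# The red-ball Gettier case: true, believed, and justified, but the hidden
# red light defeats the justification.
red_ball = Claim(is_true=True, believed=True, justified=True, has_defeater=True)
print(is_jtb(red_ball))        # True
print(is_knowledge(red_ball))  # False
```

Running the sketch on the red-ball case shows that it passes the classical JTB test yet fails the no-defeater test, which is exactly the point of the counterexample.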
D. TRUTH, JUSTIFICATION AND RELATIVISM
Truth and justification, we've seen, are two of the key
components of knowledge. They are also concepts that need some explanation
themselves. Let’s first look at the notion of truth.
Theories of Truth. The concept of
truth has many possible meanings. We talk about having true friends,
owning a true work of art, or someone being a true genius. In all
of these cases the word "true" means genuine or authentic. In
philosophy, though, the notion of truth is restricted to statements or beliefs
about the world – such as the statement that "My car is white" or
"Paris is the capital of France". While we all have gut feelings
about what it means for a statement to be true, philosophers have been
particularly keen on arriving at a precise definition of truth. Here's one
suggestion from a classic song:
"What is truth?" you ask
and insist,
"Correspondence to things that
exist?"
The answer, you fool, requires no
sleuth:
Whatever I say is the truth.
Want proof of the truth? I say so!
So there!
Purveyors of falsehood beware:
I'm sick of your lies, and, truth
be told,
I am the truth, behold!
The above account of truth is clearly satirical since no one
would seriously grant that the truth of all statements is grounded in the
assertions of one individual person. But what are the more serious alternatives
for definitions of truth? As usual in philosophy, there's much disagreement
about what the correct definition is. We will consider the three leading
candidates here.
The first and most famous definition of truth is
the correspondence theory: a statement is true if it corresponds
to fact or reality. This is the most commonsensical way of looking at the
notion of truth and is how standard dictionaries define the concept. A true
statement simply reflects the way things really are. Take the statement
"My car is white." This statement is true if it conforms to how the world
actually is, specifically whether my car is in fact painted white. As
compelling as the correspondence theory of truth seems, skeptics immediately
see one major flaw with it: we don’t have access to the world of facts. In
spite of my best efforts to discover the way things really are, I’m at the
mercy of my five senses, which, we’ve seen, are unreliable. While my senses
tell me that my car is white, the color receptors in my eyes may not be working
properly and my car may be a shade of yellow. For that matter, I may be living
in a world of hallucinations and don’t even own a car. The sad fact is I can
never reach beyond my perceptions and see the world as it really is.
With trivial issues, such as the truth concerning
the color of my car, I may be willing to simply pretend that I have direct
access to the world of facts and blindly trust my senses. This may serve my
immediate needs perfectly well. It isn’t so easy to pretend, though, when I
investigate the truth of more serious statements, such as whether “Bill
murdered Charlie.” Even if I have a mountain of evidence that implicates Bill,
such as fingerprints and eyewitness testimony, it’s impossible for me to turn
back the hands of time and directly access the scene of Charlie’s murder. I
only have hints about what the reality is. Similarly, if I’m investigating the
truth of the statement “God exists,” I can’t directly access the reality of an
infinitely powerful deity, even if God did exist and stood right in front of
me. The best I would have is some imperfect evidence that the mysterious being
standing before me was indeed God. Thus, the correspondence theory would not
permit us to say either that “It is true that Bill murdered Charlie” or “It is
true that God exists.”
A second famous definition of truth is the coherence
theory, which aims to address the shortcomings of the correspondence
theory. According to the coherence theory, a statement is true if it coheres
with a larger set of beliefs. Rather than attempting to match up our statements
with the actual world of facts, we instead try to see if our statements mesh
with a larger web of beliefs that support them. For example, the statement “my
car is white” is true if it coheres with a collection of other beliefs such as
“many cars are painted white,” “I perceive that my car is white,” and “other
people invariably report that my car is white.” With the coherence theory, we
avoid skeptical obstacles such as the unreliability of our senses and the
possibility that we are hallucinating. What matters is our web of beliefs,
which we all have access to – in contrast with a hidden world of facts that is
blurred by the limits of our sensory perceptions. We can even investigate
statements such as “It is true that Bill murdered Charlie” or “It is true that
God exists.” What matters here is whether these statements consistently fit
with other beliefs that we have – beliefs about the pieces of evidence against
Bill and beliefs about the evidence regarding a divine being.
Unfortunately the coherence theory faces serious
criticisms, the most important of which is that it is relativistic. That is, it
grounds truth in the changeable beliefs of human beings, rather than in an
unchanging external reality. According to the coherence theory, the standard
for all truth is the larger web of beliefs that people hold – beliefs about
white cars, criminal evidence, evidence for God’s existence, and countless
other issues. The problem is that belief systems come and go. Take beliefs
about criminal evidence as just one example. Many cultures throughout history
based criminal convictions on the evidence of supernatural omens: prophetic
visions, the flight path of birds, patterns in the guts of sacrificed animals.
That was their belief system which they relied on. In other cultures the
testimony of one eyewitness is sufficient to prove guilt. In our culture today
we have fingerprints, DNA samples and psychological profiles which all
contribute to our belief system about criminal guilt. The statement “Bill
murdered Charlie” could cohere with some belief systems, but not with others.
We typically think about truth as being absolute: either Bill murdered Charlie
or he didn’t. If truth hinges on a changeable belief system, though, truth is
no longer absolute.
The problems with the correspondence and coherence
theories are so serious that many contemporary philosophers have abandoned
both. In fact some philosophers have even abandoned the concept of “truth” as
being completely unnecessary. This brings us to our third theory, the deflationary
theory of truth: to assert that a statement is true is just
to assert the statement itself. Compare these two statements:
- My car is white.
- It is true that my car is white.
What is the difference between the two? Nothing of
substance. The phrase “it is true that” seems to be just repeating something
that is already assumed in the phrase “my car is white.” In that sense, I am
being redundant if I use the phrase “it is true that.” At times it may be
rhetorically helpful to use the phrase “it is true that” in an effort to convince
someone of my belief. Suppose you say to me “I don’t believe that your car is
white.” I might respond by saying, “You’re wrong: it’s absolutely true that my
car is white”. Again, I’ve not added anything of substance by injecting the
notion of truth into my response; I’ve just stood up to you more forcefully. In
short, according to the deflationary theory, the quest for a clear conception
of truth—such as correspondence or coherence—will not succeed because it is
ultimately a quest for something that doesn’t really exist.
But the deflationary theory also faces problems,
one of which is that the notion of truth is built into our normal expectations
of what we assert. When I say that “my car is white” you have an expectation
that what I’m saying is true. Occasionally I do say something that is false, but
when that happens we all recognize that I’ve done something incorrect. The
normal expectation, then, is that my assertion will be truthful. And this
creates a problem for the deflationary theory: by eliminating the notion of
truth, it cannot adequately account for our normal expectation of truthfulness.
Theories of Justification. Of the
three components of knowledge, justification is the one that has attracted the most
attention among contemporary philosophers. For centuries most philosophers
followed a theory of justification called foundationalism. On this view,
our justified beliefs are arranged like bricks in a wall, with the lower ones
supporting the upper ones. These lowest bricks are called “basic beliefs”, and
the ones they support are “non-basic” beliefs. Take this example:
*My car is white (non-basic belief)
This belief rests upon some supporting ground-level basic
beliefs, including these:
*I recognize the car in front of me
as my car (basic belief)
*I remember what white things look
like (basic belief)
*The car in front of me looks white
(basic belief)
There are two distinct elements to this foundationalist
theory of justification. First, our ground-level basic beliefs are self-evident,
or self-justifying, and thus require no further justification. When we have
such beliefs, we cannot be mistaken about them, we cannot doubt them, and we
cannot be corrected in our beliefs about them. For example, if I am perceiving
the color white, then my belief that I am perceiving white is self-evident in
this way. Even if I am hallucinating at the moment, my belief that I am
perceiving the color white cannot be called into question. The second element
of foundationalism is that justification transfers up from my foundational
basic beliefs to those non-basic beliefs that rest upon them. Think of it like the
mortar between bricks that begins at the very bottom level, locks them solid,
and moves upwards to lock the higher bricks into place. For example, if I have
the three basic beliefs about my car and whiteness listed above, then I am
justified in inferring the non-basic belief that “my car is white.”
While foundationalism holds a respected place in
the history of philosophy, it faces a major problem: it is not clear that there
really are any self-evident basic beliefs that form the foundation of other
beliefs. Foundationalists themselves have mixed views about what exactly our
lowest-level foundational beliefs are. Descartes, for example, argued that
there is only one single brick at the foundation of my wall of beliefs, namely,
my belief that I exist. Every other belief I have rests on this. Locke, on the
other hand, held that our most foundational beliefs are simple perceptions such
as blue, round, sweet, smooth, pleasure, motion. These combine together to make
more complex ideas. Contemporary philosophers resist both Descartes’ and
Locke’s depiction of our most foundational beliefs. Some offer examples such as
“I see a rock” (a basic belief about one’s perception), “I ate cornflakes this
morning” (a basic belief about one’s memory), or “That person is happy” (a
basic belief about another person’s mental state). But even these are
questionable since they seem to rely on beliefs or perceptions that are more
ground-level. If there really are ground-level foundational beliefs that are
self-evident or self-justifying, you’d think that philosophers would have
agreed a long time ago about exactly which ones they are. But there is no such
agreement.
An alternative to foundationalism is coherentism:
justification is structured like a web where the
strength of any given area depends on the strength of the surrounding areas.
Thus, my belief that my car is white is justified by a web of related beliefs,
such as these:
*I recognize the car in front of me
as my car
*I remember what white things look
like
*The car in front of me looks white
These, though, are not foundational, but
instead depend on another web of beliefs related to them, which includes these:
*I remember
purchasing my car
*People seem to agree
that I use the term “white” properly
*Nothing is
abnormally coloring my vision, such as a pair of sunglasses
Each of these, in turn, rests on an
ever-widening web of related beliefs. At no point do we reach a bottom-level
foundation to these beliefs; the justification of each belief rests on the
support it receives from the surrounding web of beliefs that relates to it.
Coherentism is closely associated with the coherence theory of truth. With
truth we determine that a proposition is true if it coheres with a larger web
of beliefs. With justification, we determine that a belief is justified if it
is supported by a larger web of beliefs. Coherentism’s similarities with
the coherence theory of truth make it vulnerable to the same fundamental charge
of relativism: not everyone’s belief system is the same, so a particular belief
might find justification within your larger web of beliefs, but not within
mine. Your belief system might justify the belief that “Bill killed Charlie,”
that “God exists,” or that “abortion is immoral,” while my belief system might
not justify any of these. We’d like to think that justification is a bit more
universal and not dependent on the peculiarities of a particular person’s
belief system.
Given the liabilities of both foundationalism and
coherentism, many contemporary philosophers hold a third position called reliabilism:
justified beliefs are those that are the result of a reliable process,
such as a reliable memory process or a reliable perception process. It’s like
how we depend on a reliable clock to tell us what time it is. As long as we
have confidence in the clock mechanism itself, then that’s all we need in order
to trust the time that it tells us. We don’t have to inspect the internal gears
of the clock and see how they relate to the movement of the clock’s hands.
Similarly, to justify my beliefs, I don’t need to inspect how each belief
connects with the surrounding beliefs beneath it or next to it; I just
trust the reliability of my mental process that gives me the belief. If my
memory process is on the whole reliable, then I’m justified in my belief that I
ate cornflakes this morning for breakfast. If my perceptual process is on the
whole reliable, then I’m justified in my belief that my car is white. What
matters is the reliability of the larger processes upon which my beliefs rest,
not my other beliefs that border them. According to reliabilism, the fault with
both foundationalism and coherentism is that they rely too much on
introspection: presumably, with our mind’s eye, we can see the strength of our
specific beliefs and how they gain support from other beliefs that are
connected to them (either like bricks in a wall or strands in a web). But, says
the reliabilist, this approach places too much confidence in our ability to
internally witness the connections between our specific beliefs. Our standard
of justification should not depend on what our mysterious mind’s eye internally
perceives, but, instead, upon more external standards and mental processes that
we know are reliable through our life experiences. I am justified in believing
that I ate cornflakes for breakfast because that’s what I remember, and I trust
my memory since it is a reliable process of supplying me with information about
the past.
What’s so Bad about Relativism? Twice
so far the issue of relativism has reared its ugly head, and how we assess
theories of truth and justification hinges greatly on how we feel about
relativism. The relativist position in general is that knowledge is always
dependent upon some particular conceptual framework (that is, a web of
beliefs), and that framework is not uniquely privileged over rival frameworks.
The most famous classical statement of relativism was articulated by the Greek
philosopher Protagoras (c. 490–c. 420 BCE), who said that
“Man is
the measure of all things.” His point was that human beings are the
standard of
all truths, and it’s a futile task to search for fixed standards of
knowledge beyond
our various and ever-flexible conceptual frameworks. Knowledge in
medieval England depended on the conceptual framework of that place and
time. Knowledge for us today
depends on our specific conceptual frameworks throughout the world and
throughout our wide variety of social environments.
Our initial reactions to relativism are usually
negative. “The truth is the truth,” I might say, “and it shouldn’t make any
difference what my individual conceptual framework is. Some conceptual
frameworks are simply wrong, and others may be a little closer to the truth.”
But is relativism really so bad that it warrants this negative reaction?
The first step to answering this question is to
recognize that there are different types of relativism, some of which may be
less sinister than others. The most innocent and universally accepted type is etiquette
relativism, the view that correct standards of protocol and good manners
depend on one’s culture. When I meet someone for the first time, should I bow
or shake hands? If I make the wrong decision, I might offend that person
rather than befriend them. Clearly, the answer depends on the social environment
I’m in, and it makes no sense to seek an absolute standard that applies
in all situations. Etiquette by its very nature is relative. There is also
little controversy regarding aesthetic relativism, the view that
artistic judgments depend on the conceptual framework of the viewer. We
commonly feel that there is no absolute right and wrong when it comes to art,
and it’s largely a matter of opinion. I might enjoy velvet paintings of dogs
playing cards, while that might offend your aesthetic sensibilities. In many
cases, perceptual relativism is also no big issue: one’s sensory
perceptions depend on the perceiver. Something might appear red to me but green
to you. There are people known as “supertasters” who experience
flavors with far greater intensity than the average person, so much so that
they need to restrict themselves to food that you or I would find bland. How we
perceive sensations depends on our physiology, which we readily acknowledge may
differ from person to person.
The types of relativism that we often resist,
though, are those connected specifically with the two components of knowledge
that we’ve discussed above, namely, truth and justification. Truth
relativism is the view that truth depends upon one’s conceptual framework.
This amounts to a denial of the correspondence theory of truth and acknowledges
our inability to access an objective and independent reality. Justification
relativism is the view that what counts as evidence for our beliefs depends
upon one’s conceptual framework. This is a denial of foundationalism and an
acknowledgement of coherentism. The German philosopher Friedrich Nietzsche
(1844-1900) boldly embraced truth and justification relativism, as we see here:
Positivism stops at phenomena and
says, “These are only facts and nothing more.” In opposition to this I would
say: No, facts are precisely what is lacking, all that exists consists of interpretations.
We cannot establish any fact “in itself”: it may even be nonsense to desire to
do such a thing. . . . To the extent to which knowledge has any sense at all,
the world is knowable: but it may be interpreted differently, it has not one
sense behind it, but hundreds of senses. “Perspectivity.” [Will to Power,
481]
For Nietzsche, then, there are many perspectives from which
the world can be interpreted when we make judgments. Some justification
relativists even go so far as to deny the universal nature of so-called laws of
logic; even these, they maintain, are grounded in mere social conventions.
A standard criticism of truth and justification
relativism is that it leads to absurd consequences that no rational person
would accept. By surrendering to relativism, we abandon any stable notion of
reality and place ourselves at the mercy of cultural biases, fanatical social
groups, and power hungry tyrants who are more than happy to twist our
conceptual frameworks to their benefits. Everything, then, becomes a matter of
customs that are imposed on us, even in matters of science. Scottish philosopher
James Beattie (1735–1803) makes this point in a fictional story where he
describes a crazy scientist who attempts to put relativism into practice:
[The scientist] was watching a
hencoop full of chickens, and feeding them with various kinds of food, in
order, as he told me “that they might [give birth to live offspring and] … lay
no more eggs,” which seemed to him to be a very bad custom. . . . “I have also,”
continued he, “under my care some young children, whom I am teaching to believe
that two and two are equal to six, and a whole less than one of its parts; that
ingratitude is a virtue, and honesty a vice; that a rose is one of the ugliest,
and a toad one of the most beautiful objects in nature.” [James Beattie, “The Castle of Skepticism”]
According to Beattie, if we took the relativist’s position
seriously, we’d be forced to accept absurd views like “it is just a matter of
custom that chickens lay eggs” or “it is possible that 2+2=6.” Thus, even
if we acknowledge a certain level of relativism with etiquette, aesthetics and
perception, we need to draw the line when it comes to standards of truth and
justification.
How might the relativist respond to this
criticism? One approach is to hold that not all conceptual schemes are on equal
footing, and some indeed are better than others. Nietzsche argues that there
are competing perspectives of the world, and the winner is the one whose
conceptual framework succeeds the best:
It is our needs that interpret the
world; our instincts and their impulses for and against. Every instinct is a
sort of thirst for power; each has its point of view, which it would gladly
impose upon all the other instincts as their norm. [Will to Power, 481]
Nietzsche presents the conflict as a kind of power
struggle
among competing conceptual frameworks, where the winner takes all. A gentler approach, though, would be to hold that the winner is the one
that best
assists us in our life’s activities and allows us to thrive. If people
today
held that “it is just a matter of custom that chickens lay eggs,” or “it
is
possible that 2+2=6”, their underlying conceptual framework would not
enable
them to succeed very well in the world. For that matter, such a
conceptual
framework would not have allowed people to thrive very well in medieval
England or any other pre-modern period of human history. While there may
be an underlying
objective reality that molds our conceptual frameworks in successful
ways, that
possibility is irrelevant since, according to the relativist, we could
never
know such an objective reality even if it existed. What we do know is
how our conceptual frameworks enable us to succeed in the world, and that’s the
real litmus test for truth and justification.
Thus, with many ordinary life beliefs, relativist
theories of truth and justification work reasonably well, without leading us
down the path to absurd consequences. What, though, of more scientific
theories? In medieval times people thought mental illness was caused by demon
possession; today we think that it is caused by physiological brain disorders.
The medieval theory worked well in its own day; does that mean that it was true
back then – supported by its own web of beliefs – but not now? In scientific
matters, people feel uncomfortable with relativism and instead believe that our
knowledge of physics, chemistry and biology has a fixed and objective reference
point. We will next examine the issue of scientific knowledge in more detail.
E. SCIENTIFIC KNOWLEDGE
Every child knows the tale of Isaac Newton’s inspiration for
his views on gravity: while sitting beneath a tree he saw an apple fall, which
prompted him to wonder why things always fell downward rather than sideways or
upward. In time Newton formulated his theory of universal gravitation, which
described the attraction between massive bodies. Less known is the rival theory
of intelligent falling, devised by the satirical newspaper The Onion.
According to this view, things fall downward “because a higher intelligence,
'God' if you will, is pushing them down.” As proof for their view they cite a
passage from the Old Testament book of Job: “But mankind is born to trouble, as
surely as sparks fly upwards.” Accordingly, a defender of intelligent falling
theory remarks, “If gravity is pulling everything down, why do the sparks fly
upwards with great surety? This clearly indicates that a conscious intelligence
governs all falling.” The theory of intelligent falling is obviously not a real
theory, but rather a parody of the religiously-based intelligent design theory.
Nevertheless, we can ask the basic theoretical question, why is universal
gravitation a better account of natural events than intelligent falling? The job
of science is to explain how the natural world works, to give us knowledge of
the underlying mechanics of natural phenomena. That knowledge does not come
easy, though, and it seems that science has to wrench nature’s secrets out of her.
As scientists put forward rival theories, how do we determine which are closer
to the truth?
Confirming Theories. The starting
point for discussion is to distinguish between three related scientific
concepts: a hypothesis, a theory, and a law. The weakest of these is the scientific
hypothesis, which is any proposed explanation of a natural event. It is a
provisional notion whose worth requires evaluation. Newton’s account of gravity
began as a humble hypothesis, and even the theory of intelligent falling
qualifies as a hypothesis. While hypotheses may be inspired by natural
observations, they don’t need to be, and virtually anything goes at this level.
One step up from this is a scientific theory, which is a well-confirmed
hypothesis. It is not a mere guess, as a hypothesis may be, but a contention
supported by experimental evidence. When Newton proposed his account of
gravity, he accompanied it with a wealth of observational evidence, which
quickly elevated it to the status of a theory. This, though, is where the
theories of gravity and intelligent falling part company: there’s no scientific
evidence in support of intelligent falling, and thus it fails as a theory.
Lastly, there is a scientific law, which is a theory that has a great
amount of evidence in its support. Indeed, laws are confirmed by such a strong
history of evidence that they cannot be overturned by any single piece of
evidence to the contrary; rather, we assume that the contrary evidence itself
is flawed. As compelling as Newton's theory of gravity was,
it took well over 100 years before it was confirmed to the point that it gained
status as a law.
We see that confirmation is the critical
component in establishing a scientific claim: it is what elevates a hypothesis
to a theory, and a theory to a law. There are several different ways of
confirming scientific notions. The first factor in the confirmation process is
simplicity; that is, when evaluating two rival theories,
the simpler theory is the one more likely to be true. This
doesn’t guarantee that it’s true, but, all things being equal, it’s the one
that we should prefer. Compare, for example, universal gravity and intelligent
falling. Universal gravity involves a single gravitational force that is
inherent to all physical bodies. Intelligent falling, on the other hand,
involves countless divine actions that guide individual bodies downwards. We
should thus prefer universal gravity as the correct explanation since it is not
burdened by such an abundance of distinct divine actions.
A second component of confirmation is unification,
that is, the ability to explain a wide range of phenomena. The rule of thumb
here is that the more information explained by a theory, the better. Science is
an immense interrelated system of facts, laws, and theories, and scientific
contentions gain extra weight when they contribute to the scheme of
unification. It is unification that gave an initial boost to Newton’s theory of
universal gravitation. Prior to Newton, astronomers assumed that planets and
other celestial objects followed their own unique set of laws that were
distinct from those on earth. However, Newton showed how the motions of the
planets were governed by precisely the same rules of gravity and motion that
physical bodies on earth obey.
A third factor in scientific confirmation is successful
prediction. Good scientific theories should not simply organize collections
of facts, but should be able to reach out and predict new phenomena. This is
what bumped Newton’s theory of gravity up to the status of a law. Astronomers
in the early 19th century noticed some strange movements in the
orbital pattern of the planet Uranus, and they hypothesized that the
irregularities were caused by the gravitational tugging of an undiscovered
eighth planet. Applying Newton’s formulas of gravity and motion, they
pinpointed a location in space where the large body must be. Then, pointing
their telescopes at the spot, they discovered the mystery planet, which was
subsequently named “Neptune.” Dramatic predictions like this are rare, but when
they succeed they do much to confirm a theory. Einstein's theory
of relativity, for example, was confirmed with the prediction of bent star
light during a solar eclipse.
A fourth and final factor in scientific
confirmation is falsifiability: it must be theoretically possible for a
scientific claim to be shown false by an observation or a physical experiment.
This doesn’t mean that the scientific claim is actually false, but only that it
is capable of being disproved. The criterion of falsifiability is important for
distinguishing between genuine scientific claims that rest on tests and
experimentation, and pseudo-scientific claims that are completely disconnected
from testing. Take, for example, the views of Heaven's Gate believers that we
examined at the outset of this chapter. According to them, aliens come down to
earth in the form of teachers, but once becoming human they are stripped of
their previous memories and knowledge. Thus, we can’t test the claims of these
teachers about their previous alien lives, since they can’t remember anything
about them. “So tell me a little about your home planet,” I might ask one
alleged alien. He then replies, “Sorry, I can't remember anything about it, but
I'm still an alien.” To make things worse, Heaven's Gate believers claim that
the aliens purposefully imposed this knowledge restriction on themselves since
“too much knowledge too soon could potentially be an interference and
liability to their plan.” In short, their claims about the aliens are
completely resistant to refutation. Fortune tellers are another good example of
this, as the philosopher Karl Popper explains here:
[B]y making their interpretations
and prophesies sufficiently vague they were able to explain away anything that
might have been a refutation of the theory had the theory and the prophesies
been more precise. In order to escape falsification they destroyed the
testability of their theory. It is a typical soothsayer’s trick to predict
things so vaguely that the predictions can hardly fail: that they become
irrefutable. [Conjectures and Refutations]
Legitimate scientific theories, by contrast, always hold
open the possibility of being refuted by new data or a new experiment. By
putting forth their theories, scientists take a risk that what they’re
proposing might be disproved by the facts. Even universal gravitation is
vulnerable to refutation if some future experiments produce compelling evidence
against it. Thus, a good theory is always potentially falsifiable, although it
has not been actually falsified.
Scientific Revolutions. Scientists
continually push the boundaries of knowledge, and on a daily basis we see new
theories about the spread of diseases, healthy eating habits, or the
environment. We also read about new studies that challenge previously accepted
scientific views. For example, in contrast to earlier claims by scientists, the
accepted wisdom now is that vitamin C does not help prevent colds, and fiber in
our diets does not help prevent colon cancer. Science thus moves ahead in baby
steps, occasionally taking a step backwards to correct an erroneous theory. All
the while, though, the larger body of scientific knowledge seems secure and
well established. But then sometimes a new scientific theory comes along that
is so radical and far reaching in its consequences that it forces scientists to
throw out many of their underlying assumptions about the world and set things
on a dramatically new course. These are scientific revolutions. The most
dramatic example is the shift from the earth-centered view of the heavens,
championed by the ancient Greek astronomer Ptolemy, to a sun-centered system
which was defended by Copernicus in the 1500s. This “Copernican Revolution,” as
it is now called, did more than simply swap the earth with the sun in the model
of celestial objects. It also had the effect of overturning medieval theories
about matter and motion, and ultimately replacing them with Newton’s laws of
motion. Other important revolutions were sparked by Darwin’s account of
evolution, Einstein’s account of general relativity, and the Big Bang theory.
The most probing philosophical analysis of
scientific revolutions was offered by American historian of science Thomas Kuhn
(1922-1996). Kuhn argued that scientific revolutions are the result of changing
paradigms. A paradigm is the web of scientific beliefs held in common by
members of the scientific community. When major paradigms are overthrown and
replaced with new ones, such as replacing the Ptolemaic with the Copernican paradigm,
we have a scientific revolution. What triggers the paradigm shift, according to
Kuhn, is that scientists run into inconsistencies with the old paradigm that
cannot easily be explained away. Scientific theories will always face some
irregularities – such as with an experiment that seems to contradict an
accepted theory. If the theory is well established, a few irregularities here
or there won’t matter; in fact, scientists often chalk these up to an
acceptable level of error that’s built into the enterprise of scientific
investigation. However, sometimes irregularities pile up to such a degree that
they throw science into a condition of crisis. Seeking a resolution to the crisis,
scientists then replace an old scientific paradigm with a new one that better
resolves the irregularities.
Kuhn argues that scientific revolutions have much
in common with political revolutions: rebel groups think that the ruling
institution is inadequate, which they then overthrow and replace with a new one:
Political revolutions are
inaugurated by a growing sense, often restricted to a segment of the political
community, that existing institutions have ceased adequately to meet the
problems posed by an environment that they have in part created. In much the
same way, scientific revolutions are inaugurated by a growing sense, again
often restricted to a narrow subdivision of the scientific community, that an
existing paradigm has ceased to function adequately in the exploration of an
aspect of nature to which that paradigm itself had previously led the way. In
both political and scientific development the sense of malfunction that can
lead to crisis is prerequisite to revolution.
Kuhn warns that the transition from the old to new paradigm
is not a smooth one. Many scientists will hold fast to the old paradigm, and
the new paradigm needs to attract an ever-growing number of supporters before
it can finally overthrow the old one. The upshot of Kuhn’s theory is that
science is not cumulative: our present theories are not built upon the secure
foundation of past theories. Instead, scientific knowledge shifts according to
our current paradigms, and, once again, the issue of relativism arises. These
paradigms are webs of belief that are held by the scientific community at the
time. Truths in science, then, are relative to these shifting webs of belief.
Kuhn’s account of scientific revolutions has its
critics, particularly among those who believe that science, when done properly,
is grounded in objective truth, and not in shifting belief paradigms of the
scientific community. One criticism is that Kuhn has overdramatized the
sweeping nature of most scientific revolutions. Sure, the Copernican revolution
was indeed a major one that resulted in overthrowing old scientific models that
were rooted in superstitious conceptions of the world and sloppy
experimentation. In fact, the older models were so steeped in religious
mythology and metaphysics that it's overly generous even to call them “scientific.”
Since the time of Copernicus, however, we’ve not seen any scientific
revolutions that “overthrow” entire paradigms. Rather, new mini-revolutions
seek to encompass much of the theory and data of previous scientific
investigations while at the same time setting a new direction for future
investigation. For example, Newton’s laws of motion were not overthrown by Einstein’s
theory of relativity; instead scientists try to incorporate both into a larger
scientific vision of reality that unifies all of nature’s forces. Other mini-revolutions,
such as Darwinian evolution, quickly put an end to rival theories of biological
development, such as Lamarck’s, that had little or no supporting evidence to
begin with. Thus, contrary to Kuhn’s position, when science is done properly
our knowledge of the natural world is cemented into a fixed and objective
reference point.
This chapter began with a discussion of radical
skepticism, and while that may not be the most cheery way of investigating the
nature of knowledge, in many ways it sets the right tone. No matter what we say
to clarify the characteristics of knowledge, warning flags immediately go up. All
of our sources of knowledge have serious limitations. The very definition of
knowledge can be picked apart by an endless variety of Gettier-type counter
examples. Theories of truth and justification seem to be either naively
optimistic, or they lean towards relativism. While scientific knowledge
attempts to move progressively towards unchanging truth, it is always cradled
by a potentially changing web of beliefs held by scientists. Achieving genuine
knowledge is in some ways like playing a video game where the winning score is
infinitely high: no matter how close we move towards it, it remains at a
distance. If the human effort to gain knowledge were merely a leisure activity like
playing an impossible game, we’d certainly give it up for a more attainable diversion.
But the pursuit of knowledge is a matter of human survival that we can’t casually
set aside. Philosophical discussions of knowledge are an important reality
check as we routinely gather facts and construct theories about how the world
operates. The hope of acquiring a fixed body of knowledge is very seductive,
and the problems of knowledge that we’ve covered in this chapter help us resist
that temptation.
For Review
1. What are the three main arguments for radical skepticism?
2. What are the four main criticisms of radical skepticism?
3. What are the four main sources of experiential knowledge?
4. What are the key features of non-experiential knowledge?
5. What are the key features of rationalism and empiricism?
6. Describe the "JTB" definition of knowledge.
7. What is the point of the Gettier problem?
8. Name and describe the different theories of truth.
9. Name and describe the different theories of
justification.
10. Name and describe the different types of relativism.
11. What are the different ways in which scientific theories
and laws gain confirmation?
12. What is Kuhn’s view of scientific revolutions?
For Analysis
1. Write a dialogue between a radical skeptic who thinks
that he’s living in an artificial reality, and a non-skeptic who thinks we have
knowledge of the common sense world that we perceive.
2. Explain the different features of rationalism and
empiricism, and try to defend one position over the other.
3. Explain the foundationalist theory of justification, and
try to defend it against one of the criticisms.
4. Write a dialogue between a relativist and a
non-relativist regarding the argument against relativism from absurd
consequences.
5. Explain the notion of falsification, and then describe whether
a religious view like creationism, intelligent design theory, or intelligent
falling theory can be falsified.
6. Explain the criticism of Kuhn at the end of the chapter
and try to defend Kuhn against it.
REFERENCES AND FURTHER READING
Works Cited in Order of Appearance.
Information
about the Heaven’s Gate cult can be found at http://www.rickross.com/groups/heavensgate.html
Sextus
Empiricus, Outlines of Pyrrhonism (c. 200 CE). A recent translation is
by J. Annas and J. Barnes (Cambridge: Cambridge University Press, 1994).
Hume, David, Treatise of Human Nature (1739-1740). The standard edition is by David
Fate Norton, Mary J. Norton (Oxford: Clarendon Press, 2000).
Descartes, René,
Meditations (1641). A recent translation by J. Cottingham is in The
Philosophical Writings of Descartes (Cambridge: Cambridge University Press,
1984).
Locke, John, Essay
Concerning Human Understanding (1689). The standard edition is by P.H. Nidditch
(Oxford: Oxford University Press, 1975).
Kant, Immanuel, Critique
of Pure Reason (1781). A recent translation of this is by P. Guyer and
A.W. Wood (Cambridge: Cambridge University Press, 1998).
Gettier, Edmund,
“Is Justified True Belief Knowledge?” Analysis (1963), Vol. 23, pp.
121-123.
Nietzsche, Friedrich,
Will to Power (1906). A standard translation of this is by Walter
Kaufmann, (New York: Viking, 1967).
Beattie, James, “The
Castle of Skepticism” (1767). Manuscript transcribed in James Fieser, ed., Early
Responses to Hume’s Life and Reputation (Bristol: Thoemmes Press, 2004).
"Evangelical
Scientists Refute Gravity with New 'Intelligent Falling' Theory," The
Onion, August 17, 2005, Issue 41.33.
Popper, Karl, Conjectures and
Refutations (London: Routledge, 1963).
Kuhn, Thomas, The
Structure of Scientific Revolutions (Chicago: University of Chicago Press,
1962), Ch. 9.
Further Reading.
Feldman, Richard, Epistemology,
(Englewood Cliffs, NJ: Prentice Hall, 2002).
Lemos, Noah, An
Introduction to the Theory of Knowledge (Cambridge, UK: Cambridge University Press, 2007).
Moser, Paul K., The
Oxford Handbook of Epistemology (Oxford, UK: Oxford University Press, 2002).
Sosa, Ernest,
and Kim, Jaegwon, Epistemology: An Anthology (Malden, MA: Blackwell,
2000).