Friday, May 8, 2015

Wars of the World, Chapter Review from the Book: Introduction to Global Politics

Chapter Review: "The World Wars," from Part 1 of the book Introduction to Global Politics
 By
 Richard W. Mansbach and Kirsten L. Rafferty




Contributing a truly global approach to International Relations and Political Science, Introduction to Global Politics, Brief Edition brings together an expert team of international scholars to provide students with a current, engaging, and non-U.S. point of view on global politics.
Introduction to Global Politics is a major new textbook which introduces students to the key changes in current global politics in order to help them make sense of the major trends that are shaping our world. The emphasis on change in global politics helps students recognize that truly new developments require citizens to change their beliefs and that new problems may appear even as old ones disappear. This text is designed to encourage students to think ahead in new, open-minded ways, even as they come to understand the historical roots of the present. The book comprises six parts, in which the various phases of world politics are discussed in depth, along with their theories and approaches.

This part of the book tells the story of how states first emerged in Europe and formed an interstate system that came to dominate global affairs. It describes the birth and evolution of the territorial state, and discusses how these political leviathans were transformed from the personal property of kings into communities owned by their citizens. It describes the rise of nationalism, especially during and after the French Revolution, and how state and nation became linked in communities that attracted the passions and highest loyalties of citizens who were willing to die in their name.
After describing the emergence and evolution of the state in Europe, the authors examine the evolution of two international systems that did not feature territorial states – imperial China and medieval Islam.

This chapter focuses on the world wars: World War I (1914–18), then called the Great War, the events of the war, and its consequences, including World War II (1939–45). Studying these wars allows us to step back and see in action many of the issues of war and peace that the authors discuss in later chapters, including the relationship between politics and war. Analyzing the causes of the world wars also demonstrates efforts to build theory and explain war by reference to levels of analysis.
These events are important in another respect as well: they began the modern era of global politics, including many of the problems that the world faces today. The chapter opens by examining the events leading up to World War I, particularly those that so boosted fear and hostility in Europe that war seemed unavoidable. The chapter then analyzes the many sources of the war according to their levels of analysis and considers how political scientists have used this case to generalize about war. It then describes how World War I permanently altered global politics and ushered in the modern world. It reviews the sequence of events during what has been called the "twenty years' crisis" following World War I that led to the next world war: the harsh treatment of Germany in the Versailles Treaty, the failure of the League of Nations, and the policy of appeasement practiced by the West in the series of crises of the 1930s.

The chapter closes by assessing the sources of World War II. Untangling the causes of World War II allows political scientists to generalize further about war and to identify the similarities it may have with other wars. In one section of the chapter, the authors review some of the causes of World War II at various levels of analysis. An individual-level explanation for World War II might focus on Hitler's ambitions and his racist ideology. An alternative explanation, at the state level of analysis, is that significant challenges within European states contributed to the outbreak of World War II. For instance, economic collapse drove voters to Hitler and led to the rise of the Nazi regime within Germany. In Great Britain, appeasement was thought a realistic strategy, given scarce economic resources. At the global level, explanations for the war focus on the Versailles Treaty system, the balance of power in Europe, the failure of collective security, and the spread of extremist ideologies.

In sum, the chapter examines the events leading up to the two world wars and analyzes the sources of war according to their levels of analysis. It shows that both world wars can be attributed to numerous, reinforcing causes at each level of analysis. Several prominent theoretical explanations exist for each war, but no single explanation is sufficient. At the end of the chapter, the authors set out guidelines for student activities and suggest further reading on related topics.


Tuesday, January 15, 2013

Research Methodology: An Introduction




MEANING OF RESEARCH

Research in common parlance refers to a search for knowledge. One can also define research as
a scientific and systematic search for pertinent information on a specific topic. In fact, research is an
art of scientific investigation. The Advanced Learner’s Dictionary of Current English lays down the
meaning of research as “a careful investigation or inquiry specially through search for new facts in
any branch of knowledge.”

Redman and Mory define research as a "systematized effort to gain new knowledge."

Some people consider research as a movement, a movement from the known to the unknown. It is actually a voyage of discovery. We all possess the vital instinct of inquisitiveness, for, when the unknown confronts us, we wonder, and our inquisitiveness makes us probe and attain a fuller understanding of the unknown. This inquisitiveness is the mother of all knowledge, and the method which man employs for obtaining knowledge of whatever is unknown can be termed research.
Research is an academic activity and as such the term should be used in a technical sense.
According to Clifford Woody, research comprises defining and redefining problems, formulating
hypotheses or suggested solutions; collecting, organising and evaluating data; making deductions and
reaching conclusions; and at last carefully testing the conclusions to determine whether they fit the
formulated hypothesis. D. Slesinger and M. Stephenson in the Encyclopaedia of Social Sciences
define research as “the manipulation of things, concepts or symbols for the purpose of generalising to
extend, correct or verify knowledge, whether that knowledge aids in construction of theory or in the
practice of an art.”

Research is, thus, an original contribution to the existing stock of knowledge, making for its advancement. It is the pursuit of truth with the help of study, observation, comparison and experiment. In short, the search for knowledge through an objective and systematic method of finding a solution to a problem is research. The systematic approach concerning generalisation and the formulation of a theory is also research. As such, the term 'research' refers to the systematic method consisting of enunciating the problem, formulating a hypothesis, collecting the facts or data, analysing the facts and reaching certain conclusions, either in the form of solution(s) to the problem concerned or in certain generalisations for some theoretical formulation.

OBJECTIVES OF RESEARCH
The purpose of research is to discover answers to questions through the application of scientific
procedures. The main aim of research is to find out the truth which is hidden and which has not been
discovered as yet. Though each research study has its own specific purpose, we may think of
research objectives as falling into the following broad groupings:
1. To gain familiarity with a phenomenon or to achieve new insights into it (studies with this
object in view are termed as exploratory or formulative research studies);
2. To portray accurately the characteristics of a particular individual, situation or a group
(studies with this object in view are known as descriptive research studies);
3. To determine the frequency with which something occurs or with which it is associated
with something else (studies with this object in view are known as diagnostic research
studies);
4. To test a hypothesis of a causal relationship between variables (such studies are known as
hypothesis-testing research studies).

MOTIVATION IN RESEARCH
What makes people undertake research? This is a question of fundamental importance. The
possible motives for doing research may be either one or more of the following:
1. Desire to get a research degree along with its consequential benefits;
2. Desire to face the challenge in solving the unsolved problems, i.e., concern over practical
problems initiates research;
3. Desire to get intellectual joy of doing some creative work;
4. Desire to be of service to society;
5. Desire to get respectability.
However, this is not an exhaustive list of factors motivating people to undertake research studies.
Many more factors such as directives of government, employment conditions, curiosity about new
things, desire to understand causal relationships, social thinking and awakening, and the like may as
well motivate (or at times compel) people to perform research operations.

TYPES OF RESEARCH
The basic types of research are as follows:
(i) Descriptive vs. Analytical: Descriptive research includes surveys and fact-finding enquiries
of different kinds. The major purpose of descriptive research is description of the state of
affairs as it exists at present. In social science and business research we quite often use
the term Ex post facto research for descriptive research studies. The main characteristic
of this method is that the researcher has no control over the variables; he can only report
what has happened or what is happening. Most ex post facto research projects are used
for descriptive studies in which the researcher seeks to measure such items as, for example,
frequency of shopping, preferences of people, or similar data. Ex post facto studies also
include attempts by researchers to discover causes even when they cannot control the
variables. The methods of research utilized in descriptive research are survey methods of
all kinds, including comparative and correlational methods. In analytical research, on the
other hand, the researcher has to use facts or information already available, and analyze
these to make a critical evaluation of the material.
(ii) Applied vs. Fundamental: Research can either be applied (or action) research or
fundamental (or basic or pure) research. Applied research aims at finding a solution for an
immediate problem facing a society or an industrial/business organisation, whereas fundamental
research is mainly concerned with generalisations and with the formulation of a theory.
“Gathering knowledge for knowledge’s sake is termed ‘pure’ or ‘basic’ research.”

Research concerning some natural phenomenon or relating to pure mathematics is an example of
fundamental research. Similarly, research studies concerning human behaviour, carried on
with a view to making generalisations about human behaviour, are also examples of
fundamental research, but research aimed at certain conclusions (say, a solution) for a
concrete social or business problem is an example of applied research. Research to identify
social, economic or political trends that may affect a particular institution, or copy research
(research to find out whether certain communications will be read and understood) or the
marketing research or evaluation research are examples of applied research. Thus, the
central aim of applied research is to discover a solution for some pressing practical problem,
whereas basic research is directed towards finding information that has a broad base of
applications and thus, adds to the already existing organized body of scientific knowledge.
(iii) Quantitative vs. Qualitative: Quantitative research is based on the measurement of quantity
or amount. It is applicable to phenomena that can be expressed in terms of quantity.
Qualitative research, on the other hand, is concerned with qualitative phenomena, i.e.,
phenomena relating to or involving quality or kind. For instance, when we are interested in
investigating the reasons for human behaviour (i.e., why people think or do certain things),
we quite often talk of ‘Motivation Research’, an important type of qualitative research.
This type of research aims at discovering the underlying motives and desires, using in-depth
interviews for the purpose. Other techniques of such research are word association tests,
sentence completion tests, story completion tests and similar other projective techniques.
Attitude or opinion research i.e., research designed to find out how people feel or what
they think about a particular subject or institution is also qualitative research. Qualitative
research is specially important in the behavioural sciences where the aim is to discover the
underlying motives of human behaviour. Through such research we can analyse the various
factors which motivate people to behave in a particular manner or which make people like
or dislike a particular thing. It may be stated, however, that to apply qualitative research in
practice is a relatively difficult job and, therefore, while doing such research, one should
seek guidance from experimental psychologists.
(iv) Conceptual vs. Empirical: Conceptual research is that which relates to some abstract idea(s) or
theory. It is generally used by philosophers and thinkers to develop new concepts or to
reinterpret existing ones. On the other hand, empirical research relies on experience or
observation alone, often without due regard for system and theory. It is data-based research,
coming up with conclusions which are capable of being verified by observation or experiment.
We can also call it experimental research. In such research it is necessary to
get at facts firsthand, at their source, and actively to go about doing certain things to
stimulate the production of desired information. In such a research, the researcher must
first provide himself with a working hypothesis or guess as to the probable results. He then
works to get enough facts (data) to prove or disprove his hypothesis. He then sets up
experimental designs which he thinks will manipulate the persons or the materials concerned
so as to bring forth the desired information. Such research is thus characterised by the
experimenter’s control over the variables under study and his deliberate manipulation of
one of them to study its effects. Empirical research is appropriate when proof is sought that
certain variables affect other variables in some way. Evidence gathered through experiments
or empirical studies is today considered to be the most powerful support possible for a
given hypothesis.
(v) Some Other Types of Research:All other types of research are variations of one or more
of the above stated approaches, based on either the purpose of research, or the time
required to accomplish research, on the environment in which research is done, or on the
basis of some other similar factor. From the point of view of time, we can think of research
either as one-time research or longitudinal research. In the former case the research is
confined to a single time-period, whereas in the latter case the research is carried on over
several time-periods. Research can be field-setting research or laboratory research or
simulation research, depending upon the environment in which it is to be carried out.
Research can as well be understood as clinical or diagnostic research. Such research
follows case-study methods or in-depth approaches to reach the basic causal relations. Such
studies usually go deep into the causes of things or events that interest us, using very small
samples and very deep probing data gathering devices. The research may be exploratory
or it may be formalized. The objective of exploratory research is the development of
hypotheses rather than their testing, whereas formalized research studies are those with
substantial structure and with specific hypotheses to be tested. Historical research is that
which utilizes historical sources like documents, remains, etc. to study events or ideas of
the past, including the philosophy of persons and groups at any remote point of time. Research
can also be classified as conclusion-oriented and decision-oriented. While doing conclusion-oriented research, a researcher is free to pick up a problem, redesign the enquiry as he
proceeds and is prepared to conceptualize as he wishes. Decision-oriented research is
always for the need of a decision maker and the researcher in this case is not free to
embark upon research according to his own inclination. Operations research is an example
of decision-oriented research since it is a scientific method of providing executive departments
with a quantitative basis for decisions regarding operations under their control.

Research Approaches
The above description of the types of research brings to light the fact that there are two basic
approaches to research, viz., quantitative approach and the qualitative approach. The former
involves the generation of data in quantitative form which can be subjected to rigorous quantitative
analysis in a formal and rigid fashion. This approach can be further sub-classified into inferential,
experimental and simulation approaches to research. The purpose of the inferential approach to
research is to form a database from which to infer characteristics or relationships of the population. This
usually means survey research where a sample of population is studied (questioned or observed) to
determine its characteristics, and it is then inferred that the population has the same characteristics.
Experimental approach is characterised by much greater control over the research environment
and in this case some variables are manipulated to observe their effect on other variables. Simulation
approach involves the construction of an artificial environment within which relevant information
and data can be generated. This permits an observation of the dynamic behaviour of a system (or its
sub-system) under controlled conditions. The term ‘simulation’ in the context of business and social
sciences applications refers to “the operation of a numerical model that represents the structure of a
dynamic process. Given the values of initial conditions, parameters and exogenous variables, a
simulation is run to represent the behaviour of the process over time.” Simulation approach can also
be useful in building models for understanding future conditions.
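To make the simulation approach concrete, here is a minimal sketch in Python (not from the text: the demand model, the function name simulate_demand and all figures are illustrative assumptions). A numerical model is given initial conditions, parameters and an exogenous seasonal variable, and is run forward to generate the behaviour of the process over time.

    # Minimal sketch of the "simulation approach": a hypothetical numerical model of
    # monthly demand, run forward over time from initial conditions, parameters and
    # an exogenous (seasonal) variable. All names and figures are illustrative.
    import random

    def simulate_demand(initial_demand, growth_rate, seasonal_boost, months, seed=1):
        random.seed(seed)                      # make the illustrative run reproducible
        demand, history = initial_demand, []
        for month in range(1, months + 1):
            exogenous = seasonal_boost if month % 12 == 0 else 0.0   # year-end boost
            noise = random.gauss(0, 0.02)      # unexplained month-to-month variation
            demand = demand * (1 + growth_rate + noise) + exogenous
            history.append(round(demand, 1))
        return history

    # Run the model for two simulated years and inspect the generated data.
    print(simulate_demand(initial_demand=1000, growth_rate=0.01,
                          seasonal_boost=150, months=24))

Re-running such a model under different parameter values is one way of observing the dynamic behaviour of a system under controlled conditions and of exploring future conditions, as the passage suggests.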
Qualitative approach to research is concerned with subjective assessment of attitudes, opinions
and behaviour. Research in such a situation is a function of the researcher’s insights and impressions.
Such an approach to research generates results either in non-quantitative form or in forms which
are not subjected to rigorous quantitative analysis. Generally, the techniques of focus group interviews,
projective techniques and depth interviews are used. All these are explained at length in the chapters
that follow.
Significance of Research
“All progress is born of inquiry. Doubt is often better than overconfidence, for it leads to inquiry, and
inquiry leads to invention” is a famous remark of Hudson Maxim, in the context of which the significance
of research can well be understood. Increased amounts of research make progress possible. Research inculcates
scientific and inductive thinking and it promotes the development of logical habits of thinking
and organisation.
The role of research in several fields of applied economics, whether related to business or
to the economy as a whole, has greatly increased in modern times. The increasingly complex
nature of business and government has focused attention on the use of research in solving operational
problems. Research, as an aid to economic policy, has gained added importance, both for government
and business.
Research provides the basis for nearly all government policies in our economic system.
For instance, government’s budgets rest in part on an analysis of the needs and desires of the people
and on the availability of revenues to meet these needs. The cost of needs has to be equated to
probable revenues and this is a field where research is most needed. Through research we can
devise alternative policies and can as well examine the consequences of each of these alternatives.

Decision-making may not be a part of research, but research certainly facilitates the decisions of the
policy maker. Government has also to chalk out programmes for dealing with all facets of the country’s
existence and most of these will be related directly or indirectly to economic conditions. The plight of
cultivators, the problems of big and small business and industry, working conditions, trade union
activities, the problems of distribution, even the size and nature of defence services are matters
requiring research. Thus, research is considered necessary with regard to the allocation of the nation’s
resources. Another area in government, where research is necessary, is collecting information on the
economic and social structure of the nation. Such information indicates what is happening in the
economy and what changes are taking place. Collecting such statistical information is by no means a
routine task, but it involves a variety of research problems. These days nearly all governments maintain
large staffs of research technicians or experts to carry on this work. Thus, in the context of government,
research as a tool to economic policy has three distinct phases of operation, viz., (i) investigation of
economic structure through continual compilation of facts; (ii) diagnosis of events that are taking
place and the analysis of the forces underlying them; and (iii) the prognosis, i.e., the prediction of
future developments.
Research has its special significance in solving various operational and planning problems
of business and industry. Operations research and market research, along with motivational research,
are considered crucial and their results assist, in more than one way, in taking business decisions.
Market research is the investigation of the structure and development of a market for the purpose of
formulating efficient policies for purchasing, production and sales. Operations research refers to the
application of mathematical, logical and analytical techniques to the solution of business problems of
cost minimisation or of profit maximisation or what can be termed as optimisation problems. Motivational
research of determining why people behave as they do is mainly concerned with market characteristics.
In other words, it is concerned with the determination of motivations underlying the consumer (market)
behaviour. All these are of great help to people in business and industry who are responsible for
taking business decisions. Research with regard to demand and market factors has great utility in
business. Given knowledge of future demand, it is generally not difficult for a firm, or for an industry
to adjust its supply schedule within the limits of its projected capacity. Market analysis has become
an integral tool of business policy these days. Business budgeting, which ultimately results in a
projected profit and loss account, is based mainly on sales estimates which in turn depend on
business research. Once sales forecasting is done, efficient production and investment programmes
can be set up around which are grouped the purchasing and financing plans. Research, thus, replaces
intuitive business decisions by more logical and scientific decisions.
Research is equally important for social scientists in studying social relationships and in
seeking answers to various social problems. It provides the intellectual satisfaction of knowing a
few things just for the sake of knowledge and also has practical utility for the social scientist to know
for the sake of being able to do something better or in a more efficient manner. Research in social
sciences is concerned both with knowledge for its own sake and with knowledge for what it can
contribute to practical concerns. “This double emphasis is perhaps especially appropriate in the case
of social science. On the one hand, its responsibility as a science is to develop a body of principles
that make possible the understanding and prediction of the whole range of human interactions. On
the other hand, because of its social orientation, it is increasingly being looked to for practical guidance
in solving immediate problems of human relations.”

In addition to what has been stated above, the significance of research can also be understood
keeping in view the following points:
(a) To those students who are to write a master’s or Ph.D. thesis, research may mean
careerism or a way to attain a high position in the social structure;
(b) To professionals in research methodology, research may mean a source of livelihood;
(c) To philosophers and thinkers, research may mean the outlet for new ideas and insights;
(d) To literary men and women, research may mean the development of new styles and creative
work;
(e) To analysts and intellectuals, research may mean the generalisations of new theories.
Thus, research is the fountain of knowledge for the sake of knowledge and an important source
of providing guidelines for solving different business, governmental and social problems. It is a sort of
formal training which enables one to understand the new developments in one’s field in a better way.

Research Methods versus Methodology
It seems appropriate at this juncture to explain the difference between research methods and research
methodology. Research methods may be understood as all those methods/techniques that are used
for conducting research. Research methods or techniques*, thus, refer to the methods the researchers
use in performing research operations. In other words, all those methods which are used by the
researcher during the course of studying his research problem are termed as research methods.
Since the object of research, particularly of applied research, is to arrive at a solution for a given
problem, the available data and the unknown aspects of the problem have to be related to each other
to make a solution possible. Keeping this in view, research methods can be put into the following
three groups:
1. In the first group we include those methods which are concerned with the collection of
data. These methods will be used where the data already available are not sufficient to
arrive at the required solution;
2. The second group consists of those statistical techniques which are used for establishing
relationships between the data and the unknowns;
3. The third group consists of those methods which are used to evaluate the accuracy of the
results obtained.
Research methods falling in the last two of the above groups are generally taken as the analytical
tools of research.

*At times, a distinction is also made between research techniques and research methods. Research techniques refer to the behaviour and instruments we use in performing research operations, such as making observations, recording data and processing data. Research methods refer to the behaviour and instruments used in selecting and constructing research techniques. For instance, the difference between methods and techniques of data collection can better be understood from the following chart:
1. Library research
   (i) Method: analysis of historical records. Techniques: recording of notes, content analysis, tape and film listening and analysis.
   (ii) Method: analysis of documents. Techniques: statistical compilations and manipulations, reference and abstract guides, contents analysis.
2. Field research
   (i) Method: non-participant direct observation. Techniques: observational behavioural scales, use of score cards, etc.
   (ii) Method: participant observation. Techniques: interactional recording, possible use of tape recorders, photographic techniques.
   (iii) Method: mass observation. Techniques: recording mass behaviour, interviews using independent observers in public places.
   (iv) Method: mail questionnaire. Techniques: identification of social and economic background of respondents.
   (v) Method: opinionnaire. Techniques: use of attitude scales, projective techniques, use of sociometric scales.
   (vi) Method: personal interview. Techniques: interviewer uses a detailed schedule with open and closed questions.
   (vii) Method: focused interview. Techniques: interviewer focuses attention upon a given experience and its effects.
   (viii) Method: group interview. Techniques: small groups of respondents are interviewed simultaneously.
   (ix) Method: telephone survey. Techniques: used for information and for discerning opinion; may also be used as a follow-up of a questionnaire.
   (x) Method: case study and life history. Techniques: cross-sectional collection of data for intensive analysis; longitudinal collection of data of an intensive character.
3. Laboratory research. Method: small group study of random behaviour, play and role analysis. Techniques: use of audio-visual recording devices, use of observers, etc.
From what has been stated above, we can say that methods are more general: it is the methods that generate techniques. However, in practice, the two terms are taken as interchangeable, and when we talk of research methods we do, by implication, include research techniques within their compass.
Research methodology is a way to systematically solve the research problem. It may be
understood as a science of studying how research is done scientifically. In it we study the various
steps that are generally adopted by a researcher in studying his research problem along with the logic
behind them. It is necessary for the researcher to know not only the research methods/techniques
but also the methodology. Researchers not only need to know how to develop certain indices or tests,
how to calculate the mean, the mode, the median or the standard deviation or chi-square, how to
apply particular research techniques, but they also need to know which of these methods or techniques
are relevant and which are not, and what they would mean and indicate and why. Researchers also
need to understand the assumptions underlying various techniques and they need to know the criteria
by which they can decide that certain techniques and procedures will be applicable to certain problems
and others will not. All this means that it is necessary for the researcher to design his methodology
for his problem as the same may differ from problem to problem. For example, an architect, who
designs a building, has to consciously evaluate the basis of his decisions, i.e., he has to evaluate why
and on what basis he selects particular size, number and location of doors, windows and ventilators,
uses particular materials and not others and the like. Similarly, in research the scientist has to expose
the research decisions to evaluation before they are implemented. He has to specify very clearly and
precisely what decisions he selects and why he selects them so that they can be evaluated by others also.
From what has been stated above, we can say that research methodology has many dimensions
and research methods do constitute a part of the research methodology. The scope of research
methodology is wider than that of research methods. Thus, when we talk of research methodology
we not only talk of the research methods but also consider the logic behind the methods we use
in the context of our research study and explain why we are using a particular method or
technique and why we are not using others so that research results are capable of being
evaluated either by the researcher himself or by others. Why a research study has been undertaken,
how the research problem has been defined, in what way and why the hypothesis has been formulated,
what data have been collected and what particular method has been adopted, why particular technique
of analysing data has been used and a host of similar other questions are usually answered when we
talk of research methodology concerning a research problem or study.
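Before moving on, here is a small illustration of the routine computations mentioned above (the mean, median, mode, standard deviation and a chi-square statistic); the sketch is not from the text and uses hypothetical data with Python's standard statistics module. Knowing how to compute these figures is the "methods" side; knowing which of them is appropriate for a given problem, and why, is the "methodology" side the passage is stressing.

    # Illustrative computations only; the data are hypothetical.
    import statistics

    scores = [12, 15, 15, 18, 20, 22, 22, 22, 25, 30]      # a small hypothetical sample
    print("mean    :", statistics.mean(scores))             # 20.1
    print("median  :", statistics.median(scores))           # 21.0
    print("mode    :", statistics.mode(scores))             # 22
    print("std dev :", round(statistics.stdev(scores), 2))  # sample standard deviation

    # A chi-square statistic comparing observed and expected frequencies in four categories.
    observed = [18, 22, 30, 30]
    expected = [25, 25, 25, 25]
    chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    print("chi-square:", round(chi_square, 2))               # 4.32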

Research and Scientific Method
For a clear perception of the term research, one should know the meaning of scientific method. The
two terms, research and scientific method, are closely related. Research, as we have already stated,
can be termed as “an inquiry into the nature of, the reasons for, and the consequences of any
particular set of circumstances, whether these circumstances are experimentally controlled or recorded
just as they occur. Further, research implies the researcher is interested in more than particular
results; he is interested in the repeatability of the results and in their extension to more complicated
and general situations.” On the other hand, the philosophy common to all research methods and
techniques, although they may vary considerably from one science to another, is usually given the
name of scientific method. In this context, Karl Pearson writes, “The scientific method is one and the
same in the branches (of science) and that method is the method of all logically trained minds … the
unity of all sciences consists alone in its methods, not its material; the man who classifies facts of any
kind whatever, who sees their mutual relation and describes their sequences, is applying the Scientific
Method and is a man of science.”
Scientific method is the pursuit of truth as determined by logical
considerations. The ideal of science is to achieve a systematic interrelation of facts. Scientific method
attempts to achieve “this ideal by experimentation, observation, logical arguments from accepted
postulates and a combination of these three in varying proportions.” In scientific method, logic aids
in formulating propositions explicitly and accurately so that their possible alternatives become clear.
Further, logic develops the consequences of such alternatives, and when these are compared with
observable phenomena, it becomes possible for the researcher or the scientist to state which alternative
is most in harmony with the observed facts. All this is done through experimentation and survey
investigations which constitute the integral parts of scientific method.
Experimentation is done to test hypotheses and to discover new relationships, if any, among
variables. But the conclusions drawn on the basis of experimental data are generally criticized for
either faulty assumptions, poorly designed experiments, badly executed experiments or faulty
interpretations. As such the researcher must pay all possible attention while developing the experimental
design and must state only probable inferences. The purpose of survey investigations may also be to
provide scientifically gathered information to work as a basis for the researchers for their conclusions.
The scientific method is, thus, based on certain basic postulates which can be stated as under:
1. It relies on empirical evidence;
2. It utilizes relevant concepts;
3. It is committed to only objective considerations;
4. It presupposes ethical neutrality, i.e., it aims at nothing but making only adequate and correct
statements about population objects;
5. It results in probabilistic predictions;
6. Its methodology is made known to all concerned for critical scrutiny and for use in testing
the conclusions through replication;
7. It aims at formulating most general axioms or what can be termed as scientific theories.

Thus, “the scientific method encourages a rigorous, impersonal mode of procedure dictated by
the demands of logic and objective procedure.”

Accordingly, scientific method implies an objective,
logical and systematic method, i.e., a method free from personal bias or prejudice, a method to
ascertain demonstrable qualities of a phenomenon capable of being verified, a method wherein the
researcher is guided by the rules of logical reasoning, a method wherein the investigation proceeds in
an orderly manner and a method that implies internal consistency.
Importance of Knowing How Research is Done
The study of research methodology gives the student the necessary training in gathering materials and
arranging or card-indexing them, in participation in field work when required, and also training in
techniques for the collection of data appropriate to particular problems, in the use of statistics,
questionnaires and controlled experimentation and in recording evidence, sorting it out and interpreting
it. In fact, importance of knowing the methodology of research or how research is done stems from
the following considerations:
(i) For one who is preparing himself for a career of carrying out research, the importance of
knowing research methodology and research techniques is obvious since the same constitute
the tools of his trade. The knowledge of methodology provides good training specially to the
new research worker and enables him to do better research. It helps him to develop disciplined
thinking or a ‘bent of mind’ to observe the field objectively. Hence, those aspiring to a
career in research must develop the skill of using research techniques and must thoroughly
understand the logic behind them.
(ii) Knowledge of how to do research will inculcate the ability to evaluate and use research
results with reasonable confidence. In other words, we can state that the knowledge of
research methodology is helpful in various fields such as government or business
administration, community development and social work where persons are increasingly
called upon to evaluate and use research results for action.
(iii) When one knows how research is done, then one may have the satisfaction of acquiring a
new intellectual tool which can become a way of looking at the world and of judging every
day experience. Accordingly, it enables us to make intelligent decisions concerning problems
facing us in practical life at different points of time. Thus, the knowledge of research
methodology provides tools to look at things in life objectively.
(iv) In this scientific age, all of us are in many ways consumers of research results and we can
use them intelligently provided we are able to judge the adequacy of the methods by which
they have been obtained. The knowledge of methodology helps the consumer of research
results to evaluate them and enables him to take rational decisions.

Research Process
Before embarking on the details of research methodology and techniques, it seems appropriate to
present a brief overview of the research process. The research process consists of a series of actions or
steps necessary to effectively carry out research and the desired sequencing of these steps. The
chart shown in Figure 1.1 well illustrates a research process.


Fig. 1.1 Research process in flow chart: I. Define research problem → II. Review the literature (review concepts and theories; review previous research findings) → III. Formulate hypotheses → IV. Design research (including sample design) → V. Collect data (execution) → VI. Analyse data (test hypotheses, if any) → VII. Interpret and report. Feedback links (which help in controlling the sub-system to which they are transmitted) and feed-forward links (which serve the vital function of providing criteria for evaluation) connect the steps.

The chart indicates that the research process consists of a number of closely related activities,
as shown through I to VII. But such activities overlap continuously rather than following a strictly
prescribed sequence. At times, the first step determines the nature of the last step to be undertaken.
If subsequent procedures have not been taken into account in the early stages, serious difficulties
may arise which may even prevent the completion of the study. One should remember that the
various steps involved in a research process are not mutually exclusive; nor are they separate and
distinct. They do not necessarily follow each other in any specific order and the researcher has to be
constantly anticipating at each step in the research process the requirements of the subsequent
steps. However, the following order concerning various steps provides a useful procedural guideline
regarding the research process: (1) formulating the research problem; (2) extensive literature survey;
(3) developing the hypothesis; (4) preparing the research design; (5) determining sample design;
(6) collecting the data; (7) execution of the project; (8) analysis of data; (9) hypothesis testing;
(10) generalisations and interpretation, and (11) preparation of the report or presentation of the results,
i.e., formal write-up of conclusions reached.
A brief description of the above stated steps will be helpful.
1. Formulating the research problem: There are two types of research problems, viz., those
which relate to states of nature and those which relate to relationships between variables. At the
very outset the researcher must single out the problem he wants to study, i.e., he must decide the
general area of interest or aspect of a subject-matter that he would like to inquire into. Initially the
problem may be stated in a broad general way and then the ambiguities, if any, relating to the problem
should be resolved. Then, the feasibility of a particular solution has to be considered before a working
formulation of the problem can be set up. The formulation of a general topic into a specific research
problem, thus, constitutes the first step in a scientific enquiry. Essentially two steps are involved in
formulating the research problem, viz., understanding the problem thoroughly, and rephrasing the
same into meaningful terms from an analytical point of view.
The best way of understanding the problem is to discuss it with one’s own colleagues or with
those having some expertise in the matter. In an academic institution the researcher can seek the
help from a guide who is usually an experienced man and has several research problems in mind.
Often, the guide puts forth the problem in general terms and it is up to the researcher to narrow it
down and phrase the problem in operational terms. In private business units or in governmental
organisations, the problem is usually earmarked by the administrative agencies with whom the
researcher can discuss as to how the problem originally came about and what considerations are
involved in its possible solutions.
The researcher must at the same time examine all available literature to get himself acquainted
with the selected problem. He may review two types of literature—the conceptual literature concerning
the concepts and theories, and the empirical literature consisting of studies made earlier which are
similar to the one proposed. The basic outcome of this review will be the knowledge as to what data
and other materials are available for operational purposes which will enable the researcher to specify
his own research problem in a meaningful context. After this the researcher rephrases the problem
into analytical or operational terms i.e., to put the problem in as specific terms as possible. This task
of formulating, or defining, a research problem is a step of greatest importance in the entire research
process. The problem to be investigated must be defined unambiguously for that will help discriminating
relevant data from irrelevant ones. Care must, however, be taken to verify the objectivity and validity
of the background facts concerning the problem. Professor W.A. Neiswanger correctly states that

the statement of the objective is of basic importance because it determines the data which are to be
collected, the characteristics of the data which are relevant, relations which are to be explored, the
choice of techniques to be used in these explorations and the form of the final report. If there are
certain pertinent terms, the same should be clearly defined along with the task of formulating the
problem. In fact, formulation of the problem often follows a sequential pattern where a number of
formulations are set up, each formulation more specific than the preceding one, each one phrased in
more analytical terms, and each more realistic in terms of the available data and resources.
2. Extensive literature survey: Once the problem is formulated, a brief summary of it should be
written down. It is compulsory for a research worker writing a thesis for a Ph.D. degree to write a
synopsis of the topic and submit it to the necessary Committee or the Research Board for approval.
At this juncture the researcher should undertake an extensive literature survey connected with the
problem. For this purpose, the abstracting and indexing journals and published or unpublished
bibliographies are the first place to go to. Academic journals, conference proceedings, government
reports, books etc., must be tapped depending on the nature of the problem. In this process, it should
be remembered that one source will lead to another. The earlier studies, if any, which are similar to
the study in hand should be carefully studied. A good library will be a great help to the researcher at
this stage.
3. Development of working hypotheses: After the extensive literature survey, the researcher should
state in clear terms the working hypothesis or hypotheses. A working hypothesis is a tentative assumption
made in order to draw out and test its logical or empirical consequences. As such the manner in
which research hypotheses are developed is particularly important since they provide the focal point
for research. They also affect the manner in which tests must be conducted in the analysis of data
and indirectly the quality of data which is required for the analysis. In most types of research, the
development of working hypothesis plays an important role. Hypothesis should be very specific and
limited to the piece of research in hand because it has to be tested. The role of the hypothesis is to
guide the researcher by delimiting the area of research and to keep him on the right track. It sharpens
his thinking and focuses attention on the more important facets of the problem. It also indicates the
type of data required and the type of methods of data analysis to be used.
How does one go about developing working hypotheses? The answer is by using the following
approach:
(a) Discussions with colleagues and experts about the problem, its origin and the objectives in
seeking a solution;
(b) Examination of data and records, if available, concerning the problem for possible trends,
peculiarities and other clues;
(c) Review of similar studies in the area or of the studies on similar problems; and
(d) Exploratory personal investigation which involves original field interviews on a limited scale
with interested parties and individuals with a view to secure greater insight into the practical
aspects of the problem.
Thus, working hypotheses arise as a result of a-priori thinking about the subject, examination of the
available data and material including related studies and the counsel of experts and interested parties.
Working hypotheses are more useful when stated in precise and clearly defined terms. It may as well
be remembered that occasionally we may encounter a problem where we do not need working

hypotheses, specially in the case of exploratory or formulative researches which do not aim at testing
the hypothesis. But as a general rule, specification of working hypotheses in another basic step of the
research process in most research problems.
4. Preparing the research design: The research problem having been formulated in clear-cut
terms, the researcher will be required to prepare a research design, i.e., he will have to state the
conceptual structure within which research would be conducted. The preparation of such a design
facilitates research to be as efficient as possible yielding maximal information. In other words, the
function of research design is to provide for the collection of relevant evidence with minimal expenditure
of effort, time and money. But how all these can be achieved depends mainly on the research
purpose. Research purposes may be grouped into four categories, viz., (i) Exploration, (ii) Description,
(iii) Diagnosis, and (iv) Experimentation. A flexible research design which provides opportunity for
considering many different aspects of a problem is considered appropriate if the purpose of the
research study is that of exploration. But when the purpose happens to be an accurate description of
a situation or of an association between variables, the suitable design will be one that minimises bias
and maximises the reliability of the data collected and analysed.
There are several research designs, such as experimental and non-experimental hypothesis
testing. Experimental designs can be either informal designs (such as before-and-after without control,
after-only with control, before-and-after with control) or formal designs (such as completely randomized
design, randomized block design, Latin square design, simple and complex factorial designs), out of
which the researcher must select one for his own project.
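As an illustration of the simplest of these informal designs, the sketch below (hypothetical figures and function name, not from the text) estimates a treatment effect under a before-and-after with control design: the change observed in the control group is subtracted from the change observed in the treated group.

    # Hypothetical sketch of a "before-and-after with control" informal design.
    def estimated_effect(treated_before, treated_after, control_before, control_after):
        change_in_treated = treated_after - treated_before
        change_in_control = control_after - control_before   # change due to other factors
        return change_in_treated - change_in_control

    # e.g. average weekly sales in test and control shops before and after a promotion
    print(estimated_effect(treated_before=200, treated_after=260,
                           control_before=210, control_after=225))   # 60 - 15 = 45

The formal designs mentioned above (randomized, block, Latin square and factorial designs) refine this basic logic by controlling how units are assigned to the conditions being compared.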
The preparation of the research design, appropriate for a particular research problem, involves
usually the consideration of the following:
(i) the means of obtaining the information;
(ii) the availability and skills of the researcher and his staff (if any);
(iii) explanation of the way in which selected means of obtaining information will be organised
and the reasoning leading to the selection;
(iv) the time available for research; and
(v) the cost factor relating to research, i.e., the finance available for the purpose.
5. Determining sample design: All the items under consideration in any field of inquiry constitute
a ‘universe’ or ‘population’. A complete enumeration of all the items in the ‘population’ is known as
a census inquiry. It can be presumed that in such an inquiry when all the items are covered no
element of chance is left and highest accuracy is obtained. But in practice this may not be true. Even
the slightest element of bias in such an inquiry will get larger and larger as the number of observations
increases. Moreover, there is no way of checking the element of bias or its extent except through a
resurvey or use of sample checks. Besides, this type of inquiry involves a great deal of time, money
and energy. Not only this, census inquiry is not possible in practice under many circumstances. For
instance, blood testing is done only on sample basis. Hence, quite often we select only a few items
from the universe for our study purposes. The items so selected constitute what is technically called
a sample.
The researcher must decide the way of selecting a sample or what is popularly known as the
sample design. In other words, a sample design is a definite plan determined before any data are
actually collected for obtaining a sample from a given population. Thus, the plan to select 12 of a

city’s 200 drugstores in a certain way constitutes a sample design. Samples can be either probability
samples or non-probability samples. With probability samples each element has a known probability
of being included in the sample but the non-probability samples do not allow the researcher to determine
this probability. Probability samples are those based on simple random sampling, systematic sampling,
stratified sampling, cluster/area sampling whereas non-probability samples are those based on
convenience sampling, judgement sampling and quota sampling techniques. A brief mention of the
important sample designs is as follows (a short illustrative sketch of several of these designs appears
after the list):
(i) Deliberate sampling: Deliberate sampling is also known as purposive or non-probability
sampling. This sampling method involves purposive or deliberate selection of particular
units of the universe for constituting a sample which represents the universe. When population
elements are selected for inclusion in the sample based on the ease of access, it can be
called convenience sampling. If a researcher wishes to secure data from, say, gasoline
buyers, he may select a fixed number of petrol stations and may conduct interviews at
these stations. This would be an example of a convenience sample of gasoline buyers. At
times such a procedure may give very biased results particularly when the population is not
homogeneous. On the other hand, in judgement sampling the researcher’s judgement is
used for selecting items which he considers as representative of the population. For example,
a judgement sample of college students might be taken to secure reactions to a new method
of teaching. Judgement sampling is used quite frequently in qualitative research where the
desire happens to be to develop hypotheses rather than to generalise to larger populations.
(ii) Simple random sampling: This type of sampling is also known as chance sampling or
probability sampling where each and every item in the population has an equal chance of
inclusion in the sample and each one of the possible samples, in case of finite universe, has
the same probability of being selected. For example, if we have to select a sample of 300
items from a universe of 15,000 items, then we can put the names or numbers of all the
15,000 items on slips of paper and conduct a lottery. Using the random number tables is
another method of random sampling. To select the sample, each item is assigned a number
from 1 to 15,000. Then, 300 five-digit random numbers are selected from the table. To do
this we select some random starting point and then a systematic pattern is used in proceeding
through the table. We might start in the 4th row, second column and proceed down the
column to the bottom of the table and then move to the top of the next column to the right.
When a number exceeds the limit of the numbers in the frame, in our case over 15,000, it is
simply passed over and the next number selected that does fall within the relevant range.
Since the numbers were placed in the table in a completely random fashion, the resulting
sample is random. This procedure gives each item an equal probability of being selected. In
the case of an infinite population, the selection of each item in a random sample is controlled by
the same probability, and successive selections are independent of one another.
(iii) Systematic sampling: In some instances the most practical way of sampling is to select
every 15th name on a list, every 10th house on one side of a street and so on. Sampling of
this type is known as systematic sampling. An element of randomness is usually introduced
into this kind of sampling by using random numbers to pick up the unit with which to start.
This procedure is useful when a sampling frame is available in the form of a list. In such a
design the selection process starts by picking some random point in the list and then every
nth element is selected until the desired number is secured.

(iv) Stratified sampling: If the population from which a sample is to be drawn does not constitute
a homogeneous group, then stratified sampling technique is applied so as to obtain a
representative sample. In this technique, the population is stratified into a number of non-overlapping subpopulations or strata and sample items are selected from each stratum. If
the items selected from each stratum are based on simple random sampling, the entire procedure,
first stratification and then simple random sampling, is known as stratified random sampling.
(v) Quota sampling: In stratified sampling the cost of taking random samples from individual
strata is often so high that interviewers are simply given quotas to be filled from
different strata, the actual selection of items for the sample being left to the interviewer’s
judgement. This is called quota sampling. The size of the quota for each stratum is generally
proportionate to the size of that stratum in the population. Quota sampling is thus an important
form of non-probability sampling. Quota samples generally happen to be judgement samples
rather than random samples.
(vi) Cluster sampling and area sampling: Cluster sampling involves grouping the population
and then selecting the groups or the clusters rather than individual elements for inclusion in
the sample. Suppose some departmental store wishes to sample its credit card holders. It
has issued its cards to 15,000 customers. The sample size is to be kept at, say, 450. For cluster
sampling this list of 15,000 card holders could be formed into 100 clusters of 150 card
holders each. Three clusters might then be selected for the sample randomly. The sample
size must often be larger than that of a simple random sample to ensure the same level of
accuracy, because in cluster sampling the procedural potential for order bias and other sources
of error is usually accentuated. The clustering approach can, however, make the sampling
procedure relatively easier and increase the efficiency of field work, especially in the case
of personal interviews.
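The credit-card example above can be sketched as follows (a rough illustration, not the author's own procedure): the 15,000 card holders are formed into 100 clusters of 150 each, three clusters are drawn at random, and every holder in the chosen clusters enters the sample of 450:

```python
import random

card_holders = list(range(1, 15_001))                                    # 15,000 credit card holders
clusters = [card_holders[i:i + 150] for i in range(0, 15_000, 150)]      # 100 clusters of 150 each

chosen_clusters = random.sample(clusters, 3)                             # select 3 clusters at random
sample = [holder for cluster in chosen_clusters for holder in cluster]   # all 450 holders in those clusters

print(len(clusters), len(sample))
```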
Area sampling is quite close to cluster sampling and is often talked about when the total
geographical area of interest happens to be a big one. Under area sampling we first divide
the total area into a number of smaller non-overlapping areas, generally called geographical
clusters, then a number of these smaller areas are randomly selected, and all units in these
small areas are included in the sample. Area sampling is especially helpful where we do not
have a list of the population concerned. It also makes the field interviewing more efficient
since the interviewer can do many interviews at each location.
(vii) Multi-stage sampling: This is a further development of the idea of cluster sampling. This
technique is meant for big inquiries extending to a considerably large geographical area like
an entire country. Under multi-stage sampling the first stage may be to select large primary
sampling units such as states, then districts, then towns and finally certain families within
towns. If the technique of random-sampling is applied at all stages, the sampling procedure
is described as multi-stage random sampling.
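A compressed sketch of multi-stage random sampling with invented units (the states, districts and towns below are hypothetical labels), applying random selection at every stage:

```python
import random

# Hypothetical nested frame: state -> district -> list of towns
frame = {
    f"state_{s}": {f"district_{d}": [f"town_{t}" for t in range(20)] for d in range(10)}
    for s in range(5)
}

states = random.sample(sorted(frame), 2)                              # first stage: states
districts = {s: random.sample(sorted(frame[s]), 3) for s in states}   # second stage: districts
towns = {s: {d: random.sample(frame[s][d], 4) for d in districts[s]}  # third stage: towns
         for s in states}
print(towns)
```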
(viii) Sequential sampling: This is a somewhat complex sample design in which the ultimate size
of the sample is not fixed in advance but is determined according to mathematical decisions
on the basis of information yielded as the survey progresses. This design is usually adopted
under acceptance sampling plan in the context of statistical quality control.
In practice, several of the methods of sampling described above may well be used in the same
study, in which case it can be called mixed sampling. It may be pointed out here that normally one
should resort to random sampling so that bias can be eliminated and sampling error can be estimated.
But purposive sampling is considered desirable when the universe happens to be small and a known
characteristic of it is to be studied intensively. Also, there are conditions under which sample designs
other than random sampling may be considered better for reasons like convenience and low costs.
The sample design to be used must be decided by the researcher taking into consideration the
nature of the inquiry and other related factors.
6. Collecting the data: In dealing with any real-life problem it is often found that data at hand are
inadequate, and hence it becomes necessary to collect data that are appropriate. There are several
ways of collecting the appropriate data, which differ considerably in terms of money costs, time and
other resources at the disposal of the researcher.
Primary data can be collected either through experiment or through survey. If the researcher
conducts an experiment, he observes some quantitative measurements, or the data, with the help of
which he examines the truth contained in his hypothesis. But in the case of a survey, data can be
collected by any one or more of the following ways:
(i) By observation: This method implies the collection of information by way of the investigator's
own observation, without interviewing the respondents. The information obtained relates to
what is currently happening and is not complicated by either the past behaviour or future
intentions or attitudes of respondents. This method is, no doubt, expensive, and
the information provided by it is also very limited. As such this method is not
suitable in inquiries where large samples are concerned.
(ii) Through personal interview: The investigator follows a rigid procedure and seeks answers
to a set of pre-conceived questions through personal interviews. This method of collecting
data is usually carried out in a structured way where output depends upon the ability of the
interviewer to a large extent.
(iii) Through telephone interviews: This method of collecting information involves contacting
the respondents on the telephone itself. This is not a very widely used method, but it plays an
important role in industrial surveys in developed regions, particularly, when the survey has
to be accomplished in a very limited time.
(iv) By mailing of questionnaires: The researcher and the respondents do not come in contact
with each other if this method of survey is adopted. Questionnaires are mailed to the
respondents with a request to return them after completing the same. It is the most extensively
used method in various economic and business surveys. Before applying this method, usually
a pilot study for testing the questionnaire is conducted, which reveals the weaknesses, if
any, of the questionnaire. The questionnaire to be used must be prepared very carefully so that
it may prove to be effective in collecting the relevant information.
(v) Through schedules: Under this method the enumerators are appointed and given training.
They are provided with schedules containing relevant questions. These enumerators go to
respondents with these schedules. Data are collected by filling up the schedules by
enumerators on the basis of replies given by respondents. Much depends upon the capability
of enumerators so far as this method is concerned. Some occasional field checks on the
work of the enumerators may ensure sincere work.

The researcher should select one of these methods of collecting the data taking into
consideration the nature of the investigation, the objective and scope of the inquiry, financial resources,
available time and the desired degree of accuracy. Though he should pay attention to all these
factors, much depends upon the ability and experience of the researcher. In this context Dr. A.L.
Bowley very aptly remarks that in the collection of statistical data common sense is the chief requisite
and experience the chief teacher.
7. Execution of the project: Execution of the project is a very important step in the research
process. If the execution of the project proceeds on correct lines, the data to be collected would be
adequate and dependable. The researcher should see that the project is executed in a systematic
manner and in time. If the survey is to be conducted by means of structured questionnaires, data can
be readily machine-processed. In such a situation, questions as well as the possible answers may be
coded. If the data are to be collected through interviewers, arrangements should be made for proper
selection and training of the interviewers. The training may be given with the help of instruction
manuals which explain clearly the job of the interviewers at each step. Occasional field checks
should be made to ensure that the interviewers are doing their assigned job sincerely and efficiently.
A careful watch should be kept for unanticipated factors in order to keep the survey as
realistic as possible. This, in other words, means that steps should be taken to ensure that the survey
is under statistical control so that the collected information is in accordance with the pre-defined
standard of accuracy. If some of the respondents do not cooperate, some suitable methods should be
designed to tackle this problem. One method of dealing with the non-response problem is to make a
list of the non-respondents and take a small sub-sample of them, and then with the help of experts
vigorous efforts can be made for securing response.
8. Analysis of data: After the data have been collected, the researcher turns to the task of analysing
them. The analysis of data requires a number of closely related operations such as establishment of
categories, the application of these categories to raw data through coding, tabulation and then drawing
statistical inferences. The unwieldy data should necessarily be condensed into a few manageable
groups and tables for further analysis. Thus, the researcher should classify the raw data into some
purposeful and usable categories. The coding operation is usually done at this stage, through which the
categories of data are transformed into symbols that may be tabulated and counted. Editing is the
procedure that improves the quality of the data for coding. With coding, the stage is ready for tabulation.
Tabulation is a part of the technical procedure wherein the classified data are put in the form of
tables. Mechanical devices can be made use of at this juncture. A great deal of data, especially in
large inquiries, is tabulated by computers. Computers not only save time but also make it possible to
study a large number of variables affecting a problem simultaneously.
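As a small illustration of machine tabulation, the sketch below cross-tabulates two hypothetical coded variables with pandas (the data and column names are invented; pandas is assumed to be available):

```python
import pandas as pd

# Hypothetical coded responses from a small survey
data = pd.DataFrame({
    "gender":  ["M", "F", "F", "M", "F", "M"],
    "opinion": ["yes", "no", "yes", "yes", "no", "no"],
})

table = pd.crosstab(data["gender"], data["opinion"], margins=True)   # classified data put into a table
print(table)
```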
Analysis work after tabulation is generally based on the computation of various percentages,
coefficients, etc., by applying various well defined statistical formulae. In the process of analysis,
relationships or differences supporting or conflicting with original or new hypotheses should be subjected
to tests of significance to determine with what validity data can be said to indicate any conclusion(s).
For instance, if there are two samples of weekly wages, each sample being drawn from factories in
different parts of the same city, giving two different mean values, then our problem may be whether
the two mean values are significantly different or the difference is just a matter of chance. Through
the use of statistical tests we can establish whether such a difference is a real one or is the result of
random fluctuations. If the difference happens to be real, the inference will be that the two samples
come from different universes, and if the difference is due to chance, the conclusion would be that
the two samples belong to the same universe. Similarly, the technique of analysis of variance can
help us in analysing whether three or more varieties of seeds grown on certain fields yield significantly
different results or not. In brief, the researcher can analyse the collected data with the help of
various statistical measures.
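The weekly-wages example can be illustrated with a two-sample t-test; the figures below are invented solely for the sketch, and SciPy's ttest_ind is assumed to be available:

```python
from scipy import stats

# Invented weekly wages from factories in two parts of the same city
wages_a = [520, 540, 510, 530, 555, 525, 545, 535]
wages_b = [500, 515, 505, 495, 520, 510, 500, 512]

t_stat, p_value = stats.ttest_ind(wages_a, wages_b)   # H0: the two population means are equal
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference in means appears real; the samples likely come from different universes.")
else:
    print("The difference may be a matter of chance; the samples may belong to the same universe.")
```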
9. Hypothesis-testing: After analysing the data as stated above, the researcher is in a position to
test the hypotheses, if any, he had formulated earlier. Do the facts support the hypotheses, or do they
happen to be contrary? This is the usual question which should be answered while testing hypotheses.
Various tests, such as the chi-square test, t-test and F-test, have been developed by statisticians for the
purpose. The hypotheses may be tested through the use of one or more of such tests, depending upon
the nature and object of the research inquiry. Hypothesis-testing will result in either accepting the hypothesis
or in rejecting it. If the researcher had no hypotheses to start with, generalisations established on the
basis of data may be stated as hypotheses to be tested by subsequent researches in times to come.
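For hypotheses about categorical data, a chi-square test of independence can be sketched as follows (the contingency table is hypothetical; SciPy is assumed):

```python
from scipy import stats

# Hypothetical contingency table: rows = two groups, columns = responses (yes / no)
observed = [
    [30, 20],
    [25, 35],
]

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value (e.g. below 0.05) would lead us to reject the hypothesis of independence.
```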
10. Generalisations and interpretation: If a hypothesis is tested and upheld several times, it may
be possible for the researcher to arrive at a generalisation, i.e., to build a theory. As a matter of fact,
the real value of research lies in its ability to arrive at certain generalisations. If the researcher had no
hypothesis to start with, he might seek to explain his findings on the basis of some theory. This is known
as interpretation. The process of interpretation may quite often trigger off new questions, which in
turn may lead to further researches.
11. Preparation of the report or the thesis: Finally, the researcher has to prepare the report of
what has been done by him. Writing of the report must be done with great care, keeping in view the
following:
1. The layout of the report should be as follows: (i) the preliminary pages; (ii) the main text,
and (iii) the end matter.
In its preliminary pages the report should carry the title and date, followed by acknowledgements
and a foreword. Then there should be a table of contents, followed by a list of tables and a list
of graphs and charts, if any, given in the report.
The main text of the report should have the following parts:
(a) Introduction: It should contain a clear statement of the objective of the research and
an explanation of the methodology adopted in accomplishing the research. The scope
of the study along with various limitations should as well be stated in this part.
(b) Summary of findings: After the introduction there would appear a statement of findings
and recommendations in non-technical language. If the findings are extensive, they
should be summarised.
(c) Main report: The main body of the report should be presented in logical sequence and
broken down into readily identifiable sections.
(d) Conclusion: Towards the end of the main text, the researcher should again put down the
results of his research clearly and precisely. In fact, it is the final summing up.
At the end of the report, appendices should be given in respect of all technical data. A bibliography,
i.e., a list of books, journals, reports, etc., consulted, should also be given at the end. An index should also
be given, especially in a published research report.

2. The report should be written in a concise and objective style in simple language, avoiding vague
expressions such as ‘it seems,’ ‘there may be’, and the like.
3. Charts and illustrations in the main report should be used only if they present the information
more clearly and forcibly.
4. Calculated ‘confidence limits’ must be mentioned and the various constraints experienced
in conducting research operations may as well be stated.
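Since point 4 asks for calculated confidence limits, here is a small sketch (with invented wage figures) of how 95 per cent confidence limits for a sample mean might be computed; SciPy is assumed:

```python
import statistics
from scipy import stats

# Invented sample of weekly wages, used only to show how confidence limits are reported
wages = [520, 540, 510, 530, 555, 525, 545, 535]

mean = statistics.mean(wages)
sem = stats.sem(wages)                                            # standard error of the mean
low, high = stats.t.interval(0.95, len(wages) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.1f}, 95% confidence limits = ({low:.1f}, {high:.1f})")
```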
Criteria of Good Research
Whatever may be the types of research works and studies, one thing that is important is that they all
meet on the common ground of scientific method employed by them. One expects scientific research
to satisfy the following criteria:
1. The purpose of the research should be clearly defined and common concepts be used.
2. The research procedure used should be described in sufficient detail to permit another
researcher to repeat the research for further advancement, keeping the continuity of what
has already been attained.
3. The procedural design of the research should be carefully planned to yield results that are
as objective as possible.
4. The researcher should report with complete frankness, flaws in procedural design and
estimate their effects upon the findings.
5. The analysis of data should be sufficiently adequate to reveal its significance and the
methods of analysis used should be appropriate. The validity and reliability of the data
should be checked carefully.
6. Conclusions should be confined to those justified by the data of the research and limited to
those for which the data provide an adequate basis.
7. Greater confidence in research is warranted if the researcher is experienced, has a good
reputation in research and is a person of integrity.
In other words, we can state the qualities of good research as under:
1. Good research is systematic: It means that research is structured with specified steps to
be taken in a specified sequence in accordance with a well-defined set of rules. The systematic
characteristic of research does not rule out creative thinking, but it certainly does reject
the use of guessing and intuition in arriving at conclusions.
2. Good research is logical: This implies that research is guided by the rules of logical
reasoning, and the logical processes of induction and deduction are of great value in carrying
out research. Induction is the process of reasoning from a part to the whole whereas
deduction is the process of reasoning from some premise to a conclusion which follows
from that very premise. In fact, logical reasoning makes research more meaningful in the
context of decision making.


3. Good research is empirical: It implies that research is related basically to one or more
aspects of a real situation and deals with concrete data that provide a basis for external
validity of research results.
4. Good research is replicable: This characteristic allows research results to be verified by
replicating the study and thereby building a sound basis for decisions.
Problems Encountered by Researchers in India
Researchers in India, particularly those engaged in empirical research, are facing several problems.
Some of the important problems are as follows:
1. The lack of scientific training in the methodology of research is a great impediment
for researchers in our country. There is a paucity of competent researchers. Many researchers
take a leap in the dark without knowing research methods. Most of the work which goes
in the name of research is not methodologically sound. Research, to many researchers and
even to their guides, is mostly a scissors-and-paste job without any insight shed on the
collated materials. The consequence is obvious, viz., the research results, quite often, do
not reflect the reality or realities. Thus, a systematic study of research methodology is an
urgent necessity. Before undertaking research projects, researchers should be well equipped
with all the methodological aspects. As such, efforts should be made to provide short-duration intensive courses for meeting this requirement.
2. There is insufficient interaction between the university research departments on one side
and business establishments, government departments and research institutions on the other
side. A great deal of primary data of non-confidential nature remain untouched/untreated
by the researchers for want of proper contacts. Efforts should be made to develop
satisfactory liaison among all concerned for better and more realistic research. There is a
need for developing some mechanism for a university-industry interaction programme so
that academics can get ideas from practitioners on what needs to be researched and
practitioners can apply the research done by the academics.
3. Most of the business units in our country do not have the confidence that the material
supplied by them to researchers will not be misused, and as such they are often reluctant to
supply the needed information to researchers. The concept of secrecy seems to be
sacrosanct to business organisations in the country so much so that it proves an impermeable
barrier to researchers. Thus, there is the need for generating the confidence that the
information/data obtained from a business unit will not be misused.
4. Research studies overlapping one another are undertaken quite often for want of
adequate information. This results in duplication and fritters away resources. This problem
can be solved by proper compilation and revision, at regular intervals, of a list of subjects on
which and the places where the research is going on. Due attention should be given toward
identification of research problems in various disciplines of applied science which are of
immediate concern to the industries.
5. There does not exist a code of conduct for researchers, and inter-university and inter-departmental
rivalries are also quite common. Hence, there is a need for developing a code
of conduct for researchers which, if adhered to sincerely, can overcome this problem.

6. Many researchers in our country also face the difficulty of adequate and timely secretarial
assistance, including computer assistance. This causes unnecessary delays in the
completion of research studies. All possible efforts should be made in this direction so that efficient
secretarial assistance is made available to researchers, and that too well in time. The University
Grants Commission must play a dynamic role in solving this difficulty.
7. Library management and functioning is not satisfactory at many places, and much of
the time and energy of researchers is spent in tracing out the books, journals, reports, etc.,
rather than in tracing out relevant material from them.
8. There is also the problem that many of our libraries are not able to get copies of old
and new Acts/Rules, reports and other government publications in time. This problem
is felt more in libraries located in places away from Delhi and/or the state capitals. Thus,
efforts should be made to ensure the regular and speedy supply of all governmental publications
to our libraries.
9. There is also the difficulty of the timely availability of published data from various
government and other agencies doing this job in our country. The researcher also faces the
problem on account of the fact that the published data vary quite significantly because of
differences in coverage by the agencies concerned.
10. There may, at times, arise the problem of conceptualization and also problems
relating to the process of data collection and related things.
Questions
1. Briefly describe the different steps involved in a research process.
2. What do you mean by research? Explain its significance in modern times.
3. Distinguish between Research methods and Research methodology.
4. Describe the different types of research, clearly pointing out the difference between an experiment and a
survey.
5. Write short notes on:
(1) Design of the research project;
(2) Ex post facto research;
(3) Motivation in research;
(4) Objectives of research;
(5) Criteria of good research;
(6) Research and scientific method.
6. “Empirical research in India in particular creates so many problems for the researchers”. State the problems
that are usually faced by such researchers.

7. “A research scholar has to work as a judge and derive the truth and not as a pleader who is only eager
to prove his case in favour of his plaintiff.” Discuss the statement pointing out the objectives of
research.

8. “Creative management, whether in public administration or private industry, depends on methods of
inquiry that maintain objectivity, clarity, accuracy and consistency”. Discuss this statement and examine
the significance of research.

9. “Research is much concerned with proper fact finding, analysis and evaluation.” Do you agree with this
statement? Give reasons in support of your answer.
10. It is often said that there is not a proper link between some of the activities under way in the world of
academics and in most businesses in our country. Account for this state of affairs and give suggestions for
improvement.

Saturday, December 15, 2012

Deductive & Inductive Methods in Research




By: Prof. M. Rizwan


In logic, we often refer to the two broad methods of reasoning as the deductive and inductive approaches.

Deductive reasoning works from the more general to the more specific. Sometimes this is informally called a "top-down" approach. We might begin with thinking up a theory about our topic of interest. We then narrow that down into more specific hypotheses that we can test. We narrow down even further when we collect observations to address the hypotheses. This ultimately leads us to be able to test the hypotheses with specific data -- a confirmation (or not) of our original theories.
Inductive reasoning works the other way, moving from specific observations to broader generalizations and theories. Informally, we sometimes call this a "bottom up" approach (please note that it's "bottom up" and not "bottoms up" which is the kind of thing the bartender says to customers when he's trying to close for the night!). In inductive reasoning, we begin with specific observations and measures, begin to detect patterns and regularities, formulate some tentative hypotheses that we can explore, and finally end up developing some general conclusions or theories.
These two methods of reasoning have a very different "feel" to them when you're conducting research. Inductive reasoning, by its very nature, is more open-ended and exploratory, especially at the beginning. Deductive reasoning is more narrow in nature and is concerned with testing or confirming hypotheses. Even though a particular study may look like it's purely deductive (e.g., an experiment designed to test the hypothesized effects of some treatment on some outcome), most social research involves both inductive and deductive reasoning processes at some time in the project. In fact, it doesn't take a rocket scientist to see that we could assemble these two approaches into a single circular process that continually cycles from theories down to observations and back up again to theories. Even in the most constrained experiment, the researchers may observe patterns in the data that lead them to develop new theories. Logical arguments are usually classified as either 'deductive' or 'inductive'.
Deduction: In the process of deduction, you begin with some statements, called 'premises', that are assumed to be true, you then determine what else would have to be true if the premises are true. For example, you can begin by assuming that God exists, and is good, and then determine what would logically follow from such an assumption. You can begin by assuming that if you think, then you must exist, and work from there. In mathematics you can begin with some axioms and then determine what you can prove to be true given those axioms. With deduction you can provide absolute proof of your conclusions, given that your premises are correct. The premises themselves, however, remain unproven and unprovable, they must be accepted on face value, or by faith, or for the purpose of exploration.
Induction: In the process of induction, you begin with some data, and then determine what general conclusion(s) can logically be derived from those data. In other words, you determine what theory or theories could explain the data. For example, you note that the probability of becoming schizophrenic is greatly increased if at least one parent is schizophrenic, and from that you conclude that schizophrenia may be inherited. That is certainly a reasonable hypothesis given the data. Note, however, that induction does not prove that the theory is correct. There are often alternative theories that are also supported by the data. For example, the behavior of the schizophrenic parent may cause the child to be schizophrenic, not the genes. What is important in induction is that the theory does indeed offer a logical explanation of the data. To conclude that the parents have no effect on the schizophrenia of the children is not supportable given the data, and would not be a logical conclusion.
Deduction and induction by themselves are inadequate for a scientific approach. While deduction gives absolute proof, it never makes contact with the real world, there is no place for observation or experimentation, no way to test the validity of the premises. And, while induction is driven by observation, it never approaches actual proof of a theory. The development of the scientific method involved a gradual synthesis of these two logical approaches.

Knowledge


CHAPTER 6: KNOWLEDGE

From Great Issues in Philosophy, by James Fieser
Home: www.utm.edu/staff/jfieser/120
Copyright 2008, updated 4/1/2011

CONTENTS

A. Skepticism
Radical Skepticism
Criticisms of Radical Skepticism
B. Sources of Knowledge
Experiential Knowledge
Non-Experiential Knowledge
Rationalism and Empiricism
C. The Definition of Knowledge
Justified True Belief
The Gettier Problem
D. Truth, Justification and Relativism
Theories of Truth
Theories of Justification
What’s so Bad about Relativism?
E. Scientific Knowledge
Confirming Theories
Scientific Revolutions

For Reflection
1. What reasons might you have for doubting that an object in front of you, such as a table, actually exists?
2. What are the common ways in which you acquire knowledge, and which is most reliable?
3. If you claim to know something, do you need to have evidence to back up your claim?
4. What does it mean for a statement to be true?
5. Everyone believes that George Washington existed; what other beliefs is that based upon, and what, in turn, are those based upon?
6. Are scientific theories merely the collective opinions of scientists, or do such theories give us genuine knowledge of the real world?

Some years ago, 39 members of an organization called Heaven's Gate committed suicide in the belief that they were shedding their earthly bodies to join an alien spaceship that was following the path of a comet. The centerpiece of the cult's belief system was that there are superior beings out there in the universe that exist on a higher level than we do on earth. They have perfect bodies, roam the galaxy in spacecrafts, and have mastered time travel. Occasionally these aliens send an away team to earth, who temporarily take the form of human beings, and assist willing human students in transforming to this higher level. Jesus and his disciples, they believed, were an earlier away team of alien teachers. Another away team appeared in the late 20th century. After the right training, human students would need to kill their physical bodies, which would release their spirits into the atmosphere. Nearby alien spacecrafts would then retrieve the spirits and provide them with perfect bodies.
           It's one thing to have an interesting idea about how the universe runs. It is quite another thing to know that the idea is true. Heaven's Gate members indeed claimed to know the truth of their views. That knowledge, they explained, begins when superior aliens implant special wisdom in the minds of select human students; this knowledge is further refined as they study under their alien teachers. But does this count as genuine knowledge? One of the central concerns of philosophy is to understand the concept of knowledge, which might help us distinguish between the convictions of a Heaven's Gate believer and the convictions of, say, a scientist. We want to see what precisely it means to "know" something, and what the legitimate avenues are for gaining knowledge. We'd also like to know how to respond to skeptics who say that all knowledge claims – including scientific ones -- are just as uncertain as the views of Heaven's Gate believers. These are the primary concerns in the philosophical study of the concept of knowledge, which goes by the name epistemology – from the Greek words episteme (knowledge) and logos (study).
           There are two main ways that we normally use the term "knowledge." First I might say that I know how to do some task, like fix a flat tire on my car or run a program on my computer. This is procedural knowledge, which involves skills that we have to perform specific chores. Second, I might say that I know some proposition, such as that "Paris is the capital of France." This is propositional knowledge – knowledge about some fact or state of affairs in the world. As important as procedural knowledge is in our daily lives, it is propositional knowledge that interests philosophers and will be the focus of this chapter.

A. SKEPTICISM
According to Heaven's Gate believers, their knowledge about the superior alien race came principally from the aliens themselves who took the form of human teachers. However, once becoming human, the alien teachers were stripped of their previous memories and knowledge. All that remained for them was a hazy image of the higher level, which they struggled to convey to their human students. Ironically, they explain, the aliens purposefully imposed this knowledge restriction on themselves since "too much knowledge too soon could potentially be an interference and liability to their plan." Immediately we should be suspicious about the belief system of the Heaven's Gate cult since the sources of their knowledge are so shaky. Not only must the human students blindly trust the statements of their supposed alien teachers, but the alien teachers themselves have no clear memories of their previous alien lifestyle. Genuine knowledge must have some evidence to back it up, which we don't see here. It's thus pretty natural for us to be skeptical about cults like Heaven's Gate that make extravagant claims with little concrete evidence. If we didn't have this built-in suspicion we'd be suckered into every hair-brained scheme that came along.
           But how far should our skepticism go? As long as there have been philosophers on this planet, there have been skeptics who have cast doubt on even our most natural beliefs, such as my belief that the table in front of me actually exists. One ancient philosopher, for example, believed that everything in the world changed so rapidly that, when someone spoke to him, he couldn't trust that the words meant the same thing by the time they reached his ears. He thus wouldn't verbally reply to anyone, but would only wiggle his finger indicating that he heard something. While this is quite an extreme reaction, it vividly illustrates the notion of philosophical skepticism -- the view that there are grounds for doubting claims that we typically take for granted.

           Radical Skepticism. There are many kinds of philosophical skepticism, and one distinguishing factor involves the extent of the skeptic's doubt. Local skepticism focuses on a particular claim, such as the belief that God exists, or that there is a universal standard of morality, or that there is intelligent life elsewhere in the universe. In each case, the skeptic would argue that we should doubt the specific claim in question. Many of us are local skeptics about at least some beliefs that others hold, and while we are skeptical about some issues we might be full believers on others. For example, I might be a religious skeptic about God's existence, but not be a moral skeptic about a universal standard of morality. And then there is radical skepticism, which maintains that all of our beliefs are subject to doubt. For any belief that we propose, we cannot know with certainty whether that belief is true or false. This is the type of skepticism that has attracted the most interest among philosophers. A couple centuries ago traditional thinkers argued that this kind of skepticism is a danger to everything that we hold sacred and it threatens to set civilization adrift on an ocean of chaos. The foremost task of philosophers, they argued, should be to combat radical skepticism and establish the certainty of our most important beliefs. Time has shown, though, that this was a false alarm: radical skepticism has not done any apparent damage to society. In fact, radical skeptics have maintained that there is a special benefit to skepticism: it can make us more tolerant of others when we realize that we ourselves can't claim to have superior knowledge.
           There are three general strategies for defending radical skepticism, each named after its originator. The first is Pyrrhonian skepticism, which was inspired by the ancient Greek philosopher Pyrrho (c.365-c.275 BCE). While Pyrrho wrote nothing, through his teachings he started a skeptical tradition whose aim was to suspend belief on every possible issue. The Pyrrhonian position is this: for any so called fact about the world, there are countless ways of interpreting it, none of which we can prefer above another; we should thus suspend belief about the nature of that thing. Take, for example a red ball that's in front of me. My eyes tell me one thing about it, but my sense of touch tells me an entirely different thing. To someone else who is color blind or has chapped hands, it will have a different set of features. To a dog it will appear even more differently. Suppose that someone was shrunk to the size of a molecule and sitting on the ball: the ball's surface would seem flat, not round. Suppose someone else expanded to the size of a mountain and was looking down on the ball: it would appear to be a speck with no recognizable features at all. We get used to the way that we perceive things like a red ball, and we assume that the ball actually has the features that we perceive. According to the Pyrrhonian skeptic, there's no basis for preferring our individual perspective over any other one. Arguments supporting any claim to knowledge will always be counter-balanced by opposing arguments, thus forcing the suspension of judgment on the original knowledge claim. Thus, views of the physical world, God, morality and everything else are all merely a matter of perspective, and the wisest course of action for us is to abstain from believing those views. Doubting everything, Pyrrhonians argue, will give us a sense of peace since we'll no longer be pulled back and forth in controversies about science, God, morality, politics, or anything else.
           The second approach to radical doubt is Humean skepticism, defended by the Scottish philosopher David Hume (1711-1776). According to this view, the human reasoning process is inherently flawed and this undermines all claims to know something. The problem is that when we list the reasons for our various beliefs about the world, we'll find that many of the explanations are contradictory. For example, if I follow one course of reasoning, I'll come to the conclusion that the ball in front of me really is round. But if I reason in another way, I'll conclude that the ball's roundness is just a matter of my perspective. Maybe the ball really is round; then again, maybe it's not. It makes no difference what the truth of the matter is since we now can't trust anything that human reason tells us. It's like being tested on a math problem: it makes no difference if you accidentally come up with the right answer. Once you've made a mistake in your calculations, your solution to the math problem is wrong, and you get no partial credit. Similarly, human reasoning is defective, and it's irrelevant whether it accidentally leads us to the truth.  After exposing a series of contradictions within the human reasoning process, Hume makes this dismal assessment:

The intense view of these manifold contradictions and imperfections in human reason has so wrought upon me, and heated my brain, that I am ready to reject all belief and reasoning, and can look upon no opinion even as more probable or likely than another. [Treatise of Human Nature, 1.4.7.8]

Thus, for Hume, everything that we reason about is based on faulty mental programming, and we need to regularly remind ourselves of this before we get too confident about what we claim to be true.
           The third approach to radical doubt is Cartesian skepticism, named after French philosopher RenĂ© Descartes (1596-1650). On this view, our entire understanding of the world may just be an illusion, and this possibility casts doubt on any knowledge claim that we might make. Descartes himself was not a skeptic, but he tentatively used a compelling argument for radical skepticism as a tool for developing a non-skeptical philosophical system. Descartes speculates: what if he was just a mind without any body, bobbing around in the spirit realm, and everything he perceived about the world was implanted in his mind by a powerful evil demon? Everything he assumes about the world, then, would be false. He describes this scenario here:

I will suppose that ... some evil demon with extreme power and cunning has used all his energies to deceive me. I will consider that the sky, the air, the earth, colors, shapes, sound, and all other external things are nothing but deluded dreams, which this genius has used as traps for my judgment. I will consider myself as having no hands, no eyes, no flesh, no blood, nor any senses, but falsely believing that I have all these things. [Meditations, 1]

Descartes didn't actually believe that he was being manipulated by an evil demon. His point is that this is a theoretical possibility that undermines all of our knowledge claims. I look at a ball in front of me; while it seems to really be there, I can't know this for sure since my experience might be an illusion imposed on me by the evil demon. 
           Of the three approaches to radical skepticism, the Cartesian version has captured people's imagination the most. Science fiction movies galore play off this theme. For example, in the film The Matrix, people's bodies are suspended in tubs of goo and their brains are wired into a massive computer that generates an artificial reality. Similarly, a commonly used example in contemporary philosophy is that of the brain in a vat: a mad scientist puts a person's brain into a glass jar and wires it to a supercomputer that creates an artificial reality. Whether the mechanism is an evil demon, the Matrix, or a mad scientist, the victims' experiences are so convincing that, from their perspective, it's impossible to tell that the perceived reality is fake.

           Criticisms of Radical Skepticism. With arguments as shocking as these, traditional philosophers wasted no time trying to stamp out the fire of radical skepticism. Four arguments were commonly used. First, even if there are ample reasons for me to doubt everything, there is still one truth that is irrefutable: my own existence. For, even if I say "I doubt that I exist," I must still be present to do the doubting. The act of doubting itself requires a doubter, and so my own existence will always be immune to skeptical doubts. This was the criticism that Descartes himself made of radical skepticism, which he encapsulated in the expression "I think, therefore I am." But radical skeptics have not been impressed by this maneuver. The problem with Descartes' solution is that it assumes too many things about what the "I" is behind all those doubts. Most importantly, it takes for granted that the "I" is a unified, conscious thing that continues intact as time moves on. But, according to the skeptic, this conception of the "I" relies too heavily on memory. I assume that I'm the same person now that I was a few moments ago because that's how it seems in my memory. And memory is a very easy target of doubt. Imagine that, every half second, an evil demon wiped clean all of my memories and gave me entirely new ones. One moment I think I'm a farmer, a half-second later a caveman, a half-second later a frog. For all I know, says the skeptic, that's what's actually happening to me right now and in that situation it would seem pretty meaningless for me to assert that "I exist".
           A second common attack on radical skepticism is that we can't live as skeptics in our normal lives. Sure, there is the occasional odd ball, like the finger-wiggling ancient philosopher described earlier. But if we persistently doubted everything, then we wouldn't eat when hungry, move from the path of speeding cars, or a thousand other things that we do during a typical day. We'd hesitate and question everything, but never act. Radical skeptics have not been impressed with this argument either. According to Hume, we have natural beliefs that direct our normal behavior and override our skeptical doubts. As legitimate as radical skepticism is, nature doesn't give us the option to act on it. He makes this point here:

Most fortunately it happens, that since reason is incapable of dispelling these clouds [of skepticism], nature herself suffices to that purpose, and cures me of this philosophical melancholy. . . . I dine, I play a game of backgammon, I converse, and am merry with my friends; and when, after three or four hours' amusement, I would return to these speculations, they appear so cold, and strained, and ridiculous, that I cannot find in my heart to enter into them any further. [Treatise, 1.4.7]

Thus, according to Hume, we waver back and forth between skepticism and natural beliefs. When we realize how philosophically unjustified natural beliefs are, we are led down the path of skepticism. The doorbell then rings, and we're snapped out of our philosophical speculations and back to our normal routines and natural beliefs.
           A third attack on radical skepticism is that the skeptic's position is logically self-refuting. The skeptic's main point is this:

  • We cannot know any belief with certainty.

Let's call this "the skeptic's thesis." However, if I put forward the skeptic's thesis, then I am implying that I know it with certainty. It is like saying this:

  • We know with certainty that we cannot know any belief with certainty.

The skeptic's thesis itself seems to be an exception to the very point that it is making. Thus, the skeptic's thesis is logically inconsistent with itself and we should reject it. But skeptics have a response to this criticism, which they sometimes explain using the metaphor of a digestive laxative. We take laxatives to rid our digestive system of unwanted stuff. But as the laxative takes effect, the laxative itself is expelled from the digestive system along with everything else. The skeptic's thesis, then, is like a laxative: we take it to rid our minds of all unjustified beliefs, and in the process we expel the skeptic's thesis itself. It's a higher level of skepticism in which we set aside everything, including the skeptic’s thesis.
           A fourth criticism of radical skepticism is that it rests on an unrealistically high standard of evidence. There are two basic levels of evidence: complete and partial. The skeptic assumes that genuine knowledge requires complete evidence, but complete evidence is not achievable. Try as we might, says the skeptic, we can never prove any assertion with absolute certainty, since some skeptical argument will cast doubt on that assertion. The solution to this skeptical challenge is to reduce the qualifications for knowledge and be content with partial evidence. To illustrate, suppose I want to gather enough evidence to support the claim that "I know that there is a ball in front of me." I first get evidence through my senses: I perceive the ball with my eyes. I could then get supporting evidence by having others stand in front of the ball and report whether they see it too. I could get even stronger evidence by using scientific equipment that would measure the ball's density and detect the light spectrum reflected off the ball. This may seem extreme, but even then the evidence is still not complete. I could bring in a team of physicists to study the ball and write up an exhaustive report. I could hire a second team to do more tests. But even this is not complete since there are always more tests that I could run. Complete evidence is not possible, and the radical skeptic knows this. What, though, if we lower the requirements for what counts as knowledge? We could allow partial evidence, but not require the evidence to be complete. In the case of the ball, it might be enough to simply rely on the evidence that I gain about it through my senses, as incomplete as it is. This would put radical skepticism to rest. The problem with this solution, though, is that it doesn't refute radical skepticism, but surrenders to it. It concedes the impossibility of ever having genuine knowledge with absolute certainty. What we're left with is a version of knowledge that's so diluted that it doesn't count for much more than a personal conviction. After viewing the ball with my eyes, I may as well say "I have a partially supported belief that a ball is in front of me." Inserting the word "knowledge" here would add nothing.
           While radical skepticism seems excessive, it nevertheless poses a challenge to genuine knowledge that can't be easily combated. It may well be impossible to ever refute radical skepticism and so it might forever remain the archenemy of knowledge. While attempts to destroy this villain may ultimately fail, struggling with the issue helps illuminate the nature of knowledge itself. It's much like research into seemingly incurable diseases: even if scientists can't discover a cure for cancer, the investigation still gives them a greater insight into human physiology. As we move on to explore the concept of knowledge in more detail, skepticism will always be lurking in the background, often forcing us to reject some theories and revise others.

B. SOURCES OF KNOWLEDGE
We claim to know a lot of facts, for example, that fire is hot, that George Washington was the first U.S. president, and, in the case of Heaven's Gate believers, that superior aliens are roaming the galaxy. Our knowledge claims vary dramatically, and frequently we claim to know something that we really don't know. One way of understanding the concept of knowledge is to look at the different ways in which we acquire knowledge.
           Philosophers have traditionally maintained that there are two types of knowledge from two entirely different sources. First, there is knowledge through experience: seeing something, hearing about something, feeling something. This goes by the Latin term a posteriori which literally means knowledge that is posterior to – or after experience. Second, there is knowledge that does not come from experience, but perhaps instead from reason itself, such as logical and mathematical truths. This is called a priori knowledge, which, from Latin, literally means knowledge that is prior to experience.

           Experiential Knowledge. Experiential (a posteriori) knowledge is of many types, the most obvious of which involves perception. Each of our five senses is like a door to the outside world; when we throw them open, we are flooded with an endless variety of sights, sounds, textures, smells and tastes. When I look at a cow in front of me and say "I know that it is brown," the source of this knowledge rests upon my visual perception of the brown cow. While perception is perhaps the dominant source of experiential knowledge, it can also be misleading. There are optical illusions, such as a stick which appears bent when in water; there are mirages, such as the appearance of water puddles on hot roads. Skeptics, as we’ve seen, have exposed endless problems with perceptual knowledge such as these.
           A second source of experiential knowledge is introspection, which involves directly experiencing our own mental states. Introspection is like a sixth sense that looks into the most intimate parts of our minds, which allows us to inspect how we are feeling and how our thoughts are operating. If I go to my doctor complaining of an aching back, she'll ask me to describe my pain. Through introspection I then might report, "Well, it’s a sharp pain that starts right here and stops right here." The doctor herself cannot directly experience what I do and must rely on my introspective description. Like perception, introspection is not always reliable. When surveying my mental states, I may easily misdescribe feelings, such as mistaking a feeling of disappointment for a feeling of frustration. Other mental states seem to defy any clear descriptions at all, such as feelings of love or happiness.
           A third source of experiential knowledge is memory. My memory is like a recording device that captures events that I experience more or less in the order that they occur. I remember my trip to the doctor and the pain that I described to her at the time. This recollection itself constitutes a new experience. Again, experiential knowledge through memory is not always reliable. For example, I might wrongly recollect that there's pizza in the refrigerator, completely forgetting that I ate it all last night. Also, sometimes overbearing people like police investigators can make us think that we remember something that never happened. And then there's the phenomenon of deja vu, the feeling that we've encountered something before when we really haven't.
           A fourth source of experiential knowledge is the testimony of other people. Take, for example, my knowledge that George Washington was the first U.S. president. Since Washington died centuries before I was born, I couldn't know this through direct perception. Instead, I rely on the statements in history books. The authors of those books, in turn, rely on accounts from earlier records, and eventually it traces back to the direct experience of eyewitnesses who personally knew George Washington. A large portion of our knowledge rests on testimony – facts about people we've never seen or places we've never been to. While it's convenient for us to trust the testimony of others, there is often a high likelihood of error. This is particularly so with word-of-mouth testimonies: talk is cheap, and we're often sloppy in the accounts that we convey to others. Testimonies from written sources are usually more reliable than oral sources, but much depends on the integrity of the author, publisher, and the methods of fact-gathering. With oral or written sources, the longer the chain of testimony is, the greater the chance is of error creeping in.
           Perception, introspection, memory, and testimony: these are the four main ways of acquiring knowledge through experience. Did we leave any out? There are a few contenders, one of which is extrasensory perception, or ESP. For example, you might telepathically access my mind and know what I'm thinking. Or, through clairvoyance, you might be aware of an event taking place far away without seeing it or hearing about it. If ESP actually worked, we might indeed classify it among the other sources of experiential knowledge. But does it? Typical studies into ESP involve subjects guessing symbols on cards that are hidden from view. If the subject does better than a chance percentage, this is presumed to be evidence of ESP. However, the most scientifically rigorous experiments of this sort have failed to produce anything better than a chance percentage. While we regularly hear rumors of people having ESP, we have little reason to take them seriously. The safe route, then, would be to leave ESP off the list of sources of experiential knowledge.
           Consider next religious experiences. Believers sometimes say that they receive prophecies from God, or are guided by him, or know something through faith. Christian theologian John Calvin even spoke of a sense of the divine that we all have, which informs us that God exists. Might any of this count as experiential knowledge? The question is a complex one considering the wide range of religious experiences that believers report. Let's narrow the question to two representative types: knowledge through faith and prophetic knowledge. Regarding faith, as typically understood, faith involves belief without evidence, such as faith that God exists, or that the bodies of the dead will be resurrected in the future, or that our souls will be reincarnated in different bodies. These faith beliefs may be important in our personal religious lives, but there is a problem when we claim to know something through faith. One of the chief requirements for something to count as “knowledge” is that there is evidence to support it—as we'll see more clearly in the next section. But since faith is belief without evidence, then, technically speaking, faith wouldn't qualify as knowledge. Prophetic knowledge faces the same challenge as ESP: are prophecies any more successful than educated guesses? Imagine an experiment that we might conduct in which half of the subjects were prophets, and the other half non-prophets. We then asked both groups to make predictions about the upcoming year; at the end of the year we then checked the results. How would the prophets do? The odds are slim that we could even conduct the experiment since prophets would say that they can't prophesize on demand: it's a unique and unpredictable revelatory experience. They might also say that their revelations from God are not the sort of things that can be confirmed in the newspaper. If prophetic experiences are genuine sources of knowledge, the burden of proof seems to be on the believer. In the meantime, it would be premature to include it among the normal sources of experiential knowledge.

           Non-Experiential Knowledge. Turning next to non-experiential (a priori) knowledge, this source of information is much more difficult to describe. Some philosophers depict it as knowledge that flows from human reason itself, unpolluted by experience. We presumably gain access to this knowledge through rational insight. Usual examples of non-experiential knowledge are mathematics and logic. Take, for example, 2+2=4. Indeed, I might learn from experience that two apples plus two more apples will give me four apples. Nevertheless, I can grasp the concept itself without relying on any apples; I can also expand on the notion in ways that I could never experience, such as with the equation 2,000,000 + 2,000,000 = 4,000,000. Logic is similar; take for example the following argument:

All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.

When we strip this argument of all its empirical parts – men, mortality, Socrates – the following structure is revealed:

All X are Y
Z is an X
Therefore Z is a Y

This logical structure is something that we know independently of experience. In addition to math and logic, there are other truths that we know non-experientially, such as these:

·       All bachelors are unmarried men.
·       A sister is a female sibling.
·       Red is a color.

In each of the above cases, the truth depends entirely on the concepts within these statements. In the first, "unmarried men" is part of the definition of "bachelor"; the statement is thus true by definition, irrespective of our experiences.
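           Returning to the syllogism above, the abstract form can be modeled with sets to show that it holds whatever X, Y, and Z stand for. The following Python sketch is purely illustrative; the set-theoretic reading of "All X are Y" is an assumption of the example, not a claim made in the text.

# "All X are Y; Z is an X; therefore Z is a Y", read set-theoretically:
# if X is a subset of Y and z is a member of X, then z is a member of Y.
def syllogism_conclusion_holds(X, Y, z):
    """Return whether the conclusion holds, given that both premises are true."""
    assert X.issubset(Y) and z in X, "premises must be true for the form to apply"
    return z in Y

# The particular contents do not matter; true premises guarantee a true conclusion.
print(syllogism_conclusion_holds({"Socrates"}, {"Socrates", "Plato"}, "Socrates"))  # True
print(syllogism_conclusion_holds({2, 4}, {2, 4, 6}, 4))                             # True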
           Two concepts have been important in fleshing out the notion of non-experiential knowledge. First is necessity: non-experiential truths are necessary in that they could never be false, regardless of how differently the world was constructed. 2+2 would equal 4 in every conceivable science fiction scenario of the universe. Even if no human being ever existed, it would still be true that "All bachelors are unmarried men" based on the meaning of the words themselves. Experiential knowledge, though, is different in that it is contingent, as opposed to necessary: it could be false if the world had unfolded differently. Take the statement "George Washington was the first U.S. president," which is an item of experiential knowledge. It is of course true as things stand now. But we can imagine a thousand different things that might have prevented Washington from becoming president. What if he was sent to an orphanage for chopping down the family cherry tree? What if he choked to death on his wooden teeth prior to his inauguration? The truth of all experiential knowledge hinges on the precise construction of the world as it currently is.
           The other concept embedded in the notion of non-experiential knowledge is that of an analytic statement: a statement that becomes self-contradictory if we deny it. Take, for example, the statement "All bachelors are unmarried men." Its denial would be this:

It is not the case that all bachelors are unmarried men.

This is clearly self-contradictory since it would be like claiming that there exists some bachelor who is married, which is impossible. Many traditional philosophers have held that non-experiential knowledge is analytic in the above sense. Denying math or logic would produce a self-contradiction. Experiential knowledge, on the other hand, is synthetic: denying it won't produce a self-contradiction. Take again the statement "George Washington was the first U.S. president," which we know is true from experience. Its denial would be this:

It is not the case that George Washington was the first U.S. president.

While this statement is false as things actually stand, it isn't self-contradictory since, if the world had unfolded differently, the U.S. might well have had a different first president.

           Rationalism and Empiricism. An important philosophical war took place in the 17th and 18th centuries between two schools of thought. Briefly, on one side were rationalists from continental Europe who were critical of sense experience and felt that genuine knowledge was acquired non-experientially through reason. The leaders on this side were René Descartes, Benedict Spinoza, and Gottfried Leibniz. On the other side were empiricists from the British Isles who felt that non-experiential reasoning would give us nothing, and that experience was the only path to knowledge. John Locke, George Berkeley and David Hume were the leaders on this side. The war finally ended when Immanuel Kant proposed a compromise: true knowledge depends on a mixture of experiential and non-experiential knowledge. We need both, Kant argued; otherwise our whole mental system will not operate properly.
           Let's return to the rationalist position, particularly the version championed by Descartes. Sense experience, he argued, is seriously flawed and cannot be the source of important ideas that we have. Take, for example, the idea of a triangle. Look around the world and you'll never see a perfect triangle, whether it's a shape that we draw on a piece of paper or the side of a pyramid in Egypt. On close inspection, they'll all have irregular lines. The fact remains, though, that we do have conceptions of perfectly-shaped triangles. Rationalism, according to Descartes, offers the best explanation of how we get those perfect ideas. There are two central components to the rationalist position: innate ideas and deductive reasoning. Innate ideas, according to Descartes, are concepts that we have from birth that serve as a foundation for all of our other ideas. While they are inborn, we only become aware of them later in life – when we reach the "age of reason" as one philosopher called it. Innate ideas are in a special class of their own: we know them with absolute certainty, and it's impossible for us to acquire them through experience. While rationalists were reluctant to offer a complete list of innate ideas, the most important ones include the ideas of God, infinity, substance and causality. Regarding deductive reasoning, Descartes held that from our innate ideas we deduce other ideas. It's like in geometry where we begin with foundational concepts of points and lines, and deduce elaborate propositions from these about all kinds of geometrical shapes. Descartes was in fact inspired by the deductive method of geometry and maintained that we deduce ideas in the same way. Through deduction, the certainty that we have of innate ideas transfers to the other ideas that we derive from these. Mistakes creep in only when our deductions become so long that they rest on memory. All knowledge, he argued, including scientific knowledge, proceeds from innate ideas and deductive demonstration.
           Turn now to empiricism, particularly Locke's version. Locke's first task was to challenge the theory of innate ideas: none of our concepts, he argued, are inborn. Our mind is from birth like a blank sheet of paper, and it is only through experience that we write anything on it. One problem with innate ideas is that we can explain the origin of each one of them through experience. The idea of God, for example, is not innate as Descartes supposed, but comes from our perceptions of the world around us. There's thus no reason to put forward the theory of innate ideas when experience explains these notions just fine. Locke also found fault with the rationalist position that we don't become aware of innate ideas until later in life. It's not clear how such ideas can linger in our minds for so many years before we can be conscious of them. And by that time our minds have been flooded with experience, and a late-blooming innate idea wouldn't contribute anything to our knowledge of the world. Empiricists also challenged the rationalists' emphasis on deductive demonstration. We don't expand our knowledge by deducing new concepts from foundational ones, as mathematicians do. Geometry is the wrong role model to follow. Instead, we acquire new knowledge through induction, such as making generalizations from our experiences. I hit ten light bulbs with a hammer and each breaks; I generalize from this that all similar light bulbs that I hit with a hammer will also break. We first perceive, then we generalize. We perceive some more, then generalize some more. That's how we push knowledge forward.
           Then along comes Kant, the great mediator in the rationalism-empiricism debate. Kant was sympathetic to empiricism but thought that it suffered from a serious problem: it doesn't offer a good explanation for how we acquire non-experiential knowledge, such as mathematics and logic. Complex mathematical formulas in particular could not come from sense perception. There is a quality of self-evidence and certainty that they have, which fallible experience could never produce. Kant's solution was not to resurrect the old theory of innate ideas. Instead, he argued that there are innate organizing structures in our minds that automatically systematize our raw experiences – sort of like a skeleton that gives shape to flesh. For example, as I watch someone hit a light bulb with a hammer, raw sensory information rushes in through my eyes. My mind immediately reconstructs this information into a three-dimensional image and puts it on a timeline. My mind then imposes other organizational schemes on the sensory information. It makes me see the hammer and light bulb as separate things, rather than just a single blob of stuff. It then makes me see the hammer as the cause of the light bulb breaking. My experience of the world, then, is a fusion of innate structures and raw experience. The innate part is a concession to rationalism, and the experience part a concession to empiricism.
           Rationalism and empiricism in their original forms are outdated theories today, in part because of Kant’s insights. Nevertheless, they still are useful for depicting two fundamentally different ways in which we assess the sources of knowledge. Rationalism will continue to be attractive whenever we have knowledge that cannot be easily explained by experience. Empiricism will be attractive whenever the claims of innateness look fishy.

C. THE DEFINITION OF KNOWLEDGE
Throughout our discussion of knowledge so far, certain concepts have appeared again and again. There's the question of the truth of a claim. There is also the matter of our personal belief in, or conviction about, a claim. There are also issues about the evidence or justification that we have for a claim. Tradition has it that these are the three key elements of knowledge: truth, belief and justification. For example, when I say "I know that Paris is the capital of France", this means

It is true that Paris is the capital of France.
I believe that Paris is the capital of France.
I am justified in believing that Paris is the capital of France.

For short, contemporary philosophers call this definition of knowledge justified true belief – often abbreviating it "JTB". The crucial point about this definition is that all three components must be present: if any one of the three is absent, then it doesn't count as knowledge.

           Justified True Belief. To better understand the JTB definition of knowledge, let's go through each of the three elements. First is that the statement must be true. I can't claim to know that Elvis Presley is alive, for example, if he is in fact dead. Knowledge goes beyond my personal feelings on the matter and involves the truth of things as they actually are. Some critics of the JTB definition of knowledge question whether truth is always necessary in our claim to know something. For example, based on the available evidence of the time, scientists in the middle ages claimed to know that the earth was flat. Even though we understand now that it isn't, at the time they had knowledge of something that was false. Didn't they? In response, it may have been reasonable for scientists back then to believe the world was flat, but they really didn't know that it was. Their knowledge claims were premature in spite of how strong their convictions were. This is a trap that we fall into all the time. While talking with someone I may say insistently, "I know that Joe's car is blue!" When it turns out that Joe's car is in fact red, I have to apologize for overstating my conviction. Truth, then, is an indispensable component of knowledge.
           Second, I must believe the statement in order to know it. For example, it's true that Elvis Presley is dead, and there is enormous evidence to back this up. But if I still believe that he is alive, I couldn't sincerely say that I know that he is dead. Part of the concept of knowledge involves our personal belief convictions about some fact, irrespective of what the truth of the matter is. Critics of the JTB definition of knowledge sometimes think that belief isn't always required for our claims to know something. For example, I might say "I know I'm growing old, but I don't believe it!" In this case, I have knowledge of a particular fact without believing that fact. In response, if I say the previous sentence, what I actually mean is that I'm not capable of imagining myself getting old or I haven't yet emotionally accepted that fact. I just make my point more dramatically by saying "I don't believe it!" Instead I really do believe it, but I don't like it.
           Third, I must be justified in believing the statement insofar as there must be good evidence in support of it. Suppose that I randomly pick a card out of a deck without seeing it. I believe it is the Queen of Hearts, and it actually is that card. In this case I couldn't claim to know that I've picked the Queen of Hearts; I've only made a lucky guess. Critics question whether evidence is really needed for knowledge. For example, a store owner might say "I know that my employees are stealing from me, but I can't prove it!" Here the store owner has knowledge of a particular fact without any evidence for it. In response, the store owner is really saying that he strongly believes that his employees are stealing from him, but doesn't have enough evidence to press charges. Evidence, then, is indeed an integral part of knowledge.

           The Gettier Problem. For centuries philosophers took it for granted that knowledge consists of justified true belief. In 1963 a young philosophy professor named Edmund Gettier published a three-page paper challenging this traditional view. He argued that there are some situations in which we have justified true belief, but which do not count as knowledge. This was dubbed "The Gettier Problem" and discussions of it quickly dominated philosophical accounts of knowledge. Gettier's actual illustrations of the problem are rather complex, but a simpler one makes the same point.
           Suppose that a ball in front of me appears to be red. First, I believe it is red. Second, I'm justified in this belief since that's how the ball appears to me. Third, it's also true that the ball is red. I thus have a justified true belief that the ball is red. However, it turns out that the ball is illuminated by a red light which casts a red tint over it – a fact that I'm unaware of. Although the ball in reality is red, under the light it would appear red to me even if the ball was a different color. Consequently, I can't claim to know that the ball is red even though I have a justified true belief that it is. I was fooled by the effects of the red light, but made a lucky guess anyway. Again, the point of this counterexample is to show that some instances of justified true belief do not count as genuine knowledge. This suggests that the traditional JTB definition of knowledge is seriously flawed.
           What can we do to rescue the JTB account of knowledge from the Gettier problem? A common response is to add a stipulation to the definition of knowledge that would weed out counterexamples like the red ball. Most of the Gettier-type counterexamples involve a case of mistaken identity. In our current example, I mistake the appearance of a red-illuminated ball for an actual red ball. Perhaps, then, we can stipulate that knowledge is justified true belief except in cases of mistaken identity. More precisely, we can add a fourth condition to the definition of knowledge in this way:

I know that the ball is red when,
(1) It is true that the ball is red;
(2) I believe that the ball is red;
(3) I am justified in my belief that the ball is red;
(4) There is no additional fact that would make my belief unjustified (for example, a fact about a red light).

According to the above, my belief about the red ball would not count as knowledge since it wouldn't pass the fourth condition. That is, there is indeed an additional fact regarding the red light that would make my initial belief about the ball unjustified. That additional fact undermines – or defeats – my original justification. We've thus saved the JTB definition of knowledge, although cluttering it a little with a fourth condition. This strategy is called the no-defeater theory (also called the indefeasibility theory). A problem with this strategy, though, is that there are possible counterexamples even to this – that is, situations in which we have undefeated justified true belief that doesn't count as genuine knowledge. This, in fact, is a problem with most proposed solutions to the Gettier problem: if we get creative enough, we will likely find a new counterexample that defies the solution.
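           The resulting four-part definition can be pictured as a simple checklist. The Python sketch below is only an illustration of the structure just described; the field names are invented for the example, and real justification is obviously not a boolean flag.

from dataclasses import dataclass, field

# A toy model of "knowledge = justified true belief + no defeaters".
@dataclass
class Claim:
    statement: str
    is_true: bool            # condition (1): truth
    is_believed: bool        # condition (2): belief
    is_justified: bool       # condition (3): justification
    defeaters: list = field(default_factory=list)  # condition (4): undermining facts

def counts_as_knowledge(claim):
    return (claim.is_true and claim.is_believed
            and claim.is_justified and not claim.defeaters)

# The red-ball case: justified true belief, yet defeated by the hidden red light.
red_ball = Claim("The ball is red", is_true=True, is_believed=True,
                 is_justified=True, defeaters=["a red light is tinting the ball"])
print(counts_as_knowledge(red_ball))  # False: the defeater blocks knowledge

On this toy model, stripping away the defeater would restore the claim to knowledge, which is exactly what the no-defeater strategy intends.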

D. TRUTH, JUSTIFICATION AND RELATIVISM
Truth and justification, we've seen, are two of the key components of knowledge. They are also concepts that need some explanation themselves. Let’s first look at the notion of truth.

           Theories of Truth. The concept of truth has many possible meanings. We talk about having true friends, owning a true work of art, or someone being a true genius. In all of these cases the word "true" means genuine or authentic. In philosophy, though, the notion of truth is restricted to statements or beliefs about the world – such as the statement that "My car is white" or "Paris is the capital of France". While we all have gut feelings about what it means for a statement to be true, philosophers have been particularly keen on arriving at a precise definition of truth. Here's one suggestion from a classic song:

"What is truth?" you ask and insist,
"Correspondence to things that exist?"
The answer, you fool, requires no sleuth:
Whatever I say is the truth.
Want proof of the truth? I say so! So there!
Purveyors of falsehood beware:
I'm sick of your lies, and, truth be told,
I am the truth, behold!

The above account of truth is clearly satirical since no one would seriously grant that the truth of all statements is grounded in the assertions of one individual person. But what are the more serious alternatives for definitions of truth? As usual in philosophy, there's much disagreement about what the correct definition is. We will consider the three leading candidates here.
           The first and most famous definition of truth is the correspondence theory: a statement is true if it corresponds to fact or reality. This is the most commonsensical way of looking at the notion of truth and is how standard dictionaries define the concept. A true statement simply reflects the way things really are. Take the statement "My car is white.” This statement is true if it conforms to how the world actually is, specifically whether my car is in fact painted white. As compelling as the correspondence theory of truth seems, skeptics immediately see one major flaw with it: we don’t have access to the world of facts. In spite of my best efforts to discover the way things really are, I’m at the mercy of my five senses, which, we’ve seen, are unreliable. While my senses tell me that my car is white, the color receptors in my eyes may not be working properly and my car may be a shade of yellow. For that matter, I may be living in a world of hallucinations and don’t even own a car. The sad fact is I can never reach beyond my perceptions and see the world as it really is.
           With trivial issues, such as the truth concerning the color of my car, I may be willing to simply pretend that I have direct access to the world of facts and blindly trust my senses. This may serve my immediate needs perfectly well. It isn’t so easy to pretend, though, when I investigate the truth of more serious statements, such as whether “Bill murdered Charlie.” Even if I have a mountain of evidence that implicates Bill, such as fingerprints and eyewitness testimony, it’s impossible for me to turn back the hands of time and directly access the scene of Charlie’s murder. I only have hints about what the reality is. Similarly, if I’m investigating the truth of the statement “God exists,” I can’t directly access the reality of an infinitely powerful deity, even if God did exist and stood right in front of me. The best I would have is some imperfect evidence that the mysterious being standing before me was indeed God. Thus, the correspondence theory would not permit us to say either that “It is true that Bill murdered Charlie” or “It is true that God exists.”
           A second famous definition of truth is the coherence theory, which aims to address the shortcomings of the correspondence theory. According to the coherence theory, a statement is true if it coheres with a larger set of beliefs. Rather than attempting to match up our statements with the actual world of facts, we instead try to see if our statements mesh with a larger web of beliefs that support them. For example, the statement “my car is white” is true if it coheres with a collection of other beliefs such as “many cars are painted white,” “I perceive that my car is white,” and “other people invariably report that my car is white.” With the coherence theory, we avoid skeptical obstacles such as the unreliability of our senses and the possibility that we are hallucinating. What matters is our web of beliefs, which we all have access to -- in contrast with a hidden world of facts that is blurred by the limits of our sensory perceptions. We also can even investigate statements such as “It is true that Bill murdered Charlie” or “It is true that God exists.” What matters here is whether these statements consistently fit with other beliefs that we have -- beliefs about the pieces of evidence against Bill and beliefs about the evidence regarding a divine being.
           Unfortunately the coherence theory faces serious criticisms, the most important of which is that it is relativistic. That is, it grounds truth in the changeable beliefs of human beings, rather than in an unchanging external reality. According to the coherence theory, the standard for all truth is the larger web of beliefs that people hold – beliefs about white cars, criminal evidence, evidence for God’s existence, and countless other issues. The problem is that belief systems come and go. Take beliefs about criminal evidence as just one example. Many cultures throughout history based criminal convictions on the evidence of supernatural omens: prophetic visions, the flight path of birds, patterns in the guts of sacrificed animals. That was the belief system they relied on. In other cultures the testimony of a single eyewitness is sufficient to prove guilt. In our culture today we have fingerprints, DNA samples and psychological profiles which all contribute to our belief system about criminal guilt. The statement “Bill murdered Charlie” could cohere with some belief systems, but not with others. We typically think about truth as being absolute: either Bill murdered Charlie or he didn’t. If truth hinges on a changeable belief system, though, truth is no longer absolute.
           The problems with the correspondence and coherence theories are so serious that many contemporary philosophers have abandoned both. In fact some philosophers have even abandoned the concept of “truth” as being completely unnecessary. This brings us to our third theory, the deflationary theory of truth: to assert that a statement is true is just to assert the statement itself. Compare these two statements:

  • My car is white.
  • It is true that my car is white.

What is the difference between the two? Nothing of substance. The phrase “it is true that” seems to be just repeating something that is already assumed in the phrase “my car is white.” In that sense, I am being redundant if I use the phrase “it is true that.” At times it may be rhetorically helpful to use the phrase “it is true that” in an effort to convince someone of my belief. Suppose you say to me “I don’t believe that your car is white.” I might respond by saying, “You’re wrong: it’s absolutely true that my car is white”. Again, I’ve not added anything of substance by injecting the notion of truth into my response; I’ve just stood up to you more forcefully. In short, according to the deflationary theory, the quest for a clear conception of truth—such as correspondence or coherence—will not succeed because it is ultimately a quest for something that doesn’t really exist.
           But the deflationary theory also faces problems, one of which is that the notion of truth is built into our normal expectations of what we assert. When I say that “my car is white” you have an expectation that what I’m saying is true. Occasionally I do say something that is false, but when that happens we all recognize that I’m doing something that is incorrect. The normal expectation, then, is that my assertion will be truthful. And this creates a problem for the deflationary theory: by eliminating the notion of truth, it cannot adequately account for our normal expectation of truthfulness.

           Theories of Justification. Of the three components of knowledge, justification is the one that has attracted the most attention among contemporary philosophers. For centuries most philosophers followed a theory of justification called foundationalism. On this view, our justified beliefs are arranged like bricks in a wall, with the lower ones supporting the upper ones. These lowest bricks are called “basic beliefs”, and the ones they support are “non-basic” beliefs. Take this example:

*My car is white (non-basic belief)

This belief rests upon some supporting ground-level basic beliefs, including these:

*I recognize the car in front of me as my car (basic belief)
*I remember what white things look like (basic belief)
*The car in front of me looks white (basic belief)

There are two distinct elements to this foundationalist theory of justification. First, our ground-level basic beliefs are self-evident, or self-justifying, and thus require no further justification. When we have such beliefs, we cannot be mistaken about them, we cannot doubt them, and we cannot be corrected in our beliefs about them. For example, if I am perceiving the color white, then my belief that I am perceiving white is self-evident in this way. Even if I am hallucinating at the moment, my belief that I am perceiving the color white cannot be called into question. The second element of foundationalism is that justification transfers up from my foundational basic beliefs to those non-basic beliefs that rest upon them. Think of it like the mortar between bricks that begins at the very bottom level, locks them solid, and moves upwards to lock the higher bricks into place. For example, if I have the three basic beliefs about my car and whiteness listed above, then I am justified in inferring the non-basic belief that “my car is white.”
           While foundationalism holds a respected place in the history of philosophy, it faces a major problem: it is not clear that there really are any self-evident basic beliefs that form the foundation of other beliefs. Foundationalists themselves have mixed views about what exactly our lowest-level foundational beliefs are. Descartes, for example, argued that there is only one single brick at the foundation of my wall of beliefs, namely, my belief that I exist. Every other belief I have rests on this. Locke, on the other hand, held that our most foundational beliefs are simple perceptions such as blue, round, sweet, smooth, pleasure, motion. These combine together to make more complex ideas. Contemporary philosophers resist both Descartes’ and Locke’s depiction of our most foundational beliefs. Some offer examples such as “I see a rock” (a basic belief about one’s perception), “I ate cornflakes this morning” (a basic belief about one’s memory), or “That person is happy” (a basic belief about another person’s mental state). But even these are questionable since they seem to rely on beliefs or perceptions that are more ground-level. If there really are ground-level foundational beliefs that are self-evident or self-justifying, you’d think that philosophers would have agreed a long time ago about exactly which ones they are. But there is no such agreement.
           An alternative to foundationalism is coherentism: justification is structured like a web where the strength of any given area depends on the strength of the surrounding areas. Thus, my belief that my car is white is justified by a web of related beliefs, such as these:

*I recognize the car in front of me as my car
*I remember what white things look like
*The car in front of me looks white

These, though, are not foundational, but instead depend on another web of beliefs related to them, which includes these:

*I remember purchasing my car
*People seem to agree that I use the term “white” properly
*Nothing is abnormally coloring my vision, such as a pair of sunglasses

Each of these, in turn, rests on an ever-widening web of related beliefs. At no point do we reach a bottom-level foundation to these beliefs; the justification of each belief rests on the support it receives from the surrounding web of beliefs that relates to it. Coherentism is closely associated with the coherence theory of truth. With truth we determine that a proposition is true if it coheres with a larger web of beliefs. With justification, we determine that a belief is justified if it is supported by a larger web of beliefs. Coherentism’s similarities with the coherence theory of truth make it vulnerable to the same fundamental charge of relativism: not everyone’s belief system is the same, so a particular belief might find justification within your larger web of beliefs, but not within mine. Your belief system might justify the belief that “Bill killed Charlie,” that “God exists,” or that “abortion is immoral,” while my belief system might not justify any of these. We’d like to think that justification is a bit more universal and not dependent on the peculiarities of a particular person’s belief system.
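           The structural difference between the two theories can be pictured with a toy support map. The Python sketch below is an illustration only, with made-up beliefs and an arbitrary stipulation of which ones count as “basic”: it checks justification the foundationalist way, by tracing every chain of support down to a basic belief, whereas a coherentist structure would instead permit loops of mutual support within the web.

# Each belief maps to the beliefs that support it; an empty list marks a basic belief.
SUPPORT = {
    "my car is white": ["the car in front of me looks white",
                        "I remember what white things look like"],
    "the car in front of me looks white": ["nothing is abnormally coloring my vision"],
    "I remember what white things look like": [],     # stipulated as basic
    "nothing is abnormally coloring my vision": [],   # stipulated as basic
}

def foundationally_justified(belief, support=SUPPORT):
    """A belief is justified if every chain of support bottoms out in a basic belief."""
    supporters = support.get(belief)
    if supporters is None:
        return False   # no support at all
    if not supporters:
        return True    # basic belief: self-justifying by stipulation
    return all(foundationally_justified(b, support) for b in supporters)

print(foundationally_justified("my car is white"))  # True in this toy web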
           Given the liabilities of both foundationalism and coherentism, many contemporary philosophers hold a third position called reliabilism: justified beliefs are those that are the result of a reliable process, such as a reliable memory process or a reliable perception process. It’s like how we depend on a reliable clock to tell us what time it is. As long as we have confidence in the clock mechanism itself, then that’s all we need in order to trust the time that it tells us. We don’t have to inspect the internal gears of the clock and see how they relate to the movement of the clock’s hands. Similarly, to justify my beliefs, I don’t need to inspect how each belief connects with surrounding beliefs that are beneath them or next to them; I just trust the reliability of my mental process that gives me the belief. If my memory process is on the whole reliable, then I’m justified in my belief that I ate cornflakes this morning for breakfast. If my perceptual process is on the whole reliable, then I’m justified in my belief that my car is white. What matters is the reliability of the larger processes upon which my beliefs rest, not my other beliefs that border them. According to reliabilism, the fault with both foundationalism and coherentism is that they rely too much on introspection: presumably, with our mind’s eye, we can see the strength of our specific beliefs and how they gain support from other beliefs that are connected to them (either like bricks in a wall or strands in a web). But, says the reliabilist, this approach places too much confidence in our ability to internally witness the connections between our specific beliefs. Our standard of justification should not depend on what our mysterious mind’s eye internally perceives, but, instead, upon more external standards and mental processes that we know are reliable through our life experiences. I am justified in believing that I ate cornflakes for breakfast because that’s what I remember, and I trust my memory since it is a reliable process of supplying me with information about the past.

           What’s so Bad about Relativism? Twice so far the issue of relativism has reared its ugly head, and how we assess theories of truth and justification hinges greatly on how we feel about relativism. The relativist position in general is that knowledge is always dependent upon some particular conceptual framework (that is, a web of beliefs), and that framework is not uniquely privileged over rival frameworks. The most famous classical statement of relativism was articulated by the Greek philosopher Protagoras (c. 490–c. 420 BCE), who said that “Man is the measure of all things.” His point was that human beings are the standard of all truths, and it’s a futile task to search for fixed standards of knowledge beyond our various and ever-flexible conceptual frameworks. Knowledge in medieval England depended on the conceptual framework of that place and time. Knowledge for us today depends on our specific conceptual frameworks throughout the world and throughout our wide variety of social environments.
           Our initial reactions to relativism are usually negative. “The truth is the truth,” I might say, “and it shouldn’t make any difference what my individual conceptual framework is. Some conceptual frameworks are simply wrong, and others may be a little closer to the truth.” But is relativism really so bad that it warrants this negative reaction?
           The first step to answering this question is to recognize that there are different types of relativism, some of which may be less sinister than others. The most innocent and universally accepted type is etiquette relativism, the view that correct standards of protocol and good manners depend on one’s culture. When I meet people for the first time, should I bow to them or shake hands? If I make the wrong decision, I might offend that person, rather than befriend them. Clearly, that depends on the social environment that you’re in, and it makes no sense to seek for an absolute standard that applies in all situations. Etiquette by its very nature is relative. There is also little controversy regarding aesthetic relativism, the view that artistic judgments depend on the conceptual framework of the viewer. We commonly feel that there is no absolute right and wrong when it comes to art, and it’s largely a matter of opinion. I might enjoy velvet paintings of dogs playing cards, while that might offend your aesthetic sensibilities. In many cases, perceptual relativism is also no big issue: one’s sensory perceptions depend on the perceiver. Something might appear red to me but green to you. There are people known as “supertasters” who experience flavors with far greater intensity than the average person, so much so that they need to restrict themselves to food that you or I would find bland. How we perceive sensations depends on our physiology, which we readily acknowledge may differ from person to person.
           The types of relativism that we often resist, though, are those connected specifically with the two components of knowledge that we’ve discussed above, namely, truth and justification. Truth relativism is the view that truth depends upon one’s conceptual framework. This amounts to a denial of the correspondence theory of truth and acknowledges our inability to access an objective and independent reality. Justification relativism is the view that what counts as evidence for our beliefs depends upon one’s conceptual framework. This is a denial of foundationalism and an acknowledgement of coherentism. The German philosopher Friedrich Nietzsche (1844-1900) boldly embraced truth and justification relativism, as we see here:

Positivism stops at phenomena and says, “These are only facts and nothing more.” In opposition to this I would say: No, facts are precisely what is lacking, all that exists consists of interpretations. We cannot establish any fact “in itself”: it may even be nonsense to desire to do such a thing. . . .  To the extent to which knowledge has any sense at all, the world is knowable: but it may be interpreted differently, it has not one sense behind it, but hundreds of senses. “Perspectivity.”  [Will to Power, 481]

For Nietzsche, then, there are many perspectives from which the world can be interpreted when we make judgments. Some justification relativists even go so far as to deny the universal nature of so-called laws of logic; even these, they maintain, are grounded in mere social conventions.
           A standard criticism of truth and justification relativism is that it leads to absurd consequences that no rational person would accept. By surrendering to relativism, we abandon any stable notion of reality and place ourselves at the mercy of cultural biases, fanatical social groups, and power-hungry tyrants who are more than happy to twist our conceptual frameworks to their benefit. Everything, then, becomes a matter of customs that are imposed on us, even in matters of science. Scottish philosopher James Beattie (1735–1803) makes this point in a fictional story where he describes a crazy scientist who attempts to put relativism into practice:

[The scientist] was watching a hencoop full of chickens, and feeding them with various kinds of food, in order, as he told me “that they might [give birth to live offspring and] … lay no more eggs,” which seemed to him to be a very bad custom. . . . “I have also,” continued he, “under my care some young children, whom I am teaching to believe that two and two are equal to six, and a whole less than one of its parts; that ingratitude is a virtue, and honesty a vice; that a rose is one of the ugliest, and a toad one of the most beautiful objects in nature.” [James Beattie, “The Castle of Skepticism”]

According to Beattie, if we took the relativist’s position seriously, we’d be forced to accept absurd views like “it is just a matter of custom that chickens lay eggs,” or that “it’s possible that 2+2=6.” Thus, even if we acknowledge a certain level of relativism with etiquette, aesthetics and perception, we need to draw the line when it comes to standards of truth and justification.
           How might the relativist respond to this criticism? One approach is to hold that not all conceptual schemes are on equal footing, and some indeed are better than others. Nietzsche argues that there are competing perspectives of the world, and the winner is the one whose conceptual framework succeeds the best:

It is our needs that interpret the world; our instincts and their impulses for and against. Every instinct is a sort of thirst for power; each has its point of view, which it would gladly impose upon all the other instincts as their norm. [Will to Power, 481]

Nietzsche presents the conflict as a kind of power struggle among competing conceptual frameworks, where the winner takes all. A more gentle approach, though, would be to hold that the winner is the one that best assists us in our life’s activities and allows us to thrive. If people today held that “it is just a matter of custom that chickens lay eggs,” or “it is possible that 2+2=6”, their underlying conceptual framework would not enable them to succeed very well in the world. For that matter, such a conceptual framework would not have allowed people to thrive very well in medieval England or any other pre-modern period of human history. While there may be an underlying objective reality that molds our conceptual frameworks in successful ways, that possibility is irrelevant since, according to the relativist, we could never know such an objective reality even if it existed. What we do know is how our conceptual frameworks enable us to succeed in the world, and that’s the real litmus test for truth and justification.
           Thus, with many ordinary life beliefs, relativist theories of truth and justification work reasonably well, without leading us down the path to absurd consequences. What, though, of more scientific theories? In medieval times people thought mental illness was caused by demon possession; today we think that it is caused by physiological brain disorders. The medieval theory worked well in its own day; does that mean that it was true back then – supported by its own web of beliefs – but not now? In scientific matters, people feel uncomfortable with relativism and instead believe that our knowledge of physics, chemistry and biology has a fixed and objective reference point. We will next examine the issue of scientific knowledge in more detail.

E. SCIENTIFIC KNOWLEDGE
Every child knows the tale of Isaac Newton’s inspiration for his views on gravity: while sitting beneath a tree he saw an apple fall, which prompted him to wonder why things always fell downward rather than sideways or upward. In time Newton formulated his theory of universal gravitation, which described the attraction between massive bodies. Less known is the rival theory of intelligent falling, devised by the satirical newspaper The Onion. According to this view, things fall downward “because a higher intelligence, 'God' if you will, is pushing them down.” As proof for their view they cite a passage from the Old Testament book of Job: “But mankind is born to trouble, as surely as sparks fly upwards.” Accordingly, a defender of intelligent falling theory remarks, “If gravity is pulling everything down, why do the sparks fly upwards with great surety? This clearly indicates that a conscious intelligence governs all falling.” The theory of intelligent falling is obviously not a real theory, but rather a parody of the religiously-based intelligent design theory. Nevertheless, we can ask the basic theoretical question, why is universal gravitation a better account of natural events than intelligent falling? The job of science is to explain how the natural world works, to give us knowledge of the underlying mechanics of natural phenomena. That knowledge does not come easy, though, and it seems that science has to wrench nature’s secrets out of her. As scientists put forward rival theories, how do we determine which are closer to the truth?

           Confirming Theories. The starting point for discussion is to distinguish between three related scientific concepts: a hypothesis, a theory, and a law. The weakest of these is the scientific hypothesis, which is any proposed explanation of a natural event. It is a provisional notion whose worth requires evaluation. Newton’s account of gravity began as a humble hypothesis, and even the theory of intelligent falling qualifies as a hypothesis. While hypotheses may be inspired by natural observations, they don’t need to be, and virtually anything goes at this level. One step up from this is a scientific theory, which is a well-confirmed hypothesis. It is not a mere guess, as a hypothesis may be, but a contention supported by experimental evidence. When Newton proposed his account of gravity, he accompanied it with a wealth of observational evidence, which quickly elevated it to the status of a theory. This, though, is where the theories of gravity and intelligent falling part company: there’s no scientific evidence in support of intelligent falling, and thus it fails as a theory. Lastly, there is a scientific law, which is a theory that has a great amount of evidence in its support. Indeed, laws are confirmed by such a strong history of evidence that they cannot be overturned by any single piece of evidence to the contrary; rather, we assume that that single piece of contrary evidence is flawed. As compelling as Newton’s theory of gravity was, it took well over 100 years before it was confirmed to the point that it gained status as a law.
           We see that confirmation is the critical component in establishing a scientific claim: it is what elevates a hypothesis to a theory, and a theory to a law. There are several different ways of confirming scientific notions. The first factor in the confirmation process is simplicity; that is, when evaluating two rival theories, the simpler theory is the one more likely to be true. This doesn’t guarantee that it’s true, but, all things being equal, it’s the one that we should prefer. Compare, for example, universal gravity and intelligent falling. Universal gravity involves a single gravitational force that is inherent to all physical bodies. Intelligent falling, on the other hand, involves countless divine actions that guide individual bodies downwards. We should thus prefer universal gravity as the correct explanation since it is not burdened by such an abundance of distinct divine actions.
           A second component of confirmation is unification, that is, the ability to explain a wide range of phenomena. The rule of thumb here is that the more information explained by a theory, the better. Science is an immense interrelated system of facts, laws, and theories, and scientific contentions gain extra weight when they contribute to the scheme of unification. It is unification that gave an initial boost to Newton’s theory of universal gravitation. Prior to Newton, astronomers assumed that planets and other celestial objects followed their own unique set of laws that were distinct from those on earth. However, Newton showed how the motions of the planets were governed by precisely the same rules of gravity and motion that physical bodies on earth obey.
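           The unification claim can be made concrete with a short worked example. The Python sketch below applies Newton’s law of universal gravitation, F = G·m1·m2/r², to both an apple at the earth’s surface and the earth–sun pair; the numerical constants are rounded textbook values supplied only for the illustration.

G = 6.674e-11  # gravitational constant, in N·m²/kg²

def gravitational_force(m1, m2, r):
    """Attractive force in newtons between masses m1, m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r ** 2

earth_mass, earth_radius = 5.97e24, 6.371e6        # kg, m
apple_mass = 0.1                                   # kg
sun_mass, earth_sun_distance = 1.989e30, 1.496e11  # kg, m

# The same formula covers the apple and the planet.
print(gravitational_force(earth_mass, apple_mass, earth_radius))      # roughly 1 N
print(gravitational_force(sun_mass, earth_mass, earth_sun_distance))  # roughly 3.5e22 N

One equation spanning falling apples and orbiting planets is precisely the kind of unification that strengthened Newton’s theory.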
           A third factor in scientific confirmation is successful prediction. Good scientific theories should not simply organize collections of facts, but should be able to reach out and predict new phenomena. This is what bumped Newton’s theory of gravity up to the status of a law. Astronomers in the early 19th century noticed some strange movements in the orbital pattern of the planet Uranus, and they hypothesized that the irregularities were caused by the gravitational tugging of an undiscovered eighth planet. Applying Newton’s formulas of gravity and motion, they pinpointed a location in space where the large body must be. Then, pointing their telescopes at the spot, they discovered the mystery planet, which was subsequently named “Neptune.” Scientific predictions like these don’t happen too often, but when they do they do much to confirm a theory. Einstein's theory of relativity, for example, was confirmed with the prediction of bent star light during a solar eclipse.
           A fourth and final factor in scientific confirmation is falsifiability: it must be theoretically possible for a scientific claim to be shown false by an observation or a physical experiment. This doesn’t mean that the scientific claim is actually false, but only that it is capable of being disproved. The criterion of falsifiability is important for distinguishing between genuine scientific claims that rest on tests and experimentation, and pseudo-scientific claims that are completely disconnected from testing. Take, for example, the views of Heaven’s Gate believers that we examined at the outset of this chapter. According to them, aliens come down to earth in the form of teachers, but once they become human they are stripped of their previous memories and knowledge. Thus, we can’t test the claims of these teachers about their previous alien lives, since they can’t remember anything about them. “So tell me a little about your home planet,” I might ask one alleged alien. He then replies, “Sorry, I can’t remember anything about it, but I’m still an alien.” To make things worse, Heaven’s Gate believers claim that the aliens purposefully imposed this knowledge restriction on themselves since "too much knowledge too soon could potentially be an interference and liability to their plan." In short, their claims about the aliens are completely resistant to refutation. Fortune tellers are another good example of this, as the philosopher Karl Popper explains here:

[B]y making their interpretations and prophesies sufficiently vague they were able to explain away anything that might have been a refutation of the theory had the theory and the prophesies been more precise. In order to escape falsification they destroyed the testability of their theory. It is a typical soothsayer’s trick to predict things so vaguely that the predictions can hardly fail: that they become irrefutable. [Conjectures and Refutations]

Legitimate scientific theories, by contrast, always hold open the possibility of being refuted by new data or a new experiment. By putting forth their theories, scientists take a risk that what they’re proposing might be disproved by the facts. Even universal gravitation is vulnerable to refutation if some future experiments produce compelling evidence against it. Thus, a good theory is always potentially falsifiable, although it has not been actually falsified.

           Scientific Revolutions. Scientists continually push the boundaries of knowledge, and on a daily basis we see new theories about the spread of diseases, healthy eating habits, or the environment. We also read about new studies that challenge previously accepted scientific views. For example, in contrast to earlier claims by scientists, the accepted wisdom now is that vitamin C does not help prevent colds, and fiber in our diets does not help prevent colon cancer. Science thus moves ahead in baby steps, occasionally taking a step backwards to correct an erroneous theory. All the while, though, the larger body of scientific knowledge seems secure and well established. But then sometimes a new scientific theory comes along that is so radical and far reaching in its consequences that it forces scientists to throw out many of their underlying assumptions about the world and set things on a dramatically new course. These are scientific revolutions. The most dramatic example is the shift from the earth-centered view of the heavens, championed by the ancient Greek astronomer Ptolemy, to a sun-centered system which was defended by Copernicus in the 1500s. This “Copernican Revolution,” as it is now called, did more than simply swap the earth with the sun in the model of celestial objects. It also had the effect of overturning medieval theories about matter and motion, and ultimately replacing them with Newton’s laws of motion. Other important revolutions were sparked by Darwin’s account of evolution, Einstein’s account of general relativity, and the Big Bang theory.
           The most probing philosophical analysis of scientific revolutions was offered by American historian of science Thomas Kuhn (1922-1996). Kuhn argued that scientific revolutions are the result of changing paradigms – that is, the web of scientific beliefs held in common by members of the scientific community. When major paradigms are overthrown and replaced with new ones, such as replacing the Ptolemaic with the Copernican paradigm, we have a scientific revolution. What triggers the paradigm shift, according to Kuhn, is that scientists run into inconsistencies with the old paradigm that cannot easily be explained away. Scientific theories will always face some irregularities – such as with an experiment that seems to contradict an accepted theory. If the theory is well established, a few irregularities here or there won’t matter; in fact, scientists often chalk these up to an acceptable level of error that’s built into the enterprise of scientific investigation. However, sometimes irregularities pile up to such a degree that they throw science into a condition of crisis. Seeking resolution to the crisis, scientists then replace an old scientific paradigm with a new one that better resolves the irregularities.
           Kuhn argues that scientific revolutions have much in common with political revolutions: rebel groups think that the ruling institution is inadequate, which they then overthrow and replace with a new one:

Political revolutions are inaugurated by a growing sense, often restricted to a segment of the political community, that existing institutions have ceased adequately to meet the problems posed by an environment that they have in part created. In much the same way, scientific revolutions are inaugurated by a growing sense, again often restricted to a narrow subdivision of the scientific community, that an existing paradigm has ceased to function adequately in the exploration of an aspect of nature to which that paradigm itself had previously led the way. In both political and scientific development the sense of malfunction that can lead to crisis is prerequisite to revolution.

Kuhn warns that the transition from the old to new paradigm is not a smooth one. Many scientists will hold fast to the old paradigm, and the new paradigm needs to attract an ever-growing number of supporters before it can finally overthrow the old one. The upshot of Kuhn’s theory is that science is not cumulative: our present theories are not built upon the secure foundation of past theories. Instead, scientific knowledge shifts according to our current paradigms, and, once again, the issue of relativism arises. These paradigms are webs of belief that are held by the scientific community at the time. Truths in science, then, are relative to these shifting webs of belief.
           Kuhn’s account of scientific revolutions has its critics, particularly among those who believe that science, when done properly, is grounded in objective truth, and not in shifting belief paradigms of the scientific community. One criticism is that Kuhn has overdramatized the sweeping nature of most scientific revolutions. Sure, the Copernican revolution was indeed a major one that resulted in overthrowing old scientific models that were rooted in superstitious conceptions of the world and sloppy experimentation. In fact, the older models were so ingrained with religious mythology and metaphysics that it’s overly generous even to call them “scientific.” Since the time of Copernicus, however, we’ve not seen any scientific revolutions that “overthrow” entire paradigms. Rather, new mini-revolutions seek to encompass much of the theory and data of previous scientific investigations while at the same time setting a new direction for future investigation. For example, Newton’s laws of motion were not overthrown by Einstein’s theory of relativity; instead scientists try to incorporate both into a larger scientific vision of reality that unifies all of nature’s forces. Other mini-revolutions, such as Darwinian evolution, quickly put an end to rival theories of biological development, such as Lamarck’s, that had little or no supporting evidence to begin with. Thus, contrary to Kuhn’s position, when science is done properly our knowledge of the natural world is cemented into a fixed and objective reference point.
           This chapter began with a discussion of radical skepticism, and while that may not be the most cheery way of investigating the nature of knowledge, in many ways it sets the right tone. No matter what we say to clarify the characteristics of knowledge, warning flags immediately go up. All of our sources of knowledge have serious limitations. The very definition of knowledge can be picked apart by an endless variety of Gettier-type counter examples. Theories of truth and justification seem to be either naively optimistic, or they lean towards relativism. While scientific knowledge attempts to move progressively towards unchanging truth, it is always cradled by a potentially changing web of beliefs held by scientists. Achieving genuine knowledge is in some ways like playing a video game where the winning score is infinitely high: no matter how close we move towards it, it remains at a distance. If the human effort to gain knowledge was merely a leisure activity like playing an impossible game, we’d certainly give it up for a more attainable diversion. But the pursuit of knowledge is a matter of human survival that we can’t casually set aside. Philosophical discussions of knowledge are an important reality check as we routinely gather facts and construct theories about how the world operates. The hope of acquiring a fixed body of knowledge is very seductive, and the problems of knowledge that we’ve covered in this chapter help us resist that temptation.

For Review
1. What are the three main arguments for radical skepticism?
2. What are the four main criticisms of radical skepticism?
3. What are the four main sources of experiential knowledge?
4. What are the key features of non-experiential knowledge?
5. What are the key features of rationalism and empiricism?
6. Describe the "JTB" definition of knowledge.
7. What is the point of the Gettier problem?
8. Name and describe the different theories of truth.
9. Name and describe the different theories of justification.
10. Name and describe the different types of relativism.
11. What are the different ways in which scientific theories and laws gain confirmation?
12. What is Kuhn’s view of scientific revolutions?

For Analysis
1. Write a dialogue between a radical skeptic who thinks that he’s living in an artificial reality, and a non-skeptic who thinks we have knowledge of the common sense world that we perceive.
2. Explain the different features of rationalism and empiricism, and try to defend one position over the other.
3. Explain the foundationalist theory of justification, and try to defend it against one of the criticisms.
4. Write a dialogue between a relativist and a non-relativist responding to the argument against relativism from absurd consequences.
5. Explain the notion of falsification, and then describe whether a religious view like creationism, intelligent design theory, or intelligent falling theory can be falsified.
6. Explain the criticism of Kuhn at the end of the chapter and try to defend Kuhn against it.

REFERENCES AND FURTHER READING

Works Cited in Order of Appearance.
Information about the Heaven’s Gate cult can be found at http://www.rickross.com/groups/heavensgate.html
Sextus Empiricus, Outlines of Pyrrhonism (c. 200 CE). A recent translation is by J. Annas and J. Barnes (Cambridge: Cambridge University Press, 1994).
Hume, David, Treatise of Human Nature (1739-1740). The standard edition is by David Fate Norton, Mary J. Norton (Oxford: Clarendon Press, 2000).
Descartes, René, Meditations (1641). A recent translation by J. Cottingham is in The Philosophical Writings of Descartes (Cambridge: Cambridge University Press, 1984).
Locke, John, Essay Concerning Human Understanding (1689). The standard edition is by P.H. Nidditch (Oxford: Oxford University Press, 1975).
Kant, Immanuel, Critique of Pure Reason (1781). A recent translation of this is by P. Guyer and A.W. Wood (Cambridge: Cambridge University Press, 1998).
Gettier, Edmund, “Is Justified True Belief Knowledge?” Analysis (1963), Vol. 23, pp. 121-123.
Nietzsche, Friedrich, Will to Power (1906). A standard translation of this is by Walter Kaufmann, (New York: Viking, 1967).
Beattie, James, “The Castle of Skepticism” (1767). Manuscript transcribed in James Fieser, ed., Early Responses to Hume’s Life and Reputation (Bristol: Thoemmes Press, 2004).
"Evangelical Scientists Refute Gravity with New 'Intelligent Falling' Theory," The Onion, August 17, 2005, Issue 41.33.
Popper, Karl, Conjectures and Refutations (London: Routledge, 1963).
Kuhn, Thomas, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962), Ch. 9.

Further Reading.
Feldman, Richard, Epistemology, (Englewood Cliffs, NJ: Prentice Hall, 2002).
Lemos, Noah, An Introduction to the Theory of Knowledge (Cambridge, UK: Cambridge University Press, 2007).
Moser, Paul K., The Oxford Handbook of Epistemology (Oxford, UK: Oxford University Press, 2002).
Sosa, Ernest, and Kim, Jaegwon, Epistemology: An Anthology (Malden, MA: Blackwell, 2000).