DOI: 10.1016/0272-6963(89)90033-8


JOURNAL OF OPERATIONS MANAGEMENT

Vol. 8, No. 4, October 1989

Alternative Research Paradigms


in Operations

JACK R. MEREDITH*
AMITABH RATURI*
KWASI AMOAKO-GYAMPAH*
BONNIE KAPLAN**

EXECUTIVE SUMMARY

Due to the heritage and history of operations management, its research methodologies have been
confined mainly to that of quantitative modeling and, on occasion, statistical analysis. The field has been
changing dramatically in recent years. Firms now face numerous worldwide competitive challenges,
many of which require major improvements in the operations function. Yet, the research methodologies in
operations have largely remained stagnant. The paradigm on which these methodologies are based, while
useful, limits the kinds of questions researchers can address.
This paper presents a review and critique of the research in operations, itemizing the shortcomings
identified by researchers in the field. These researchers suggest a new research agenda with an integrative
view of operations’ role in organizations, a wider application of alternative research methodologies,
greater emphasis on benefit to the operations manager, cross-disciplinary research with other functional
areas, a heavier emphasis on sociotechnical analysis over the entire production system, and empirical
field studies. Some of the alternative research methodologies mentioned include longitudinal studies,
field experiments, action research, and field studies.
Following a description of the nature of research, three stages in the research cycle are identified:
description, explanation, and testing. Although research can deal with any stage in this cycle, the
majority of attention currently seems to focus on the explanation stage. The paper then discusses
historical trends in the philosophy of science, starting with positivism, expanding into empiricism, and
then leading to post-positivism. The impacts of each of these trends on research in operations (which
remains largely in the positivist mode) are described. Discussion of the importance of a plurality of
research methods concludes the section.
A framework for research paradigms is then developed based on two key dimensions of research
methodologies: the rational versus existential structure of the research process and the natural versus
artificial basis for the information used in the research. These dimensions are then further explored in
terms of thirteen characteristic measures. Next, research methodologies commonly used in other fields as
well as operations are described in reference to this framework. Methodologies include those traditional
to operations such as normative and descriptive modeling, simulation, surveys, case and field studies as
well as those more common to other fields such as action research, historical analysis, expert panels,
scenarios, interviewing, introspection, and hermeneutics. Examples from operations or allied fields are
given to illustrate the methodologies.
Past research publications in operations are plotted on the framework to see the limitations of our
current paradigms relative to the richness of other fields. We find that operations methodologies tend to

Manuscript received November 15, 1988; accepted December 22, 1989, after two revisions.
*University of Cincinnati, Cincinnati, OH 45221-0130
**American University, Washington, D.C. 20016

Journal of Operations Management 297


lie on the more rational end of the framework while spanning the natural/artificial dimension, though the
majority of research is at the artificial pole.
Last, recommendations are made for applying the framework and paradigms to research issues in
operations management. The topics of quality management and technology implementation are used as
examples to illustrate how a wide variety of methodologies might be employed to research a much
broader range of issues than has currently been researched.

INTRODUCTION

The field of operations, or operations management (OM), faces multiple new research
challenges in the areas of service operations, productivity, quality, technology and many other
areas. Never before has the need for pragmatic research, directly useful to the operations
manager, been so important to the field, and to industry and society. One perspective on this is
offered by Galliers and Land (1987) in reference to information systems, but just as applicable to
operations: “[It is] an applied discipline, not a pure science. It follows, therefore, that if the
fruits of our research fail to be applicable in the real world, then our endeavors are relegated to
the point of being irrelevant.” Yet, OM researchers still tend to employ a limited range of
research paradigms to address the new challenges in operations.
A part of the reason is historical. Originally, the research orientation of operations was
entirely pragmatic: What procedures should be used in what situations? The presentation in
early textbooks on production (e.g., Mitchell (1939)) focused on the organization and
transformation process through a combination of descriptive and prescriptive discussion
(Andrew and Johnson (1982)). The unit of analysis was the production manager and the
definition of production management centered around what the production manager did. For
example, in explaining why plant location was left out of his book, Mayer (1962, p. v) stated:

“. . . there is a reason to believe that the production manager will play a relatively minor role in
these areas of decision making.”
Then in the 1950s the Ford (Gordon and Howell (1959)) and Carnegie (Pierson (1959))
Foundations’ reports severely criticized business colleges for their lack of rigor or a scientific
approach to business education and research (Laidlaw (1988)). Operations research (OR) was
moving from war applications into the business and industrial arena and with it, the opportunity
for business schools to gain academic respectability. OR quickly developed into a favorite tool
(e.g., Holt, Modigliani, Muth, and Simon (1960)) for conducting research in operations (Buffa
(1965, p. v.), Nistal (1979-80); Andrew and Johnson (1982)). It also allowed, for the first time,
the development of a systematic body of knowledge in operations based on a consistent and
rigorous framework.
Although useful to numerous areas of business, the predominant application of OR was to the
area of operations (Buffa (1968, p. 4)). Marketing, finance, and organizational behavior also
used the new tool but found it somewhat limited in its applicability to their problems (Hudson
and Ozanne (1988)). Ackoff (1979, p. 94), for example, remarked that the OR approach “came
to be identified with the use of mathematical models and algorithms rather than the ability to
formulate management problems, solve them, and implement and maintain their solutions in
turbulent environments.” Marketing and organizational behavior, in particular, developed a
number of other paradigms drawn heavily from the fields of psychology and sociology to address
their research problems.
But OM researchers were having great success with the new algorithmic modeling tools and
found no need to explore other paradigms (Andrew and Johnson (1982)). The new area of
operations research/management science (OR/MS) was steadily replacing the function of
operations in academia and operations was becoming seen as an applied portion of the OR/MS
field.
field.
Simultaneously, the field of operations lost considerable interest as its sister functions grew in
size and importance both in industry and in academia. As Galbraith (1958) put it some years
earlier, the United States had “solved the problem of production.” The attention and resources
of firms were thus directed to marketing and finance instead of “toward improving
manufacturing capabilities” (Hayes and Wheelwright (1984, p. 20)). In response, operations academics
began to feel an “identity crisis” (Andrew and Johnson (1982)) reflecting the nature of those
teaching in the field as well as the content of the courses. Raiszadeh and Ettkin (1988), in their
survey of operations curricula, found a preponderance of “diversity” in the faculty teaching
operations and, therefore, also in the course content.
There are two major milestones in the discipline of operations. The first, the virtually
overnight reorientation of the field to an analytical approach based on quantitative modeling,
was the most revolutionary. Another reorientation response to the Ford and Carnegie reports was
the systems analysis conception of the field, though this approach never grew the way the
quantitative approach did. Two textbooks offering this latter perspective of the field were Starr
(1964) and Greene (1965).
The second major milestone occurred in the late 1970s and early 1980s as the Operations
Management Association (OMA) and the OMA-United Kingdom were formed and their journals
began publication. At this point, researchers in OM took stock of the field and found its name
being inappropriately applied, its original faculty largely retired and replaced with
quantitatively-oriented faculty, and the business world seriously in need of its attention (Grayson
(1973)). Moreover, OM researchers addressing the problems of production and productivity
through the now-standard quantitative modeling paradigm were more and more simply talking
among themselves (Buffa (1968, p. 5)). Managers looked at this “research” and found that they
could neither understand the solutions being proposed nor the problems OM researchers thought
they were addressing (Andrew and Johnson (1982)).
Along similar lines, Anderson, Chervany, and Narasimhan (1979) noted “. . . managers felt
that the existing implementation research had little practical value for them in their day-to-day
responsibilities . . . .” This attitude is confirmed by Buxey (1984, p. 529): “. . . it is debatable
whether the practice of production management is much influenced by what appears in leading
production research periodicals . . . it does not capture the essential flavour of what the manager
has to do and is therefore unlikely to be of use to him . . .” And as McKay, Safayeni, and
Buzacott (1988, p. 87) concluded about job-shop scheduling: “The problem definition is so far
removed from job-shop reality that perhaps a different name for the research should be
considered.”
Our point is not that OR/MS methodology is inappropriate for research in operations (e.g., see
Orden (1988)), but that it should not be the only methodology. We would reiterate the sentiments
of Buffa (1980) who noted that “. . . MS/OR methodology does not define the OM field nor
point the way of the future.” Yet, research in operations has still not changed significantly
(Amoako-Gyampah and Meredith (1989)). What is needed at this point is a broader
understanding by OM researchers, as well as journal editors and referees, of the variety and acceptability of
alternative research paradigms that other fields use and have accepted as rigorous. This paper is
intended as a first step in this direction.



A REVIEW OF RESEARCH IN OPERATIONS

As described by the following critics, past research in operations has too frequently exhibited
a variety of shortcomings:
1. Narrow instead of broad scope
• Focused on problems with a narrow scope (Buffa (1980))
• Largely micro-oriented (Chase (1980))
• Concerned a subsystem rather than a whole system (Buffa (1980))
• Used only a single-criterion quantitative model (Buffa (1980))
2. Technique instead of knowledge orientation
• Dominated by the application of techniques (Chase (1980))
• Assumed to be simply applied operations research (Voss (1984, p. 29))
3. Abstract instead of reality perspective
• Used approaches largely confined to the laboratory and based on model formulation and
manipulation (Chase (1980))
• Emphasized equipment rather than people (Chase (1980))
• Rarely involved field studies (Chase (1980))
• Even in the few studies using real-world settings, the research approaches were
characterized by one-day visits, interviews, and the use of questionnaires (T. Hill (1987)).
In sum, it appears that OM research has failed to be integrative, is less sophisticated in its
research methodologies than the other functional fields of business, and is, by and large, not very
useful to operations managers and practitioners. Operations is an applied field and its research
should be usable, in some fashion, in practice. It is not, like management science or
organizational behavior, a tool area but a functional discipline. As Voss (1984, pp. 29, 30) notes:
“. . . the production/operations management person is concerned with procedure and process
. . . as well as . . . linking operating decisions and policies with company policies and the
decisions, technologies, and procedures they should adopt to maximize company
competitiveness.” It is worth noting that this extends far beyond the normative perspective of management
science; it includes exploration and interpretation of procedures and processes which may not be
embedded in a rational, single-objective, or value-free context. The decision environment of the
real-world operations manager is not usually driven just by quantitative elements susceptible to
mathematical modeling. Where do politics, law, ethics, and the environment fit in?
From the very first issue of the Journal of Operations Management, authors have called for
research that examines the unstructured real-world problems of operations practitioners,
considers multiple evaluation criteria, recognizes both the inter- and intrarelationships of
organizational units, and incorporates both parametric and nonparametric statistical tests (Buffa
(1980), Chase (1980)). But such prescriptions are rarely accompanied by an overview of the
systematic and methodological changes in research techniques that will support these forays into
new research settings. We hope to provide this overview here.
Several researchers have addressed the content problems in their proposals for a new OM
research agenda. For example:
• Miller and Graham (1981) called for an integrative view of operations’ role in organizations
under the broad categories of operations policy, control, productivity, and services. Manufac-
turing strategy, the role of the customer in service delivery systems, and the effects of new
technologies on operations policies were mentioned as key issues in the agenda.
• As an extension to Miller’s agenda, Groff and Clark (1981) called for a wider array of research
methodologies to broaden the theoretical foundations of operations.
• Hax (1981) suggested a redirection of OM research so as to be of more benefit to the operations
practitioner.
• Focusing on services, Mabert (1982) suggested studying the interrelationships between formal
planning systems and the databases to support these systems.
• Sullivan (1982) advocated cross-disciplinary research between operations, marketing, and
organizational behavior for a more holistic approach to real-world problems.
• More recently, Chase and Prentis (1987) recommended more interdisciplinary research and the
application of sociotechnical analysis to the total production system.
• In a review of recent operations dissertations, Hill et al. (1987) identified the need for new
research methodologies to address more managerial and macro-level issues.
• In an update of the progress on the original Miller et al. (1981) agenda, Amoako-Gyampah and
Meredith (1989) reviewed journal and proceedings publications between 1982 and 1987. They
concluded that the dominant strategies used for research in the operations discipline continue
to be model building and laboratory simulation.
• Finally, Swamidass (1988a) reviewed the difficulties operations has been having with its
existing research paradigms and pointed out the crucial need in operations for empirical field
studies, which he identifies as a “new frontier.”
Except for Swamidass (1988a), none of the previous work addressed the causes of the
industry-academia gap. Individually, these researchers recognized the need to bridge the gap and
offered a wide range of prescriptions. These papers clearly drew attention to the problem and
provided incentive to work on real problems, yet it did not happen. Since the prescriptions were
not embraced widely, it would seem that the problem lay with the inadequate fit between the
problems being addressed and the research paradigm used by OM researchers. The current
paradigm is typically prescriptive, deterministic, non-contextual, and exhibits a preponderance
of “rational” constructs.
A major contributing factor to this narrowness of current OM research is a lack of knowledge
about appropriate alternative research paradigms. Reisman (1988) has defined a number of
generic research strategies for the management and social sciences. Also, some of the authors
mentioned above outlined a few alternative methodologies for the issues they recommended
investigating. These alternative methodologies, as well as a number of others, will be discussed
in more detail in a later section.
Next, we explore the concept of research as a foundation for the paradigms we describe later.
We start with some definitions of research and then examine how knowledge is accumulated
through a repetitive cycle of research. A background section, briefly describing the history of
research and post-positivist thought with examples from the field of operations, is appended
following the references. This section is for those seeking a broader foundation in the
philosophical development of research methodology.

THE NATURE OF RESEARCH

The Research Cycle


Arguments frequently arise about whether the best research is that which proposes knowledge
or that which validates knowledge. Borrowing from Emory’s (1985) three research tasks of
description, prediction, and explanation, we suggest that all research investigations involve a
continuous, repetitive cycle of description, explanation, and testing (through prediction), as
illustrated in Figure 1. Thus, proposing knowledge (explanation) and validating knowledge
(testing) simply are two stages in the ongoing cycle of research. An individual research study
may involve only one of the stages in the cycle at a time.
We consider each of the three stages in detail. It should be noted that, in practice, the stages
rarely are as clear and distinct as we portray them here. As with many stage models, the
boundaries between the stages are purposely drawn sharply here for analytical reasons. In
actuality, a researcher may intertwine activities from different stages, step through the stages in
a different order, or backtrack as the research progresses. We initiate our discussion with
“description” since this must precede explanation and testing.

FIGURE 1
THE ONGOING CYCLE OF RESEARCH STAGES

Description. Descriptive research seeks to report and chronicle elements of situations and
events. As noted previously, the predominant activity of early research in operations was
descriptive. The approaches and techniques available for capturing this information depend on
the field and on the nature of the situations and events of interest. The result is a well-
documented characterization of the subject of interest. This characterization then may be used
for generating or testing theories, frameworks, and concepts regarding the situation. For
example, Meredith (1984) describes the complications that arose in the simple process of
attempting to purchase a copying machine for a university department and Heller (1951)
describes investment decisions in general.
A finer, more detailed level of description about a particular facet of the subject may require
what is sometimes known as exploratory research. Here, a particular aspect is investigated more
fully, based on the understanding that the preliminary descriptive research gave. This
understanding may have illuminated areas of confusion, unearthed contradictions in previous
concepts or “facts” about the situation, or given further meaning to areas of interest or to
existing knowledge. The result of the exploratory research is more detailed description that may
lead to further insight and understanding. Miles (1983) argues that such qualitative data are
attractive for many reasons: “They lend themselves to the production of serendipitous findings
and the adumbration of unforeseen theoretical leaps.”
A number of areas in operations need realistic descriptions. Examples include MRP and shop
floor control systems, new manufacturing technologies, operational problems relating to new
technologies and systems, what operations managers’ jobs consist of, and even the organization-
al decision-making process concerning the adoption of new operational imperatives such as JIT,
TQC, FMS, CIM.
Explanation. On the basis of, or in the process of producing, a description, some initial
concepts about the situation may be postulated. Perhaps some action-reaction or cause-effect
relationships may be inferred. Or possibly a more complex set of reactions or relationships may
be constructed to explain the observed behavior or events. If a complex, relatively closed set of
relationships appears to be operating, a “framework” may be constructed to explain the
dynamics of the situation. A framework offers a conceptual frame of reference to help
researchers design specific research studies, interpret existing research, and generate testable
hypotheses. One example in operations is Saladin’s (1984) conceptual model of the scope of the
field.
At a more integrative level and with further testing, the framework or sets of frameworks may
be developed into a theory describing the principles operating in the situation. There are many
definitions of “theory” such as “a set of general principles that explain observed facts,” but
more importantly, Dubin (1969) has identified a number of characteristics typical of all theories:
a theory must include the interrelationships between its variables and/or attributes as well as
some criteria that define its boundaries. The theory must also improve our understanding of the
non-unique phenomenon or help us make predictions about it. Finally, the theory must be
interesting (Davis (1971)), that is, non-trivial.
Note that, as Scriven (1962) argues, a prediction is not the same as an explanation; the former
can be inferred from correlation but the latter has to address the underlying causal structure of
the theory. How can research activity in a field be conducted without the researcher having an
understanding of what it means to “explain”? Studies complaining about the lack of “usable”
research in operations lead to a fundamental issue: not what research topics need investigation,
but what research perspective we should hold.
Scriven concludes that explanation is “a topically unified communication, the content of
which imparts understanding of some scientific phenomenon” (1962, p. 224). A description that
does not explain, although research, is incomplete. A prediction based on constructs that cannot
be explained, such as a crystal ball, is magic.
Hospers (1956) presents three common interpretations of the explanation of a phenomenon.
These may be summarized as:
(1) Stating the phenomenon’s goal or purpose. Research in operations strategy often gets
trapped here. Apart from the limited number of case studies, few researchers have devoted time
and effort to disseminate any knowledge about the resultant overall impact when a firm develops
and implements an operations strategy. Most arguments here are purposive: A firm should
develop an operations strategy because its purpose is sacrosanct. Similar arguments are used for
a number of other initiatives like inventory reduction.



(2) Showing the phenomenon to be an instance of a familiar phenomenon. This interpretation
of explanation shows the subject phenomenon to be an instance of a more familiar phenomenon.
Essentially:

“. . . the essence of an explanation consists in reducing a situation to elements with which we are so
familiar that we accept them as a matter of course, so that our curiosity rests.” (Bridgman (1968,
p. 37))
An example here is the common reinterpretation of the just-in-time philosophy as simply
“reducing waste” (Schonberger (1987)). Eliminating waste through increased flexibility (cross-
training), responsiveness (quick setups and changeovers), and responsibility (zero defect
quality) are the cornerstones of this approach. However, as Klein (1989) points out, this
reductionist spiral ignores our conventional notions about workers. The entire philosophy hinges
primarily on “more and more strictures on workers’ time and action” (Klein (1989, p. 60)). In
the process of reducing the phenomenon to a more familiar form (waste reduction), we wind up
making certain assumptions about autonomy and worker cooperation that may not conform to
reality.
(3) Bringing the phenomenon under a law. This last interpretation is that an explanation
makes a phenomenon more familiar or removes mystery from it by bringing it under an existing
law or making it a new law. The law itself may be unfamiliar, but its consistency with other laws
gives us comfort and removes our concern about it. Hence, to explain an event is to “simply
bring it under a law, and to explain a law is to bring it under another law.” (Hospers (1956, p. 98))
This interpretation reflects the current axiomatic, prescriptive conceptualization of research
activity in operations where managerial problems are addressed primarily with mathematical
models. Thus, a complex phenomenon is simplified and then “solved” (or explained) by
treating it with an algorithmic model. Given the nature of the real-world questions asked of
operations, this paradigm severely limits the scope of OM research activity, the usability of the
results, and the ability of researchers to understand operations phenomena better. For example,
many operations studies are concerned with cost minimization or output maximization:
production scheduling, capacity planning, process design, and so on. But in an organizational
setting, cutting costs may negatively affect operations managers by reducing their budgets,
personnel, flexibility, or power.
Testing. The final stage in the cyclic process of research (Figure 1) is testing the concepts to
determine which are correct, which are false, and how to modify or expand them. The process
commonly involves a prediction based on the explanation constructed in the previous stage, and
then observation to determine if the prediction was correct. Alternatively, a prediction may be
postulated and then checked against observations already made or included in the description.
This testing stage often is claimed to be “true research,” perhaps because of the rigor the
commonly used tools of statistics or experimentation seemingly lend to this activity. The
dominant mode of testing in operations is simulation. While simulation provides for uncertainty
in outcomes, very few simulations are actually grounded in reality.
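To make this criticism concrete, consider a minimal single-machine queue simulation of the kind common in OM testing studies. This sketch is ours, not an example from the paper; each modeling choice (Poisson arrivals, exponential service times, strict first-come-first-served discipline) is precisely the sort of convenient assumption that is rarely validated against a real shop floor.

```python
import random

def mean_flow_time(n_jobs: int, arrival_rate: float,
                   service_rate: float, seed: int = 1) -> float:
    """Simulate a single-machine, first-come-first-served queue with
    Poisson arrivals and exponential service times, returning the mean
    flow time (completion time minus arrival time) over n_jobs jobs."""
    rng = random.Random(seed)
    arrival = 0.0        # arrival time of the current job
    machine_free = 0.0   # time at which the machine next becomes idle
    total_flow = 0.0
    for _ in range(n_jobs):
        arrival += rng.expovariate(arrival_rate)   # next Poisson arrival
        start = max(arrival, machine_free)         # wait if machine is busy
        machine_free = start + rng.expovariate(service_rate)
        total_flow += machine_free - arrival
    return total_flow / n_jobs

# M/M/1 theory predicts a mean flow time of 1/(mu - lambda) = 5.0 for
# these rates; a long run should land in that neighborhood.
print(mean_flow_time(20_000, arrival_rate=0.8, service_rate=1.0))
```

The point is not that the code is wrong; within its assumptions it is exact. The point is that everything of managerial interest (operator behavior, machine breakdowns, expediting, politics) lives outside those assumptions.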
Following testing of the concepts, more description follows concerning other or more detailed
aspects of the situation, more exploratory investigation is conducted, and new concepts (or
modifications) are developed to be tested in turn. Thus, the cycle of learning and research
continues.



A FRAMEWORK FOR RESEARCH PARADIGMS

A research paradigm is a set of methods that all exhibit the same pattern or element in
common. However, there are a number of dimensions on which research activity may be
classified. For example, it may be classified according to the technique used to gather the data
(model, literature, survey, observation, interview, experiment, laboratory, etc.), the methods
used to analyze the data (statistics, protocol analysis, taxonomy), the immediate purpose of the
research (exploration, description, evaluation, hypothesis generation, hypothesis testing), the
nature of the units of analysis (individuals, groups, processes), the duration/time points of data
collection, and so forth.
Though limited, there have been other frameworks offered for classifying research paradigms.
Beged-Dov and Klein (1970), for example, classify research in management science in terms of
formalism or empiricism, Reisman (1988) categorizes the range of management and social
science research strategies in terms of a Venn diagram-type of framework, and Chase (1980)
offers a matrix framework for classifying the research conducted in operations. A more generic
and comprehensive framework, similar in that sense to the framework constructed by Mitroff
and Mason (1984) for business policy, is presented by Paulin, Coffey, and Spaulding (1982) for
the field of entrepreneurship. Here we present a generic framework for a classification of
paradigms based on the framework generated by Mitroff and Mason (1984).
In discussing the underlying metaphysical assumptions inherent in business policy research,
Mitroff and Mason specify two key dimensions that shape the philosophical basis for research
activity. We have redefined their two dimensions, illustrated in Figure 2, to better fit operations.
The first is the “rational/existential” dimension, which concerns the nature of truth and whether
it is purely logical and independent of man or whether it can only be defined relative to
individual experience. The second dimension is “natural/artificial” and concerns the source
and kind of information used in the research.

The Rational/Existential Dimension

This dimension relates to the epistemological structure of the research process itself. It
involves the benefits and limitations of the philosophical approach taken to generating
knowledge; that is, the viewpoint of the researcher. At one extreme is rationalism, which uses a
formal structure and pure logic as the ultimate measure of truth. At the other extreme is
existentialism, the stance that knowledge is acquired through the human process of interacting
with the environment. Thus, in existentialism an individual’s unique capabilities, in concert with
the environment, are regarded as the basis of knowledge. The former conforms to the traditional
deductive approach to research; the latter to an inductive approach.
Our view of the rational/existential dimension includes four generic perspectives that
structure the research by different degrees of formalism. These four perspectives, in order of
degree of formal structure, are axiomatic, logical positivist/empiricist, interpretive, and critical
theory. We explain these briefly, using examples from operations.
The axiomatic perspective represents the theorem-proof world of research. A high degree of
knowledge is assumed, a priori, about the goals and the socio-technical structure of the
organization. The key organizing concepts are the presence of formal procedures (e.g., lot
sizing), consensus, consistency of goals (such as cost minimization), and a work place ideology
characterized by scientific management principles. Operations research (OR) studies tend to fall
in this category, such as the many variations of the economic order quantity model. Additionally,

Journal of Operations Management 305


FIGURE 2
A GENERIC RESEARCH FRAMEWORK

[Figure: a two-dimensional grid. The vertical axis runs from RATIONAL (top) to
EXISTENTIAL (bottom); the horizontal axis runs from natural to artificial through
DIRECT OBSERVATION, PEOPLE’S PERCEPTIONS, and ARTIFICIAL RECONSTRUCTION
of object reality.]

Hounshell (1988, p. 61) provides a historical perspective on some basic axioms of manufacturing
management.
The logical positivist/empiricist perspective assumes that the phenomenon under study can be
isolated from the context in which it occurs and that facts or observations are independent of the
laws and theories used to explain them. This is the basis for most survey research. For example,
Anderson, Schroeder, White, and Tupy (1980) use this perspective to derive conclusions about
critical MRP implementation factors. Isolated from the context, one concludes that top
management commitment is essential for implementation success. The question that naturally
follows is what leads to lack of commitment. Are there competing demands for management
commitment? If so, then the phenomenon is much more complex than we have assumed. If not,
the results are tautological. “Good” management is essential for implementation success, but
“good” is defined by a successful implementation.

306 Vol. 8, No. 4


The interpretive perspective includes the context of the phenomenon as part of the object of
study. Interpretive researchers study people rather than objects, with a focus on meanings and
interpretations rather than behavior. The purpose is to understand how others construe,
conceptualize, and understand events and concepts. In contrast to the implicit absolutism of
positivism, interpretivism is relativistic because facts are not considered independent of the
theory or the observer. Interpretive researchers explain by placing behaviors in a broader context
in which the behaviors make sense.
An excellent example of an interpretive study is provided in the context of just-in-time (JIT)
manufacturing by Klein (1989, p. 66). In a review of the JIT movement she concludes that
greater employee responsibility does not mean greater discretion over time and work. She
exemplifies it with the attitude of the typical operations manager: “. . . it seemed obvious to him
that increased participation was precisely what the workers wanted . . .” Observing that with
JIT, tasks are more tightly coupled than ever before, her conclusion is hard-hitting: “They ought
not to promise workers autonomy when they mean them to deliver an unprecedented degree of
cooperation.”
Critical theory is a recent influential contribution to post-positivist thought, primarily through
the work of Jürgen Habermas (1979a, 1979b). The critical theory perspective is an attempt to
synthesize the positivist and interpretive perspectives and get past their dichotomy by placing
knowledge in a broader context of its contribution to social evolution. The positivist and
interpretivist perspectives are considered dialectically interrelated. Critical theorists transcend
the contradiction between the way people behave in practice and the way they understand
themselves to be acting. An example here is the evolution of quality practice in organizations
where a symbiotic thrust has emerged between the positivist traditions related to cost of quality
and the highly interpretive findings related to quality of work life (Alexander (1988) and Raturi
and McCutcheon (1989)).
Measures of the dimension. A number of measures, as illustrated in Figure 2, can be placed
on this dimension that help clarify the continuum. At the rational pole, the research process:
• tends to be deductive,
• is more formally structured,
• entails a high degree of objectivity,
• is methodologically prescribed,
• restricts environmental interaction lest the findings be biased by the researcher’s orientation,
• requires a priori assumptions concerning primary constructs,
• establishes the truth of its findings by coherence with the truth of other statements or “laws.”
This contrasts with the existential pole where the process is more inductive, less structured,
typically subjective, and requires more interaction with the environment. The process of
knowledge creation requires “detective work” and then a “creative leap” (Mintzberg (1983)).
Further, researchers at this pole are concerned more about the correspondence of their findings
to the real world than their coherence with existing theories or laws.

The Natural/Artificial Dimension

This second dimension concerns the source and kind of information used in the research. At
the natural end of the continuum is empiricism (deriving explanation from concrete, objective
data), while at the artificial end is subjectivism (deriving explanation from interpretation and
artificial reconstruction of reality). The progression from natural to artificial on this dimension

Journal of Operations Management 307


parallels the historical periods presented in the Appendix.
The researcher’s perception of reality is molded by the mechanisms used to study the
phenomenon. In a very broad sense, these mechanisms may be classified into three categories:
object reality, people’s perceptions of object reality, and artificial reconstruction of object reality.
Object reality refers to direct observation by the researcher of the phenomenon. It assumes
that there is an objective reality and that human senses can detect it. It corresponds to the pure
empiricism extremum exemplified by Locke. As with the other categories, the observation may
be subjected to formal structured analysis (or axiomatization, as in econometric studies) or to
interpretation using critical theory.
People’s perceptions of object reality relate to research conducted “through somebody else’s
eyes,” as in surveys, interviews, or many laboratory experiments. Thus, the primary concern is
with the perception or abstract representation of the reality of individuals exposed to the
phenomenon.
These are second source methods, but may be the only efficient or effective way to obtain
information about the phenomenon of interest. A number of constructs in operations, like the
effect of layout on the productivity of an operation or the success of a new piece of equipment,
are difficult to study through direct observation. The opportunity may not be there, or the results
may be clouded by the Hawthorne effect. In such situations, an assessment of people’s
perceptions may yield significant insights into the underlying explanation of the phenomenon.
Descriptive information about the phenomenon, as well as people’s constructs/models about
what relationships are operative, can be ascertained through these second source methods.
An artificial reconstruction of object reality is attempted in almost all the modeling and
systems analytic efforts in operations. These approaches recast the object reality, as originally
determined from one of the above two categories (usually the researcher’s own belief concerning
the object reality), into another form that is more appropriate for testing and experimentation,
such as analytical models, computer simulations, or information constructs.
Measures of the dimension. As with the rational/existential dimension, there are a number of
measures that describe this dimension, as shown in Figure 2. At the artificial pole, the research:
• uses highly abstracted and simplified models such as linear representations;
• tends to yield conclusions with high reliability and internal consistency;
• is often characterized by a significant separation of the phenomenon from the researcher, as
  with an abstract representation;
• is highly controlled, since the researcher uses a priori constructs or models to specify the
  information to be collected;
• is highly efficient, since aberrations are simply classified as “noise” with no causal source;
• is dated, since the specification of the constructs or models takes most of the researcher’s
  time and pushes the natural phenomenon further into the past.
This contrasts with the natural pole where the research process is more directly concerned
with the real phenomenon, less concerned with reliability and more with externally generaliz-
able validity, closer to reality, less controllable, less efficient, and more current. The critical
issue here is the balance between reliability and external validity. Like IQ tests, survey
instruments provide very reliable data but their validity in actually measuring constructs is
suspect. Clearly, the most valid information is that obtained by direct involvement with the
phenomenon.
This section has proposed a new research framework and illustrated a broad range of
paradigms available to researchers. Current research in operations has tended to lie in the
rational-artificial quadrant and thereby has limited not only the phenomena that can be
researched effectively but also the utility of the findings. In the next section, we describe a
number of research methodologies that fall across the quadrants of the framework and discuss
their potential application to research in operations.

PARADIGMS OF RESEARCH METHODS

Figure 3 presents the two dimensions we established in the last section, with the meth-
odologies available to researchers placed in their appropriate cell(s). Note that some meth-
odologies logically could fall into a number of cells, or relate to only one of the dimensions. Also,
some methodologies can fairly easily be used for any of the three stages of research (description,
explanation, or testing), and these are occasionally pointed out in passing. For example, case
studies can describe, explain, or disprove a hypothesis.
The methods listed in this figure are described briefly below. Methodological references are

FIGURE 3
A FRAMEWORK FOR RESEARCH METHODS

[Figure: a matrix of research methods. The columns run from NATURAL to ARTIFICIAL:
direct observation of object reality; people’s perceptions of object reality; and artificial
reconstruction of object reality. The rows run from RATIONAL to EXISTENTIAL:

AXIOMATIC
• Artificial reconstruction: reason/logic/theorems; normative modeling; descriptive modeling

LOGICAL POSITIVIST/EMPIRICIST
• Direct observation: field studies; field experiments
• People’s perceptions: structured interviewing; survey research
• Artificial reconstruction: prototyping; physical modeling; laboratory experimentation;
  simulation

INTERPRETIVE
• Direct observation: action research; case studies
• People’s perceptions: historical analysis; Delphi; intensive interviewing; expert panels;
  futures scenarios
• Artificial reconstruction: conceptual modeling; hermeneutics

CRITICAL THEORY
• People’s perceptions: introspective reflection]
included with the description of the methodology. Some general references are Galliers (1985),
Jenkins (1985), and Strauss (1987). Examples are also included for those methods that are less
familiar to OM researchers.

Direct Observation

Field studies. In this approach, a carefully selected set of field sites is used to evaluate one or
more factors. The factors are controlled through the judicious selection of sites rather than by
attempting to manipulate the factors within the sites at the time of observation or analysis.
This approach is similar to the classic experimental design. Analysis of the field data is
expected to reveal the significance, or lack thereof, of the factors. This approach is also excellent
for evaluating hypotheses when the phenomenon cannot be manipulated; that is, when neither
the independent nor the intervening variables can be controlled by the researcher. However, in
the multi-site field study, as opposed to the single-site case study described later, the relevant
variables typically are preselected.
Two examples of this approach are Graham and Rosenthal (1986) and Meredith (1987).
Graham and Rosenthal selected plants that were implementing a given technology and controlled
for the business sectors of commercial, military, and hybrid. Meredith selected commercial
plants that were using an advanced technology and controlled for stage in the technology’s life
cycle, thereby simulating a longitudinal research design.
Another common way of employing the field study approach, though perhaps a misnomer, is
simply to use the sites as a multiple sample of case studies without particularly controlling for
any factor. This is common in early exploratory research where it is hoped that
some factors of interest, a natural typology or taxonomy, or a confirmation or refutation of
existing theory will emerge from the multiple cases. This approach is relatively common in
operations (e.g., see Chase, Northcraft, and Wolf (1984); Goodman and Garber (1988);
Leonard-Barton (1988); or Swamidass (1988b)).
Field experiments. In this situation, the field site variables, at least the important ones, are
under the control of the researcher. The independent variables are manipulated while the
intervening variables are controlled to determine the resulting dependent variables. Various
methods of data collection may be used.
This method has not had extensive use in operations because of the difficulty of controlled
experimentation in the operations of real-world organizations. On a limited basis, however, this
approach may be acceptable to the firm’s management, particularly if they themselves are unsure
of how to manage a process or system. A limited example of this approach was employed in
Freiman and Saxberg (1989) where one set of sites received the treatment (quality circles) and
another set of control sites did not.
Action research. With this method, we get into the more interpretive approaches to research.
This approach has been publicized in the operations literature recently because it requires the
researcher to become involved with the phenomenon under study. In this method, the researcher
attempts to influence the situation in a positive direction while collecting data and observing the
dependent variables. It differs from field experimentation in that a complete factorial design is
not attempted nor even desired. The advantage of this method, particularly for operations, is the
immediacy of the results and their relevance to the organization’s situation. For further
information, see Rapoport (1970), Antill (1985), or Argyris et al. (1985).
In one example, Taylor, Gustavson, and Carter (1986) describe their research efforts in an
organization that is attempting to apply the sociotechnical systems approach to the design
activities of engineers. The research process, obstacles, and resulting decisions are described in
detail so the reader can continuously compare research theory (e.g., freezing, boundary
spanning) with the activities occurring in the firm. In another example, Ruwe and Skinner
(1988) describe their research activities concerning focus in a plant headed for disaster and then
detail how events unfolded.
Case studies. This method is used to investigate a specific phenomenon through an in-depth,
limited-scope study. We would include here those methods such as ethnography (Agar (1986))
and ethnomethodology, as used by anthropology and some other social sciences. Typically, the
breadth is restricted to a single site which is studied in detail, possibly over an extended duration
of time. Neither the independent nor intervening variables are controlled, but various outcomes
and processes are measured extensively and systematically, commonly by using multiple
sources of data. Benbasat et al. (1987) and Bonoma (1985) give more detail on the case study
method in other functional applications.
The attractions of the case study are that operations can be studied in their natural settings and
theories generated directly from the data. In addition, how and why questions can be included.
And most important, the case study method is useful in the early phases of research
(description, concept development) where there may be no prior hypotheses or previous work for
guidance (Donham (1922)). The researcher may not even know what the dependent variables
are. However, case research need not even be limited to these situations; the case method has
been used to study the behavior of dependent variables, to provide counter-examples to prior
hypotheses, to investigate established areas where contradictions have arisen, and even to allow
analysis without variables. Benbasat et al. (1987) distinguish between case study research and
practitioner applications by noting that the latter informally detail the author’s experiences in a
project and typically conclude with a set of “do’s and don’ts.”
An example of a single case study is Liker, Roitman, and Roskies (1987), where the reactions
of employees to an organization’s attempt to make a concurrent social and technological
change are analyzed and contrasted with existing theory.

People’s Perceptions

At this point we begin to consider those methodologies that rely on determining people’s
perceptions of object reality. These first two fall in the logical positivist/empiricist cell because
of their high rationality.
Structured interviewing. This method contrasts with field and case studies in that observation
is limited to the interview process and transcripts. The main reason for personal interviewing is
to control the situation and responses, thereby aiding uniformity in analysis. The results may
then be systematically analyzed through non-quantitative means or subjected to intensive
statistical analysis to identify factors, clusters, and other such relationships in a statistically
significant way. In structured interviewing, a fixed format is followed for the interview and the
details of every answer are carefully noted as the interview proceeds. All questions are the same
so that the typically constrained answers (check marks, values on a given scale) can be
compared across interviews, situations, plants, and so forth.
Some examples of this method are the interviews conducted by London, Stevenson and
Holmes (1987) on issues relating to manufacturing strategy, those conducted by Miller and
Toulouse (1986) with CEOs relating their personality to their firm’s strategy and structure, and
those conducted by Bourgeois and Eisenhardt (1988) of the top managers and decision makers of
firms competing in a high-velocity competitive environment. This latter study was particularly
well-designed in that the structured interviews were only one of the multiple methods used in
triangulating to ascertain the strategic decision processes unique to this industry.

Surveying. Like structured interviewing, this method allows for statistical analysis. It is more
time efficient than interviewing, particularly at a distance because, once properly designed, the
survey can be sent to a large number of people with little extra trouble. Stratified samples can
also be predetermined to assess factors of particular interest. For these reasons, this method is
extremely popular among academics. Its disadvantage is that only a fraction of the surveys may
be returned and the “answers” may be of little value. Space can be left for open-ended
comments but there is no way to alter the questions on the survey to accommodate these
comments after the fact. Also, interesting responses cannot be easily followed up. For more
details, see Babbie (1973) or Rosenberg (1968).
Two critical issues of concern with surveys are the reliability and validity of the test
instrument. In most survey research, the former tends to be emphasized over the latter, probably
from concerns of replicability and consistency in the positivist tradition. However, the two are
not the same (cf. Emory (1985, pp. 94-98)): high reliability may well be associated with zero
validity. It is interesting to note that, in contrast to many of our sister fields in business,
operations has virtually no standard, accepted instruments that have been validated for use in
research studies.
Some typical examples of surveys are Ferdows, Miller, Nakane, and Vollmann’s (1987)
worldwide “manufacturing futures survey,” Mansfield’s (1988) comparison of innovations in
Japan and in the U.S., and White, Anderson, Schroeder, and Tupy’s (1982) study of the MRP
implementation process. Sometimes surveys are simply sent to as many members of the target
group as possible to obtain a data base large enough to yield results that are statistically
significant, particularly when the data are culled for various factors, such as was done with
White et al.
In comparison, Mansfield’s survey was designed to obtain approximately equal numbers of
Japanese and U.S. firms across a predetermined set of industries and was therefore smaller but
more focused. The decision here is based primarily on how exploratory the study is or how
thoroughly the topic has already been researched and described. Recent transnational research
studies (e.g., Whybark and Rao (1988)) seem to indicate that it is difficult to identify laws or
principles that are valid across cultures.
Historical/archival analysis. With this approach, we begin to move into the more interpretive
methods. A note on subjectivity in research is warranted here. Interpretivism and subjectivity
allow researchers to take their particular insights into account. Objectivity is valuable for
prediction, but since the goal of research is also understanding, subjectivity has its place here too
(Scriven (1972)).
The historical/archival approach examines historical documents or less formally recorded
data to evaluate factors, make comparisons, note apparent contradictions, see similarities, make
inferences, gain insight, and generally analyze a situation from a particular perspective,
frequently over some period of time. No manipulation of the variables is possible and the only
control the researcher can exercise is that of selecting and culling for particular evidence or
factors and then interpreting it. A common use of this approach is the analysis of written
communications in a firm or the analysis of utilization, cost, or productivity data.
Two excellent examples of this process are the analysis of the “American System of
Manufacturing” by Abernathy and Corcoran (1983) and the multicentury analysis of the
firearms firm of Beretta by Jaikumar (1988). Both of these studies analyze the development of
manufacturing over a long history, identify clear patterns within that history, detail the lessons to
be drawn, and then come to conclusions about the direction for the future and the need for
change. A similar study over a shorter time span is offered by Skeddle (1980) of the float glass
innovation process, concluding with lessons regarding the capital investment process.
Delphi. This technique is a type of expert panel (described in more detail further below) in
which explicitly defined methods are used to obtain and consolidate expert opinion. The
participants need not be near the object reality but even more important, they need not be near
each other nor even respond to the Delphi administrator at the same time. Typically, only one
issue is explored and participants report a score on a measure of interest. They also give reasons
for the scores reported and the score statistics and reasons are fed back to the participants for a
second round. This process may be repeated up to four times and then the panel results are
documented. For more details, see Meredith and Mantel (1989, pp. 575, 597-598).
The Delphi approach has been used in operations primarily for predicting alternative futures,
or even for regular forecasts. In terms of alternative futures, Benson, Hill, and Hoffmann (1982)
used Delphi in a study of probable manufacturing practices in the future. An insightful example
of the use of the Delphi method for forecasting in an actual firm situation is described in Basu
and Schroeder (1977).
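The round-by-round mechanics of Delphi can be sketched in code. The following is an illustrative simulation only: feeding back the median and rough quartiles follows common Delphi practice, but the assumption that panelists move a fixed fraction of the way toward the reported median is invented for the sketch, not part of the method itself.

```python
import statistics

def delphi_round_summary(scores):
    """Statistics fed back to the panel after a round."""
    s = sorted(scores)
    n = len(s)
    return {
        "median": statistics.median(s),
        "q1": s[n // 4],           # rough lower quartile for the feedback report
        "q3": s[(3 * n) // 4],     # rough upper quartile
    }

def run_delphi(initial_scores, rounds=3, pull=0.5):
    """Toy model of successive rounds: after seeing the group median,
    each panelist moves a fraction `pull` of the way toward it
    (a stylized behavioral assumption)."""
    scores = list(initial_scores)
    history = [delphi_round_summary(scores)]
    for _ in range(rounds - 1):
        m = history[-1]["median"]
        scores = [x + pull * (m - x) for x in scores]
        history.append(delphi_round_summary(scores))
    return history

history = run_delphi([2, 4, 5, 5, 6, 9], rounds=3)
# under this model the interquartile spread narrows as opinions converge
```

The narrowing spread across rounds is the signal the Delphi administrator looks for before documenting the panel result.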
Intensive (also “elite” or “unstructured”) interviewing. Here, people are interviewed using
open-ended questions. As issues or points of interest to the researcher arise, these are followed
up on the spot or in later interviews to give further insight to the researcher. This approach is
particularly good for the descriptive and exploratory phases of research. It has the advantage that
the issues are framed by the participants and the researcher may not have even been aware of
them. It can also be used for testing hypotheses. Of course, every interview may be different and
there might be little that can be compared between them. For more information, consult Murphy
(1980), Dexter (1970), or Schatzman and Strauss (1973).
A good example of this approach is contained in Nutt (1986) where 91 executives were
interviewed to determine the tactics most commonly employed in an implementation situation in
their firm. A taxonomy of tactics was derived and the chance for success of each of the tactics in
various situations was estimated.
Expert panels. As with Delphi, this approach also can be conducted from a distance, though
both in-person and mail methods are normally used. Here, a set of experts is polled for their
opinions, beliefs, or experiences concerning a specific situation. Commonalities and differences
among the experts are noted and statistical analyses of their responses may be conducted.
Again, this method typically is most useful in the exploratory and concept development stages of
research. Panels often are used by practitioner magazines to explore particularly difficult
managerial or technical issues, such as implementing new technologies, establishing standards,
developing criteria, changing employee attitudes, or generating research agendas.
An example of an in-person panel is detailed in Schroeder, Scudder, and Elm (1989) where 65
manufacturing managers were brought together, divided into seven nominal groups, and asked to
brainstorm answers to questions regarding manufacturing innovation.
Futures research/scenarios. What is involved here is postulating different future situations
and evaluating the results by means of a conceptual, pictorial, or abstract model. This approach
has become quite popular in some areas such as forecasting and long-range planning where
different possible world scenarios are postulated (a regional war, say, or an embargo on critical
resources) and the effect is then evaluated. If the effect appears reasonable, then the model’s
validity is better established. An example exploring the utility of this technique for forecasting,
and the problems associated with it, is given in Wright and Ayton (1986).
Introspective reflection. This is the most existential of the methods involving people’s
perceptions of reality. It falls in the critical theory paradigm of Figure 3 because it allows the
integration of both the positivist and interpretivist approaches to attain a higher level of
understanding. In this approach, the researcher reflects on his or her own experience to either
describe, explore, formulate concepts about, or evaluate some situation of interest. That is, the
researcher is analyzing his or her own impressions rather than someone else’s.
This approach is common in operations, particularly in practitioner applications. Because of
the difficulty of controlling for the possible bias of the investigator, the method is held in low
esteem by researchers. But properly used, it can be highly productive, especially when used in
conjunction with action research or some other real-world involvement by the researcher. Two
examples here are Nutt (1983) and Meredith and Green (1988). Nutt deduces a range of project
planning implementation approaches for each of sixteen potential combinations of planning
environments. Meredith and Green reflect on their experiences to identify ten non-intuitive
recommendations for managing the introduction of advanced technologies into a firm.

Artificial

At this point we move to the methods that are based primarily on artificial reconstructions of
the object reality. The first three tend to be most rational and thus are classified in the axiomatic
cell.
Reason/logical deduction/theorem proving. The approaches in this category all involve
rigorous, logical analysis that can be followed and replicated by other researchers. They differ
from simple introspective reflection because of both their formal rigor and their external, rather
than perceptual, focus. That is, introspective reflection is an attempt to evaluate one’s own
perceptions of a phenomenon whereas with these methods, a model of the object reality is
constructed. This model can be composed of a set of reasoned or logical statements or
logico-mathematical theorems or formulas. This method is frequently used to construct models that are
embodied in software for expert and decision support systems and in mathematical models of
operational systems. Examples are so common as to be unnecessary (see further discussion in
next section).
Normative analytical modeling. This approach is conducted at a relatively macro level with
simple, closed-form mathematical representations. The model is used to produce a prescriptive
result, typically through iteratively applying the analytical equations until some desirable value,
usually of one dependent variable, is achieved. Examples include linear programming and
inventory models (guaranteed optima) or the CRAFT layout algorithm (no guaranteed
optimum). In some cases, multiple dependent variables may be considered, as with goal
programming and scoring models.
Again, examples of these methods are common (see next section). But as Ijiri (1975, p. 28)
notes: “Goal assumptions in normative models or goals advocated in policy decisions are often
stated purely on the basis of one’s conviction and preference, rather than on the basis of inductive
study of the existing system.”
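The normative style can be illustrated with the classic economic order quantity model cited earlier, which prescribes the lot size minimizing the sum of annual ordering and holding costs. This is a minimal sketch; the numerical parameters are invented for illustration.

```python
import math

def eoq(annual_demand, order_cost, unit_holding_cost):
    """Economic order quantity: Q* = sqrt(2DS/H)."""
    return math.sqrt(2 * annual_demand * order_cost / unit_holding_cost)

def annual_cost(q, annual_demand, order_cost, unit_holding_cost):
    """Annual ordering cost plus average holding cost at lot size q."""
    return (annual_demand / q) * order_cost + (q / 2) * unit_holding_cost

# Invented parameters: 1200 units/year demand, $50 per order, $6/unit/year holding
q_star = eoq(annual_demand=1200, order_cost=50, unit_holding_cost=6)
```

Any lot size other than q_star yields a higher annual cost, which is what makes the model normative: it prescribes a single best value rather than describing observed behavior.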
Descriptive analytical modeling. This method also is conducted at a macro level with
relatively simple, closed-form mathematical representations. However, unlike normative model-
ing, this method is used to simply describe, by mathematical emulation, the actual workings of a
real-world system counterpart. For example, rather than representing a complex organizational
system by a linear program for the purpose of determining the system’s maximum productivity, a
discriminant function might be constructed instead. The many quantitative models used in
operations (queuing theory, location analysis, inventory theory, and so on) are representative
of this method and examples are unnecessary.
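A descriptive model, by contrast, merely reports how a system behaves rather than prescribing a best value. The standard steady-state formulas for a single-server (M/M/1) queue illustrate the style; the arrival and service rates below are invented for illustration.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state measures of a single-server (M/M/1) queue:
    utilization rho, expected number in system L, expected time in system W."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate
    return {
        "utilization": rho,
        "L": rho / (1 - rho),                    # expected number in system
        "W": 1 / (service_rate - arrival_rate),  # expected time in system
    }

# Invented parameters: 8 arrivals/hour served at 10/hour
metrics = mm1_metrics(arrival_rate=8, service_rate=10)
```

Note that nothing here is optimized; the model simply emulates the workings of its real-world counterpart, and its outputs can be checked for internal consistency (e.g., L equals the arrival rate times W, per Little’s law).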
Prototyping. The next four methods (prototyping, physical modeling, laboratory experimentation,
and simulation) employ the logical positivist/empiricist tradition rather than the axiomatic
approach. Prototyping and physical modeling generally take the same physical form as the final
product, whereas descriptive and normative modeling, as mentioned earlier, employ abstract
(mathematical) forms. Also, the former are used to test the representation of an item, whereas
the latter are typically descriptions used for explanation or prediction.
Prototyping involves the construction of a working model of a system. This model serves as a
pilot or exemplar that includes the majority of, or some selected subset of, the attributes of the
system. Alternatively, it can be a full model but tested in a restricted environment. The
prototype is used for testing and evaluating the system being constructed. Although it is an
artificial model, it is close to becoming, and may become, a part of the real system.
Prototyping commonly is used in operations when potential new systems are being tested in
an organization, such as a new MRP, purchasing, or incentive system. Research is conducted
with the prototype to evaluate its feasibility, benefits, limitations, and so on. An example of the
use of a prototype is when Schlumberger initiated its conversion to a cellular manufacturing
approach and evaluated the benefits and problems of a prototype cell (Stoner, Tice, and Ashton
(1989)).
Physical modeling. With physical modeling, the model is a physical representation of the real
setting such as is done by industrial engineers when evaluating a new layout for an organization.
This is a common approach for office remodeling, warehouse evaluation, and refinery checkout.
A two- or three-dimensional model of the proposed system is built to determine or evaluate
relationships, interferences, sizes, and other such characteristics of the proposed system. The
model is also used to conduct tests and do research on the system. An example of such a process
using a plastic miniature of a flexible manufacturing system to conduct research on work
scheduling rules is contained in Choi and Malstrom (1988).
Laboratory experimentation. In this approach, the intervening variables are carefully
controlled while the independent variables are manipulated systematically to determine their
effects on the dependent variables. The research is conducted so as to hold constant all factors
that might affect the results except the factors under study. The timing and nature of the
experiment are under the control of the researcher. This approach is the paradigmatic research
method of the physical sciences and has been used extensively with human subjects in many of
the social sciences as well. However, with realistic operations problems, it is difficult to conduct
experiments, especially if they involve the organization itself, large systems, or actual managers
in situ rather than students.
The Hawthorne experiments are a good example of this approach, as well as of the problems
with it. A more recent example is Lawrence, Edmundson, and O’Connor’s (1986) experiment to
ascertain the accuracy of combined judgmental and statistical forecasts.
Mathematical simulation. A special type of analytical modeling, this method is a common
one in operations. It includes both a conceptual model of what is happening, expressed through equations, and an element of reality through the values set for the parameters in the equations. On
occasion, parameter values are hypothesized and evaluated in the model rather than being
derived from the real world data, which reduces the fit of the model to the actual phenomenon
and increases the risk of irrelevance (reduces external validity).
The simulation can be either stochastic (e.g., “Monte Carlo”) or deterministic (e.g.,
“industrial dynamics”). In the stochastic case, complex interrelationships can be modeled and
the distribution of parameter values iteratively tested through multiple runs of the model to
result in a distribution of the dependent variables. Variations in policies then can be evaluated by
changing the equations of the model, and the simulation rerun to determine the effect. Such
models frequently have been used in operations for the evaluation of queuing systems, inventory systems, and many other organizational systems. Examples are widely available.
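The mechanics described above — a stochastic model run many times to produce a distribution of the dependent variable, with policy variations evaluated by changing the model and rerunning — can be sketched in code. The inventory policy, demand distribution, lead time, and all parameter values below are illustrative assumptions for this sketch, not drawn from any study cited here.

```python
import random
import statistics

def simulate_service_level(reorder_point, order_qty, days=365, seed=None):
    """One run of a simple reorder-point inventory policy under stochastic
    demand; returns the fraction of demand filled from stock. The policy,
    demand distribution, and parameter values are illustrative only."""
    rng = random.Random(seed)
    on_hand, lead_left = 50, 0          # starting stock; no order outstanding
    filled = demanded = 0
    for _ in range(days):
        if lead_left > 0:               # an order is in transit
            lead_left -= 1
            if lead_left == 0:
                on_hand += order_qty    # replenishment arrives
        demand = round(rng.expovariate(1 / 5))  # mean demand ~5 units/day
        demanded += demand
        filled += min(demand, on_hand)
        on_hand = max(on_hand - demand, 0)
        if on_hand <= reorder_point and lead_left == 0:
            lead_left = 7               # place an order; 7-day lead time
    return filled / demanded if demanded else 1.0

def monte_carlo(reorder_point, order_qty, runs=200):
    """Multiple runs yield a distribution of the dependent variable."""
    levels = [simulate_service_level(reorder_point, order_qty, seed=i)
              for i in range(runs)]
    return statistics.mean(levels), statistics.stdev(levels)

for s in (20, 40):                      # compare two reorder-point policies
    m, sd = monte_carlo(s, 60)
    print(f"reorder point {s}: mean service level {m:.3f} (sd {sd:.3f})")
```

Note that the "policy change" is simply a different argument to the same model; rerunning the simulation then shows the effect of the change on the whole distribution of outcomes, not just on a point estimate.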
Conceptual modeling. The last two methods fall in the interpretive cell and are more
existential than rational. With conceptual modeling, a mental model of the suspected
relationships is posited which may then be evaluated by means of a framework that captures the
essence of the system under investigation. A PERT/CPM diagram is a typical conceptual model
that can be tested and evaluated for research purposes. Similarly, Gantt charts, system models
with feedback loops, fishbone cause-effect diagrams, and even taxonomies and categorizations
are all conceptual models. Examples of the latter include Hayes and Wheelwright’s (1984) four
stage taxonomy of firms in terms of their use of manufacturing as a competitive weapon and
Parnaby’s (1979) conceptual model of a manufacturing system.
Another common use of conceptual modeling is meta-analysis, in which a researcher draws
the threads of existing research together to formulate a larger or more integrated perspective of a
phenomenon. An example here is Gerwin’s (1988) theory of innovation processes.
Hermeneutics. Hermeneutics is directly concerned with the interpretation of what is being
observed. Classically, hermeneutics involved the interpretation of the scriptures. Here it is used
in reference to interpreting either reconstructions of object reality, such as texts or films, or the
object reality directly, such as an organization in action. The observer’s framework and
methodology are important to the description of the object of study because the observer is
interpreting and documenting it. For more discussion, consult Oliga (1988).
A more classical use of hermeneutics is illustrated by Nobes (1982) in his attempt to interpret
the Gallerani account book of 1305-1308 as an early double-entry bookkeeping system. In a
more contemporary setting, Wynne and Robak (1989) interpret the TIMS Edelman Paper
Contest results in terms of a framework of factors that may help bridge the gap between the
theory and practice of management science. In the process of conducting the interpretation they
rely partially on the wisdom of a variety of nationally-recognized management theorists.
Similarly, Amoako-Gyampah and Meredith (1989) interpret the research publications in
operations in terms of a “research agenda” and provide guidance on the trend of research in the
field.

APPLYING THE FRAMEWORK TO OPERATIONS

The majority of research on operations has been restricted to the northeast corner of the
research framework, as illustrated in Figure 4. This was determined by reviewing the operations
publications in Management Science, Decision Sciences, and Journal of Operations Management
for 1987 and 1977 (JOM began publication in 1980 and thus was unavailable for comparison in
1977). The review consisted of classifying the methodologies employed and plotting them on the
framework of Figure 3.
As illustrated by the percentages shown in Figure 4, 93% of all the articles fell in the
“artificial” paradigm and 91% fell in the highly rationalist paradigms of “axiomatic” or
“logical positivist/empiricist.” It is also interesting to note the changes over the years.
Management Science has increased its percentage of axiomatic operations publications from 60% in 1977 to 70% in 1987, while Decision Sciences has decreased its share from 82% to 58%.
However, Management Science is apparently more receptive than Decision Sciences to the more
“naturalistic” and “interpretive” paradigms and these percentages have remained relatively
stable over the decade. Interestingly, Journal of Operations Management has a different
paradigmatic thrust, with a minority of their publications being axiomatic (33%), though still primarily artificial (93%). However, their percentage of interpretive publications (20%) is almost
twice that of Management Science (11-12%).

FIGURE 4
DISTRIBUTION OF JOURNAL ARTICLES ON OM TOPICS

                            NATURAL <----------------------------------------------> ARTIFICIAL

                            DIRECT OBSERVATION    PEOPLE'S PERCEPTIONS    ARTIFICIAL RECONSTRUCTION
                            OF OBJECT REALITY     OF OBJECT REALITY       OF OBJECT REALITY
RATIONAL                       1977    1987          1977    1987            1977    1987
   ^
   |   AXIOMATIC          MS     -       -             -       -              60%     70%
   |                      DS     -       -             -       -              82%     58%
   |                      JOM    -       -             -       -               -      33%
   |                                                                     TOTAL: 62%
   |
   |   LOGICAL            MS     -       -             -       4%             28%     15%
   |   POSITIVIST/        DS     -       -             -       -              18%     42%
   |   EMPIRICIST         JOM    -       -             -       -               -      47%
   |                                              TOTAL: 1%              TOTAL: 28%
   |
   |   INTERPRETIVE       MS     8%      7%            -       4%              4%      -
   |                      DS     -       -             -       -               -       -
   |                      JOM    -       7%            -       -               -      13%
   |                          TOTAL: 5%           TOTAL: 1%              TOTAL: 3%
   |
   v   CRITICAL THEORY    (no articles)
EXISTENTIAL

SAMPLE SIZE BY JOURNAL                     1977    1987
MANAGEMENT SCIENCE (MS)                      25      26
DECISION SCIENCES (DS)                       17      12
JOURNAL OF OPERATIONS MANAGEMENT (JOM)        -      15
                                           TOTAL:    95
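The TOTAL percentages in Figure 4 can be checked by weighting each journal-year cell's percentage by that journal-year's article count and dividing by the overall sample of 95 articles. The quick check below uses the axiomatic-cell percentages and sample sizes taken from the figure; the helper function name is ours, introduced only for illustration.

```python
def pooled_share(cells):
    """Pool per-journal-year percentages into one overall share by
    weighting each cell's share by that journal-year's article count."""
    articles = sum(share * n for share, n in cells)
    total = sum(n for _, n in cells)
    return articles / total

# Axiomatic-cell percentages and sample sizes from Figure 4:
# MS 1977/1987, DS 1977/1987, JOM 1987.
axiomatic = pooled_share([(0.60, 25), (0.70, 26),
                          (0.82, 17), (0.58, 12),
                          (0.33, 15)])
print(f"pooled axiomatic share: {axiomatic:.0%}")  # prints: pooled axiomatic share: 62%
```

The result matches the 62% TOTAL shown for the axiomatic row; the other TOTAL cells follow the same weighting.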

The inescapable conclusion is that our research in operations is still overwhelmingly artificial
in nature, though breaking the methodological tie with the field of management science has
allowed us to begin moving toward more existential (primarily interpretive) paradigms and to
move away from the more rationalistic, “scientific” paradigms (both axiomatic and logical
positivist/empiricist). We believe that a much stronger movement toward naturalistic paradigms
(especially direct observation via case, action, and field studies) and existential (primarily interpretive) paradigms is called for. The methods are accessible, their legitimacy is proven, and
the need is great.
In general, the newer, more interrelated, more situation- or people-dependent topics in
operations require the additional perspective afforded through the natural and existential
methodologies. As an example, we describe the possible application of these methodologies to
two major topics in operations: quality management and technology implementation. In so
doing, we only sketch the major outlines of the potential applications, leaving the methodological details to the creativity of interested researchers.

Potential Research in Quality Management

One rich area within operations for discussing the application of the different paradigmatic
research stances is quality management. There are two primary reasons. First, quality management recently has become a primary concern, and even a strategic thrust, of many business organizations. Second, the concept of the quality of a product or a service is multi-dimensional
and, to some extent, nebulous in definition. The accepted definition concerns “fitness for use”;
yet evolving social values and perceptions continually redefine the interpretation of this phrase.
Much difficulty in conducting research in quality is due to how products and, especially,
services are perceived. Abstract, subjective, and commercial characteristics (e.g., a restaurant's “atmosphere” or its staff's “courtesy”) can only be measured through people's perceptions.
Moreover, each person may define these characteristics differently. Because the basic definition
of quality involves the degree of consumer satisfaction with a product or service, people’s
perceptions of the major components of quality provide a basis for quality assessment. The
notion of quality is created by a combination of such diverse attributes and thus, diverse research
methods may also be required (Alexander (1988)).
We first consider the application of the natural-artificial dimension of our framework for
research in quality management. The three major paradigms include direct observation,
determining people’s perceptions, and artificially reconstructing object reality.
Quality characteristics presumed inherent in the product or service (defects, durability) are,
by and large, directly observable. Field experiments and field studies are appropriate for
analyzing these characteristics of a product because the variables are clear and potentially
subject to control. For the more contextually-defined or situation-dependent quality characteristics (such as aesthetic appeal), case studies and action research are required because of their
deeper analysis of context.
When assessing perceptions, methods such as surveys and structured interviews provide an
empirical approach to studying quality. But significant progress in this area may require more
interpretive research. Here, intensive interviewing, expert panels, scenario analysis, and even
introspective reflection could be valuable for uncovering basic attitudes, perhaps even
subconscious feelings, about what constitutes quality. Reflection offers the possibility of
merging the more rational/empirical quality concepts with the interpretive findings so as to move
to a higher level of understanding of the concept of quality.
Most of the existing research pertaining to quality has been conducted at the artificial end of
the scale through the methods of modeling, laboratory experimentation, and simulation. It might
be noted here that other “direct, observable” characteristics such as reliability and maintainability are, in practice, typically measured through artificial reconstructions of the object
reality. For example, a system reliability of 0.99 either is a statistical estimate or a guess based
on the hypothetical reconstruction of past experience. However, conceptual modeling for the
purpose of interpretation has been rare.
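The point that a figure like 0.99 is a statistical estimate reconstructed from past experience can be made concrete. The sketch below treats reliability as a binomial success probability estimated from pass/fail test records; the counts and the normal-approximation confidence interval are illustrative assumptions of ours, not drawn from any study cited here.

```python
import math

def reliability_estimate(successes, trials, z=1.96):
    """Point estimate and approximate 95% confidence interval for system
    reliability, treating each trial as an independent pass/fail test.
    The normal approximation is an illustrative choice, not the only one."""
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p, max(0.0, p - half), min(1.0, p + half)

# E.g., 990 successes in 1,000 recorded trials:
p, lo, hi = reliability_estimate(990, 1000)
print(f"reliability ~ {p:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
# prints: reliability ~ 0.990, 95% CI (0.984, 0.996)
```

Even this simple calculation shows that the "observed" 0.99 is itself an artificial reconstruction: it depends on how the trials were defined, recorded, and aggregated.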



On the rational-existential dimension, the axiomatic perspective historically has been most
common in quality control research. Statistical laws are used to make judgments about the
quality of products/processes/services and quality planning efforts are directed toward reducing
the overall cost of quality. But is cost the real driver? The trade-offs implicit in this analysis are
becoming suspect, as the Japanese approach to quality has indicated (Hayes and Abernathy (1980)).
Logical positivist/empiricist notions are implicit in research using surveys that relate quality
to product/firm success, and also in experimental work related to the efficacy of small groups
(quality circles) in quality improvement programs. In the more artificial vein, simulations of processes and systems have been conducted to identify improved quality management strategies.
Similarly, Fine and Bridge (1984) address issues concerning the explicit and implicit cost and
direct measures of quality.
Interpretive efforts in the area of quality might address the interactions of organizational
functions in the provision of quality products and services, including the place of organizational
politics, leadership, consensus-building, and so on. A major issue here is the strategic role of
quality in the firm’s competitive strategy and how quality is included in the formulation of the
firm’s policies.
Critical theory research could address issues such as the firm’s reward and motivation
systems’ effects on quality. Issues of safety, health, ethics, and social benefit as related to quality
and how it is delivered also could be researched here. Given the highly individualistic nature of
interpretations of quality, other research topics might include the differing perceptions of quality
within the firm, the relative importance of its different facets, and the organizational policies
that influence them.

Potential Research in Technology Implementation

The same kind of arguments might be made for the topic of technology implementation. We
will only briefly sketch some of these points here.
Research on the implementation of new technologies is an area that lends itself particularly
well to the use of the interpretive paradigm. Previous operations studies on implementation have
been mainly in the positivist/empiricist mode and employed surveys and field studies (e.g., see
Ettlie (1988), Nutt (1986), White et al. (1982), and Leonard-Barton and Kraus (1985)). Rogers
(1983) and Voss (1988) argue that the study of implementation must spread over the life span of
the implementation process. Action research, where the researcher is involved with other parties
in the analysis, planning, and execution of the implementation process, could help significantly
in improving OM researchers’ understanding of the implementation process (Warmington
(1980)). Case studies, particularly longitudinal ones, would allow detailed exploration of unfolding processes in real time and avoid the frequently distorted responses of surveys and structured interviews.
Since the implementation of new technology usually spans some years and involves many participants, historical analysis could be used to capture the social, political, economic, and cultural context that invariably comes into play during such an extended process. This might even allow the development of a framework of recurring patterns in implementation processes. Because of the extended time span of implementation, Delphi and scenario analyses also would be feasible, and possibly desirable, for anticipating normally unexpected developments.



CONCLUSIONS
Research in operations has employed a limited set of paradigms for too long. Heavy on the
artificial and rational ends of the scales and light on the object reality and existential/interpretive
ends, research in operations has typically exhibited high reliability and internal validity but
almost no external validity. We emphasize objectivity in research and thus stress predictive
power, but with little understanding of the phenomenon. It is time to expand our limited set of
worn-out paradigms and consider new research methods from paradigms used in our sister
fields. This broadening of our perspective would contribute substantially to addressing the
research problems we face.
If we do not expand our approaches to research, managers will continue to perceive us as
irrelevant academics who address fictitious problems and are not interested in the real world. To
make true contributions to both research and practice, we must enlarge our repertoire of
methodologies and apply those that are most appropriate, efficient, and effective for the
situations at hand.

ENDNOTES

†Throughout this paper we follow the example of the other functional business fields of finance, marketing, and human resources and drop the unnecessary “management.”

REFERENCES

1. Abernathy, W.J., and J.E. Corcoran. “Relearning from the Old Masters: Lessons of the American System of Manufacturing.” Journal of Operations Management, vol. 3, no. 4, August 1983, 155-167.
2. Ackoff, R. “The Future of Operational Research is Past.” Journal of the Operational Research Society, vol. 30, no. 2, 1979, 93-104.
3. Agar, M.H. Speaking of Ethnography. Sage University Paper Series on Qualitative Research Methods, vol. 2. Beverly Hills, CA: Sage Publications, 1986.
4. Alexander, C.P. “Quality's Third Dimension.” Quality Progress, July 1988, 21-23.
5. Amoako-Gyampah, K., and J.R. Meredith. “The Operations Management Research Agenda: An Update.” Journal of Operations Management, vol. 8, no. 3, 1989, 250-262.
6. Anderson, J.C., R.G. Schroeder, E.M. White, and S.E. Tupy. “MRP: The State of the Art.” APICS Monograph. Falls Church, VA: American Production and Inventory Control Society, Inc., Fall 1980.
7. Anderson, J.C., N.L. Chervany, and R. Narasimhan. “Is Implementation Research Relevant for the OR/MS Practitioner?” Interfaces, vol. 9, no. 3, May 1979, 52-56.
8. Andrew, C.G., and G.A. Johnson. “The Crucial Importance of Production/Operations Management.” Academy of Management Review, vol. 7, no. 1, 1982, 143-147.
9. Antill, L. “Selection of a Research Method.” In Research Methods for Information Systems, E. Mumford and R. Hirschheim (eds.) Amsterdam: North-Holland, 1985.
10. Argyris, C., R. Putnam, and D.M. Smith. Action Science: Concepts, Methods, and Skills for Research and Intervention. San Francisco: Jossey-Bass, 1985.
11. Babbie, E.R. Survey Research Methods. Belmont, CA: Wadsworth, 1973.
12. Basu, A., and R.G. Schroeder. “Incorporating Judgments in Sales Forecasts: Application of the Delphi Method at American Hoist & Derrick.” Interfaces, vol. 7, no. 3, May 1977, 18-27.
13. Beged-Dov, A.G., and T.A. Klein. “Research Methodology in the Management Sciences: Formalism or Empiricism.” Operational Research Quarterly, vol. 21, no. 3, 1970, 311-326.
14. Benbasat, I., D.K. Goldstein, and M. Mead. “The Case Research Strategy in Studies of Information Systems.” MIS Quarterly, vol. 11, no. 3, Sept. 1987, 369-386.
15. Benson, P.G., A.V. Hill, and T.R. Hoffmann. “Manufacturing Systems of the Future: A Delphi Study.” Production and Inventory Management, vol. 23, no. 3, Third Quarter 1982, 87-106.
16. Bonoma, T.V. “Case Research in Marketing: Opportunities, Problems, and a Process.” Journal of Marketing Research, vol. 22, no. 2, May 1985, 199-208.



17. Bourgeois, L.J., III, and K.M. Eisenhardt. “Strategic Decision Processes in High Velocity Environments: Four Cases in the Microcomputer Industry.” Management Science, vol. 34, no. 7, July 1988, 816-835.
18. Bridgman, P.W. The Logic of Modern Physics. New York: Macmillan, 1968.
19. Buffa, E.S. “Research in Operations Management.” Journal of Operations Management, vol. 1, no. 1, 1980, 1-8.
20. Buffa, E.S. Operations Management: Problems and Models. 2nd ed. New York: Wiley, 1968.
21. Buffa, E.S. Modern Production Management. New York: Wiley, 1965.
22. Buxey, G. “Research Needs in Production Management.” In Research in Production/Operations Management, C.A. Voss (ed.) Brookfield, VT: Gower, 1984.
23. Chase, R.B. “A Classification and Evaluation of Research in Operations Management.” Journal of Operations Management, vol. 1, no. 1, 1980, 9-14.
24. Chase, R.B., G.B. Northcraft, and G. Wolf. “Designing High-Contact Service Systems: Application to Branches of a Savings and Loan.” Decision Sciences, vol. 15, no. 4, Fall 1984, 542-555.
25. Chase, R.B., and E.L. Prentis. “Operations Management: A Field Rediscovered.” Journal of Management, vol. 13, no. 2, 1987, 351-366.
26. Choi, R.H., and E.M. Malstrom. “Evaluation of Traditional Work Scheduling Rules in a FMS Using a Physical Simulator.” Journal of Manufacturing Systems, vol. 7, no. 1, 1988, 33-45.
27. Davis, M. “That's Interesting!” Philosophy of the Social Sciences, vol. 1, 1971, 309-344.
28. Denzin, N.K. The Research Act. 2nd ed. New York: McGraw-Hill, 1978.
29. Dexter, L.A. Elite and Specialized Interviewing. Evanston, IL: Northwestern University Press, 1970.
30. Donham, W.B. “Essential Groundwork for a Broad Executive Theory.” Harvard Business Review, vol. 1, no. 1, Oct. 1922, 1-10.
31. Dubin, R. Theory Building. New York: The Free Press, 1969.
32. Emory, W.C. Business Research Methods. 3rd ed. Homewood, IL: R.D. Irwin, 1985.
33. Ettlie, J. Taking Charge of Manufacturing. San Francisco: Jossey-Bass, 1988.
34. Ferdows, K., J.G. Miller, J. Nakane, and T.E. Vollmann. “Evolving Global Manufacturing Strategies: Projections into the 1990s.” International Journal of Operations and Production Management, vol. 7, no. 4, 1987, 6-16.
35. Fine, C.H., and D.H. Bridge. “Managing Quality Improvement.” M.I.T. Working Paper #1607-84. Boston: Massachusetts Institute of Technology, 1984.
36. Freiman, J.M., and B.O. Saxberg. “Impact of Quality Circles on Productivity and Quality: Research Limitation of a Field Experiment.” IEEE Transactions on Engineering Management, vol. 36, no. 2, May 1989, 114-118.
37. Friedman, M. “Explanation and Scientific Understanding.” Journal of Philosophy, vol. LXXI, no. 1, Jan. 17, 1974, 5-19.
38. Galbraith, J.K. The Affluent Society. Boston: Houghton Mifflin, 1958.
39. Galliers, R.D. “In Search of a Paradigm for Information Systems Research.” In Research Methods for Information Systems, E. Mumford and R. Hirschheim (eds.) Amsterdam: North-Holland, 1985.
40. Galliers, R.D., and F.F. Land. “Choosing Appropriate Information Systems Research Methodologies.” Communications of the ACM, vol. 30, no. 11, Nov. 1987, 900-902.
41. Gerwin, D. “A Theory of Innovation Processes for Computer-Aided Manufacturing Technology.” IEEE Transactions on Engineering Management, vol. 35, no. 2, May 1988, 90-100.
42. Goodman, P.S., and S. Garber. “Absenteeism and Accidents in a Dangerous Environment: Empirical Analysis of Underground Coal Mines.” Journal of Applied Psychology, vol. 73, no. 1, 1988, 81-86.
43. Gordon, R.A., and J.E. Howell. Higher Education for Business. New York: Columbia University Press, 1959.
44. Graham, M.B.W., and S.R. Rosenthal. Institutional Aspects of Process Procurement for Flexible Machining Systems. Research report. Boston: Boston University, September 1986.
45. Grayson, C.J. “Management Science and Business Practices.” Harvard Business Review, vol. 51, no. 4, July-August 1973, 41-48.
46. Greene, J. Production Control. Homewood, IL: R.D. Irwin, 1965.
47. Groff, G.K., and T.B. Clark. “Commentary on ‘Production/Operations Management: Agenda for the ’80s.’” Decision Sciences, vol. 12, no. 4, 1981, 578-581.
48. Habermas, J. Knowledge and Human Interests. Boston: Beacon, 1979a.
49. Habermas, J. Communication and the Evolution of Society. Boston: Beacon, 1979b.
50. Hax, A. “Comment on ‘Production/Operations Management: Agenda for the ’80s.’” Decision Sciences, vol. 12, no. 4, 1981, 574-577.
51. Hayes, R.H., and W. Abernathy. “Managing Our Way to Economic Decline.” Harvard Business Review, vol. 58, no. 4, July-August 1980, 67-77.



52. Hayes, R.H., and S.C. Wheelwright. Restoring Our Competitive Edge: Competing Through Manufacturing. New York: Wiley, 1984.
53. Heller, W.W. “The Anatomy of Investment Decisions.” Harvard Business Review, vol. 29, no. 2, 1951, 95-103.
54. Hempel, C. “Studies in the Logic of Explanation.” In Aspects of Scientific Explanation. New York: The Free Press, 1965, 245-295.
55. Hill, A.V., G.D. Scudder, and D.L. Haugen. “Production/Operations Management Agenda for the ’80s: A Progress Report.” In Proceedings of the 1987 DSI Conference. Boston: Decision Sciences Institute, 1987, 840-842.
56. Hill, T.J. “Teaching and Research Directions in Production/Operations Management: The Manufacturing Sector.” International Journal of Operations and Production Management, vol. 7, no. 4, 1987, 5-12.
57. Holt, C.C., F. Modigliani, J.F. Muth, and H.A. Simon. Planning Production, Inventories, and Work Force. New York: Prentice-Hall, 1960.
58. Hospers, J. “What is Explanation?” In Essays in Conceptual Analysis, A. Flew (ed.) London: Macmillan, 1956, 94-119.
59. Hounshell, D. “The Same Old Principles in the New Manufacturing.” Harvard Business Review, vol. 66, no. 6, November-December 1988, 54-61.
60. Hudson, L.A., and J.L. Ozanne. “Alternative Ways of Seeking Knowledge in Consumer Research.” Journal of Consumer Research, vol. 14, March 1988, 508-521.
61. Ijiri, Y. Theory of Accounting Measurement. American Accounting Association, 1975.
62. Jaikumar, R. “From Filing and Fitting to Flexible Manufacturing: A Study in the Evolution of Process Control.” Working paper. Cambridge, MA: Harvard Business School, February 1988.
63. Jenkins, A.M. “Research Methodologies and MIS Research.” In Research Methods for Information Systems, E. Mumford and R. Hirschheim (eds.) Amsterdam: North-Holland, 1985.
64. Kaplan, B., and D. Duchon. “Combining Qualitative and Quantitative Methods in Information Systems Research: A Case Study.” MIS Quarterly, vol. 12, no. 4, December 1988, 571-586.
65. Klein, H.K., and K. Lyytinen. “The Poverty of Scientism in Information Systems.” In Research Methods for Information Systems, E. Mumford and R. Hirschheim (eds.) Amsterdam: North-Holland, 1985.
66. Klein, J. “The Human Costs of Manufacturing Reform.” Harvard Business Review, vol. 67, no. 2, March-April 1989, 60-66.
67. Laidlaw, W.K., Jr. “Improving the Quality of Business Schools.” Wall Street Journal, Letters to the Editor, Sept. 22, 1988, 35.
68. Lawrence, M.J., R.H. Edmundson, and M.J. O’Connor. “The Accuracy of Combining Judgmental and Statistical Forecasts.” Management Science, vol. 32, no. 12, December 1986, 1521-1532.
69. Lee, A.S. “Positivism: A Discredited Model of Science Still in Use in the Study and Practice of Management.” Working Paper 87-61. Boston: Northeastern University, September 1987.
70. Leonard-Barton, D. “Implementation Characteristics of Organizational Innovations.” Communication Research, vol. 15, no. 5, October 1988, 603-631.
71. Leonard-Barton, D., and W.A. Kraus. “Implementation of New Technology.” Harvard Business Review, vol. 63, no. 6, November-December 1985, 102-110.
72. Liker, J.K., D.B. Roitman, and E. Roskies. “Changing Everything All at Once: Work Life and Technological Change.” Sloan Management Review, Summer 1987, 29-47.
73. London, J., N. Stevenson, and G. Holmes. Manufacturing in Europe. Geneva, Switzerland: Business International, 1987.
74. Mabert, V.A. “Service Operations Management: Research and Application.” Journal of Operations Management, vol. 2, no. 4, 1982, 203-209.
75. Mansfield, E. “The Speed and Cost of Industrial Innovations in Japan and the United States: External versus Internal Technology.” Management Science, vol. 34, no. 10, October 1988, 1157-1168.
76. Martineau, H. The Positive Philosophy of Auguste Comte. Vol. 1. London: George Bell, 1896.
77. Mayer, R. Production Management. New York: McGraw-Hill, 1962.
78. McCarthy, T. The Critical Theory of Jurgen Habermas. Cambridge, MA: The Massachusetts Institute of Technology Press, 1978.
79. McKay, K.N., F.R. Safayeni, and J.A. Buzacott. “Job-Shop Scheduling Theory: What is Relevant?” Interfaces, vol. 18, no. 4, July-August 1988, 84-90.
80. Meredith, J.R. “Implementing New Manufacturing Technologies: Managerial Lessons Over the FMS Life Cycle.” Interfaces, November-December 1987, 51-62.



81. Meredith, J.R. “Reconsidering the Decision-Making Approach to Management.” OMEGA: The International Journal of Management Science, vol. 12, no. 4, July 1984, 341-352.
82. Meredith, J.R., and S.G. Green. “Managing the Introduction of Advanced Manufacturing Technologies.” Manufacturing Review, vol. 1, no. 2, June 1988, 87-92.
83. Meredith, J.R., and S.J. Mantel, Jr. Project Management: A Managerial Approach. 2nd ed. New York: Wiley, 1989.
84. Meredith, J.R., A. Raturi, B. Kaplan, and K. Amoako-Gyampah. “The Foundations of Research.” Working Paper OM-1988-010. Cincinnati, OH: University of Cincinnati, 1988.
85. Miles, M.B. “Qualitative Data as an Attractive Nuisance: The Problem of Analysis.” In Qualitative Methodology, J. van Maanen (ed.) Beverly Hills, CA: Sage Publications, 1983, 117-134.
86. Miller, D., and J.M. Toulouse. “Chief Executive Personality and Corporate Strategy and Structure in Small Firms.” Management Science, vol. 32, no. 11, Nov. 1986, 1389-1409.
87. Miller, J.G., and M.B.W. Graham (with J.R. Freeland, M. Hottenstein, D.M. Maister, J.R. Meredith, and R.W. Schmenner). “Production/Operations Management: Agenda for the ’80s.” Decision Sciences, vol. 12, no. 4, 1981, 547-571.
88. Mintzberg, H. “An Emergent Strategy of ‘Direct’ Research.” In Qualitative Methodology, J. van Maanen (ed.) Beverly Hills, CA: Sage Publications, 1983, 105-116.
89. Mitchell, W.N. Organization and Management of Production. New York: McGraw-Hill, 1939.
90. Mitroff, I.I., and R.O. Mason. “Business Policy and Metaphysics: Some Philosophical Considerations.” Unpublished manuscript, University of Southern California, 1984.
91. Murphy, J.T. Getting the Facts. Santa Monica, CA: Goodyear, 1980.
92. Nistal, G. “Is Higher Education Responsive to the Needs of the Real World of Business?” Collegiate News and Views, vol. 33, no. 2, 1979-80, 7-11.
93. Nobes, C.W. “The Gallerani Account Book of 1305-1308.” The Accounting Review, vol. 57, no. 2, April 1982, 303-310.
94. Nutt, P.C. “Tactics of Implementation.” Academy of Management Journal, vol. 29, no. 2, 1986, 230-261.
95. Nutt, P.C. “Implementation Approaches for Project Planning.” Academy of Management Review, vol. 8, no. 4, 1983, 600-611.
96. Oliga, J.C. “Methodological Foundations of Systems Methodologies.” Systems Practice, vol. 1, no. 1, 1988, 87-112.
97. Orden, A. “Model Assessment Objectives in Operations Research.” Mathematical Programming, vol. 42, 1988, 85-97.
98. Parnaby, J. “Concept of a Manufacturing System.” International Journal of Production Research, vol. 17, no. 2, 1979, 123-135.
99. Paulin, W.L., R.E. Coffey, and M.E. Spaulding. “Entrepreneurship Research: Methods and Directions.” Chapter 18 in Encyclopedia of Entrepreneurship, C.A. Kent, D.L. Sexton, and K.H. Vesper (eds.) Englewood Cliffs, NJ: Prentice-Hall, 1982, 352-369.
100. Pierson, F.C. The Education of American Businessmen. New York: McGraw-Hill, 1959.
101. Popper, K. Conjectures and Refutations. London: Routledge and Kegan Paul, 1963.
102. Popper, K. The Logic of Scientific Discovery. New York: Harper Torchbooks, 1968.
103. Raiszadeh, F.M.E., and L.P. Ettkin. “POM in Academia: Some Causes for Concern.” Production and Inventory Management Journal, 2nd quarter, 1989, 37-40.
104. Rapoport, R.N. “Three Dilemmas in Action Research.” Human Relations, vol. 23, no. 6, 1970.
105. Raturi, A.S., and D.M. McCutcheon. “Quality Management: Some Future Directions.” Working paper. Cincinnati, OH: University of Cincinnati, 1989.
106. Reisman, A. “On Alternative Strategies for Doing Research in the Management and Social Sciences.” IEEE Transactions on Engineering Management, vol. 35, no. 4, November 1988, 215-220.
107. Rogers, E. Diffusion of Innovations. 3rd ed. New York: The Free Press, 1983.
108. Rosenberg, M. The Logic of Survey Analysis. New York: Basic Books, 1968.
109. Ruwe, D.M., and W. Skinner. “Reviving a Rust Belt Factory.” Harvard Business Review, vol. 65, no. 3, May-June 1987, 70-76.
110. Saladin, B.A. “Operations Management: One Model of the Field.” Operations Management Review, Summer 1984, 51-55.
111. Schatzman, L., and A. Strauss. Field Research: Strategies for a Natural Sociology. Englewood Cliffs, NJ: Prentice-Hall, 1973.

Journal of Operations Management 323


112. Schonberger, R.J. “Frugal Manufacturing.” Harvard Business Review, vol. 65, no. 5, September-October 1987,
95-100.
113. Schroeder, R.G., G.D. Scudder, and D.R. Elm. “Innovation in Manufacturing.” Journal of Operations
Management, vol. 8, no. 1, January 1989, 1-15.
114. Scriven, M. “Objectivity and Subjectivity in Educational Research.” In Philosophical Redirection of Educational
Research, The 71st Yearbook of the National Society for the Study of Education, L.G. Thomas (ed.) Chicago:
University of Chicago Press, 1972, 94-142.
115. Scriven, M. “Explanations, Predictions, and Laws.” Minnesota Studies in the Philosophy of Science, vol. III.
Minneapolis: University of Minnesota, 1962, 170-230.
116. Skeddle, R.W. “Expected and Emerging Actual Results of a Major Technological Innovation-Float Glass.”
OMEGA: The International Journul of Management Science, vol. 8, no. 5, 1980, 553-567.
117. Starr, M. Production Management; Systems and Synthesis. Englewood Cliffs, NJ: Prentice-Hall, 1964.
118. Stoner, D.L., K.J. Tice, and J.E. Ashton. “Simple and Effective Cellular Approach to a Job Shop Machine Shop.”
Manufacturing Review, vol. 2, no. 2, June 1989, 119-128.
119. Strauss, A.L. Qualitative Analysis for Social Scientists. Cambridge: Cambridge University Press, 1987.
120. Sullivan, R.S. “The Service Sector: Challenges and Imperatives for Research in Operations Management.”
Journal of Operations Management, vol. 2, no. 4, 1982, 211-214.
121. Swamidass, P.M. “Empirical Science: The New Frontier in Operations Management Research.” Unpublished
paper, College of Business and Public Administration, University of Missouri, Columbia, MO, 1988a.
122. Swamidass, P.M. Manufacturing Flexibility. OMA Monograph no. 2. Austin, TX: Operations Management
Association, 1988b.
123. Taylor, J.C., P.W. Gustavson, and W.S. Carter. “Integrating the Social and Technical Systems of Organizations.”
In Managing Technological Innovation, D.D. Davis (ed.) San Francisco: Jossey-Bass, 1986, 154-186.
124. Vitale, N.P. “The Need for Longitudinal Designs in the Study of Computing Environments.” In Research Methods
for Information Systems, E. Mumford and R. Hirschheim (eds.) Amsterdam: North-Holland, 1985, 243-263.
125. Voss, C.A. Research in Production/Operations Management. Brookfield, VT: Gower, 1984.
126. Voss, C.A. “Implementation: A Key Issue in Manufacturing Technology: The Need for a Field of Study.”
Research Policy, vol. 17, no. 2, 1988, 55-63.
127. Warmington, A. “The Nature of Action Research.” Journal ofSystems Analysis, vol. 7, 1980, 52-57.
128. White, E.M., J.C. Anderson, R.G. Schroeder, and S.E. Tupy. “A Study of the MRP Implementation Process.”
Journal of Operations Management, vol. 2, no. 1, May 1982, 145-153.
129. Whybark, D.C., and B. Rho. “A Worldwide Survey of Manufacturing Practice.” Discussion Paper no. 2.
Bloomington, IN: Indiana Center for Global Business, School of Business, Indiana University, May 1988.
130. Wright, G., and P. Ayton. “The Psychology of Forecasting.” Futures, June 1986, 420-439.
131. Wynne, B.E., and N.J. Robak. “Entrepreneurs Enabled: A Comparison of Edelman Prize-Winning Papers.”
Interfaces, vol. 19, no. 2, March-April 1989, 70-78.

APPENDIX
THE HISTORY OF RESEARCH AND POST-POSITIVIST THOUGHT

This appendix briefly reviews the historical and philosophical development of research thought. It also describes a
number of attributes of modern post-positivist thinking. It then concludes with a discussion of the need to employ a
plurality of research methods in operations. More detailed discussion on these topics can be found in Meredith, Raturi,
Kaplan, and Amoako-Gyampah (1988) or the indicated references.
The history of research, scientific inquiry in particular, covers three main periods: the era of the positivists and the
associated development of the scientific method, the era of the empiricists and the application of new methodologies to
the social sciences because the positivist approach was found wanting, and the post-positivist era, which includes the
present. By limiting our discussion to these three approaches, we ignore a number of other philosophies pertaining to
metaphysics and epistemology, some of which occurred within the same periods. That is, we are following only one
philosophical thread here.
Also, positivism, our first topic, was not the first philosophy but developed as a reaction to other philosophies. And in
its turn, empiricism was only one of the alternatives posed to positivism. We only briefly sketch these eras to give a
foundation for the discussion on research paradigms.



The Positivists’ Approach

Auguste Comte (1798-1857) was a sociologist who initiated deliberations on positivist thought. The term “positive”
refers to positive, or “observable,” data; that is, sensory experience of external, objective reality (Martineau (1896)).
The positivist approach holds that only empirically verifiable or analytic propositions are meaningful (see, for example,
Hempel (1965)).
One theme that has consistently emerged from historical developments in the philosophy of science is that the methods
of the physical sciences (largely positivist) are only a subset of the methods for providing explanation and conducting
research. Yet, as Beged-dov and Klein (1970) and Lee (1987) point out, the positivist approach remains the cornerstone
for conducting research in management science. As Klein and Lyytinen (1985, p. 136) note: “The scientific method
turns into scientistic orthodoxy when it entails a commitment to [the belief that reality exists independently of the
researcher, language, and culture; the empirical-analytical method is the only valid approach to research; and that
scientism applies not only to the domain of the so-called exact (viz. physical and mathematical) sciences, but also to
those of all other fields, in particular the study of human behavior.] It has found its most extreme implementation in the
practice of Management Science as manifested in most of the TIMS publications and likewise outlets.”
Researchers in the social sciences in the 19th century encountered significant problems trying to operate under these
guidelines. Difficulties arose in reconciling laws of behavior with human free will. More significantly, there were severe
problems trying to predict social events based upon multiple causal events through the use of the then-popular tool of the
physical sciences, the laboratory experiment. Recognizing the need for a new research paradigm, philosophers, at the
behest of social scientists, again addressed the basic question of what constitutes a scientific explanation and what does
not.

The Empiricists’ Approach

A number of philosophies were proposed as alternatives to positivism. Here we follow the antecedents of positivism
to a later development, the work of John Locke (1632-1704) and David Hume (1711-1776) on empiricism. Empiricism
is the doctrine that all knowledge is based on experience, or obtained through the senses. The power of empiricism
derives from “predictiveness.” Friedman (1974) argues that the causal structure leading to “true” predictions may rest
upon false assumptions, but we are not bothered by it as long as the predictions remain true. He argues that across
scientists and across different time periods, certain phenomena are considered self-explanatory, or “natural.”
Explanation consists of relating other phenomena to these natural phenomena.
The primary methodology of empiricism is rejection of the hypothesis (Popper (1963)). Because confirmation
cannot be empirically established, hypotheses have to be set so that they can be rejected, or “falsified.” Popper
proposed that research activity is the method of the “irrational rationals”; one is looking for statements to reject. Lee
(1987) notes, in addition, that the discipline of statistics has “distanced itself from the notion of induction.” Statisticians
are careful to point out that the action of increasing the sample size does not increase the probability that the observation
is true; rather, it increases the researcher’s level of confidence.
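The statisticians’ point can be illustrated with a small sketch (Python, purely for illustration; the function name, seed, and population parameters below are our own assumptions, not drawn from any cited work). Enlarging the sample does not make the estimated mean any more “true”; it only narrows the interval about which the researcher is confident.

```python
import math
import random

def ci_halfwidth(sample, z=1.96):
    """Half-width of an approximate 95% confidence interval for the mean."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
    return z * math.sqrt(var / n)

# Draw ever-larger samples from the same hypothetical population: the
# half-width shrinks roughly as 1 / sqrt(n), while the underlying estimate
# is no more "true" at n = 400 than at n = 25.
random.seed(1)
for n in (25, 100, 400):
    sample = [random.gauss(100.0, 15.0) for _ in range(n)]
    print(n, round(ci_halfwidth(sample), 2))
```

The quadrupling of the sample size at each step halves the interval's width, which is precisely a statement about the researcher's confidence, not about the truth of the observation.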

Post-Positivist Approaches

The two primary research approaches in operations have been management science and statistics. As argued earlier,
the normative thinking implicit in the former and the problems of verification in the latter are problematic, especially for
operations systems that include people. Since socio-cultural and organizational issues are implicit in our definition of
operations as a field, researchers have to deal with the criticisms of the positivist/normative and empiricist approaches.
While much has been written about problems with the positivist/normative approach in operations (e.g., Ackoff (1979),
Lee (1987)), there are problems with empiricism for the field of operations also. One is the limits of the
stimulus-response paradigm of laboratory experimentation and other empirical research modes, such as surveys, in producing
usable observations. We are not commenting here on experiments done in artificial settings, or with convenience
samples; this is the accepted practice of current research activity in most areas of management. Rather, we are concerned
with the much more stringent and dangerous assumption that social beings respond to a stimulus (e.g., the alternatives in
a questionnaire) in a finite number of ways that can be captured by experiments or instruments.
Also, because we seek universal laws concerning operations, we tend to do research using transhistorical or
transcultural generalizations. Recent research efforts concerning manufacturing systems in different nations (e.g.,
Whybark and Rho (1988)) conclude that it is difficult to discover laws or principles that are valid across time or cultures.
Critical social theorists like Habermas (1971, 1979) (also see McCarthy (1978)) consider a researcher’s subjectivity and
a proposition’s cultural bias as significant research problems.
There are additional problems concerning observer neutrality. It is hard, or impossible, to study social systems from a
value-free perspective. Further, the repeatability of research findings is another positivist notion that presumes the



deductive, universalistic nature of scientific explanation. Despite the positivist requirement of repeatability, non-
repeatable findings represent interesting results in their own right and reflect a phenomenon’s different facets.
The last, and possibly most important, reason for developing alternate research paradigms in operations is the
limitation of current research methods. Vitale (1985) argues that exclusive dependence on the positivist model of
research has limited the exploration of methodological alternatives. This severely restricts the ability of the researcher in
understanding the phenomenon being studied. Methodological pluralism is much more attractive. Operations needs the
power offered by a plurality of research methods.
This plurality can be generated only if alternative philosophical interpretations of explanation are understood and
accepted. We currently dichotomize between theory (often incorrectly interpreted as normative, axiomatic, or
mathematical statements) and application/case studies, as if the former explains and the latter merely describes. For
example, early issues of Management Science were formally divided into two series: theory and applications. Setting
this illogic aside, different interpretations of explanation have advantages in different research contexts, and plurality
can only lead to richness in explanation.
Moreover, the simultaneous use of multiple research methods, generally known as “triangulation,” offers increased
robustness and confidence in the results. Denzin (1978, p. 291) defines triangulation as “the combination of
methodologies in the study of the same phenomenon.” More specifically, triangulation, also known as “convergent
validation” in the social sciences, is the cross-validation that is achieved when data from different sources are found to
be congruent or when an explanation can account for all of the data when they diverge. The more research techniques
that can be applied to a situation, the more nuances of understanding will be developed. Frequently, details that one
approach misses will be caught by another. Or more significantly, hypotheses that cannot be rejected by one approach
may be rejected by another. By using cross-disciplinary research teams with different research or methodological skills,
much insight can be developed about a situation that may have been invisible to any one of the researchers alone. Kaplan
and Duchon (1988) provide an excellent example of such a situation in an information systems case study.
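As an illustration only (the methods, figures, and tolerance below are hypothetical, not taken from any cited study), the convergent-validation idea can be reduced to a simple check: do independent sources’ estimates of the same quantity fall within an acceptable band of each other?

```python
def congruent(estimates, tolerance):
    """Convergent validation: do independent estimates of the same quantity
    agree to within the stated tolerance?"""
    values = list(estimates.values())
    return max(values) - min(values) <= tolerance

# Hypothetical estimates (hours) of average setup time at one work center,
# each produced by a different research method.
setup_time = {
    "survey": 2.4,        # operator questionnaire
    "archival": 2.1,      # shop-floor records
    "observation": 2.3,   # direct time study
}
print(congruent(setup_time, tolerance=0.5))  # True: the three sources converge
```

When the check fails, the divergence itself is the finding: the researcher must look for an explanation that accounts for all of the data, exactly as Denzin’s definition requires.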
Operations needs a broader epistemological stance concerning knowledge creation. Philosophers have argued that
science is a problem-solving activity that uses certain conventions in the process. If so, then the difficulty of specifying
research methods shifts from correlations, optimality, and statistical significance (Mitroff and Mason (1984)) to
searching for an appropriate way to address a problem. If the researcher is only a tool user instead of a tool builder, we
run the risk of distorted knowledge acquisition: “For he who has but one tool, the hammer, the whole world looks like a
nail.”


