JACK R. MEREDITH*
AMITABH RATURI*
KWASI AMOAKO-GYAMPAH*
BONNIE KAPLAN**
EXECUTIVE SUMMARY
Due to the heritage and history of operations management, its research methodologies have been
confined mainly to that of quantitative modeling and, on occasion, statistical analysis. The field has been
changing dramatically in recent years. Firms now face numerous worldwide competitive challenges.
many of which require major improvements in the operations function. Yet, the research methodologies in
operations have largely remained stagnant. The paradigm on which these methodologies are based, while
useful, limits the kinds of questions researchers can address.
This paper presents a review and critique of the research in operations, itemizing the shortcomings
identified by researchers in the field. These researchers suggest a new research agenda with an integrative
view of operations’ role in organizations, a wider application of alternative research methodologies,
greater emphasis on benefit to the operations manager, cross-disciplinary research with other functional
areas, a heavier emphasis on sociotechnical analysis over the entire production system, and empirical
field studies. Some of the alternative research methodologies mentioned include longitudinal studies,
field experiments, action research, and field studies.
Following a description of the nature of research, three stages in the research cycle are identified:
description, explanation, and testing. Although research can deal with any stage in this cycle, the
majority of attention currently seems to focus on the explanation stage. The paper then discusses
historical trends in the philosophy of science, starting with positivism, expanding into empiricism, and
then leading to post-positivism. The impacts of each of these trends on research in operations (which
remains largely in the positivist mode) are described. Discussion of the importance of a plurality of
research methods concludes the section.
A framework for research paradigms is then developed based on two key dimensions of research
methodologies: the rational versus existential structure of the research process and the natural versus
artificial basis for the information used in the research. These dimensions are then further explored in
terms of thirteen characteristic measures. Next, research methodologies commonly used in other fields as
well as operations are described in reference to this framework. Methodologies include those traditional
to operations such as normative and descriptive modeling, simulation, surveys, case and field studies as
well as those more common to other fields such as action research, historical analysis, expert panels,
scenarios, interviewing, introspection, and hermeneutics. Examples from operations or allied fields are
given to illustrate the methodologies.
Past research publications in operations are plotted on the framework to see the limitations of our
current paradigms relative to the richness of other fields. We find that operations methodologies tend to
Manuscript received November 15, 1988; accepted December 22, 1989, after two revisions.
*University of Cincinnati, Cincinnati, OH 45221-0130
**American University, Washington, D.C. 20016
INTRODUCTION
The field of operations, or operations management (OM)†, faces multiple new research
challenges in the areas of service operations, productivity, quality, technology and many other
areas. Never before has the need for pragmatic research, directly useful to the operations
manager, been so important to the field, and to industry and society. One perspective on this is
offered by Galliers and Land (1987) in reference to information systems, but just as applicable to
operations: “[It is] an applied discipline, not a pure science. It follows, therefore, that if the
fruits of our research fail to be applicable in the real world, then our endeavors are relegated to
the point of being irrelevant.” Yet, OM researchers still tend to employ a limited range of
research paradigms to address the new challenges in operations.
A part of the reason is historical. Originally, the research orientation of operations was
entirely pragmatic: What procedures should be used in what situations? The presentation in
early textbooks on production (e.g., Mitchell (1939)) focused on the organization and
transformation process through a combination of descriptive and prescriptive discussion
(Andrew and Johnson (1982)). The unit of analysis was the production manager and the
definition of production management centered around what the production manager did. For
example, in explaining why plant location was left out of his book, Mayer (1962, p. v) stated:
“. . . there is a reason to believe that the production manager will play a relatively minor role in these areas of decision making.”
Then in the 1950s the Ford (Gordon and Howell (1959)) and Carnegie (Pierson (1959))
Foundations’ reports severely criticized business colleges for their lack of rigor or a scientific
approach to business education and research (Laidlaw (1988)). Operations research (OR) was
moving from war applications into the business and industrial arena and with it, the opportunity
for business schools to gain academic respectability. OR quickly developed into a favorite tool
(e.g., Holt, Modigliani, Muth, and Simon (1960)) for conducting research in operations (Buffa
(1965, p. v.), Nistal (1979-80); Andrew and Johnson (1982)). It also allowed, for the first time,
the development of a systematic body of knowledge in operations based on a consistent and
rigorous framework.
Although useful to numerous areas of business, the predominant application of OR was to the
area of operations (Buffa (1968, p. 4)). Marketing, finance, and organizational behavior also
used the new tool but found it somewhat limited in its applicability to their problems (Hudson
and Ozanne (1988)). Ackoff (1979, p. 94), for example, remarked that the OR approach “came
to be identified with the use of mathematical models and algorithms rather than the ability to
formulate management problems, solve them, and implement and maintain their solutions in
turbulent environments.” Marketing and organizational behavior, in particular, developed a
number of other paradigms drawn heavily from the fields of psychology and sociology to address
their research problems.
But OM researchers were having great success with the new algorithmic modeling tools and
found no need to explore other paradigms (Andrew and Johnson (1982)). The new area of
operations research/management science (OR/MS) was steadily replacing the function of
As described by the following critics, past research in operations has too frequently exhibited
a variety of shortcomings:
1. Narrow instead of broad scope
• Focused on problems with a narrow scope (Buffa (1980))
• Largely micro-oriented (Chase (1980))
• Concerned a subsystem rather than a whole system (Buffa (1980))
• Even in the few studies using real-world settings, the research approaches were characterized by one-day visits, interviews, and the use of questionnaires (T. Hill (1987)).
In sum, it appears that OM research has failed to be integrative, is less sophisticated in its
research methodologies than the other functional fields of business, and is, by and large, not very
useful to operations managers and practitioners. Operations is an applied field and its research
should be usable, in some fashion, in practice. It is not, like management science or
organizational behavior, a tool area but a functional discipline. As Voss (1984, pp. 29, 30) notes:
“. . . the production/operations management person is concerned with procedure and process . . . as well as . . . linking operating decisions and policies with company policies and the decisions, technologies, and procedures they should adopt to maximize company competitiveness.” It is worth noting that this extends far beyond the normative perspective of management
science; it includes exploration and interpretation of procedures and processes which may not be
embedded in a rational, single-objective, or value-free context. The decision environment of the
real-world operations manager is not usually driven just by quantitative elements susceptible to
mathematical modeling: where do politics, law, ethics, or the environment fit in?
From the very first issue of the Journal of Operations Management, authors have called for
research that examines the unstructured real-world problems of operations practitioners,
considers multiple evaluation criteria, recognizes both the inter- and intrarelationships of
organizational units, and incorporates both parametric and nonparametric statistical tests (Buffa
(1980), Chase (1980)). But such prescriptions are rarely accompanied by an overview of the
systematic and methodological changes in research techniques that will support these forays into
new research settings. We hope to provide this overview here.
Several researchers have addressed the content problems in their proposals for a new OM
research agenda. For example:
• Miller and Graham (1981) called for an integrative view of operations’ role in organizations
under the broad categories of operations policy, control, productivity, and services. Manufac-
turing strategy, the role of the customer in service delivery systems, and the effects of new
technologies on operations policies were mentioned as key issues in the agenda.
• As an extension to Miller’s agenda, Groff and Clark (1981) called for a wider array of research
FIGURE 1
THE ONGOING CYCLE OF RESEARCH STAGES
Description. Descriptive research seeks to report and chronicle elements of situations and
events. As noted previously, the predominant activity of early research in operations was
descriptive. The approaches and techniques available for capturing this information depend on
the field and on the nature of the situations and events of interest. The result is a well-
documented characterization of the subject of interest. This characterization then may be used
for generating or testing theories, frameworks, and concepts regarding the situation. For
example, Meredith (1984) describes the complications that arose in the simple process of
attempting to purchase a copying machine for a university department and Heller (1951)
describes investment decisions in general.
A finer, more detailed level of description about a particular facet of the subject may require
what is sometimes known as exploratory research. Here, a particular aspect is investigated more
fully, based on the understanding that the preliminary descriptive research gave. This
understanding may have illuminated areas of confusion, unearthed contradictions in previous
concepts or “facts” about the situation, or given further meaning to areas of interest or to
existing knowledge. The result of the exploratory research is more detailed description that may
lead to further insight and understanding. Miles (1983) argues that such qualitative data are
attractive for many reasons: “They lend themselves to the production of serendipitous findings
and the adumbration of unforeseen theoretical leaps.”
A number of areas in operations need realistic descriptions. Examples include MRP and shop
floor control systems, new manufacturing technologies, operational problems relating to new
technologies and systems, what operations managers’ jobs consist of, and even the organization-
al decision-making process concerning the adoption of new operational imperatives such as JIT,
TQC, FMS, CIM.
Explanation. On the basis of, or in the process of producing, a description, some initial
concepts about the situation may be postulated. Perhaps some action-reaction or cause-effect
relationships may be inferred. Or possibly a more complex set of reactions or relationships may
be constructed to explain the observed behavior or events. If a complex, relatively closed set of
relationships appears to be operating, a “framework” may be constructed to explain the
dynamics of the situation. A framework offers a conceptual frame of reference to help
researchers design specific research studies, interpret existing research, and generate testable
hypotheses. One example in operations is Saladin’s (1984) conceptual model of the scope of the
field.
At a more integrative level and with further testing, the framework or sets of frameworks may
be developed into a theory describing the principles operating in the situation. There are many
definitions of “theory” such as “a set of general principles that explain observed facts,” but
more importantly, Dubin (1969) has identified a number of characteristics typical of all theories:
a theory must include the interrelationships between its variables and/or attributes as well as
some criteria that define its boundaries. The theory must also improve our understanding of the
non-unique phenomenon or help us make predictions about it. Finally, the theory must be
interesting (Davis (1971)), that is, non-trivial.
Note that, as Scriven (1962) argues, a prediction is not the same as an explanation; the former
can be inferred from correlation but the latter has to address the underlying causal structure of
the theory. How can research activity in a field be conducted without the researcher having an
understanding of what it means to “explain”? Studies complaining about the lack of “usable”
research in operations lead to a fundamental issue: not what research topics need investigation,
but what research perspective we should hold.
Scriven concludes that explanation is “a topically unified communication, the content of
which imparts understanding of some scientific phenomenon” (1962, p. 224). A description that
does not explain, although research, is incomplete. A prediction based on constructs that cannot
be explained, such as a crystal ball, is magic.
Hospers (1956) presents three common interpretations of the explanation of a phenomenon.
These may be summarized as:
(1) Stating the phenomenon’s goal or purpose. Research in operations strategy often gets
trapped here. Apart from the limited number of case studies, few researchers have devoted time
and effort to disseminate any knowledge about the resultant overall impact when a firm develops
and implements an operations strategy. Most arguments here are purposive: A firm should
develop an operations strategy because its purpose is sacrosanct. Similar arguments are used for
a number of other initiatives like inventory reduction.
A research paradigm is a set of methods that all exhibit the same pattern or element in
common. However, there are a number of dimensions on which research activity may be
classified. For example, it may be classified according to the technique used to gather the data
(model, literature, survey, observation, interview, experiment, laboratory, etc.), the methods
used to analyze the data (statistics, protocol analysis, taxonomy), the immediate purpose of the
research (exploration, description, evaluation, hypothesis generation, hypothesis testing), the
nature of the units of analysis (individuals, groups, processes), the duration/time points of data
collection, and so forth.
Though limited, there have been other frameworks offered for classifying research paradigms.
Beged-Dov and Klein (1970), for example, classify research in management science in terms of
formalism or empiricism, Reisman (1988) categorizes the range of management and social
science research strategies in terms of a Venn diagram-type of framework, and Chase (1980)
offers a matrix framework for classifying the research conducted in operations. A more generic
and comprehensive framework, similar in that sense to the framework constructed by Mitroff
and Mason (1984) for business policy, is presented by Paulin, Coffey, and Spaulding (1982) for
the field of entrepreneurship. Here we present a generic framework for a classification of
paradigms based on the framework generated by Mitroff and Mason (1984).
In discussing the underlying metaphysical assumptions inherent in business policy research,
Mitroff and Mason specify two key dimensions that shape the philosophical basis for research
activity. We have redefined their two dimensions, illustrated in Figure 2, to better fit operations.
The first is the “rational/existential” dimension, which concerns the nature of truth and whether
it is purely logical and independent of man or whether it can only be defined relative to
individual experience. The second dimension is “natural/artificial” and concerns the source
and kind of information used in the research.
This dimension relates to the epistemological structure of the research process itself. It
involves the benefits and limitations of the philosophical approach taken to generating
knowledge; that is, the viewpoint of the researcher. At one extreme is rationalism, which uses a
formal structure and pure logic as the ultimate measure of truth. At the other extreme is
existentialism, the stance that knowledge is acquired through the human process of interacting
with the environment. Thus, in existentialism an individual’s unique capabilities, in concert with
the environment, are regarded as the basis of knowledge. The former conforms to the traditional
deductive approach to research; the latter to an inductive approach.
Our view of the rational/existential dimension includes four generic perspectives that
structure the research by different degrees of formalism. These four perspectives, in order of
degree of formal structure, are axiomatic, logical positivist/empiricist, interpretive, and critical
theory. We explain these briefly, using examples from operations.
The axiomatic perspective represents the theorem-proof world of research. A high degree of
knowledge is assumed, a priori, about the goals and the socio-technical structure of the
organization. The key organizing concepts are the presence of formal procedures (e.g., lot
sizing), consensus, consistency of goals (such as cost minimization), and a work place ideology
characterized by scientific management principles. Operations research (OR) studies tend to fall
in this category, such as the many variations of the economic order quantity model. Additionally,
[Figure 2: the vertical axis runs from rational to existential; the horizontal axis runs from natural (direct observation, people’s perceptions) to artificial (reconstruction).]
This second dimension concerns the source and kind of information used in the research. At
the natural end of the continuum is empiricism (deriving explanation from concrete, objective
data), while at the artificial end is subjectivism (deriving explanation from interpretation and
artificial reconstruction of reality). The progression from natural to artificial on this dimension
Figure 3 presents the two dimensions we established in the last section, with the meth-
odologies available to researchers placed in their appropriate cell(s). Note that some meth-
odologies logically could fall into a number of cells, or relate to only one of the dimensions. Also
some methodologies can fairly easily be used for any of the three stages of research (description, explanation, or testing), and these are occasionally pointed out in passing. For example, case studies can describe, explain, or disprove a hypothesis.
The methods listed in this figure are described briefly below. Methodological references are
FIGURE 3
A FRAMEWORK FOR RESEARCH METHODS
[Figure: the research methods placed in the framework cells; legible residue includes axiomatic modeling and descriptive modeling.]
People’s Perceptions
At this point we begin to consider those methodologies that rely on determining people’s
perceptions of object reality. These first two fall in the logical positivist/empiricist cell because
of their high rationality.
Structured interviewing. This method contrasts with field and case studies in that observation
is limited to the interview process and transcripts. The main reason for personal interviewing is
to control the situation and responses, thereby aiding uniformity in analysis. The results may
then be systematically analyzed through non-quantitative means or subjected to intensive
statistical analysis to identify factors, clusters, and other such relationships in a statistically
significant way. In structured interviewing, a fixed format is followed for the interview and the
details of every answer are carefully noted as the interview proceeds. All questions are the same
so that the typically constrained answers (check marks, values on a given scale) can be
compared across interviews, situations, plants, and so forth.
Some examples of this method are the interviews conducted by London, Stevenson and
Holmes (1987) on issues relating to manufacturing strategy, those conducted by Miller and
Toulouse (1986) with CEOs relating their personality to their firm’s strategy and structure, and
those conducted by Bourgeois and Eisenhardt (1988) of the top managers and decision makers of
firms competing in a high-velocity competitive environment. This latter study was particularly
well-designed in that the structured interviews were only one of the multiple methods used in
triangulating to ascertain the strategic decision processes unique to this industry.
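Because every interview follows the same fixed format, the constrained answers can be tabulated and compared mechanically before any deeper statistical analysis. A minimal sketch (the plant names and 1-5 ratings below are hypothetical, purely for illustration):

```python
from statistics import mean

# Hypothetical structured-interview data: each manager rates the same
# three statements on a 1-5 scale, so answers line up across interviews.
responses = {
    "plant_a": [[4, 5, 2], [5, 4, 3], [4, 4, 2]],
    "plant_b": [[2, 3, 5], [1, 2, 4], [2, 2, 5]],
}

def question_means(interviews):
    """Column-wise means: the average rating each question received
    across all interviewees at one site."""
    return [mean(col) for col in zip(*interviews)]

profiles = {site: question_means(data) for site, data in responses.items()}
# Identical questions make these per-question profiles directly
# comparable across plants; with larger samples the same matrices
# could feed factor or cluster analysis, as noted above.
```

The point is the uniformity: nothing in this tabulation would work if each interview had asked different questions.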
Artificial
At this point we move to the methods that are based primarily on artificial reconstructions of
the object reality. The first three tend to be most rational and thus are classified in the axiomatic
cell.
Reason/logical deduction/theorem proving. The approaches in this category all involve
rigorous, logical analysis that can be followed and replicated by other researchers. They differ
from simple introspective reflection because of both their formal rigor and their external, rather
than perceptual, focus. That is, introspective reflection is an attempt to evaluate one’s own
perceptions of a phenomenon whereas with these methods, a model of the object reality is
constructed. This model can be composed of a set of reasoned or logical statements or logico-
mathematical theorems or formulas. This method is frequently used to construct models that are
embodied in software for expert and decision support systems and in mathematical models of
operational systems. Examples are so common as to be unnecessary (see further discussion in
next section).
Normative analytical modeling. This approach is conducted at a relatively macro level with
simple, closed-form mathematical representations. The model is used to produce a prescriptive
result, typically through iteratively applying the analytical equations until some desirable value,
usually of one dependent variable, is achieved. Examples include linear programming and
inventory models (guaranteed optima) or the CRAFT layout algorithm (no guaranteed
optimum). In some cases, multiple dependent variables may be considered, as with goal
programming and scoring models.
Again, examples of these methods are common (see next section). But as Ijiri (1975, p. 28)
notes: “Goal assumptions in normative models or goals advocated in policy decisions are often
stated purely on the basis of one’s conviction and preference, rather than on the basis of inductive
study of the existing system.”
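As a concrete instance of a closed-form normative model, the basic economic order quantity (the inventory model cited earlier) can be sketched in a few lines; the demand and cost figures are hypothetical:

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Closed-form optimum Q* = sqrt(2*D*S/H): the lot size that
    minimizes annual ordering plus holding cost."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

def total_cost(q, demand, order_cost, holding_cost):
    """Annual ordering cost plus annual holding cost at lot size q."""
    return (demand / q) * order_cost + (q / 2) * holding_cost

# Hypothetical inputs: 1200 units/year, $50 per order, $6/unit/year.
q_star = eoq(1200, 50.0, 6.0)
# The model is normative: it prescribes q_star, and any other lot
# size yields a higher total cost.
```

Note how the model embodies exactly the goal assumptions Ijiri warns about: cost minimization is taken as given rather than derived from study of the existing system.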
Descriptive analytical modeling. This method also is conducted at a macro level with
relatively simple, closed-form mathematical representations. However, unlike normative model-
ing, this method is used to simply describe, by mathematical emulation, the actual workings of a
real-world system counterpart. For example, rather than representing a complex organizational
system by a linear program for the purpose of determining the system’s maximum productivity, a
discriminant function might be constructed instead. The plethora of quantitative models used in
operations-queuing theory, location analysis, inventory theory, and so on-are representative
of this method and examples are unnecessary.
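Queuing theory, cited above as representative, illustrates the descriptive stance: the model emulates how a system behaves without prescribing anything. A minimal M/M/1 sketch (the arrival and service rates are hypothetical):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state measures of a single-server queue with Poisson
    arrivals and exponential service times (M/M/1)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = arrival_rate / service_rate        # server utilization
    l_q = rho ** 2 / (1 - rho)               # mean number in queue
    w_q = l_q / arrival_rate                 # mean wait, via Little's law
    return {"utilization": rho, "queue_length": l_q, "wait": w_q}

# Hypothetical work center: 8 jobs arrive per hour, 10 served per hour.
m = mm1_metrics(8.0, 10.0)
```

Unlike the normative case, the output is a characterization of the system's behavior, not a recommended decision.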
Prototyping. The next four methods (prototyping, physical modeling, lab experimentation, and simulation) employ the logical positivist/empiricist tradition rather than the axiomatic
The majority of research on operations has been restricted to the northeast corner of the
research framework, as illustrated in Figure 4. This was determined by reviewing the operations
publications in Management Science, Decision Sciences, and Journal of Operations Management
for 1987 and 1977 (JOM began publication in 1981 and thus was unavailable for comparison in
1977). The review consisted of classifying the methodologies employed and plotting them on the
framework of Figure 3.
As illustrated by the percentages shown in Figure 4, 93% of all the articles fell in the
“artificial” paradigm and 91% fell in the highly rationalist paradigms of “axiomatic” or
“logical positivist/empiricist.” It is also interesting to note the changes over the years.
Management Science has increased its percentage of axiomatic types of operations publications
from 60% in 1977 up to 70% in 1987, while Decision Sciences has decreased theirs from 82% to 58%.
However, Management Science is apparently more receptive than Decision Sciences to the more
“naturalistic” and “interpretive” paradigms and these percentages have remained relatively
stable over the decade. Interestingly, Journal of Operations Management has a different
paradigmatic thrust, with a minority of their publications being axiomatic (33%), though still
FIGURE 4
DISTRIBUTION OF JOURNAL ARTICLES ON OM TOPICS
[Figure: journal articles plotted on the framework, with cell percentages (one cell totals 62%). Article counts: Management Science (MS) 25, 26; Decision Sciences (DS) 17, 12; Journal of Operations Management (JOM) 15; total 95.]
The inescapable conclusion is that our research in operations is still overwhelmingly artificial
in nature, though breaking the methodological tie with the field of management science has
allowed us to begin moving toward more existential (primarily interpretive) paradigms and to
move away from the more rationalistic, “scientific” paradigms (both axiomatic and logical
positivist/empiricist). We believe that a much stronger movement toward naturalistic paradigms
(especially direct observation via case, action, and field studies) and existential (primarily
One rich area within operations for discussing the application of the different paradigmatic
research stances is quality management. There are two primary reasons. The first is that quality
management recently has become a primary concern and even strategic thrust of many business
organizations. Secondly, the concept of the quality of a product or a service is multi-dimensional
and, to some extent, nebulous in definition. The accepted definition concerns “fitness for use”;
yet evolving social values and perceptions continually redefine the interpretation of this phrase.
Much difficulty in conducting research in quality is due to how products and, especially,
services are perceived. Abstract, subjective, and commercial characteristics (e.g., a restaurant’s
“atmosphere” or its staff’s “courtesy”) can only be measured through people’s perceptions.
Moreover, each person may define these characteristics differently. Because the basic definition
of quality involves the degree of consumer satisfaction with a product or service, people’s
perceptions of the major components of quality provide a basis for quality assessment. The
notion of quality is created by a combination of such diverse attributes and thus, diverse research
methods may also be required (Alexander (1988)).
We first consider the application of the natural-artificial dimension of our framework for
research in quality management. The three major paradigms include direct observation,
determining people’s perceptions, and artificially reconstructing object reality.
Quality characteristics presumed inherent in the product or service (defects, durability) are,
by and large, directly observable. Field experiments and field studies are appropriate for
analyzing these characteristics of a product because the variables are clear and potentially
subject to control. For the more contextually-defined or situation-dependent quality characteris-
tics (such as aesthetic appeal), case studies and action research are required because of their
deeper analysis of context.
When assessing perceptions, methods such as surveys and structured interviews provide an
empirical approach to studying quality. But significant progress in this area may require more
interpretive research. Here, intensive interviewing, expert panels, scenario analysis, and even
introspective reflection could be valuable for uncovering basic attitudes, perhaps even
subconscious feelings, about what constitutes quality. Reflection offers the possibility of
merging the more rational/empirical quality concepts with the interpretive findings so as to move
to a higher level of understanding of the concept of quality.
Most of the existing research pertaining to quality has been conducted at the artificial end of
the scale through the methods of modeling, laboratory experimentation, and simulation. It might
be noted here that other “direct, observable” characteristics such as reliability and main-
tainability are, in practice, typically measured through artificial reconstructions of the object
reality. For example, a system reliability of 0.99 either is a statistical estimate or a guess based
on the hypothetical reconstruction of past experience. However, conceptual modeling for the
purpose of interpretation has been rare.
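The point that a reliability of 0.99 is a statistical reconstruction rather than an observation can be made concrete. Under an assumed exponential failure law, the figure is built from past failure times; the data below are hypothetical:

```python
import math

def mission_reliability(failure_hours, mission_hours):
    """Estimate MTBF from observed times-to-failure, then compute
    R(t) = exp(-t / MTBF) under an assumed exponential failure law.
    The result is a reconstruction of past experience, not a
    directly observed property of the system."""
    mtbf = sum(failure_hours) / len(failure_hours)
    return math.exp(-mission_hours / mtbf)

# Hypothetical test data: four units failed at these ages (hours).
r = mission_reliability([900.0, 1100.0, 1300.0, 700.0], mission_hours=10.0)
# r comes out near 0.99: the familiar figure is a model-based estimate.
```

Change the assumed failure law or the sample, and the "observable" 0.99 changes with it, which is precisely why such characteristics belong at the artificial end of the dimension.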
The same kind of arguments might be made for the topic of technology implementation. We
will only briefly sketch some of these points here.
Research on the implementation of new technologies is an area that lends itself particularly
well to the use of the interpretive paradigm. Previous operations studies on implementation have
been mainly in the positivist/empiricist mode and employed surveys and field studies (e.g., see
Ettlie (1988) Nutt (1986), White et al. (1982), Leonard-Barton and Kraus (1985)). Rogers
(1983) and Voss (1988) argue that the study of implementation must spread over the life span of
the implementation process. Action research, where the researcher is involved with other parties
in the analysis, planning, and execution of the implementation process, could help significantly
in improving OM researchers’ understanding of the implementation process (Warmington
(1980)). And case studies, particularly longitudinal, would allow detailed exploration of
unfolding processes in real time and avoid the frequently distorted responses of surveys and
structured interviews.
And since the implementation of new technology usually spans some years and involves many
participants, historical analysis could be used to capture the context of the social, political,
economic, and cultural issues that invariably come into play during such an extended process.
This might even allow the development of a framework of recurring patterns in implementation
processes. And also because of the extended time span of the implementation, Delphi and
scenario analyses would be feasible and possibly desirable to anticipate normally unexpected
developments.
ENDNOTES
†Throughout this paper we follow the example of the other functional business fields of finance, marketing, and human resources and drop the unnecessary “management.”
REFERENCES
1. Abernathy, W.J., and J.E. Corcoran. “Relearning from the Old Masters: Lessons of the American System of
Manufacturing.” Journal of Operations Management, vol. 3, no. 4, August 1983, 155-167.
2. Ackoff, R. “The Future of Operational Research is Past.” Journal of the Operational Research Society, vol. 30,
no. 2, 1979, 93-104.
3. Agar, M.H. Speaking of Ethnography. Sage University Paper Series on Qualitative Research Methods, vol. 2.
Beverly Hills, CA: Sage Publications, 1986.
4. Alexander, C.P. “Quality’s Third Dimension.” Quality Progress, July 1988, 21-23.
5. Amoako-Gyampah, K., and J.R. Meredith. “The Operations Management Research Agenda: An Update.”
Journal of Operations Management, vol. 8, no. 3, 1989, 250-262.
6. Anderson, J.C., R.G. Schroeder, E.M. White, and S.E. Tupy. “MRP: The State of the Art.” APICS Monograph.
Falls Church, VA: American Production and Inventory Control Society, Inc., Fall 1980.
7. Anderson, J.C., N.L. Chervany, and R. Narasimhan. “Is Implementation Research Relevant for the OR/MS
Practitioner?” Interfaces, vol. 9, no. 3, May 1979, 52-56.
8. Andrew, C.G., and G.A. Johnson. “The Crucial Importance of Production/Operations Management.” Academy of
Management Review, vol. 7, no. 1, 1982, 143-147.
9. Antill, L. “Selection of a Research Method.” In Research Methods for Information Systems, E. Mumford and R.
Hirschheim (eds.). Amsterdam: North-Holland, 1985.
10. Argyris, C., R. Putnam, and D.M. Smith. Action Science: Concepts, Methods, and Skills for Research and
Intervention. San Francisco: Jossey-Bass, 1985.
11. Babbie, E.R. Survey Research Methods. Belmont, CA: Wadsworth, 1973.
12. Basu, A., and R.G. Schroeder. “Incorporating Judgments in Sales Forecasts: Application of the Delphi Method at
American Hoist & Derrick.” Interfaces, vol. 7, no. 3, May 1977, 8-27.
13. Beged-dov, A.G., and T.A. Klein. “Research Methodology in the Management Sciences: Formalism or
Empiricism.” Operational Research Quarterly, vol. 21, no. 3, 1970, 311-326.
14. Benbasat, I., D.K. Goldstein, and M. Mead. “The Case Research Strategy in Studies of Information Systems.”
MIS Quarterly, vol. 11, no. 3, Sept., 1987, 369-386.
15. Benson, P.G., A.V. Hill, and T.R. Hoffmann. “Manufacturing Systems of the Future: A Delphi Study.”
Production and Inventory Management, vol. 23, no. 3, Third Quarter 1982, 87-106.
16. Bonoma, T.V. “Case Research in Marketing: Opportunities, Problems, and a Process.” Journal of Marketing
Research, vol. 22, no. 2, May 1985, 199-208.
APPENDIX
THE HISTORY OF RESEARCH AND POST-POSITIVIST THOUGHT
This appendix briefly reviews the historical and philosophical development of research thought. It also describes a
number of attributes of modern post-positivist thinking. It then concludes with a discussion of the need to employ a
plurality of research methods in operations. More detailed discussion on these topics can be found in Meredith, Raturi,
Kaplan, and Amoako-Gyampah (1988) or the indicated references.
The history of research, scientific inquiry in particular, covers three main periods: the era of the positivists and the
associated development of the scientific method, the era of the empiricists and the application of new methodologies to
the social sciences because the positivist approach was found wanting, and the post-positivist era, which includes the
present. By limiting our discussion to these three approaches, we ignore a number of other philosophies pertaining to
metaphysics and epistemology, some of which occurred within the same periods. That is, we are following only one
philosophical thread here.
Also, positivism, our first topic, was not the first philosophy but developed as a reaction to other philosophies. And in
its turn, empiricism was only one of the alternatives posed to positivism. We only briefly sketch these eras to give a
foundation for the discussion on research paradigms.
Auguste Comte (1798-1857) was a sociologist who initiated deliberations on positivist thought. The term “positive”
refers to positive, or “observable,” data; that is, sensory experience of external, objective reality (Martineau (1896)).
The positivist approach holds that only empirically verifiable or analytic propositions are meaningful (see, for example,
Hempel (1965)).
One theme that has consistently emerged from historical developments in the philosophy of science is that the methods
of the physical sciences (largely positivist) are only a subset of the methods for providing explanation and conducting
research. Yet, as Beged-dov and Klein (1970) and Lee (1987) point out, the positivist approach remains the cornerstone
for conducting research in management science. As Klein and Lyytinen (1985, p. 136) note: “The scientific method
turns into scientistic orthodoxy when it entails a commitment to [the belief that reality exists independently of the
researcher, language, and culture; the empirical-analytical method is the only valid approach to research; and that
scientism applies not only to the domain of the so-called exact (viz. physical and mathematical) sciences, but also to
those of all other fields, in particular the study of human behavior.] It has found its most extreme implementation in the
practice of Management Science as manifested in most of the TIMS publications and likewise outlets.”
Researchers in the social sciences in the 19th century encountered significant problems trying to operate under these
guidelines. Difficulties arose in reconciling laws of behavior with human free will. More significantly, there were severe
problems trying to predict social events based upon multiple causal events through the use of the then-popular tool of the
physical sciences: the laboratory experiment. Recognizing the need for a new research paradigm, philosophers, at the
behest of social scientists, again addressed the basic question of what constitutes a scientific explanation and what does
not.
A number of philosophies were proposed as alternatives to positivism. Here we follow the antecedents of positivism
to a later development, the work of John Locke (1632-1704) and David Hume (1711-1776) on empiricism. Empiricism
is the doctrine that all knowledge is based on experience, or obtained through the senses. The power of empiricism
derives from “predictiveness.” Friedman (1974) argues that the causal structure leading to “true” predictions may rest
upon false assumptions, but we are not bothered by it as long as the predictions remain true. He argues that across
scientists and across different time periods, certain phenomena are considered self-explanatory, or “natural.”
Explanation consists of relating other phenomena to these natural phenomena.
The primary methodology of empiricism is through rejection of the hypothesis (Popper (1963)). Because confirmation
cannot be empirically established, hypotheses have to be set so that they can be rejected, or “falsified.” Popper
proposed that research activity is the method of the “irrational rationals”; one is looking for statements to reject. Lee
(1987) notes, in addition, that the discipline of statistics has “distanced itself from the notion of induction.” Statisticians
are careful to point out that the action of increasing the sample size does not increase the probability that the observation
is true; rather, it increases the researcher’s level of confidence.
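This statistical point can be illustrated numerically: enlarging a sample narrows the confidence interval around an estimate, so the researcher's confidence grows, but the underlying hypothesis is never thereby "confirmed." The following sketch (not from the original paper; the population parameters and sample sizes are arbitrary illustrative choices) computes the 95% confidence-interval half-width for a sample mean at increasing sample sizes:

```python
import math
import random

def ci_half_width(data, z=1.96):
    """Approximate 95% confidence-interval half-width for a sample mean."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)  # sample variance
    return z * math.sqrt(var / n)

random.seed(0)
# A hypothetical population with mean 100 and standard deviation 15.
population = [random.gauss(100, 15) for _ in range(100_000)]

widths = []
for n in (30, 300, 3000):
    sample = random.sample(population, n)
    widths.append(ci_half_width(sample))

# The interval shrinks roughly as 1/sqrt(n): confidence in the estimate
# increases, but no sample size establishes that the estimate is "true."
assert widths[0] > widths[1] > widths[2]
```

The half-width falls by a factor of about sqrt(10) at each step, which is exactly the sense in which more data buys confidence rather than truth.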
Post-Positivist Approaches
The two primary research approaches in operations have been management science and statistics. As argued earlier,
the normative thinking implicit in the former and the problems of verification in the latter are problematic, especially for
operations systems that include people. Since socio-cultural and organizational issues are implicit in our definition of
operations as a field, researchers have to deal with the criticisms of the positivist/normative and empiricist approaches.
While much has been written about problems with the positivist/normative approach in operations (e.g., Ackoff
(1979), Lee (1987)), there are problems with empiricism for the field of operations also. One is the limits of the
stimulus-response paradigm of laboratory experimentation and other empirical research modes, such as surveys, in producing
usable observations. We are not commenting here on experiments done in artificial settings, or with convenience
samples; this is the accepted practice of current research activity in most areas of management. Rather, we are concerned
with the much more stringent and dangerous assumption that social beings respond to a stimulus (e.g., the alternatives in
a questionnaire) in a finite number of ways that can be captured by experiments or instruments.
Also, because we seek universal laws concerning operations, we tend to do research using transhistorical or
transcultural generalizations. Recent research efforts concerning manufacturing systems in different nations (e.g.,
Whybark and Rho (1988)) conclude that it is difficult to discover laws or principles that are valid across time or cultures.
Critical social theorists like Habermas (1971, 1979) (also see McCarthy (1978)) consider a researcher’s subjectivity and
a proposition’s cultural bias as significant research problems.
There are additional problems concerning observer neutrality. It is hard, or impossible, to study social systems from a
value-free perspective. Further, the repeatability of research findings is another positivist notion that presumes the