
Public Administration Research and the Emergence of Evidence-Based Public Management

Scott E. Robinson
Bush School of Government and Public Service
Institute for Science, Technology, and Public Policy
Texas A&M University
srobinson@bushschool.tamu.edu

April 8, 2011

Abstract

The performance management movement of the 1990s and 2000s has put public managers in the position of having to generate and analyze a broad range of data in their organizations, often with highly constrained resources. Inspired in part by the emerging literature on evidence-based practices in health, education, and business management, this paper proposes a framework for evidence-based public management. The paper reviews the experiences with evidence-based practice in medicine, education, and private-sector management, and concludes by drawing out the lessons those experiences offer for the development of evidence-based public management.

The phrase “evidence-based” is a buzzword in contemporary public policy, with all the risk of triteness and superficiality that buzzword status conveys (Rousseau 2006).

1 Introduction

Public management scholarship has focused a great deal of attention over the past decades on performance measurement and performance management. Implicit in this strategy is the notion that public managers should collect performance information and use these data as the basis for decision-making on issues ranging from budget allocations to personnel assessment. Public management, though, is not the only field that has engaged issues related to the collection and use of data in professional practice.

In this paper, I explore the experiences of three other practice-oriented disciplines with evidence-based approaches to education, research, and practice: medicine, education, and business management. These experiences reveal the opportunities present within such efforts, as well as the challenges they encounter. Following this review, I discuss the particular needs of a potential evidence-based public management to adapt to the conditions of public management education, research, and practice.

2 The Varied Experiences with Evidence-based Practice
Practice-oriented fields face particular needs related to teaching, research, and practice: in the normal course of their work, they confront the specific problems encountered in practice. In this way, public management more closely resembles fields like social work, education, and medicine than the academic fields more often associated with public management, such as political science, sociology, and economics.

Many of these practice-oriented fields have recently experienced movements to increase attention to the evidentiary basis of the practices they teach and promote. This section reviews the experiences of three such fields, ranging from medicine (to which the origin of the term “evidence-based” is commonly traced) to education and private-sector management.

2.1 Evidence-based Medicine


While the notion of basing medical practices on evidence is not entirely a recent innovation, a specific movement advocating evidence-based medicine emerged in the 1990s to challenge what it perceived as lax educational and research standards within medicine. Studies of medical practice (including interviews with and surveys of practicing physicians) revealed that few practitioners were aware of the evidentiary basis (or its absence) of the practices they were using. Practitioners tended to rely on the techniques taught to them in school, sometimes decades prior, along with what they had discovered in their own practice. As a result, many patients were treated with strategies that were essentially untested outside of personal, anecdotal experience. Inspired by these reports, David Sackett and colleagues began to promote evidence-based medicine as a discipline of rigorous testing of treatment strategies and continuing education for medical professionals.
Sackett, Rosenberg, Gray, Haynes & Richardson (1996) stated:

Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice.... By best available external clinical evidence we mean clinically relevant research, often from the basic sciences of medicine, but especially from patient centered clinical research into the accuracy and precision of diagnostic tests (including the clinical examination), the power of prognostic markers, and the efficacy and safety of therapeutic, rehabilitative, and preventative regimens.

The advocates of evidence-based medicine rallied to this and similar statements to call for reforms of medical research, teaching, and practice.

A study of medical practice in that time period revealed what has come to represent a significant split within the evidence-based medicine community. Research based on the care of 109 patients hospitalized in a university medical service found that just over half (53%) of the treatments were based on randomized controlled trials (RCTs, considered the gold standard for evidence-based medicine) and that a further 29% were based on “convincing non-experimental evidence” (Ellis, Mulligan, Rowe & Sackett 1995). These results could be read as a glass half full or half empty. On the half-empty reading, only half of the treatments were based on the preferred type of evidence, and the remaining treatments rested on evidence that met neither standard. Alternatively, critics of the evidence-based medicine movement used the same results to suggest that contemporaneous practice was already fairly well grounded in evidence and that radical action was not necessary.
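For concreteness, the arithmetic implied by the Ellis et al. figures, assuming the two reported categories are mutually exclusive (an assumption the study’s framing suggests but which I flag here), works out as follows:

\[
53\% \;(\text{RCT-based}) + 29\% \;(\text{convincing non-experimental}) = 82\%, \qquad 100\% - 82\% = 18\%.
\]

That is, just under a fifth of the observed treatments rested on neither standard of evidence.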
Critics point out that the focus on RCTs, though downplayed in the definition of evidence-based medicine, is a natural by-product of the effort to aggregate and standardize the evidentiary basis of medical practice. Feinstein and Horwitz (1997) argued that

Despite the broad range of information permitted when EBM (evidence-based medicine) is practiced, the evidence collected for EBM itself is confined almost entirely to randomized trials. Because meta-analysis can aggregate and evaluate but not change the basic information, the RCTs themselves become the fundamental source both for the data and for the scope of topics contained in the EBM collection.

This limitation of focus to RCTs has been the subject of a great deal of criticism. In public health, researchers are concerned that the prioritization of RCTs discourages the use of other research strategies that may be better suited to other research areas. Public health scholars, for example, argue that RCTs are an inappropriate method for testing large-scale interventions (Victora, Habicht & Bryce 2004). In such large studies, randomization is often impractical or unethical. Furthermore, the interventions common in public health research are not as easily adapted to RCTs as clinical trials of specific therapeutic treatments. Large-scale interventions may involve complex treatments that defy the simple logic of single-administration interventions like a double-blind test for a new pill.

The controversy over evidence-based medicine continues. While there is broad agreement that the aspirations of evidence-based medicine are admirable and desirable within specific domains of medicine, controversy remains over the proper domain of evidence-based medicine and over how to integrate external research with individual clinical experience. The reliance on RCTs also suggests that broad appeals for evidence can be followed by narrow devotion to a small number of preferred methodological strategies.

2.2 Evidence-based Education


Education research, a professional field that more closely resembles public administration in many important ways, has also engaged in a wide-ranging debate over the emergence of evidence-based education (EBE). In 2001, Grover Whitehurst, then Assistant Secretary for the Office of Educational Research and Improvement (OERI), made a widely cited presentation calling for the development of EBE practices (Smith 2003). In this presentation, Whitehurst argued that future educational innovations should be based on rigorous scientific research and that this research should be a precondition for funding education programs. This demand for “scientifically based research” to ground education programs was then written into the No Child Left Behind Act.
In his original presentation, Whitehurst advocated a hierarchy of research designs. He argued that not all evidence is created equal: some research designs provide a more reliable foundation for practice than others. Accordingly, he offered a priority list of research designs (in decreasing order of reliability):

1. Randomized trial
2. Quasi-experiment, including pre- and post-testing
3. Correlation study with statistical controls
4. Correlation study without statistical controls
5. Case studies

Whitehurst noted, like Sackett above, that EBE should include the integration of external research studies with the personal experience of practicing educators. However, Whitehurst argued, current practices were informed mostly by personal, anecdotal experience, with very few educational interventions based on what he considered to be sound evidence. This hierarchy of methodology has been taken to heart by funding agencies and legislators in the crafting of recent education reforms.
Critics argue that the preference for randomized trials and experimental research is poorly suited to education research. Education, critics argue, is a domain of policy quite different from medicine. In medicine, there is consensus on what constitutes success, and that definition has been stable over time. In education, the very definition of what a quality education entails is the subject of continued debate.[1] In policy areas where we simultaneously test reform proposals, be they of curriculum, classroom management, or administrative organization, while also debating the meaning of quality education, experimental tests become less simple than tests of alternative pharmaceutical therapies. Biesta (2007) asserted:

[E]ducation cannot be understood as an intervention or a treatment because of the noncausal and normative nature of educational practice and because of the fact that the means and ends of education are internally related. This implies that educational professionals need to make judgments about what is educationally desirable. Such judgments are by their very nature normative judgments.... [T]o suggest that research about “what works” can replace such judgments not only implies an unwarranted leap from “is” to “ought,” but also denies educational practitioners the right not to act according to evidence about “what works” if they judge that such a line of action would be educationally undesirable (20).

Biesta thus makes problematic the very logic of “what works.” To Biesta, education researchers have to confront the important question of what constitutes “working” before they can begin to employ experimental studies to compare alternative educational strategies.

[1] I am less convinced than many that the measures of performance are as clear and objective as many allege. However, it is clear that education is a domain where the measurement of successful education is deeply contested.

2.3 Evidence-based Management


While the debates over evidence-based medicine and evidence-based education raged within their respective disciplines, scholars in business schools began to see the discipline of evidence-based logic as a potentially useful palliative for problems within their schools.
Within the business research community, a long-standing concern is that business practitioners are particularly fond of management fads and popular texts. A former president of the Academy of Management proposed that:

Using evidence makes it possible for well-informed managers to develop substantive expertise throughout their careers as opposed to the faddish and unsystematic beliefs today’s managers espouse (Rousseau & McCarthy 2007).

Given the omnipresence of fads and untested propositions, it is no surprise that many management scholars saw merit in importing evidence-based approaches to create a body of evidence-based management knowledge. The proponents of evidence-based management have emphasized how such an approach requires a fundamental reorientation of research, practice, and education in management.

If taken seriously, evidence-based management can change how every manager thinks and acts. It is, first and foremost, a way of seeing the world and thinking about the craft of management; it proceeds from the premise that using better, deeper logic and employing facts, to the extent possible, permits leaders to do their jobs more effectively. We believe that facing the hard facts and truth about what works and what doesn’t, understanding the dangerous half-truths that constitute so much conventional wisdom about management, and rejecting the total nonsense that too often passes for sound advice will help organizations perform better (Pfeffer & Sutton 2006a).

Pfeffer & Sutton (2006b) provide a number of examples where an evidence-based orientation could have saved a great deal of consternation. One example to which they return repeatedly is the cyclical popularity of forced ranking evaluation systems. In these systems, evaluators decide in advance on the specific proportions of employees that will be judged high quality, average quality, and low quality. Evaluators are then asked to assign all evaluated employees into one of these three categories so that the overall proportions in each category match the initial design. An evaluator may be told, for example, that 20% of employees are “high performing,” and so the evaluator cannot list 25% of employees in that category. This is designed to avoid the Lake Wobegon effect, wherein all (or most) of the evaluated employees are “above average.”
The problem with such a system is that a design that says only 20% of employees can be “high performing” also means that 20% (or some other fixed share) must be “low performing.” Even in an organization with relatively few people who are actually performing poorly, the evaluator has to guess how many and assign people to this low category, as the sketch below illustrates. Furthermore, the role of cooperation in a business differs from that in an educational setting. As Pfeffer and Sutton (2006b) note, cooperation in educational evaluation is called cheating; cooperation may, however, be exactly what one wants within a business.
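To make the mechanics concrete, the sketch below illustrates a generic forced ranking scheme. It is a hypothetical illustration, not an implementation drawn from Pfeffer and Sutton: the employee names and scores are invented, and the 20/60/20 split is one common configuration rather than a fixed standard.

```python
# Hypothetical sketch of a forced ranking allocation. The category
# shares are fixed in advance, so the number of "low" performers is
# set by design rather than by observed performance.

def forced_ranking(scores, shares=(0.20, 0.60, 0.20)):
    """Label employees 'high'/'average'/'low' by rank to match fixed shares."""
    ranked = sorted(scores, key=scores.get, reverse=True)  # best first
    n = len(ranked)
    n_high = round(shares[0] * n)
    n_low = round(shares[2] * n)
    labels = {}
    for i, name in enumerate(ranked):
        if i < n_high:
            labels[name] = "high"
        elif i >= n - n_low:
            labels[name] = "low"
        else:
            labels[name] = "average"
    return labels

# Invented data: nine of ten employees perform well, yet the 20/60/20
# design still forces two of them into the "low" category.
scores = {f"emp{i}": s for i, s in
          enumerate([95, 94, 93, 92, 91, 90, 89, 88, 87, 40])}
print(forced_ranking(scores))
```

Even in this invented example, where nine of ten employees score at or above 87, the fixed shares force two of them into the “low” category: precisely the guessing problem described above.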
The specific problems with forced ranking evaluation systems are less important to the current topic than the system’s recurrence. Pfeffer and Sutton (2006b) note that this approach to evaluation has recently gained a great deal of attention within business circles as well as within education policy. However, the literature evaluating these systems from their prior period of popularity has gone unconsulted. Pfeffer and Sutton note that these systems were tried, and largely abandoned, by a number of companies after years of evaluating the system. Scholars had conducted research into the failure of these systems in their last period of popularity and built a body of knowledge related to the reform. The reform quietly died off only to be reborn over a decade later. The debate over the system, however, did not engage the existing evidence on the system’s success (or, in this case, failure). Instead, the debate focused on the plausibility of the system as a response to bias in other evaluation systems, and many businesses adopted forced ranking systems, to predictably little success.
The aspiration of the proponents of evidence-based management is to eliminate the cyclical character of business reforms by teaching managers to investigate the evidentiary basis of proposals. Had managers investigated the historical record of forced ranking evaluation, it would likely not have been as widely adopted in this most recent wave of popularity. More broadly, proponents of evidence-based management hope that management research can (albeit slowly) accumulate knowledge rather than revisit the same topics repeatedly.
Given the experiences with evidence-based medicine and evidence-based education, it is interesting to note the absence of specific methodological strictures within the evidence-based management movement. Whereas the prior evidence-based movements have codified specific methodological strategies, some of the most vocal proponents of evidence-based management have not voiced strong preferences for randomized trials, experimental methods, or advanced statistical models. This suggests that the methodological myopia criticized in other areas may not be inherent to evidence-based approaches.

3 Lessons for Evidence-based Public Management
This brief review of the experiences with evidence-based practices in medicine, education, and private-sector management provides a preview of what such a development may look like within public administration. We can see in these experiences the motivations for the emergence of evidence-based practice as well as the trajectory and institutionalization of these projects.

In this final section, I will discuss what public administration can learn from the experiences of these other fields with evidence-based practice.

3.1 The Integration of Practice and Research


The origins of contemporary evidence-based practice in medicine offer a compelling model for research, education, and practice in public administration. Recall that the original rhetoric surrounding evidence-based medicine involved the integration of external clinical evidence with individual clinical experience. The original arguments did not call for exclusive reliance on RCTs or any other specific methodology. Instead, the original call for evidence-based medicine emphasized the importance of individual grounded experience in addition to external clinical trials. Even in the case of education, where the rhetoric of grounded experience is less common, Whitehurst’s original presentation discussed an imbalance between practical experience and externally validated evidence, but called for a balance between the two.
Public administration should adopt this early model for evidence-based public management. Such a development could provide for a renewed engagement of public management scholarship with public management practice. The changes involved in such a reorientation may be more dramatic than they seem at first glance. It will not be enough to have practice-oriented work published alongside work that has a more distant connection to practice. The best research will be that which fully integrates practice and is deeply grounded within that practice. This will require one, seemingly simple, innovation: the development of a better understanding of the act (or acts) that constitute public management. Reading through the leading texts in public administration and public management often leads students to question what exactly it is that public managers do. Our research has to get closer to what public managers do if there is any hope that research will inform practice.

3.2 Public Management as Continuous Education


The development of evidence-based public management requires more than effort on the part of public management researchers and educators. Evidence-based public management requires a reorientation of public management practice toward a process of continuous education. Practicing managers can, and should, rely on their personal experience, but in combination with externally validated research evidence. In the end, it is up to the manager to accomplish this mixture of individual professional experience and external evidence.

Preparing managers for a world in which they mix personal and external evidence may require a change in educational practice in addition to the greater levels of engagement discussed in the previous section. We must train managers to be active producers and consumers of research. This may require some rethinking of which research skills are most needed for practicing managers and for the study of practical management problems. Practicing managers are less likely to need the skills to collect and analyze broad panel surveys than skills suited to the sorts of operational data available to them on the usage and quality of social services.

3.3 Avoiding Methodological Dogmatism


The chief danger we must avoid in developing an evidence-based public management is the potential for methodological dogmatism. In evidence-based medicine, a preference for RCTs emerged quickly, to the exclusion of a variety of research methodologies that may be of great importance to the large-scale interventions common in public health. Interestingly, these large-scale interventions are closer in nature to public management interventions than the clinical trials preferred by traditional evidence-based medicine. It is unrealistic to expect that randomized controlled trials will become the sole or even predominant basis for evidence in public management.

In education research, the dogmatism is more developed: the set of preferred methods is broader but still dogmatic. The strong preference for experiments or advanced statistical controls, to the exclusion of case study approaches, forecloses many important research subjects and breeds cynicism that evidence-based approaches are merely a cover for specific methodological preferences. In public management, case studies have provided, and continue to provide, important information about emerging topics. As an example, consider the emergence of collaborative public management. It was a series of case studies that motivated the original attention to the subject and disrupted prior assumptions about public management (Robinson 2007). Comparative case studies have served to further elaborate the nature of collaboration while differentiating types of collaborative networks (Agranoff 2007), among other subjects.
In evidence-based public management, we must retain an open mind about the range of appropriate research methodologies. The early experience with evidence-based management suggests that this is possible. With a more robust mixture of personal managerial experience and a broader range of relevant external evidence, evidence-based public management may take a much different track than in medicine or education. The key is the skillful matching of research design with specific research questions to inform specific managerial action.
The link between research and action is key to our professional identity. Ours is a field of applied research that educates people for practice. Our version of evidence-based practice must match these demands. While RCTs make sense within the context of testing easily differentiable medical interventions, they may not work as well to assess managerial practice in public organizations. We need to promote the careful design and conduct of research, particularly on the scale of the practicing public manager. We need to make research a part of the managerial task, and to make critical assessment of the evidentiary basis of a proposal part of managerial decision-making.
The case for an evidence-based public management that integrates individual managerial experience with externally conducted research is strong. However, we need to create an evidence-based public management that fits our field’s needs, not one that simply conforms to other fields’ standards of evidence.

References

Agranoff, R. 2007. Managing within Networks: Adding Value to Public Organizations. Georgetown University Press.

Biesta, G. 2007. “Why ‘What Works’ Won’t Work: Evidence-Based Practice and the Democratic Deficit in Educational Research.” Educational Theory 57(1):1–22.

Ellis, J., I. Mulligan, J. Rowe & D.L. Sackett. 1995. “Inpatient General Medicine Is Evidence Based.” Lancet 346:407–410.

Feinstein, A.R. & R.I. Horwitz. 1997. “Problems in the Evidence of Evidence-Based Medicine.” The American Journal of Medicine 103(6):529–535.

Pfeffer, J. & R.I. Sutton. 2006a. “Evidence-Based Management.” Harvard Business Review 84(1):62.

Pfeffer, J. & R.I. Sutton. 2006b. Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management. Harvard Business Press.

Robinson, Scott E. 2007. “A Decade of Treating Networks Seriously.” Policy Studies Journal 34:589–598.

Rousseau, D.M. 2006. “Is There Such a Thing as ‘Evidence-Based Management’?” Academy of Management Review 31(2):256–269.

Rousseau, D.M. & S. McCarthy. 2007. “Educating Managers from an Evidence-Based Perspective.” Academy of Management Learning and Education 6(1):84.

Sackett, D.L., W. Rosenberg, J.A. Gray, R.B. Haynes & W.S. Richardson. 1996. “Evidence Based Medicine: What It Is and What It Isn’t.” British Medical Journal 312:71–72.

Smith, A. 2003. “Scientifically Based Research and Evidence-Based Education: A Federal Policy Context.” Research and Practice for Persons with Severe Disabilities 28(3):126–132.

Victora, C.G., J.P. Habicht & J. Bryce. 2004. “Evidence-Based Public Health: Moving Beyond Randomized Trials.” American Journal of Public Health 94(3):400.
