Abstract
The performance management movement of the 1990s and 2000s has put public managers in the position of having to generate and analyze a broad range of data in their organizations, often with highly constrained resources. Inspired in part by the emerging literature on evidence-based practices in health, education, and business management, this paper proposes a framework for evidence-based public management. The paper reviews the experiences of medicine, education, and private-sector management with evidence-based practice and concludes with a discussion of the lessons those experiences offer for the development of evidence-based public management.
The phrase “evidence-based” is a buzzword in contemporary public policy, with all the risk of triteness and superficiality that buzzword status conveys (Rousseau 2006).
1 Introduction
Public management scholarship has focused a great deal of attention over the past decades on performance measurement and performance management. Implicit in this strategy is the notion that public managers should collect performance information and use these data as the basis for decision-making on issues ranging from budget allocations to personnel assessment. Public management, though, is not the only field that has engaged issues related to the collection and use of data in professional practice.
In this paper, I explore the experiences of three other practice-oriented disciplines with evidence-based approaches to education, research, and practice: medicine, education, and business management. These experiences reveal the opportunities present within some of these efforts, as well as the challenges these efforts encounter. Following this review, I discuss the particular need for a potential evidence-based public management to adapt to the conditions of public management education, research, and practice.
fields ranging from medicine (where the origin of the term “evidence-based” is commonly traced) to education and private-sector management.
to represent a significant split within the evidence-based medicine community. Research based on the care of 109 patients hospitalized in a university medical service found that just over half (53%) of the treatments were based on randomized controlled trials (RCTs, considered the gold standard for evidence-based medicine) and a further 29% were based on “convincing non-experimental evidence” (Ellis, Mulligan, Rowe & Sackett 1995). Proponents of evidence-based medicine could read this as a glass half-full or half-empty. Only half of the treatments were based on the preferred type of evidence, and the remaining treatments were not based on evidence that met even the standard of “convincing non-experimental evidence.” Alternatively, critics of the evidence-based medicine movement used the same results to suggest that contemporaneous practice was already fairly well grounded in evidence and that radical action was not necessary.
Critics point out that the focus on RCTs, though downplayed in the definition of evidence-based medicine, is a natural by-product of the effort to aggregate and standardize the evidentiary basis of medical practice. Feinstein and Horwitz (1997) argued that
The narrow focus on RCTs has drawn a great deal of criticism. In public health, researchers are concerned that the prioritization of RCTs discourages the use of other research strategies that may be better suited to particular research areas. Public health scholars, for example, argue that RCTs are an inappropriate method for testing large-scale interventions (Victora, Habicht & Bryce 2004). In such large studies, randomization is often impractical or unethical. Furthermore, the interventions common in public health research are not as easily adapted to RCTs as clinical trials of specific therapeutic treatments. Large-scale interventions may involve complex treatments that defy the simple logic of single-administration interventions like a double-blind test for a new pill.
The controversy over evidence-based medicine continues. While there is broad agreement that the aspirations of evidence-based medicine are admirable and desirable within specific domains of medicine, controversy remains over the proper domain of evidence-based medicine and over how to integrate external research with individual clinical experience. The reliance on RCTs also suggests that broad appeals for evidence can be followed by narrow devotion to a small number of preferred methodological strategies.
1. Randomized trial
2. Quasi-experiment, including pre- and post-testing
3. Correlation study with statistical controls
4. Correlation study without statistical controls
5. Case studies
Whitehurst noted, like Sackett above, that EBE should include the integration of external research studies with the personal experience of practicing educators. However, Whitehurst argued, current practices are informed mostly by personal anecdotal experience, with very few educational interventions based on what he considered to be sound evidence. The hierarchy of methodology has been taken to heart by funding agencies and legislators in the crafting of recent education reforms.
Critics argue that the preference for randomized trials and experimental research is poorly suited to education research. Education, critics argue, is a domain of policy quite different from medicine. In medicine, there is consensus on what constitutes success, and that definition has been stable over time. In education, the very definition of what a quality education entails is the subject of continued debate. In policy areas where we simultaneously test reform proposals – be they of curriculum, classroom management, or administrative organization – while also debating the meaning of quality education, experimental tests become less simple than testing alternative pharmaceutical therapies. Biesta (2007) asserted
Biesta problematizes the very logic of “what works.” To Biesta, education researchers have to confront the important question of what constitutes “working” before we can employ experimental studies to compare alternative educational strategies.
began to see the discipline of evidence-based logic as a potentially useful
palliative for problems within their schools.
Within the business research community, a long-standing concern is that
business practitioners are particularly fond of management fads and popular
texts. The former president of the Academy of Management proposed that:
employees are “high performing” and the evaluator can list 25% of employees in that category. This is designed to avoid the Lake Wobegon effect, wherein all (or most) of the evaluated employees are “above average.”
The problem is that a system that says only 20% of the employees can be “high performing” also means that 20% (or some other number) must be “low performing.” Even if you are in an organization with relatively few people who are actually performing poorly, you have to guess how many there are and assign people to this low category. Furthermore, the role of cooperation in a business differs from that in an educational setting. As Pfeffer and Sutton (2006b) note, cooperation in educational evaluation is called cheating. Cooperation may, however, be exactly what one wants within a business.
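The mechanics of a forced-ranking scheme can be illustrated with a short sketch. This is a hypothetical example, not drawn from Pfeffer and Sutton; the quota values and employee scores are invented for illustration:

```python
# Illustrative sketch of a forced-ranking ("forced distribution") evaluation.
# The quotas are hypothetical: the top 20% must be labeled "high" and the
# bottom 20% "low", regardless of anyone's absolute performance.

def forced_ranking(scores, high_quota=0.2, low_quota=0.2):
    """Assign categories purely by rank within the group, not by absolute score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    n = len(ordered)
    n_high = int(n * high_quota)
    n_low = int(n * low_quota)
    labels = {}
    for i, name in enumerate(ordered):
        if i < n_high:
            labels[name] = "high"
        elif i >= n - n_low:
            labels[name] = "low"
        else:
            labels[name] = "middle"
    return labels

# Even when every employee scores well in absolute terms,
# someone must still be labeled "low".
scores = {"ana": 95, "ben": 93, "cam": 92, "dee": 91, "eli": 90}
print(forced_ranking(scores))
```

The sketch makes the text's point concrete: with five strong performers and a 20% quota at each end, one of them must still be labeled “low” by construction.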
The specific problems with forced-ranking evaluation systems are less important to the current topic than the system's recurrence. Pfeffer and Sutton (2006b) note that this approach to evaluation has recently gained a great deal of attention within business circles as well as within education policy. However, the literature evaluating these systems from their prior period of popularity has gone unconsulted. Pfeffer and Sutton note that these systems were tried, and largely abandoned, by a number of companies after years of evaluating the approach. Scholars had conducted research into the failure of these systems in their last period of popularity and built a body of knowledge related to the reform. The reform quietly died off, only to be reborn over a decade later. The debate over the system, however, did not engage the existing evidence on the system's success (or, in this case, failure). Instead, the debate focused on the plausibility of the system as a response to bias in other evaluation systems, and many businesses adopted forced-ranking systems – to predictably little success.
The aspiration of the proponents of evidence-based management is to break this cycle of business reform fads by teaching managers to investigate the evidentiary basis of proposals. Had managers investigated the historical record of forced-ranking evaluation, it would not likely have been as widely adopted in this most recent wave of popularity. Instead, proponents of evidence-based management hope that management research can (albeit slowly) accumulate knowledge rather than revisit the same topics repeatedly.
Given the experiences with evidence-based medicine and evidence-based education, it is interesting to note the absence of specific methodological strictures within the evidence-based management movement. Whereas the prior evidence-based movements have codified specific methodological strategies, some of the most vocal proponents of evidence-based management have not voiced strong preferences for randomized trials, experimental methods, or advanced statistical models. This suggests that the methodological myopia criticized in other areas may not be inherent to evidence-based approaches.
grounded within that practice. This will require one, seemingly simple, innovation: the development of a better understanding of the act (or acts) that constitute public management. A reading of the leading texts in public administration and public management often leads students to question what it is, exactly, that public managers do. Our research has to get closer to what public managers do if there is any hope that research will inform practice.
In education research, the dogmatism is more developed and broader, but it is dogmatism nonetheless. The strong preference for experiments or advanced statistical controls, to the exclusion of case-study approaches, forecloses many important research subjects and breeds cynicism that evidence-based approaches are merely a cover for specific methodological preferences. In public management, case studies have provided, and continue to provide, important information about emerging topics. As an example, consider the emergence of collaborative public management. It was a series of case studies that motivated the original attention to the subject and disrupted the prior assumptions about public management (Robinson 2007). Comparative case studies have served to further elaborate the nature of collaboration while differentiating types of collaborative networks (Agranoff 2007), among other subjects.
In evidence-based public management, we must retain an open mind about the range of appropriate research methodologies. The early experience with evidence-based management suggests that this is possible. With a more robust mixture of personal managerial experience and a broader range of relevant external evidence, evidence-based public management may take a much different track than it has in medicine or education. The key is the skillful matching of research design to specific research questions to inform specific managerial action.
The link between research and action is key to our professional identity. Ours is a field of applied research that educates people for practice. Our version of evidence-based practice must match these demands. While RCTs make sense within the context of testing easily differentiable medical interventions, they may not work as well to assess managerial practice in public organizations. We need to promote the careful design and conduct of research – particularly on the scale of the practicing public manager. We need to make research a part of the managerial task, and critical assessment of the evidentiary basis of a proposal a part of managerial decision-making.
The case for an evidence-based public management that integrates individual managerial experience with externally conducted research is strong. However, we need to create an evidence-based public management that fits our field's needs – not through simple conformity to other fields' standards of evidence.
References
Agranoff, R. 2007. Managing within Networks: Adding Value to Public Organizations. Georgetown University Press.
Ellis, J., I. Mulligan, J. Rowe & D.L. Sackett. 1995. “Inpatient General Medicine Is Evidence Based.” Lancet 346:407–410.
Feinstein, A.R. & R.I. Horwitz. 1997. “Problems in the ‘Evidence’ of Evidence-Based Medicine.” The American Journal of Medicine 103(6):529–535.
Pfeffer, J. & R.I. Sutton. 2006b. Hard Facts, Dangerous Half-truths, and Total Nonsense: Profiting from Evidence-based Management. Harvard Business Press.
Victora, C.G., J.P. Habicht & J. Bryce. 2004. “Evidence-Based Public Health: Moving Beyond Randomized Trials.” American Journal of Public Health 94(3):400.