4, 2010, 315–328,
doi: 10.1093/ijtj/ijq023
Editorial Note
Colleen Duggan
Acknowledging and atoning for the systematic abuse of human rights and for violations of international humanitarian law is arguably one of the most controversial,
complex and unpredictable processes undertaken by governments and citizens in
societies transitioning from a violent past. As many scholars and practitioners
have noted, the implementation of transitional justice policies and mechanisms
has become increasingly entrenched in the response of the world community to
peacebuilding and social reconstruction. Yet, how do we know whether transitional
justice works? For those with a stake in the outcomes of transitional justice, the
impact question has come to occupy a position of singular importance. Concern is
growing that the enthusiasm of transitional justice proponents and the eagerness of
the international aid community to finance transitional justice mechanisms have
not been accompanied by commensurate efforts to evaluate the effects that these
interventions are having on the lives of the intended beneficiaries. Does transitional justice help or hinder democratic transition? Does it heal or hurt victims?
Does it contribute to reconciliation or just wake old ghosts?
Over some 20 years of transitional justice efforts across countries and continents,
an evolution has occurred in our understanding and thinking about type, timing,
political and social consequences, local approaches and the relationship of judicial
mechanisms to democratic consolidation. With this maturation comes a call for
greater scrutiny. The world over, individuals involved in transitional justice (government officials, victims, perpetrators, ordinary citizens, scholars and those
in the international aid community) are beginning to ask hard questions about
the evidence base that speaks in favor of or against the place that transitional justice
occupies in the public policy arena.
But what counts as evidence in the world of public policy? In my remarks, I
am unapologetically single-minded in my perspective on what constitutes good
evidence. Over the last two decades, I have been an activist, a researcher and a funder
of research for development. As such, my worldview is shaped by my perception
of how research should serve the cause of human rights and social and economic
justice. As an emerging field of study and practice, in many ways transitional justice
is still finding its conceptual and political feet. However, what is abundantly clear
is that because transitional justice mechanisms aspire to catalyze processes of deep
social change at the global, national and local levels, transitional justice, by its very
nature, dwells in the realm of politics and public affairs. When seen through this
lens, a strong case can be made that research on transitional justice theory and
practice is not a simple question of scholarship for the ivory tower. It should be
useful, accessible and able to inform public agendas, particularly for those who will
live with the legacy of transitional justice once policy makers predictably relocate
their interest and resources to new frontiers.
In this special issue of the International Journal of Transitional Justice, which we
have entitled 'Transitional Justice on Trial: Evaluating Its Impact,' we will delve
more deeply into the question of how the field might be evaluated and what constitutes good evidence in transitional justice research and practice. Having spent
considerable time over the years following the literature and policy debates around
transitional justice, I am convinced that transitional justice knowledge production cannot, indeed should not, be separated from knowledge dissemination,
translation and utilization.
One of the most notorious failures to follow evidence was the South African government's delaying
of the implementation of antiretroviral therapy. It has been estimated that this led to over 300,000
deaths that would likely otherwise have been averted, even given the difficulties of saving lives with
therapy. Kevin Kelly, 'The Vagaries of Research and Evaluation Influence in the HIV/AIDS Field,'
in Evaluating Research in Violently Divided Societies (forthcoming).
See, David Mendeloff, 'Truth-Seeking, Truth-Telling, and Postconflict Peacebuilding: Curb the
Enthusiasm?' International Studies Review 6(3) (2004): 355–380; Eric Brahm, 'Uncovering the
Truth: Examining Truth Commission Success and Impact,' International Studies Perspectives 8(1)
(2007): 16–35; Jack Snyder and Leslie Vinjamuri, 'Trials and Errors: Principle and Pragmatism
in Strategies of International Justice,' International Security 28(3) (2003): 5–44; Erin Daly, 'Truth
Skepticism: An Inquiry into the Value of Truth in Times of Transition,' International Journal of
Transitional Justice 2(1) (2008): 23–41.
For a more in-depth discussion, see, Leslie Vinjamuri and Jack Snyder, 'Advocacy and Scholarship
in the Study of International War Crime Tribunals and Transitional Justice,' Annual Review of
Political Science 7 (2004): 345–362.
Development evaluation stems from the field of evaluation science and is a generic
term for evaluations conducted in developing countries, usually focused on the
effectiveness of international aid projects, programs and organizations.7
Applied research and development evaluation have many similarities and a few
differences. Both rely on social science methods and examine multiple facets of a
problem, often using multimethod approaches. Both also collect and analyze data
in order to come to conclusions, and both utilize theory to inform work. While the
two processes of inquiry share more commonalities than differences, two important
features set evaluation apart: judgment, or valuing, and use. The primary purpose
of evaluation is to amass sufficient information to allow an evaluator to assess the
value or worth of something against a set of criteria. Evaluation is not just about
collecting and analyzing information; it is supposed to use data to make evaluative
judgments. These are fed back to a client in order to assist management and decision
making. Without this additional valuing, an evaluation is only a research project
that may increase knowledge but does not help in decision making.8
P. Cristian Gugiu, 'What Is Evaluation and How Does It Differ from Research?' (paper presented at
the annual conference of the American Evaluation Association, Evaluation 2006: The Consequences
of Evaluation, Portland, OR, 1–4 November 2006).
This suggests a need to exercise caution and good sense in viewing transitional justice interventions
as experiments. Papers in this special issue underscore the importance and validity of using multiple approaches to evaluating impacts: experimental, quasi-experimental or non-experimental;
qualitative, quantitative or mixed method.
Deborah M. Fournier, 'Evaluation,' in Encyclopedia of Evaluation, ed. Sandra Mathison (Thousand
Oaks, CA: Sage Publications, 2005).
Michael Quinn Patton, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (New York: Guilford Press, 2010).
I am grateful to Rick Davies for this observation.
Why should we care about the seemingly subtle differences between applied
research and evaluation? In our discussion, these differences matter because they
define how researchers and evaluators are tackling the challenges of assessing
transitional justice impacts. Each seeks to answer evaluative questions about these
impacts for slightly different reasons. An understanding of the strengths and limitations of each field of activity can bring us closer to untying some of the knots
of transitional justice as a hard-to-measure area of inquiry and practice. This is
why it is both positive and timely that analysts of transitional justice bring the two
fields of research (evaluation science and applied social science) closer together
for greater learning.
As evidenced in this and in previous issues of IJTJ, over the past few years applied
researchers have shown an increasing interest in making use of the principles and
methods emerging from development evaluation to inform and improve applied
research.9 Part of this interest stems from the reality that a number of researchers
and advocates of transitional justice are finding themselves in the position of
acting as evaluators of international aid projects in support of transitional justice.
Similarly, researchers and advocates can find themselves in the uncomfortable
position of being evaluation subjects, that is, of having their research or advocacy
efforts questioned by donors and treated as a unit of analysis in which quality and
impacts are assessed or measured against criteria or standards that oftentimes feel
at odds with original intentions or hypotheses.10
See, Phuong Pham and Patrick Vinck, 'Empirical Research and the Development and Assessment of
Transitional Justice Mechanisms,' International Journal of Transitional Justice 1(2) (2007): 231–248.
Hence my call for a new 'research for development' framework that prioritizes use and assesses research excellence in terms of conceptual/methodological quality, relevance and contextual
actionability.
In the social sciences, interpretivists believe that we should be concerned not only with quantifying
what actually happens in social phenomena but also with providing an interpretation of events and
phenomena in terms of how the people involved understand their own experience. Positivists believe
that scientific method is the best approach to uncovering the processes by which both physical and
human events occur, and that the only authentic knowledge is that based on sense experience and
positive verification.
Hugo van der Merwe, 'Delivering Justice during Transition: Research Challenges,' in Assessing the
Impact of Transitional Justice: Challenges for Empirical Research, ed. Hugo van der Merwe, Victoria
Baxter and Audrey R. Chapman (Washington, DC: United States Institute of Peace Press, 2009).
factor here is that transitional justice has widened to envelop a host of social justice
goals that are usually ascribed to international development,13 thus multiplying
the menu of theories that need to be tested.
Given this state of affairs, there are good reasons for ensuring that when we
discuss impact, we are clear about which theory is being interrogated and who
decided that this theory is the most relevant one to pursue in the first place. In
the aid world, this means that transitional justice projects and programs should
be underpinned by theory-based evaluation.14 As its name implies, theory-based
evaluation can uncover the difficulties or deficiencies in the original theory underpinning a program's or project's logic. In an ideal world, transitional justice
programming would be accompanied by deeper empirical research. However, as
this is most often not the case and as transitional justice projects generally unfold
in limited time frames (for example, an average truth commission may run two
or three years), theory-based evaluation offers short-term possibilities for clarifying some of the most important questions that inform failure or success (for
example, transitional justice for what and for whom).
Transitional justice is also almost always the result of political compromise and
seldom reflects the ideal state of justice. Yet, in transitional justice research and
practice, mechanisms are almost always measured against someone's ideal concept
of justice. Culture, ideology and politics also muddy the waters. Funders and those
being funded often have very different perceptions of what constitutes justice.
Most unfortunately, the record of development evaluation indicates that it is often
the donor that determines the broad parameters around what transitional justice
success or failure looks like. This is not always a good thing; in some cases, donors
have been complicit in human rights abuse legacies and may be keen to whitewash
their own reputations. Recipients of transitional justice programs (governments,
nongovernmental organizations (NGOs), victims' groups) may have good reason
to distrust bilateral or multilateral donors. In such contexts, external evaluation
can understandably be viewed as an extension of repressive tactics. In order to
level the playing field and offset power imbalances, donors should consider using
a combination of external and internal evaluation. Any judgment of the success or failure of a
program in support of transitional justice must pay heed to the equally important
imperatives of vertical accountability to the donor and horizontal accountability
to a wider base of civil society stakeholders.
Similarly, those being evaluated often view the parachuting of external evaluators
into socially volatile environments as an imposition and a liability to be managed,
as they assume that any negative findings will be taken up and exploited by political
adversaries of transitional justice. It is not uncommon to come across this same
See, Special Issue: 'Transitional Justice and Development,' International Journal of Transitional
Justice 2(3) (2008).
See, Carol H. Weiss, 'Theory-Based Evaluation: Past, Present, and Future,' in Progress and Future
Directions in Evaluation: Perspectives on Theory, Practice, and Methods, ed. Debra J. Rog and Deborah
Fournier (San Francisco, CA: Jossey-Bass, 1997).
See, Paul Gready, 'Reasons to Be Cautious about Evidence and Evaluation: Rights-Based Approaches
to Development and the Emerging Culture of Evaluation,' Journal of Human Rights Practice 1(3)
(2009): 380–401.
There are notable exceptions and in some agencies, practice is moving toward greater transparency
through public access.
Michael Bamberger, Jim Rugh and Linda Mabry, RealWorld Evaluation: Working Under Budget,
Time, Data and Political Constraints (Thousand Oaks, CA: Sage Publications, 2005).
Terry Smutylo, 'Outcome Mapping: A Method for Tracking Behavioural Changes in Development
Programs,' Institutional Learning and Change Brief 7 (August 2007).
'Process use refers to individual changes in thinking, attitudes and behavior, and program or
organizational changes in procedures and culture that occur among those involved in evaluation
as a result of the learning that occurs during the evaluation process.' Michael Quinn Patton,
Utilization-Focused Evaluation (Thousand Oaks, CA: Sage Publications, 2008), 155. Process use in
evaluation is akin to the benefits ascribed to action research.
process that renews strained relationships and builds strong local organizations.
At the end of the day, it is local actors who need to be convinced of the impacts of
transitional justice and be in possession of the necessary analytical skills to make
these sorts of judgments.
While applied researchers can encounter constraints around time, resources
and political conditions, in development evaluation, these factors are experienced
tenfold. As mentioned, evaluation has a client who wants to know something and
is used by funders to make real-time management decisions that are often linked
to project or program funding and continuity. Brandon Hamber, Liz Ševčenko and
Ereshnee Naidu's article on evaluating sites of conscience in Italy, Bangladesh and
Chile and Patrick Vinck and Phuong Pham's article outlining their evaluation of the
International Criminal Court's outreach program in the Central African Republic
provide useful portrayals of the tensions that evaluators face in finding evidence
of results for a client, as well as around measuring short or intermediate outcomes
versus long-term impacts: the everyday realpolitik of development evaluation.
The sorts of long-term impacts that transitional justice mechanisms hope to bring
about (for example, rebuilding civic trust, creating a human rights culture or
generating empathy between former adversaries) are the result of years and years
of investment and can take generations. Not surprisingly, Hamber et al. lament
that showing impacts on attitudinal change should be satisfactory in itself, while
acknowledging that in times of transition, institutions that have clear political
content must be seen to be influencing or effecting wider social change.
This gaping chasm between outcomes20 and impacts21 suggests that transitional
justice is an area where applied social science research and development evaluation
could collaborate more effectively. As David Backer's article on victims' attitudes in
South Africa demonstrates,22 longitudinal research that tracks changing attitudes
and perceptions across time can be particularly useful for understanding if and
how transitional justice makes a difference to victims. Of course, the difficulty
lies in making the leap from cognitive change (an outcome that is not readily
evident) to change that suggests forward movement toward greater impact. What
victims say and what they do are often two different things. The world of development evaluation may be able to make a contribution here. Outcome mapping,23 a
planning, monitoring and evaluation methodology that uses behavior change as a
key indicator of social change and was specifically designed to track the effects of
research, has been used by a limited number of actors in the field of transitional
'The likely or achieved short-term and medium-term effects of the products, capital goods and
services of a development intervention.' Organisation for Economic Co-operation and Development, Development Assistance Committee, Glossary of Key Terms in Evaluation and Results-Based
Management (2002).
'Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.' Ibid.
See also, James L. Gibson, Jeffrey Sonis and Sokhom Hean, 'Cambodians' Support for the Rule of
Law on the Eve of the Khmer Rouge Trials,' International Journal of Transitional Justice 4(3) (2010):
377–396.
Sarah Earl, Fred Carden and Terry Smutylo, Outcome Mapping: Building Learning and Reflection
into Development Programs (Ottawa: International Development Research Centre, 2001).
justice.24 Embedding empirical research initiatives into mainstream transitional justice programs that are funded by the international aid community might also offer
new opportunities for elucidating how transitional justice works (not just if it
works), for improving its effectiveness and for moving away from the templatization of the processes, a trend that is worrying many scholars and advocates and
is noted in this issue by Thoms et al.
Good development evaluation should adhere to the universally accepted standards of utility, propriety, feasibility and accuracy.25 Herein lies one of the root
problems of evaluating transitional justice. As a field that brings together multiple
disciplines and is still under conceptual construction, transitional justice can and
does fall victim to poor-quality evaluation. The regulation of evaluation as an
area of professional practice is still in its infancy, with heated discussions about
the pros and cons of accreditation taking place in evaluation associations in the
global North and South. Some argue that accreditation would boost the prestige
of the profession and help weed out unethical and incompetent evaluators. The
counterargument is that accreditation would be one more means of shutting out
evaluators from the developing world. There is a certain ring of truth in this perspective, as evaluation as a field of research and practice was born and has grown
up in the North. With notable exceptions, development evaluation is dominated
by professionals from North America and Europe (few university-accredited programs in evaluation are available elsewhere), which raises serious questions about
who decides what counts as good evidence and whose values shape an evaluation.
Good evaluators who understand what transitional justice is, let alone who can
untangle and measure the field's multilayered and often complex goals, are in
short supply. In this sense, building the field of evaluation in the global South is as
urgent a priority as the need to build a cohort of transitional justice researchers in
countries of the developing world.
There is also a problem of vocabulary and dialogue between applied researchers
and development evaluation specialists. As I read through the articles and peer
reviews submitted for this issue, I was fascinated to see how often individuals of
diverse academic and professional backgrounds accused authors of using jargon
and unintelligible language (a tendency I hope I have not compounded with this
editorial). This was most marked in articles written by development evaluators and
reviewed by nonevaluators. As is the case with all fields, development evaluation
has generated a particular language that can be unclear or inaccessible to those
less familiar with this field.26 This is not to say that evaluators should have to
Experiences have been mixed, however, and I am not suggesting that outcome mapping is appropriate for tracking the effects of all transitional justice interventions. See, Colleen Duggan, 'Show
Me Your Impact: Evaluating Transitional Justice in Contested Spaces,' Evaluation and
Program Planning (forthcoming).
These are the program evaluation standards adopted by the American Evaluation Association.
Similar standards exist in the other regional and national evaluation associations in both the
developing and the developed world.
I am not the first to remark upon this issue. The American Evaluation Association's New Directions
for Evaluation journal devoted an entire special issue to the question of language in evaluation.
Special Issue: 'How and Why Language Matters in Evaluation,' New Directions for Evaluation 86
(2000).
Michael Ignatieff, 'Articles of Faith,' Index on Censorship 25(5) (1996): 113.
Ray Pawson, Evidence-Based Policy: A Realist Approach (Thousand Oaks, CA: Sage Publications,
2006).
Patton, supra n 6.
Concluding Remarks
These reflections are by no means an exhaustive examination of the promise and
pitfalls of evaluating transitional justice. Evaluation as a field of inquiry and practice
is as varied in its methods and ideologies as is social science research. It can be
empowering or punitive. It can be qualitative, quantitative or mixed method. There
is no perfect evaluation model or approach for evaluating transitional justice, only
choices that need to be made. This editorial seeks to open up the debate around the
myriad ways and multiple places in which evidence of the impact of transitional
justice can be found. That IJTJ received more high-quality contributions than could
be published in this special issue is testimony to the importance that scholars,
advocates and policy makers attach to evidence-based decision making in the
administration of international aid in the area of transitional justice.