OPINION
Let’s make science metrics more scientific
To capture the essence of good science, stakeholders must combine forces to create an open, sound and
consistent system for measuring all the activities that make up academic productivity, says Julia Lane.
Summary
● Existing metrics have known flaws
● A reliable, open, joined-up data infrastructure is needed
● Data should be collected on the full range of scientists’ work
● Social scientists and economists should be involved

Measuring and assessing academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet current systems of measurement are inadequate. Widely used metrics, from the newly fashionable Hirsch index to the 50-year-old citation index, are of limited use¹. Their well-known flaws include favouring older researchers, capturing few aspects of scientists’ jobs and lumping together verified and discredited science. Many funding agencies use these metrics to evaluate institutional performance, compounding the problems². Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.

The dangers of poor metrics are well known, and science should learn lessons from the experiences of other fields, such as business. The management literature is rich in sad examples of rewards tied to ill-conceived measures, resulting in perverse outcomes. When the Heinz food company rewarded employees for divisional earnings increases, for instance, managers played the system by manipulating the timing of shipments and pre-payments³. Similarly, narrow or biased measures of scientific achievement can lead to narrow and biased science.

There is enormous potential to do better: to build a science of science measurement. Global demand for, and interest in, metrics should galvanize stakeholders (national funding agencies, scientific research organizations and publishing houses) to combine forces. They can set an agenda and foster research that establishes sound scientific metrics: grounded in theory, built with high-quality data and developed by a community with strong incentives to use them.

Scientists are often reticent to see themselves or their institutions labelled, categorized or ranked. Although happy to tag specimens as one species or another, many researchers do not like to see themselves as specimens under a microscope: they feel that their work is too complex to be evaluated in such simplistic terms. Some argue that science is unpredictable, and that any metric used to prioritize research money risks missing out on an important discovery from left field. It is true that good metrics are difficult to develop, but this is not a reason to abandon them. Rather, it should be a spur to basing their development in sound science. If we do not press harder for better metrics, we risk making poor funding decisions or sidelining good scientists.

Clean data
Metrics are data driven, so developing a reliable, joined-up infrastructure is a necessary first step. Today, important but fragmented efforts such as the Thomson Reuters Web of Knowledge and the US National Bureau of Economic Research Patent Database have been created to track scientific outcomes such as publications, citations and patents. These efforts are all useful, but they are labour intensive and rely on transient funding, some are proprietary and non-transparent, and many cannot talk to each other through compatible software. We need a concerted international effort to combine, augment and institutionalize these databases within a cohesive infrastructure.

The Brazilian experience with the Lattes Database (http://lattes.cnpq.br/english) is a powerful example of good practice. It provides high-quality data on about 1.6 million researchers and about 4,000 institutions. Brazil’s national funding agency recognized in the late 1990s that it needed a new approach to assessing the credentials of researchers. First, it developed a ‘virtual community’ of federal agencies and researchers to design and develop the Lattes infrastructure. Second, it created appropriate incentives for researchers and academic institutions to use the database: the data are referred to by the federal agency when making funding decisions, and by universities when deciding tenure and promotion. Third, it established a unique researcher identification system to ensure that people with similar names are credited correctly. The result is one of the cleanest researcher databases in existence.

On an international level, the issue of a unique researcher identification system is one that needs urgent attention. There are various efforts under way in the open-source and publishing communities to create unique researcher identifiers using the same principles as the Digital Object Identifier (DOI) protocol, which has become the international standard for identifying unique documents. The ORCID (Open Researcher and Contributor ID) project, for example, was launched in December 2009 by parties including Thomson Reuters and Nature Publishing Group. The engagement of international funding agencies would help to push this movement towards an international standard.

Similarly, if all funding agencies used a universal template for reporting scientific achievements, it could improve data quality and reduce the burden on investigators. In January 2010, the Research Business Models Subcommittee of the US National Science and Technology Council recommended the Research Performance Progress Report (RPPR) to standardize the reporting of research progress. Before this, each US science agency required different reports, which burdened principal investigators and rendered a national overview of science investments impossible. The RPPR guidance helps by clearly defining what agencies see as research achievements, asking researchers to list everything from publications produced to websites created and workshops delivered. The standardized approach greatly simplifies such data collection in the United States. An international template may be the logical next step.

Importantly, data collected for use in metrics must be open to the scientific community, so that metric calculations can be reproduced. This also allows the data to be efficiently repurposed. One example is the STAR METRICS (Science and Technology in America’s Reinvestment: Measuring the Effects of Research on Innovation, Competitiveness and Science) project, led by the National Institutes of Health and the National Science Foundation
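As an aside on how much a single-number metric compresses away, the Hirsch index mentioned above reduces an entire publication record to one integer: the largest h such that a researcher has h papers each cited at least h times. A minimal sketch of that definition (the function name and sample citation counts are illustrative, not from any real database):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:  # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Two hypothetical careers with the same total citations can differ sharply:
print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 2, 1, 1, 1]))  # 2
```

Note that the second researcher’s single highly cited paper barely registers, while slowly accumulating citations inflate the index over a career, which is one reason the measure favours older researchers.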
© 2010 Macmillan Publishers Limited. All rights reserved
NATURE|Vol 464|25 March 2010 OPINION