Christopher Francis
Human beings love to measure and compare all kinds of things, from cars to detergents.
We just don’t like doing it in the workplace. Apart from anything else, we suspect that it
will be used against us during our annual performance review, in the event of a
restructuring of the organisation or as part of a best value analysis, if you are in Victoria.
This would not be as difficult a problem for us if it were not for the fact that performance
measurement has been one of the drivers in public sector reform over the past twenty
years – if not longer. It is here to stay. For some commentators, such as Warren
McCann [2001:111], the battle to have a performance-driven public sector is largely won.
Whilst this might be true at the Federal and State levels, the same might not be so
confidently said about local government.
It should be remembered that, prior to 1995, the States had taken only a passing interest
in local government performance. There were exceptions - NSW for example had been
using a set of performance indicators to measure and compare councils since 1991 –
but there is no doubt that 1995 marks a turning point. Firstly, the concept of national
performance indicators was proposed at the Local Government Ministers’ Conference;
and secondly, National Competition Policy was introduced through the Competition
Principles Agreement.
It was, therefore, opportune for the Federal Government to direct the Industry
Commission to inquire into the feasibility of developing national performance indicators
for local government. The Commission [1997:33] concluded that a “nationally consistent
approach to performance measurement for local government is not warranted at this
time” because each State had a different view on its relationship with local government
and this affected their expectation of the level of performance information that local
government should provide. By and large, the situation has not changed dramatically
since 1997. As Aulich [1999:17] observed, the States accept that performance
information is necessary for local government reform, but choose to drive the reform at
arm’s length.
I might add, by way of an aside, that the UK Department of the Environment, Transport
and the Regions, in its Best Value Performance Indicators for 2001/2002, recognised the
need to have both national performance indicators and also local indicators, allowing local
authorities “to reflect local priorities and tailor best value to suit local
circumstances” [2001:12].
Of more interest was the Commission’s view that performance measurement “may be
somewhat confronting for some local governments, as it is for other organisations,
especially if it is seen as a beating stick” [1997:10]. Apart from the reservations of staff
about performance information, we should not forget that performance measurement
could be used by the Commonwealth or the States in allocating funds. Obviously a set of
national performance indicators would enable greater transparency and accountability at
this level, something perhaps not in the self-interest of local government.
Martin’s study of the attitudes of staff in 26 Victorian councils confirms the overall
position I have presented. Whilst this is an admirable goal, and one consistently touted
as the bedrock of successful performance-driven organisations [eg Eccles 1991; National
Performance Review 1997], it is hard to do.
On this point, the Western Australian Government, like many other public sector bodies,
is ambivalent. On the one hand, it states: “[where] possible, the strategic objective to
be achieved … should be linked to the performance criteria of a responsible officer”; on
the other, it cautions:
Despite the strong and desirable link between the high level objectives of
the organisation and the lower level objectives for individual staff
performance, overseas experience indicates that a high-level performance
measurement system should never be linked to staff salaries. [2001:2.13]
During the 1980s and 1990s, as private and public sector managers struggled to make
sense of the world, they began to seek a quick-fix solution. They began to fad surf, the
practice of “riding the crest of the latest management panacea and then riding out again
just in time to ride the next one; always absorbing for managers and lucrative for
consultants; frequently disastrous for organizations” [Shapiro 1998:xiii].
Enamoured with management theory and a plethora of techniques and strategies, the
public sector spent heavily on consultants and private sector philosophies.
Unfortunately, as Pollitt [1997] and Micklethwait and Wooldridge [1996:313-345] have
argued, the uncritical adoption of the new public management has been at the
expense of true reform and has resulted in widespread confusion, disillusionment and
waste.
Within this environment, it was inevitable that performance measurement would be seen
as a fad and this, in fact, is what was suggested as far back as 1992, when David
Corbett [1992:179] wrote: “Among the buzz-words of public sector management in the
1990s, none is more widely discussed than performance indicators.”
Performance measurement is here to stay. For all its difficulties and failures, and there
have been many, it is still the most powerful and reliable tool in our managerial kit. The
reality is that performance measurement is necessary for organisational and personal
success, or as John Lynch, of the WA Department of Local Government, writes: “Any
judgement of organisational success must be able to stand up to external scrutiny;
therefore, it must be objective, not intuitive.”
Or, more effusively, according to Osborne and Gaebler [1993:138-165]: “What gets
measured gets done. If you don’t measure results, you can’t tell success from failure. If
you can’t see success, you can’t reward it. If you can’t reward success, you’re probably
rewarding failure. If you can’t see success, you can’t learn from it. If you can’t recognise
failure, you can’t correct it. If you can demonstrate results, you are a winner!”
Critics of performance measurement point out that even the most extensive applications
of performance measurement in the public sector failed to make significant progress. For
the purpose of this paper I will focus on the Clinton Administration’s reform program from
1993-2000.
The major legislative plank of the reform program was the Government Performance
and Results Act 1993. The genesis for the legislation was threefold:
(1) Waste and inefficiency in Federal programs had undermined public confidence in the
Government and had reduced the Government's ability to address adequately vital
public needs;
(2) Federal managers were seriously disadvantaged in their efforts to improve program
efficiency and effectiveness because of insufficient articulation of program goals and
inadequate information on program performance; and
(3) Congressional policymaking, spending decisions and program oversight were seriously
handicapped by insufficient attention to program performance and results.
To redress these problems, the Act sought to:
(1) Improve the confidence of the American people in the capability of the Federal
Government by systematically holding federal agencies accountable for achieving
program results;
(2) Initiate program performance reform with a series of pilot projects in setting program
goals, measuring program performance against those goals, and reporting publicly
on their progress; and
(3) Improve program effectiveness and public accountability by promoting a new focus
on results, service quality, and customer satisfaction.
Another key aspect of the Act was the time-frame for achieving its aims. The
Administration recognised that the scope of the proposed reform would require considerable time.
For example, agencies were given until 30 September 1997 to submit a strategic plan
and until 31 March 2000 to submit a report on program performance for the previous
fiscal year.
So, was the GPRA a failure? As Pollitt [1997] has remarked, most public sector
management reform is more a matter of faith than carefully evaluated and proven
achievement and evaluation is usually couched in politically expedient terms that favour
the reformers. Bearing this in mind, throughout the reform process the Clinton
Administration and the Congress used the General Accounting Office as an independent
auditor of the program’s achievements. Let us briefly examine its findings over some
nine years.
• 1996 - the GAO [1996] verified that only 294 action items of the target 1,203
identified in the National Performance Review had been completed.
• 1997 - the GAO [1997] reviewed 27 agencies’ draft strategic plans, noting that a
significant amount of work remained to be done.
• 2001 – the GAO released its Performance and Accountability Series, looking at individual
agencies.
• 2001 – the GAO [2001] reported that its survey of 3,816 federal managers found that there
was wide variation in implementing the principles of the GPRA. However, an
encouraging note was that significantly more managers reported having performance
measures for their programs than previously.
Our work over the past several years has identified limitations in agencies’
abilities to produce credible program performance and cost data and
identify performance improvement opportunities. These limitations are
substantial and long-standing, and they will not be quickly or easily
resolved. [2000:23]
Not surprisingly, the Bush Administration believes that the reform program has been a
failure. President Bush asserts that government likes to begin things, but good
beginnings are not the measure of success. In a curious attack of Dubya English, he
[2001:3] stated that “[what] matters in the end is completion. Performance. Results.”
Despite its desire to distinguish itself from its predecessor, the Bush Administration still
relies on the GPRA.
The GPRA has been a defining piece of legislation, and the fact that it recognises the
considerable time required for genuine reform gives me some hope that during this
decade we will have hard evidence that what was started in 1993 has not been a failure. If
the largest bureaucracy in the world can slowly reform itself, there is hope for us.
Clearly then, within our tradition of government, there is an expectation that public
money will be accounted for. The question is, what does this actually mean? Taken at
face value, to account for the public’s money means just that, to produce an accurate
statement of expenditure. For this purpose the performance indicators we require are
financial.
In the aftermath of the Wall Street Crash of 1929, the public and political outcry forced the accounting profession to
develop more effective means of recording, analysing and reporting on a company’s
financial health. Implicit in the development of financial performance information was the
realisation that, for the economy to operate efficiently, there had to be a level of
confidence in the integrity of its operations.
3 Australian National Audit Office [1996].
4 See O’Faircheallaigh et al [1999:7ff] for a discussion of this.
Consequently, during the 1930s accounting theory and accounting standards developed
(eg the American Accounting Association’s A Tentative Statement of Accounting
Principles Affecting Corporate Reports of 1936). As well, federal legislation was enacted,
such as the Securities Act of 1933 and the Securities Exchange Act of 1934, covering initial
public offerings and the external reporting requirements of public companies [Miranti 2001].
However, even financial performance information tells us nothing about how the money
has been spent and this problem was recognised even in the private sector, well before
it became an issue for the public sector. As far back as 1951, Ralph Cordiner, CEO of
General Electric, commissioned an internal task force to identify key corporate
performance indicators and that group identified the following: profitability, market share,
employee attitudes, public responsibility and the balance between short and long-term
goals [Eccles 1991:132]. Therefore, it was only a matter of time before Eccles [1991]
and Kaplan and Norton [1992] would be urging business to look beyond financial
performance indicators. In the case of the latter writers, the Balanced Scorecard
Approach has proved to be, dare I say it, a bit of a fad itself.
The public expects that public money will be used wisely, perhaps a quaint term, but one
which connotes judgement, knowledge and experience. The public expects that we will
spend their money on things relevant to their needs, that such expenditure will be done
efficiently so as not to waste their money and that such expenditure will be effective, that
is, meeting their needs. These are the elements of relevance, efficiency and
effectiveness.
How then does the public know that this is happening? This is the dilemma addressed
by Professor Bob Walker [1999:1], former head of NSW’s Council on the Cost of
Government.
The urgency attached to finding a solution to this problem can be traced back to the
rapid expansion in the size and complexity of government since World War Two. In the
1950s the US Government looked at what it called performance budgeting, in which
money was allocated according to the tasks to be performed, and not the items of
expenditure. In the 1960s the US and Canadian Governments introduced Program
Performance Budgeting Systems with little long-term success, despite a general
agreement that such an approach was needed.
Despite these initiatives, it became ever more difficult for elected officials and public
servants to manage, and with this came a decline in accountability. As Foley [1982:251]
reflected, by the 1970s, the public demanded action because it had become difficult to
bring “government to account for both its excesses and its deficiencies, or just simply
[find] out what it does, (which is, of course, a necessary condition for meaningful policy
analysis)...” One of the major problems was the public sector’s slavish adherence to line-
item budgeting which “gave too little attention to the purposes for which money was
being spent.” [Corbett 1992:100]
The issue was not just about how and why money was being spent, but also whether or
not the money was being spent effectively, given the hundreds of government programs
dealing with a myriad of social issues and services. It became clear that the only means
of assessing the effectiveness of government expenditure was through evaluation using
performance indicators.
Since then, the evaluation of public sector programs has been at the forefront of public
sector reform. For example, in Australia, the Royal Commission on Australian
Government Administration (1976) considered that each department or agency should
regularly review its programs and that a central agency should oversee this to ensure
that these reviews were done using a common methodology that looked at three aspects
of program evaluation: fiscal accountability, efficiency and effectiveness.
Subsequently, for the past twenty years, program evaluation and evaluation
methodology have gone hand in hand with performance measurement. Although we
might debate the finer points of both, there is no valid reason to reject them.
Conclusion
In such a short time I have not attempted to cover all the issues that this topic warrants.
However, I have attempted to focus your attention on three areas that give rise to
performance anxiety. I hope that I have provided three arguments today to assist you. Thank
you.
REFERENCES
Aristotle, Politics
Australian National Audit Office [1996], Better Practice Guide – Performance Information
Principles
Bush, G W [2001] The President’s Management Agenda Fiscal Year 2002, Executive Office of
the President and Office of Management and Budget.
Corbett, D [1992] Australian Public Sector Management, Allen and Unwin: Sydney
Department of the Environment, Transport and the Regions [2001], Best Value Performance
Indicators for 2001/2002.
Foley, K J [1982] “The Public Bodies Review Committee of the Victorian Parliament”, in
Nethercote [1982].
General Accounting Office [1996], Management Reform – Completion Status of Agency Actions
under the National Performance Review, June 1996
General Accounting Office [1997], Managing for Results – Critical Issues for Improving Federal
Agencies’ Strategic Plans (GAO/GGD-97-180)
General Accounting Office [2001], Managing for Results – Federal Managers’ Views on Key
Management Issues Vary Widely Across Agencies (GAO-01-592)
Kaplan, R S and D P Norton [1992], “The Balanced Scorecard – Measures that Drive
Performance”, Harvard Business Review.
McCann, W [2001] “Institute of Public Administration Australia: Some Observations about the
Profession of Public Service”, Australian Journal of Public Administration 60(4):110-115
National Performance Review [1993], From Red Tape to Results – Creating a Government that
Works Better and Costs Less, Report of the National Performance Review
National Performance Review [1997], Serving the American Public: Best Practices in
Performance Measurement
Nethercote, J L (Ed), [1982] Parliament and Bureaucracy, Hale & Iremonger: Sydney
O’Faircheallaigh, C, J Wanna and P Weller, [1999] Public Sector Management in Australia, (2nd
edition) Macmillan
Pollitt, C [1997] “Evaluation and the New Public Management: An international perspective”,
Evaluation Journal of Australasia 9(1/2):7-15
Walker, B [1999] Reporting on Service Efforts and Accomplishments in the NSW Public Sector, NSW
Government
Walker, D [2000], Managing in the New Millennium – Shaping a More Efficient and Effective Government
for the 21st Century, (GAO/T-OCG-00-9) General Accounting Office, March 2000