
Performance Anxiety or being beaten up with numbers

Christopher Francis, Director, Organisation Strategy, City of Ballarat

Human beings love to measure and compare all kinds of things, from cars to detergents.
We just don’t like doing it in the workplace. Apart from anything else, we suspect that it
will be used against us during our annual performance review, in the event of a
restructuring of the organisation, or as part of a best value analysis if you are in Victoria.

This would not be as difficult a problem for us if it were not for the fact that performance
measurement has been one of the drivers in public sector reform over the past twenty
years – if not longer. It is here to stay. For some commentators, such as Warren
McCann [2001:111], the battle to have a performance-driven public sector is largely won:

We now have a firmly established performance culture in the public sector throughout Australia. There is room for improvement… but the basic notion itself – we are driven to perform – is now an integral part of everything we do.

Whilst this might be true at the Federal and State levels, the same might not be so
confidently said about local government.

It should be remembered that, prior to 1995, the States had taken only a passing interest
in local government performance. There were exceptions – NSW, for example, had been
using a set of performance indicators to measure and compare councils since 1991 –
but there is no doubt that 1995 marks a turning point. Firstly, the concept of national
performance indicators was proposed at the Local Government Ministers’ Conference;
and secondly, National Competition Policy was introduced through the Competition
Principles Agreement [see Aulich 1999].

It was, therefore, opportune for the Federal Government to direct the Industry
Commission to inquire into the feasibility of developing national performance indicators
for local government. The Commission [1997:33] concluded that a “nationally consistent
approach to performance measurement for local government is not warranted at this
time” because each State had a different view of its relationship with local government, and this affected its expectations of the level of performance information that local government should provide. By and large, the situation has not changed dramatically
since 1997. As Aulich [1999:17] observed, the States accept that performance
information is necessary for local government reform, but choose to drive the reform at
arm’s length.

I might add, by way of an aside, that the UK Department of the Environment, Transport and the Regions, in its Best Value Performance Indicators for 2001/2002, recognised the need to have both national and local indicators for local government:

Authorities are encouraged to develop and use local performance indicators, in addition to those specified by the Government. Local indicators are an important measure of local performance and of the responsiveness of the authority to meeting local needs. They allow authorities to reflect local priorities and tailor best value to suit local circumstances. [2001:12]

Of more interest was the Commission’s view that performance measurement “may be
somewhat confronting for some local governments, as it is for other organisations,
especially if it is seen as a beating stick” [1997:10]. Apart from the reservations of staff
about performance information, we should not forget that performance measurement
could be used by the Commonwealth or the States in allocating funds. Obviously a set of
national performance indicators would enable greater transparency and accountability at
this level, something perhaps not in the self-interest of local government.

Martin’s study of the attitudes of staff in 26 Victorian councils confirms the overall
position I have presented:

The job performance dimension highlights inconsistency surrounding this important area of work. While staff were in agreement [about] the emphasis on achieving results they have weaker views about individual rewards being based on performance…. [Staff] believe that their colleagues care about and strive for excellent performance. They also accept people on the basis of their results. They are, however, less convinced that there is a clear way of measuring performance in their organisation. Clearly managers can do more to connect individual work performance with organisational performance. [1999:30-31]

Whilst this is an admirable goal, and one consistently touted as the bedrock of successful performance-driven organisations [eg Eccles 1991; National Performance Review 1997], it is hard to do.

On this point, the Western Australian Government, like many other public sector bodies, is ambivalent. On the one hand, it states: “[where] possible, the strategic objective to be achieved….should be linked to the performance criteria of a responsible officer”; yet it also cautions:

Despite the strong and desirable link between the high level objectives of
the organisation and the lower level objectives for individual staff
performance, overseas experience indicates that a high-level performance
measurement system should never be linked to staff salaries. [2001:2.13]

What follows are three observations about performance measurement.

1. Performance measurement is not a management fad

During the 1980s and 1990s, as private and public sector managers struggled to make sense of the world, they began to seek quick-fix solutions and to fad surf, the practice of “riding the crest of the latest management panacea and then riding out again just in time to ride the next one; always absorbing for managers and lucrative for consultants; frequently disastrous for organizations” [Shapiro 1998:xiii].

Enamoured with management theory and a plethora of techniques and strategies, the public sector spent heavily on consultants and private sector philosophies. Unfortunately, as Pollitt [1997] and Micklethwait and Wooldridge [1996:313-345] have argued, the uncritical adoption of the new public management and its techniques has been at the expense of true reform and has resulted in widespread confusion, disillusionment and waste.

Within this environment, it was inevitable that performance measurement would be seen
as a fad and this, in fact, is what was suggested as far back as 1992, when David
Corbett [1992:179] wrote: “Among the buzz-words of public sector management in the
1990s, none is more widely discussed than performance indicators.”

Performance measurement is here to stay. For all its difficulties and failures, and there
have been many, it is still the most powerful and reliable tool in our managerial kit. The
reality is that performance measurement is necessary for organisational and personal
success, or as John Lynch, of the WA Department of Local Government, writes: “Any
judgement of organisational success must be able to stand up to external scrutiny; therefore, it must be objective, not intuitive.”

Or, more effusively, according to Osborne and Gaebler [1993:138-165]: What gets
measured gets done. If you don’t measure results, you can’t tell success from failure. If
you can’t see success, you can’t reward it. If you can’t reward success, you’re probably
rewarding failure. If you can’t see success, you can’t learn from it. If you can’t recognise
failure, you can’t correct it. If you can demonstrate results, you are a winner!

2. Performance measurement initiatives have not failed

Critics point out that even the most extensive applications of performance measurement in the public sector have failed to make significant progress. For
the purpose of this paper I will focus on the Clinton Administration’s reform program from
1993-2000.

Performance measurement became the defining tool of the Administration. In March 1993 it established the National Performance Review and embarked on a six-month review of the way the government worked. It focussed on the “performance deficit” – not what government does, but how it works. The report [NPR 1993] contained 384 recommendations intended to make the government work better and cost less. Subsequently, 1,203 action items were identified as necessary to implement these recommendations.

The major legislative plank of the reform program was the Government Performance
and Results Act 1993. The genesis for the legislation was threefold:

(1) Waste and inefficiency in Federal programs had undermined public confidence in the
Government and had reduced the Government's ability to address adequately vital
public needs;

(2) Federal managers were seriously disadvantaged in their efforts to improve program
efficiency and effectiveness because of insufficient articulation of program goals and
inadequate information on program performance; and

(3) Congressional policymaking, spending decisions and program oversight were seriously handicapped by insufficient attention to program performance and results.

To redress these problems, the Act sought to:

(1) Improve public confidence by systematically holding agencies accountable for achieving program results;

(2) Initiate program performance reform with a series of pilot projects in setting program
goals, measuring program performance against those goals, and reporting publicly
on their progress; and

(3) Improve program effectiveness and public accountability by promoting a new focus
on results, service quality, and customer satisfaction.

Another key aspect of the Act was the time-frame for achieving its aims. The Administration recognised that reform on this scale would take years.
For example, agencies were given until 30 September 1997 to submit a strategic plan
and until 31 March 2000 to submit a report on program performance for the previous
fiscal year.

So, was the GPRA a failure? As Pollitt [1997] has remarked, most public sector
management reform is more a matter of faith than carefully evaluated and proven
achievement and evaluation is usually couched in politically expedient terms that favour
the reformers. Bearing this in mind, throughout the reform process the Clinton
Administration and the Congress used the General Accounting Office as an independent
auditor of the program’s achievements. Let us briefly examine its findings over some
nine years.

• 1996 – the GAO [1996] verified that only 294 action items of the target 1,203 identified in the National Performance Review had been completed.

• 1997 – the GAO [1997] reviewed 27 agencies’ draft strategic plans, noting that a significant amount of work remained to be done.

• 2001 – the GAO released its Performance and Accountability Series, looking at individual agencies.

• 2001 – the GAO [2001] reported that its survey of 3,816 federal managers found wide variation in implementing the principles of the GPRA. An encouraging note, however, was that significantly more managers reported having performance measures for their programs than previously.

The most comprehensive discussion of the reform program is contained in the Testimony of David Walker (GAO Comptroller General) before the Committee on Governmental Affairs in March 2000. He noted “some progress in agency efforts to manage more economically and efficiently. But, more needs to be done to achieve real and sustained improvements” [2000:15]; and “Our work has consistently shown that many agencies face long-standing and substantial challenges to further progress” [2000:16]. And finally:

Our work over the past several years has identified limitations in agencies’ abilities to produce credible program performance and cost data and identify performance improvement opportunities. These limitations are substantial and long-standing, and they will not be quickly or easily resolved. [2000:23]

Not surprisingly, the Bush Administration believes that the reform program has been a
failure. President Bush asserts that government likes to begin things, but good
beginnings are not the measure of success. In a curious attack of ‘Dubya’ English, he
[2001:3] stated that “[what] matters in the end is completion. Performance. Results.”

Despite its desire to distinguish itself from its predecessor, the Bush Administration still
relies on the GPRA:

Agency performance measures tend to be ill defined and not properly integrated into agency budget submissions and the management and operation of agencies. Performance measures are insufficiently used to monitor and reward staff, or to hold program managers accountable. [Bush 2001:27]

The GPRA has been a defining piece of legislation, and the fact that it recognises the considerable time required for genuine reform gives me some hope that during this decade we will have hard evidence that what was started in 1993 has not been a failure.
the largest bureaucracy in the world can slowly reform itself, there is hope for us.

3. Performance measurement is a tool for accountability

Performance information is a tool for ensuring accountability [Australian National Audit Office 1996]. If we are to identify one quality that distinguishes the private and public sectors then it ought to be that of accountability to the public [see O’Faircheallaigh et al 1999:7ff for a discussion of this]. This is not a new idea. Long ago Aristotle said that, in order to avoid embezzlement of public money, “the transfer of the revenue” should be made in public and that “[since] many, not to say all, of these offices handle the public money, there must of necessity be another office which examines and audits them, and has no other functions.”

Clearly then, within our tradition of government, there is an expectation that public
money will be accounted for. The question is, what does this actually mean? Taken at
face value, to account for the public’s money means just that, to produce an accurate
statement of expenditure. For this purpose the performance indicators we require are
financial.

Performance measurement is logically and historically linked to accounting. As Michael Chatfield [1977] has argued, the measurements developed by managers in the industrial revolution stimulated the evolution of modern accounting. Yet by the 20th century, accounting theory was inadequate to deal with the modern capitalist economy, particularly in the United States. The scene was set for the Crash of 1929.

In its aftermath, the public and political outcry forced the accounting profession to
develop more effective means of recording, analysing and reporting on a company’s
financial health. Implicit in the development of financial performance information was the
realisation that, for the economy to operate efficiently, there had to be a level of
confidence in the integrity of its operations.

Consequently, during the 1930s accounting theory and accounting standards developed
(eg the American Accounting Association’s A Tentative Statement of Accounting
Principles Affecting Corporate Reports of 1936). As well, federal legislation was enacted,
such as the Securities Acts of 1933 and 1934 on initial public offerings and external
reporting requirements of public companies [Miranti 2001].

Today we have primary measures of a company’s performance, such as profit or earnings per share, that are derived from financial information. Undeniably, the purpose of these performance indicators is to inform the market, investors, stakeholders and shareholders. Whether these specific indicators should be used by public sector accountants is an on-going debate [Corbett 1992:117-119].

However, even financial performance information tells us nothing about how the money has been spent, and this problem was recognised in the private sector well before it became an issue for the public sector. As far back as 1951, Ralph Cordiner (CEO of General Electric) commissioned an internal task force to identify key corporate performance indicators; that group identified profitability, market share, employee attitudes, public responsibility and the balance between short and long-term goals [Eccles 1991:132]. It was therefore only a matter of time before Eccles [1991] and Kaplan and Norton [1992] would be urging business to look beyond financial performance indicators. In the case of the latter writers, the Balanced Scorecard approach has proved to be, dare I say it, a bit of a fad itself.
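For those unfamiliar with it, Kaplan and Norton’s scorecard groups indicators under four perspectives: financial, customer, internal business process, and innovation and learning. What follows is a minimal sketch of that structure in Python; the council-flavoured measures, targets and figures are invented for illustration and are not drawn from any source cited here.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    """A single performance indicator with a target and an actual result."""
    name: str
    target: float
    actual: float

    def on_track(self) -> bool:
        # Simplistic rule of thumb: actual meets or exceeds target.
        # (For a real indicator where lower is better, invert this test.)
        return self.actual >= self.target

@dataclass
class BalancedScorecard:
    """Kaplan and Norton's four perspectives, each holding its own measures."""
    perspectives: dict[str, list[Measure]] = field(default_factory=dict)

    def report(self) -> None:
        for perspective, measures in self.perspectives.items():
            for m in measures:
                status = "on track" if m.on_track() else "attention needed"
                print(f"{perspective}: {m.name} = {m.actual} "
                      f"(target {m.target}) - {status}")

# Hypothetical council measures, for illustration only.
card = BalancedScorecard({
    "Financial": [Measure("Rates collected (%)", 97.0, 95.5)],
    "Customer": [Measure("Resident satisfaction (%)", 75.0, 78.0)],
    "Internal process": [Measure("Permits issued within 30 days (%)", 90.0, 86.0)],
    "Innovation and learning": [Measure("Staff trained this year (%)", 80.0, 82.0)],
})
card.report()
```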

The public expects that public money will be used wisely, perhaps a quaint term, but one
which connotes judgement, knowledge and experience. The public expects that we will
spend their money on things relevant to their needs, that such expenditure will be done
efficiently so as not to waste their money and that such expenditure will be effective, that
is, meeting their needs. These are the elements of relevance, efficiency and
effectiveness.
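To make these elements concrete, here is a minimal sketch, in Python, of how efficiency and effectiveness indicators might be derived for a single hypothetical council service; the service and all figures are invented. Relevance, being a judgement about whether the service matches community needs, does not reduce to arithmetic in the same way.

```python
# Hypothetical figures for a council kerbside waste collection service.
total_cost = 2_400_000.00          # annual cost of the service ($)
tonnes_collected = 30_000          # output delivered
missed_collections = 450           # service failures
scheduled_collections = 1_560_000  # total scheduled pick-ups

# Efficiency: how much input is consumed per unit of output.
cost_per_tonne = total_cost / tonnes_collected

# Effectiveness: how well the service meets its stated objective
# (here, collecting every scheduled bin).
completion_rate = 1 - missed_collections / scheduled_collections

print(f"Efficiency:    ${cost_per_tonne:,.2f} per tonne collected")
print(f"Effectiveness: {completion_rate:.2%} of scheduled collections completed")
```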

How then does the public know that this is happening? This is the dilemma addressed
by Professor Bob Walker [1999:1], former head of NSW’s Council on the Cost of
Government:

Current arrangements for the accountability of public sector agencies place particular emphasis on the publication of annual reports, and the presentation of extensive information about financial matters. These requirements…are undoubtedly important [but]…they do not tell much about the way in which those agencies have undertaken their core tasks [which are mainly] the provision of services to the community…Hence the primary measures of their performance must be largely derived from non-financial information. That has led to acceptance of the need to prepare and publish non-financial measures of performance.

The urgency attached to finding a solution to this problem can be traced back to the
rapid expansion in the size and complexity of government since World War Two. In the
1950s the US Government looked at what it called performance budgeting, in which
money was allocated according to the tasks to be performed, and not the items of
expenditure. In the 1960s the US and Canadian Governments introduced Program Performance Budgeting Systems, with little long-term success, despite general agreement that such an approach was needed.

Despite these initiatives, it became ever more difficult for elected officials and public
servants to manage, and with this came a decline in accountability. As Foley [1982:251]
reflected, by the 1970s, the public demanded action because it had become difficult to
bring “government to account for both its excesses and its deficiencies, or just simply
[find] out what it does, (which is, of course, a necessary condition for meaningful policy
analysis)...” One of the major problems was the public sector’s slavish adherence to line-
item budgeting which “gave too little attention to the purposes for which money was
being spent.” [Corbett 1992:100]

The issue was not just about how and why money was being spent, but also whether or
not the money was being spent effectively, given the hundreds of government programs
dealing with a myriad of social issues and services. It became clear that the only means
of assessing the effectiveness of government expenditure was through evaluation using
performance indicators.

Since then, the evaluation of public sector programs has been at the forefront of public
sector reform. For example, in Australia, the Royal Commission on Australian
Government Administration (1976) considered that each department or agency should
regularly review its programs and that a central agency should oversee this to ensure
that these reviews were done using a common methodology that looked at three aspects of program evaluation: fiscal accountability, efficiency and effectiveness.

In 1979 the SA Government introduced Program-Performance Budgeting (PPB), which was defined as a “plan which relates input resources (for example, money, manpower and plant) to expected output results (service volumes, performance indicators or measures) using a classification scheme which groups similar endeavours” [Strickland 1982:116]. By the early 1980s evaluation was a sine qua non of public accountability: the “goals for each program should be clearly stated” and “performance should be assessed and evaluated regularly” [Matthews 1982:101].
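Strickland’s definition translates naturally into a small data structure: input resources on one side, expected output results on the other, grouped under a classification scheme. The sketch below, with invented figures, illustrates the definition; it is not a reconstruction of the SA Government’s actual system.

```python
from dataclasses import dataclass

@dataclass
class ProgramBudget:
    """One program in a PPB-style classification scheme:
    input resources mapped to expected output results."""
    classification: str                 # groups similar endeavours
    program: str
    inputs: dict[str, float]            # e.g. money ($), manpower (FTE), plant
    expected_outputs: dict[str, float]  # service volumes / performance measures

# Hypothetical entry, for illustration only.
library_program = ProgramBudget(
    classification="Community services",
    program="Public libraries",
    inputs={"money_aud": 1_850_000, "staff_fte": 22.0, "branches": 4},
    expected_outputs={"loans_per_year": 410_000, "visits_per_year": 265_000},
)

# A simple derived measure relating inputs to outputs, as PPB intends.
cost_per_loan = (library_program.inputs["money_aud"]
                 / library_program.expected_outputs["loans_per_year"])
print(f"Budgeted cost per loan: ${cost_per_loan:.2f}")
```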

For the past twenty years, then, program evaluation and evaluation methodology have gone hand in hand with performance measurement. Although we might debate the finer points of both, there is no valid reason to reject them.

Conclusion

In such a short time I have not attempted to cover all the issues that this topic warrants. I have, however, tried to focus your attention on three claims that give rise to performance anxiety:

• It is a fad and will soon be forgotten.
• Even if we implement it, it will fail.
• It is not important or relevant to us as public servants.

As we strive to implement effective performance measurement systems and develop effective indicators, there will be many opportunities for critics to question the value of this work. We must not respond with ideological or emotional entreaties, but rather with fully-informed arguments that do not shy away from the intellectual and practical problems. I hope that I have provided three such arguments today to assist you. Thank you.

REFERENCES

Aristotle, Politics

Aulich, C [1999] “From Convergence to Divergence: Reforming Australian Local Government”, Australian Journal of Public Administration 58(2):12-23

Australian National Audit Office [1996] Better Practice Guide – Performance Information Principles

Bush, G W [2001] The President’s Management Agenda Fiscal Year 2002, Executive Office of the President and Office of Management and Budget

Chatfield, M [1977] A History of Accounting Thought, Krieger Publishing

Corbett, D [1992] Australian Public Sector Management, Allen and Unwin: Sydney

Department of the Environment, Transport and the Regions [2001] Best Value Performance Indicators for 2001/2002

Department of Local Government (Western Australia) [2001] Performance Measurement Guidelines for Western Australian Governments

Eccles, R G [1991] “The Performance Measurement Manifesto”, Harvard Business Review, January-February

Foley, K J [1982] “The Public Bodies Review Committee of the Victorian Parliament”, in Nethercote [1982]

General Accounting Office [1996] Management Reform – Completion Status of Agency Actions under the National Performance Review, June 1996

General Accounting Office [1997] Managing for Results – Critical Issues for Improving Federal Agencies’ Strategic Plans (GAO/GGD-97-180)

General Accounting Office [2001] Managing for Results – Federal Managers’ Views on Key Management Issues Vary Widely Across Agencies (GAO-01-592)

Industry Commission [1997] Performance Measures for Councils

Kaplan, R S and D P Norton [1992] “The Balanced Scorecard – Measures that Drive Performance”, Harvard Business Review, January-February

Martin, J [1999] “Leadership in Local Government Reform: Strategic Direction v Administrative Compliance”, Australian Journal of Public Administration 58(2):24-37

Matthews, R [1982] “Expenditure Control in the Victorian Parliament”, in Nethercote [1982]

McCann, W [2001] “Institution of Public Administration Australia: Some Observations about the Profession of Public Service”, Australian Journal of Public Administration 60(4):110-115

Micklethwait, J and A Wooldridge [1996] The Witch Doctors, Heinemann

Miranti, P J, “US Financial Reporting Standardization 1840-2000”, World Bank website

National Performance Review [1993] From Red Tape to Results – Creating a Government that Works Better and Costs Less, Report of the National Performance Review

National Performance Review [1997] Serving the American Public: Best Practices in Performance Measurement

Nethercote, J L (ed) [1982] Parliament and Bureaucracy, Hale and Iremonger: Sydney

O’Faircheallaigh, C, J Wanna and P Weller [1999] Public Sector Management in Australia (2nd edition), Macmillan

Osborne, D and T Gaebler [1993] Reinventing Government, Plume

Pollitt, C [1997] “Evaluation and the New Public Management: An International Perspective”, Evaluation Journal of Australasia 9(1/2):7-15

Shapiro, E C [1998] Fad Surfing in the Boardroom, Capstone

Strickland, A J [1982] “South Australia’s Program-Performance Budgeting Experience”, in Nethercote [1982]

United States of America, Government Performance and Results Act 1993

Walker, B [1999] Reporting on Service Efforts and Accomplishments in the NSW Public Sector, NSW Government

Walker, D [2000] Managing in the New Millennium – Shaping a More Efficient and Effective Government for the 21st Century (GAO/T-OCG-00-9), General Accounting Office, March 2000
