CHAPTER 5 PERFORMANCE INFORMATION
Introduction
As the committee noted previously, one of the promised benefits of accrual
budgeting and the new reporting framework is the increased attention placed
on the reporting of performance. The efficiency of agencies in producing
outputs is to be demonstrated in terms of price, quantity and quality
indicators, while effectiveness indicators are to assess the extent to
which outcomes have been achieved. Performance indicators are outlined
in the PBS and reported against in the annual report for the year under
review.
It has been constantly stressed that performance reporting should not
be an end in itself, but should support agency decision-making. The committee
was heartened by the number of agencies which indicated that this was the
case. For example, Jeff Buckpitt of the Australian Customs Service was
one to comment at the committee's public hearing that the extra emphasis
on performance information was having a very useful impact within his
organisation in terms of timely management reporting. [1]
In the estimates hearings, James Kelaher of the Australian Federal Police
made it clear that the performance measures reported in the PBS were the
same measures used internally. [2]
Performance reporting per se is not new, but its presence in annual reports
has not attracted a great deal of systematic scrutiny from senators. As
the committee has noted in its previous reports on the PBS and their predecessors,
to make sense of specific performance information, an adequate knowledge
base is required and the vagaries of political life frequently work against
the acquisition of such knowledge. While individual senators may develop
expertise in a given subject area, it is a random process. The presence
of specific performance indicators in the 1999-2000 PBS attracted only
a modest amount of senatorial attention in the estimates process but this
is perhaps not surprising as they were mostly promissory notes for future
reporting in the 1999-2000 annual reports.
As with the definition of the outputs and outcomes framework, so too
the level of reporting on performance varied. The Department of Immigration
and Multicultural Affairs explained its choice as follows:
we presented performance information one level below outputs, at what
we call the output component level, because leaving it at
the output level (where our resource figures are reported) was not as
informative. For example when defining quality, quantity and timeliness
measures associated with our non-humanitarian visas, some visas, for
example a visitor visa, can be issued within half-an-hour and others,
for example a permanent resident visa, may take nine months or more.
Thus to provide performance information relating to our non-humanitarian
entry and stay output at the output level would not be very useful.
[3]
Other agencies adopted a similar practice.
Another area in which performance information varied across portfolios
was in the treatment of enabling services. Some agencies specified them
as a separate output, while others incorporated them across all outputs.
Under accrual budgeting, the intended focus is on output delivery, and
therefore there would appear to be little justification for an output
which is merely a means to an end. Senators have traditionally sought
information on internal management issues, however, and as DFAT suggested,
`there is a need to integrate this type of information in a transparent
and suitable format'. [4]
In presenting performance indicators on effectiveness for each outcome,
some agencies opted for a tabular format while others chose a narrative
one. DETYA explained its choice of the latter as allowing some historical/baseline
information to be presented which set the current performance indicators
in context. [5]
In the remainder of this chapter, the committee considers specific issues
relating to performance information which emerged from the estimates hearings.
Quantitative performance information
From the evidence of the estimates hearings, the only performance indicators
which were commended by senators were those which were unequivocally quantifiable.
The Defence estimates hearings raised a number of quantitative performance
indicators which caused senators to question the whole process of setting
performance standards. The number of ship days at minimum level of capability
and the provision of a certain number of patrol boat days for surveillance
of the Australian Fishing Zone were questioned as serious indicators,
with one senator describing the process as akin to asking senators to
buy a pig in a poke. Defence officers explained that until the patrols
had been completed, it was impossible to know what they could achieve
in terms of fishing boats intercepted, vessels boarded, et cetera. Defence
has chosen to express its force element group outputs in capability terms
rather than by activity, which can involve more than one force element;
and because specific tasking cannot be predicted accurately at the beginning
of the budget year, targets such as `as required' have resulted.
A further performance information problem in Defence emerged, namely
the fact that the preparedness directives are highly classified information.
Officers indicated that in cases where `full achievement' of preparedness
was required, they would explain carefully any failure to
fully achieve. Also, a review of security classifications against readiness
and sustainability for all force elements was underway. [6]
The committee believes that it is an important accountability principle
that estimates hearings be conducted in public. Should individual senators
or committees wish to be privy to classified information, the mechanisms
already exist for them to seek such information through other scrutiny
hearings in which their powers are not restricted to taking evidence in
public. [7]
Another performance information issue which affected some portfolios
and agencies, and Education, Training and Youth Affairs in particular, was
the problem of activities which operated on a calendar year basis, thus
affecting their comparability with activities operated on a financial year basis.
One requirement of quantifiable performance indicators is that they be
precise and clear in what they purport to measure. Estimates questioning
revealed that the Work for the Dole scheme was regarded as meeting its
objectives if a certain percentage of individuals gained employment. `Employment',
however, was defined as individuals reporting that they were in paid employment,
even for as short a period as one hour per week. [8]
While one might agree that any move of a long-term unemployed person into
any sort of paid employment was an advantage, more specificity in the
indicators would be helpful.
Many agencies were able to set quite specific numerical targets for their
outputs or sub-outputs. Such diverse agencies as DOFA and the Australian
Customs Service were commended for the precise targets they were able
to set, in the latter case, as an example, cargo facilitation of 97 per
cent. [9] In some cases, the latter's targets
are even enshrined in legislation, the completion of anti-dumping cases
within 155 days, for example. [10]
However, in the Legal and Constitutional Legislation Committee, the problem
of relating the activities of the courts, the Australian Federal Police
and similar agencies to `outcomes' was explored. An `outcome' of no arrests
in a given year could be a positive, and achieved efficiently and effectively.
Similarly a `target' of so many arrests or so many convictions a year
might be not only inappropriate but even dangerous. Quantities of particular
outputs such as special investigations or cases of need for close personal
protection might be inappropriate to foreshadow in a public document such
as the PBS. The throughput of so many court cases in a given year might
be achievable, but without quality decisions such an achievement would
be pointless. [11]
In a submission to the committee, the Department of Health and Aged Care
raised another issue with numerical target setting, pointing out that
it was meaningless in cases where the number of outputs produced was purely
demand driven. [12] The committee is not so
sure about this: if the `target' is really an estimated level of activity,
and for some reason that estimate varies from reality, then it alerts
readers to question why and to contemplate the flow-on effects. Health
and Aged Care presented this committee, and its formal scrutiny committee,
with much to ponder on in performance indicator terms. That 81 per cent
of people on hospital waiting lists are dealt with within certain timeframes
is commendable: but what about the remaining 19 per cent? How much longer
do they have to wait? Who are they? Where are they? What is their medical
condition? The departmental secretary agreed that while the indicators
available from the states had improved substantially, he had no doubt
that there were areas in which they could be further improved. [13]
A number of timeliness targets attracted positive comment from senators.
AFFA's second edition of its action plan for Australian Agriculture was
slated to be produced by the end of 1999, causing Senator Forshaw to comment
in the estimates hearings, `It is easy to understand what a measure like
that is. It is specific. It says, `This will happen by such and such a
time.'' [14]
Target-setting can be problematic. An unrealistically high target may attract
undeserved attention for a failure to meet it, whereas a more modest target
may attract undeserved praise for overachievement. In the context of its
continuing review of the PBS, the committee proposes to review annual
reporting against targets and, in particular, any examination of such reporting
in additional estimates hearings.
Qualitative performance information
Qualitative performance information is perforce subjective and open to
challenge. Quality indicators such as `To Minister's satisfaction' invariably
attracted fairly scathing comment from senators. But even an indicator such
as `increased levels of awareness amongst regional Australians of relevant
Commonwealth programs and services' was questioned as to how it could
be accurately measured. In that case, senators were informed,
We are not planning a major survey at this stage for measurement, because
that could be quite costly, but we do have constant liaison with ACCs.
We are hoping to use that as one means of gleaning information on the
effectiveness of our communications strategy. Also, in different ways,
through our regional forums, et cetera, and through the division's relationships
with local governments, we feel that we have got enough ability to gauge
feedback to give us some indication on that effectiveness. [15]
The committee, and presumably the questioning senator, will be more than
interested to see how such feedback is translated into performance reporting
in the relevant annual report, not only in this instance but across the
board.
Another interesting but inconclusive discussion on quality indicators
occurred in the Community Affairs Legislation Committee examination of
the estimates of the Health and Aged Care portfolio, whose PBS recorded
that `National leadership' was to be measured by `A high level of satisfaction
of stakeholders with the quality and timeliness of Commonwealth inputs
to national policy, planning and strategy development and implementation'.
[16] Departmental secretary Andrew Podger suggested:
we have to work out how we are going to seek some feedback from our
key stakeholders, being the state departments. We are looking at the
moment at some form of market research that the department ought to
undertake in a systematic way. For example, the Health Insurance Commission
... has annual surveys of community reaction. We are looking at whether
there is something of that sort that the department might do. [17]
He went on to suggest that he would be interested in the views of the
parliament as to how a department gets feedback on its arrangements. This
committee believes that the appropriate forum for such feedback exists
in the regular reports by Senate legislation committees on agency annual
reports, if only all committees took their responsibilities in this matter
seriously.
DOFA has already attempted to stiffen up its quality indicators by the
use of quality rating boxes on ministerial documentation. It aims to achieve
90 per cent of briefings, ministerial replies and parliamentary responses
rated `excellent' or `above average' by the minister. At least these ratings
will be externally verifiable in 30 years' time, when the papers will
be open for scrutiny in Australian Archives. In the meantime, however,
the reporting of the subjective results has to be taken on trust.
Performance information for administered items
As the committee noted previously, one of the more difficult features
of the change to output-based accrual budgeting for senators to come to
terms with was the distinction in treatment of administered compared with
departmental items. DOFA placed particular emphasis on the latter for
PBS reporting purposes. But for portfolios such as Education, Training
and Youth Affairs, more than 98 per cent of whose total expenditure is
accounted for by administered items, this made little sense and accordingly,
DETYA grouped both its administered and departmental items in a single
output group. Nevertheless, specific performance indicators were outlined
for administered items such as the infrastructure funding for the schools
system. The situation is complicated by the fact that educational accountability
for Commonwealth schools programs is met through participation in the
Annual National Report on Schooling in Australia, a comprehensive description
of developments in schooling during the year and analysis of achievements.
Where such alternative sources of information are available, it seems
sensible to the committee that they be referred to, as DETYA did, in the
PBS without replication.
Effectiveness in contributing to outcomes
The most difficult issue for many agencies will be in disentangling the
contribution of the agency towards outcomes whose achievement depends in
part on other players or other factors. As Ian Kemish of the Department
of Foreign Affairs and Trade pointed out,
Perhaps more than most agencies, DFAT operates in a context which is
largely outside its own direct control. The international context is
a fluid one, and Australia is only one of many players on the world
scene. This context made it difficult for us to develop concrete, measurable,
effectiveness targets, particularly, for example, for our first output
about the protection and advancement of Australia's interests through
the diplomatic network and Australia based activity. [18]
Of almost equal difficulty will be the measurement of progress towards
outcomes which are long term, or even permanent, goals. DFAT has devised
broad `general effectiveness indicators' for its outcomes, supported by
somewhat more measurable `milestones' or steps along the way in pursuing
those outcomes. At the former level, there is `Contribution made to a
positive regional security environment for Australia by encouraging shared
strategic perceptions among key alliance and regional partners on a bilateral,
multilateral and regional basis'; amongst the contributing milestones
there is `Human rights and democracy promoted in China through regular
dialogue with the Chinese government, targeted representations and support
for an effective technical assistance program'. [19]
Education, Training and Youth Affairs too stressed that the objectives
of many of its Commonwealth-funded programs were subject to influence
not only by the performance of the ETYA portfolio but also by the actions
of State and Territory Governments in school and vocational education
and training; similarly, the employment prospects of tertiary education
graduates depended in large part on the prevailing state of general economic
and labour market conditions, a matter well beyond the responsibilities
of the ETYA portfolio. It remains to be seen how the respective contributions
of the various parties to the achievement of any given education outcome
will be weighted and how meaningful they will be.
Those agencies which attempted to outline indicators for assessing effectiveness
in achieving outcomes are to be commended for their efforts. Nevertheless,
there does appear to be a quantum leap of faith in many cases between
the indicators and the achievement of the outcome, even in agencies highly
experienced in these matters. DOFA's outcome 2, `Improved and more efficient
government operations', is supported, inter alia, by secretariat support
for the Remuneration Tribunal. An indicator of the achievement of the
outcome is `Remuneration Tribunal reports and determinations are compliant
with legislation and government policy'. But surely that has always been
the case and its mere continuance would do nothing to improve government
operations?
The committee does not propose, at this early stage in the process, to
highlight what it perceives to be deficiencies in effectiveness indicators.
It is more important in this first year of accrual budgeting in an outcomes/outputs
reporting framework to concentrate on the outputs and the proposed quantitative
and qualitative indicators for those outputs. It does note, however, the
warning from ACT Government experience that initial detailed outcome statements
had changed after three years to a more summary, strategic outcome statement
because of the difficulty in actually being able to measure achievement
towards outcomes.
Conclusions
The committee is aware that a tremendous amount of work has gone into
the development of performance indicators. Indeed, it shares the view
of Peter O'Keeffe of the Department of the Senate that `in this entire
process we need to be very careful that [performance information] and
reporting on it does not become the end in itself'. [20]
As so many agencies indicated, however, their 1999-2000 indicators were
a best effort at the time and they would be reassessed in terms of their
appropriateness and measurability in the light of experience. This implies
that for some, 1999-2000 data will not be usable as a base year for comparative
assessment purposes. If it is so used, considerable resources will need to be
devoted to explaining the inevitable changes to performance measures.
This reflects exactly the pattern experienced in other jurisdictions.
The ACT Government's budget estimates documentation in the early years
of accruals was littered with `reasons for variation' such
as `measure discontinued', `issue ongoing', `new measure', `the decrease
represents a reprioritisation of resources', `survey to
be undertaken', `the 1997-98 target assumes a return to historic levels
of output', `the increase is a worst case projection', `changed practice',
`based on partial year results', et cetera.
With continuing refinement of performance indicators, not only is the assessment
of agency performance delayed, but the much-heralded opportunity to benchmark
performance across agencies is also affected. The committee also
believes there are significant benefits to be gained from benchmarking
performance across jurisdictions. It was interested to learn of the ACT
Government experience with benchmarking TAFE services. Ms Smithies indicated
that performance measures and comparative pricing information suggested
that the ACT TAFE was a high-cost low-satisfaction service and as a result
suffered significant budgetary cuts. [21] If
appropriate benchmarking is to be achieved federally, truly comparable
benchmark data will be required. The challenge will then be to translate
these data into material which is useful to senators and members. Ultimately
the value of performance information is measured by its use in government
decision-making processes, such as the budget process, and review processes
such as Senate estimates.
The committee notes that the Western Australian Auditor-General is legislatively
required to audit that state's performance information and that the ACT
Auditor-General does so as well. In evidence to the committee, Lyn O'Brien
of the ANAO indicated `we would obviously see we had a role in auditing
that information in the future' but would prefer that agencies had the
opportunity to get their information up to a suitable standard before
such an audit was commenced. [22] This is probably
sensible. Nevertheless the committee can see benefit in the early involvement
of the ANAO in this process. It therefore recommends that, in the
short term, the ANAO consider the development of a `best practice' performance
information guide and, in the longer term - but no later than 2002-2003 -
the ANAO consider across-the-board performance information audits.
In terms of parliamentary examination of performance information, the
committee notes that Senate legislation committees will have the opportunity
to examine it in the context of the examination of the additional estimates.
If no additional funding is sought by a given portfolio, and hence it
has no `additional estimates' per se to be referred to the relevant legislation
committee, this committee expects that our colleagues will use the other
provisions available to them under Senate Standing Order 25(2)(b) to examine
the annual reports of that portfolio.
Footnotes
[1] Senate Finance and Public Administration
Legislation Committee, Hansard, 17 June 1999, p. 73.
[2] Senate Legal and Constitutional Legislation
Committee, Hansard, 31 May 1999, p. 81.
[3] Senate Finance and Public Administration
Legislation Committee, PBS Submissions, p. 52.
[4] ibid., pp. 79-80.
[5] ibid., p. 96.
[6] Senate Foreign Affairs, Defence and Trade
Legislation Committee, Hansard, 7 June 1999, p. 10.
[7] See Standing Order 25(15), Senate Standing
Orders, February 1999.
[8] Senate Employment, Workplace Relations,
Small Business and Education Legislation Committee, Hansard, 8
June 1999, pp. 91-92.
[9] Senate Legal and Constitutional Legislation
Committee, Hansard, 31 May 1999, p. 103.
[10] Senate Legal and Constitutional Legislation
Committee, Hansard, 1 June 1999, p. 128.
[11] Senate Legal and Constitutional Legislation
Committee, Hansard, 31 May 1999, pp. 80-85.
[12] Senate Finance and Public Administration
Legislation Committee, PBS Submissions, p. 27.
[13] Senate Community Affairs Legislation Committee,
Hansard, 1 June 1999, p. 137.
[14] Senate Rural and Regional Affairs and
Transport Legislation Committee, Hansard, 3 June 1999, p. 349.
[15] Senate Rural and Regional Affairs and
Transport Legislation Committee, Hansard, 31 May 1999, p. 65.
[16] Health and Aged Care portfolio, Portfolio
Budget Statements 1999-2000, p. 64.
[17] Senate Community Affairs Legislation Committee,
Hansard, 31 May 1999, p. 69.
[18] Senate Finance and Public Administration
Legislation Committee, Hansard, 17 June 1999, p. 68.
[19] Foreign Affairs and Trade portfolio, PBS
1999-2000, pp. 33-34.
[20] Senate Finance and Public Administration
Legislation Committee, Hansard, 17 June 1999, p. 70.
[21] Senate Finance and Public Administration
Legislation Committee, Hansard, 17 June 1999, pp. 18-19.
[22] ibid., pp. 70-71.