
Further information
For information on other department initiatives please see the department’s website at:
www.industry.gov.au/OCE
For more information or to comment on this publication please contact:
Dr Katherine Barnes
Evaluation Unit
Department of Industry, Innovation and Science
GPO Box 2013
CANBERRA ACT 2601
Telephone: +61 2 6102 8901
© Commonwealth of Australia 2017
ISSN: 2208-9268
Copyright
This work is copyright. Apart from use as permitted under the Copyright Act 1968, no part may be reproduced or altered
by any process without prior written permission from the Australian Government. Requests and
inquiries concerning reproduction and rights should be addressed to xxxxxxxxxxxxxx@xxxxxxxx.xxx.xx
For more information on the work of the Office of the Chief Economist please access the department’s
website at: www.industry.gov.au/OCE
Creative Commons License
With the exception of the Coat of Arms, this publication is licensed under a Creative Commons
Attribution 3.0 Australia Licence.
Creative Commons Attribution 3.0 Australia Licence is a standard form licence agreement that allows
you to copy, distribute, transmit and adapt this publication provided that you attribute the work. A
summary of the licence terms is available from http://creativecommons.org/licenses/by/3.0/au/deed.en
The full licence terms are available from http://creativecommons.org/licenses/by/3.0/au/legalcode
The Commonwealth’s preference is that you attribute this publication (and any material sourced from it)
using the following wording:
Contents
Introduction
1. PRINCIPLES AND PLANNING
Performance measurement and reporting
Impact of evaluation activity
What is evaluation?
Good evaluation practices
The role of the Evaluation Unit
Evaluation and Audit — what is the difference?
Assurance and Audit Committee
Program Assurance Committee
Program tiering
The department’s four-year Evaluation Plan
2. EVALUATION READY
Evaluation Ready
Program logic model
Data matrix
Program evaluation strategy
3. EVALUATIONS
Approaches to evaluation
Business Grants Hub
Evaluation Ready is required for all on-boarded programs
Evaluations conducted by the Evaluation Unit
Governance of evaluations
Policy and program delivery areas
Program Assurance Committee
Executive Board
4. LEARNING AND CAPABILITY
Lessons learnt
Completed evaluations
Evaluation capacity building
Evaluation maturity
Appendix A
Introduction
This Evaluation Strategy (the Strategy) provides a framework to guide the
consistent, robust and transparent evaluation and performance
measurement of programs and policies in the Department of Industry,
Innovation and Science (the department).
Evaluations, reviews and performance monitoring provide assurance that policies and programs are delivering outcomes as intended, allow performance to be tracked so that corrections can be made, and inform future policy and program design. As Australia is called
to adapt to changing economic and policy environments, the evidence gained from
evaluations and other forms of performance measurement and assessment supports the
decision-making of government.
For the government, and this department, the continual questioning of how we are
performing is a critical part of good performance management and accountability. We need
to know:
have we achieved what we set out to do?
how are we progressing in achieving the department’s strategic objectives?
could we have done things better?
should we continue to do this or do something else?
Through asking these types of questions we gain an understanding of what works and what
doesn’t work and why, what is being done well and what is not, what should be pursued and
what should not. This knowledge can improve the design and implementation of effective
interventions.
The Public Governance, Performance and Accountability Act 2013 (PGPA Act) established a
core set of obligations that apply to all Commonwealth entities. The Enhanced
Commonwealth Performance Framework has brought an increase in external scrutiny, and
introduced new requirements for strategic planning, measuring and assessing performance,
and reporting.
The department published its first Strategy in 2015. In 2017, an Evaluation Strategy Post-Commencement Review (the Review) was conducted, seeking internal and external
stakeholder feedback to assess the department’s overall evaluation skills, capability and
attitudes; and identify where improvements could be made.
In response to the Review, the Strategy has been updated and revised (Evaluation Strategy 2017-21). This revision updates the original Strategy’s approach to evaluation in the
department and provides guidance on evaluation activities and internal governance
arrangements. The principles outlined in this Strategy will strengthen evaluation and
performance measurement capacity in the department and support building a culture of
evaluative thinking, ultimately leading to better resource allocation and decision-making and
the evolution of programs.
This Strategy:
outlines the department’s approach to performance measurement and reporting,
according to good evaluation practice
establishes a protocol for policy and program areas to plan for evaluation across the
lifecycle of a program
provides a strategic, risk-based, whole-of-department approach to prioritising
evaluation effort, and illustrates how evaluations may be scaled based on the value,
impact and risk profile of a program
describes how evaluation findings can be used for better decision-making
describes how the department is building evaluation capability and a culture of
continuous improvement
outlines how the department will measure its progress in implementing this Strategy.
The Strategy is not intended to be a complete guide to evaluation and performance
measurement. It is supported by a range of internal and external resources including:
the department’s guidance material and templates for planning and conducting an
evaluation
the department’s templates for Evaluation Ready
the department’s Performance Measurement and Reporting Framework
the Department of Finance Enhanced Commonwealth Performance Framework
the Australian National Audit Office Better Practice Guide — Successful
Implementation of Policy Initiatives.
This Strategy is divided into four sections which outline:
1. Principles and planning
2. The Evaluation Ready process
3. Advice for conducting evaluations
4. Learning and capability.
1. Principles and planning
Performance measurement and reporting
The department’s performance measurement and reporting framework
supports the implementation of the Enhanced Commonwealth Performance
Framework under the PGPA Act.
The Enhanced Commonwealth Performance Framework enables Commonwealth entities to
develop the necessary links between their performance information and their external
reporting. Entities are encouraged to adopt performance measurement methodologies that
better assess the results of activities and articulate their performance story. The framework
introduces a more transparent and cohesive form of performance reporting related to the
activities of an entity in achieving its purpose.
The department conducts performance planning and reporting on an annual basis
through five key documents:
Corporate Plan
Strategic Plan
Portfolio Budget Statements
Portfolio Additional Estimates Statements
Annual Performance Statements.
The department’s Strategic Plan is an overarching document designed for planning and
performance management. This Plan provides a high‐level overview of its strategic direction
and key areas of focus by articulating the department’s vision, strategic priorities, objectives
and challenges over a four‐year period. The Corporate Plan, Portfolio Budget Statements
(PBS) and Annual Performance Statements are the core elements of the Enhanced
Commonwealth Performance Framework.
The department’s overall planning and reporting framework is summarised in Figure 1.
Figure 1: The department’s overall planning and reporting framework
Source: Department of Industry, Innovation and Science (2017), DIIS Performance Measurement and Reporting Framework, p.2
The department uses a hierarchy of performance criteria to measure and assess its
performance (see Table 1). Three levels of criteria are applied to the department’s ultimate
outcomes; intermediate outcomes; and outputs and immediate outcomes.
Table 1: Overview of the department’s performance measurement framework

Level 1: ultimate outcome
Performance criteria set out in the corporate plan: measure and assess how Australia is performing with regard to the department’s vision.
Performance criteria set out in the Portfolio Budget Statements (PBS) and Portfolio Additional Estimates Statements (PAES): measure and assess how Australia is performing with regard to the department’s outcome.

Level 2: intermediate outcome
Performance criteria set out in the corporate plan: measure and assess the department’s performance in achieving its purposes.
Performance criteria set out in the PBS and PAES: measure and assess the impacts of the department’s programs.

Level 3: outputs and immediate outcome
Performance criteria set out in the corporate plan: measure and assess the effectiveness and efficiency of the department’s activities and their components.
Performance criteria set out in the PBS and PAES: measure and assess the effectiveness and efficiency of the department’s subprograms and their components.

Source: Department of Industry, Innovation and Science (2017), DIIS Performance Measurement and Reporting Framework, p.3
The level 1 and 2 performance criteria are used to monitor key trends and conditions within
the areas of the department’s responsibility. These levels of criteria will generate performance
information to achieve an improved understanding of ‘where we are’ and ‘where we need to
take action’ in our effort to achieve the department’s outcome, vision and purposes.
The level 3 performance criteria are used to measure the effectiveness and efficiency of the
department’s activities and components, such as policy advice, initiatives, services, projects
and administered programs. Through this level of performance criteria, we can assess and
report on the department’s contributions to achieving the outcome, vision and purposes
attributable to specific activities and components.
The department sets out its vision and four strategic priorities in the Strategic Plan 2016-20.
The department’s vision is to enable growth and productivity for globally competitive
industries. The priorities are:
supporting science and commercialisation
growing business investment and improving business capability
simplifying doing business
building a high performance organisation.
Impact of evaluation activity
Good performance information will draw on multiple sources that offer different perspectives
on the achievement of a program’s objectives. The performance story of a program is likely
to be best supported through a diverse set of measures.
Evaluations provide a balanced performance story through their incorporation of program logic models and assessment against outcomes. They provide meaningful information and evidence on how effectively and efficiently a component has pursued its aim and purpose, and on the activities directed at that purpose. Evaluations provide an opportunity to look
beyond performance monitoring and reporting and consider how well the program is
achieving its outcomes.
The department responds to growing demand for evidence-based analyses of policy and
program impacts by applying robust research and analytical methods, both quantitative and
qualitative, to determine and isolate what works in industry, innovation and science policies
and programs.
What is evaluation?
Evaluation is an essential part of policy development and program
management. The continual questioning of what we are trying to achieve
and how we are performing enables us to learn and improve what we do,
ensuring that decision-making is informed by the best available evidence.
Policy and program evaluations involve collecting, analysing, interpreting and communicating
information about the performance of government policies and programs, in order to inform
decision-making and support the evolution of programs.
Evaluation helps to answer questions such as:
is the policy contributing to the intended outcomes and are there any unintended
outcomes?
are there better ways of achieving these outcomes?
how have programs been implemented?
how are programs currently tracking?
what has been the impact of the program?
is the policy still aligned with government priorities, particularly in light of changing
circumstances?
should the current program be expanded, contracted or discontinued?
is there a case to establish new programs?
can resources be allocated more efficiently by modifying a program or a mix of
programs?1
Evaluation is integral to continual improvement. It is not a one-off or ‘tick the box’ exercise.
Evaluation supports:
Evidence-based Policy Development
‒ better informed decision-making
‒ a stronger basis for informing government priorities
‒ more efficient resource allocation
Learning
‒ shared learning to improve policy development and future program design and
delivery
‒ a culture of organisational learning within the department
Public Accountability
‒ the public accountability requirements of program sponsors and governments
‒ the department’s risk-management processes, helping to encourage greater public
trust in government
Performance Reporting
‒ the analysis and assessment of balanced and meaningful performance information
to report on progress in achieving strategic outcomes
‒ an enhanced ability to achieve government priorities.
1 Davis G & Bridgman P (2004), Australian Policy Handbook, pp.130-131
Good evaluation practices
If evaluations are to be valuable to decision-makers across government,
consistency in approach and planning are required. Evaluations should be
conducted to a standard that ensures the information is credible and
evidence-based.
The summary below outlines the key principles used to guide evaluation in the department.2
Evaluations should be…

Integrated
evaluation is core business for the department and is not simply a compliance activity
evaluation planning is undertaken at the new policy proposal (NPP) stage and completed early in the design of programs
evaluation results are communicated widely and inform decision-making and policy development.

Fit for purpose
the scale of effort and resources allocated to an evaluation is proportional to the value, impact, strategic importance and risk profile of a program
the evaluation method is selected according to the program lifecycle, feasibility of the method, availability of data and value for money.

Evidence-based
the department applies robust research and analytical methods to assess impact and outcomes
collectors of administrative data strive to attain baseline measurements and trend data in forms that are relatable to external data sets.

Timely
evaluation planning is guided by the timing of critical decisions to ensure sufficient bodies of evidence are available when needed.

Transparent
all evaluation reports are communicated internally unless there are strong reasons to limit circulation
the department will move towards publishing more content externally to strengthen public confidence and support public debate.

Independent
evaluation governance bodies have a level of independence from the responsible policy and program areas
evaluators should be independent of the responsible program and policy areas.

2 Adapted from Department of the Environment (2015), Evaluation Policy, p.7
The role of the Evaluation Unit
The Office of the Chief Economist (OCE) is responsible for providing objective, robust and
high quality economic analysis to inform policy development. The Evaluation Unit (the Unit)
is located in the Insights and Evaluation Branch (the Branch) of the OCE, providing a level of
independence from policy and program areas. The increased accountability and scrutiny
under the PGPA Act further reinforce the critical role of the Evaluation Unit as the authoritative
source for guidance on evaluation.
The Branch produces evaluation reports and research output to measure the impact of
programs and gain a better understanding of business characteristics, behaviour and
performance. Amongst other activities, the Branch specialises in the econometric analysis of
various dynamics of industry and firm performance, such as assessing the impact of
participation in the department’s programs. The Evaluation Unit applies mixed methods
research approaches combining quantitative and qualitative research methods. The Unit also
draws upon general and sector-specific expertise from the OCE and other areas of the
department.
Working in collaboration with policy and program staff, the Unit is tasked with assessing the
performance of the department’s programs and providing evidence to influence future policy
and program decisions.
The Unit maintains the department’s four-year Evaluation Plan and reports progress against
the Plan to the department’s Executive.
The Evaluation Unit is responsible for:
conducting or contributing to evaluations of departmental programs (post-
commencement, monitoring and impact evaluations)
scheduling and prioritising evaluations through the Evaluation Plan
providing expert advice and guidance to program and policy areas in planning and
conducting evaluations including developing templates and guidance materials
strengthening the department’s capability for evaluative thinking
supporting programs to be Evaluation Ready, including developing:
‒ program logic models
‒ data matrices
‒ a program evaluation strategy
providing advice as members of Reference Groups, including drafting Terms of
Reference
maintaining a library of completed evaluations and sharing report findings to inform
future policy and program design.
Support from the Evaluation Unit is provided when:
‒ costing evaluations during NPPs
‒ undergoing Evaluation Ready
‒ planning for an evaluation
‒ conducting an evaluation.
Differences with other areas
Evaluation and Audit — what is the difference?
The roles of evaluation and audit are quite different. Both are important for ensuring
accountability and contribute to performance reporting, but they approach it from different
angles. Both are fundamental links in the accountability chain and contribute to better
program management and endeavour to identify better practice. They share similar
approaches and analytical techniques.
However:
Audits are more closely aligned with compliance and risk.
Evaluations have a strong focus on assessing policy design and program effectiveness
(including measuring impact and return on investment), while audits are focussed on
assessing the administration of a program (including checking compliance against
regulations and considering risk management).
Evaluations may look at the appropriateness of government policy (including whether
the government was justified in intervening in the market). Audits do not.
Evaluation findings can be provided directly to policy and program managers to assist
with program design and delivery. Audits are usually provided to audit committees (or
directly to the Parliament in the case of the Australian National Audit Office) to ensure
independence.
Assurance and Audit Committee
The department’s Assurance and Audit Committee was established in accordance with the
PGPA Act and provides independent advice and assurance to the Executive on the
appropriateness of the department’s accountability and control framework, independently
verifying and safeguarding the integrity of the department’s financial and performance
reporting.
The Annual Audit Plan provides an overview of the delivery of internal audit services, which
include General audits, ICT audits, Management Initiated Reviews and Assurance Advisory
Services.
Internal audit provides an independent and objective assurance and advisory service to the
Secretary. This ensures the financial and operational controls designed to manage the
organisation's risks and achieve its objectives are operating in an efficient, effective and
ethical manner. Internal audit assists the Executive and senior managers to discharge their
responsibilities effectively and improve risk management, control and governance including
business performance. Internal audit also advises the Assurance and Audit Committee
regarding the efficient, effective and ethical operation of the department.
The department’s four-year Evaluation Plan is provided to the Assurance and Audit
Committee for noting annually. Final evaluation reports will be provided to the Committee for
information.
Program Assurance Committee
The department’s Program Assurance Committee (PAC) provides the Executive Board with
assurance in relation to programs within its remit. This includes program monitoring and
reviewing. The PAC supports Accountable Officers and Senior Responsible Officers to drive
excellence in program design and delivery by providing an open forum for exchanging and
sharing ideas.
The PAC facilitates peer learning, including risk and evaluation planning for programs.
Evaluation activities are important to allow the PAC to monitor the performance of programs
against key performance indicators and identify significant or systemic program issues. As a
result, the PAC is a significant stakeholder in Evaluation Ready and evaluations.
Program tiering
The department prioritises evaluation effort and resourcing based on the following criteria:
total funding allocated for the program
internal priority (importance to the department’s and Australian Government’s goals)
external priority (importance to external stakeholders)
overall risk rating of the program
track record (previous evaluation, the strength of performance monitoring and lessons
learnt).
The department’s four-year Evaluation Plan
The department has a strategic, risk-based, whole-of-department approach to prioritising
evaluation effort. The scale of an evaluation should be proportionate to the size, significance
and risk profile of the program (sometimes referred to as ‘fit for purpose’). Evaluative effort
and resources should not be expended beyond what is required to satisfy public
accountability and the needs of decision-makers.
The department’s Evaluation Plan covers a four-year period (over the forward estimates)
using the tiering system to identify evaluations of the highest priority and strategic importance.
Elements of the Evaluation Plan will be published externally by the department, including in
the Corporate Plan and Annual Performance Statement.
The Evaluation Plan is developed in consultation with divisions, using the above criteria as a
guide to how and when evaluations should be conducted. To reduce duplication and leverage
effort, the department takes account of audit and research activity when developing its
Evaluation Plan.
2. Evaluation Ready
Evaluation Ready
Planning for evaluation at an early stage helps identify the questions an evaluation will need
to address and when, so that meaningful data can be collected to measure a program’s
outcomes and impact. The Evaluation Unit facilitates this planning as it works with policy and
program delivery areas during the Evaluation Ready process (see Figure 3). Evaluation Ready creates a monitoring and evaluation framework to ensure
programs are prepared for future evaluations and helps instil an evaluative mindset from the
outset.
At the heart of each program is a ‘theory of change’ by which policy planners determine the
outcomes sought and how that change can be achieved.
Each of the following components of the Evaluation Ready process is crucial to ensuring a
program is adequately prepared for future evaluations:
setting out the program’s theory of change (the program logic model)
identifying key evaluation questions, indicators and data sources (the data matrix)
selecting appropriate types and timing of future evaluations (combined with the logic
model and data matrix in the program’s evaluation strategy).
Both program and policy areas should be involved in Evaluation Ready.
The program logic, data matrix and a program’s evaluation strategy are approved at the Executive Level (EL) as they are completed. The General Manager of the policy area
then approves a program’s evaluation strategy as a whole following the Evaluation Unit’s
signoff. The strategy is then presented to the PAC for noting.
All Evaluation Ready templates can be found on the internal Evaluate website.
Program logic model
A program logic model is the visual representation of a program’s theory of change and
underlying assumptions.3 It describes how an intervention contributes to a chain of results
flowing from the inputs and activities to achieve short, medium and long-term outcomes.
Figure 2: Program logic model components
Notes: Amended from the department's program logic model template
3 W.K. Kellogg Foundation (2004), Logic Model Development Guide
The Evaluation Unit guides policy and program delivery areas to articulate the underlying
theory of change which translates into the logic model. Underpinning the program logic are
four factors:
1. the situation, which describes the need for government intervention, such as a market
failure
2. the objectives which will address the need for the program
3. the assumptions that were made as part of the theory of change
4. external factors which could influence the performance and outcomes of the program.
The Evaluation Unit uses a rubric (see Appendix A) to assess whether program logics are appropriate before moving on to further stages of Evaluation Ready.
Data matrix
A data matrix is a tool for organising evaluation questions and sub-questions and developing
plans for collecting the information needed to address them.4
The Evaluation Unit provides support for:
developing evaluation questions from the program logic
identifying performance indicators and data sources
articulating data collection responsibilities and timeframes
identifying the required data, including limitations of particular sources.
Program managers need to be aware of any administrative data they wish to capture prior to
developing service documents (such as program guidelines, application forms and progress
reporting) for the program.
Program evaluation strategy
A program’s individual evaluation strategy outlines the rationale for future evaluation activity
over the specified years and captures all the Evaluation Ready materials in a consolidated
document. For each program, the strategy should cover:
reasons behind particular types and timings of evaluation activity planned
scope of each evaluation
the relevant risks for the evaluation.
The program logic and data matrix will be included as attachments.
Tier One programs are expected to have a more detailed and comprehensive evaluation
strategy including consideration of the strengths and limitations of available indicators and
data sources.
Completing an evaluation strategy is important to retain corporate knowledge and ensure
future staff have access to the thinking behind evaluation planning for their program. The
Evaluation Unit prepares a program’s evaluation strategy with input from the relevant policy
and program areas. The evaluation strategy is subsequently endorsed by the General
Manager of the policy area and as noted above, it is the responsibility of the policy area to
present the program’s evaluation strategy to the PAC for noting.
4 University of Wisconsin (2002), Enhancing Program Performance with Logic Models
Figure 3: Process map for completing Evaluation Ready
Source: Department of Industry, Innovation and Science (2017)
3. Evaluations
Approaches to evaluation
The decision to conduct an evaluation is strategic rather than simply routine.
Decision-makers need to think through what they want an evaluation to
address and when an evaluation should occur. Evaluations should be
appropriate to the particulars of a given program; they are not a ‘one size fits
all’ arrangement.
Evaluation activity has different purposes at different points in the program lifecycle. All policy
and program areas need to consider evaluation requirements from the early policy and
program design stage, ideally at the NPP stage. Policy and program delivery staff should
consult the Evaluation Unit at this stage to develop their thinking about future evaluations of
the program and provide preliminary costings for future evaluations.
The selection of an evaluation method should take into account the program lifecycle and
feasibility of the method, the availability of data and value for money.
The types of evaluations the Evaluation Unit conducts are:
post-commencement evaluations
monitoring evaluations
impact evaluations.
A typical evaluation takes six months from commencement to finalisation.
The table below shows the fundamental issues to consider in determining the scale of an
evaluation. There may also be times where Cabinet or legislation determine the type of
evaluation and when it should be conducted.
Table 2: Characteristics which determine a program’s tier and likely characteristics of the associated evaluation

Tier One
Characteristics of program: significant funding; highest risk; strategically significant; may be a Flagship program; high public profile and expectations; politically sensitive.
Likely characteristics of evaluation: formal process; extensive consultation; high resource allocation; central agencies may be involved; wide public release.
Evaluation Unit role: the Evaluation Unit leads the development of methodology/terms of reference; the Reference Group should be chaired by the General Manager of the Insights and Evaluation Branch or the Chief Economist; an independent evaluator could be internal or external to the department.

Tier Two
Characteristics of program: moderate funding; medium risk; new or untried program that requires testing of assumptions and/or data; medium level of strategic importance; moderate public profile and expectations.
Likely characteristics of evaluation: greater level of data collection and analysis; multiple evaluation points during the development and implementation; regular process reporting.
Evaluation Unit role: the Evaluation Unit may lead the development of methodology/terms of reference; the Reference Group should be chaired by the General Manager of the Insights and Evaluation Branch; the evaluator is likely internal to the department.

Tier Three
Characteristics of program: relatively small funding or single payment grants; low risk; lesser strategic importance; not widely publicised; similar to other programs that have been subject to evaluation activity.
Likely characteristics of evaluation: informal process; can be completed internally; limited data requirements; low resource allocation; limited consultation; low profile release.
Evaluation Unit role: the Evaluation Unit may be consulted on the development of methodology/terms of reference; upon request, the Evaluation Unit may provide evaluation advice; an independent evaluator could be internal or external to the department.

Source: Department of Industry, Innovation and Science (2017)
Evaluation questions
An important part of planning for evaluations is to determine which evaluation questions need
to be addressed during the life of a program. The department’s approach to evaluation
activities takes into account both the program evaluation hierarchy of Rossi, Lipsey and
Freeman and the Department of Finance Expenditure Review Principles.5
The grouping of evaluation questions allows evaluation activities to focus on specific domains
of the hierarchy. This should be considered when planning evaluations, as part of a program’s
evaluation strategy and scheduling evaluations on the department’s Evaluation Plan.
Figure 4: Types of evaluations the department conducts in relation to program evaluation hierarchy
Source: Department of Industry, Innovation and Science
One year into a program, a post-commencement evaluation can be conducted. At the 18–24 month mark, the data collection and performance measures of a program can be tested through a monitoring evaluation. Depending on the nature and timeframes of the program, an impact evaluation can be conducted after three to five years.
Ultimately, however, the choice of focus areas for an evaluation should reflect the needs of
the program and policy areas. The evaluation Reference Group will decide which focus
areas are appropriate before signing off on the Terms of Reference for each evaluation.
Post-commencement evaluation
Post-commencement evaluations ‘check in’ on a program soon after its commencement. This
type of evaluation focuses on the initial implementation, design and delivery, to identify issues
early on. Post-commencement evaluations typically occur 12 months post-program
implementation and focus on reporting to internal stakeholders. Program managers have a responsibility to make the best use of public resources to ensure their programs achieve their outcomes. Post-commencement evaluations provide recommendations for decision-makers to take corrective action early on.
5 Rossi P, Lipsey M, and Freeman H (2004), Evaluation: a systematic approach, Sage Publications; Department of Finance (2013), Expenditure Review Principles
Monitoring evaluation
Monitoring evaluations draw on performance information to monitor a program’s progress, typically 18-24 months after program implementation. They are usually suited to programs
which are at a ‘business as usual’ stage in the program lifecycle and look at both short-term
and medium-term outcomes. A monitoring evaluation provides an opportunity to test the
program’s data sources, to see whether they are providing the required performance
information. This provides an indication of performance, contributes to the measurement of
the department’s strategic objectives and forms a basis for future reviews. Monitoring
evaluations are primarily intended for internal stakeholders but can include external
stakeholders.
Impact evaluation
Impact evaluations are usually large and more complex evaluations, which allow for the
assessment of a program’s performance. They assess the impact of a program and may
determine its value for money. Where possible they would test this against a ‘counterfactual’:
they seek to compare program outcomes with a prediction of what would have happened in
the absence of the program and may include research about program alternatives to allow
comparison of results. They may involve a cost-effectiveness or cost-benefit analysis.
Impact evaluations often use a mixed-method research approach to find objectively verifiable
results and contribute to the measurement of the department’s strategic objectives and
outcomes. These evaluations commonly occur at least three years post-program
implementation and measure the medium-term and long-term outcomes. Impact evaluations
should be published externally to strengthen public confidence and support public debate.
An economic impact analysis investigates the impact a program or intervention has on
participants, via quasi-experimental techniques. Similar to an impact evaluation, program
participant performance (at the firm-level) is compared to a counterfactual or comparison
group (to accurately measure attribution) in a range of measures including business turnover,
employment, capital expenditure and survival rates. In contrast to impact evaluations,
economic impact analysis typically does not account for broader social and environmental
impacts.
Responsibility for conducting evaluations
Priority, scale and methodology will inform who will conduct an evaluation. Subject-matter or
technical expertise should also be considered, as should resource availability, time and cost.
Options include:
engaging the Evaluation Unit to conduct the evaluation (subject to timing and
resourcing considerations)
seconding policy or program staff into the Evaluation Unit
engaging an external consultant or academic.
There should be a level of independence from the areas responsible for policy and program
delivery. For evaluations of lesser strategic importance or terminated programs, this could be
through advice from the Evaluation Unit. Consultancies should be managed by the Evaluation
Unit, for greater independence. Seconding individuals into the Evaluation Unit from the policy
and program areas is a viable option to provide some independence, build capability and
alleviate resourcing constraints. See Table 2 for further details on the Evaluation Unit’s
involvement with different tiered programs.
It is essential that evaluation funding is quarantined for planned evaluations and not
redirected to other purposes.
Business Grants Hub
The Digital Transformation Agenda, announced as part of the 2015-16 Federal Budget,
includes a Streamlining Grants Administration Program to improve the way grants are
delivered across the Australian Government. AusIndustry, the department’s service delivery
division, has implemented a Business Grants Hub.
The department’s Evaluation Unit provides evaluation services to programs on-boarded
through the Business Grants Hub on a cost-recovery basis. These services assist policy
planners and program managers in planning program performance measurement and
conducting independent evaluations, which support policy and program improvements and
provide accountability and transparency for government investment.
Evaluation Ready is required for all on-boarded programs
All programs managed by the Business Grants Hub are required to be Evaluation Ready (see
Section 2: Evaluation Ready for further information). At the end of the Evaluation Ready
service, clients will have:
completed a program logic model and data matrix (performance indicators and data
collection methods)
determined the type and timing of evaluation(s) relevant for the program
identified evaluation questions for future evaluations
consolidated all these components into an evaluation strategy.
Evaluations conducted by the Evaluation Unit
Business Grants Hub clients may opt to engage the Evaluation Unit to undertake an
independent evaluation of their program. Refer to Section 3: Evaluations for the types of
evaluations the Unit conducts. If these additional services are not selected, the policy partner
retains the responsibility to ensure an evaluation is undertaken in accordance with the
Commonwealth Grants Rules and Guidelines.6
6 Department of Finance (2017), Commonwealth Grants Rules and Guidelines
Governance of evaluations
Office of the Chief Economist
For independence, the OCE both owns and is responsible for approving Tier One and Tier
Two evaluation reports conducted by the Evaluation Unit.
The General Manager, Insights and Evaluation Branch or the Chief Economist signs off on
the report.
For Tier Three programs, the Evaluation Unit’s role is limited to providing advice and guidance
if requested.
Approval through the OCE provides a level of independence from policy and program
areas.
Policy and program delivery areas
Policy and program delivery areas are major stakeholders for evaluation activities. Relevant
General Managers typically participate in an evaluation Reference Group to provide area-
specific perspectives and advice. Their support during the evaluation process is critical. Their
management comments in response to findings and recommendations are attached to the
final evaluation report and they attend the relevant Executive Board meeting which considers
endorsement of recommendations.
The Accountable Officer is responsible for implementing the recommendations endorsed by
the Executive Board.
Policy and program delivery areas sit on the Reference Group.
Reference Groups
All Tier One and Tier Two evaluations conducted by the department are guided by a
Reference Group. Unlike a Steering Committee, a Reference Group does not formally sign
off on the evaluation report.
Reference Groups will typically be chaired by the General Manager of the Insights and
Evaluation Branch. The Chief Economist may chair the Reference Group in specific cases.
The Reference Group comprises representatives of the evaluated program and policy areas.
Membership of a Reference Group reflects the role, rather than the individual’s expertise or
experience. Staff with specific expertise or knowledge may be invited to serve in an advisory
capacity on a case-by-case basis.
A Reference Group’s key functions are to:
approve the Terms of Reference of an evaluation
provide input and oversight during the evaluation process.
A Reference Group typically meets three times, to: approve the evaluation Terms of
Reference; discuss preliminary findings and recommendations; and provide feedback on the
draft evaluation report. Further meetings can be scheduled if required.
Reference Groups approve an evaluation’s Terms of Reference.
Program Assurance Committee
Programs that have completed Evaluation Ready need to provide their evaluation strategy to
the PAC for noting. The Evaluation Unit provides bi-monthly reports to the PAC on the
Evaluation Ready status of programs.
Early findings and recommendations of evaluations conducted by the Evaluation Unit are
socialised with the PAC for discussion. Upon endorsement of an evaluation report by the
department’s Executive Board, the final evaluation report is also provided to the PAC for
noting.
The PAC provides comments on early findings and recommendations of evaluations.
Executive Board
Chaired by the Secretary, the Executive Board is responsible for the overall governance,
management, policy leadership and strategic direction of the department. It has responsibility
for oversight of the department’s evaluation activity, including:
considering the evaluation report for approval, including whether the department will
implement the recommendations
determining whether the report will be published publicly in its entirety or executive
summary only.
Program and Policy SES are invited to attend the relevant Executive Board meeting and
provide management comments.
The Executive Board determines whether the department will publish the evaluation report and whether recommendations are implemented.
Figure 5: Governance structure of Tier One and Tier Two evaluations conducted by the Evaluation Unit
Notes: EB Executive Board; GM General Manager; IEB Insights and Evaluation Branch; PAC Program Assurance Committee; RG Reference Group; SES Senior Executive Service
4. Learning and capability
Lessons learnt
Policy making is a process of continuous learning, rather than a series of
one-off, unrelated decisions. Effective use of organisational knowledge in
policy development enables policy makers to learn from previous successes
and failures to develop better policy. Program evaluations provide the
evidence base to inform best practice expenditure of public funding and the
development of policy.7
Evaluations increase understanding of the impact of government policy, programs and
processes, and form just one of the key sources of performance information that help the
department to assess whether it is achieving its strategic priorities. Along with research and
audit findings, the outcomes from evaluations are a valuable resource; they support evidence-
based policy and the continual improvement and evolution of programs.
Organisational learning uses past experiences to improve policy, recognising that the
government may repeatedly deal with similar problems. Developing a culture of
organisational learning can make an organisation more responsive to the changes in its
environment and facilitate adaptation to these changes.8
It is expected that evaluation findings will be communicated widely across the department,
particularly to inform decision-making, with resulting recommendations acted upon routinely.
It is also expected that evaluation findings and emerging trends are captured, reported and
communicated, and brought to the attention of the Executive Board as appropriate. Evaluation
findings will also be disseminated to PAC, the Assurance and Audit Committee and the
Minister’s Office.
Completed evaluations
To improve the sharing of evaluation findings and make them accessible across the
department, all evaluations commissioned or undertaken by the department will be accessible
internally through a completed evaluations library. All completed evaluations will contain a
one page lessons learnt summary. The completed evaluations library will be maintained by
the Evaluation Unit and program and policy staff should provide copies of completed
evaluations to ensure the collection is comprehensive. By default, all evaluation reports will
be published internally.
The completed evaluations library provides significant insight to the approaches used to
design policy and implement departmental programs. It captures the lessons learnt from
completed evaluations of programs, so future programs can learn from past experience –
whether positive or negative.
Presentations
The Evaluation Unit can communicate the findings and recommendations of evaluations
conducted by the Unit. This includes:
all staff presentations
specific presentations for the evaluated program or policy areas.
7 Department of Industry and Science (2014), Policy Development Toolkit
8 Ibid
Evaluation capacity building
Building capacity and capability in performance measurement and
evaluation is not limited to technical skills and knowledge. Performance
measurement and evaluation need to be integrated into the way we work
and think.
Fostering a culture of evaluative thinking
As we are called to adapt to changing economic and policy environments, measuring how we
are performing and providing credible evidence becomes paramount. This cannot be
achieved without a shift to a culture of evaluative thinking and continuous improvement.
Organisational culture significantly influences the success of evaluation activity and requires
strong leadership. This department is building a supportive culture, led by the Executives,
that encourages self-reflection, values results and innovation, shares knowledge and learns
from mistakes.
Without such a culture, evaluation is likely to be resisted, perceived as a threat rather than
an opportunity or treated as a compliance exercise.
To develop a culture of evaluative thinking the department requires:
a clear vision for evaluation and continuous improvement
clear responsibilities and expectations to empower staff, along with appropriate training
and guidance material
knowledge-sharing and tolerance for mistakes to encourage learning and improve
performance
a culture of reward to showcase effective evaluations
support for the outcomes of robust evaluation to build trust, welcoming the identification
of problems or weaknesses.9
Building capability
A culture of evaluative thinking and capability building go hand in hand — both are required
to achieve a high level of evaluation maturity within a high-performing organisation.
Conducting an evaluation requires significant knowledge, skill and experience. The
department is committed to building performance measurement and evaluation capability and
technical skills to support staff in planning and conducting evaluations and undertaking
performance monitoring.
Learning continues for staff in the Evaluation Unit and across the department in specialised
evaluation techniques and methods. The Evaluation Unit is made up of evaluation
professionals who are members of the Australasian Evaluation Society (AES) and other
professional organisations. The Evaluation Unit encourages its staff to undertake or maintain
formal training in evaluation and related areas.
The role and responsibilities of the Evaluation Unit include building capability through
providing expert advice and guidance, and ensuring the department is meeting its external
reporting accountabilities.
9 ACT Government (2010), Evaluation Policy and Guidelines
Gaining evaluation experience
Staff often increase their evaluation awareness and knowledge during Evaluation Ready
workshops. For further hands-on experience in evaluation, formal arrangements can be made
for staff of evaluated programs to be seconded into the Evaluation Unit for the duration of the
evaluation. At times, opportunities to work in the Evaluation Unit may be available on the
Skills Marketplace and the Unit encourages rotation expressions of interest from graduates.
Supporting guidance material
The Evaluation Unit has developed comprehensive guidance material to support on-the-job
learning. The topics covered range from planning for an evaluation to how to conduct an
evaluation or develop a Terms of Reference. The material is designed to be used in
conjunction with advice available from the Evaluation Unit.
The Evaluation Unit offers targeted learning on program logic and developing performance
measures, as part of Evaluation Ready. The Unit also contributes to policy development and
program management training in the department.
Evaluation maturity
Developing and maintaining evaluation maturity is an ongoing process that must be balanced
with other organisational objectives. This Strategy establishes a framework to guide the
department through the stages of maturity which encompass good evaluation practices.10
To establish a baseline from which we can identify strengths and weaknesses and priorities
for improvement, the Evaluation Unit has assessed the department’s current evaluation
maturity. While it is following best practice in some elements of evaluation maturity, overall it
is between the ‘developing’ and ‘embedded’ stages of maturity.11
10 The Evaluation Maturity Matrix is adapted from ACT Government (2010), Evaluation Policy and Guidelines, p.17
11 Department of Industry, Innovation and Science (2017), Evaluation Strategy Post-Commencement Review
Table 3: The department’s Evaluation Maturity Matrix

Integrated
Beginning: Awareness of the benefits of evaluation is low. Evaluation is seen as a compliance activity and threat. Fear of negative findings and recommendations leads to a perception of ‘mandatory optimism’ regarding program performance. Insufficient resources allocated to evaluation activities. Evaluation and performance measurement skills and understanding limited, despite pockets of expertise.
Developing: Appreciation of the benefits of evaluation improving. Evaluation is being viewed as core business for the department, not simply a compliance activity. A culture of evaluative thinking and continual improvement is introduced and communicated across the department. Skills in performance measurement and evaluation developed through targeted training and guidance materials. Evaluation website and guidance materials developed. The role of the Evaluation Unit is widely communicated. Unit seen as the authoritative source for advice. Developing further expertise in the Evaluation Unit.
Embedded: A culture of evaluative thinking and continual improvement is embedded across the department, with lessons learnt being acted upon. Evaluation is seen as an integral component of sound performance management. General evaluation skills widespread. Improved skills and knowledge in developing quality performance measures. Evaluation Unit team members have high order skills and experience which are leveraged by the department. Evaluation Unit team members hold and are encouraged to undertake formal qualifications in evaluation and related subjects.
Leading: Evaluations motivate improvements in program design and policy implementation. Demonstrated commitment to continuous learning and improvement throughout the agency. Department is recognised for its evaluation and performance monitoring expertise, and innovative systems and procedures.

Fit for purpose
Beginning: Frequency and quality of evaluation is lacking.
Developing: Guidelines for prioritising and scaling evaluation activity are used.
Embedded: Priority programs are evaluated. Evaluations use fit for purpose methodologies.
Leading: Evaluation effort is scaled accordingly. Specialist and technical skills well developed to apply appropriate methodologies.

Evidence-based
Beginning: Data holdings and collection methods are insufficient or of poor quality.
Developing: Planning at program outset improves data holdings and collection methods. Developing skills and knowledge in applying robust research and analytical methods to assess impact and outcomes. Quality of evaluations is improving.
Embedded: A range of administrative and other data is used in the assessment of performance. Robust research and analytical methods are used to assess impact and outcomes. Evaluations conform to departmental standards.
Leading: The department continually develops and applies robust research and analytical methods to assess impact and outcomes. Evaluation and performance measurement conform to recognised standards of quality.

Timely
Beginning: Effort and resources are allocated in an ad hoc and reactive manner with little foresight. Developing performance information at the inception of a program is ad hoc and of variable quality.
Developing: Evaluation activity is coordinated. An evaluation plan is in place and regularly monitored. Strategically significant and risky programs are prioritised. Planning for evaluation and performance monitoring is being integrated at the program design stage. All programs are assessed for being Evaluation Ready.
Embedded: The department employs strategic risk-based, whole-of-department criteria to prioritise evaluation effort. Evaluation plans are updated annually and progress is monitored on a regular basis. Planning for evaluation and performance measurement is considered a fundamental part of policy and program design. All programs have program logic, performance and evaluation plans in place.
Leading: The department’s approach to evaluation and performance planning is seen as the exemplar. All programs have been signed off and are Evaluation Ready.

Transparent
Beginning: Findings and recommendations held in program and policy areas. No follow up on the implementation of recommendations.
Developing: Findings and recommendations viewed as an opportunity to identify lessons learnt. Evaluations are available in the completed evaluations library to improve the dissemination of lessons learnt and inform policy development.
Embedded: Findings widely disseminated and drive better performance. Website and guidance materials are a valuable resource for staff. Evaluation findings and reports are published where appropriate.
Leading: Findings are consistently used to optimise delivery and have influence outside the department.

Independent
Beginning: Independent conduct and governance of evaluations is lacking. Evaluations are conducted and overseen by the policy or program areas responsible for delivery of the program.
Developing: There is an improved level of independence in the conduct and governance of evaluations.
Embedded: All evaluations include a level of independence.
Leading: Evaluations conducted by the Evaluation Unit are viewed externally as independent.

Source: ACT Government (2010), Evaluation Policy and Guidelines
Reviewing the Evaluation Strategy
This Strategy will be periodically reviewed to assess whether it is meeting the needs of the
department. The measures of success will include that it is:
consistent with the PGPA Act
efficiently allocating evaluation effort
leading to more effective conduct of evaluations
fostering a culture of evaluative thinking
ultimately contributing to more effective programs.
Results of the review will be communicated to the Executive Board. The review will include an assessment of the department’s level of evaluation maturity two years on, a needs assessment and concrete examples of progress.
Appendix A
Program logic rubric
Each section of the program logic is assessed against three standards: requires improvement, satisfactory, and good. The ‘good’ standard includes all ‘satisfactory’ criteria plus those listed under ‘good’.

Overall
Requires improvement: The logic linking activities/outputs to outcomes is not convincing. Arrows not well matched to timescale. Theory of change ill-defined or not evidence-based. Not comprehensive across the columns. Some components incorrectly placed in columns. Doesn’t fit on one page.
Satisfactory: Adequately represents the views of the main stakeholders: policy, program and Evaluation Unit. The theory of change is clear and indicated by arrows. The outcomes are realistic relative to the inputs and activities (not changing the world). Uses active, not passive voice. The focus is evaluative rather than promotional. All components are in correct columns. Outputs and/or outcomes are linked to activities. The logic linking activities/outputs to outcomes is plausible. Fits on one page. Has been cleared/approved at GM level or other where appropriate. Has been presented to PAC for noting.
Good: The template has been adapted to a sensible extent to capture differences between programs. A key is provided where useful/applicable. Acronyms are explained. Isn’t cluttered, with a suitable level of detail. The logic linking activities/outputs to outcomes is based on evidence.

Inputs and participation
Requires improvement: Is either not comprehensive or is inaccurate in relation to inputs, stakeholders. Omits staffing and/or administered funding. Lists government under participation (unless the program targets government as the beneficiary).
Satisfactory: Inputs section includes staffing. Inputs section includes formal external inputs where the department is not the sole funder. Funding for inputs is broken down by administered and departmental, where known. Inputs section includes a clear timeframe for funding, either across the lifetime of the program or other clear timeframes. Participation section identifies target recipients for the program. The focus is on beneficiaries, not deliverers of it, such as government. If many participants, these are grouped into logical subgroups.
Good: Includes in-kind inputs where relevant. Clarifies target market — distinguishes between primary and secondary beneficiaries. Participation is represented so as to align with activities and outcomes. Includes all stakeholders impacted, not just program participants. Concise.

Activities and/or outputs
Requires improvement: Too much detail on generic administration processes such as for granting programs. Outputs are confused with or substitute for outcomes. Activities don’t link to outputs and outcomes.
Satisfactory: Identifies who does what to whom. Separates Commonwealth and participant activities as necessary. Shows ordering of key activities and links to outcomes. Activities/outputs are directly related to objectives and can be monitored and assessed. Avoids too much detail on generic administration processes such as for granting programs.
Good: Uses action verbs to identify activities. Outcomes are informed by evidence and experience / lessons learnt.

Outcomes
Requires improvement: Outcomes are not comprehensively identified. Outputs are confused with outcomes. No theory of change (no connecting links between boxes or every box connects to every other box). Outcomes are aspirational and/or not able to be assessed. Simply restates policy objectives. Doesn’t consider short/medium/long-term outcomes. Links between shorter and longer-term outcomes aren’t convincing. Outcomes are out of proportion to inputs.
Satisfactory: Identification of outcomes is suitably comprehensive. Articulates who the outcomes relate to (who is benefiting/being affected). Uses evaluative, not promotional language. Language is proportional increase and not just number. Provides realistic timeframes for outcomes. Uses SMART indicators.1 Outcomes that can’t be measured are clearly indicated. Outcomes align with objectives. Outcomes are well connected with a logical flow from short-term to long-term. Demonstrates logic links and clearly articulates anticipated changes. Doesn’t restate activities/outputs. Links between shorter and longer-term outcomes are plausible.
Good: Uses feedback loops if appropriate. Marks external factors and assumptions in links. Outcomes link backwards to outputs and activities. Links such as between shorter and longer-term outcomes are based on evidence.

External factors and assumptions
Requires improvement: Not included or not clearly identified. Not supported by evidence.
Satisfactory: Key external factors and assumptions identified.
Good: Assumptions supported by evidence/theory of change and risks. Informed by lessons learnt. Assumptions comprehensively state the conditions required for the program to function effectively.

Notes: 1 SMART Specific, Measurable, Attainable, Relevant, and Time-bound
Source: Department of Industry, Innovation and Science (2017)