Capacity development & learning in evaluation


Transcript: Capacity development & learning in evaluation

Slide 1

Capacity development & learning in evaluation
Uganda Evaluation Week
19th to 23rd May 2014


Slide 2

Contents
1. What do we mean by evaluation capacity?
2. Links to managing for results
3. Evidence from organisations that have introduced results management systems
4. The challenge of evaluation culture
5. The experience of DFID
6. Results system or results culture
7. Conclusions
2


Slide 3

What do we mean by evaluation capacity?
A process of good governance? Building blocks? Both?

• Building blocks: Individual, Organisation, Enabling environment – leads to demand, supply and use of evaluation.
• Process: capacity empowers stakeholders to question policies, practices and systems – promoting accountability, transparency & learning.

3


Slide 4

Individual Level
Individuals’ knowledge, skills and competences

• Senior management capacity for strategy & planning
• At mid-management level, understanding of the role of
evaluation as a tool for effectively achieving development
results.
• Behavioural independence and professional competences of those who manage and/or conduct evaluations.
4
Source: Adapted from Segone, 2010, Moving from policies to results by developing national capacities for country led
monitoring and evaluation systems.


Slide 5

Institutional Level
The institutional framework






• Evaluation policy exists and is implemented.
• An evaluation unit exists with clearly defined roles and responsibilities.
• Functional Quality Assurance system.
• Independence of funding for evaluations.
• A system to plan, undertake and report evaluation findings in an independent, credible and useful way exists.
• Open dissemination of evaluation results.
• Knowledge management systems in support of the evaluation function exist and are used.

5
Source: Adapted from Segone, 2010, Moving from policies to results by developing national capacities for country led monitoring and
evaluation systems.


Slide 6

Enabling Environment
A context that fosters the performance and
results of individuals and organizations
• Functioning national voluntary organisation for professional evaluation.
• National policy on evaluation.
• Strong evaluation culture.
• Public administration committed to transparency, managing for results and accountability.
• Political will to institutionalize evaluation.
• Existence of adequate information and statistical systems.
• Legislation and/or policies to institutionalize monitoring and evaluation systems.

6
Source: Adapted from Segone, 2010, Moving from policies to results by developing national capacities for country led
monitoring and evaluation systems.


Slide 7

All or nothing?
Neglect of…    Means that…
• Individual – skills are lacking at all levels – demand doesn't emerge
• Organisation – lack of structures and processes – individuals' skills are not applied
• Environment – absence of legislation & supportive policies – lack of clarity in roles and responsibilities

7


Slide 8

Results management framework
• Strategic Results Framework: objectives, indicators & strategy; roles & responsibilities.
• Programme Results Framework: results chain & theory of change; align with strategic framework; performance indicators.
• Credible measurement & analysis: measure & assess results; assess contribution to strategic objectives.
• Credible performance reporting: relevant, timely & reliable reporting.
• Use results to improve performance: adjust the programme; develop lessons & good practices.
8
Source: Itad Ltd, adapted from ‘Managing for Development Results Handbook’


Slide 9

Expectations for managers
Planning
• Understanding the theory of change
• Setting out performance expectations

Implementation
• Measure and analyse results and assess contribution

Decision-making & learning
• Deliberately learn from evidence and analysis

Accountability
• Reporting on performance achieved against expectations

9

Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR


Slide 10

Evaluations of RBM
• UNDP (2007)
• Significant progress was made on sensitising staff to results and on creating the tools to enable a fast and efficient flow of information.
• Managing for results has proved harder to achieve. A stronger emphasis on resource mobilisation and delivery, a culture fostering a low level of risk-taking, weak information systems at country level, the lack of clear lines of accountability and the lack of a staff incentive structure all work against building a strong culture of results.

• Finland (2010)
• Tools and procedures are comprehensive and well established. Good standards of project design are not consistently applied.
• Low priority given by managers to monitoring, reporting and evaluation.
Most monitoring reports were activity-based or financial and there was little
reporting against logframes.
• Managing for results depends not only on technical methodology, but also on the way the development cooperation programme is organised and managed. Finland's approach is characterised as risk-averse; there are few examples of results being used to inform policy.

10


Slide 11

‘Can we demonstrate the difference
that Norwegian aid makes?’
Overall conclusion
• Although there are some elements of good
foundations for better results measurement,
current arrangements lack the strength of
leadership, depth of guidance and coherence of
procedures necessary for effective evaluation of
Norwegian aid.
• As a result of a lack of incentives, poor processes for planning and monitoring grants, and weaknesses in the procedures for evaluations, the difference that Norwegian aid makes cannot currently be demonstrated.
ITAD Ltd (2014) Can we demonstrate the difference that Norwegian Aid makes? Evaluation of results measurement and how this can be improved
Available at: http://www.norad.no/en/tools-and-publications/publications/evaluations/publication?key=412342

11


Slide 12

What is an evaluative culture?


An organization with a strong evaluative culture:

engages in self-reflection and self-examination:
• deliberately seeks evidence
on what it is achieving, such
as through monitoring and
evaluation,
• uses results information to
challenge and support what
it is doing, and values
candour, challenge and
genuine dialogue;

engages in evidence-based learning:
• makes time to learn in a
structured fashion,
• learns from mistakes and
weak performance, and
• encourages knowledge
sharing;

encourages experimentation and change:
• supports deliberate risk
taking, and
• seeks out new ways of
doing business.

12

Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR


Slide 13

UK Department for
International Development
• A 2009 study into DFID’s evaluation reports found:
• Weaknesses were systemic in nature, linked to top management, and required a significant change in culture.
• A key overarching problem was an unduly defensive attitude to
the findings from evaluation.
• Other detailed recommendations called for:
  • evaluability issues to be considered at the planning stage;
  • training of staff;
  • strengthening the evidence base that underpins evaluations; and
  • requiring managers to make a formal response to evaluations.

13
Roger C Riddell (2009) The Quality of DFID’s Evaluation Reports and Assurance Systems. IACDI (The Independent Advisory Committee on Development Impact)


Slide 14

DFID – Highly rated for evaluability
Five features of DFID's approach combine to justify high ratings:
• Continuity of guidance from planning a project Business Case; quality assurance arrangements; evaluation policy; and evaluation training materials, with some cross-referencing.
• Recognition that a clear logic model and results based on prior evidence strengthen the quality of project design, rather than being a formality to complete a project proposal.
• Evaluability is assessed from several perspectives: expected impact and outcomes; strength of the evidence base; theory of change; and what arrangements are needed to measure, monitor and evaluate progress and results.
• Documentation includes detailed descriptions, training or self-briefing materials and examples for staff to follow.
• There is consistency of message across planning guidance, appraisal and approval, with a detailed checklist for quality assurance.
Source: ITAD Ltd (2014) Can we demonstrate the difference that Norwegian aid makes?
An evaluation of results measurement and how this can be improved. Annex 5 (available on www.norad.no/evaluation)

14


Slide 15

DFID – Embedding
Embedding: Business Cases and Evaluation advisors
Since 2011: 37 advisers in an evaluation role; 150 staff accredited in evaluation and 700 people receiving basic training.
• Significant increase in the quantity of evaluations commissioned: from 12 per year prior to 2011 to an estimated 40 completed evaluations in 2013/14.
• The embedding process has increased the actual and potential demand.
• Decisions to evaluate are now made during the preparation of Business Cases. Good for programme performance, but there is a lack of a broader strategic focus.
• The depth of this capacity is less than required, with 81% accredited to date only at the foundation or competent level.
• Gaps in capacity relate to:
• understanding why and when to commission evaluation
• enhancing the contexts of evaluations and engaging stakeholders appropriately
• selecting and implementing appropriate evaluation approaches while ensuring
reliability of data and validity of analysis
• reporting and presenting information in a useful and timely manner.
• Need to:
• strengthen evaluation governance; develop a DFID evaluation strategy
Source: DFID (2014) Rapid Review of Embedding Evaluation in UK DFID

15


Slide 16

Core quality model
[Diagram: core quality model – quality assurance (standards), technical guidance (how to do it), training, and procedures, roles & responsibilities, linked to the programme, its performance, and results & evaluability.]
16


Slide 17

DFID Learning
DFID is the highest-performing main civil service department for 'learning and development' (Cabinet Office survey).
• Evaluations are a key source of knowledge.
• 40 evaluations completed in 2013-14.
• 425 evaluations either underway or planned as at July 2013.

• Annual, mid-term and project completion reviews are an under-utilised resource.
• Staff find it hard to identify what is important and what is irrelevant.
• DFID’s ability to influence has been strengthened by its investment in knowledge.
Issues:
• Workload pressures restrict making time to learn.
• Staff often feel under pressure to be positive when assessing both current and future project performance.
• Knowledge is sometimes selectively used to support decision-making.
• Positive bias links to a culture where staff have often felt afraid to discuss failure.
• Many evaluations are not sufficiently concise or timely to affect decision-making.

Source: Independent Commission for Aid Impact (2014) How DFID Learns

17


Slide 18

UK – National Audit Office
£44m spent on government evaluation in 2010-11
Estimated 102 FTE staff working on evaluation in the government

Findings
• Significant spend.
• Coverage incomplete.
• Rationale for what the government evaluates is unclear.
• Evaluations often not robust enough to reliably identify the impact.
• Learning not used to improve impact and cost-effectiveness.

Recommendations
• Plan evaluation when designing all new policies.
• Design policy implementation to facilitate robust evaluation.
• Departments to make data available to independent evaluators for research purposes.

Source: NAO (2014) Evaluation in Government

18


Slide 19

Results system or results culture?
Many organizations have systems of results:
• A results-based planning system with results frameworks for programmes.
• Results monitoring systems in place generating results data.
• Evaluations undertaken by an evaluation unit to assess the results achieved.
• Reporting systems in place providing data on the results achieved.

But these should not be mistaken for an evaluative culture. Indeed, on
their own, they can become a burdensome system that does not help
management at all.

19

Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR


Slide 20

Measures to foster an evaluative culture


Leadership

• Demonstrated senior management leadership and commitment
• Regular informed demand for results information
• Building capacity for results measurement and results management
• Establishing and communicating a clear role and responsibilities for results
management



Organisational support structures

• Supportive organizational incentives
• Supportive organizational systems, practices and procedures
• An outcome-oriented and supportive accountability regime
• Learning-focused evaluation and monitoring



A learning focus

• Building in learning
• Tolerating and learning from mistakes
Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR

20


Slide 21

Conclusions – taking a positive view
• Evaluation is only one source of information, alongside research and implementation experience. Evaluation capacity development (ECD) needs to inform how these work together.
• Quality evaluation is built on quality planning. ECD needs to be linked to better planning systems.
• Technical skills are necessary but not sufficient.
• Effective evaluation will be determined by the culture and incentives in the organisation.
• ECD is a journey, not a destination. Systems are not static; they need continual review, learning and revision. There is no simple solution; rather, systems need to be introduced, used, tested, reviewed and then updated in a rolling cycle.

21


Slide 22

END

22