
SPO Internship @ UVER
5-11 November 2007
Tiziana Tamborrini
UVER Effectiveness Evaluation
An Original Blend of Outcome Evaluation and Implementation Analysis
Legal Framework
The Presidential Decree (D.P.R. 20.02.1998, n. 38):
• Provides UVER with the legal basis for Outcome Evaluation.
• Specifies the object of the evaluation activity, with reference to investment programs and projects carried out by Public Agencies (Regioni, Comuni…) and funded with financial resources.
• Mentions the need to check the socio-economic effects produced by these public interventions, to be evaluated against the planned targets and objectives and in line with the cost estimates.
• Attributes a guidance role to UVER, in order to promote and suggest specific initiatives to be adopted by the Administrations as a consequence of the evaluation activity.
Legal Framework
The CIPE (Interministerial Committee for Economic Planning) Resolution n. 12/2006 states the following:
– The Effectiveness Evaluation carried out by UVER is an important step towards a more rational and efficient use of public resources, providing guidance to decision-making for development policies.
Evaluation - Object
• Outputs: the direct outcomes, Goods and Services, of the funded activity, corresponding to what are also called the Operational Objectives;
• Results: what the funded activity has set out to achieve, according to the Specific Objectives of the Project;
• Impacts or Effects: the long-term and indirect effects on the socio-economic context, related to the Global Objectives of the Program.
Evaluation - Level
• The UVER Effectiveness Unit is in charge of Project- and Program-level Outcome Evaluation;
• Outcome Evaluation has so far been carried out on Projects, and specifically on Public Investments in Infrastructure;
• The Evaluation Activity is carried out on Interventions financed by the Additional National Public Resources within the Fund for Underutilized Areas (FAS).
Key Answers for a Key Evaluation
Why are we evaluating?
• To assess whether, and to what extent, the community of stakeholders and beneficiaries – the final users of the intervention's services – is satisfied with the achieved results;
• To reward best practices and stigmatize bad practices, in order to improve planning and design and give feedback and guidance to the decision-making process.
What do we need to know?
• The achieved outputs and outcomes of Projects, in order to compare results with the planned targets;
• The implementation process, in order to check the governance of service delivery;
• Not only the achieved outputs and outcomes, but also an APPRAISAL of the extent to which the project contributes to the program's general goals.
UVER Effectiveness Unit Vision of Evaluation
The adopted theoretical background (see the presentation on the Background Theory of Evaluation) operationalizes a broader and more complex view of Evaluation which:
• Points to an 'ad hoc – participatory – context-based' application of Evaluation Theory, based not on one 'best' method but on a unique mix of Evaluation Methods;
• Focuses not only on the last part of the results chain (immediate outputs, outcomes and final impacts), but on the chain itself: the design, the implementation process, the context where the process takes place and, finally, the 'social' utility of projects/programs;
• Looks for a statistical-quantitative appraisal of how results are causally linked, but also for a qualitative-participatory process of evaluation where all possible sources of information are key and essential to a multidimensional and systemic evaluation.
Different Approaches at Work
• The practice of Evaluation also shows that there is Not One 'Best' Approach to Evaluation, but A Variety of Evaluation Approaches (Martini, 2006).
• Depending on the answers to the two key questions, any one of them, or a mix of them, can be the suitable Evaluation Approach.
A Mixed Approach to Evaluation
Outcome Evaluation for Project Accountability
• Evaluation is intended as evaluating project delivery and target achievement.
• In this case the users come from outside the program operation – the key stakeholders – and the object unit of evaluation is not a single specific organization but a more complex public program of investments.
• Evaluation here aims to account for the main results to the key stakeholders; therefore Evaluation aims not only to compare the results with specific targets, but also to provide the community with a general and thorough description of what has been done.
• Evaluation evokes a Transparent, Responsible, Accountable process of analyzing the success of the public intervention and delivering evaluation results.
A Mixed Approach to Evaluation
Evaluation as Implementation Research
• Understanding how the Project has been implemented.
• Evaluation here is meant as the process of going inside the implementation process to shed light on critical issues and inconsistencies.
• Knowing why a project achieves its goals is more important than just knowing that it does. It evokes the Governance of Implementation Policy.
• What are the critical components/activities of this project (both explicit and implicit)?
• How do these components connect to the goals and intended outcomes of this project?
• What aspects of the implementation process are facilitating success or acting as stumbling blocks for the project?
A Mix of Tools for Different Evaluation Purposes
• Performance indicators
• The logical framework (logframe) approach
• Theory-based evaluation
• Formal surveys
• Rapid appraisal methods
• Participatory methods
• Impact evaluation techniques
Performance Indicators
• Performance indicators are measures of inputs, processes, outputs,
outcomes, and impacts for development projects, programs, or
strategies. When supported with sound data collection—perhaps
involving formal surveys—analysis and reporting, indicators
enable managers to track progress, demonstrate results, and take
corrective action to improve service delivery.
• Participation of key stakeholders in defining indicators is important
because they are then more likely to understand and use indicators
for management decision-making.
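A minimal sketch of the calculation implied above, assuming a hypothetical indicator with a baseline, a target and a measured actual value; this is illustrative only, not UVER's actual tooling.

```python
# Minimal sketch (not UVER's system): how far a performance indicator has
# progressed toward its planned target, relative to a baseline.

def achievement_rate(actual: float, target: float, baseline: float = 0.0) -> float:
    """Share of the planned change (baseline -> target) actually achieved."""
    planned_change = target - baseline
    if planned_change == 0:
        raise ValueError("target must differ from baseline")
    return (actual - baseline) / planned_change

# Hypothetical example: a project planned to serve 2,000 users starting from
# a baseline of 500, and currently serves 1,400.
print(f"{achievement_rate(actual=1400, target=2000, baseline=500):.0%}")  # 60%
```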
References
• World Bank (2000). Key Performance Indicator Handbook. Washington, D.C.
• Hatry, H. (1999). Performance Measurement: Getting Results. The Urban Institute, Washington, D.C.
The Logical Framework Approach
• The logical framework (LogFrame) helps to clarify the objectives of any project, program, or policy.
• It helps identify the expected causal links—the “program logic”—in the following results chain: inputs, processes, outputs (including coverage or “reach” across beneficiary groups), outcomes, and impact.
• It leads to the identification of performance indicators at each stage in this chain, as well as risks which might impede the attainment of the objectives.
• The LogFrame is also a vehicle for engaging partners in clarifying objectives and designing activities. During implementation, the LogFrame serves as a useful tool to review progress and take corrective action.
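As an illustration of the results chain the LogFrame describes, the hypothetical sketch below encodes stages with their indicators and risks; the field names and example entries are invented, not taken from the Logframe Handbook.

```python
# Illustrative only: one way to encode a LogFrame results chain, attaching
# performance indicators and risks to each stage (all names are invented).
from dataclasses import dataclass, field

@dataclass
class LogFrameStage:
    name: str                       # e.g. "inputs", "outputs", "impact"
    description: str
    indicators: list = field(default_factory=list)
    risks: list = field(default_factory=list)

logframe = [
    LogFrameStage("inputs", "FAS funding and works contracts",
                  indicators=["committed budget (EUR)"],
                  risks=["delays in fund transfers"]),
    LogFrameStage("outputs", "km of road completed",
                  indicators=["km built vs. planned"],
                  risks=["extra-order changes"]),
    LogFrameStage("outcomes", "reduced travel time for residents",
                  indicators=["average travel time (minutes)"],
                  risks=["traffic demand lower than forecast"]),
]

for stage in logframe:
    print(f"{stage.name}: {', '.join(stage.indicators)} | risks: {', '.join(stage.risks)}")
```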
References
• World Bank (2000). The Logframe Handbook, World Bank: http://wbln1023/OCS/Quality.nsf/Main/MELFHandBook/$File/LFhandbook.pdf
• GTZ (1997). ZOPP: Objectives-Oriented Project Planning: http://www.unhabitat.org/cdrom/governance/html/books/zopp_e.pdf
Formal Surveys
• Formal surveys can be used to collect standardized information from a carefully selected sample of people or households. Surveys often collect comparable information for a relatively large number of people in particular target groups (NOT CARRIED OUT AT THE MOMENT). Typical uses, illustrated in the sketch after this list, include:
– Providing baseline data against which the performance of the strategy, program, or project can be compared;
– Comparing different groups at a given point in time, and changes over time in the same group;
– Comparing actual conditions with the targets established in a program or project design;
– Describing conditions in a particular community or group.
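Although the slide notes that formal surveys are not carried out at the moment, a minimal sketch of the comparisons listed above (baseline vs. follow-up in the same group, and actual conditions vs. a target) might look like this; the numbers are invented.

```python
# Invented numbers, for illustration only: baseline vs. follow-up means for
# the same group of respondents, compared with a planned target.
from statistics import mean

baseline_travel_min = [52, 48, 60, 55, 50]   # reported before the project
followup_travel_min = [40, 38, 45, 42, 39]   # same respondents, after the project
target_travel_min = 42

print("baseline mean:", mean(baseline_travel_min))     # 53
print("follow-up mean:", mean(followup_travel_min))    # 40.8
print("target met:", mean(followup_travel_min) <= target_travel_min)
```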
References
• LSMS: http://www.worldbank.org/lsms/
• Client Satisfaction Surveys: http://www4.worldbank.org/afr/stats/wbi.cfm#sds
• Citizen Report Cards: http://lnweb18.worldbank.org/ESSD/sdvext.nsf/60ByDocName/CitizenReportCardSurveysANoteontheConceptandMethodology/$FILE/CRC+SD+note.pdf
Theory-Based Evaluation
• Theory-based evaluation has similarities to the LogFrame approach but allows a much more in-depth understanding of the workings of a program or activity—the “program theory” or “program logic.” In particular, it need not assume simple linear cause-and-effect relationships.
• By mapping out the determining or causal factors judged important for success, and how they might interact, it can then be decided which steps should be monitored as the program develops, to see how well they are in fact borne out.
• This allows the critical success factors to be identified. Where the data show these factors have not been achieved, a reasonable conclusion is that the program is less likely to be successful in achieving its objectives.
• For example, the success of a government program to improve literacy levels by increasing the number of teachers might depend on a large number of factors. These include, among others, the availability of classrooms and textbooks, the likely reactions of parents, school principals and schoolchildren, the skills and morale of teachers, the districts in which the extra teachers are to be located, the reliability of government funding, and so on.
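A small illustrative sketch of the monitoring idea described above: record whether each causal factor judged critical is borne out, and flag the ones that are not. The factors paraphrase the literacy example and are purely hypothetical.

```python
# Illustrative sketch: track whether the causal factors the program theory
# treats as critical are observed; the factors paraphrase the literacy example.
critical_factors = {
    "enough classrooms available": True,
    "textbooks delivered on time": False,
    "extra teachers posted to the target districts": True,
    "government funding released as planned": False,
}

not_borne_out = [name for name, observed in critical_factors.items() if not observed]
if not_borne_out:
    print("Factors not borne out (program less likely to meet its objectives):")
    for name in not_borne_out:
        print(" -", name)
```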
References
Weiss, Carol H. (1998). Evaluation. Prentice Hall, New Jersey, Second Edition.
Weiss, Carol H. (2000). “Theory-based evaluation: theories of change for poverty reduction
programs.” In O. Feinstein and R. Picciotto (eds.), Evaluation and Poverty Reduction.
Operations Evaluation Department, The World Bank, Washington, D.C.
Rapid Appraisal Methods
• Rapid appraisal methods are quick, low-cost ways to gather the views and feedback of beneficiaries and other stakeholders, in order to respond to decision-makers' needs for information.
• They provide rapid information for management decision-making, especially at the project or program level.
• E.g.: focus groups, key informant interviews, community group interviews, direct observations, mini surveys (TO BE IMPLEMENTED FOR SPECIFIC CASE STUDIES).
References
• USAID. Performance Monitoring and Evaluation Tips, #s 2, 4, 5, 10: http://www.usaid.gov/pubs/usaid_eval/#02
• Kumar, K. (1993). Rapid Appraisal Methods. The World Bank, Washington, D.C.
Impact Evaluation Tools
• Rapid assessment or review, conducted ex post. This method can encompass a range of approaches to endeavor to assess impact, such as participatory methods, interviews, focus groups, case studies, an analysis of beneficiaries affected by the project, and available secondary data;
• Ex-post comparison of project beneficiaries with a control group. With this method, multivariate analysis may be used to control statistically for differences in attributes between the two groups – this is one way of estimating the counterfactual situation;
• Quasi-experimental design, involving the use of matched control and project (beneficiary) groups. This method involves the use of a “non-equivalent” control group, matched as closely as possible to the characteristics of the project population – either through propensity score matching or using a multivariate regression approach. This method often involves the use of large-scale sample surveys and sophisticated statistical analysis;
• Randomized design. This involves the random assignment of individuals or households either as project beneficiaries, or as a control group which does not receive the service or good being provided by the project. This is also known as the experimental method, and is used in health research, for example, in areas such as evaluating the effectiveness of new drugs and medical procedures.
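The second bullet above (ex-post comparison with a control group, controlling for observed attributes through multivariate analysis) can be sketched as a simple regression adjustment. The example below uses synthetic data and assumes NumPy is available; it is only a sketch of the idea, not UVER's procedure.

```python
# Synthetic-data sketch of a regression-adjusted ex-post comparison between
# beneficiaries and a control group (assumes NumPy; not UVER's procedure).
import numpy as np

rng = np.random.default_rng(0)
n = 400
treated = rng.integers(0, 2, n)                 # 1 = project beneficiary
pre_income = rng.normal(20.0, 5.0, n)           # observed attribute to control for
outcome = 2.0 * treated + 0.8 * pre_income + rng.normal(0.0, 2.0, n)

# Ordinary least squares of the outcome on a constant, the treatment dummy
# and the covariate; the coefficient on `treated` estimates the project effect.
X = np.column_stack([np.ones(n), treated, pre_income])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"estimated project effect: {beta[1]:.2f}  (true value in this toy data: 2.0)")
```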
UVER Effectiveness Evaluation Framework
[Framework diagram, summarized:]
• Preliminary assessment phase (Fase Istruttoria): context-based analysis; history and LogFrame of the project (objectives, specific targets, impacts); identification of stakeholders and beneficiaries.
• Implementation Evaluation: governance of the implementation process over the whole Project Cycle: Design – Implementation – Management.
• Impacts Appraisal: gauge general impacts on beneficiaries and on the context, through congruous context-based analysis, theory-based and LogFrame evaluation, stakeholder participation, and ad hoc rapid appraisal evaluation.
• Outcome Evaluation: to what extent, and with what quality, physical outputs and services have been produced compared to targets, using quantitative and qualitative statistical analysis.
Evaluation Steps:
1. Project Assignment
• A Team of Experts (ToE) starts up the process. The ToE is made up of engineers, legal experts, lawyers, architects and economists.
• The ToE initializes the Evaluation Form in the Data Base and checks the first available data;
• The ToE contacts the Evaluation Team for methodological support.
Evaluation Steps:
2. Desk Work
The ToE starts up the 'Desk Work' and builds up a participatory desk review from the very beginning, setting up contacts with key stakeholders.
Context-based model – Implementation Evaluation
Theory-based and Logframe model – Implementation Evaluation
• Track and assess deadlines;
• Assess estimated and actual costs;
• Assess previous evaluation results (Ex Ante Evaluation, Monitoring Activity);
• Track specific and general objectives and targets;
• Check the continuing relevance and consistency of the Project Design and Objectives/Targets given current needs and context.
Evaluation Steps:
2. Desk Work
• Make a history of the project;
• Highlight the logframe and the underlying 'theory':
– General program priorities and goals,
– The social-political-economic context where it was planned, and the local needs,
– The general background motivations for the project.
Evaluation Steps:
2. Desk Work
• Assess critical factors affecting implementation and effectiveness (any possible shortcoming in the design and management of the implementation process, time consistency, specific technical changes in context which have required an Extra-Order Change);
Implementation Model
• Brainstorm on shortcomings with stakeholders
Context-based and Participatory models – Implementation Evaluation
• Brainstorm on objectives and beneficiaries
Context-based and Participatory models – Outcome Evaluation
Evaluation Steps:
3. Indicators Selection
• Examine the quantification of objectives and the available data on outputs/results/impacts;
• Highlight badly defined or irrelevant indicators and speculative targets;
• Select suitable performance indicators from the Table of Indicators in the Data Base, according to the logframe built in the first step (see the Table of Indicators in Annex 1 and the sketch after this list).
Deductive-Quantitative Approach and Outcome Evaluation
• Discuss and improve the Performance Indicators set with the key stakeholders and look for available Quantitative Targets.
Participatory model and Outcome Evaluation
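A hypothetical sketch of the selection step: filter a Table of Indicators by Category of Intervention and list the recommended indicators first. The table contents and field names are invented for illustration.

```python
# Hypothetical sketch: filter a Table of Indicators by Category of Intervention,
# listing recommended indicators before optional ones (all entries invented).
indicator_table = [
    {"category": "roads", "name": "km completed", "type": "recommended"},
    {"category": "roads", "name": "average travel time", "type": "optional"},
    {"category": "water", "name": "households connected", "type": "recommended"},
]

def select_indicators(category):
    rows = [r for r in indicator_table if r["category"] == category]
    return sorted(rows, key=lambda r: r["type"] != "recommended")

for row in select_indicators("roads"):
    print(row["type"], "-", row["name"])
```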
Evaluation Steps:
4. On Site Visit Preparation
• Set up face-to-face meetings with the stakeholders;
Participatory model
• Review and contextualize the Governance Questionnaire (see link), which will provide guidance for the interviews on site;
Participatory model and Implementation/Outcome Evaluation
• Possibly organize Rapid Appraisal Methods (focus groups, key informant interviews, community group interviews, direct observations, a mini survey in the case of selected case studies).
Deductive-Quantitative Approach and Participatory model
Evaluation Steps:
5. On Site Visit
• Hold the planned meetings with the key stakeholders and beneficiaries;
• Collect data on Indicators, specifically:
– Targets;
– Actual values;
– Context values, which describe the context situation against which the changes will be measured (baselines);
• Collect all possible information and visual evidence useful for the evaluation (photographs; stakeholders' views; direct observations);
• Update the project's picture from the First Step given the on-site evidence, stakeholders' views and subjective impressions;
• Compare and adjust result expectations based on the theory-based and logframe analysis.
Context-based Participatory Approach and Outcome Evaluation
Evaluation Steps:
6. Post Visit Work and Analysis
Fill in the Numeric Fields of the Evaluation Forms with the available data on (a possible encoding is sketched below):
• Expected and actual deadlines of the different stages of the implementation process,
• Expected and actual costs,
• Targets, context values and actual values of the Performance Indicators,
• Qualitative Indicators,
• Beneficiaries: inhabitants, final and potential users.
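One possible way to encode these numeric fields is sketched below as a Python dataclass; the field names and example values are invented and do not reflect the actual Data Base schema.

```python
# Illustrative sketch only: one possible encoding of the numeric fields listed
# above (field names are invented, not the actual Data Base schema).
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class EvaluationFormNumerics:
    expected_completion: date
    actual_completion: Optional[date]   # None if the work is still open
    expected_cost_eur: float
    actual_cost_eur: float
    indicator_target: float             # planned target of a Performance Indicator
    context_value: float                # baseline against which change is measured
    actual_value: float
    inhabitants: int
    final_users: int
    potential_users: int

form = EvaluationFormNumerics(date(2005, 6, 30), date(2006, 3, 15),
                              10_000_000, 12_000_000,
                              indicator_target=2000, context_value=500,
                              actual_value=1400, inhabitants=35_000,
                              final_users=1400, potential_users=5000)
print(form.actual_cost_eur - form.expected_cost_eur)   # cost overrun in EUR
```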
Evaluation Steps:
6. Post Visit Work and Analysis
Fill in the Descriptive Fields of the Evaluation Forms:
• Description of the Intervention,
• Description of the Expected Goals,
• Description of the Implementation Process,
• Description of the Beneficiaries,
• Description of the critical factors.
Evaluation Steps:
6. Post Visit Work and Analysis
• Discuss critical cases in collegial meetings with all the colleagues and the Evaluation Team, in order to adopt a homogeneous approach to problematic interventions (highly recommended for cases at high risk of a Negative Final Assessment);
• Fill in the Governance Questionnaires: the experts of the ToE are intended to act as 'key informants', with the support of the Evaluation Team.
Evaluation Steps:
7. Empirical Analysis and Evaluation
Carried out by the Evaluation Team:
• Measure Physical Output Realization and Planned Services Achievement on the Recommended and Optional Performance Indicators;
• Gauge the extent to which the major Strategic Objectives are achieved by single projects and by geographic region/Axis, based on (see the sketch after this list):
– The collected data on Efficiency Indicators (time and costs: rate of increase/decrease of the actual value with respect to the target);
– The collected data on Quantitative Performance Indicators (rate of increase/decrease of the actual value with respect to the target);
– Qualitative Indicators;
• Elaborate the Governance Questionnaires.
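A minimal sketch of the efficiency-indicator formula mentioned above, i.e. the rate of increase/decrease of the actual value with respect to the target; the example figures are hypothetical.

```python
# Sketch of the efficiency-indicator calculation: rate of increase/decrease of
# the actual value with respect to the target (figures below are hypothetical).

def deviation_from_target(actual, target):
    """Positive = above target (e.g. a cost or time overrun), negative = below."""
    return (actual - target) / target

print(f"cost deviation: {deviation_from_target(12.0, 10.0):+.0%}")   # +20%
print(f"time deviation: {deviation_from_target(30, 24):+.0%}")       # +25%
```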
Evaluation Steps:
7. Empirical Analysis and Evaluation
• Combine the Quantitative Indicators and the Qualitative Assessment in the Evaluation Form into a Multicriteria Final Qualitative Assessment;
• Discuss the Final Assessment again in open meetings, for a final check on critical cases (Negative and Not Working Interventions);
• Elaborate the Governance Final Scores for a Final Qualitative Integrated Assessment, in order to provide a Rating of Interventions.
Evaluation Steps:
8. Finalize the Evaluation Process
• Communication of the Final Results to the Public Agencies in charge of the Project and to the Government (CIPE);
• Dissemination of the Results to stakeholders, citizens of the local communities, and the mass media.
Picture of the Final Assessment Process
[Diagram: the Final Assessment combines Performance Indicators, Time Analysis, Cost Analysis, Critical Factors and Impressions.]
Positive Assessment
• Targets on physical outputs and services have been accomplished by more than 50 percent;
• Cost estimates have been respected;
• Deadlines have been reasonably met;
• The design has proven 'robust' and 'resilient' (not incurring too many Critical Issues: Time Extensions, Extra-Order Changes).
Negative Assessment
Projects have produced no relevant effects (less than 50 percent of the target has been achieved, on average, on the recommended indicators) and have proven ineffective in the Implementation Analysis and Qualitative Appraisal. A simplified sketch of this decision rule follows below.
The Department suggests that CIPE sanction the involved Administrations by withholding from the next assignment the same amount of money as that which has been ineffectively used.
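A simplified sketch of the decision rule set out in the Positive and Negative Assessment slides; the thresholds follow the slides, while the handling of intermediate cases (which the slides leave to collegial discussion) is only a placeholder.

```python
# Simplified, illustrative decision rule; the real judgement also weighs the
# Implementation Analysis and the Qualitative Appraisal.

def final_assessment(avg_target_achievement, costs_respected,
                     deadlines_reasonable, design_resilient):
    if (avg_target_achievement > 0.5 and costs_respected
            and deadlines_reasonable and design_resilient):
        return "Positive"
    if avg_target_achievement < 0.5:
        return "Negative"
    # The slides do not define this case; in practice it goes to collegial discussion.
    return "To be discussed in a collegial meeting"

print(final_assessment(0.72, True, True, True))    # Positive
print(final_assessment(0.30, True, False, True))   # Negative
```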
Not Activated Interventions
• Justified Not Activated Interventions: the infrastructures have not been activated yet, due to acknowledged and well motivated reasons. They will be checked for Effectiveness in the next Evaluation Round;
• Not Justified and Not Activated: these receive a warning of a possible Negative Assessment if not activated within 90 days of the CIPE resolution. They follow the same procedure as the Negative projects if not activated by the extended deadlines.
Towards a Rating of the Administrations
• The Questionnaire on Governance yields a broader and more complex picture of the project, from design to results delivery.
• The Questionnaire is submitted to the Experts of the ToE, who reasonably play the role of key informants;
• It is made up of 15 questions, each of them with 5 possible items;
• Each question is related to a Governance Dimension, in accordance with the European Community and international approach to Governance: Projection, Implementation Management, Transparency, Participation, Effectiveness.
• The Final Score allows for a more graduated rating of projects (a minimal scoring sketch follows).
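A hedged sketch of how a Governance Final Score could be computed from the 15 answers grouped by dimension; the averaging rule and the example answers are assumptions, not the unit's actual scoring method.

```python
# Assumed scoring rule (simple averaging), for illustration only: 15 answers,
# each scored 1-5, grouped into the five Governance Dimensions.
from collections import defaultdict
from statistics import mean

DIMENSIONS = ["Projection", "Implementation Management", "Transparency",
              "Participation", "Effectiveness"]

# Hypothetical answers: (dimension, score on the 1-5 item scale), 3 per dimension.
answers = [
    ("Projection", 4), ("Projection", 3), ("Projection", 5),
    ("Implementation Management", 2), ("Implementation Management", 3),
    ("Implementation Management", 4),
    ("Transparency", 5), ("Transparency", 4), ("Transparency", 4),
    ("Participation", 3), ("Participation", 2), ("Participation", 3),
    ("Effectiveness", 4), ("Effectiveness", 5), ("Effectiveness", 4),
]

by_dimension = defaultdict(list)
for dimension, score in answers:
    by_dimension[dimension].append(score)

dimension_scores = {d: mean(by_dimension[d]) for d in DIMENSIONS}
final_score = mean(dimension_scores.values())
print(dimension_scores)
print(f"Governance Final Score: {final_score:.2f}")
```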
Project Rating
We consider six rating categories, ranging from highly satisfactory to highly unsatisfactory:
• Highly Satisfactory: the Project achieved at least acceptable progress toward all major relevant objectives and Governance Dimensions, and had best-practice development impact on one or more of them. No major shortcomings were identified.
• Satisfactory: the Project achieved acceptable progress toward all major relevant Governance Dimensions, given that the constraint of being effective has been satisfied. No best-practice achievements or major shortcomings were identified.
• Moderately Satisfactory: the Project achieved acceptable progress toward most of its major relevant objectives and Governance Dimensions. No major shortcomings were identified.
• Moderately Unsatisfactory: the Project did not make acceptable progress toward most of its major relevant objectives and Governance Dimensions.
• Unsatisfactory: the Project did not make acceptable progress toward any of its major relevant Governance Dimensions, given the constraint of Effectiveness.
• Highly Unsatisfactory: the Project did not make acceptable progress toward any of its major relevant objectives and Governance Dimensions.
Project Rating
• Highly Satisfactory: Gov ++, Outcome ++
• Satisfactory: Gov +, Outcome ++
• Moderately Satisfactory: Gov +, Outcome +
• Moderately Unsatisfactory: Gov –, Outcome + (or Outcome ++)
• Unsatisfactory: Gov –, Outcome –
• Highly Unsatisfactory: Gov –, Outcome – –
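The table above can be read as a simple lookup from the Gov and Outcome signs to a rating category; the sketch below uses plain ASCII signs and is only an illustration of the mapping, not the full rating procedure.

```python
# Illustration of the Gov x Outcome lookup in the table above (plain ASCII
# signs); the narrative definitions on the previous slide still apply.
RATING = {
    ("++", "++"): "Highly Satisfactory",
    ("+",  "++"): "Satisfactory",
    ("+",  "+"):  "Moderately Satisfactory",
    ("-",  "+"):  "Moderately Unsatisfactory",
    ("-",  "++"): "Moderately Unsatisfactory",
    ("-",  "-"):  "Unsatisfactory",
    ("-",  "--"): "Highly Unsatisfactory",
}

def rate(gov, outcome):
    return RATING.get((gov, outcome), "Not covered by the table")

print(rate("+", "++"))   # Satisfactory
print(rate("-", "--"))   # Highly Unsatisfactory
```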
Tools for Evaluation
• Recommended and Optional Indicators: few and homogeneous by Category of Interventions
– To make projects comparable within the same Category;
– To address good and not-so-good practices;
– To build prospective baselines (standards for comparison);
• Free Indicators, in the case of Projects with specific objectives;
• Governance Indicators.
Completamenti: Main Features and the Selected Projects Set for Evaluation
• Projects are selected from a group of 310 projects called “Completamenti” (€1.8 billion).
• It is a group of non-completed projects that in 1999 were provided with an ex ante evaluation made by UVAL.
Main Results of the 2006–2007 Evaluation Cycle
• Approximately 93 percent of the activated projects were assessed as positive (152 out of 163);
• Few best practices, but some good practices;
• 11 negative cases, which have been referred to CIPE for sanction;
• 27 Not Activated Projects (16 Justified and 11 Not Justified).
Main Shortcomings
• Half of the visited projects did not have any data on targets for the selected indicators.
• Context values were almost never available.
• In most cases it was quite difficult to track down the key stakeholders for the different stages of the project cycle.
• Deadlines were mostly missed.
PERSPECTIVES
• Improve the Project Rating System;
• Extend the Rating System to the Regions;
• Stimulate the project planning and implementation cycle:
– Preliminary evaluations need to be strengthened;
– Quantitative context values/baselines and targets for the Results Indicators need to be provided from the beginning of the project cycle;
– Monitoring of indicators has to be improved.