Monitoring and evaluation - Institute of Management Studies


MONITORING AND EVALUATION
OUR APPROACH
1. What is monitoring and evaluation? Conceptual differences and terminologies.
2. Approaches to monitoring and evaluation.
3. Establishing an M&E system.
4. How to do monitoring and evaluation.
WHAT IS MONITORING?
 Day-to-day follow-up of activities during implementation to measure progress and identify deviations.
 Routine follow-up to ensure activities are proceeding as planned and on schedule.
 Routine assessment of activities and results.
 Answers the question, "What are we doing?"
WHY MONITOR ACTIVITIES?
 Tracks inputs and outputs and compares them to the plan.
 Identifies and addresses problems.
 Ensures effective use of resources.
 Ensures quality and learning to improve activities and services.
 Strengthens accountability.
 Serves as a programme management tool.
WHAT IS EVALUATION?
 A time-bound exercise that attempts to assess systematically and objectively the relevance, performance and success of ongoing and completed programmes and projects.
 Designed specifically to attribute changes to the intervention itself.
 Answers the question, "What have we achieved and what impact have we made?"
 Evaluation commonly aims to determine the relevance, efficiency, effectiveness, impact and sustainability of a programme or project.
 Relevance: The degree to which the outputs, outcomes or goals of a programme remain valid and pertinent as originally planned, or as subsequently modified owing to changing circumstances within the immediate context and external environment of that programme.
 Efficiency: A measure of how economically or optimally inputs (financial, human, technical and material resources) are used to produce outputs.
 Effectiveness: A measure of the extent to which a programme achieves its planned results (outputs, outcomes and goals).
 Impact: Positive and negative long-term effects on identifiable population groups produced by a development intervention, directly or indirectly, intended or unintended. These effects can be economic, socio-cultural, institutional, environmental, technological or of other types.
 Sustainability: Durability of programme results after the termination of the technical cooperation channelled through the programme.
  - Static sustainability: the continuous flow of the same benefits, set in motion by the completed programme, to the same target groups.
  - Dynamic sustainability: the use or adaptation of programme results in a different context or changing environment by the original target groups and/or other groups.
WHY EVALUATE ACTIVITIES?
 Determines programme effectiveness.
 Shows impact.
 Strengthens financial responsibility and accountability.
 Promotes a learning culture focused on service improvement.
 Promotes replication of successful interventions.
TYPES OF EVALUATION
 Ex-ante Evaluation: An evaluation performed before implementation of a development intervention. Related term: appraisal.
 Ex-post Evaluation: A type of summative evaluation of an intervention, usually conducted after it has been completed.
 External Evaluation: An evaluation conducted by individuals or entities free of control by those responsible for the design and implementation of the development intervention to be evaluated (synonym: independent evaluation).
 Internal Evaluation: Evaluation of a development intervention conducted by a unit and/or individuals reporting to the donor, partner, or implementing organization for the intervention.
 Formative Evaluation: A type of process evaluation undertaken during programme implementation to furnish information that will guide programme improvement.
 Impact Evaluation: A type of outcome evaluation that focuses on the broad, longer-term impact or results of a programme.
 Joint Evaluation: An evaluation conducted with other partners, bilateral donors or international development banks.
 Meta-evaluation: A type of evaluation that aggregates findings from a series of evaluations.
 Process Evaluation: A type of evaluation that examines the extent to which a programme is operating as intended by assessing ongoing programme operations. A process evaluation helps programme managers identify what changes are needed in design, strategies and operations to improve performance.
 Qualitative Evaluation: A type of evaluation that is primarily descriptive and interpretative, and may or may not lend itself to quantification.
 Quantitative Evaluation: A type of evaluation involving the use of numerical measurement and data analysis based on statistical methods.
 Summative Evaluation: A type of outcome and impact evaluation that assesses the overall effectiveness of a programme.
 Thematic Evaluation: Evaluation of selected aspects or cross-cutting issues in different types of interventions.
CONFUSING TERMS
 Audit
 Appraisal
 Inspection
APPROACHES TO EVALUATION
Each approach is summarized by its major focus, the typical question it asks, and its likely methodology.

Goal Based (Strategic Approach)
 Major focus: Assessing achievement of goals and objectives.
 Typical question: Were the goals achieved? Efficiently? Were they the right goals?
 Likely methodology: Comparing baseline and progress data; finding ways to measure indicators.

Decision Making (System Approach)
 Major focus: Providing information for decisions.
 Typical question: Is the project effective? Should it continue? How might it be modified?
 Likely methodology: Assessing a range of options related to the project context, inputs, process, and product; establishing some kind of decision-making consensus.

Goal Free (Inductive Approach)
 Major focus: Assessing the full range of project impacts, intended and unintended.
 Typical question: What are all the outcomes? What value do they have?
 Likely methodology: Independent determination of needs and standards to judge project worth; qualitative and quantitative techniques to uncover any possible results.

Expert
 Major focus: Using outside expertise.
 Typical question: How does an outside professional rate this project?
 Likely methodology: Critical review based on experience, informal surveying, and subjective insights.

Participatory Approach
 Major focus: Stakeholder satisfaction.
 Typical question: How do the stakeholders rate the project?
 Likely methodology: Participatory workshops.
M&E TOOLS
 Evaluating programme strategy and direction: Logframes; Stakeholder Analysis.
 Evaluating programme management: Horizontal Evaluation; Appreciative Inquiry.
 Evaluating programme outputs: evaluating academic articles and research reports; evaluating websites; After Action Reviews.
 Evaluating outcomes and impacts: Outcome Mapping; Most Significant Change; Episode Studies.
M&E TOOLS
Following is a non-exhaustive list of M&E tools:
1. Performance indicators
2. Formal surveys
3. Rapid appraisal methods
4. Participatory methods
5. Cost-benefit and cost-effectiveness analysis
6. Impact evaluation
1. PERFORMANCE INDICATORS
Performance indicators are measures of inputs, processes, outputs, outcomes, and impacts for development projects, programs, or strategies.
 Uses:
 Setting performance targets and assessing progress toward achieving them.
 Identifying problems via an early-warning system so that corrective action can be taken.
 Problems:
 Poorly defined indicators are not good measures of success.
 A tendency to define too many indicators, or indicators without accessible data sources.
 Often a trade-off between picking the optimal or desired indicators and having to accept the indicators that can be measured with the data available.
HOW TO MAKE INDICATORS
1. Identify the problem situation you are trying to address.
2. Develop a vision for how you would like the problem areas to look. This will give you impact indicators.
3. Develop a process vision for how you want things to be achieved. This will give you process indicators.
4. Develop indicators for effectiveness.
5. Develop indicators for efficiency.
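
To make indicator tracking concrete, here is a minimal sketch of comparing indicator actuals to targets, as in the "setting performance targets and assessing progress" use above. All indicator names, baselines, and targets are hypothetical, not taken from this deck.

```python
# Sketch: compare indicator actuals to targets and flag those behind plan.
# All names and figures are hypothetical illustrations.

def percent_of_target(baseline: float, actual: float, target: float) -> float:
    """Progress achieved, as a percentage of the planned change."""
    planned_change = target - baseline
    if planned_change == 0:
        return 100.0
    return 100.0 * (actual - baseline) / planned_change

indicators = [
    # (name, baseline, actual, target)
    ("households with access to clean water (%)", 40.0, 52.0, 70.0),
    ("children fully vaccinated (%)", 65.0, 80.0, 90.0),
]

for name, baseline, actual, target in indicators:
    progress = percent_of_target(baseline, actual, target)
    note = "" if progress >= 50.0 else "  <- early warning: behind plan"
    print(f"{name}: {progress:.0f}% of planned change{note}")
```

The 50% early-warning threshold is arbitrary here; in practice it would come from the milestones in the work plan.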
2. FORMAL SURVEYS
Formal surveys can be used to collect standardized information from a carefully selected sample of people or households.
 Uses:
 Providing baseline data against which the performance of the strategy, program, or project can be compared.
 Comparing different groups at a given point in time.
 Comparing changes over time in the same group.
 Comparing actual conditions with the targets established in a program or project design.
2. FORMAL SURVEYS
 Problems:
 With the exception of the CWIQ, results are often not available for a long period of time.
 The processing and analysis of data can be a major bottleneck for larger surveys, even where computers are available.
 LSMS and household surveys are expensive and time-consuming.
 Many kinds of information are difficult to obtain through formal interviews.
DIFFERENT TYPES OF SURVEY
1. Multi-Topic Household Survey (also known as the Living Standards Measurement Survey, LSMS)
2. Core Welfare Indicators Questionnaire (CWIQ)
3. Client Satisfaction (or Service Delivery) Survey
4. Citizen Report Card
3. RAPID APPRAISAL METHODS
Rapid appraisal methods are quick, low-cost ways to gather the views and feedback of beneficiaries and other stakeholders, in order to respond to decision-makers' needs for information.
 Uses:
 Providing rapid information for management decision-making, especially at the project or program level.
 Providing qualitative understanding of complex socioeconomic changes, highly interactive social situations, or people's values, motivations, and reactions.
 Providing context and interpretation for quantitative data collected by more formal methods.
3. RAPID APPRAISAL METHODS
 Problems:
 Findings usually relate to specific communities or localities, so it is difficult to generalize from them.
 Less valid, reliable, and credible than formal surveys.
3. RAPID APPRAISAL METHODS
1. Key informant interviews
2. Community group interviews
3. Focus group discussions
4. Direct observation
5. Mini surveys
4. PARTICIPATORY METHODS
Participatory methods provide active involvement in decision-making for those with a stake in a project, program, or strategy, and generate a sense of ownership in the M&E results and recommendations.
 Uses:
 Learning about local conditions and local people's perspectives and priorities to design more responsive and sustainable interventions.
 Evaluating a project, program, or policy.
 Providing knowledge and skills to empower poor people.
4. PARTICIPATORY METHODS
 Problems:
 Sometimes regarded as less objective.
 Time-consuming if key stakeholders are involved in a meaningful way.
 Potential for domination and misuse by some stakeholders to further their own interests.
4. PARTICIPATORY METHODS
1. Participatory rural appraisal
2. Participatory monitoring and evaluation
5. COST-BENEFIT & COST-EFFECTIVENESS ANALYSIS
Cost-benefit and cost-effectiveness analysis are tools for assessing whether or not the costs of an activity can be justified by the outcomes and impacts. Cost-benefit analysis measures both inputs and outputs in monetary terms. Cost-effectiveness analysis estimates inputs in monetary terms and outcomes in non-monetary quantitative terms (a worked sketch of both calculations follows the list below).
 Uses:
 Informing decisions about the most efficient allocation of resources.
 Identifying projects that offer the highest rate of return on investment.
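
A small worked example contrasting the two calculations; all figures are invented for illustration.

```python
# Hypothetical worked example contrasting the two analyses.

# Cost-benefit analysis: inputs AND outputs valued in money.
cost = 200_000.0                 # total project cost, in currency units
monetised_benefits = 320_000.0   # outcomes valued in the same currency
benefit_cost_ratio = monetised_benefits / cost
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")  # > 1 suggests costs are justified

# Cost-effectiveness analysis: inputs in money, outcomes in natural units.
outcome_units = 1_600            # e.g. additional children vaccinated
cost_per_unit = cost / outcome_units
print(f"Cost per outcome unit: {cost_per_unit:.2f}")
```

The choice between the two usually hinges on whether outcomes can credibly be given a monetary value; when they cannot, cost-effectiveness analysis is the safer tool.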
5. COST-BENEFIT & COST-EFFECTIVENESS ANALYSIS
 Problems:
 Fairly technical, requiring adequate financial and human resources.
 The requisite data for cost-benefit calculations may not be available, and projected results may be highly dependent on the assumptions made.
 Results must be interpreted with care, particularly in projects where benefits are difficult to quantify.
6. IMPACT EVALUATION
Impact evaluation is the systematic identification of the effects, positive or negative, intended or not, on individual households, institutions, and the environment caused by a given development activity such as a program or project.
 Uses:
 Measuring outcomes and impacts of an activity and distinguishing these from the influence of other, external factors.
 Helping to clarify whether the costs of an activity are justified.
 Informing decisions on whether to expand, modify or eliminate projects, programs or policies.
 Problems:
1. Some approaches are very expensive and time-consuming.
2. Reduced utility when decision-makers need information quickly.
3. Difficulty in identifying an appropriate counterfactual (see the sketch below).
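
On the counterfactual problem: one standard technique, not named in this deck, is difference-in-differences, which uses the change in a comparison group to stand in for what would have happened without the programme. A minimal sketch, with invented figures:

```python
# Difference-in-differences: approximate the counterfactual with a comparison group.
# All figures are invented for illustration.

# Average outcome (say, monthly household income) before and after the programme.
treated_before, treated_after = 100.0, 130.0  # communities that received the programme
control_before, control_after = 100.0, 112.0  # comparable communities that did not

observed_change = treated_after - treated_before        # 30.0
counterfactual_change = control_after - control_before  # 12.0: what would have happened anyway

impact_estimate = observed_change - counterfactual_change  # 18.0 attributable to the programme
print(f"Estimated programme impact: {impact_estimate:.1f}")
```

The estimate is only credible if the comparison group would have followed the same trend as the treated group absent the intervention, which is exactly the difficulty the problem list above points to.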