Evaluating CDC HIV Prevention Programs: Guidance and Data


Global Trends in HIV/AIDS
Monitoring and Evaluation
Deborah Rugg, PhD
Associate Director for Monitoring
and Evaluation
HHS/US Centers for Disease Control and
Prevention (CDC)
Global AIDS Program
Overview

Background on HIV/AIDS M&E

HIV/AIDS M&E at the National Level

Trends in Global HIV/AIDS M&E
– The need for collaboration
– UNGASS reporting
– Monitoring and Evaluation Reference Group
(MERG)
– The "3 Ones" Principle
BACKGROUND ON HIV/AIDS M&E
Monitoring and Evaluation for
Program Improvement
[Diagram: a cycle linking Reporting/Accountability, Sharing Data with Partners, and Program Improvement]
A Public Health Questions
Approach to Unifying SI/M&E
[Diagram: three guiding questions mapped to stages of the program cycle:
– Are we doing the right things? (Problem identification; understanding potential responses; INPUTS)
– Are we doing them right? (Monitoring & evaluating national programs; MONITORING ACTIVITIES)
– Are we doing them on a large enough scale? (Determining collective effectiveness; OUTCOMES & IMPACTS)]
Are interventions working/making a difference?
● Outcome Evaluation Studies
Are collective efforts being implemented on a large enough scale to impact the epidemic (coverage; impact)?
● Surveys & Surveillance
Are we implementing the program as planned?
● Outputs Monitoring
What are we doing?
● Process Monitoring & Evaluation, Quality Assessments
What interventions and resources are needed?
● Needs, Resource, Response Analysis & Input Monitoring
What interventions can work (efficacy & effectiveness)?
● Special studies, Operations research, Formative research & Research synthesis
What are the contributing factors?
● Determinants Research
What is the problem?
● Situation Analysis and Surveillance
Strategic Planning for M&E:
Setting Realistic Expectations
Monitoring and Evaluation Pipeline
[Diagram: number of projects plotted against level of M&E effort —
All projects: Input/Output Monitoring
Most: Process Evaluation
Some: Outcome Monitoring/Evaluation
Few*: Impact Monitoring/Evaluation
* Supplemented with impact indicators from surveillance data.]
M&E Indicator Pyramid:
Levels of Indicators
[Diagram: pyramid with Global Level Indicators at the apex, Country Level Indicators in the middle, and Project Level Indicators at the base]
HIV/AIDS M&E AT THE NATIONAL LEVEL
Basic Outline for a National M&E Plan

Introduction: overview of programs or interventions

Background Information: e.g., M&E resources—financial,
human, other; roles and responsibilities

Logic Model/ Results Framework
– Problem statement
– Expected outcomes/impacts
– Indicators
– Multi-year targets (measurable objectives)

Operational definitions, sources, frequency of indicator
data, method of verification/validation

Inclusion of plans for special evaluation studies
Multi-agency M&E Logic Model
[Diagram: Government, World Bank, USG, and other inputs flow into multiple programs, whose NAC/NAP outputs lead to short-term outcomes, intermediate outcomes, and long-term impacts]
Adapted from Milstein & Kreuter. A Summary Outline of Logic Models: What Are They and What Can They Do for Planning and Evaluation? CDC 2000
Fundamentals of M&E Planning at
the National Level

National governments must believe in the value of M&E

Donors’ / development partners’ external assistance
efforts are aligned with overall national or local strategies

Donor / partner funding is part of overall development
funding

Effective coordination mechanisms between partners are
essential

Transparency, trust and consultation between partners
are essential
TRENDS IN GLOBAL HIV/AIDS M&E
– Current status of HIV/AIDS M&E
– UNGASS reporting
– Monitoring and Evaluation Reference Group (MERG)
– The "3 Ones" Principle
Current Status and Challenges of
HIV/AIDS M&E

Long-term approach to development planning and funding by key players such
as government, donors

Linking of short, medium and long-term frameworks and strategies - including to
budgets

M&E systems need to encompass much more complex frameworks and
environments

Donor fatigue: too many reports, too many terms, too little feedback, too little
ownership of interventions

We spend lots of time on indicators, but evaluation is often neglected; we need
to strengthen evaluation to better understand our programs

Inadequate analysis of results / understanding of what we are actually doing
and what is or is not working; inadequate synthesis of what we learn from M&E
and adaptation of program practice accordingly

Need for harmonized and yet manageable M&E data systems
United Nations General Assembly
Special Session on AIDS
(UNGASS)*

Prior to this UNGASS report, we only compared country data
on HIV prevalence

Now there are standardized indicators for policies, funding,
services, coverage and risk reduction

Data coming directly from over 100 countries

70% of reports involved civil society, 50% involved people
living with HIV/AIDS
*UNGASS Information courtesy Paul De Lay, Director
for Evaluation, UNAIDS/Geneva (2004)
2003 UNGASS Survey

103 countries responded out of 189

Progress seen in political commitment, improved
policies, prevention efforts

Insufficient progress in human rights, human
capacity, financial resources
Challenges/Issues

Indicators are mainly for generalized epidemics

No indicators for blood safety and infections in
hospitals

Indicators for IDU and behavior change in youth need
improvement

No indicators for sex workers

Few countries could report on quality of STI treatment
Monitoring and Evaluation Reference
Group (MERG)

UNAIDS-established international M&E standards-setting
group

Members from all UN co-sponsors and international agencies

Develops M&E strategy guidelines and international
indicators as well as coordinates international M&E Technical
Assistance activities

Meets annually (with sub-committees meeting more
frequently) and generates M&E documents and other reports
available on the UNAIDS website

Involved in monitoring the implementation of the “Three
Ones” principles
The ‘Three Ones’
Key Principles
1. One agreed HIV and AIDS action
framework
2. One national AIDS coordinating
authority
3. One agreed monitoring and evaluation
framework
The Third “One”
Why is it better?

Data based on national needs rather than individual
donors

Production of high quality, relevant, accurate and timely
data

Submission of reports to international bodies under a
unified global effort

Efficient and effective use of data and resources

Allows synthesis of data from multiple sources

Greater transparency, coordination and communication
among different partners.
Principles for agreement

One M&E unit which coordinates M&E activities

One multisectoral M&E plan

One national set of standardized indicators

One national level information system

Strategic information flow from sub-national to national level

Harmonized M&E capacity building

Collective responsibility and collective achievement (attribution)
Why isn’t available data used better?

Data collection is fragmented

No single unit is responsible for compiling, analyzing and
presenting data in a cohesive whole

No budgets for analyzing and presenting data

Skills and cost needed to present data effectively are
underestimated

Most M&E staff do not know how to use data well
Key categories of information

Biologic surveillance

Policy environment

Behavioral surveillance

Resource flows data

Tracking commodities

Provision of prevention and treatment services and the coverage of
these services

Mortality and morbidity data

General health service performance

Evaluation research
Challenges for Monitoring and Evaluating ART

Need for short-term indicators (first two years of program implementation)
– 3-6 month intervals
– Including equity of access
– Survival at 6, 12, and 24 month time periods

Need to monitor long-term sustainability
– Resource flow tracking (sources, cost per unit services, costs per commodities)
– Staffing patterns
– Facilities capacity
– Systems infrastructure capacity

Need to monitor long-term impact
– ARV resistance patterns
– 5 to 10 year survival and quality of life
– System-wide impact
– Impact on incidence
– Impact on economic productivity and social sector services
Conclusions
We need to mainstream M&E at all
levels: district, state, provincial,
national, and global. We need to do this
in a credible way that includes primary
users and focuses on outcomes that are
meaningful to people on the front lines.
Conclusions
In building on each other’s strengths,
we must identify incentives and
opportunities for collaboration, with the
fundamental consensus that working
together in a harmonized manner is
better than going it alone.