Transcript Roper_Rammer_19-03-2014

Overview of evaluation of SME policy – Why and How
Why evaluate?
• Evaluation provides:
  – valuable evidence for developing schemes to make the most of any investment
  – validation of the benefits of a scheme and its value – good news
  – hard evidence which can be used to make a case for continuing or developing a scheme in budget discussions
  – the basis for inter-scheme, inter-regional and international comparisons
Types of SME policy evaluation

Type | Title | Comments | Data needed
1 | Measuring take-up | Provides an indication of popularity but no real idea of outcomes or impact | Scheme management data
2 | Recipient evaluation | Popularity and recipients' idea of the usefulness or value of the scheme. Very often informal or unstructured. | Recipient data
3 | Subjective assessment | Subjective assessments of the scheme by recipients. Often categorical, less frequently numeric | Recipient data
4 | Control group | Impacts measured relative to a 'typical' control group of potential recipients of the scheme | Recipient and control group data
5 | Matched control group | Impacts measured relative to a 'matched' control group similar to recipients in terms of some characteristics (e.g. size, sector) | Recipient and control group data
6 | Econometric studies | Impacts estimated using multivariate econometric or statistical approaches and allowing for sample selection | Survey data on recipients and non-recipients
7 | Experimental approaches | Impacts estimated using random allocation to treatment and control groups or alternative treatment groups | Control and treatment group data
Type 2 – Quantitative monitoring
• Aim: to profile operational aspects of the scheme and provide a firm list for later impact analysis
• Key questions:
  – Who applied for the scheme?
  – How many were funded? How many went ahead?
  – How long did approval take?
  – Were all the funds allocated?
• Data collection is the responsibility of the delivering agent, and data management should be part of the service contract
• The key evaluation failure is a lack of good administrative data on the scheme
Type 3 – Qualitative monitoring
• Aim: to interpret and analyse admin data, plus perhaps some 'key informant' interviews
• Possible questions:
  – Did projects finish as expected?
  – Did support go to the expected types of firms?
  – Where are these firms located? In what industries?
  – Were applications processed fast enough?
• Might be seen as an 'interim evaluation', and is best done independently
Types 4 and 5 – Control group comparisons
• Compares the performance of recipients with a group of similar firms/individuals matched on some quantitative dimension, e.g. growth, exporting, etc. The difference in performance is attributed to the effect of the scheme. Data could be survey data or business-register-type data.
• Advantages
  – Relatively simple methodology to apply
  – Provides 'hard' data on impacts, not self-assessment
• Disadvantages
  – Often difficult to construct a relevant control group
  – Requires information on both participants and controls
  – Sometimes costly
  – Does not control for self-selection into the recipient group (see the example below)
• Operational issues
  – Analysis should be undertaken independently, commissioned by the sponsoring department
  – The key issue is the design of the control group, which needs careful thought
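
A minimal sketch of such a matched-control comparison, assuming hypothetical pandas DataFrames with 'sector', 'size' and an outcome column (all names are placeholders, not data from any of the schemes discussed here):

```python
import pandas as pd

def match_and_compare(recipients: pd.DataFrame, pool: pd.DataFrame,
                      outcome: str = "employment_growth") -> float:
    """Nearest-neighbour matching: for each recipient, pick the
    non-recipient in the same sector with the closest firm size,
    then compare mean outcomes across the two groups."""
    matched = []
    for _, firm in recipients.iterrows():
        candidates = pool[pool["sector"] == firm["sector"]]
        if candidates.empty:
            continue  # no control available in this sector
        nearest = (candidates["size"] - firm["size"]).abs().idxmin()
        matched.append(pool.loc[nearest, outcome])
    # Raw difference in means; note this does NOT remove
    # self-selection bias (that is what Type 6 methods address).
    return recipients[outcome].mean() - pd.Series(matched).mean()
```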
Type 6 – Econometric approaches
• Typically based on a survey of recipients and non-recipients. How did performance changes in recipients compare with those of similar firms, allowing for firm characteristics and selection bias? Uses regression models to identify the policy impact on performance while controlling for selection bias (see the sketch after this list)
• Advantages
  – Seen as 'best practice' methodology
  – Can control for firm/individual characteristics
  – Can control for selection bias
• Disadvantages
  – Costly, complicated and difficult to understand
• Operational aspects
  – Analysis should be undertaken independently, commissioned by the sponsoring department
  – Requires a survey of recipients and non-recipients, so costs are at least double those of self-assessment (Type 1)
  – Telephone interviews (CATI) can be very cost-effective
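
As a rough illustration of the selection-correction idea, here is a two-step 'Heckman-style' sketch; the inputs are hypothetical, and a real study would also need a carefully specified selection equation and corrected second-step standard errors:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def selection_corrected_effect(X, treated, y):
    """X: firm characteristics (n x k array), treated: 0/1 scheme
    receipt, y: performance outcome. Returns a selection-corrected
    estimate of the scheme's effect on y."""
    Xc = sm.add_constant(X)
    # Step 1: probit model of selection into the scheme.
    probit = sm.Probit(treated, Xc).fit(disp=0)
    z = probit.fittedvalues                # linear index X'gamma
    mills = norm.pdf(z) / norm.cdf(z)      # inverse Mills ratio
    # Step 2: outcome regression including a selection-correction term.
    W = np.column_stack([Xc, treated, treated * mills])
    ols = sm.OLS(y, W).fit()
    return ols.params[-2]                  # coefficient on 'treated'
```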
Comments on the LDC schemes
KORET
• Impressive scheme; the key effects are the social and economic value for assisted women
• Level 2 evaluation: self-assessment before and after
  – an appropriate approach, since a suitable control group is difficult to identify and approach
  – the counterfactual is the status quo
  – little added value in a control group approach
• Information from the existing evaluation could be used to develop an NPV-type evaluation of scheme value (see the sketch below)
• But this would underestimate the programme's effect – strong social impact and local multipliers
• Consider strategic added value
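
A minimal sketch of what such an NPV-type calculation could look like; the discount rate and benefit stream are placeholder assumptions, and as noted above the result would miss the social impact and local multipliers:

```python
# NPV-type scheme valuation over a stream of yearly net benefits
# (monetised outcomes minus scheme costs); all figures hypothetical.
def scheme_npv(net_benefits, discount_rate=0.035):
    """Discount a stream of yearly net benefits back to year 0."""
    return sum(b / (1 + discount_rate) ** t
               for t, b in enumerate(net_benefits))

# Example: year-0 cost of 100, benefits of 30 a year for five years.
print(scheme_npv([-100, 30, 30, 30, 30, 30]))  # ~35.5 at 3.5%
```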
Business Centres
• 2 alternative treatments:
  – virtual incubator (networking)
  – physical incubator
• Compare the relative growth of the two groups of entrepreneurs, controlling for firm capabilities (see the sketch below)
• Small samples suggest analysing all 5 centres together
• Compare the contribution of different types of LDC support activities
• A complex evaluation with an external control group is not worth doing
• Outcome: also consider investment
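
A sketch of that comparison, pooling all 5 centres and controlling for capabilities; the column names ('growth', 'physical', 'capabilities', 'centre') are hypothetical placeholders:

```python
import pandas as pd
import statsmodels.formula.api as smf

def compare_incubator_types(df: pd.DataFrame) -> float:
    """df: one row per entrepreneur across all 5 centres, with a 0/1
    'physical' flag (vs. virtual incubator), a capability measure and
    a centre identifier."""
    result = smf.ols("growth ~ physical + capabilities + C(centre)",
                     data=df).fit()
    # Coefficient on 'physical': growth gap between physical- and
    # virtual-incubator entrepreneurs, holding capabilities constant.
    return result.params["physical"]
```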
Initiating a Business
• The current evaluation is a good approach
  – matched control group: applicants who decided not to go forward (level 4 evaluation)
  – better control for selection bias and unobserved characteristics relevant to the decision to start a business and to the business's success
  – collect a bit more information on the applicants:
    • aspiration
    • qualification/skills
    • family/community background
    • prior employment record
  – use a selection-control model (level 6 evaluation; see the sketch below)
  – outcome variables: survival, profitability, entry into employment
  – impact period: 18 or 24 months
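
One piece of such a level 6-style analysis could be an outcome model like the sketch below: a logit of business survival on participation plus the extra applicant information listed above (a full selection-control model would add a selection equation, as in the Type 6 sketch earlier). All column names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

def survival_model(df: pd.DataFrame):
    """df: one row per applicant (participants and matched
    non-starters), with 'survived' observed after 18-24 months."""
    return smf.logit(
        "survived ~ participated + aspiration + skills"
        " + family_background + prior_employment",
        data=df).fit(disp=0)
```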
Ex-Post Evaluation of the Consulting Programme?
• It is not clear whether an evaluation is worth conducting
• It is difficult to identify a sensible control group because of the focus on firms in crisis
• Option 1: Effects of alternative treatments
  – exploit existing monitoring data
  – estimate effects of different types of consulting support while controlling for firms' initial status (= result of the diagnosis?)
• Option 2: Test for performance specificities of the target group:
  – identify the sector/size/age/regional distribution of firms receiving consulting
  – compare performance
  – differentiate by the type of consulting support provided
Evaluating the Consultancy Programme
• Develop Logic Models for each initiative:
  – output of funding activities
  – short-term, medium-term and long-term outcomes
  – contingency factors
• Establish an integrated monitoring system (see the sketch below)
  – firms approaching MAOF: motivation, source of information
  – results of the diagnosis
  – surveying firms on attitudes, behaviour and output at three points in time: at diagnosis, post-consultancy, and after 2 years (potentially as part of another diagnosis)
  – put all of this into a database
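
A minimal sketch of what such a monitoring database could look like, using SQLite; table and column names are illustrative assumptions, not an actual MAOF schema:

```python
import sqlite3

conn = sqlite3.connect("monitoring.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS firm (
    firm_id     INTEGER PRIMARY KEY,
    motivation  TEXT,    -- why the firm approached MAOF
    info_source TEXT     -- how it heard about the programme
);
CREATE TABLE IF NOT EXISTS diagnosis (
    firm_id     INTEGER REFERENCES firm(firm_id),
    diag_date   TEXT,
    result      TEXT     -- firm's initial status from the diagnosis
);
CREATE TABLE IF NOT EXISTS survey (
    firm_id     INTEGER REFERENCES firm(firm_id),
    wave        TEXT CHECK (wave IN ('diagnosis', 'post', 'year2')),
    attitudes   TEXT,
    behaviour   TEXT,
    output      REAL     -- e.g. turnover or employment
);
""")
conn.commit()
```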
Evaluating the Consultancy Programme
• Early, formative evaluation
  – sustainability effects of the new 2-year approach
  – effects of different types of consultancy on attitudes and behaviour
• Impact evaluation
  – establish a control group: match several 'twins' for each funded firm (e.g. from D&B or the business register)
  – short surveys of the control group at the beginning and end of the 2-year period (past performance, information about MAOF)
  – displacement and multiplier effects (through a second control group survey)
  – net effect: NPV based on employment differences relative to the control group (survival and growth)
Evaluating the Consultancy Programme
Now something speculative ...
• Randomisation of the allocation of consulting support (see the sketch below)
  a) firms can choose between:
     – a consulting service based on the diagnosis (and having to pay part of the consulting costs)
     – a randomly assigned consulting service (and not having to pay)
  b) some firms get a second consultancy
     – firms may choose the subject
     – they will have to pay
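
A minimal sketch of option (a)'s random allocation; the subject list and seed are illustrative assumptions:

```python
import random

SUBJECTS = ["finance", "marketing", "operations", "strategy"]

def assign_consulting(firm_ids, seed=2014):
    """Randomly allocate a consulting subject to each firm that opts
    for the free, randomly assigned service; a fixed seed keeps the
    allocation reproducible and auditable."""
    rng = random.Random(seed)
    return {firm: rng.choice(SUBJECTS) for firm in firm_ids}
```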