Transcript Slide 1

Concepts, scope and limitations of evaluating e-science policy
Kate Barker, Jakob Edler, Kieron Flanagan
Manchester Institute of Innovation Research, Manchester
Business School
UK e-Science All Hands Meeting 2008 – "Crossing Boundaries"
10 September, Edinburgh, UK
Content
• E-science as investment in infrastructure
• Evaluation of e-science – starting from first principles
• Impact dimensions and examples from impact assessment
of EC e-infrastructures
• Knowledge dynamics and e-science
• Possible ways forward
Infrastructure as an element of ‘research capacity’
• Capacity is mobilised in the research process to
create various outcomes:
– cognitive developments (new concepts and theories)
– new data sets
– impacts upon teaching and learning
– wider impacts upon research ‘users’ and practice
– impacts on infrastructure innovation
Special features of research infrastructures – what about e-science?
• Distinctive innovation dynamics
– Most high-tech innovation is supplier-driven
– Much research infrastructure innovation is user-led and involves a
complex interaction between demanding users and potential suppliers
– Expectations of economic effects usually too high
• Distinctive funding dynamics
– Challenges of assembling infrastructure
• Small items funded through project grants
• Very large-scale infrastructure through special funding
– Researchers often complain of a 'missing middle' – a lack of funding for mid-range items (for e-science: middleware)
Special features of research infrastructures 2 – relevance for e-science?
• Challenges of funding
– Tendency to underfund the ongoing support costs of infrastructure and to underestimate the complementary investment needed to maximise the original investment (e.g. physical access)
– Opportunity costs of major investments
These issues apply to e-science as well
• Challenges of management and provision
– Often limited scope for pooling/sharing
– Sensitivity to different access and charging models
– Public vs. private users
These generic issues have some relevance for e-science
Special features of research infrastructures 3 – relevance for e-science?
• Dynamics of scientific production – the impacts of research infrastructures upon these
– Detailed histories and some anthropologies of science performed on
major research infrastructures (classic histories of CERN)
– Work on peculiar scientific output patterns from research infrastructures, e.g. telescopes
Evaluation/policy studies need these insights to avoid 'blind' evaluations.
For e-science, the transformations seem to be profound and are likely to differ between disciplines – but there is a lack of fundamental sociology/anthropology of e-science to provide a grounding for evaluations.
Evaluation of e-science - first principles...
Evaluation has a purpose...what purpose?
• Justify funding, measure impacts, learn lessons for improvement, map outcomes and transformations, benchmark performance
• Continue/terminate decision (classical evaluation)
What audience/customer?
Which level of intervention, which unit of analysis?
• A specific infrastructure, specific project or programme?
• Funding system (within one discipline), or overall funding system?
• National infrastructure policy (how well does the UK provide scientists with what they need to deliver)?
• Infrastructure policy as part of sectoral/technological/issue-driven policy
Evaluation principles - evaluability
• Are there objectives and goals to evaluate against?
• Do these need to be constructed/reconstructed?
• Are they concrete or aspirational and vague?
• Are there data/possibilities to collect data to measure
progress against goals?
• Later on – evaluation would question whether the goals
were the right ones
Impact dimensions – first principles
• Evaluating what impact, and on whom?
• Impact of e-science on... science
1. Mode of knowledge production:
• Changes in the way knowledge is produced: following a certain type of model that dominates the interpretation of what e-science is and can be?
• Organisational and behavioural adaptations (or the failure to adapt)
• Adaptations for better or worse – given that e-scientists are only part of the story?
2. Direction(s) of knowledge production: does e-infrastructure influence what is researched – and/or what is funded ('we do X and not Z because for X we need the large infrastructure and can use our data...')?
• Socio-economic impacts
– Second-order impacts of new knowledge for the creation of socio-economic effects
– Second-order impacts on ICT suppliers, spin-offs from the technology and software
Impact dimensions – first principles (contd.)
3. Efficiency of knowledge production (costs):
1. Does e-science enable cost-sharing, scale and scope effects to deliver more efficiently what needs to be delivered?
2. How to weigh this against the enormous investment and opportunity costs?
3. How to find the optimal level of investment (why 250 million on e-science)?
4. Dissemination and impact of the knowledge produced, societal and economic effects: did e-science enable or improve delivery?
5. Diffusion of the e-science technology and its complementary technologies, spill-over effects beyond (public) science
Experience – evaluation of the impacts of the EC
Research Infrastructures Programme
Study by Ramboll-Matrix consultancies, with MIoIR advising, 2007-2008 – still in progress – covering research infrastructures and e-infrastructures funded under FP6
Purpose of evaluation – policy learning for future programmes, evidence of impacts and effects upon European research, in the policy context of aims to make progress in the European Research Area and to improve the standing of European research vis-à-vis the rest of the world
Ramboll-Matrix evaluation of E-infrastructures
• Collection of data on new networks created, new areas of research opened up, new users of e-science, new standards and protocols
• Using a structured on-line questionnaire and personal interviews in which reported impacts are explored and the degree of attribution to the EC programme is estimated; economic spin-offs and social impacts are also asked about
• We avoided publication counting
• Case studies to provide more flesh and stories of paths to impact
• Typical approaches in S&T evaluation to get over the problems of under-reporting and missing paths to impact
• Typical evaluation problems encountered – going outside science becomes tenuous, lack of reliable hard data, most impacts are on the science itself
Policy level evaluation for e-science
Needs to differentiate and clarify with policy makers and stakeholders:
• Explicit goals of the e-science policy (first order, second order)
– (Re)construct them?
– Cannot evaluate against vague, aspirational policy statements
• Look for secondary and unexpected effects (on beneficiaries, on other stakeholders)
• Look for changes in behaviour (changes of practice)
• Link to other policy goals, e.g. internationalisation of science, innovation, performance of the ICT sector, creation of highly skilled people (knowledge produced as an input somewhere else? Innovation?)
• Meaning of e-science for the research and innovation system
Impact on Knowledge Production: Keep in mind the specific
“Knowledge Configurations” for different areas
Premise: the need for policy instruments (and infrastructures) differs greatly between different scientific fields and their 'knowledge configurations'.
Knowledge configurations are determined by
1. The specific characteristics of the knowledge dynamics of different
research and innovation themes / topics / areas;
2. The institutional set-up (markets; industry structures; regulation;
organisation; tradition)
3. The involved actors, their ambition, strategy and power.
When evaluating e-science: take those dimensions not only as context
variables, but as analytical units.
• Differentiate between the different science fields and their dynamics!
• Understand the changes in configurations and the meaning of the infrastructure (before new, critical e-infrastructures were used)
Source: Project Group within the PRIME Network of Excellence on 'Era-Dynamics', www.prime-noe.org
Limits and drawbacks of evaluating e-science policies
• Complexity:
• differences in knowledge configurations between areas hard to
pin down but probably essential not to miss important effects
and changes
• Data and methods:
• can indicator analysis do the job? Time lags, boundaries, relative weight of the infrastructure
• Much more...
Thank you
Contact
Kate Barker
Senior Lecturer
Manchester Institute of Innovation Research MIoIR
Manchester Business School, University of Manchester
Harold Hankins Building
Manchester, UK M13 9PL
0044 (0) 161 275-0919 5932
[email protected]