Transcript: Tony O'Hagan
Value of Information
Some introductory remarks by
Tony O’Hagan
Welcome!
• Welcome to the second CHEBS dissemination
workshop
• This forms part of our Focus Fortnight on “Value
of Information”
• Our format allows plenty of time for discussion
of the issues raised in each talk, so please feel
free to join in!
Uncertainty in models
• An economic model tells us the mean cost and
effectiveness for each treatment, and so gives
their mean net benefits
• We can thereby identify the most cost-effective
treatment
• However, there are invariably many unknown
parameters in the model
• Uncertainty in these leads to uncertainty in the
net benefits, and hence in the choice of
treatment
Responses to uncertainty
• We need to recognise this uncertainty when
expressing conclusions from the model
› Variances or intervals around estimates of net
benefits
› Cost-effectiveness acceptability curves for the choice
of treatment
• We can identify those parameters whose
uncertainty has most influence on the
conclusions
Sensitivity and VoI
• One way to identify the most influential
parameters is via sensitivity analysis
• This is primarily useful to see where research
will be most effective in reducing uncertainty
• A more direct approach is to quantify the cost to
us of uncertainty, and thereby calculate the
value of reducing it
• Then we should engage in further research
wherever it would cost less than the value of
the information it will yield
Notation
• To define value of information, we need some
notation
› X denotes the uncertain inputs
› E_X denotes expectation with respect to the random
quantity X
› t denotes the treatment number
› max_t denotes taking a maximum over all the possible
treatments
› U(t, X) denotes the net benefit of treatment t
when the uncertain inputs take values X
Baseline
• If we cannot get more information, we have to
take a decision now, based on present
uncertainty in X
• This gives us the baseline expected net benefit
max_t E_X U(t, X)
• The baseline decision is to use the treatment
that achieves this maximal expected net benefit
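The baseline quantity max_t E_X U(t, X) is easy to estimate by simple Monte Carlo. The sketch below uses an invented two-treatment model whose net benefits are linear in two standard normal inputs; the `net_benefit` function and all numbers are illustrative assumptions, not part of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-treatment example (invented for this sketch):
# net benefit U(t, X) is linear in two uncertain inputs X = (x1, x2).
def net_benefit(t, x):
    # x has shape (n_samples, 2); returns shape (n_samples,)
    if t == 0:
        return 10.0 + 2.0 * x[:, 0]
    return 9.0 + 3.0 * x[:, 1]

# Monte Carlo samples from the current distribution of X
X = rng.normal(loc=0.0, scale=1.0, size=(100_000, 2))

# Baseline: max over treatments of expected net benefit, max_t E_X U(t, X)
expected_nb = [net_benefit(t, X).mean() for t in (0, 1)]
baseline = max(expected_nb)
best_t = int(np.argmax(expected_nb))
print(f"baseline expected net benefit = {baseline:.3f}, treatment = {best_t}")
```

Note the order of operations: the expectation is taken first, per treatment, and the maximum afterwards — this is the decision we would take today, before any new information arrives.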
Perfect information
• Suppose now that we could gain perfect
information about all the unknown inputs X
• We would then simply maximise U (t , X ) across
the various treatments, using the true value of
X
• However, at the present time we do not know X
and so the expected achieved net benefit is
E_X max_t U(t, X)
EVPI
• The gain in expected net benefit from learning
the true value of X is the difference between
these two formulae
E_X max_t U(t, X) − max_t E_X U(t, X)
• We call this the Expected Value of Perfect
Information, EVPI
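Both terms of the EVPI difference can come from the same Monte Carlo sample: swap the order of the mean and the max over a matrix of simulated net benefits. The model below is the same invented two-treatment sketch as before, not anything from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same illustrative two-treatment model (invented for this sketch)
def net_benefit(t, x):
    if t == 0:
        return 10.0 + 2.0 * x[:, 0]
    return 9.0 + 3.0 * x[:, 1]

X = rng.normal(size=(200_000, 2))
U = np.stack([net_benefit(t, X) for t in (0, 1)])   # shape (2, n_samples)

baseline = U.mean(axis=1).max()   # max_t E_X U(t, X): average first, then max
perfect  = U.max(axis=0).mean()   # E_X max_t U(t, X): max per sample, then average
evpi = perfect - baseline
print(f"EVPI per patient = {evpi:.3f}")
```

Because maximising inside the expectation can only do at least as well as maximising outside it, the estimated EVPI is always non-negative (up to Monte Carlo noise).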
Partial information
• Now suppose we can find out the true value of
Y, comprising one or more of the parameters in
X (but not all of them)
• Then we will get expected net benefit
max_t E_{X|Y} U(t, X)
where now we need to take expectation over
the remaining uncertainty in X after learning Y,
which is denoted by E_{X|Y}
• But because at the present time we do not yet
know Y, our present expectation of this future
expected net benefit is the relevant measure
E_Y max_t E_{X|Y} U(t, X)
• To calculate this, we need to carry out two
separate expectations
• To get the value of the partial information, we
again subtract the baseline expected net benefit
Sample information
• In practice, we will never be able to learn the
values of any of the parameters exactly
• What we can hope for is to do some research
that will yield more information relevant to some
or all of X
• Let this information be denoted by Y
• Then the previous formula still holds
› Sample information is a kind of partial information
Scaling up
• All of these formulae have been expressed at a
per-patient level
• To get the real value of information, we need to
multiply by the number of patients to whom the
choice of treatment will apply
› When comparing with the cost of an experiment to
get more information now, the relevant number of
patients should be discounted over time
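The scaling-up and discounting step is plain arithmetic; a minimal sketch follows, with every figure (per-patient EVPI, patient numbers, horizon, discount rate) invented purely for illustration.

```python
# Illustrative population EVPI: per-patient EVPI scaled by discounted
# patient numbers over the technology's lifetime (all figures invented).
evpi_per_patient = 50.0          # e.g. monetary units per patient
patients_per_year = 10_000
horizon_years = 10
discount_rate = 0.035

# Sum of patient numbers, each year's cohort discounted back to the present
discounted_patients = sum(
    patients_per_year / (1 + discount_rate) ** year
    for year in range(horizon_years)
)
population_evpi = evpi_per_patient * discounted_patients
print(f"population EVPI = {population_evpi:,.0f}")
```

This population EVPI is the number to compare against the cost of the proposed research: an experiment costing more than it cannot be worthwhile even in the best case.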
Computation
• Computing EVPI is quite straightforward
› Simple Monte Carlo (MC) sampling from the
distribution of X can evaluate the baseline as well as
the perfect information expected net benefit
• Computing the value of partial or sample
information is more complex
› Two levels of sampling are needed for MC
computation
• More sophisticated Bayesian methods are
available when the model is too complex for MC
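The two levels of sampling can be sketched directly: an outer loop draws candidate true values of Y, and an inner sample handles the remaining uncertainty in X given Y. This continues the same invented two-treatment model, learning Y = x1 while x2 stays uncertain; the baseline value is plugged in analytically for this toy model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-level Monte Carlo for E_Y max_t E_{X|Y} U(t, X), with Y = x1
# in the invented model U(0) = 10 + 2*x1, U(1) = 9 + 3*x2.
n_outer, n_inner = 2_000, 2_000

inner_max = np.empty(n_outer)
for i in range(n_outer):
    y = rng.normal()                      # outer draw: candidate true x1
    x2 = rng.normal(size=n_inner)         # inner draws: remaining uncertainty
    e0 = 10.0 + 2.0 * y                   # E_{X|Y} U(0, X): no x2 dependence
    e1 = np.mean(9.0 + 3.0 * x2)          # E_{X|Y} U(1, X): inner MC average
    inner_max[i] = max(e0, e1)            # max_t of the conditional expectations

partial = inner_max.mean()                # outer average over Y
baseline = 10.0                           # max_t E_X U(t, X) for this toy model
evppi = partial - baseline
print(f"value of learning x1 = {evppi:.3f}")
```

The nesting matters: the max sits between the two expectations, so the inner average must be completed before taking the max, and a fresh inner sample is needed for each outer draw. This is why partial-information calculations cost so much more than EVPI.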