
Economic evaluation of health programmes

Department of Epidemiology, Biostatistics and Occupational Health Class no. 17: Economic Evaluation using Decision Analytic Modelling III Nov 5, 2008

Plan of class

• Patient-level simulations: an example
• Assessment of uncertainty in decision analytic models
• Assessment of uncertainty due to sampling variation in individual studies

3rd alternative: patient-level simulation

• Each individual encounters events with probabilities that can be made path dependent
• Virtually infinite flexibility
• But how to “populate” all model parameters?
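The path-dependence idea can be sketched in a few lines: a toy patient-level simulation in which each patient's relapse risk depends on his or her own accumulated history. All probabilities, cycle counts, and costs below are invented for illustration; they come from no study.

```python
import random

def simulate_patient(rng):
    """Simulate one patient over 10 cycles; relapse risk rises with
    each prior relapse (path dependence). All numbers are assumed."""
    cost, relapses = 0.0, 0
    for cycle in range(10):
        cost += 100.0                         # routine care cost per cycle (assumed)
        p_relapse = 0.05 + 0.02 * relapses    # hazard depends on the patient's history
        if rng.random() < p_relapse:
            relapses += 1
            cost += 500.0                     # cost of treating a relapse (assumed)
    return cost, relapses

rng = random.Random(42)
results = [simulate_patient(rng) for _ in range(10_000)]
mean_cost = sum(c for c, _ in results) / len(results)
print(f"mean cost per patient: {mean_cost:.0f}")
```

Because each patient carries individual state, this kind of logic cannot be expressed as a fixed decision tree, and a standard Markov model would need an exploding state space to capture it.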

Example of a study using patient-level simulation: Stahl JE, Rattner D, et al., Reorganizing the system of care surrounding laparoscopic surgery: A cost-effectiveness analysis using discrete simulation, Medical Decision Making, Sep–Oct 2004, 461–471.

Study background

Base case process of care

Arriving or exiting process

Sensitivity analysis results

Average cost of patients cared for per day

Conclusions

• New system yields a lower cost per patient treated
• Slightly higher cost if patient volume is lower
• Reason: higher cost per minute is more than compensated by higher throughput of patients
• Sensitivity analyses point to robustness of the conclusion to several changes in assumptions, evaluated one at a time

Significance

• Use of a simulation model allows representation of a complex process that neither a decision tree nor a Markov model could represent
• Obtaining valid data may be an issue – the process is represented in greater detail, but on what basis are those details defined?

Dealing with uncertainty in decision-analytic models

Types of uncertainty in DAMs and how to handle them

Type of uncertainty                 Method for handling it
----------------------------------  -------------------------------------
Methodological                      Reference case / sensitivity analysis
Parameter uncertainty               Probabilistic sensitivity analysis
Modeling uncertainty – structure    Sensitivity analysis
Modeling uncertainty – process      No established method
Generalizability/transferability    Sensitivity analysis

(From Box 3.3 of Drummond et al. 2005)

Limitations of one-way sensitivity analyses

• Stahl et al. 2004 varied key parameters one at a time
• Limitations to this:
  – Conscious or unconscious bias in the selection of parameters to vary
  – Subjective interpretation – when do we conclude results are too sensitive to variation in a parameter or other feature of the model?
• Variation one at a time ignores potential interactions and also covariation
• In many DAMs there are too many parameters to be able to represent the results of such analyses meaningfully

Probabilistic sensitivity analysis

1. Represent uncertainty in parameters by means of a distribution
   • Example: a beta distribution for a parameter between 0 and 1
   • Use a joint distribution for parameters that are correlated
2. Propagate uncertainty through the model
   • Monte Carlo simulation
   • E.g., 10,000 replications, each time using a different set of randomly selected parameter values

The beta distribution
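Steps 1 and 2 above can be sketched as follows with a toy two-arm model. The Beta(40, 60) parameters (as if 40 responders had been seen in 100 patients) and all costs and effects are assumptions made up for this illustration.

```python
import random

rng = random.Random(1)
N_REPS = 10_000

# Step 1: represent uncertainty in the response probability p with a
# Beta(40, 60) distribution (hypothetical evidence: 40/100 responders).
ALPHA, BETA = 40, 60

inc_costs, inc_effects = [], []
for _ in range(N_REPS):
    p = rng.betavariate(ALPHA, BETA)        # draw one parameter value
    # Step 2: run the (toy) model with the drawn value; numbers assumed
    cost_new, cost_old = 5_000.0, 3_000.0   # fixed treatment costs
    eff_new, eff_old = p * 2.0, p * 1.0     # life-years scale with p
    inc_costs.append(cost_new - cost_old)
    inc_effects.append(eff_new - eff_old)

mean_icer = (sum(inc_costs) / N_REPS) / (sum(inc_effects) / N_REPS)
print(f"ICER at mean incremental values: {mean_icer:,.0f} $/LY")
```

Each replication stores an (incremental cost, incremental effect) pair; the spread of those 10,000 pairs, not the single summary ratio, is what conveys the parameter uncertainty.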

Probabilistic sensitivity analysis

• Present implications of parameter uncertainty
• Confidence intervals around the ICER, or around incremental net benefit
• Cost-effectiveness acceptability curve
  – May also show a scatter plot on the cost-effectiveness plane

Cost-effectiveness acceptability curves

Incremental cost-effectiveness ratio

Let C_E and C_C be the average cost per person in the experimental (E) and control (C) groups, and E_E and E_C the average values of the effectiveness measure in the two groups. Then:

ICER = (C_E − C_C) / (E_E − E_C)
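As a worked example of the formula (the cost and effect values are invented purely for illustration):

```python
def icer(cost_exp, cost_con, eff_exp, eff_con):
    """Incremental cost-effectiveness ratio: (C_E - C_C) / (E_E - E_C)."""
    return (cost_exp - cost_con) / (eff_exp - eff_con)

# Hypothetical: experimental arm costs $12,000 and yields 3.0 life-years
# on average; control costs $8,000 and yields 2.0 life-years.
print(icer(12_000, 8_000, 3.0, 2.0))  # 4000.0, i.e., $4,000 per life-year gained
```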

Representing uncertainty of the ICER

• Ratio nature complicates things
• Analytic methods exist but tend to oversimplify reality
• Bootstrapping methods are now widely used instead

Using the bootstrap to obtain a measure of the sampling variability of the ICER

Suppose we have n_EXP and n_CON observations in the experimental and control groups, respectively. One way to estimate the uncertainty around an ICER is to:
1. Sample n_CON cost-effect pairs from the control group, with replacement
2. Sample n_EXP cost-effect pairs from the experimental group, with replacement
3. Calculate the ICER from those two new sets of cost-effect pairs
4. Repeat steps 1 to 3 many times, e.g., 1,000 times
5. Plot the resulting 1,000 ICER values on the cost-effectiveness plane

See Drummond & McGuire, Eds., Economic evaluation in health care, Oxford, 2001, p. 189.
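The five steps can be sketched as follows. The patient-level cost-effect pairs are simulated here (they are not the book's data), and the plain percentile interval at the end is naive: as the slides that follow explain, it is only interpretable when the replications stay in the NE quadrant.

```python
import random

rng = random.Random(0)

# Hypothetical (cost, effect) pairs per patient in each group
control = [(rng.gauss(8_000, 1_500), rng.gauss(2.0, 0.5)) for _ in range(50)]
experimental = [(rng.gauss(12_000, 1_500), rng.gauss(3.0, 0.5)) for _ in range(50)]

def mean(xs):
    return sum(xs) / len(xs)

boot_icers = []
for _ in range(1_000):                                    # step 4: many replications
    con = rng.choices(control, k=len(control))            # step 1: resample control
    exp = rng.choices(experimental, k=len(experimental))  # step 2: resample experimental
    d_cost = mean([c for c, _ in exp]) - mean([c for c, _ in con])
    d_eff = mean([e for _, e in exp]) - mean([e for _, e in con])
    boot_icers.append(d_cost / d_eff)                     # step 3: ICER of this replication

# Step 5 would plot the (d_eff, d_cost) pairs on the CE plane; here we
# just report a naive 95% percentile interval of the replicated ICERs.
boot_icers.sort()
lo, hi = boot_icers[24], boot_icers[974]
print(f"95% percentile interval: {lo:,.0f} to {hi:,.0f} $/LY")
```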

An illustration of step 1

(Note: These are made-up data)

Going over the next steps again…
• Do exactly the same steps for data from the experimental group, independently
• Calculate the ICER from the 2 bootstrapped samples
• Store this ICER in memory
• Repeat the steps all over again
• Of course, this is done by computer. Stata is one program that can be used to do this fairly readily.

Bootstrapped replications of an ICER with 95% confidence interval Note: ellipses here are derived using Van Hout’s method and are too big; the bootstrap gives better results Source: Drummond & McGuire 2001, p. 189

2 common problems with bootstrapped confidence intervals
• The magnitude of negative ICERs conveys no useful information:
  – A: (1 LY, −$2,000): ICER = −2,000 $/LY
  – B: (2 LY, −$2,000): ICER = −1,000 $/LY
  – C: (2 LY, −$1,000): ICER = −500 $/LY
  – B is preferred, yet it is intermediate in value
• Positive ICERs from the NE and SW quadrants have opposite interpretations
  – In the NE quadrant, fewer $ for an increase in LY favors the new treatment; in the SW quadrant, fewer $ saved from a reduction in LY favors the old one
• As a result, if enough bootstrapped replications fall in quadrants other than the NE, the 95% confidence interval will be uninterpretable.

Bootstrapped replications that fall in all 4 quadrants Source: Drummond & McGuire 2001, p. 193

A solution: the cost-effectiveness acceptability curve
• Strategy: We recognize that the decision-maker may in fact have a ceiling ratio, or shadow price, R_C – a maximum amount of $ per unit benefit he or she is willing to pay
• So we will estimate, based on our bootstrapped replications, the probability that the ICER is less than or equal to the ceiling ratio, as a function of the ceiling ratio
• If the ceiling ratio is $0, then the probability that the ICER is less than or equal to 0 is the p-value of the statistic from testing the null hypothesis that the costs of the 2 groups are the same
  – Recall that the p-value is the probability of observing the difference in costs seen in the data set (or a smaller one) by chance if the true difference is in fact 0.
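The strategy above can be sketched by scoring each bootstrapped replication with its incremental net benefit, R_C·ΔE − ΔC, which sidesteps the sign and quadrant problems of the raw ratio. The bootstrapped (ΔC, ΔE) pairs below are simulated purely for illustration.

```python
import random

rng = random.Random(0)

# Hypothetical bootstrapped (incremental cost, incremental effect) pairs
reps = [(rng.gauss(4_000, 1_000), rng.gauss(1.0, 0.3)) for _ in range(1_000)]

def prob_acceptable(ceiling):
    """P(new treatment is cost-effective at ceiling ratio R_C $/unit effect):
    the share of replications with nonnegative incremental net benefit."""
    ok = sum(1 for d_cost, d_eff in reps if ceiling * d_eff - d_cost >= 0)
    return ok / len(reps)

# One point on the CEAC per candidate ceiling ratio
for r_c in (0, 2_000, 4_000, 6_000, 10_000):
    print(f"R_C = {r_c:>6} $/LY -> P(cost-effective) = {prob_acceptable(r_c):.2f}")
```

Plotting P(cost-effective) against R_C traces out the acceptability curve; the decision-maker reads off the probability at his or her own ceiling ratio.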

Cost-effectiveness acceptability curve (CEAC) Source: Drummond & McGuire 2001, p. 195