Transcript slide set
Some comments on the 3 papers
Robert T. O’Neill, Ph.D.
Comments on G. Anderson
WHISH is a nice example
Randomization (Zelen) but using different
sources of data for outcomes
Outcome data: self-reported, adjudicated from
medical records, Medicare claims (hybrid – ability
to estimate sensitivity (SE) and specificity (SP))
Impact of outcome misclassification
Event data are not defined by protocol – you depend
on the health care system
Claims data DO NOT provide standardized data
– see Mini-Sentinel and OMOP
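The hybrid design noted above allows sensitivity and specificity of claims-based outcome capture to be estimated against an adjudicated subsample. A minimal sketch of that calculation, with a purely illustrative validation sample (the function name and all counts are hypothetical):

```python
# Hypothetical sketch: estimating sensitivity (SE) and specificity (SP)
# of claims-based outcome ascertainment against adjudicated records
# in a validation subsample. All numbers are illustrative.

def se_sp(records):
    """records: list of (claims_positive, adjudicated_positive) pairs."""
    tp = sum(1 for c, a in records if c and a)
    fn = sum(1 for c, a in records if not c and a)
    tn = sum(1 for c, a in records if not c and not a)
    fp = sum(1 for c, a in records if c and not a)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative sample: 8 adjudicated events, 6 flagged by claims;
# 92 non-events, 90 correctly negative in claims.
sample = ([(True, True)] * 6 + [(False, True)] * 2
          + [(False, False)] * 90 + [(True, False)] * 2)
se, sp = se_sp(sample)
print(f"SE = {se:.2f}, SP = {sp:.2f}")  # SE = 0.75, SP = 0.98
```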
Comments on A J Cook
Key component is randomization at the
patient or clinic level and use of the electronic
health record for data capture (cluster
randomization addresses different issues)
Missing data, informative censoring, switching,
measuring duration of exposure (repeat Rx, gaps),
different answers depending upon definition
Validation of outcomes makes the pragmatic trial less
simple
Only some outcomes (endpoints), populations, and
questions are addressable before complexities of
interpretation overwhelm
Comments on M Gaffney
PRECISION and EAGLE are not large simple
trials – they are large difficult trials
Outcome adjudication, monitoring strategies
Non-inferiority poses significant challenges for
pragmatic trials – generally no assay sensitivity
Margin selection based upon evidence vs.
based upon “close enough” – but not sure whether
both are equally good or bad
Other comments on NI studies
Pre-specifying the margins – why and what is the
difference in these two situations
What treatment difference is detectable and
credible, given the trade-off between bias and huge
sample size
When pre-specification is not possible because there is
no historical information, the width of the confidence
interval makes sense – but there are two possible
conclusions: both treatments the same and comparably
effective vs. both the same but both ineffective
What endpoints are eligible: hard endpoints (yes),
patient symptoms (no)
Other comments
Are NI designs appropriate for claims data or EHR
without independent all-case adjudication –
implications of poor sensitivity and specificity
driving the estimate to the null – what does a null
result mean?
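The attenuation noted above can be made concrete: under non-differential outcome misclassification, the observed risk difference is the true risk difference scaled by (SE + SP − 1), shrinking it toward the null. A minimal numeric sketch with illustrative risks and error rates (nothing here comes from the presentations):

```python
# Sketch: non-differential outcome misclassification shrinks an observed
# treatment effect toward the null. All numbers are illustrative.

def observed_risk(true_risk, se, sp):
    """Apparent event risk when outcomes are captured with sensitivity
    `se` and specificity `sp`, the same in both treatment arms."""
    return true_risk * se + (1 - true_risk) * (1 - sp)

true_rd = 0.10 - 0.05                      # true risk difference = 0.05
obs_rd = (observed_risk(0.10, se=0.6, sp=0.95)
          - observed_risk(0.05, se=0.6, sp=0.95))
# Observed RD = true RD * (SE + SP - 1) = 0.05 * 0.55 = 0.0275
print(true_rd, round(obs_rd, 4))
```

For an NI design this is the danger: the attenuated difference makes the two arms look more similar than they are, so a “null” result may reflect poor outcome capture rather than comparable effectiveness.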
Experience suggests that exposure (drugs) has better
accuracy than diagnoses or procedures (outcomes) in
claims databases
Duration of exposure depends upon
algorithms for repeat prescriptions – different
results depending upon definitions of gaps
between repeated Rx
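The point about gap definitions can be illustrated with a small interval-merging sketch: refills separated by no more than a chosen grace period are bridged into one exposure episode, and the total duration changes with that choice. The function and data are hypothetical:

```python
# Hypothetical sketch: total days of drug exposure from dispensing
# records, bridging refills whose gap is <= `max_gap` days into one
# episode. Different gap definitions yield different durations.
from datetime import date, timedelta

def exposure_days(dispensings, max_gap):
    """dispensings: list of (fill_date, days_supply), sorted by date."""
    total = 0
    start = end = None
    for fill, supply in dispensings:
        fill_end = fill + timedelta(days=supply)
        if end is not None and (fill - end).days <= max_gap:
            end = max(end, fill_end)       # bridge the gap: same episode
        else:
            if start is not None:
                total += (end - start).days  # close the previous episode
            start, end = fill, fill_end
    if start is not None:
        total += (end - start).days
    return total

# Two 30-day fills with a 10-day gap between supply end and refill.
fills = [(date(2024, 1, 1), 30), (date(2024, 2, 10), 30)]
print(exposure_days(fills, max_gap=7))   # gap breaks the episode: 60
print(exposure_days(fills, max_gap=14))  # gap bridged, one episode: 70
```

The same dispensing history yields 60 or 70 exposed days depending solely on the gap rule, which is exactly the definitional sensitivity the slide warns about.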
Can randomization overcome lack of
blinding and personal choices after
randomization
Use of observational methods of
accounting for unmeasured confounding
of assigned treatment and time to event
outcomes subject to censoring
Directed Acyclic Graphs (DAGs) to explore the
confounding–censoring problem and its diagnostics
Instrumental variables
Lessons learned from Mini-Sentinel and the
Observational Medical Outcomes Partnership (OMOP)
Distributed data models
Common data models
Limits of detectability of effect sizes of two or more
competing agents – calibration, interpretation of p-values
for non-randomized studies
Not all outcomes and exposures can be dealt with in a
similar manner
Know the limitations of your database – is this possible in
advance of conducting the study – part of the intensive
study planning, protocol, and prospective analysis plan
An example of Medicare data use, but not an RCT
Some other views and opinions on
CER using the learning health care
system
Lessons learned from OMOP and
Mini-Sentinel about observational studies
using health care claims data or EHR –
but no randomization
Lessons about the limitations of the
databases, outcome capture, ascertainment,
and missing data (confounders) are relevant to
the RCT use of the same data source
Lessons about data models, and challenges
for data and outcome standardization
http://www.mini-sentinel.org/
http://omop.org/
The Observational
Medical Outcomes
Partnership – many
findings
Some ideas on what to
evaluate about a given data
source before committing to
conducting a study – focus
on observational studies –
but also relevant to
pragmatic RCTs
How do these presentations relate to
pragmatic trials within a health care system
Two or more competing therapies on a formulary – never
compared with each other
Randomize patients under the equipoise principle – do you
need patient consent, or physician consent, if it is a health
plan and there are no data to think “I or we don’t know but
want to find out”?
Collect electronic medical record data, including
exposures and outcomes – and decide if any additional
adjudication is needed
Analyze according to best practices – but with some
prospective SAPs (statistical analysis plans) – causal
inference strategies?