Transcript Document

Are we successfully addressing the PSHA debate?
Seth Stein
Earth & Planetary Sciences, Northwestern University
Debate: why do hazard maps sometimes (seem to) fail?
NYT 3/21/11
Explanations
1) Probabilistic approach is fundamentally sound,
big events are rare but expected “black swans”
If so, everything’s fine.
Implication: maps should not be remade after big events in low-hazard areas
2) Probabilistic approach is fundamentally flawed
- either because probabilities can’t be usefully
defined (Bayesian)
“ probability is a property of a model ... the models, unlike
the models for coin-tossing, have not been tested against
relevant data. Indeed, the models cannot be tested on a
human time scale, so there is little reason to believe the
probability estimates." (Freedman and Stark, 2003)
- or expected value of shaking not useful for design.
If so, use deterministic hazard assessment (requires assuming Mmax for large design earthquakes) without considering how rare they are (uneconomic)
3) PSHA algorithm is reasonable, but some key
parameters are poorly known, unknown, or
unknowable, leading to uncertainties and some
failures.
If so, maps can be improved by improving
parameter estimates, accepting that some can be
improved significantly even though others can’t
- What are our maps’ goals?
- How can we define & measure
success or failure?
- What can be done on short
(~ decadal) scale to improve hazard
maps?
- What can’t be done on short
(~ decadal) scale to improve hazard
maps?
Ideal criterion for total success:
After a period equal to the map’s time window (500 yr, 2500 yr, …), the observed maximum shaking at each point would be that predicted.
The actual shaking map would look like the hazard map.
Hazard map would have neither underpredicted, causing excess damage, nor overpredicted, causing excess mitigation cost
What can we say on
short timescale?
How long will it take to tell how well a model is working,
especially for significant shaking?
On short (decadal?) time scale, how can we tell how
well a model is working?
On short time scale, how can we tell whether one model
is doing better than another?
Should one update a map once new data or insights are
available even before one knows how well a map is
working?
How much do the differences matter for hazard
assessment and mitigation?
Seismological assessment of hazard maps
Various metrics could be used, e.g. compare maximum observed shaking in subregion i, x_i, to predicted maximum shaking p_i
Compute the Hazard Map Error
HME(p,x) = (1/N) Σ_i (x_i - p_i)²
and compare it to the error of a reference map produced using a null hypothesis
HME(r,x) = (1/N) Σ_i (x_i - r_i)²
using the skill score
SS(p,r,x) = 1 - HME(p,x)/HME(r,x)
Positive score if the map does better than the null
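As a concrete illustration, a minimal Python sketch of the metric above; the shaking values and the uniform null-hypothesis map are hypothetical numbers chosen for illustration, not data from any study.

```python
import numpy as np

def hazard_map_error(predicted, observed):
    # HME(p, x) = (1/N) * sum_i (x_i - p_i)^2
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.mean((observed - predicted) ** 2)

def skill_score(predicted, reference, observed):
    # SS(p, r, x) = 1 - HME(p, x) / HME(r, x); positive if the map beats the null
    return 1.0 - hazard_map_error(predicted, observed) / hazard_map_error(reference, observed)

# Hypothetical values: observed maximum shaking in four subregions (e.g. PGA in g),
# one hazard map's predictions, and a uniform null-hypothesis reference map.
observed  = [0.30, 0.10, 0.55, 0.20]
predicted = [0.35, 0.15, 0.40, 0.25]
null_map  = [0.30, 0.30, 0.30, 0.30]

print(skill_score(predicted, null_map, observed))  # > 0: map outperforms the null
```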
Societal assessment of hazard maps
Consider map as means, not end
Assess map’s success in terms of contribution to mitigation
Even uncertain or poor maps may do some good
Stein et al., 2012
Within range, inaccurate hazard maps produce nonoptimal mitigation, raising cost, but still do some good (net benefit)
Inaccurate loss estimates have the same effect
Stein & Stein, 2013
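This argument can be made concrete with a small numerical sketch in the spirit of Stein & Stein (2013): total cost is taken as mitigation cost plus expected loss, and an inaccurate hazard estimate leads to a nonoptimal mitigation level that raises the total cost but still beats doing nothing. The functional forms and all numbers below are illustrative assumptions, not values from the paper.

```python
import numpy as np

levels = np.linspace(0.0, 3.0, 301)   # candidate mitigation levels n (arbitrary units)
true_hazard = 2.0                     # assumed "true" hazard level (illustrative)

def mitigation_cost(n):
    return 1.0 * n                    # assumed: cost grows linearly with mitigation

def expected_loss(hazard, n):
    return hazard * np.exp(-n)        # assumed: loss decays exponentially with mitigation

def chosen_mitigation(assumed_hazard):
    # Pick the mitigation level that minimizes total cost for the hazard the map predicts
    total = mitigation_cost(levels) + expected_loss(assumed_hazard, levels)
    return levels[np.argmin(total)]

# Compare an accurate map with one that underpredicts hazard by 25%;
# the realized cost is always evaluated with the true hazard.
for assumed in (true_hazard, 0.75 * true_hazard):
    n_star = chosen_mitigation(assumed)
    cost = mitigation_cost(n_star) + expected_loss(true_hazard, n_star)
    print(f"assumed hazard {assumed:.2f} -> mitigation {n_star:.2f}, realized total cost {cost:.2f}")

# Doing nothing would cost expected_loss(true_hazard, 0) = 2.0, so even the
# underpredicting map yields a net benefit, just less than the accurate one.
```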
Characterizing uncertainty is crucial in attempts
to describe unknown future events.
Knight (1921) proposed that to distinguish between "the
measurable uncertainty and an unmeasurable one, we
may use the term 'risk' to designate the former and the
term 'uncertainty' for the latter."
Seismic hazard analysis follows the engineering literature in distinguishing uncertainties by their sources:
Aleatory uncertainties are due to irreducible physical
variability of a system
Epistemic uncertainties are due to lack of knowledge of the
system, and so can be reduced by more knowledge
An alternative, from risk analysis, is based on the ability to describe uncertainty mathematically
Shallow uncertainty - we don’t know what will happen, but know the odds. The past is a good predictor of the future. We can make math models (pdfs) that work well.
Deep uncertainty - we don’t know the odds. The past is a poor predictor of the future. We can make math models (pdfs), but they won’t work well.
Shallow uncertainty is like estimating the chance that a batter will get a hit. His batting average is a good predictor.
Deep uncertainty is like trying to predict the winner of the World Series next baseball season. Teams' past performance gives only limited insight into the future.
"Some of the most troubling risk management challenges of our
time are characterized by deep uncertainties.
Well-validated, trustworthy risk models giving the probabilities of
future consequences for alternative present decisions are not
available; the relevance of past data for predicting future
outcomes is in doubt;
experts disagree about the probable consequences of
alternative policies — or, worse, reach an unwarranted
consensus that replaces acknowledgment of uncertainties and
information gaps with groupthink — and policymakers (and
probably various political constituencies) are divided about what
actions to take to reduce risks...
Passions may run high and convictions of being right run deep
in the absence of enough objective information to support
rational decision analysis... ”
Cox (2012)
Seismic hazard uncertainty is typically a factor of 3-4
How much can this be reduced?
Stein et al., 2012
Cause of uncertainty: how much can it be reduced?
- Where will large earthquakes occur? Significantly on plate boundaries, little in interiors
- When will large earthquakes occur? Little if at all
- How large will they be? Significantly for lower bound, not for upper
- How strong will the shaking be? Significantly
Uncertainty from the ground motion model is reducible
Others much less or not at all
Stein et al., 2012
Mmax uncertainty reducible at lower bound, not upper
US East Coast - 10,000 synthetic earthquake histories.
Given 300 years of data, what Mmax would we observe if Mmax were really 7.0, 7.2, or 7.4?
In most cases, we would not observe the largest events and so would underestimate Mmax
The most likely Mmax to observe is ~6.6, whose recurrence time is ~ the sample length
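A minimal Monte Carlo sketch of this kind of experiment, assuming a Poisson rate of M ≥ 5 events and a Gutenberg-Richter magnitude distribution truncated at Mmax. The b-value and rate below are illustrative assumptions (the rate is chosen so an M 6.6 event recurs roughly once per 300 years, as stated above), not the values used in the actual study.

```python
import numpy as np

rng = np.random.default_rng(0)

def observed_mmax(true_mmax, years=300, rate_m5=0.13, b=1.0, m_min=5.0):
    """Largest magnitude seen in one synthetic catalog.

    Assumptions: event count is Poisson with the given annual rate of M >= m_min
    events; magnitudes follow a Gutenberg-Richter (truncated exponential)
    distribution between m_min and true_mmax.
    """
    n = rng.poisson(rate_m5 * years)
    if n == 0:
        return np.nan
    beta = b * np.log(10.0)
    u = rng.uniform(size=n)
    # Inverse CDF of the exponential magnitude distribution truncated at true_mmax
    mags = m_min - np.log(1.0 - u * (1.0 - np.exp(-beta * (true_mmax - m_min)))) / beta
    return mags.max()

for true_mmax in (7.0, 7.2, 7.4):
    maxima = np.array([observed_mmax(true_mmax) for _ in range(10_000)])
    print(f"true Mmax {true_mmax}: median observed Mmax {np.nanmedian(maxima):.2f}")

# In most synthetic histories the largest possible event is absent from the
# 300-year sample, so the observed maximum underestimates the true Mmax.
```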
Earthquake probability uncertainty in part irreducible
Analogy: Deep uncertainty in earthquake recurrence
Imagine an urn containing e balls labeled "E" for earthquake, and n balls labeled "N" for no earthquake.
We can draw balls in two ways.
Stein & Stein, 2013
Option 1: after drawing a ball, we replace it. In successive draws, the probability of an event is constant or time-independent.
Because one event happening does not change the probability of another happening, we can estimate probabilities well and an event is never overdue.
Stein & Stein, 2013
Option 2: We can add a number a of E-balls after a draw when an event does not occur, and remove r E-balls when an event occurs.
This makes the probability of an event increase with time until one happens, after which it decreases and then grows again. Events are not independent, because one happening changes the probability of another.
Stein & Stein, 2013
Problem: Given a sequence of results, it’s hard or impossible to tell how the urn was sampled.
Thus it’s hard to assess the probability of an “earthquake” in the next draw. We can quote a number, but it means little.
Stein & Stein, 2013
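A short simulation sketch of the two urn schemes; the ball counts and the numbers of E-balls added or removed are illustrative assumptions. The resulting event sequences are not obviously distinguishable from the outcomes alone, which is the point: recovering the sampling scheme, and hence the next-draw probability, from the data is difficult.

```python
import random

def draw_sequence(n_draws, e_balls=5, n_balls=95, add=0, remove=0, seed=1):
    """Simulate draws from the urn.

    add = remove = 0 gives Option 1: draws with replacement, so the event
    probability is constant (time-independent). add > 0 and remove > 0 gives
    Option 2: E-balls accumulate while no event occurs and are removed when
    one does, so the probability grows until an "earthquake", then drops.
    """
    rng = random.Random(seed)
    e, n = e_balls, n_balls
    events = []
    for _ in range(n_draws):
        event = rng.random() < e / (e + n)
        events.append(event)
        if event:
            e = max(1, e - remove)   # remove r E-balls after an event
        else:
            e += add                 # add a E-balls while no event occurs
    return events

option1 = draw_sequence(500)                     # time-independent
option2 = draw_sequence(500, add=1, remove=20)   # time-dependent (assumed a=1, r=20)
print(sum(option1), "events under Option 1;", sum(option2), "events under Option 2")
```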
GEM & similar efforts can improve hazard maps
by recognizing the uncertainties and reducing
those that are reducible
Cause of uncertainty: how much can it be reduced?
- Where will large earthquakes occur? Significantly on plate boundaries, little in interiors
- When will large earthquakes occur? Little if at all
- How large will they be? Significantly for lower bound, not for upper
- How strong will the shaking be? Significantly