Monitoring the Quality of Operational and Semi-Operational Satellite Precipitation Estimates – The IPWG Validation / Intercomparison Study
Beth Ebert
Bureau of Meteorology Research Centre
Melbourne, Australia
2nd IPWG Meeting, Monterey, 25-28 October 2004
Motivation – provide information to...
Me! (fill in the blank spot)
Algorithm developers
How well is my algorithm performing?
Where/when is it having difficulties?
How does it compare to the other guys?
Climate researchers
Do the satellite rainfall products give the correct rain amount
by region, season, etc.?
Hydrologists
Are the estimated rain volumes correct?
NWP modelers
Do the satellite products put the precipitation in the right
place?
Is it the right type of precipitation?
Forecasters and emergency managers
Are the timing, location, and maximum intensities correct?
Web page for Australia – home
http://www.bom.gov.au/bmrc/wefor/staff/eee/SatRainVal/sat_val_aus.html
Earlier studies
GPCP Algorithm Intercomparison Programs (AIPs) and
WetNet Precipitation Intercomparison Programs (PIPs)
found:
Performance varied with sensor
Passive microwave estimates more accurate than IR and
VIS/IR estimates for instantaneous rain rates
IR and VIS/IR slightly more accurate for daily and monthly
rainfall due to better space/time sampling
Performance varied with region and season
Tropics better than mid- and high latitudes
Summer better than winter (convective better than
stratiform)
Model reanalyses performed more poorly than satellite
algorithms for monthly rainfall in the tropics, but
competitively in mid-latitudes (PIP-3)
More recent studies
Combination of microwave and IR gives further
improvement at all time scales
Good accuracy of microwave rain rates
Good space/time sampling from IR (geostationary)
Strategies
Weighted combination of estimates
Using match-ups of microwave and geostationary estimates:
Get a field of multiplicative correction factors (a sketch of this step follows this list)
Tune parameters of the IR algorithm
Map IR brightness temperatures (TB) onto microwave rain rates
Morphing of successive microwave estimates using time
evolution from geostationary imagery
Paradigm for GPM?
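To make the correction-factor strategy concrete, here is a minimal sketch in Python (not any particular operational algorithm). It assumes hypothetical arrays mw_rain and ir_rain holding co-located microwave and geostationary IR rain rates at overpass match-up times, and derives a gridded field of multiplicative factors used to rescale the full-coverage IR estimates toward the microwave values.

    import numpy as np

    def correction_factor_field(mw_rain, ir_rain, min_ir=0.1):
        # mw_rain, ir_rain: 3-D arrays (time, lat, lon) of co-located rain rates
        # at microwave overpass match-up times (hypothetical inputs).
        mw_sum = np.nansum(mw_rain, axis=0)    # accumulated MW rain per grid box
        ir_sum = np.nansum(ir_rain, axis=0)    # accumulated IR rain per grid box
        # Ratio of MW to IR rain where the IR algorithm saw enough rain; 1.0 elsewhere
        return np.where(ir_sum > min_ir, mw_sum / np.maximum(ir_sum, min_ir), 1.0)

    def apply_correction(ir_field, factor):
        # Rescale a full-coverage geostationary IR rain field by the factor field
        return ir_field * factor

In practice the match-ups would be accumulated over a moving window and the factor field smoothed or capped; the threshold and array shapes here are purely illustrative.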
Focus of IPWG validation / intercomparison study
1. Updated evaluation of satellite rainfall algorithms
Quantitative Precipitation Forecasts (QPFs) from Numerical Weather Prediction (NWP)
WCRP Working Group on Numerical Experimentation (WGNE) has
been validating / intercomparing model QPFs since 1995
Results
Performance varies with region and season
Mid-latitudes better than tropics
Winter better than summer (stratiform better than convective)
NWP performance is complementary to satellite performance!
[Figure: NWP performance over Germany]
Foci of IPWG validation / intercomparison study
1. Updated evaluation of satellite rainfall algorithms
2. Where, when, and under which circumstances is NWP rainfall
better than satellite rainfall, and vice versa?
Related studies
http://rain.atmos.colostate.edu/CRDC/
Related studies
Observed Precipitation Validation
http://ldas.gsfc.nasa.gov/GLDAS/DATA/precip_valid.shtml
Parameters of study
Evaluate estimates for at least one year to get seasonal
variations in performance
As many different regions (climate regimes) as possible
So far:
Australia
United States
Western Europe
Any volunteers for Asia? Elsewhere?
Focus on daily rainfall
Rain gauge and radar rainfall analyses used as
reference data
Focus on relative accuracy
Global estimates archived at U. Maryland
Algorithms
Operational and semi-operational algorithms
Run every day
Available to public via web or FTP
Experimental algorithms OK
Sorted by sensor type
Microwave
IR or VIS/IR
Microwave + IR
Blending strategy
NWP models
Global models (ECMWF, US)
Lower spatial resolution, global coverage
Regional models
Higher spatial resolution, limited coverage
Evaluation methodology
Daily rainfall estimates evaluated for
Rain occurrence
Rain amount
Spatial resolution
Finest possible resolution (typically 0.25° lat/lon)
Coarser resolution (1° lat/lon) for comparison with NWP (a regridding sketch follows this list)
Stratify by
Season
Region
Algorithm type
Algorithm
Rain amount threshold
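Comparing the 0.25° estimates with NWP on a 1° grid implies a block-averaging step. A minimal sketch, assuming the fine grid divides evenly into the coarse grid (the function and array names are illustrative, not part of the study's software):

    import numpy as np

    def block_average(field_fine, factor=4):
        # Average a 0.25-degree daily rain field onto a 1-degree grid
        # (factor = number of fine cells per coarse cell along each axis).
        nlat, nlon = field_fine.shape
        assert nlat % factor == 0 and nlon % factor == 0, "grid must divide evenly"
        return field_fine.reshape(nlat // factor, factor,
                                  nlon // factor, factor).mean(axis=(1, 3))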
Verification methods
Rain occurrence
Frequency bias
Probability of detection and false alarm ratio
Equitable threat score (these occurrence scores and the amount scores below are sketched in code after this list)
Rain amount
Multiplicative bias
RMS error
Correlation coefficient
Probability of exceedance
Properties of rain systems
Contiguous Rain Area (CRA) validation method (Ebert and
McBride, 2000)
Rain area, volume, maximum amount
Spatial correlation
Error decomposition into volume vs. pattern
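For reference, a minimal sketch of the basic occurrence and amount scores listed above, computed from matched daily estimate/observation grids. The array names and default rain threshold are illustrative assumptions, and the CRA diagnostics are not reproduced here.

    import numpy as np

    def occurrence_scores(est, obs, threshold=1.0):
        # 2x2 contingency table for rain occurrence at a threshold (mm/day);
        # denominators are assumed non-zero for simplicity.
        hits    = np.sum((est >= threshold) & (obs >= threshold))
        misses  = np.sum((est <  threshold) & (obs >= threshold))
        falarms = np.sum((est >= threshold) & (obs <  threshold))
        correct = np.sum((est <  threshold) & (obs <  threshold))
        n = hits + misses + falarms + correct

        freq_bias = (hits + falarms) / (hits + misses)        # frequency bias
        pod       = hits / (hits + misses)                    # probability of detection
        far       = falarms / (hits + falarms)                # false alarm ratio
        hits_rand = (hits + misses) * (hits + falarms) / n    # hits expected by chance
        ets       = (hits - hits_rand) / (hits + misses + falarms - hits_rand)
        return freq_bias, pod, far, ets

    def amount_scores(est, obs):
        # Continuous scores on the matched daily grids
        mult_bias = est.mean() / obs.mean()                       # multiplicative bias
        rmse      = np.sqrt(np.mean((est - obs) ** 2))            # RMS error
        corr      = np.corrcoef(est.ravel(), obs.ravel())[0, 1]   # correlation coefficient
        return mult_bias, rmse, corr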
Some results for Australia...
User page
Targeted to external users of satellite rainfall products
Developer page
Targeted to algorithm developers – contains more
algorithms, some of which aren't publicly available (at
least not easily)
Multi-algorithm maps
All algorithms and NWP models for 30 September 2004 over Australia
Basic daily validation product
Maps and statistics
Daily CRA validation
Properties of rain system
Area
Mean and maximum rain accumulation
Rain volume
Spatial correlation
Error decomposition into volume and pattern error
components
Monthly and seasonal summaries
Variety of statistical plots
Time series
Scatter plots
Table of statistics
Binary (categorical) scores as a function of rain threshold
Error as a function of estimated (observed) rain rate
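As a small illustration of the last item, the sketch below bins the error by estimated rain rate; the random fields and bin edges are stand-ins for real matched estimate/analysis data.

    import numpy as np

    rng = np.random.default_rng(0)
    est = rng.gamma(shape=0.5, scale=5.0, size=(40, 60))   # estimated rain (mm/day), stand-in
    obs = rng.gamma(shape=0.5, scale=5.0, size=(40, 60))   # observed analysis (mm/day), stand-in

    bins = np.array([0.0, 1, 2, 5, 10, 20, 50, np.inf])    # rain-rate bin edges (mm/day)
    idx  = np.digitize(est.ravel(), bins) - 1              # bin index of each grid box
    err  = (est - obs).ravel()                             # error = estimate minus observation

    for b in range(len(bins) - 1):
        sel = idx == b
        if sel.any():
            print(f"{bins[b]:>5} to {bins[b+1]:<5} mm/day: mean error = {err[sel].mean():+.2f}")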
Intercomparison of algorithm types
[Figure: Multiplicative bias by algorithm type, Australian Tropics and Australian Mid-latitudes, December 2002 – September 2004, 1° grid, by season (summer, autumn, winter, spring)]
Intercomparison of algorithms
[Figure: Probability of detection (POD) by algorithm, Australian Tropics and Australian Mid-latitudes, December 2002 – September 2004, 1° grid]
Caveats
Reference data (gauge and radar analyses) are not as
accurate as targeted ground validation sites
Performance results more meaningful in a relative sense
than in an absolute sense
No ocean validation
Microwave algorithms are expected to perform better over
ocean because the emission signal is used
Therefore microwave+IR algorithms should also perform
better over ocean
NWP QPFs perform better over land than over ocean since
more observations are used in model initialization
Not all algorithms cover the same period (some missing
data)
Future of this study
Results so far will be examined closely and written up
for publication
Satellite precipitation validation / intercomparison will
continue into the future...
Algorithm developers
Keep making your results available
Good opportunity to check new or updated algorithms
Reference data providers
Thanks for data currently provided
More is better!
Can you assist in the validation itself?
Users of validation results
Are we giving you the information you need?
Please provide feedback and suggestions for improvement