MOSPost2016.ppt


Model Post Processing
Model Output Can Usually Be
Improved with Post Processing
• Can remove systematic bias
• Can produce probabilistic information from
deterministic information
• Can provide forecasts for parameters that the
model is incapable of modeling successfully due
to resolution or physics issues (e.g., shallow
fog)
Post Processing
• Model Output Statistics (MOS) was the first post-processing method used by the NWS (1969)
• Based on multiple linear regression.
• Essentially unchanged in 40 years.
• Does not consider non-linear relationships
between predictors and predictands.
• Does remove much of the systematic bias.
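Since MOS is built on multiple linear regression, here is a minimal sketch of how such an equation is fit with ordinary least squares. All data values below are synthetic placeholders for illustration, not real MOS development data:

```python
# Minimal sketch of the multiple linear regression behind MOS.
# Predictors (model output) and predictand (observed max temp)
# are synthetic illustrations, not real development data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training sample: model 2-m temperature and dewpoint (F)
n = 200
model_t2m = rng.uniform(30.0, 70.0, n)
model_td2m = model_t2m - rng.uniform(2.0, 15.0, n)
X = np.column_stack([np.ones(n), model_t2m, model_td2m])

# "Observed" max temp with a systematic bias plus noise
obs_maxt = 0.9 * model_t2m + 0.1 * model_td2m + 3.0 + rng.normal(0.0, 1.5, n)

# Least-squares fit: a constant a0 plus one coefficient per predictor
coeffs, *_ = np.linalg.lstsq(X, obs_maxt, rcond=None)

# Apply the fitted equation to a new model forecast
forecast = coeffs @ np.array([1.0, 55.0, 45.0])
print(coeffs, forecast)
```

Because the regression absorbs the constant offset into a0, the fitted equation removes the systematic bias automatically, which is the first bullet's point.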
Day 2 (30-h) GFS MOS Max Temp Equation for KSLC
(Cool Season – 0000 UTC cycle)

N  | Predictor (XN)                        | Coeff. (aN)
0  | (constant)                            | -467.7800
1  | 2-m Temperature (21-h proj.)          | 1.7873
2  | 2-m Dewpoint (21-h proj.)             | 0.1442
3  | 2-m Dewpoint (12-h proj.)             | -0.2060
4  | 2-m Dewpoint (27-h proj.)             | 0.1252
5  | Observed Temperature (03Z)            | 0.0354
6  | 850-mb Vertical Velocity (21-h proj.) | 19.5759
7  | 925-mb Wind Speed (15-h proj.)        | 0.6024
8  | Sine Day of Year                      | 1.7111
9  | 700-mb Wind Speed (15-h proj.)        | 0.2701
10 | Sine 2*DOY                            | 1.5110
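Evaluating such an equation is just a dot product of coefficients and predictor values. The coefficients below are the KSLC equation from the slide; the predictor values are hypothetical placeholders (the slide gives none), so the result only demonstrates the linear form:

```python
# KSLC Day 2 (30-h) GFS MOS max-temp equation coefficients (from slide)
coeffs = [-467.7800, 1.7873, 0.1442, -0.2060, 0.1252,
          0.0354, 19.5759, 0.6024, 1.7111, 0.2701, 1.5110]

# Hypothetical predictor values, in the same order as the table rows
predictors = [1.0,     # constant term
              280.0,   # 2-m temperature (21-h proj.)
              270.0,   # 2-m dewpoint (21-h proj.)
              268.0,   # 2-m dewpoint (12-h proj.)
              271.0,   # 2-m dewpoint (27-h proj.)
              275.0,   # observed temperature (03Z)
              0.1,     # 850-mb vertical velocity (21-h proj.)
              8.0,     # 925-mb wind speed (15-h proj.)
              -0.5,    # sine day of year
              12.0,    # 700-mb wind speed (15-h proj.)
              0.8]     # sine 2*DOY

maxt = sum(a * x for a, x in zip(coeffs, predictors))
print(round(maxt, 2))  # → 70.43 with these placeholder values
```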
Day 2 (42-h) GFS MOS Max Temp Equation for KUNV
(Warm Season – 1200 UTC cycle)

N  | Predictor (XN)                       | Coeff. (aN)
0  | (constant)                           | -432.2469
1  | 2-m Temperature (33-h proj.)         | 0.9249
2  | 2-m Dewpoint (33-h proj.)            | 0.5751
3  | 950-mb Dewpoint (24-h proj.)         | 0.4026
4  | 950-mb Rel. Humidity (27-h proj.)    | -0.1784
5  | 850-mb Dewpoint (39-h proj.)         | -0.2439
6  | Observed Dewpoint (15Z)              | -0.1000
7  | Observed Temperature (15Z)           | 0.1270
8  | 1000-mb Rel. Humidity (24-h proj.)   | 0.0027
9  | Sine Day of Year                     | 0.9763
10 | 500-1000mb Thickness (45-h proj.)    | 0.0057
Day 2 (30-h) GFS MOS Min Temp Equation for KDCA
(Cool Season – 1200 UTC cycle)

N | Predictor (XN)                     | Coeff. (aN)
0 | (constant)                         | -374.9980
1 | 2-m Temperature (21-h proj.)       | 0.9700
2 | 1000-mb Temperature (12-h proj.)   | 0.3245
3 | 2-m Dewpoint (27-h proj.)          | 0.1858
4 | 2-m Relative Humidity (27-h proj.) | -0.0037
5 | 2-m Relative Humidity (15-h proj.) | -0.0380
6 | 975-mb Wind Speed (21-h proj.)     | -0.0653
7 | Observed Temperature (15Z)         | 0.1584
8 | Sine Day of Year                   | -0.0342
[Diagram: MOS develop/evaluate cycle]
MOS Developed by and Run at
the NWS Meteorological
Development Lab (MDL)
• Full range of products available at:
http://www.nws.noaa.gov/mdl/synop/index.php
Global Ensemble MOS
• Ensemble MOS forecasts are based on the 0000
UTC run of the GFS global ensemble forecast
system. These runs include the operational
GFS, a control version of the GFS (run at lower
resolution), and 20 additional perturbed runs.
• Older operational GFS MOS prediction
equations are applied to the output from each of
the ensemble runs to produce 21 separate sets
of alphanumeric bulletins in the same format as
the operational MEX message.
Gridded MOS
• The NWS needs MOS on a grid for many
reasons, including for use in its IFPS
analysis/forecasting system.
• The problem is that MOS is only available at
station locations.
• To deal with this, the NWS created Gridded MOS.
• Takes MOS at individual stations and spreads it
out based on proximity and height differences.
Also applies a topographic correction based on a
reasonable lapse rate.
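A hedged sketch of this spreading idea: weight nearby stations by horizontal distance and height difference, and adjust each station value to the grid elevation with a lapse rate. The weighting function and the 6.5°C/km lapse rate are illustrative assumptions, not MDL's actual analysis scheme:

```python
# Sketch of spreading station MOS temperatures to a grid point.
# Weighting scheme and lapse rate are illustrative assumptions.
import math

def gridded_value(grid_xy, grid_elev_m, stations, lapse_c_per_m=0.0065):
    """stations: list of (x_km, y_km, elev_m, mos_temp_c) tuples."""
    num = den = 0.0
    for sx, sy, selev, t in stations:
        dist = math.hypot(grid_xy[0] - sx, grid_xy[1] - sy)
        dz = abs(grid_elev_m - selev)
        # Nearby stations at similar elevation get the most weight
        w = 1.0 / (1.0 + dist**2 + (dz / 100.0)**2)
        # Reduce each station value to the grid elevation first
        t_adj = t - lapse_c_per_m * (grid_elev_m - selev)
        num += w * t_adj
        den += w
    return num / den

# Valley station vs. mountain station, grid point near the valley
stations = [(0.0, 0.0, 1300.0, 10.0), (30.0, 0.0, 2100.0, 4.0)]
print(gridded_value((5.0, 0.0), 1500.0, stations))
```

The grid point sits close to the valley station, so the answer lands near that station's lapse-rate-adjusted value rather than a plain average of the two.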
Current “Operational” Gridded MOS
Localized Aviation MOS
Program
(LAMP)
• Hourly updated statistical product
• Like MOS but combines:
– MOS guidance
– the most recent surface observations
– simple local models run hourly
– GFS output
Practical Example of Solving a
LAMP Temperature Equation
Y = b + a1x1 + a2x2 + a3x3 + a4x4
Y = LAMP temperature forecast
Equation Constant b = -6.99456
Predictor x1 = observed temperature at cycle issuance time (value 66.0)
Predictor x2 = observed dewpoint at cycle issuance time (value 58.0)
Predictor x3 = GFS MOS temperature (value 64.4)
Predictor x4 = GFS MOS dewpoint (value 53.0)
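Solving this equation is straightforward arithmetic. The constant b and the predictor values are from the slide; the slide does not list the coefficients a1–a4, so the ones below are hypothetical placeholders chosen only to make the calculation concrete:

```python
# Solving the LAMP temperature equation Y = b + a1x1 + a2x2 + a3x3 + a4x4.
# b and the x values are from the slide; the coefficients a1-a4 are
# hypothetical placeholders (the slide does not list them).
b = -6.99456
x = [66.0, 58.0, 64.4, 53.0]   # obs T, obs Td, GFS MOS T, GFS MOS Td
a = [0.55, 0.10, 0.40, 0.05]   # hypothetical coefficients

y = b + sum(ai * xi for ai, xi in zip(a, x))
print(round(y, 2))  # → 63.52 with these placeholder coefficients
```

Note how the equation blends the latest observations (x1, x2) with the MOS guidance (x3, x4), which is exactly the combination the previous slide describes.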
Theoretical Model Forecast Performance of
LAMP, MOS, and Persistence

[Figure: skill vs. forecast projection (0–24 hr) for LAMP, MOS, and persistence.]
• LAMP outperforms persistence at all projections and handily outperforms MOS in the 1–12 hour projections.
• LAMP skill begins to converge toward the MOS skill level after the 12-hour projection and becomes almost indistinguishable by the 20-hour projection.
• The decreased predictive value of the observations at later projections causes LAMP skill to diminish and converge to the skill level of MOS forecasts.
Verification of LAMP 2-m Temperature
Forecasts
MOS Performance
• MOS significantly improves on the skill of
model output.
• National Weather Service verification
statistics have shown a narrowing gap
between human and MOS forecasts.
Cool Season Min Temp – 1200 UTC Cycle
Averaged over 80 US stations
Prob. Of Precip.– Cool Season
(0000/1200 UTC Cycles Combined)
MOS Won the Department
Forecast Contest in 2003
For the First Time!
Average or Composite MOS
• There has been some evidence that an average or
consensus MOS is even more skillful than individual
MOS output.
• Vislocky and Fritsch (1997), using 1990–1992 data,
found that an average of two or more MOSs
(CMOS) outperformed individual MOSs and many
human forecasters in a forecasting competition.
Some Questions
• How does current MOS performance, driven
by far superior models, compare with that of NWS
forecasters around the country?
• How skillful is a composite MOS, particularly if
one weights the members by past performance?
• How does relative human/MOS performance vary
by forecast projection, region, during large one-day
variations, or when conditions differ greatly from
climatology?
• Considering the results, what should be the role of
human forecasters?
This Study
• August 1 2003 – August 1 2004 (12 months).
• 29 stations, all at major NWS Weather
Forecast Office (WFO) sites.
• Evaluated MOS predictions of maximum and
minimum temperature, and probability of
precipitation (POP).
National Weather Service locations used in the study.
Forecasts Evaluated
• NWS: Forecast by real, live humans
• EMOS: Eta MOS
• NMOS: NGM MOS
• GMOS: GFS MOS
• CMOS: Average of the above three MOSs
• WMOS: Weighted MOS, each member weighted
by its performance during a previous training period
(ranging from 10–30 days, depending on the
station)
• CMOS-GE: A simple average of the two best MOS
forecasts: GMOS and EMOS
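The WMOS idea can be sketched as follows. Here each member's weight is proportional to the inverse of its mean absolute error over a recent training window; the inverse-MAE scheme and the sample numbers are illustrative assumptions, not necessarily the study's exact method:

```python
# Sketch of a performance-weighted MOS consensus (WMOS-style).
# Inverse-MAE weighting is an illustrative choice, not the study's
# documented method; all numbers below are made up.
def weighted_mos(forecasts, training_errors):
    """forecasts: {member: current forecast};
    training_errors: {member: list of past absolute errors}."""
    mae = {m: sum(e) / len(e) for m, e in training_errors.items()}
    w = {m: 1.0 / mae[m] for m in forecasts}       # better past skill -> more weight
    total = sum(w.values())
    return sum(w[m] * forecasts[m] for m in forecasts) / total

forecasts = {"EMOS": 71.0, "NMOS": 74.0, "GMOS": 72.0}
errors = {"EMOS": [2.0, 3.0, 2.5],
          "NMOS": [4.0, 5.0, 4.5],
          "GMOS": [1.5, 2.0, 1.0]}
print(weighted_mos(forecasts, errors))
```

With these numbers the consensus lands near GMOS, the member with the lowest training-period MAE, while the plain CMOS average would be pulled further toward the weaker NMOS.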
The Approach: Give the NWS the
Advantage!
• 08-10Z-issued forecast from NWS matched against
previous 00Z forecast from models/MOS.
– NWS has 00Z model data available, and has added
advantage of watching conditions develop since 00Z.
– Models of course can’t look at NWS, but NWS looks at
models.
• NWS Forecasts going out 48 (model out 60) hours, so
in the analysis there are:
– Two maximum temperatures (MAX-T),
– Two minimum temperatures (MIN-T), and
– Four 12-hr POP forecasts.
Temperature Comparisons
Temperature
MAE (F) for the seven forecast types for all stations,
all time periods, 1 August 2003 – 1 August 2004.
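For reference, the MAE verification measure used here is the mean absolute difference between forecast and observed temperature, in °F. The sample values are made up:

```python
# Mean absolute error (MAE), the temperature verification measure
# used in this study. Sample forecasts/observations are made up.
def mae(forecasts, obs):
    return sum(abs(f - o) for f, o in zip(forecasts, obs)) / len(forecasts)

print(mae([72.0, 65.0, 58.0], [70.0, 66.0, 55.0]))  # → 2.0
```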
Precipitation Comparisons
Brier Scores for Precipitation for all stations for the entire
study period.
Brier Score for all stations, 1 August 2003 – 1 August
2004. 3-day smoothing is performed on the data.
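The Brier score used for the POP verification is the mean squared difference between the forecast probability and the 0/1 outcome, so lower is better. The sample forecasts and outcomes below are made up:

```python
# Brier score for probability-of-precipitation forecasts (lower is
# better). Sample POPs and observed outcomes are made up.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

pops = [0.9, 0.2, 0.6, 0.1]   # 12-h POP forecasts
rain = [1, 0, 1, 0]           # observed precipitation (1 = yes, 0 = no)
print(brier_score(pops, rain))
```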
Precipitation
Brier Score for all stations, 1 August 2003 – 1 August
2004, sorted by geographic region.
There are many other postprocessing approaches
• Neural nets: attempt to duplicate the complex
interactions between neurons in the human brain.
Dynamic MOS Using Multiple
Models
• MOS equations are updated frequently, not
static like the NWS MOS.
• Example: DICast, used by The Weather
Channel
ForecastAdvisor.com
They don't use MOS!
UW Bias Correction of WRF
And many others…