
Consortium Meeting
Feb 3, 2011
Hit Rates
• Finally, a clear reduction in Canadian traffic and other
mega-users not part of the consortium
• Some may have switched to Comcast or alternative
avenues of access.
Major Hardware Changes
• Eight new Nehalem (dual-quad core) nodes for SAGE.
• Added Infiniband communication between these nodes
(40 Gbit per sec)
• Two new uninterruptible power supplies (old ones
failed).
• Replaced RAID server for the EnKF analysis and
forecasting system.
• Innumerable disk replacements.
• Replaced UPS in SAGE.
• Moved all SAGE processors to our ensemble system
(allowing the switch to WRF).
Impacts
• Not only were our main nodes refreshed,
each node is twice as fast, AND WRF scales far
better (can use all the nodes simultaneously
effectively).
• Major resource increase which made possible
substantial changes to be described later.
Scalability
• The combination of the new chips (much
better memory bandwidth) and the fast
backplane has given us something we always
wanted…almost unlimited scalability…adding
more processors speeds things up.
Major Changes Since the Last
Meeting
• Not only have we installed a wide range of
new hardware, but we have also made radical
changes to the modeling systems.
• Very large changes in our modeling
approaches.
Major Change 1
• Greatly expanded the 4/3 km run to include
the Gorge and all of Washington State.
• Run twice a day.
• Initially extended to 48 hr for both cycles.
No more nestdown and different
use of machines
• The old way was to run 36-12 km first to 72h,
saving the grids.
• THEN, we would use nestdown to run 4-km
completely separately from 36-12 km
• THEN, we would run 1.3 km separately from 4 km.
• 36-12 km would run out to 180 h on the
slower Blade server.
Better approach
• We now have the computer power to do this
better.
• All runs are now nested interactively within
the lower-resolution runs.
• Found that this substantially improved
moisture fields coming in on boundaries,
which are not updated in nestdown runs.
• Fewer boundary issues, better simulations.
• Extended 4-km to 84 hr.
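The switch from nestdown (WRF's ndown.exe) to interactive nesting is set in WRF's namelist.input. A minimal sketch of a concurrently nested configuration; the values below are illustrative, not the consortium's actual settings:

```
&domains
 max_dom           = 3,         ! run 36-, 12-, and 4-km domains together
 parent_id         = 1, 1, 2,   ! each nest sits inside the previous domain
 parent_grid_ratio = 1, 3, 3,   ! 36 km -> 12 km -> 4 km
 feedback          = 1,         ! two-way nesting: nests feed back to parents
/
```

Run concurrently, each nest gets its boundaries from the parent every parent time step, rather than from saved grids written at the (much coarser) history-output interval, which is why boundary moisture fields improve relative to nestdown runs.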
How we do it now.
• As first grids come in, the Blade Cluster starts
36-12 km and stays with it until 180h. Finishes
by 10:15 AM
• The new SAGE cluster also starts and does
36-12-4 km immediately. Finishes by 9:45 AM.
• It then starts 36-12-4-1.3 km. Finishes 48 hr
by 2:40 PM. Can go further in time.
Issues and Problems
• Huge impact on all the controlling scripts and
graphics generation. Most conquered.
• Some stability problems at 1.3 km…losing 1 in
15-20 runs at some point.
• Traced it to CFL errors over the steepest
terrain…Mt. Rainier. Testing fixes, and we
think we have one (vertical velocity damping
or smoothing the Mt. Rainier terrain).
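WRF exposes a namelist switch for exactly this kind of vertical-velocity damping fix. A hedged sketch of the relevant &dynamics entry (illustrative, not necessarily the group's actual setting):

```
&dynamics
 w_damping = 1,    ! damp updrafts where the vertical CFL limit would be violated
/
```

The alternative fix, smoothing the Mt. Rainier terrain, would be applied when the static domain fields are generated during preprocessing, not at run time.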
Need Better Snow Fields!
• Think we have found one that is available daily.
One More Change
• New consortium member, Iberdrola, has
requested a small domain increase to the
south (roughly to Salem). 9% increase in
domain, 30 minute longer runtime.
• Will result in excellent coverage of Portland
area and inclusion of important terrain
draining into Gorge.
Added Terrain
Further Expansion to Missoula?
• This domain's current size: 448 x 334 grid points.
• Both westward and southward
expansion: 616 x 364 = 49.9% increase.
• Would need seven more nodes to do this, with
no impact on availability.
• Cost for hardware (including more disks)
around $50K.
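The quoted 49.9% follows directly from the grid dimensions on the slide; a quick arithmetic check:

```python
# Grid-point counts for the current and proposed expanded domain
# (dimensions from the slide; cost scales roughly with grid points).
current = 448 * 334    # current domain
expanded = 616 * 364   # westward + southward expansion

increase = expanded / current - 1
print(f"{increase:.1%}")  # -> 49.9%
```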
EnKF System Goes Truly
Operational with 3-h Cycling
• The system now does 36-4 km analyses every
three hours, followed by a 3-hr forecast.
• Converted to DART system—now the national
system—lots of flexibility.
• Remember this is an ensemble of 64 members.
• Proved itself during the Nov 22 snowstorm.
Next Stage
– Make improvements in the system (observation
bias, vertical localization, and others)
– Add forecasts (e.g., 24 h, every 6 hr or so)
– One-hour cycling
– Better web page
12-km domain tests
• Our northern boundary is relatively
close…is this a problem?
Northern Boundary
• 95% of the time this is no concern….most of
our weather comes from the west or
southwest…particularly when there are strong
winds.
• But what about the rare case when there is
strong flow from the north with important
weather features (like November 22)?
• Got concerned about this when we didn’t do
as well as the NAM for this event.
Tests
• Did a series of tests for this and some other
cases…expanding the 12-km domain
northward in steps, including making the
12-km as big as the 36-km.
• Bottom line: no real improvement. NAM was
better in this one case due to better
initialization than the GFS (which is relatively rare).
SnowWatch
• The City of Seattle is concerned about
snow/ice situations.
• A major issue is having the best real-time
information—issues demonstrated on Nov 22.
• Proposed to build a system for them,
SnowWatch, that makes use of our high-resolution
modeling and data assets…and the
infrastructure of RainWatch.
• Funding looks good…Jeff Baars will build it.
NW Weather Workshop
• Delayed until May 13-14th.