Transcript Slide 1

Welcome To the Inaugural
Meeting of the
WRF Software Training and
Documentation Team
Jan. 26-28, 2004
NCAR, MMM Division
Monday January 26, 2004
• Introduction
• Software Overview
• WRF Software Tutorial, June 2003
• Data and data structures
• Parallel infrastructure
Introduction
• History
– Requirements emphasize flexibility over a
range of platforms, applications, users
– WRF has developed rapidly: first released Dec. 2000; last beta release, 1.3, in May 2003; official 2.0 release coming in May 2004
– Circa 2003: "Arcane" used to describe WRF:
• Adj. Known or understood by only a few.
Mysterious.
Introduction
• Purpose of WRF Tiger Team effort
– Extend knowledge of WRF software to wider
base of software developers
– Create comprehensive developer document
– Team approach to both objectives
– Streamlining and code improvement as
byproducts in the coming months but not the
subject for this meeting
Introduction
• This meeting
– Review of WRF software structure and function:
• Phenomenological – this is the code as it exists
• Incomplete – time to prepare and present is a limiter, but
• What we're looking for now is a roadmap through the code for
producing the comprehensive documentation
– Develop an outline for the developer documentation
– Writing assignments and work plan over next 9 months
Some terms
• WRF Architecture – scheme of software layers
and interface definitions
• WRF Framework – the software infrastructure,
also "driver layer" in the WRF architecture
• WRF Model Layer – the computational routines
that are specifically WRF
• WRF Model – a realization of the WRF
architecture comprising the WRF model layer
with some framework
• WRF – a set of WRF architecture-compliant
applications, of which the WRF Model is one
WRF Software Overview
Weather Research and
Forecast Model
Goals: Develop an advanced mesoscale forecast
and assimilation system, and accelerate
research advances into operations
[Figure: 12 km WRF simulation of a large-scale baroclinic cyclone, Oct. 24, 2001]
WRF Software Requirements…
;-)
• Fully support user community's needs: nesting, coupling, contributed
physics code and multiple dynamical cores – but keep it simple
• Support every computer but make sure scaling and performance is
optimal on the computer we use
• Leverage community infrastructure, computational frameworks,
contributed software, but please no opaque code
• Implement by committee of geographically remote developers
• Adhere to union of all software process models
• Fully test, document, support
• Free
WRF Software Requirements (for real)
Goals
• Community Model
• Good performance
• Portable across a range of architectures
• Flexible, maintainable, understandable
• Facilitate code reuse
• Multiple dynamics/physics options
• Run-time configurable
• Nested
• Package independent
Aspects of Design
• Single-source code
• Fortran90 modules, dynamic memory, structures, recursion
• Hierarchical software architecture
• Multi-level parallelism
• CASE: Registry
• Package-neutral APIs
  – I/O, data formats
  – Communication
• Scalable nesting/coupling infrastructure
Aspects of WRF Software Design
Model Coupling
ESMF
Performance
[Figure: simulation speed (Gflop/s) vs. number of processors (0-1100) for the WRF EM Core, 425x300x35, DX=12km, DT=72s, on HP EV68 (Pittsburgh TCS), HP IA-64 (PNNL), IBM p690, iJet, and IBM Winterhawk II]
Structural Aspects
• Directory Structure and relationship to
Software Hierarchy
• File nomenclature and conventions
• Use Association
Directory Structure
[Figure: WRF Model directory structure, showing the driver, mediation, and model layers; see page 5 of the WRF D&I Document]
WRF File Taxonomy and Nomenclature

Module files:
  frame/module_*.F                  (14, Driver)     WRF framework (driver layer)
  frame/module_state_description.F  (1, Driver)      registry-generated framework file
  dyn_em/module_*.F                 (16, Model)      em core-specific model layer
  dyn_nmm/module_*.F                (18, Model)      nmm core-specific model layer
  share/module_*.F                  (11, Model)      non-core-specific model layer
  phys/module_pp_*.F                (22, Model)      physics modules, where pp is the kind of physics
  phys/module_*.F                   (2, Model)       misc. physics routines
  phys/module_*_driver.F            (5, Mediation)   physics drivers

Non-module Fortran source:
  main/*.F                          (7, Driver)      main programs (1 wrf and 6 preprocessors)
  frame/*.c                         (3, Driver)      C-language routines in the WRF framework
  dyn_em/*.F                        (4, Mediation)   em core-specific routines (includes solver)
  dyn_nmm/*.F                       (12, Mediation)  nmm core-specific routines (includes solver)
  share/mediation_*.F               (5, Mediation)   mediation layer
  share/*.F (other)                 (5, Mediation)   mediation layer and miscellaneous

Include files:
  inc/*.inc                         registry-generated includes
  inc/*.h                           I/O API definitions, autogenerated from the build

Others (file counts as listed on the slide: 13, 2, 9, 1):
  Makefile, */Makefile              build mechanism
  configure and compile scripts     build mechanism
  tools/*.c                         source for the Registry program
  tools/regtest.csh                 a regression tester for the WRF model

Externals (7):
  external/*                        external package directories
Module Conventions and USE Association
• Modules are named module_something
• Name of the file containing the module is module_something.F
• If a module includes an initialization routine, that routine should be named init_module_something()
• Typically:
  – Driver and model layers are made up of modules
  – Mediation layer is not (rather, bare subroutines), except for physics drivers in the phys directory
  – Gives benefit of modules while avoiding cycles in the use association graph
driver layer:
  MODULE module_this
  MODULE module_that
  ...

mediation layer:
  USE module_this
  USE module_that
  USE module_whatcha
  USE module_macallit
  ...

model layer:
  MODULE module_whatcha
  MODULE module_macallit
    USE module_this
    USE module_that
  ...
WRF S/W Tutorial, June 2003
Tutorial Presentation (linked from the original slide)
• Parallel Infrastructure
• Registry details
• I/O architecture and mechanism
• Example of coding a new package into the framework
Data and Data Structures
Session 3: Data Structures
• Overview
• Representation of domain
• Special representations
– Lateral Boundary Conditions
– 4D Tracer Arrays
Data Overview
• WRF Data Taxonomy
– State data
– Intermediate data type 1 (I1)
– Intermediate data type 2 (I2)
– Heap storage (COMMON or Module data)
State Data
• Persist for the duration of a domain
• Represented as fields in domain data structure
• Arrays are represented as dynamically allocated
pointer arrays in the domain data structure
• Declared in Registry using state keyword
• Always memory dimensioned; always thread
shared
• Only state arrays can be subject to I/O and interprocessor communication
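For reference, a schematic Registry entry for a state array is shown below. The column layout (entry type, data type, symbol, dimensions, use, number of time levels, stagger, I/O, data name, description, units) and the specific flags are approximate and for illustration only; consult the Registry file itself for the exact syntax.

  # Schematic only -- column layout and flags are approximate:
  # <table> <type> <symbol> <dims> <use>   <ntl> <stagger> <io> <dname> <descrip>           <units>
  state     real   u        ikj    dyn_em  2     X         irh  "U"     "x-wind component"  "m s-1"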
I1 Data
• Data that persists for the duration of one time step on a domain and is then released
• Declared in Registry using i1 keyword
• Typically automatic storage (program stack) in
solve routine
• Typical usage is for tendency arrays in solver
• Always memory dimensioned and thread
shared
• Typically not communicated or I/O
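Analogously, a schematic i1 entry (again approximate, and the symbol below is hypothetical) uses the i1 keyword in place of state:

  # Schematic only:
  i1  real  example_tend  ikj  dyn_em  1  X  -  "EXAMPLE_TEND"  "hypothetical tendency"  "m s-2"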
I2 Data
• I2 data are local arrays that exist only in model-layer subroutines, and only for the duration of the call to the subroutine
• I2 data is not declared in the Registry, never communicated, and never input or output
• I2 data is tile dimensioned and thread local; over-dimensioning within the routine for redundant computation is allowed
  – this is the responsibility of the model-layer programmer
  – it should always be limited to thread-local data
Heap Storage
• Data stored on the process heap is not thread-safe and is generally forbidden anywhere in WRF
– COMMON declarations
– Module data
• Exception: If the data object is:
– Completely contained and private within a Model
Layer module, and
– Set once and then read-only ever after, and
– No decomposed dimensions.
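A hypothetical sketch of code that satisfies all three conditions of the exception (not actual WRF code):

  ! Hypothetical example of the allowed exception: module data that is private
  ! to a model-layer module, has no decomposed dimensions, and is set once at
  ! initialization and only read thereafter.
  MODULE module_lookup_example
    IMPLICIT NONE
    PRIVATE
    PUBLIC :: init_module_lookup_example, lookup_value
    REAL, DIMENSION(100) :: table              ! no decomposed dimensions
  CONTAINS
    SUBROUTINE init_module_lookup_example()    ! called once, before threading begins
      INTEGER :: i
      DO i = 1, 100
        table(i) = 0.01 * REAL(i)
      END DO
    END SUBROUTINE init_module_lookup_example
    REAL FUNCTION lookup_value( i )            ! read-only access is thread-safe
      INTEGER, INTENT(IN) :: i
      lookup_value = table(i)
    END FUNCTION lookup_value
  END MODULE module_lookup_example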
Grid Representation in Arrays
• Increasing indices in WRF arrays run
– West to East (X, or I-dimension)
– South to North (Y, or J-dimension)
– Bottom to Top (Z, or K-dimension)
• Storage order in WRF is IKJ but this is a
WRF Model convention, not a restriction of
the WRF Software Framework
Grid Representation in Arrays
• The extent of the logical or domain
dimensions is always the "staggered" grid
dimension. That is, from the point of view
of a non-staggered dimension, there is
always an extra cell on the end of the
domain dimension.
Grid Indices Mapped onto Array Indices (C-grid example)
[Figure: C-grid with mass (m), u, and v points mapped onto array indices; ids = 1, ide = 5, jds = 1, jde = 5]
Computation over mass points runs only ids..ide-1 and jds..jde-1.
Likewise, vertical computation over unstaggered fields runs kds..kde-1.
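A sketch of how a model-layer loop respects these bounds. Real WRF code loops over tile dimensions and declares arrays with memory dimensions; this illustration is simplified to domain dimensions, and the array and variable names are not taken from WRF.

  ! Sketch only: loop over unstaggered (mass) points on the C-grid, IKJ storage order.
  SUBROUTINE example_mass_point_loop( m, u, ids, ide, jds, jde, kds, kde )
    IMPLICIT NONE
    INTEGER, INTENT(IN)    :: ids, ide, jds, jde, kds, kde   ! staggered domain dimensions
    REAL,    INTENT(IN)    :: u( ids:ide, kds:kde, jds:jde )
    REAL,    INTENT(INOUT) :: m( ids:ide, kds:kde, jds:jde )
    INTEGER :: i, j, k
    DO j = jds, jde-1            ! unstaggered in y: stop at jde-1
      DO k = kds, kde-1          ! unstaggered in z: stop at kde-1
        DO i = ids, ide-1        ! unstaggered in x: stop at ide-1
          m(i,k,j) = 0.5 * ( u(i,k,j) + u(i+1,k,j) )   ! e.g. average staggered u to the mass point
        END DO
      END DO
    END DO
  END SUBROUTINE example_mass_point_loop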
LBC Arrays
• State arrays, declared in Registry using the b
modifier in the dimension field of the entry
• Store specified forcing data on domain 1, or
forcing data from parent on a nest
• All four boundaries are stored in the array; last
index is over:
P_XSB (western)
P_XEB (eastern)
P_YSB (southern)
P_YEB (northern)
These are defined in module_state_description.F
LBC Arrays
• LBC arrays are declared as follows:
em_u_b(max(ide,jde),kde,spec_bdy_width,4)
• Globally dimensioned in first index as the maximum of x and y
dimensions
• Second index is over vertical dimension
• Third index is the width of the boundary (namelist)
• Fourth index is which boundary
• Note: LBC arrays are globally dimensioned
• not fully dimensioned so still scalable in memory
• preserves global address space for dealing with LBCs
• makes input trivial (just read and broadcast)
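A hedged sketch of how the last two indices might be used when applying the specified western boundary of u. The loop structure, and the assumption that the first index runs along the boundary (j for a western/eastern slab), are illustrative; the actual code is in the dyn_* boundary routines.

  SUBROUTINE example_apply_west_bdy( u, u_b, ids, ide, jds, jde, kde, spec_bdy_width )
    USE module_state_description, ONLY : P_XSB      ! western-boundary slab index (see previous slide)
    IMPLICIT NONE
    INTEGER, INTENT(IN)    :: ids, ide, jds, jde, kde, spec_bdy_width
    REAL,    INTENT(INOUT) :: u( ids:ide, kde, jds:jde )
    REAL,    INTENT(IN)    :: u_b( MAX(ide,jde), kde, spec_bdy_width, 4 )
    INTEGER :: i, j, k, b
    DO b = 1, spec_bdy_width                 ! b counts inward from the boundary
      i = ids + b - 1
      DO k = 1, kde-1
        DO j = jds, jde-1
          u(i,k,j) = u_b( j, k, b, P_XSB )
        END DO
      END DO
    END DO
  END SUBROUTINE example_apply_west_bdy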
LBC Arrays
[Figure: layout of the boundary slabs (P_XSB, P_XEB, P_YSB, P_YEB) around a given domain, with unused corner regions; ids/ide and jds/jde mark the domain extent]
LBC Arrays
[Figure: the same layout for a given subdomain (patch) that includes a domain boundary]
Four Dimensional Tracer Arrays
• State arrays, used to store arrays of 3D fields such as
moisture tracers, chemical species, ensemble members,
etc.
• First 3 indices are over grid dimensions; last dimension
is the tracer index
• Each tracer is declared in the Registry as a separate
state array but with f and optionally also t modifiers to
the dimension field of the entry
• The field is then added to the 4D array whose name is
given by the use field of the Registry entry
Four Dimensional Tracer Arrays
• Fields of a 4D array are input and output separately and
appear as any other 3D field in a WRF dataset
• The extent of the last dimension of a tracer array is from
PARAM_FIRST_SCALAR to num_tracername
– Both defined in Registry-generated
frame/module_state_description.F
– PARAM_FIRST_SCALAR is a defined constant (2)
– Num_tracername is computed at run-time in
set_scalar_indices_from_config (module_configure)
– Calculation is based on which of the tracer arrays are associated
with which specific packages in the Registry and on which of
those packages is active at run time (namelist.input)
Four Dimensional Tracer Arrays
• Each tracer index (e.g. P_QV) into the 4D array is also
defined in module_state_description and set in
set_scalar_indices_from_config
• Code should always test that a tracer index is greater than or equal to PARAM_FIRST_SCALAR before referencing the tracer (inactive tracers have an index of 1)
• Loops over tracer indices should always run from PARAM_FIRST_SCALAR to num_tracername (see the example below)
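A minimal sketch of the recommended pattern, using the moist 4D array referenced elsewhere in these slides (loop body schematic):

  ! Guard an individual tracer index before using it:
  IF ( P_QV .GE. PARAM_FIRST_SCALAR ) THEN
    ! safe to reference moist(:,:,:,P_QV); an inactive tracer has index 1
  ENDIF
  ! Loop over all active tracers in the 4D array:
  DO im = PARAM_FIRST_SCALAR, num_moist
    ! operate on moist(:,:,:,im)
  ENDDO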
Parallel Infrastructure
Parallel Infrastructure
• Distributed memory parallelism
– Some basics
– API
• module_dm.F routines
• Registry interface (gen_comms.c)
– Data decomposition
– Communications
• Shared memory parallelism
– Tiling
– Threading directives
– Thread safety
Some Basics on DM Parallelism
• Principal types of explicit communication
  – Halo exchanges
  – Periodic boundary updates
  – Parallel transposes
  – Special-purpose scatter/gather for nesting
• Also
  – Broadcasts
  – Reductions (missing, using MPI directly)
  – Patch-to-global and global-to-patch
  – Built-in I/O server mechanism
Some Basics on DM Parallelism
• All DM comm operations are collective
• The semantics for specifying halo exchanges, periodic bdy updates, and transposes allow message agglomeration (bundling)
• Halos and periods allow fields to have varying
width stencils within the same operation
• Efficient implementation is up to the external
package implementing communications
DM Comms API
• External package provides a number of subroutines in
module_dm.F
(A partial API specification was linked from the original slide)
• Actual invocation of halos, periods, transposes provided
by a specific external package is through #include files in
the inc directory. This provides greater flexibility and
latitude to the implementer than a subroutine interface
• Package implementer may define comm-invocation
include files manually or they can be generated
automatically by the Registry by providing a routine
external/package/gen_comms.c for inclusion in the
Registry program
A few notes on RSL implementation
• RSL maintains descriptors for
domains and the operations on
the domains
• An operation such as a halo
exchange is a collection of
logical "messages", one per
point on the halo's stencil
• Each message is a collection
of fields that should be
exchanged for that point
• RSL stores up this information
in tables then compiles an
efficient communication
schedule the first time the
operation is invoked for a
domain
[Figure: a 24-pt stencil; each stencil point carries a list of logical messages, e.g. msg1{ u, v }, msg2{ t, w, ps }, ...]
Example HALO_EM_D2_5
Used in dyn_em/solve_em.F:
#ifdef DM_PARALLEL
IF ( h_mom_adv_order <= 4 ) THEN
# include "HALO_EM_D2_3.inc"
ELSE IF ( h_mom_adv_order <= 6 ) THEN
# include "HALO_EM_D2_5.inc"
ELSE
WRITE(wrf_err_message,*)'solve_em: invalid h_mom_adv_order '
CALL wrf_error_fatal (TRIM(wrf_err_message))
ENDIF
# include "PERIOD_BDY_EM_D.inc"
# include "PERIOD_BDY_EM_MOIST2.inc"
# include "PERIOD_BDY_EM_CHEM2.inc"
#endif
Defined in Registry
halo
HALO_EM_D2_5 dyn_em 48:u_2,v_2,w_2,t_2,ph_2;\
24:moist_2,chem_2;\
4:mu_2,al
Example HALO_EM_D2_5
!STARTOFREGISTRYGENERATEDINCLUDE 'inc/HALO_EM_D2_5.inc'
!
! WARNING This file is generated automatically by use_registry
! using the data base in the file named Registry.
! Do not edit. Your changes to this file will be lost.
!
IF ( grid%comms( HALO_EM_D2_5 ) == invalid_message_value ) THEN
  CALL wrf_debug ( 50 , 'set up halo HALO_EM_D2_5' )
  CALL setup_halo_rsl( grid )
  CALL reset_msgs_48pt
  CALL add_msg_48pt_real ( u_2 , (glen(2)) )
  CALL add_msg_48pt_real ( v_2 , (glen(2)) )
  CALL add_msg_48pt_real ( w_2 , (glen(2)) )
  CALL add_msg_48pt_real ( t_2 , (glen(2)) )
  CALL add_msg_48pt_real ( ph_2 , (glen(2)) )
  if ( P_qv .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qv), glen(2) )
  if ( P_qc .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qc), glen(2) )
  if ( P_qr .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qr), glen(2) )
  if ( P_qi .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qi), glen(2) )
  if ( P_qs .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qs), glen(2) )
  if ( P_qg .GT. 1 ) CALL add_msg_24pt_real ( moist_2 ( grid%sm31,grid%sm32,grid%sm33,P_qg), glen(2) )
  CALL add_msg_4pt_real ( mu_2 , 1 )
  CALL add_msg_4pt_real ( al , (glen(2)) )
  CALL stencil_48pt ( grid%domdesc , grid%comms ( HALO_EM_D2_5 ) )
ENDIF
CALL rsl_exch_stencil ( grid%domdesc , grid%comms( HALO_EM_D2_5 ) )

Defined in Registry:
halo HALO_EM_D2_5 dyn_em 48:u_2,v_2,w_2,t_2,ph_2;\
                         24:moist_2,chem_2;\
                         4:mu_2,al
Notes on Period Communication
[Figure: C-grid diagram illustrating the update of a mass-point periodic boundary]
Notes on Period Communication
[Figure: C-grid diagram of the mass-point periodic update, with boundary columns filled from the opposite side of the domain]
Notes on Period Communication
[Figure: C-grid diagram illustrating the update of a U-staggered periodic boundary (note: the staggered column is replicated)]
Notes on Period Communication
[Figure: C-grid diagram of the U-staggered periodic update, with boundary columns filled from the opposite side of the domain]
Welcome To the Inaugural
Meeting of the
WRF Software Training and
Documentation Team
Jan. 26-28, 2004
NCAR, MMM Division
Tuesday, January 27, 2004
• Detailed code walk-through
• I/O
• Misc. Topics
– Registry
– Error handling
– Time management
– Build mechanism
Detailed WRF Code Walkthrough
Detailed Code Walkthrough
• The walkthrough was conducted using a set of notes linked from the original slide
• The WRF Code Browser was used to peruse the
code and dive down at various points
• The walkthrough began with the main/wrf.F routine
(when you bring up the browser, this should be in
the upper right hand frame; if not, click the link
WRF in the lower left hand frame, under
Programs)
I/O
I/O
• Concepts
• I/O Software Stack
• I/O and Model Coupling API
WRF I/O Concepts
• The WRF model has multiple input and output streams that are bound to a particular format at run time
• Different formats (NetCDF, HDF, binary I/O) are implemented behind
a standardized WRF I/O API
• Lower levels of the WRF I/O software stack allow expression of a
dataset open as a two-stage operation: OPEN BEGIN and then
OPEN COMMIT
– Between the OPEN BEGIN and OPEN COMMIT the program performs
the sequence of writes that will constitute one frame of output to "train"
the interface
– An implementation of the API is free to use this information for
optimization/bundling/etc. or ignore it
• Higher levels of the WRF I/O software stack provide a
BEGIN/TRAIN/COMMIT form of an OPEN as a single call
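To make the two-stage open concrete, here is a small self-contained toy. None of these routine names are the real WRF I/O API (see frame/module_io.F for that); they only mimic the BEGIN / training / COMMIT protocol described above.

  MODULE toy_twophase_io
    IMPLICIT NONE
    LOGICAL :: committed = .FALSE.
    INTEGER :: nfields   = 0
    CHARACTER(LEN=32), DIMENSION(50) :: frame_fields
  CONTAINS
    SUBROUTINE toy_open_for_write_begin( fname )
      CHARACTER(LEN=*), INTENT(IN) :: fname
      committed = .FALSE.
      nfields   = 0                        ! start collecting the description of one output frame
    END SUBROUTINE toy_open_for_write_begin
    SUBROUTINE toy_write_field( name )
      CHARACTER(LEN=*), INTENT(IN) :: name
      IF ( .NOT. committed ) THEN          ! "training" write: only record what the frame will contain
        nfields = nfields + 1
        frame_fields(nfields) = name
      ELSE
        PRINT *, 'writing data for ', TRIM(name)   ! a real write, after the commit
      END IF
    END SUBROUTINE toy_write_field
    SUBROUTINE toy_open_for_write_commit( )
      committed = .TRUE.                   ! an implementation could now lay out or bundle the file
      PRINT *, 'frame will contain ', nfields, ' fields'
    END SUBROUTINE toy_open_for_write_commit
  END MODULE toy_twophase_io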
I/O Software Stack
• Domain I/O
• Field I/O
• Package-independent I/O API
• Package-specific I/O API
Domain I/O
• Routines in share/module_io_domain.F
– High level routines that apply to operations on a
domain and a stream
• open and define a stream for writing in a single call that
contains the OPEN FOR WRITE BEGIN, the series of
"training writes" to a dataset, and the final OPEN FOR
WRITE COMMIT
• read or write all the fields of a domain that make up a
complete frame on a stream (as specified in the Registry)
with a single call
• some wrf-model specific file name manipulation routines
Field I/O
• Routines in share/module_io_wrf.F
– Many of the routines here are duplicative of
the routines in share/module_io_domain.F
and an example of unnecessary layering in
the WRF I/O software stack
– However, this file does contain the base output_wrf and input_wrf routines, which are what all the stream-specific wrappers (that are duplicated in the two layers) ultimately call
Field I/O
• Output_wrf and input_wrf
– Contain hard coded WRF-specific meta-data puts (for
output) and gets (for input)
• Whether meta-data is output or input is controlled by a flag in
the grid data structure
• Meta data output is turned off when output_wrf is being
called as part of a "training write" within a two-stage open
• It is turned on when it's called as part of an actual write
– Contain a Registry-generated series of calls to the WRF I/O API to write or read individual fields
Package-independent I/O API
• frame/module_io.F
• These routines correspond to WRF I/O API specification
• Start with the wrf_ prefix (package-specific routines start
with ext_package_)
• The package-independent routines here contain logic for:
  – selecting between formats (package-specific) based on what stream is being written and what format is specified for that stream
  – calling the external package as a parallel package (each process passes its subdomain) or collecting and calling on a single WRF process
  – passing the data off to the asynchronous quilt servers instead of calling the I/O API from this task
Package-specific I/O API
• Format-specific implementations of I/O:
  – external/io_netcdf/wrf_io.F90
  – external/io_int/io_int.F90
  – external/io_phdf5/wrf-phdf5.F90
  – external/io_mcel/io_mcel.F90
• The NetCDF and internal (io_int) versions each contain a small program, diffwrf.F90, that uses the API to read and then generate an ASCII dump of a field that is readable by HMV (see www.rotang.com), a small plotting program we use in-house for debugging and quick output.
• Diffwrf is also useful as a small example of how to use
the I/O API to read a WRF data set
Misc. Topics
Misc. Topics
• Registry
• Error handling
• Time management
• Build mechanism
Registry
• Overview of Registry program
• Survey of what is autogenerated
Registry Source Files (in tools/)
  registry.c             Main program
  reg_parse.c            Parse the Registry file and build the AST

  gen_allocs.c           Generate allocate statements
  gen_args.c             Generate argument lists
  gen_comms.c            Generate comms (STUBS or PACKAGE SPECIFIC)
  gen_config.c           Generate namelist handling code
  gen_defs.c             Generate variable/dummy arg declarations
  gen_interp.c           Generate nest interpolation code
  gen_mod_state_descr.c  Generate frame/module_state_description.F
  gen_model_data_ord.c   Generate inc/model_data_ord.inc
  gen_scalar_derefs.c    Generate grid dereferencing code for non-arrays
  gen_scalar_indices.c   Generate code for 4D array indexing
  gen_wrf_io.c           Generate calls to the I/O API for fields

  misc.c                 Utilities used in the registry program
  my_strtok.c            "
  data.c                 Abstract syntax tree routines
  sym.c                  Symbol table (used by parser and AST)
  symtab_gen.c           "
  type.c                 Type handling, derived data types, misc.
What the Registry Generates
• Include files in the inc directory…
WRF Error Handling
• frame/module_wrf_error.F
• Routines for:
  – Incremental debugging output: WRF_DEBUG
  – Producing diagnostic messages: WRF_MESSAGE
  – Writing an error message and terminating: WRF_ERROR_FATAL
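Typical usage, following the calls that appear in the code excerpts earlier in these notes (the message text and ierr are illustrative, and the debug-level semantics noted in the first comment are an assumption):

  CALL wrf_debug( 100, 'my_routine: entering' )           ! printed only at a sufficiently high debug level
  CALL wrf_message( 'my_routine: using default option' )  ! diagnostic message
  IF ( ierr .NE. 0 ) THEN
    CALL wrf_error_fatal( 'my_routine: allocation failed' )  ! prints the message and terminates
  ENDIF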
WRF Time management
• Implementation of ESMF Time Manager
• Defined in external/esmf_time_f90
• Objects
– Clocks
– Alarms
– Time Instances
– Time Intervals
WRF Time management
• Operations on ESMF time objects
  – For example: +, -, and other arithmetic is defined for time intervals and instances
  – I/O intervals are specified by setting alarms on clocks that are stored for each domain; see share/set_timekeeping.F
  – The I/O operations are called when these alarms "go off"; see MED_BEFORE_SOLVE_IO in share/mediation_integrate.F
WRF Build Mechanism
• Structure
– Scripts:
• configure
– Determines the architecture using 'uname', then searches the arch/configure.defaults file for the list of possible compile options for that system. Typically the choices involve compiling for single-threaded, pure shared-memory, pure distributed-memory, or hybrid parallelism; there may be other options too
– Creates the file configure.wrf, included by Makefiles
• compile [scenario]
– Checks for existence of configure.wrf
– Checks the environment for the core-specific settings such as
WRF_EM_CORE or WRF_NMM_CORE
– Invokes the make command on the top-level Makefile passing it
information about specific targets to be built depending on the scenario
argument to the script
• clean [-a]
– Cleans the code, or really cleans the code
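For example, a typical build sequence from the top-level WRF directory looks like this (the em_real scenario name is the one used on the next slide; the note about configure.wrf removal is an assumption about what "really clean" covers):

  ./configure        # choose one of the listed options; writes configure.wrf
  ./compile em_real  # builds wrf.exe and real.exe and links them into test/em_real
  ./clean -a         # "really clean": removes build products, including configure.wrf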
WRF Build Mechanism
• Structure (continued)
– arch/configure.defaults -- file containing settings for
various architectures
– test directory
• Contains a set of subdirectories, each one for a different
scenario. Includes idealized cases as well as directories for
running real-data cases
• The compile script requires the name of one of these
directories (for example "compile em_real") and based on
that it compiles wrf.exe and the appropriate preprocessor (for
example real.exe) and creates symbolic links from the
test/em_real subdirectory to these executables
WRF Build Mechanism
• Structure (continued)
– Top-level Makefile and Makefiles in subdirectories
– The compile script invokes the top-level Makefile as:
"make scenario"
– The top level Makefile, including rules and targets from the
configure.wrf file that was generated by the configure script, then
recursively invokes Makefiles in subdirectories in order:
• external   (external packages based on configure.wrf)
• tools      (builds registry)
• frame      (invokes registry and then builds framework)
• shared     (mediation layer and other modules and subroutines)
• physics    (physics package)
• dyn_*      (core specific code)
• main       (main routine and link to produce executables)