This manual is intended as a first introduction to the Weather Research and Forecasting (WRF) model for beginner users.
Table of contents:
[TOC]
## What is WRF
WRF is a community-driven numerical weather prediction model, originally developed in the US as a collaboration between the research community (the National Center for Atmospheric Research, [NCAR](https://ncar.ucar.edu), part of the University Corporation for Atmospheric Research, [UCAR](https://www.ucar.edu)) and the National Weather Service (the National Centers for Environmental Prediction, [NCEP](https://www.weather.gov/ncep/), at the National Oceanic and Atmospheric Administration, [NOAA](https://www.noaa.gov/)).
Over the years, WRF has evolved into two distinct models. [ARW-WRF](https://www.mmm.ucar.edu/models/wrf) (Advanced Research WRF) is maintained by NCAR and is used by the research community. [WRF-NMM](https://nomads.ncep.noaa.gov/txt_descriptions/WRF_NMM_doc.shtml) is used operationally by the National Weather Service. We use ARW-WRF.
...
...
WRF and related programs run as executables on Linux machines and clusters.
The WRF source code is available on [Github](https://github.com/wrf-model/WRF). It is possible to check out the repository, but the recommended way of getting the code is to download one of the [official releases](https://github.com/wrf-model/WRF/releases): scroll down to the "Assets" section and choose one of the `v*.tar.gz` or `v*.zip` files (not the "Source code" ones; these are incomplete).
To download while working in a terminal on a remote server, use `wget` or `curl`:
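For example, for release v4.4.2 (the exact URL depends on the chosen release; copy it from the "Assets" list on the release page):

```sh
# Download the release archive with wget ...
wget https://github.com/wrf-model/WRF/releases/download/v4.4.2/v4.4.2.tar.gz
# ... or, equivalently, with curl (-L follows redirects, -O keeps the remote file name)
curl -L -O https://github.com/wrf-model/WRF/releases/download/v4.4.2/v4.4.2.tar.gz
```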
To uncompress the source code, use either of the following (depending on the format):
```sh
tar xzvf v4.4.2.tar.gz
unzip v4.4.2.zip
```
## Quick start
Compiling WRF for an idealized simulation (LES):
```sh
./configure
./compile em_les > compile.log 2>&1 &
```
Running WRF for an idealized simulation (LES):
```sh
cd ./test/em_les
./ideal.exe
./wrf.exe
```
For other test cases, compilation might create a `run_me_first.csh` script in the same directory as the executables. If there is one, run it once, before any other program: it links the lookup tables needed by the simulation (land use, parameterizations, etc.).
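For example (assuming the `em_quarter_ss` case has been compiled and ships such a script):

```sh
cd ./test/em_quarter_ss
./run_me_first.csh   # run it once only, before ideal.exe and wrf.exe
```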
Compiling WRF for a real-case simulation:
```sh
./configure
./compile em_real > compile.log 2>&1 &
```
Running WRF for a real-case simulation:
```sh
cd test/em_real
ln -s $WPS_PATH/met_em* .
./real.exe
...
...
```
To do the WRF pre-processing for a real-case simulation that takes its initial and boundary conditions from ECMWF-IFS data on model levels, you could use a script like the following. However, it depends on namelists, variable tables and other settings files being specified correctly; see below for details.
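The original script is not reproduced here; a minimal sketch of the usual WPS sequence, assuming `namelist.wps` is already prepared and the GRIB files sit in `$GRIB_DIR` (a placeholder path), looks like this:

```sh
cd $WPS_PATH                     # placeholder for the WPS build directory
./geogrid.exe                    # define the model domains (needs the geographical database)
ln -sf ungrib/Variable_Tables/Vtable.ECMWF_sigma Vtable
./link_grib.csh $GRIB_DIR/*      # link the ECMWF-IFS GRIB files as GRIBFILE.AAA, .AAB, ...
./ungrib.exe                     # decode the GRIB files into the intermediate format
./metgrid.exe                    # interpolate horizontally onto the WRF grid -> met_em* files
```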
In some cases, editing the model source code is necessary. This mostly happens in the following directories:
* `phys`: contains the source code of parameterization schemes ("model physics").
* `Registry`: large chunks of the WRF source code are generated automatically at compile time, based on the information contained in a text file called `Registry`. This file specifies, for instance, which model variables are saved in the output, and how.
### Compiling the model
WRF is written in compiled languages (mostly Fortran and C), so it needs to be compiled before execution. It relies on external software libraries both at compile time and at runtime, so these libraries must be available on the system where WRF runs.
...
...
In general, compiled WRF versions are already available on all of our servers.
However, we describe the typical compilation workflow for anyone who wishes to try it out. There are three steps: (i) make the libraries available, (ii) configure, (iii) compile.
#### Make the prerequisite libraries available
In most cases, precompiled libraries can be made available to the operating system using environment modules. Environment modules modify the Linux shell environment so that the operating system knows where to find specific executables, include files, software libraries, and documentation. Each server has its own set of available modules. As of 1 March 2023, WRF is known to compile and run with server-specific module collections.
Load modules with `module load LIST-OF-MODULE-NAMES` and unload them one by one with `module unload MODULE-NAME`.
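For illustration only (the module names below are made up; `module avail` lists the real ones on each server):

```sh
# Hypothetical module names; the available collections differ from server to server
module load intel openmpi hdf5 netcdf-fortran
```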
After loading modules, it is also recommended to set the `NETCDF` environment variable to the root directory of the NetCDF installation. On srvx1, jet and VSC4, use `module show` to see which directory is correct. For instance:
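One possibility, assuming the `nc-config` utility is in the `PATH` and netcdf-c and netcdf-fortran share the same installation prefix (otherwise, copy the path printed by `module show`):

```sh
# Let nc-config report the installation prefix of the loaded NetCDF module
export NETCDF=$(nc-config --prefix)
```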
Important note: **The environment must be consistent between compilation and runtime. If you compile WRF with a set of modules loaded, you must run it with the same set of modules**.
#### Configure WRF for compilation
This will test the system to check that all libraries can be properly linked. Type `./configure`, pick a generic dmpar INTEL (ifort/icc) configuration (usually 15), answer 1 when asked if you want to compile for nesting, then hit enter. "dmpar" means "distributed memory parallelization" and enables running WRF in parallel computing mode. For test compilations or for a toy setup, you might also choose a "serial" configuration.
...
...
If the configuration is successful, it ends with a message like:
```
This build of WRF will use NETCDF4 with HDF5 compression
```
But the configuration could also end with a message like this:
```sh
************************** W A R N I N G ************************************
NETCDF4 IO features are requested, but this installation of NetCDF
```
This is actually a misleading error message.
The configure script stores the model configuration in a file called `configure.wrf`. This file is specific to the source code version, to the server where the source code is compiled, and to the software environment. If you have a working `configure.wrf` file for a given source code/server/environment, back it up.
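One simple way to do that (the name of the backup copy is arbitrary):

```sh
# Keep a copy tagged with the server name and the date
cp configure.wrf configure.wrf.backup_$(hostname)_$(date +%Y%m%d)
```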
#### Compile WRF
You always compile WRF for a specific model configuration. The ones we use most commonly are `em_les` (for large-eddy simulation), `em_quarter_ss` (for idealized mesoscale simulations), and `em_real` (for real-case forecasts). So type one of the following, depending on what you want to get:
```sh
./compile em_les > compile.log 2>&1 &
./compile em_quarter_ss > compile.log 2>&1 &
./compile em_real > compile.log 2>&1 &
```
The `> compile.log` tells the operating system to redirect the output stream from the terminal to a file called `compile.log`. The `2>&1` tells the operating system to merge the standard and error output streams, so `compile.log` will contain both regular output and error messages. The final `&` tells the operating system to run the job in the background, and returns to the terminal prompt.
The compiled code will be created in the `run` directory, and some of the compiled programs will be linked in one of the `test/em_les`, `test/em_quarter_ss` or `test/em_real` directories. Executable WRF files typically have names ending in `.exe` (this is just a convention; it is not actually necessary for them to run).
Compilation may take half an hour or so. A successful compilation ends with:
```
build completed: Thu Feb 2 17:07:04 CET 2023
```
If instead the log ends with error messages, then you have a problem, and there is no unique solution. Take a closer look at `compile.log` and you might be able to diagnose it.
### Copying compiled WRF code
### Running WRF in a software container
### Running an idealized simulation
### Running a real-case simulation
### Suggested workflow
### Analysing model output
[Python interface to WRF](https://wrf-python.readthedocs.io/en/latest/)
### Important namelist settings
## Advanced usage
### Changing the source code
### Conditional compilation
Most Fortran compilers can pass the source code through the C preprocessor (CPP, sometimes also called the Fortran preprocessor, FPP) before compilation. This enables conditional compilation: preprocessor directives determine which portions of the source code are actually compiled.
...
...
For instance, in `dyn_em/module_initialize_ideal.F`, the following bits of code define the model orography for idealized large-eddy simulation runs. Four possibilities are given: `MTN`, `EW_RIDGE`, `NS_RIDGE`, and `NS_VALLEY`. If none is selected at compile time, none of these code lines is compiled and `grid%ht(i,j)` (the model orography) is set to 0:
```fortran
#ifdef MTN
DO j=max(ys,jds),min(ye,jde-1)
DO i=max(xs,ids),min(xe,ide-1)
...
...
```
To control conditional compilation:
1. Search for the variable `ARCHFLAGS` in `configure.wrf`
2. Add the desired define statement at the bottom. For instance, to selectively compile the `NS_VALLEY` block above, do the following:
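The original example is not reproduced here. As a sketch, the idea is to locate the `ARCHFLAGS` definition, append `-DNS_VALLEY` to it by hand, and then recompile:

```sh
# Show where ARCHFLAGS is defined in configure.wrf, then append -DNS_VALLEY
# to that definition with a text editor and recompile em_les.
grep -n "ARCHFLAGS" configure.wrf
```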
### Running LES with online computation of resolved turbulent fluxes
WRFlux
## Data assimilation (DA)
### Observation nudging
### Variational DA
WRFDA
### Ensemble DA
We cover this separately. See DART-WRF.
## Specific tasks
### Before running the model
#### Defining the vertical grid
#### Customizing model orography
#### Defining a new geographical database
#### Using ECMWF data as IC/BC
Long story short: link the grib1 files and process them with `ungrib.exe` using `Vtable.ECMWF_sigma`.
...
...
In detail:
1. Conversion to grib1 (needs the `grib_set` utility from ecCodes):
```sh title="convert to grib1"
for i in det.CROSSINN.mlv.20190913.0000.f*.grib2; do
  j=$(basename $i .grib2).grib1   # output name: same as input, but ending in .grib1
  grib_set -s deletePV=1,edition=1 ${i} ${j}
done
```
2. Concatenation of grib files (two sets of files, `*mlv*` and `*sfc*`, with names ending with "grib1" yield a new set of files with names ending with "grib"; everything is grib1):
```sh title="concatenate grib files"
for i in det.CROSSINN.mlv.20190913.0000.f*.grib1; do
  j=$(echo $i | sed 's/.mlv./.sfc./')                      # matching surface-level file
  k=$(echo $i | sed 's/.mlv././' | sed 's/.grib1/.grib/')  # name of the combined file
  cat $i $j > $k
done
```
An alternative procedure would be to convert everything to grib2 instead of grib1. Then one has to use a Vtable with grib2 information for the surface fields, for instance the one included at the bottom (only its header lines are reproduced here). But: data from the bottom soil level will not be read correctly with this Vtable, because the Level2 value for the bottom level is actually MISSING in the grib2 files (at the time of writing, 6 May 2022; this may be fixed in the future).
```
GRIB1| Level| From | To | metgrid | metgrid | metgrid |GRIB2|GRIB2|GRIB2|GRIB2|
Param| Type |Level1|Level2| Name | Units | Description |Discp|Catgy|Param|Level|
```
1. In the code snippet above, `-remapnn` specifies the interpolation engine, in this case nearest-neighbour; a generic invocation is sketched after this list. See alternatives here: <https://code.mpimet.mpg.de/projects/cdo/wiki/Tutorial#Horizontal-fields>
1. The file `gridfile.lonlat.txt` contains the grid specifications, e.g.:
...
...
```
yfirst = 43.00
yinc   = 0.01
```
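The cdo command referred to above is not shown in this section; as a generic sketch (file names are placeholders), a nearest-neighbour remapping with such a grid file could look like:

```sh
# Interpolate infile.nc onto the regular lon-lat grid described in gridfile.lonlat.txt
cdo remapnn,gridfile.lonlat.txt infile.nc outfile.nc
```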
#### Subsetting model output
#### Further compression of model output (data packing)