diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index c4f246ceba5fa3581a818472771fcae9407bc267..0ff8af9b20de3ce6545e3e723cce2b14c149a85e 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -15,12 +15,7 @@ stages: build: stage: build - - rules: - # only run pipeline when build is in the commit message - - if: $CI_COMMIT_MESSAGE =~ /.*2wolke.*/ - - if: $UPDATEWOLKE - + when: manual before_script: # Install all required packages - apt-get install -y -qq graphviz @@ -38,6 +33,7 @@ build: deploy: stage: deploy + when: manual needs: - build before_script: @@ -46,6 +42,6 @@ deploy: # - sshpass -p "$WOLKE_PASSWORD" scp -oStrictHostKeyChecking=no -r ./site/* $WOLKE_USER@wolke.img.univie.ac.at:/var/www/html/documentation/general/ - sshpass -p "$WOLKE_PASSWORD" rsync -atv --delete -e "ssh -o StrictHostKeyChecking=no" ./site/ $WOLKE_USER@wolke.img.univie.ac.at:/var/www/html/documentation/general cache: - key: build-cache - paths: - - site/ + key: build-cache + paths: + - site/ diff --git a/Data/README.md b/Data/README.md index 998932ce773a9a34982f3a6ef1dd9ef334383b3b..3b15fe2f494a3a0dae9f87cc03456f066d6b1cae 100644 --- a/Data/README.md +++ b/Data/README.md @@ -1,23 +1,31 @@ -# Data Repositories +# Data Descriptions -available at the Department of Meteorology and Geophysics. -Edit this file also on [gitlab](https://gitlab.phaidra.org/imgw/computer-resources/-/blob/master/Data/README.md) +Purpose: list available data at the department of Meteorology and Geophysics. + +Edit this file here or on [gitlab](https://gitlab.phaidra.org/imgw/computer-resources/-/blob/master/Data/README.md) + +Fill into the appropriate table and add a README to the directory/dataset as well. Use the Data-template.md for example. -# Reanalysis data -There are currently the following reanalysis available | Name | Time period | Resolution | Domain | Variables | Vertical Resolution | Location | Contact | Comments | Source | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | CERA 20C | 01-1900 to 12-2010 | 2.0 deg x 2.0 deg; 3-hourly | Global | --- | 91 layers | /jetfs/shared-data/ECMWF/CERA_glob_2deg_3h | Flexpart group | the data is in Flexpart format! | extracted from ECMWF via flex_extract | | ERA-Interium | 01-1999 to 01-2019 | 1.0 deg x 1.0 deg; 3-hourly | Global | --- | 60 layers | /jetfs/shared-data/ECMWF/EI_glob_1deg_3h | Flexpart group | the data is in Flexpart format! | extracted from ECMWF via flex_extract | | OPERATIONAL | 01-2016 to 06-2020 | 1.0 deg x 1.0 deg; 3-hourly | Global | --- | 137 layers | /jetfs/shared-data/ECMWF/OPER_glob_1deg_3h | Flexpart group | the data is in Flexpart format! | extracted from ECMWF via flex_extract | -| ERA5 | IN PROGRESS | 0.5 deg x 0.5 deg; 1-hourly | Global | --- | 137 layers | /jetfs/shared-data/ECMWF/ERA5_glob_0.5deg_1h | Flexpart group | the data is in Flexpart format! | extracted from ECMWF via flex_extract | -| ERA5 europe | IN PROGRESS | 0.25 deg x 0.25 deg; 1-hourly | Europe | --- | 137 layers | /jetfs/shared-data/ECMWF/ERA5_euro_0.25deg_1h | Flexpart group | the data is in Flexpart format! | extracted from ECMWF via flex_extract | +| ERA5 | 01-1959 to 10-2022 | 0.5 deg x 0.5 deg; 1-hourly | Global | --- | 137 layers | /jetfs/shared-data/ECMWF/ERA5_glob_0.5deg_1h | Flexpart group | the data is in Flexpart format! | extracted from ECMWF via flex_extract | +| ERA5 europe | 01-2000 to 10-2022 | 0.25 deg x 0.25 deg; 1-hourly | Europe | --- | 137 layers | /jetfs/shared-data/ECMWF/ERA5_euro_0.25deg_1h | Flexpart group | the data is in Flexpart format! 
| extracted from ECMWF via flex_extract | # Observations +| Name | Time period | Temporal resolution | Horizontal resolution | Vertical Resolution | Variables | Location | Contact | Comments | Source | +| --- | --- | --- | ---| --- | --- | --- | --- | --- | --- | + + # Satellite + + + # Model Simulations ICON-INPUTDATA diff --git a/WRF.md b/WRF.md new file mode 100644 index 0000000000000000000000000000000000000000..0a196e680bb1443bfe7d194219329b9c944c3dc4 --- /dev/null +++ b/WRF.md @@ -0,0 +1,568 @@ +# Table of contents + +1. [What is WRF](#what-is-wrf) +1. [Quick start](#quick-start) +1. [Basic usage](#basic-usage) + * [Organization of the source code](#organization-of-the-source-code) + * [Compiling the model](#compiling-the-model) + * [Make the prerequisite libraries available](#make-the-prerequisite-libraries-available) + * [Configure WRF for compilation](#configure-wrf-for-compilation) + * [Compile WRF](#compile-wrf) + * [Copying compiled WRF code](#copying-compiled-wrf-code) + * [Running WRF in a software container](#) + * [Running an idealized simulation](#) + * [Running a real-case simulation](#) + * [Suggested workflow](#) + * [Analysing model output](#) + * [Important namelist settings](#) +1. [Advanced usage](#) + * [Changing the source code](#) + * [Conditional compilation](#conditional-compilation) + * [Customizing model output](#) + * [Adding namelist variables](#) + * [Running offline nested simulations](#) + * [Running LES with online computation of resolved-fluxes turbulent fluxes](#) +1. [Data assimilation (DA)](#) + * [Observation nudging](#) + * [Variational DA](#) + * [Ensemble DA](#) +1. [Specific tasks](#) + * [Before running the model](#) + * [Defining the vertical grid](#) + * [Defining a new geographical database](#) + * [Using ECMWF data as IC/BC](#) + * [Spinning up soil fields](#) + * [After running the model](#) + * [Interpolating model output to a new grid](#) + * [Subsetting model output](#) + * [Further compression of model output (data packing)](#) + * [Converting model output to CF-compliant NetCDF](#) +1. [Other useful tools](#) + +# What is WRF + +WRF is a community-driven numerical weather prediction model, originally developed in the US in a collaboration between the research community (National Center for Atmospheric Research, [NCAR](https://ncar.ucar.edu), part of the University Corporation for atmospheric Research, [UCAR](https://www.ucar.edu]) and the National Weather Service (National Centers for Environmental Prediction, [NCEP](https://www.weather.gov/ncep/) at the National Oceanic and Atmospheric Administration, [NOAA](https://www.noaa.gov/)). + +Over the years, WRF evolved into two distinct models. [ARW-WRF](https://www.mmm.ucar.edu/models/wrf) (Advanced Research WRF) is maintained by NCAR and is used by the research community. [WRF-NMM](https://nomads.ncep.noaa.gov/txt_descriptions/WRF_NMM_doc.shtml) is used operationally by the National Weather Service. We use ARW-WRF. + +Most of the information about the ARW-WRF is accessible from the [WRF users page](https://www2.mmm.ucar.edu/wrf/users/). The formulation of the model (background theory, numerical aspects, dynamical core, parameterizations) is described in depth in a [Technical description](https://opensky.ucar.edu/islandora/object/opensky:2898), which is periodically updated. The practical use of the model is described in a [User guide](https://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/v4.4/contents.html). 
If you want to acknowledge use of WRF in a manuscript or thesis and prefer not to refer to grey literature, you can use the article by [Skamarock and Klemp (2008)](https://doi.org/10.1016/j.jcp.2007.01.037) as a reference.

NCAR periodically organizes WRF tutorials (one-week workshops for beginners). The [teaching material from the WRF tutorials](https://www2.mmm.ucar.edu/wrf/users/tutorial/tutorial_presentations_2021.htm) is available online and is a great source of information. There is also an [online tutorial](https://www2.mmm.ucar.edu/wrf/OnLineTutorial/index.php) that covers the basics of installing and running WRF.

There is also a [users' forum](https://forum.mmm.ucar.edu/), which can be a source of information on solutions to common problems. However, most forum posts describe problems and few offer useful solutions; browsing the forum directly in search of answers is rarely productive, but landing on a forum thread from a web search can be useful.

WRF and related programs run as executables on Linux machines and clusters. Running WRF requires access to a Linux terminal. If you work on Linux or macOS, this is trivial: just open a terminal window. If you work on Windows, consider using a terminal emulator that supports X11 forwarding (a protocol that enables running interactive graphical applications on a remote server via ssh). There are several alternatives; one option that has proved to work well is [MobaXterm](https://mobaxterm.mobatek.net/).

The WRF source code is available on [GitHub](https://github.com/wrf-model/WRF). It is possible to check out the repository, but the recommended way of getting the code is to download one of the [official releases](https://github.com/wrf-model/WRF/releases): scroll down to the "Assets" section and choose one of the `v*.tar.gz` or `v*.zip` files (not the "Source code" ones; these are incomplete).

To download from a terminal on a remote server, use wget or curl:
```
wget "https://github.com/wrf-model/WRF/releases/download/v4.4.2/v4.4.2.tar.gz"
curl -OL "https://github.com/wrf-model/WRF/archive/refs/tags/v4.4.2.zip"
```

To uncompress the source code, use either of the following (depending on the format):
```
tar xzvf v4.4.2.tar.gz
unzip v4.4.2.zip
```

# Quick start

Compiling WRF for an idealized simulation (LES):
```
./configure
./compile em_les > compile.log 2>&1 &
```

Running WRF for an idealized simulation (LES):
```
cd ./test/em_les
./ideal.exe
./wrf.exe
```
For other test cases, compilation might create a `run_me_first.csh` script in the same directory as the executables. If there is one, run it once before any other program: it links the lookup tables needed for the simulation (land-use, parameterizations, etc.).

Compiling WRF for a real-case simulation:
```
./configure
./compile em_real > compile.log 2>&1 &
```

Running WRF for a real-case simulation:
```
cd test/em_real
ln -s $WPS_PATH/met_em* .
./real.exe
./wrf.exe
```

To run the WRF pre-processing (WPS) for a real-case simulation that takes initial and boundary conditions from ECMWF-IFS data on model levels, you could use a script such as the following. However, it relies on namelists, variable tables and other settings files being correctly specified. See below for details.
```
#!/bin/bash
set -eu

# Set paths
date=20190726.0000
gribdir=/users/staff/serafin/data/GRIB_IC_for_LAM/ECMWF/TEAMx_convection/

# Run WPS
./geogrid.exe
./link_grib.csh ${gribdir}/${date}/*
./ungrib.exe
./calc_ecmwf_p.exe
./avg_tsfc.exe
mpirun -np 32 ./metgrid.exe

# Archive results and clean up
archive=./archive/TEAMxConv_${date}
mkdir -p ${archive}
mv geo_em.d0?.nc met_em*nc ${archive}
cp namelist.wps geogrid/GEOGRID.TBL.HIRES ${archive}
rm -fr FILE* PRES* TAVGSFC GRIBFILE* metgrid.log.*
```

# Basic usage

## Organization of the source code

After download and unpacking, the WRF source code looks like this:

```
(base) [serafin@srvx1 WRF-4.4.2]$ ls
total 236K
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 arch
drwxr-xr-x.  3 serafin users 8,0K 19 dic 18.37 chem
-rwxr-xr-x.  1 serafin users 4,0K 19 dic 18.37 clean
-rwxr-xr-x.  1 serafin users  17K 19 dic 18.37 compile
-rwxr-xr-x.  1 serafin users  37K 19 dic 18.37 configure
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 doc
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 dyn_em
drwxr-xr-x. 17 serafin users 4,0K 19 dic 18.37 external
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 frame
drwxr-xr-x. 16 serafin users 4,0K 19 dic 18.37 hydro
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 inc
-rw-r--r--.  1 serafin users 1,1K 19 dic 18.37 LICENSE.txt
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 main
-rw-r--r--.  1 serafin users  57K 19 dic 18.37 Makefile
drwxr-xr-x.  3 serafin users 8,0K 19 dic 18.37 phys
-rw-r--r--.  1 serafin users  18K 19 dic 18.37 README
-rw-r--r--.  1 serafin users 1,2K 19 dic 18.37 README.md
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 Registry
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 run
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 share
drwxr-xr-x. 17 serafin users 4,0K 19 dic 18.37 test
drwxr-xr-x.  4 serafin users 4,0K 19 dic 18.37 tools
drwxr-xr-x. 14 serafin users 4,0K 19 dic 18.37 var
drwxr-xr-x.  2 serafin users 4,0K 19 dic 18.37 wrftladj
```

Knowing the structure of the source code in detail is not necessary for the average user. However, the directories where most of the practical work is done are:

* `run`: this is where the compiled executables and lookup tables will reside after compilation.
* `test`: this contains several subdirectories, each of which refers to a specific compilation mode. For instance, compiling WRF for large-eddy simulation will link some executables in `em_les`, while compiling WRF for real-case simulations will link some other executables and lookup tables in `em_real`. Most of the test subdirectories refer to simple idealized simulations, some of which are two-dimensional. These test cases are used to validate the model's dynamical core (e.g., check whether it correctly reproduces analytical solutions of the Euler or Navier-Stokes equations).

In some cases, editing the model source code is necessary. This mostly happens in these directories:
* `dyn_em`: this contains the source code of the dynamical core of the model ("model dynamics") and of part of the initialization programmes.
* `phys`: this contains the source code of parameterization schemes ("model physics").
* `Registry`: large chunks of the WRF source code are generated automatically at compile time, based on the information contained in a text file called `Registry`. This file specifies, for instance, which model variables are saved in the output, and how.
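When you do need to touch the source, a quick way to orient yourself is to search the `Registry` and the physics/dynamics directories before editing anything. A minimal sketch (the variable and scheme names below are only illustrative examples):

```
# Find where a state/namelist variable is declared in the Registry
# (code generated from it is rebuilt at the next compilation)
grep -rn "rthraten" Registry/

# Locate the source files of a parameterization scheme, e.g. a boundary-layer scheme
grep -ril "ysu" phys/

# Find where an idealized case sets up its orography
grep -rn "mtn_ht" dyn_em/module_initialize_ideal.F
```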
## Compiling the model

WRF is written in compiled languages (mostly Fortran and C), so it needs to be compiled before execution. It relies on external software libraries at compilation and runtime, so these libraries have to be available on the system where WRF runs.

In general, compiled WRF versions are already available on all of our servers (SRVX1, JET, VSC4, VSC5), provided by the expert users. The easiest way of getting started is therefore to copy a compiled version of the code from them (see below).

Nevertheless, we describe the typical compilation workflow for anyone who wishes to try it out. There are three steps: (i) make the libraries available, (ii) configure, (iii) compile.

### Make the prerequisite libraries available

In most cases, precompiled libraries can be made available to the operating system using environment modules. Environment modules modify the Linux shell environment so that the operating system knows where to find specific executables, include files, software libraries and documentation files. Each server has its own set of available modules. As of 1.3.2023, WRF is known to compile and run with the following module collections.

SRVX1:
```
module load intel-parallel-studio/composer.2020.4-intel-20.0.4 openmpi/3.1.6-intel-20.0.4 netcdf-fortran/4.5.2-intel-20.0.4-MPI3.1.6 eccodes/2.19.1-intel-20.0.4-MPI3.1.6
```

JET (GNU Fortran compiler):
```
module load openmpi/4.0.5-gcc-8.5.0-ryfwodt hdf5/1.10.7-gcc-8.5.0-t247okg parallel-netcdf/1.12.2-gcc-8.5.0-zwftkwr netcdf-c/4.7.4-gcc-8.5.0-o7ahi5o netcdf-fortran/4.5.3-gcc-8.5.0-3bqsedn gcc/8.5.0-gcc-8.5rhel8-7ka2e42
```

JET (Intel Fortran compiler):
```
module load intel-parallel-studio/composer.2020.2-intel-20.0.2-zuot22y zlib/1.2.11-intel-20.0.2-3h374ov openmpi/4.0.5-intel-20.0.2-4wfaaz4 hdf5/1.12.0-intel-20.0.2-ezeotzr parallel-netcdf/1.12.1-intel-20.0.2-sgz3yqs netcdf-c/4.7.4-intel-20.0.2-337uqtc netcdf-fortran/4.5.3-intel-20.0.2-irdm5gq
```

JET (alternative setup with Intel Fortran compiler):
```
intel-oneapi-mpi/2021.4.0-intel-2021.4.0-eoone6i hdf5/1.10.7-intel-2021.4.0-n7frjgz parallel-netcdf/1.12.2-intel-2021.4.0-bykumdv netcdf-c/4.7.4-intel-2021.4.0-vvk6sk5 netcdf-fortran/4.5.3-intel-2021.4.0-pii33is intel-oneapi-compilers/2021.4.0-gcc-9.1.0-x5kx6di
```

VSC4:
```
module load pkgconf/1.8.0-intel-2021.5.0-bkuyrr7 intel-oneapi-compilers/2022.1.0-gcc-8.5.0-kiyqwf7 intel-oneapi-mpi/2021.6.0-intel-2021.5.0-wpt4y32 zlib/1.2.12-intel-2021.5.0-pctnhmb hdf5/1.12.2-intel-2021.5.0-loke5pd netcdf-c/4.8.1-intel-2021.5.0-hmrqrz2 netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy
```

Load modules with `module load LIST-OF-MODULE-NAMES`, unload them one by one with `module unload LIST-OF-MODULE-NAMES`, unload all of them at once with `module purge`, and get information about a specific module with `module show MODULE_NAME`. Modules may depend on each other. If the system is set up properly, a request to load one module will automatically load any prerequisite ones.

After loading modules, it is also recommended to set the `NETCDF` environment variable to the root directory of the netCDF installation. On SRVX1, JET and VSC4, use `module show` to see which directory is correct.
For instance: + +``` +(skylake) [serafins@l46 TEAMx_real]$ module list +Currently Loaded Modulefiles: +1) pkgconf/1.8.0-intel-2021.5.0-bkuyrr7 4) zlib/1.2.12-intel-2021.5.0-pctnhmb 7) netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy +2) intel-oneapi-compilers/2022.1.0-gcc-8.5.0-kiyqwf7 5) hdf5/1.12.2-intel-2021.5.0-loke5pd +3) intel-oneapi-mpi/2021.6.0-intel-2021.5.0-wpt4y32 6) netcdf-c/4.8.1-intel-2021.5.0-hmrqrz2 +(skylake) [serafins@l46 TEAMx_real]$ module show netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy +------------------------------------------------------------------- +/opt/sw/spack-0.19.0/var/spack/environments/skylake/modules/linux-almalinux8-skylake/netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy: + +module-whatis {NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. This is the Fortran distribution.} +prepend-path PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/bin +prepend-path LIBRARY_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/lib +prepend-path LD_LIBRARY_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/lib +prepend-path CPATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/include +prepend-path MANPATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/share/man +prepend-path PKG_CONFIG_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/lib/pkgconfig +prepend-path CMAKE_PREFIX_PATH /gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj/ +------------------------------------------------------------------- +(skylake) [serafins@l46 TEAMx_real]$ export NETCDF=/gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj +(skylake) [serafins@l46 TEAMx_real]$ env|grep NETCDF +NETCDF=/gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj +``` + +On VSC5 do not use `module`, but `spack`: +``` +spack load intel-oneapi-compilers +spack load netcdf-fortran@4.4.5%intel +``` + +To check the library paths of loaded modules: +``` +(zen3) [serafins@l51 ~]$ spack find --loaded --paths +==> In environment zen3 +... +==> 8 loaded packages +-- linux-almalinux8-zen2 / intel@2021.5.0 ----------------------- +hdf5@1.10.5 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/hdf5-1.10.5-tty2baooecmvy5vhfhyt5uc3bj46cwpl +intel-oneapi-mpi@2021.4.0 /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/intel-oneapi-mpi-2021.4. 
0-jjcwtufcblofydeg2s3vm7fjb3qsezpf
netcdf-c@4.7.0            /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-c-4.7.0-spzlhyrfnqcl53ji25zop2adp222ftq4
netcdf-fortran@4.4.5      /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-fortran-4.4.5-um5yjit56ufokugazyhqgpcldrjfb2w4
numactl@2.0.14            /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/numactl-2.0.14-beunpggnwwluwk7svx6zkjohv2ueayei
pkgconf@1.8.0             /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/pkgconf-1.8.0-ig5i4nqzqldjasgmkowp5ttfevdb4bnr
zlib@1.2.11               /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/zlib-1.2.11-6lzwo7c5o3db2q7hcznhzr6k3klh7wok

-- linux-almalinux8-zen3 / gcc@11.2.0 ---------------------------
intel-oneapi-compilers@2022.0.2  /gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen3/gcc-11.2.0/intel-oneapi-compilers-2022.0.2-yzi4tsud2tqh4s6ykg2ulr7pp7guyiej
(zen3) [serafins@l51 ~]$ export NETCDF=/gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-fortran-4.4.5-um5yjit56ufokugazyhqgpcldrjfb2w4
(zen3) [serafins@l51 ~]$ env|grep NETCDF
NETCDF=/gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/netcdf-fortran-4.4.5-um5yjit56ufokugazyhqgpcldrjfb2w4
```

Important note: **The environment must be consistent between compilation and runtime. If you compile WRF with a set of modules loaded, you must run it with the same set of modules.**

### Configure WRF for compilation

This step tests the system to check that all libraries can be properly linked. Type `./configure`, pick a generic dmpar INTEL (ifort/icc) configuration (usually 15), answer 1 when asked if you want to compile for nesting, then hit enter. "dmpar" means "distributed memory parallelization" and enables running WRF in parallel computing mode. For test compilations or for a toy setup, you might also choose a "serial" configuration.

If all goes well, the configuration will end with a message like this:
```
*****************************************************************************
This build of WRF will use NETCDF4 with HDF5 compression
*****************************************************************************
```

But the configuration could also end with a message like this:
```
************************** W A R N I N G ************************************
NETCDF4 IO features are requested, but this installation of NetCDF
  /home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-fortran-4.5.2-ktet7v73pc74qrx6yc3234zhfo573w23
DOES NOT support these IO features.

Please make sure NETCDF version is 4.1.3 or later and was built with
  --enable-netcdf4

OR set NETCDF_classic variable
  bash/ksh : export NETCDF_classic=1
  csh      : setenv NETCDF_classic 1

Then re-run this configure script

!!! configure.wrf has been REMOVED !!!

*****************************************************************************
```
This is a misleading error message. The problem usually has nothing to do with NETCDF4 features being unavailable, but with the operating system not knowing where the NetCDF libraries are. In most cases it is enough to make sure that the required modules are loaded and that the `NETCDF` environment variable points to the correct netCDF installation (see above), and then re-run `./configure`.

The configure script stores the model configuration in a file called `configure.wrf`. This file is specific to the source code version, to the server where the source code is compiled, and to the software environment. If you have a working `configure.wrf` file for a given source code/server/environment combination, back it up.
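Putting the pieces together, a typical configure session might look like the sketch below (shown for VSC4 with the module list from above; the exact `NETCDF` path on your system will differ, and `nf-config` is only available if the netcdf-fortran module provides it):

```
# Load a toolchain known to work with WRF (VSC4 example from the list above)
module purge
module load pkgconf/1.8.0-intel-2021.5.0-bkuyrr7 \
            intel-oneapi-compilers/2022.1.0-gcc-8.5.0-kiyqwf7 \
            intel-oneapi-mpi/2021.6.0-intel-2021.5.0-wpt4y32 \
            zlib/1.2.12-intel-2021.5.0-pctnhmb \
            hdf5/1.12.2-intel-2021.5.0-loke5pd \
            netcdf-c/4.8.1-intel-2021.5.0-hmrqrz2 \
            netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy

# Point WRF to the root of the netCDF-Fortran installation
export NETCDF=$(nf-config --prefix)   # alternatively, copy the path shown by `module show`

# Interactive configuration: pick a dmpar Intel option and answer 1 for nesting
./configure

# Keep a copy of the working configuration for this code version/server/environment
cp configure.wrf configure.wrf.vsc4_intel_dmpar
```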
+ +### Compile WRF + +You always compile WRF for a specific model configuration. The ones we use most commonly are `em_les` (for large-eddy simulation), `em_quarter_ss` (for idealized mesoscale simulations), `em_real` (for real-case forecasts). So type either of the following, depending on what you want to get: +``` +./compile em_les > compile.log 2>&1 & +./compile em_quarter_ss > compile.log 2>&1 & +./compile em_real > compile.log 2>&1 & +``` +The `> compile.log` tells the operating system to redirect the output stream from the terminal to a file called `compile.log`. The `2>&1` tells the operating system to merge the standard and error output streams, so `compile.log` will contain both regular output and error messages. The final `&` tells the operating system to run the job in the background, and returns to the terminal prompt. + +The compiled code will be created in the `run` directory, and some of the compiled programs will be linked in either of the `test/em_les`, `test/em_quarter_ss` or `test/em_real` directories. Executable WRF files typically have names ending with `.exe` (this is just conventional; it is actually not necessary for them to run). + +Compilation may take half an hour or so. A successful compilation ends with: +``` +========================================================================== +build started: mer 19 ott 2022, 16.17.36, CEST +build completed: mer 19 ott 2022, 16.51.46, CEST + +---> Executables successfully built <--- + +-rwxr-xr-x 1 serafin users 51042008 19 ott 16.51 main/ideal.exe +-rwxr-xr-x 1 serafin users 57078208 19 ott 16.51 main/wrf.exe + +========================================================================== +``` + +If instead you get this: +``` +========================================================================== +build started: Thu Feb 2 16:30:55 CET 2023 +build completed: Thu Feb 2 17:07:04 CET 2023 + +---> Problems building executables, look for errors in the build log <--- + +========================================================================== +``` +then you have a problem, and there is no unique solution. Take a closer look at `compile.log` and you might be able to diagnose it. + +## Copying compiled WRF code + +## Running WRF in a software container + +## Running an idealized simulation + +## Running a real-case simulation + +## Suggested workflow + +## Analysing model output + +[Python interface to WRF](https://wrf-python.readthedocs.io/en/latest/) + +## Important namelist settings + +# Advanced usage + +## Changing the source code + +## Conditional compilation + +Most Fortran compilers allow passing the source code through a C preprocessor (CPP; sometimes also called the Fortran preprocessor, FPP) to allow for conditional compilation. In the C programming language, there are some directives that make it possible to compile portions of the source code selectively. + +In the WRF source code, Fortran files have an .F extension. cpp will parse these files and create corresponding .f90 files. The .f90 files will then be compiled by the Fortran compiler. + +This means: +1. When editing the source code, always work on the .F files, otherwise changes will be lost on the next compilation. +2. In the .F files, it is possible to include `#ifdef` and `#ifndef` directives for conditional compilation. + +For instance, in `dyn_em/module_initialize_ideal.F`, the following bits of code define the model orography for idealized large-eddy simulation runs. Four possibilities are given: `MTN`, `EW_RIDGE`, `NS_RIDGE`, and `NS_VALLEY`. 
If none is selected at compile time, none of these code lines is compiled and `grid%ht(i,j)` (the model orography) is set to 0: + +``` +#ifdef MTN + DO j=max(ys,jds),min(ye,jde-1) + DO i=max(xs,ids),min(xe,ide-1) + grid%ht(i,j) = mtn_ht * 0.25 * & + ( 1. + COS ( 2*pi/(xe-xs) * ( i-xs ) + pi ) ) * & + ( 1. + COS ( 2*pi/(ye-ys) * ( j-ys ) + pi ) ) + ENDDO + ENDDO +#endif +#ifdef EW_RIDGE + DO j=max(ys,jds),min(ye,jde-1) + DO i=ids,ide + grid%ht(i,j) = mtn_ht * 0.50 * & + ( 1. + COS ( 2*pi/(ye-ys) * ( j-ys ) + pi ) ) + ENDDO + ENDDO +#endif +#ifdef NS_RIDGE + DO j=jds,jde + DO i=max(xs,ids),min(xe,ide-1) + grid%ht(i,j) = mtn_ht * 0.50 * & + ( 1. + COS ( 2*pi/(xe-xs) * ( i-xs ) + pi ) ) + ENDDO + ENDDO +#endif +#ifdef NS_VALLEY + DO i=ids,ide + DO j=jds,jde + grid%ht(i,j) = mtn_ht + ENDDO + ENDDO + xs=ids !-1 + xe=xs + 20000./config_flags%dx + DO j=jds,jde + DO i=max(xs,ids),min(xe,ide-1) + grid%ht(i,j) = mtn_ht - mtn_ht * 0.50 * & + ( 1. + COS ( 2*pi/(xe-xs) * ( i-xs ) + pi ) ) + ENDDO + ENDDO +#endif +``` + +To control conditional compilation: +1. Search for the variable `ARCHFLAGS` in `configure.wrf` +2. Add the desired define statement at the bottom. For instance, to selectively compile the `NS_VALLEY` block above, do the following: +``` +ARCHFLAGS = $(COREDEFS) -DIWORDSIZE=$(IWORDSIZE) -DDWORDSIZE=$(DWORDSIZE) -DRWORDSIZE=$(RWORDSIZE) -DLWORDSIZE=$(LWORDSIZE) \ + $(ARCH_LOCAL) \ + $(DA_ARCHFLAGS) \ + -DDM_PARALLEL \ +... + -DNMM_NEST=$(WRF_NMM_NEST) \ + -DNS_VALLEY + +``` + +## Customizing model output + +## Adding namelist variables + +## Running offline nested simulations + +## Running LES with online computation of resolved-fluxes turbulent fluxes + +WRFlux + +# Data assimilation (DA) + +## Observation nudging + +## Variational DA + +WRFDA + +## Ensemble DA + +We cover this separately. See DART-WRF. + +# Specific tasks + +## Before running the model + +### Defining the vertical grid + +### Customizing model orography + +### Defining a new geographical database + +### Using ECMWF data as IC/BC + +The long story made short is: you should link grib1 files and process them with `ungrib.exe` using `Vtable.ECMWF_sigma`. + +More in detail, since a few years ECMWF has been distributing a mixture of grib2 and grib1 files. Namely: + +* grib1 files for surface and soil model levels. +* grib2 files for atmospheric model levels. + +The WPS has a predefined Vtable for grib1 files from ECMWF, so the easiest way to process ECMWF data is to: + +1. convert model-level grib2 files to grib1 +2. if necessary, for every time stamp, concatenate the model-level and surface grib1 files into a single file. This is only necessary if the grib1 and grib2 data were downloaded as separate sets of GRIB files. +3. process the resulting files with ungrib after linking `ungrib/Variable_Tables/Vtable.ECMWF_sigma` as `Vtable` + +In detail: + +1. Conversion to grib1 (needs the grib_set utility from eccodes): + + for i in det.CROSSINN.mlv.20190913.0000.f*.grib2; do j=`basename $i .grib2`; grib_set -s deletePV=1,edition=1 ${i} ${j}; done + +2. Concatenation of grib files (two sets of files, `*mlv*` and `*sfc*`, with names ending with "grib1" yield a new set of files with names ending with "grib"; everything is grib1): + + for i in det.CROSSINN.mlv.20190913.0000.f*.grib1; do j=`echo $i|sed 's/.mlv./.sfc./'`; k=`echo $i|sed 's/.mlv././'|sed 's/.grib1/.grib/'`; cat $i $j > $k; done + +3. 
In the WPS main directory: + + link_grib.csh /data/GRIB_IC_for_LAM/ECMWF/20190913_CROSSINN_IOP8/det.CROSSINN.20190913.0000.f*.grib + ln -s ungrib/Variable_Tables/Vtable.ECMWF_sigma Vtable + ./ungrib.exe + +An alternative procedure would be to convert everything to grib2 instead of grib1. Then, one has to use a Vtable with grib2 information for the surface fields, for instance the one included here at the bottom. But: Data from the bottom soil level will not be read correctly with this Vtable, because the Level2 value for the bottom level is actually MISSING in grib2 files (at the moment of writing, 6 May 2022; this may be fixed in the future). + + GRIB1| Level| From | To | metgrid | metgrid | metgrid |GRIB2|GRIB2|GRIB2|GRIB2| + Param| Type |Level1|Level2| Name | Units | Description |Discp|Catgy|Param|Level| + -----+------+------+------+----------+----------+------------------------------------------+-----------------------+ + 130 | 109 | * | | TT | K | Temperature | 0 | 0 | 0 | 105 | + 131 | 109 | * | | UU | m s-1 | U | 0 | 2 | 2 | 105 | + 132 | 109 | * | | VV | m s-1 | V | 0 | 2 | 3 | 105 | + 133 | 109 | * | | SPECHUMD | kg kg-1 | Specific humidity | 0 | 1 | 0 | 105 | + 152 | 109 | * | | LOGSFP | Pa | Log surface pressure | 0 | 3 | 25 | 105 | + 129 | 109 | * | | SOILGEO | m | Surface geopotential | 0 | 3 | 4 | 1 | + | 109 | * | | SOILHGT | m | Terrain field of source analysis | 0 | 3 | 5 | 1 | + 134 | 109 | 1 | | PSFCH | Pa | | 0 | 3 | 0 | 1 | + 157 | 109 | * | | RH | % | Relative Humidity | 0 | 1 | 1 | 105 | + 165 | 1 | 0 | | UU | m s-1 | U | 0 | 2 | 2 | 103 | + 166 | 1 | 0 | | VV | m s-1 | V | 0 | 2 | 3 | 103 | + 167 | 1 | 0 | | TT | K | Temperature | 0 | 0 | 0 | 103 | + 168 | 1 | 0 | | DEWPT | K | | 0 | 0 | 6 | 103 | + 172 | 1 | 0 | | LANDSEA | 0/1 Flag | Land/Sea flag | 2 | 0 | 0 | 1 | + 151 | 1 | 0 | | PMSL | Pa | Sea-level Pressure | 0 | 3 | 0 | 101 | + 235 | 1 | 0 | | SKINTEMP | K | Sea-Surface Temperature | 0 | 0 | 17 | 1 | + 34 | 1 | 0 | | SST | K | Sea-Surface Temperature | 10 | 3 | 0 | 1 | + 139 | 112 | 0| 700| ST000007 | K | T of 0-7 cm ground layer | 192 | 128 | 139 | 106 | + 170 | 112 | 700| 2800| ST007028 | K | T of 7-28 cm ground layer | 192 | 128 | 170 | 106 | + 183 | 112 | 2800| 10000| ST028100 | K | T of 28-100 cm ground layer | 192 | 128 | 183 | 106 | + 236 | 112 | 10000| 0| ST100289 | K | T of 100-289 cm ground layer | 192 | 128 | 236 | 106 | + 39 | 112 | 0| 700| SM000007 | fraction | Soil moisture of 0-7 cm ground layer | 192 | 128 | 39 | 106 | + 40 | 112 | 700| 2800| SM007028 | fraction | Soil moisture of 7-28 cm ground layer | 192 | 128 | 40 | 106 | + 41 | 112 | 2800| 10000| SM028100 | fraction | Soil moisture of 28-100 cm ground layer | 192 | 128 | 41 | 106 | + 42 | 112 | 10000| 0| SM100289 | fraction | Soil moisture of 100-289 cm ground layer | 192 | 128 | 42 | 106 | + -----+------+------+------+----------+----------+------------------------------------------+-----------------------+ + + +### Spinning up soil fields + +## After running the model + +### Converting model output to CF-compliant NetCDF + +1. To convert WRF output to CF-compliant NetCDF, use `wrfout_to_cf.ncl` (from <https://sundowner.colorado.edu/wrfout_to_cf/overview.html>): + + ncl 'file_in="wrfinput_d01"' 'file_out="wrfpost.nc"' wrfout_to_cf.ncl + +### Interpolating model output to a new grid + +1. First convert to CF-compliant NetCDF (see above) + +1. Then use cdo to interpolate the CF-compliant WRF output: + + cdo -remapnn,gridfile.lonlat.txt wrfpost.nc wrfpost_interpolated.nc + +1. 
In the code snippet above, `-remapnn` selects the interpolation method, in this case nearest-neighbour remapping. See here for alternatives: <https://code.mpimet.mpg.de/projects/cdo/wiki/Tutorial#Horizontal-fields>

1. The file `gridfile.lonlat.txt` contains the target grid specification, e.g.:

        gridtype  = lonlat
        gridsize  = 721801
        xsize     = 1201
        ysize     = 601
        xname     = lon
        xlongname = "longitude"
        xunits    = "degrees_east"
        yname     = lat
        ylongname = "latitude"
        yunits    = "degrees_north"
        xfirst    = 5.00
        xinc      = 0.01
        yfirst    = 43.00
        yinc      = 0.01

### Subsetting model output

### Further compression of model output (data packing)

### 3D visualization

# Other useful tools
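For example, `cdo` itself can double as a quick inspection and compression tool for the CF-compliant files produced above (a sketch; `wrfpost.nc` is the example file name used earlier, and the compression level is arbitrary):

```
# Print the grid description of a file (useful when building gridfile.lonlat.txt)
cdo griddes wrfpost.nc

# Summarize variables, time steps and grids
cdo sinfon wrfpost.nc

# Rewrite as compressed NetCDF4 (deflate level 5) to save disk space
cdo -f nc4 -z zip_5 copy wrfpost.nc wrfpost_compressed.nc
```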