From 8430d16d58bf81fc8a8be45348fa54f8056a4303 Mon Sep 17 00:00:00 2001
From: MB <michael.blaschek@univie.ac.at>
Date: Wed, 8 Mar 2023 10:35:04 +0100
Subject: [PATCH] fixed width, updated formatting of WRF

---
 WRF.md                       | 293 ++++++++++++++++++-----------------
 mkdocs/stylesheets/extra.css |   6 +-
 2 files changed, 152 insertions(+), 147 deletions(-)

diff --git a/WRF.md b/WRF.md
index 0a196e6..84ebdef 100644
--- a/WRF.md
+++ b/WRF.md
@@ -1,47 +1,16 @@
-# Table of contents
-
-1. [What is WRF](#what-is-wrf)
-1. [Quick start](#quick-start)
-1. [Basic usage](#basic-usage)
-    * [Organization of the source code](#organization-of-the-source-code)
-    * [Compiling the model](#compiling-the-model)
-        * [Make the prerequisite libraries available](#make-the-prerequisite-libraries-available)
-        * [Configure WRF for compilation](#configure-wrf-for-compilation)
-        * [Compile WRF](#compile-wrf)
-    * [Copying compiled WRF code](#copying-compiled-wrf-code)
-    * [Running WRF in a software container](#)
-    * [Running an idealized simulation](#)
-    * [Running a real-case simulation](#)
-    * [Suggested workflow](#)
-    * [Analysing model output](#)
-    * [Important namelist settings](#)
-1. [Advanced usage](#)
-    * [Changing the source code](#)
-    * [Conditional compilation](#conditional-compilation)
-    * [Customizing model output](#)
-    * [Adding namelist variables](#)
-    * [Running offline nested simulations](#)
-    * [Running LES with online computation of resolved-fluxes turbulent fluxes](#)
-1. [Data assimilation (DA)](#)
-    * [Observation nudging](#)
-    * [Variational DA](#)
-    * [Ensemble DA](#)
-1. [Specific tasks](#)
-    * [Before running the model](#)
-        * [Defining the vertical grid](#)
-        * [Defining a new geographical database](#)
-        * [Using ECMWF data as IC/BC](#)
-        * [Spinning up soil fields](#)
-    * [After running the model](#)
-        * [Interpolating model output to a new grid](#)
-        * [Subsetting model output](#)
-        * [Further compression of model output (data packing)](#)
-        * [Converting model output to CF-compliant NetCDF](#)
-1. [Other useful tools](#)
-
-# What is WRF
-
-WRF is a community-driven numerical weather prediction model, originally developed in the US in a collaboration between the research community (National Center for Atmospheric Research, [NCAR](https://ncar.ucar.edu), part of the University Corporation for atmospheric Research, [UCAR](https://www.ucar.edu]) and the National Weather Service (National Centers for Environmental Prediction, [NCEP](https://www.weather.gov/ncep/) at the National Oceanic and Atmospheric Administration, [NOAA](https://www.noaa.gov/)).
+# WRF
+
+
+
+This manual is intended as a first introduction to the Weather Research and
+Forecasting (WRF) model for beginner users.
+
+Table of contents:
+[TOC]
+
+## What is WRF
+
+WRF is a community-driven numerical weather prediction model, originally developed in the US in a collaboration between the research community (National Center for Atmospheric Research, [NCAR](https://ncar.ucar.edu), part of the University Corporation for Atmospheric Research, [UCAR](https://www.ucar.edu)) and the National Weather Service (National Centers for Environmental Prediction, [NCEP](https://www.weather.gov/ncep/) at the National Oceanic and Atmospheric Administration, [NOAA](https://www.noaa.gov/)).
 Over the years, WRF evolved into two distinct models. [ARW-WRF](https://www.mmm.ucar.edu/models/wrf) (Advanced Research WRF) is maintained by NCAR and is used by the research community.
 [WRF-NMM](https://nomads.ncep.noaa.gov/txt_descriptions/WRF_NMM_doc.shtml) is used operationally by the National Weather Service. We use ARW-WRF.
@@ -56,41 +25,48 @@ WRF and related programs run as executables on linux machines and clusters. Runn
 The WRF source code is available on [Github](https://github.com/wrf-model/WRF). It is possible to check out the repository, but the recommended way of getting the code is to download one of the [official releases](https://github.com/wrf-model/WRF/releases): scroll down to the "Assets" section and choose one of the `v*.tar.gz` or `v*.zip` files (not the "Source code" ones; these are incomplete). To download while working on the terminal on a remote server, use `wget` or `curl`:
-```
+
+```sh
 wget "https://github.com/wrf-model/WRF/releases/download/v4.4.2/v4.4.2.tar.gz"
 curl -OL "https://github.com/wrf-model/WRF/archive/refs/tags/v4.4.2.zip"
 ```
 
 To uncompress the source code, use either of the following (depending on the format):
-```
+
+```sh
 tar xzvf v4.4.2.tar.gz
 unzip v4.4.2.zip
 ```
 
-# Quick start
+## Quick start
 
 Compiling WRF for an idealized simulation (LES):
-```
+
+```sh
 ./configure
 ./compile em_les > compile.log 2>&1 &
 ```
 
 Running WRF for an idealized simulation (LES):
-```
+
+```sh
 cd ./test/em_les
 ./ideal.exe
 ./wrf.exe
 ```
+
 For other test cases, compilation might create a `run_me_first.csh` script in the same directory as the executables. If there is one, run it only once, before any other program. It will link the lookup tables needed for the simulation (land-use, parameterizations, etc.).
 
 Compiling WRF for a real-case simulation:
-```
+
+```sh
 ./configure
 ./compile em_real > compile.log 2>&1 &
 ```
 
 Running WRF for a real-case simulation:
-```
+
+```sh
 cd test/em_real
 ln -s $WPS_PATH/met_em* .
 ./real.exe
@@ -98,7 +74,8 @@ ln -s $WPS_PATH/met_em* .
 ```
 
 To run the WRF pre-processing for a real-case simulation, with initial and boundary conditions taken from ECMWF-IFS data on model levels, you could use a script such as the following. However, it depends on namelists, variable tables and other settings files being correctly specified. See below for details.
-```
+
+```sh title="Example: wrf-run-script.sh"
 #!/bin/bash
 set -eu
 
@@ -122,13 +99,13 @@ cp namelist.wps geogrid/GEOGRID.TBL.HIRES ${archive}
 rm -fr FILE* PRES* TAVGSFC GRIBFILE* metgrid.log.*
 ```
 
-# Basic usage
+## Basic usage
 
-## Organization of the source code
+### Organization of the source code
 
 After download and unpacking, the WRF source code looks like this:
-```
+```sh
 (base) [serafin@srvx1 WRF-4.4.2]$ ls -lh
 total 236K
 drwxr-xr-x. 2 serafin users 4,0K 19 dic 18.37 arch
@@ -167,7 +144,7 @@ In some cases, editing the model source code is necessary. This mostly happens i
 * `phys`: this contains the source code of parameterization schemes ("model physics").
 * `Registry`: large chunks of the WRF source code are generated automatically at compile time, based on the information contained in a text file called `Registry`. This file specifies for instance what model variables are saved in the output, and how.
 
-## Compiling the model
+### Compiling the model
 
 WRF is written in compiled languages (mostly Fortran and C), so it needs to be compiled before execution. It relies on external software libraries at compilation and runtime, so these libraries have to be available on the system where WRF runs.
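+
+As a quick sanity check of the runtime environment, you can ask the dynamic linker whether all shared libraries the executable needs can be found (a minimal sketch; the path to `wrf.exe` depends on which test case you compiled):
+
+```sh
+# list the shared libraries wrf.exe depends on;
+# any "not found" entry means a required library (module) is missing
+ldd main/wrf.exe | grep "not found" || echo "all shared libraries resolved"
+```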
@@ -175,32 +152,32 @@ In general, compiled WRF versions are already available on all of our servers (S
 However, we describe the typical workflow of the compilation, for anyone who wishes to try it out. There are three steps: (i) make libraries available, (ii) configure, (iii) compile.
 
-### Make the prerequisite libraries available
+#### Make the prerequisite libraries available
 
 In most cases, precompiled libraries can be made available to the operating system using environment modules. Environment modules modify the Linux shell environment so that the operating system knows where to find specific executables, include files, software libraries, and documentation files. Each server has its own set of available modules. As of 1.3.2023, WRF is known to compile and run with the following module collections.
 
 SRVX1:
-```
+```sh
 module load intel-parallel-studio/composer.2020.4-intel-20.0.4 openmpi/3.1.6-intel-20.0.4 netcdf-fortran/4.5.2-intel-20.0.4-MPI3.1.6 eccodes/2.19.1-intel-20.0.4-MPI3.1.6
 ```
 
 JET (GNU Fortran compiler):
-```
+```sh
 module load openmpi/4.0.5-gcc-8.5.0-ryfwodt hdf5/1.10.7-gcc-8.5.0-t247okg parallel-netcdf/1.12.2-gcc-8.5.0-zwftkwr netcdf-c/4.7.4-gcc-8.5.0-o7ahi5o netcdf-fortran/4.5.3-gcc-8.5.0-3bqsedn gcc/8.5.0-gcc-8.5rhel8-7ka2e42
 ```
 
 JET (Intel Fortran compiler):
-```
+```sh
 module load intel-parallel-studio/composer.2020.2-intel-20.0.2-zuot22y zlib/1.2.11-intel-20.0.2-3h374ov openmpi/4.0.5-intel-20.0.2-4wfaaz4 hdf5/1.12.0-intel-20.0.2-ezeotzr parallel-netcdf/1.12.1-intel-20.0.2-sgz3yqs netcdf-c/4.7.4-intel-20.0.2-337uqtc netcdf-fortran/4.5.3-intel-20.0.2-irdm5gq
 ```
 
 JET (alternative setup with Intel Fortran compiler):
-```
+```sh
 module load intel-oneapi-mpi/2021.4.0-intel-2021.4.0-eoone6i hdf5/1.10.7-intel-2021.4.0-n7frjgz parallel-netcdf/1.12.2-intel-2021.4.0-bykumdv netcdf-c/4.7.4-intel-2021.4.0-vvk6sk5 netcdf-fortran/4.5.3-intel-2021.4.0-pii33is intel-oneapi-compilers/2021.4.0-gcc-9.1.0-x5kx6di
 ```
 
 VSC4:
-```
+```sh
 module load pkgconf/1.8.0-intel-2021.5.0-bkuyrr7 intel-oneapi-compilers/2022.1.0-gcc-8.5.0-kiyqwf7 intel-oneapi-mpi/2021.6.0-intel-2021.5.0-wpt4y32 zlib/1.2.12-intel-2021.5.0-pctnhmb hdf5/1.12.2-intel-2021.5.0-loke5pd netcdf-c/4.8.1-intel-2021.5.0-hmrqrz2 netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy
 ```
 
@@ -208,7 +185,7 @@ Load modules with `module load LIST-OF-MODULE-NAMES`, unload them one by one wit
 After loading modules, it is also recommended to set the `NETCDF` environment variable to the root directory of the NetCDF installation. On SRVX1, JET and VSC4, use `module show` to see which directory is correct. For instance:
-```
+```sh
 (skylake) [serafins@l46 TEAMx_real]$ module list
 Currently Loaded Modulefiles:
  1) pkgconf/1.8.0-intel-2021.5.0-bkuyrr7   4) zlib/1.2.12-intel-2021.5.0-pctnhmb    7) netcdf-fortran/4.6.0-intel-2021.5.0-pnaropy
@@ -233,13 +210,13 @@
 NETCDF=/gpfs/opt/sw/spack-0.19.0/opt/spack/linux-almalinux8-skylake/intel-2021.5.0/netcdf-fortran-4.6.0-pnaropyoft7hicu7bfsugqa2aqcsggxj
 ```
 
 On VSC5 do not use `module`, but `spack`:
-```
+```sh
 spack load intel-oneapi-compilers
 spack load netcdf-fortran@4.4.5%intel
 ```
 
 To check the library paths of loaded modules:
-```
+```sh
 (zen3) [serafins@l51 ~]$ spack find --loaded --paths
 ==> In environment zen3
 ...
@@ -263,7 +240,7 @@ NETCDF=/gpfs/opt/sw/spack-0.17.1/opt/spack/linux-almalinux8-zen2/intel-2021.5.0/
 
 Important note: **The environment must be consistent between compilation and runtime. If you compile WRF with a set of modules loaded, you must run it with the same set of modules**.
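+
+One way to enforce this consistency is to save the modules loaded at compile time as a named collection and restore it at runtime. This is a sketch under the assumption that your module system supports collections (Lmod and Environment Modules 4+ do); the collection name `wrf-build` is arbitrary:
+
+```sh
+# right after compiling WRF, snapshot the currently loaded modules
+module save wrf-build
+
+# in the run script or interactive session, restore the identical environment
+module restore wrf-build
+echo $NETCDF   # NETCDF must be set again in new shells; collections store modules, not variables
+```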
-### Configure WRF for compilation
+#### Configure WRF for compilation
 
 This tests the system to check that all libraries can be properly linked. Type `./configure`, pick a generic dmpar INTEL (ifort/icc) configuration (usually 15), answer 1 when asked if you want to compile for nesting, then hit enter. "dmpar" means "distributed-memory parallelization" and enables running WRF in parallel computing mode. For test compilations or for a toy setup, you might also choose a "serial" configuration.
 
@@ -275,7 +252,7 @@ This build of WRF will use NETCDF4 with HDF5 compression
 ```
 
 But the configuration could also end with a message like this:
-```
+```sh
 ************************** W A R N I N G ************************************
 NETCDF4 IO features are requested, but this installation of NetCDF
 /home/swd/spack/opt/spack/linux-rhel8-skylake_avx512/intel-20.0.4/netcdf-fortran-4.5.2-ktet7v73pc74qrx6yc3234zhfo573w23
@@ -298,20 +275,23 @@ This is actually a misleading error message. The problem has nothing to do with
 
 The configure script stores the model configuration in a file called `configure.wrf`. This is specific to the source code version, to the server where the source code is compiled, and to the software environment. If you have a working `configure.wrf` file for a given source code/server/environment combination, back it up.
 
-### Compile WRF
+#### Compile WRF
 
 You always compile WRF for a specific model configuration. The ones we use most commonly are `em_les` (for large-eddy simulation), `em_quarter_ss` (for idealized mesoscale simulations), and `em_real` (for real-case forecasts). So type one of the following, depending on what you want to get:
-```
+
+```sh
 ./compile em_les > compile.log 2>&1 &
 ./compile em_quarter_ss > compile.log 2>&1 &
 ./compile em_real > compile.log 2>&1 &
 ```
+
 The `> compile.log` tells the operating system to redirect the output stream from the terminal to a file called `compile.log`. The `2>&1` tells the operating system to merge the standard and error output streams, so `compile.log` will contain both regular output and error messages. The final `&` tells the operating system to run the job in the background and return to the terminal prompt.
 
 The compiled code will be created in the `run` directory, and some of the compiled programs will be linked in one of the `test/em_les`, `test/em_quarter_ss` or `test/em_real` directories. Executable WRF files typically have names ending with `.exe` (this is just a convention; it is not actually necessary for them to run).
 
 Compilation may take half an hour or so. A successful compilation ends with:
-```
+
+```sh
 ==========================================================================
 build started:   mer 19 ott 2022, 16.17.36, CEST
 build completed: mer 19 ott 2022, 16.51.46, CEST
 
@@ -325,7 +305,7 @@
 ```
 
 If instead you get this:
-```
+```sh
 ==========================================================================
 build started: Thu Feb  2 16:30:55 CET 2023
 build completed: Thu Feb 2 17:07:04 CET 2023
 
@@ -336,27 +316,27 @@ build completed: Thu Feb 2 17:07:04 CET 2023
 ```
 then you have a problem, and there is no unique solution. Take a closer look at `compile.log` and you might be able to diagnose it.
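+
+There is no universal recipe, but grepping the log for the first error is usually the fastest start (a minimal sketch; the exact wording of compiler errors differs between GNU and Intel):
+
+```sh
+# show the first error messages with surrounding context lines;
+# the first hit is usually the root cause, not the last one
+grep -n -i -B 2 -A 4 -E "error|fatal" compile.log | head -n 40
+```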
-## Copying compiled WRF code
+### Copying compiled WRF code
 
-## Running WRF in a software container
+### Running WRF in a software container
 
-## Running an idealized simulation
+### Running an idealized simulation
 
-## Running a real-case simulation
+### Running a real-case simulation
 
-## Suggested workflow
+### Suggested workflow
 
-## Analysing model output
+### Analysing model output
 
 [Python interface to WRF](https://wrf-python.readthedocs.io/en/latest/)
 
-## Important namelist settings
+### Important namelist settings
 
-# Advanced usage
+## Advanced usage
 
-## Changing the source code
+### Changing the source code
 
-## Conditional compilation
+### Conditional compilation
 
 Most Fortran compilers allow passing the source code through a C preprocessor (CPP; sometimes also called the Fortran preprocessor, FPP) to enable conditional compilation. In the C programming language, some directives make it possible to compile portions of the source code selectively.
 
@@ -368,7 +348,7 @@ This means:
 
 For instance, in `dyn_em/module_initialize_ideal.F`, the following bits of code define the model orography for idealized large-eddy simulation runs. Four possibilities are given: `MTN`, `EW_RIDGE`, `NS_RIDGE`, and `NS_VALLEY`. If none is selected at compile time, none of these code lines is compiled and `grid%ht(i,j)` (the model orography) is set to 0:
-```
+```fortran
 #ifdef MTN
   DO j=max(ys,jds),min(ye,jde-1)
   DO i=max(xs,ids),min(xe,ide-1)
@@ -414,7 +394,8 @@ For instance, in `dyn_em/module_initialize_ideal.F`, the following bits of code
 To control conditional compilation:
 1. Search for the variable `ARCHFLAGS` in `configure.wrf`.
 2. Add the desired define statement at the bottom. For instance, to selectively compile the `NS_VALLEY` block above, do the following:
-```
+
+```Makefile
 ARCHFLAGS       =    $(COREDEFS) -DIWORDSIZE=$(IWORDSIZE) -DDWORDSIZE=$(DWORDSIZE) -DRWORDSIZE=$(RWORDSIZE) -DLWORDSIZE=$(LWORDSIZE) \
                      $(ARCH_LOCAL) \
                      $(DA_ARCHFLAGS) \
@@ -425,39 +406,39 @@ ARCHFLAGS       =    $(COREDEFS) -DIWORDSIZE=$(IWORDSIZE) -DDWORDSIZE=$(DWORDSIZ
 ```
 
-## Customizing model output
+### Customizing model output
 
-## Adding namelist variables
+### Adding namelist variables
 
-## Running offline nested simulations
+### Running offline nested simulations
 
-## Running LES with online computation of resolved-fluxes turbulent fluxes
+### Running LES with online computation of resolved turbulent fluxes
 
 WRFlux
 
-# Data assimilation (DA)
+## Data assimilation (DA)
 
-## Observation nudging
+### Observation nudging
 
-## Variational DA
+### Variational DA
 
 WRFDA
 
-## Ensemble DA
+### Ensemble DA
 
 We cover this separately. See DART-WRF.
 
-# Specific tasks
+## Specific tasks
 
-## Before running the model
+### Before running the model
 
-### Defining the vertical grid
+#### Defining the vertical grid
 
-### Customizing model orography
+#### Customizing model orography
 
-### Defining a new geographical database
+#### Defining a new geographical database
 
-### Using ECMWF data as IC/BC
+#### Using ECMWF data as IC/BC
 
 In short: you should link grib1 files and process them with `ungrib.exe` using `Vtable.ECMWF_sigma`.
 
@@ -476,70 +457,90 @@ In detail:
 
 1. Conversion to grib1 (needs the `grib_set` utility from eccodes):
 
-       for i in det.CROSSINN.mlv.20190913.0000.f*.grib2; do j=`basename $i .grib2`; grib_set -s deletePV=1,edition=1 ${i} ${j}; done
+    ```sh title="convert to grib1"
+    for i in det.CROSSINN.mlv.20190913.0000.f*.grib2;
+    do
+      j=`basename $i .grib2`.grib1;
+      grib_set -s deletePV=1,edition=1 ${i} ${j};
+    done
+    ```
 
 2. Concatenation of grib files (two sets of files, `*mlv*` and `*sfc*`, with names ending in "grib1", yield a new set of files with names ending in "grib"; everything is grib1):
 
-       for i in det.CROSSINN.mlv.20190913.0000.f*.grib1; do j=`echo $i|sed 's/.mlv./.sfc./'`; k=`echo $i|sed 's/.mlv././'|sed 's/.grib1/.grib/'`; cat $i $j > $k; done
+    ```sh title="concatenate grib files"
+    for i in det.CROSSINN.mlv.20190913.0000.f*.grib1;
+    do
+      j=`echo $i|sed 's/.mlv./.sfc./'`;
+      k=`echo $i|sed 's/.mlv././'|sed 's/.grib1/.grib/'`;
+      cat $i $j > $k;
+    done
+    ```
 
 3. In the WPS main directory:
 
-       link_grib.csh /data/GRIB_IC_for_LAM/ECMWF/20190913_CROSSINN_IOP8/det.CROSSINN.20190913.0000.f*.grib
-       ln -s ungrib/Variable_Tables/Vtable.ECMWF_sigma Vtable
-       ./ungrib.exe
+    ```sh title="link grib files and convert"
+    link_grib.csh /data/GRIB_IC_for_LAM/ECMWF/20190913_CROSSINN_IOP8/det.CROSSINN.20190913.0000.f*.grib
+    ln -s ungrib/Variable_Tables/Vtable.ECMWF_sigma Vtable
+    ./ungrib.exe
+    ```
 
 An alternative procedure would be to convert everything to grib2 instead of grib1. Then one has to use a Vtable with grib2 information for the surface fields, for instance the one included at the bottom of this section. But: data from the bottom soil level will not be read correctly with this Vtable, because the Level2 value for the bottom level is actually MISSING in grib2 files (at the moment of writing, 6 May 2022; this may be fixed in the future).
 
-    GRIB1| Level| From |  To  | metgrid  | metgrid  | metgrid                                  |GRIB2|GRIB2|GRIB2|GRIB2|
-    Param| Type |Level1|Level2| Name     | Units    | Description                              |Discp|Catgy|Param|Level|
-    -----+------+------+------+----------+----------+------------------------------------------+-----------------------+
-    130 | 109 | * | | TT | K | Temperature | 0 | 0 | 0 | 105 |
-    131 | 109 | * | | UU | m s-1 | U | 0 | 2 | 2 | 105 |
-    132 | 109 | * | | VV | m s-1 | V | 0 | 2 | 3 | 105 |
-    133 | 109 | * | | SPECHUMD | kg kg-1 | Specific humidity | 0 | 1 | 0 | 105 |
-    152 | 109 | * | | LOGSFP | Pa | Log surface pressure | 0 | 3 | 25 | 105 |
-    129 | 109 | * | | SOILGEO | m | Surface geopotential | 0 | 3 | 4 | 1 |
-        | 109 | * | | SOILHGT | m | Terrain field of source analysis | 0 | 3 | 5 | 1 |
-    134 | 109 | 1 | | PSFCH | Pa | | 0 | 3 | 0 | 1 |
-    157 | 109 | * | | RH | % | Relative Humidity | 0 | 1 | 1 | 105 |
-    165 | 1 | 0 | | UU | m s-1 | U | 0 | 2 | 2 | 103 |
-    166 | 1 | 0 | | VV | m s-1 | V | 0 | 2 | 3 | 103 |
-    167 | 1 | 0 | | TT | K | Temperature | 0 | 0 | 0 | 103 |
-    168 | 1 | 0 | | DEWPT | K | | 0 | 0 | 6 | 103 |
-    172 | 1 | 0 | | LANDSEA | 0/1 Flag | Land/Sea flag | 2 | 0 | 0 | 1 |
-    151 | 1 | 0 | | PMSL | Pa | Sea-level Pressure | 0 | 3 | 0 | 101 |
-    235 | 1 | 0 | | SKINTEMP | K | Sea-Surface Temperature | 0 | 0 | 17 | 1 |
-    34 | 1 | 0 | | SST | K | Sea-Surface Temperature | 10 | 3 | 0 | 1 |
-    139 | 112 | 0| 700| ST000007 | K | T of 0-7 cm ground layer | 192 | 128 | 139 | 106 |
-    170 | 112 | 700| 2800| ST007028 | K | T of 7-28 cm ground layer | 192 | 128 | 170 | 106 |
-    183 | 112 | 2800| 10000| ST028100 | K | T of 28-100 cm ground layer | 192 | 128 | 183 | 106 |
-    236 | 112 | 10000| 0| ST100289 | K | T of 100-289 cm ground layer | 192 | 128 | 236 | 106 |
-    39 | 112 | 0| 700| SM000007 | fraction | Soil moisture of 0-7 cm ground layer | 192 | 128 | 39 | 106 |
-    40 | 112 | 700| 2800| SM007028 | fraction | Soil moisture of 7-28 cm ground layer | 192 | 128 | 40 | 106 |
-    41 | 112 | 2800| 10000| SM028100 | fraction | Soil moisture of 28-100 cm ground layer | 192 | 128 | 41 | 106 |
-    42 | 112 | 10000| 0| SM100289 | fraction | Soil moisture of 100-289 cm ground layer | 192 | 128 | 42 | 106 |
-    -----+------+------+------+----------+----------+------------------------------------------+-----------------------+
-
-
-### Spinning up soil fields
-
-## After running the model
-
-### Converting model output to CF-compliant NetCDF
+
+    GRIB1| Level| From |  To  | metgrid  | metgrid  | metgrid                                  |GRIB2|GRIB2|GRIB2|GRIB2|
+    Param| Type |Level1|Level2| Name     | Units    | Description                              |Discp|Catgy|Param|Level|
+    -----+------+------+------+----------+----------+------------------------------------------+-----------------------+
+    130 | 109 | * | | TT | K | Temperature | 0 | 0 | 0 | 105 |
+    131 | 109 | * | | UU | m s-1 | U | 0 | 2 | 2 | 105 |
+    132 | 109 | * | | VV | m s-1 | V | 0 | 2 | 3 | 105 |
+    133 | 109 | * | | SPECHUMD | kg kg-1 | Specific humidity | 0 | 1 | 0 | 105 |
+    152 | 109 | * | | LOGSFP | Pa | Log surface pressure | 0 | 3 | 25 | 105 |
+    129 | 109 | * | | SOILGEO | m | Surface geopotential | 0 | 3 | 4 | 1 |
+        | 109 | * | | SOILHGT | m | Terrain field of source analysis | 0 | 3 | 5 | 1 |
+    134 | 109 | 1 | | PSFCH | Pa | | 0 | 3 | 0 | 1 |
+    157 | 109 | * | | RH | % | Relative Humidity | 0 | 1 | 1 | 105 |
+    165 | 1 | 0 | | UU | m s-1 | U | 0 | 2 | 2 | 103 |
+    166 | 1 | 0 | | VV | m s-1 | V | 0 | 2 | 3 | 103 |
+    167 | 1 | 0 | | TT | K | Temperature | 0 | 0 | 0 | 103 |
+    168 | 1 | 0 | | DEWPT | K | | 0 | 0 | 6 | 103 |
+    172 | 1 | 0 | | LANDSEA | 0/1 Flag | Land/Sea flag | 2 | 0 | 0 | 1 |
+    151 | 1 | 0 | | PMSL | Pa | Sea-level Pressure | 0 | 3 | 0 | 101 |
+    235 | 1 | 0 | | SKINTEMP | K | Sea-Surface Temperature | 0 | 0 | 17 | 1 |
+    34 | 1 | 0 | | SST | K | Sea-Surface Temperature | 10 | 3 | 0 | 1 |
+    139 | 112 | 0| 700| ST000007 | K | T of 0-7 cm ground layer | 192 | 128 | 139 | 106 |
+    170 | 112 | 700| 2800| ST007028 | K | T of 7-28 cm ground layer | 192 | 128 | 170 | 106 |
+    183 | 112 | 2800| 10000| ST028100 | K | T of 28-100 cm ground layer | 192 | 128 | 183 | 106 |
+    236 | 112 | 10000| 0| ST100289 | K | T of 100-289 cm ground layer | 192 | 128 | 236 | 106 |
+    39 | 112 | 0| 700| SM000007 | fraction | Soil moisture of 0-7 cm ground layer | 192 | 128 | 39 | 106 |
+    40 | 112 | 700| 2800| SM007028 | fraction | Soil moisture of 7-28 cm ground layer | 192 | 128 | 40 | 106 |
+    41 | 112 | 2800| 10000| SM028100 | fraction | Soil moisture of 28-100 cm ground layer | 192 | 128 | 41 | 106 |
+    42 | 112 | 10000| 0| SM100289 | fraction | Soil moisture of 100-289 cm ground layer | 192 | 128 | 42 | 106 |
+    -----+------+------+------+----------+----------+------------------------------------------+-----------------------+
+
+
+#### Spinning up soil fields
+
+### After running the model
+
+#### Converting model output to CF-compliant NetCDF
 
 1. To convert WRF output to CF-compliant NetCDF, use `wrfout_to_cf.ncl` (from <https://sundowner.colorado.edu/wrfout_to_cf/overview.html>):
 
-       ncl 'file_in="wrfinput_d01"' 'file_out="wrfpost.nc"' wrfout_to_cf.ncl
+    ```sh
+    ncl 'file_in="wrfinput_d01"' 'file_out="wrfpost.nc"' wrfout_to_cf.ncl
+    ```
 
-### Interpolating model output to a new grid
+#### Interpolating model output to a new grid
 
 1. First convert to CF-compliant NetCDF (see above).
 1. Then use `cdo` to interpolate the CF-compliant WRF output:
 
-       cdo -remapnn,gridfile.lonlat.txt wrfpost.nc wrfpost_interpolated.nc
+    ```sh
+    cdo -remapnn,gridfile.lonlat.txt wrfpost.nc wrfpost_interpolated.nc
+    ```
 
-1. In the code snippet above, -remapnn specifies the interpolation engine, in this case nearest-neighbour. See alternatives here: <https://code.mpimet.mpg.de/projects/cdo/wiki/Tutorial#Horizontal-fields>
+1. In the code snippet above, `-remapnn` specifies the interpolation engine, in this case nearest-neighbour. See alternatives here: <https://code.mpimet.mpg.de/projects/cdo/wiki/Tutorial#Horizontal-fields>
 
 1. The file `gridfile.lonlat.txt` contains the grid specifications, e.g.:
 
@@ -558,11 +559,11 @@ An alternative procedure would be to convert everything to grib2 instead of grib
     yfirst = 43.00
     yinc = 0.01
 
-### Subsetting model output
+#### Subsetting model output
 
-### Further compression of model output (data packing)
+#### Further compression of model output (data packing)
 
-### 3D visualization
+#### 3D visualization
 
-# Useful tools
+## Useful tools
 
diff --git a/mkdocs/stylesheets/extra.css b/mkdocs/stylesheets/extra.css
index f4e7f99..7e78954 100644
--- a/mkdocs/stylesheets/extra.css
+++ b/mkdocs/stylesheets/extra.css
@@ -2,4 +2,8 @@
     --md-primary-fg-color:        #0063a6;
     --md-primary-fg-color--light: #0574bf;
     --md-primary-fg-color--dark:  #004675;
-  }
\ No newline at end of file
+  }
+
+/* Maximum space for the text block */
+.md-grid {
+  max-width: 100%; /* stretch the text block to the full page width; use a fixed length to limit it */
+}
\ No newline at end of file
--
GitLab