* `test`: contains several subdirectories, each of which refers to a specific compilation mode. For instance, compiling WRF for large-eddy simulation will link some executables in `em_les`, while compiling WRF for real-case simulations will link other executables and lookup tables in `em_real`. Most of the test subdirectories refer to simple idealized simulations, some of which are two-dimensional. These test cases are used to validate the model's dynamical core (e.g., to check whether it correctly reproduces analytical solutions of the Euler or Navier-Stokes equations).
In most cases, precompiled libraries can be made available to the operating system using environment modules. Environment modules modify the Linux shell environment so that the operating system knows where to find specific executables, include files, software libraries, and documentation files. Each server has its own set of available modules. As of 1 March 2023, WRF is known to compile and run with the following module collections.
Load modules with `module load LIST-OF-MODULE-NAMES`; unload them one by one with `module unload LIST-OF-MODULE-NAMES`; unload all of them at once with `module purge`; get information about a specific module with `module show MODULE_NAME`. Modules may depend on each other. If the system is set up properly, a request to load one module will automatically load any prerequisite ones.
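For instance, a typical session might look as follows (the module names here are placeholders; run `module avail` to see which ones are actually installed on your server):

```bash
module avail                       # list all modules available on this server
module load intel netcdf-fortran   # hypothetical module names; adapt to your system
module show netcdf-fortran         # inspect what the module sets in the environment
module list                        # show currently loaded modules
module purge                       # unload everything again
```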
This step tests the system to check that all libraries can be properly linked. Type `./configure`, pick a generic dmpar INTEL (ifort/icc) configuration (usually option 15), answer 1 when asked whether you want to compile for nesting, then hit enter. "dmpar" stands for "distributed-memory parallelization" and enables running WRF in parallel computing mode. For test compilations or for a toy setup, you might instead choose a "serial" configuration.
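A minimal sketch of the interaction (the prompt wording is approximate, and option numbers vary between WRF versions, so check the list printed by the script):

```bash
cd WRF          # top-level source directory
./configure     # prints a numbered list of compiler/parallelization options
# Enter selection: 15        <- generic INTEL (ifort/icc) dmpar
# Compile for nesting? 1     <- basic nesting
```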
The configure script stores the model configuration in a file called `configure.wrf`. This file is specific to the source code version, to the server where the source code is compiled, and to the software environment. If you have a working `configure.wrf` file for a given source code/server/environment combination, back it up.
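For example (the backup file name is arbitrary; a name encoding version, server, and environment is easiest to find again):

```bash
cp configure.wrf configure.wrf.v4.4_srvx8_intel   # hypothetical descriptive name
```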
The first file, `configure.wrf`, is the result of the (incorrect) automatic configuration. The second file, `configure.wrf.dmpar`, is the manually fixed one. In the latter, additional library link directives (`-lnetcdf` and `-lhdf5`) are added to the variable `LIB_EXTERNAL`, and the full paths to these extra libraries are added to the variable `DEP_LIB_PATH`.
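Schematically, the relevant lines in the fixed file look like this (the `...` stands for the directives already generated by configure, and the library paths are placeholders; use the paths valid on your system, e.g., as reported by `module show`):

```make
# excerpt from configure.wrf.dmpar (paths are illustrative)
LIB_EXTERNAL    = ... -lnetcdf -lhdf5
DEP_LIB_PATH    = -L/path/to/netcdf/lib -L/path/to/hdf5/lib
```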
The `> compile.log` tells the shell to redirect the standard output stream from the terminal to a file called `compile.log`. The `2>&1` redirects the error stream to the same destination, so `compile.log` will contain both regular output and error messages. The final `&` runs the job in the background and immediately returns the terminal prompt.
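Put together, a typical compilation command looks like this (`em_real` is one example target; use the test case you configured for):

```bash
./compile em_real > compile.log 2>&1 &   # compile in the background
tail -f compile.log                      # follow the compilation output live
```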
The compiled code will be created in the `run` directory, and some of the compiled programs will be linked in one of the `test/em_les`, `test/em_quarter_ss`, or `test/em_real` directories. Executable WRF files typically have names ending in `.exe` (this is just a convention; the suffix is not required for them to run).
Most Fortran compilers allow passing the source code through the C preprocessor (CPP; sometimes also called the Fortran preprocessor, FPP) to allow for conditional compilation. These preprocessor directives, borrowed from the C programming language, make it possible to compile portions of the source code selectively.
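A minimal sketch of the mechanism (the macro name `MY_OPTION` is hypothetical): the block between `#ifdef` and `#endif` is compiled only if the macro is defined, e.g., by adding `-DMY_OPTION` to the compiler flags.

```fortran
#ifdef MY_OPTION
! this line exists in the compiled program only if MY_OPTION is defined
print *, 'MY_OPTION was enabled at compile time'
#endif
```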
For instance, in `dyn_em/module_initialize_ideal.F`, the following bits of code define the model orography for idealized large-eddy simulation runs. Four possibilities are given: `MTN`, `EW_RIDGE`, `NS_RIDGE`, and `NS_VALLEY`. To select one, comment out the corresponding `#ifdef` and `#endif` lines by prepending `!`, so that the enclosed code is always compiled. If none is selected at compile time, none of these code lines is compiled and `grid%ht(i,j)` (the model orography) is set to 0:
An alternative procedure would be to convert everything to grib2 instead of grib1. One then has to use a Vtable with grib2 information for the surface fields, for instance the one included at the bottom of this page. However, data from the bottom soil level will not be read correctly with this Vtable, because the Level2 value for the bottom level is actually MISSING in grib2 files (as of this writing, 6 May 2022; this may be fixed in the future).
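One possible way to perform the conversion, assuming the ecCodes command-line tools are available (file names are placeholders):

```bash
# rewrite a GRIB1 file with GRIB2 encoding using ecCodes
grib_set -s edition=2 input.grib1 output.grib2
```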
* It is recommended to run 3D visualization software on GPUs. Running on a CPU (e.g., your own laptop) is possible, but will be extremely slow. The CPU is not the only bottleneck: visualization software also uses a lot of memory, and rendering 3D fields, in particular, is out of reach for normal laptops with 8 or 16 GB of RAM. Paraview is available on VSC5 and should be available soon on srvx8. Currently, Mayavi must be installed by individual users as a Python package.
1. On your local machine, open the Paraview client (graphical user interface, GUI). Then select File > Connect and enter the URL of the Paraview server (`localhost:11111`; see the sketch below for one way this connection might be set up). Select the datasets you want to display and work on them in the GUI. Save the Paraview state to avoid repeating work at the next session. Paraview has extensive [documentation](https://docs.paraview.org/en/latest/UsersGuide/index.html), tutorials ([one](https://docs.paraview.org/en/latest/Tutorials/SelfDirectedTutorial/index.html), [two](https://public.kitware.com/Wiki/The_ParaView_Tutorial) and [three](https://public.kitware.com/Wiki/images/b/bc/ParaViewTutorial56.pdf)) and a [wiki](https://public.kitware.com/Wiki/ParaView).
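For reference, one common way to make a remote Paraview server reachable as `localhost:11111` is an SSH tunnel (user and host names are placeholders):

```bash
# on the remote server: start the Paraview server component
pvserver --server-port=11111

# on the local machine: forward local port 11111 to the server
ssh -L 11111:localhost:11111 user@remote.server
```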
Whether done with Paraview or with Mayavi, the visualization will result in a collection of png files, e.g., `InnValley.%04d.png`. There are several tools to convert individual frames into movies, among them `ffmpeg` and `apngasm`. At the moment, neither of them is available on IMGW servers (precompiled binaries are available through `apt-get` for Ubuntu).
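For instance, a typical `ffmpeg` invocation to assemble the frames above into an MP4 movie might look like this (frame rate and codec settings are just reasonable defaults):

```bash
# 24 frames per second, H.264 encoding, pixel format compatible with most players
ffmpeg -framerate 24 -i InnValley.%04d.png -c:v libx264 -pix_fmt yuv420p InnValley.mp4
```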