diff --git a/WRF.md b/WRF.md
index 4d99c2fbd573a1c7cdf2112613adc7bb9ebe915e..e984402ba381ee807e07deea518597a642898302 100644
--- a/WRF.md
+++ b/WRF.md
@@ -627,52 +627,59 @@ For 3D visualization of WRF output, it is recommended to use either [Paraview](h
 
 * It is recommended to run 3D visualization software on GPUs. Running on a CPU (e.g., own laptop) is possible, but will be extremely slow. CPU is not the only bottleneck, because visualization software uses a lot of computer memory. Rendering 3D fields, in particular, is out of reach for normal laptops with 8GB or 16GB of RAM. Paraview is available on VSC5 and should be available soon on srvx8. Currently, Mayavi must be installed by individual users as a Python package.
 
+**Notes for readers/contributors: (1) The Mayavi workflow has not been tested yet. (2) It would be useful to add example batch scripts for both Paraview and Mayavi; an untested sketch for the latter is included under the Mayavi workflow below.**
+
 ##### Paraview workflow
 
 1. Log in to VSC5 in a terminal window.
 1. On VSC5, convert the WRF output in a format that Paraview can ingest. One option is to use [siso](https://github.com/TheBB/SISO).
-```sh
-siso -f vts ~/serafins/TEAMx_LES/output/100m/wrfout_d01_0001-01-01_00\:00\:00 > siso.out 2>&1 &
-siso -f vts --planar ~/serafins/TEAMx_LES/output/100m/wrfout_d01_0001-01-01_00\:00\:00 > siso_sfc.out 2>&1 &
-```
-The first and second statements handle respectively 3D and 2D WRF output. They process the native output from WRF in netcdf format and return a collection of files in VTS format (the VTK format for structured grids).
+    ```sh
+    siso -f vts ~/serafins/TEAMx_LES/output/100m/wrfout_d01_0001-01-01_00\:00\:00 > siso.out 2>&1 &
+    siso -f vts --planar ~/serafins/TEAMx_LES/output/100m/wrfout_d01_0001-01-01_00\:00\:00 > siso_sfc.out 2>&1 &
+    ```
+    The first and second statements handle 3D and 2D WRF output, respectively. They process the native WRF output in netCDF format and return collections of files in VTS format (the VTK format for structured grids), so there will be two independent datasets (one for 3D and one for 2D output).
 1. In the VSC5 terminal, request access to a GPU node. One of the private IMGW nodes has a GPU, and can be accessed with specific account/partition/quality of service directives.
-```sh
-(zen3) [sserafin4@l50 ~]$ salloc -N 1 --gres=gpu:2 --account=p71386 -p zen3_0512_a100x2 -q p71386_a100dual
-salloc: Pending job allocation 233600
-salloc: job 233600 queued and waiting for resources
-salloc: job 233600 has been allocated resources
-salloc: Granted job allocation 233600
-salloc: Waiting for resource configuration
-salloc: Nodes n3072-006 are ready for job
-```
-
+    ```sh
+    (zen3) [sserafin4@l50 ~]$ salloc -N 1 --gres=gpu:2 --account=p71386 -p zen3_0512_a100x2 -q p71386_a100dual
+    salloc: Pending job allocation 233600
+    salloc: job 233600 queued and waiting for resources
+    salloc: job 233600 has been allocated resources
+    salloc: Granted job allocation 233600
+    salloc: Waiting for resource configuration
+    salloc: Nodes n3072-006 are ready for job
+    ```
 1. Once the GPU node becomes available, open up a new terminal session on your local machine, and set up an ssh tunnel to the GPU node through the login node.
-```sh
-(mypy39) stefano@stefano-XPS-13-9370:~$ ssh -L 11111:n3072-006:11112 sserafin4@vsc5.vsc.ac.at
-```
-This will redirect TCP/IP traffic from port 11111 of your local machine to port 11112 of the VSC5 GPU node, through the VSC5 login node. Port numbers are arbitary, but the remote port (11112) needs to match the Paraview server settings (see below).
+    ```sh
+    (mypy39) stefano@stefano-XPS-13-9370:~$ ssh -L 11111:n3072-006:11112 sserafin4@vsc5.vsc.ac.at
+    ```
+    This will redirect TCP/IP traffic from port 11111 of your local machine to port 11112 of the VSC5 GPU node, through the VSC5 login node. Port numbers are arbitrary, but the remote port (11112) needs to match the Paraview server settings (see below).
 1. In the VSC5 terminal, log in to the GPU node:
-```sh
-(zen3) [sserafin4@l50 ~]$ ssh n3072-006
-Warning: Permanently added 'n3072-006,10.191.72.6' (ECDSA) to the list of known hosts.
-sserafin4@n3072-006's password: 
+    ```sh
+    (zen3) [sserafin4@l50 ~]$ ssh n3072-006
+    Warning: Permanently added 'n3072-006,10.191.72.6' (ECDSA) to the list of known hosts.
+    sserafin4@n3072-006's password: 
 
-(zen3) [sserafin4@n3072-006 ~]$
-```
+    (zen3) [sserafin4@n3072-006 ~]$
+    ```
 1. In the VSC5 terminal on the GPU node, load the Paraview module and start the Paraview server:
-(zen3) [sserafin4@n3072-006 ~]$ module load paraview
-(zen3) [sserafin4@n3072-006 ~]$ pvserver --force-offscreen-rendering --server-port=11112
-Waiting for client...
-Connection URL: cs://n3072-006:11112
-Accepting connection(s): n3072-006:11112
+    ```sh
+    (zen3) [sserafin4@n3072-006 ~]$ module load paraview
+    (zen3) [sserafin4@n3072-006 ~]$ pvserver --force-offscreen-rendering --server-port=11112
+    Waiting for client...
+    Connection URL: cs://n3072-006:11112
+    Accepting connection(s): n3072-006:11112
+    ```
 1. On your local machine, open the Paraview client (graphical user interface, GUI). Then select File > Connect and enter the url of the Paraview server (`localhost:11111`). Select the datasets you want to display and work on them in the GUI. Save the Paraview state to avoid repeating work at the next session. Paraview has extensive [documentation](https://docs.paraview.org/en/latest/UsersGuide/index.html) and [tutorials](https://www.paraview.org/tutorials/).
 
+##### Mayavi workflow
+
+Not tested yet.
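+
+As a starting point for note (2) above, the untested sketch below shows what a Mayavi batch job could look like. The Slurm directives are copied from the Paraview workflow; the environment activation path, the plotted variable (`W`, vertical velocity), the contour values and the output file name are illustrative assumptions, not a tested recipe. Depending on how VTK was built, offscreen rendering may additionally need a virtual framebuffer (e.g., `xvfb-run`).
+
+```sh
+#!/bin/bash
+#SBATCH --nodes=1
+#SBATCH --gres=gpu:2
+#SBATCH --account=p71386
+#SBATCH --partition=zen3_0512_a100x2
+#SBATCH --qos=p71386_a100dual
+
+# Assumption: Mayavi and netCDF4 are installed in a user Python environment.
+source ~/venvs/mayavi/bin/activate
+
+python3 <<'EOF'
+from mayavi import mlab
+from netCDF4 import Dataset
+import numpy as np
+
+mlab.options.offscreen = True  # render without an interactive display
+
+# Assumption: same WRF output file as in the siso example above.
+nc = Dataset('wrfout_d01_0001-01-01_00:00:00')
+w = np.asarray(nc.variables['W'][0])  # vertical velocity, first output time
+
+# Draw -1 and +1 m/s isosurfaces of vertical velocity and save a snapshot.
+mlab.contour3d(w, contours=[-1.0, 1.0])
+mlab.savefig('w_isosurfaces.png')
+EOF
+```
+
+A batch script for the Paraview server could follow the same pattern, replacing the Python part with the `module load paraview` and `pvserver` commands from the workflow above.
+
 ## Useful tools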