diff --git a/docs/source/tutorial1.ipynb b/docs/source/tutorial1.ipynb
index b9bced7e2d0b1d7b79120ef36aa889be1f3069d3..a23a5fb3f6a83b128382998d86edb7f08f25e34f 100644
--- a/docs/source/tutorial1.ipynb
+++ b/docs/source/tutorial1.ipynb
@@ -1,146 +1,29 @@
 {
  "cells": [
   {
-   "cell_type": "markdown",
-   "id": "fd5c3005-f237-4495-9185-2d4d474cafd5",
-   "metadata": {},
-   "source": [
-    "\n",
-    "# Tutorial 1: The assimilation step\n",
-    "DART-WRF is a python package which automates many things like configuration, saving configuration and output, handling computing resources, etc.\n",
-    "\n",
-    "The data for this experiment is accessible for students on the server srvx1.\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "93d59d4d-c514-414e-81fa-4ff390290811",
-   "metadata": {},
-   "source": [
-    "### Configuring the experiment\n",
-    "Firstly, you need to configure the experiment in `config/cfg.py`.\n",
-    "\n",
-    "Let's go through the most important settings:\n",
-    "\n",
-    "- expname should be a unique identifier and will be used as folder name\n",
-    "- model_dx is the model resolution in meters\n",
-    "- n_ens is the ensemble size\n",
-    "- update_vars are the WRF variables which shall be updated by the assimilation\n",
-    "- filter_kind is 1 for the EAKF (see the DART documentation for more)\n",
-    "- prior and post_inflation defines what inflation we want (see the DART docs)\n",
-    "- sec is the statistical sampling error correction from Anderson (2012)\n",
-    "\n",
-    "```python\n",
-    "exp = utils.Experiment()\n",
-    "exp.expname = \"test_newcode\"\n",
-    "exp.model_dx = 2000\n",
-    "exp.n_ens = 10\n",
-    "exp.update_vars = ['U', 'V', 'W', 'THM', 'PH', 'MU', 'QVAPOR', 'QCLOUD', 'QICE', 'PSFC']\n",
-    "exp.filter_kind = 1\n",
-    "exp.prior_inflation = 0\n",
-    "exp.post_inflation = 4\n",
-    "exp.sec = True\n",
-    "\n",
-    "```\n",
-    "In case you want to generate new observations like for an observing system simulations experiment, OSSE), set \n",
-    "```python\n",
-    "exp.use_existing_obsseq = False`.\n",
-    "```\n",
-    "\n",
-    "`exp.nature` defines which WRF files will be used to draw observations from, e.g.: \n",
-    "```python\n",
-    "exp.nature = '/users/students/lehre/advDA_s2023/data/sample_nature/'\n",
-    "```\n",
-    "\n",
-    "`exp.input_profile` is used, if you create initial conditions from a so called wrf_profile (see WRF guide).\n",
-    "```python\n",
-    "exp.input_profile = '/doesnt_exist/initial_profiles/wrf/ens/raso.fc.<iens>.wrfprof'\n",
-    "```\n",
-    "\n",
-    "\n",
-    "For horizontal localization half-width of 20 km and 3 km vertically, set\n",
-    "```python\n",
-    "exp.cov_loc_vert_km_horiz_km = (3, 20)\n",
-    "```\n",
-    "You can also set it to False for no vertical localization.\n",
-    "\n",
-    "#### Single observation\n",
-    "Set your desired observations like this. \n",
-    "```python\n",
-    "t = dict(plotname='Temperature', plotunits='[K]',\n",
-    "         kind='RADIOSONDE_TEMPERATURE', \n",
-    "         n_obs=1, # number of observations\n",
-    "         obs_locations=[(45., 0.)], # location of observations\n",
-    "         error_generate=0.2, # observation error used to generate observations\n",
-    "         error_assimilate=0.2, # observation error used for assimilation\n",
-    "         heights=[1000,], # for radiosondes, use range(1000, 17001, 2000)\n",
-    "         cov_loc_radius_km=50) # horizontal localization half-width\n",
-    "\n",
-    "exp.observations = [t,] # select observations for assimilation\n",
-    "```\n",
-    "\n",
-    "#### Multiple observations\n",
-    "To generate a grid of observations, use\n",
-    "```python\n",
-    "vis = dict(plotname='VIS 0.6µm', plotunits='[1]',\n",
-    "           kind='MSG_4_SEVIRI_BDRF', sat_channel=1, \n",
-    "           n_obs=961, obs_locations='square_array_evenly_on_grid',\n",
-    "           error_generate=0.03, error_assimilate=0.03,\n",
-    "           cov_loc_radius_km=20)\n",
-    "```\n",
-    "\n",
-    "But caution, n_obs should only be one of the following:\n",
-    "\n",
-    "- 22500 for 2km observation density/resolution \n",
-    "- 5776 for 4km; \n",
-    "- 961 for 10km; \n",
-    "- 256 for 20km; \n",
-    "- 121 for 30km\n",
-    "\n",
-    "For vertically resolved data, like radar, n_obs is the number of observations at each observation height level."
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "16bd3521-f98f-4c4f-8019-31029fd678ae",
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "59276951-d31b-4c5d-bca3-fac762e663cc",
    "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "['/mnt/users/staff/lkugler/AdvDA23/DART-WRF/docs/source', '/home/swd/manual/nwp/2023.1/lib/python310.zip', '/home/swd/manual/nwp/2023.1/lib/python3.10', '/home/swd/manual/nwp/2023.1/lib/python3.10/lib-dynload', '', '/users/staff/lkugler/.local/lib/python3.10/site-packages', '/home/swd/manual/nwp/2023.1/lib/python3.10/site-packages']\n"
+     ]
+    }
+   ],
    "source": [
-    "### Configuring the hardware\n",
-    "In case you use a cluster which is not supported, configure paths inside `config/clusters.py`.\n",
-    "\n",
-    "\n",
-    "\n",
-    "\n",
-    "### Assimilate observations\n",
-    "We start by importing some modules:\n",
-    "```python\n",
-    "import datetime as dt\n",
-    "from dartwrf.workflows import WorkFlows\n",
-    "```\n",
-    "\n",
-    "To assimilate observations at dt.datetime `time` we set the directory paths and times of the prior ensemble forecasts:\n",
-    "\n",
-    "```python\n",
-    "prior_path_exp = '/users/students/lehre/advDA_s2023/data/sample_ensemble/'\n",
-    "prior_init_time = dt.datetime(2008,7,30,12)\n",
-    "prior_valid_time = dt.datetime(2008,7,30,12,30)\n",
-    "assim_time = prior_valid_time\n",
-    "```\n",
-    "\n",
-    "Finally, we run the data assimilation by calling\n",
-    "```python\n",
-    "w = WorkFlows(exp_config='cfg.py', server_config='srvx1.py')\n",
-    "\n",
-    "w.assimilate(assim_time, prior_init_time, prior_valid_time, prior_path_exp)\n",
-    "```\n",
-    "\n",
-    "Congratulations! You're done!"
+    "# Test\n",
+    "import os, sys\n",
+    "print(sys.path)"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "82e809a8-5972-47f3-ad78-6290afe4ae17",
+   "id": "feeeb389-8ec6-4186-8abd-d10a0c715e63",
    "metadata": {},
    "outputs": [],
    "source": []
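Note on this hunk: the removed markdown cells walked through one complete assimilation step (configure `cfg.py`, point to the prior ensemble, call `WorkFlows.assimilate`). For reference while reviewing, here is a minimal sketch that strings those steps together, using only the paths, times, and calls quoted in the removed cells above; it assumes the `config/cfg.py` and `config/srvx1.py` files described there exist and that the `dartwrf` package is installed.

```python
# Minimal sketch of the assimilation step documented in the removed cells.
# Assumes config/cfg.py and config/srvx1.py are set up as described above.
import datetime as dt
from dartwrf.workflows import WorkFlows

# Prior ensemble forecasts: directory, initialization time, and valid time
prior_path_exp = '/users/students/lehre/advDA_s2023/data/sample_ensemble/'
prior_init_time = dt.datetime(2008, 7, 30, 12)
prior_valid_time = dt.datetime(2008, 7, 30, 12, 30)
assim_time = prior_valid_time  # assimilate observations at the prior's valid time

# Load experiment and server configuration, then run the assimilation step
w = WorkFlows(exp_config='cfg.py', server_config='srvx1.py')
w.assimilate(assim_time, prior_init_time, prior_valid_time, prior_path_exp)
```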