Commit 5a4604a6 authored by lkugler
parent 56b33558
@@ -35,6 +35,10 @@ Other helpful resources
   tutorial2
   tutorial3
   :caption: Tutorials:
   api
   modules
API
===
%% Cell type:markdown id:fd5c3005-f237-4495-9185-2d4d474cafd5 tags:
# Tutorial 1: The assimilation step
DART-WRF is a Python package which automates many tasks such as configuration, saving configuration and output, and handling computing resources.
The data for this experiment is accessible to students on the server srvx1.
%% Cell type:markdown id:93d59d4d-c514-414e-81fa-4ff390290811 tags:
### Configuring the experiment
First, you need to configure the experiment in `config/cfg.py`.
Let's go through the most important settings:
- `expname` should be a unique identifier; it is also used as the folder name
- `model_dx` is the model resolution in meters
- `n_ens` is the ensemble size
- `update_vars` are the WRF variables which shall be updated by the assimilation
- `filter_kind` is 1 for the EAKF (see the DART documentation for more)
- `prior_inflation` and `post_inflation` define which inflation scheme we want (see the DART docs)
- `sec` enables the statistical sampling error correction from Anderson (2012)
```python
exp = utils.Experiment()
exp.expname = "test_newcode"
exp.model_dx = 2000
exp.n_ens = 10
exp.update_vars = ['U', 'V', 'W', 'THM', 'PH', 'MU', 'QVAPOR', 'QCLOUD', 'QICE', 'PSFC']
exp.filter_kind = 1
exp.prior_inflation = 0
exp.post_inflation = 4
exp.sec = True
```
In case you want to generate new observations, as for an observing system simulation experiment (OSSE), set
```python
exp.use_existing_obsseq = False
```
`exp.nature` defines which WRF files will be used to draw observations from, e.g.:
```python
exp.nature = '/users/students/lehre/advDA_s2023/data/sample_nature/'
```
`exp.input_profile` is used if you create initial conditions from a so-called wrf_profile (see the WRF guide).
```python
exp.input_profile = '/doesnt_exist/initial_profiles/wrf/ens/raso.fc.<iens>.wrfprof'
```
For a horizontal localization half-width of 20 km and a vertical one of 3 km, set
```python
exp.cov_loc_vert_km_horiz_km = (3, 20)
```
You can also set it to False for no vertical localization.
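As a minimal sketch (assuming that "it" refers to the vertical entry of the tuple; confirm against the DART-WRF documentation), that would look like:
```python
# Sketch only: assuming False in the vertical slot disables vertical localization
exp.cov_loc_vert_km_horiz_km = (False, 20)
```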
#### Single observation
Set your desired observations like this:
```python
t = dict(plotname='Temperature', plotunits='[K]',
         kind='RADIOSONDE_TEMPERATURE',
         n_obs=1,                    # number of observations
         obs_locations=[(45., 0.)],  # location of observations
         error_generate=0.2,         # observation error used to generate observations
         error_assimilate=0.2,       # observation error used for assimilation
         heights=[1000,],            # for radiosondes, use range(1000, 17001, 2000)
         cov_loc_radius_km=50)       # horizontal localization half-width

exp.observations = [t,]  # select observations for assimilation
```
#### Multiple observations
To generate a grid of observations, use
```python
vis = dict(plotname='VIS 0.6µm', plotunits='[1]',
           kind='MSG_4_SEVIRI_BDRF', sat_channel=1,
           n_obs=961, obs_locations='square_array_evenly_on_grid',
           error_generate=0.03, error_assimilate=0.03,
           cov_loc_radius_km=20)
```
Caution: `n_obs` should only take one of the following values:
- 22500 for 2 km observation density/resolution
- 5776 for 4 km
- 961 for 10 km
- 256 for 20 km
- 121 for 30 km
For vertically resolved data, like radar, `n_obs` is the number of observations at each height level.
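For illustration, a hedged sketch of such a vertically resolved observation dict, following the same pattern as the examples above (the `kind` string, error values and height range here are assumptions, not taken from this tutorial):
```python
# Hypothetical radar-like observation type; adapt 'kind', errors and heights
# to your case. n_obs counts observations per height level.
radar = dict(plotname='Radar reflectivity', plotunits='[dBZ]',
             kind='RADAR_REFLECTIVITY',
             n_obs=961, obs_locations='square_array_evenly_on_grid',
             error_generate=2.5, error_assimilate=2.5,
             heights=range(2000, 14001, 2000),
             cov_loc_radius_km=20)

exp.observations = [radar,]
```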
%% Cell type:markdown id:16bd3521-f98f-4c4f-8019-31029fd678ae tags:
### Configuring the hardware
In case you use a cluster which is not supported, configure paths inside `config/clusters.py`.
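As an illustration only, such a cluster definition mainly collects paths for running WRF and DART; the attribute names below are hypothetical, not the actual DART-WRF API, so copy an existing entry (e.g. the srvx1 one) and adapt its paths rather than starting from this sketch.
```python
# Hypothetical sketch of what a cluster entry in config/clusters.py typically holds.
# Attribute names are illustrative; consult the existing definitions for the real fields.
from types import SimpleNamespace

mycluster = SimpleNamespace(
    name='mycluster',
    python='/usr/bin/python3',                    # interpreter used to run DART-WRF scripts
    wrf_rundir_base='/scratch/<user>/WRF_runs/',  # where WRF ensemble members run
    dart_rundir='/scratch/<user>/DART_runs/',     # where the DART filter runs
    archivedir='/scratch/<user>/archive/',        # where output is archived
)
```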
### Assimilate observations
We start by importing some modules:
```python
import datetime as dt
from dartwrf.workflows import WorkFlows
```
To assimilate observations at a given `dt.datetime`, we set the directory path and the times of the prior ensemble forecasts:
```python
prior_path_exp = '/users/students/lehre/advDA_s2023/data/sample_ensemble/'
prior_init_time = dt.datetime(2008,7,30,12)
prior_valid_time = dt.datetime(2008,7,30,12,30)
assim_time = prior_valid_time
```
Finally, we run the data assimilation by calling
```python
w = WorkFlows(exp_config='cfg.py', server_config='srvx1.py')
w.assimilate(assim_time, prior_init_time, prior_valid_time, prior_path_exp)
```
Congratulations! You're done!
%% Cell type:code id:82e809a8-5972-47f3-ad78-6290afe4ae17 tags:
``` python
```