%% Cell type:markdown id:fd5c3005-f237-4495-9185-2d4d474cafd5 tags:
# Tutorial 2: Cycled experiment
**Goal**: To run a cycled data assimilation experiment.
[`cycled_exp.py`](https://github.com/lkugler/DART-WRF/blob/master/cycled_exp.py) contains an example which is explained here:
#### Configure your experiment
See tutorial 1. We start again by configuring `config/cfg.py`.
Then we write a script (or edit an existing one) in the main directory `DART-WRF/`, e.g. with `nano new_experiment.py`.
---
Any script needs to import some modules:
```python
import os, sys, shutil
import datetime as dt

from dartwrf import utils
from config.cfg import exp
from config.clusters import cluster
```
---
Now there are two options:
- To start a forecast from an existing forecast, i.e. from WRF restart files
- To start a forecast from defined thermodynamic profiles, i.e. from a `wrf_profile` (see the sketch after this list)
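There is no separate section for the second option below, so here is a minimal sketch. It assumes that the thermodynamic profiles are configured in `config/cfg.py` and uses only workflow functions that appear in this tutorial (`prepare_WRFrundir`, `run_ideal`, `run_ENS`); exact signatures may differ in your DART-WRF version:
```python
# Minimal sketch (assumption: the wrf_profile is set in config/cfg.py)
begin = dt.datetime(2008, 7, 30, 6)
end = dt.datetime(2008, 7, 30, 12)

prepare_WRFrundir(begin)         # create the WRF run directories
id = run_ideal(depends_on=None)  # build initial conditions from the
                                 # configured thermodynamic profiles

run_ENS(begin=begin,                   # start integration from here
        end=end,                       # integrate until here
        output_restart_interval=9999,  # do not write WRF restart files
        depends_on=id)
```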
#### Run a forecast from initial conditions of a previous forecast
Let's say you want to run a forecast starting at 9 UTC until 12 UTC.
Initial conditions shall be taken from a previous experiment in `/user/test/data/sim_archive/exp_abc`, which was initialized at 6 UTC and has WRF restart files for 9 UTC.
Then the required code is
```python
prior_path_exp = '/user/test/data/sim_archive/exp_abc'
prior_init_time = dt.datetime(2008, 7, 30, 6)
prior_valid_time = dt.datetime(2008, 7, 30, 9)

cluster.setup()

begin = dt.datetime(2008, 7, 30, 9)
end = dt.datetime(2008, 7, 30, 12)

prepare_WRFrundir(begin)
prepare_IC_from_prior(prior_path_exp, prior_init_time, prior_valid_time)

run_ENS(begin=begin,                   # start integration from here
        end=end,                       # integrate until here
        output_restart_interval=9999,  # do not write WRF restart files
        )
```
Note that `begin` and `end` are `dt.datetime` objects, and that we use predefined workflow functions like `run_ENS`.
#### Assimilate observations with a prior given by previous forecasts
To assimilate observations at `dt.datetime` `time`, use this command:
```python
prior_path_exp = '/user/test/data/sim_archive/exp_abc'
prior_init_time = dt.datetime(2008, 7, 30, 6)
prior_valid_time = dt.datetime(2008, 7, 30, 9)
time = dt.datetime(2008, 7, 30, 9)  # time of assimilation

assimilate(time, prior_init_time, prior_valid_time, prior_path_exp)
```
#### Update initial conditions from Data Assimilation
In order to continue after assimilation, you need the posterior = prior (1) + increments (2):
1. Set the prior with this function:
`id = prepare_IC_from_prior(prior_path_exp, prior_init_time, prior_valid_time, depends_on=id)`
where the path is a `str` and the times are `dt.datetime` objects.
2. To update the model state with the assimilation increments, update the WRF restart files by running
`id = update_IC_from_DA(time, depends_on=id)`
After this, the wrfrst files are updated with the assimilation increments (filter_restart) and copied to the WRF run directories, so you can continue to run the ensemble after assimilation using the function `run_ENS()`.
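Putting the assimilation and both update steps together, a continuation after assimilation could look like the following sketch (reusing the variables from the assimilation example above; the three-hour forecast length is only an illustrative choice):
```python
# Sketch: assimilate at 9 UTC, update the restart files, continue the ensemble
id = assimilate(time, prior_init_time, prior_valid_time, prior_path_exp)

# 1) posterior = prior
id = prepare_IC_from_prior(prior_path_exp, prior_init_time, prior_valid_time, depends_on=id)

# 2) posterior += assimilation increments (updates the wrfrst files)
id = update_IC_from_DA(time, depends_on=id)

# continue the forecast from the updated restart files
run_ENS(begin=time,
        end=time + dt.timedelta(hours=3),  # illustrative forecast length
        depends_on=id)
```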
%% Cell type:code id:400244f1-098b-46ea-b29d-2226c7cbc827 tags:
``` python
```
%% Cell type:markdown id:fd5c3005-f237-4495-9185-2d4d474cafd5 tags:
# Tutorial 3: Cycled experiment
**Goal**: To run a cycled data assimilation experiment.
[`cycled_exp.py`](https://github.com/lkugler/DART-WRF/blob/master/cycled_exp.py) contains an example which is explained here:
After configuring your experiment, the loop looks like this:
```python
prior_path_exp = '/jetfs/home/lkugler/data/sim_archive/exp_v1.19_P2_noDA'

init_time = dt.datetime(2008, 7, 30, 13)
time = dt.datetime(2008, 7, 30, 14)             # first assimilation time
last_assim_time = dt.datetime(2008, 7, 30, 14)
forecast_until = dt.datetime(2008, 7, 30, 14, 15)
timedelta_btw_assim = dt.timedelta(minutes=15)  # interval between assimilations; 15 min assumed here

prepare_WRFrundir(init_time)
id = run_ideal(depends_on=None)

prior_init_time = init_time
prior_valid_time = time

while time <= last_assim_time:
    # usually we take the prior from the current time,
    # but one could use a prior from a different time of another run,
    # i.e. 13z as a prior to assimilate 12z observations
    prior_valid_time = time
    id = assimilate(time, prior_init_time, prior_valid_time, prior_path_exp, depends_on=id)

    # 1) Set posterior = prior
    id = prepare_IC_from_prior(prior_path_exp, prior_init_time, prior_valid_time, depends_on=id)

    # 2) Update posterior += updates from assimilation
    id = update_IC_from_DA(time, depends_on=id)

    # How long shall we integrate?
    timedelta_integrate = timedelta_btw_assim
    output_restart_interval = timedelta_btw_assim.total_seconds() / 60
    if time == last_assim_time:
        # run a longer forecast after the last assimilation
        timedelta_integrate = forecast_until - last_assim_time
        output_restart_interval = 9999  # no restart file after the last assimilation

    # 3) Run WRF ensemble
    id = run_ENS(begin=time,                      # start integration from here
                 end=time + timedelta_integrate,  # integrate until here
                 output_restart_interval=output_restart_interval,
                 depends_on=id)

    # as we now have WRF output, we can use our own exp path as prior
    prior_path_exp = cluster.archivedir

    id_sat = create_satimages(time, depends_on=id)

    # increment time
    time += timedelta_btw_assim

    # update time variables
    prior_init_time = time - timedelta_btw_assim

verify_sat(id_sat)
verify_wrf(id)
verify_fast(id)
```
#### Job scheduling status
The script submits the jobs into the SLURM queue with dependencies, so that SLURM starts each job as soon as its dependencies are fulfilled and resources are available. Most jobs need only a few cores, but model integration runs across many nodes:
```bash
$ squeue -u `whoami` --sort=i
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
1710274 mem_0384 prepwrfr lkugler PD 0:00 1 (Priority)
1710275 mem_0384 IC-prior lkugler PD 0:00 1 (Dependency)
1710276 mem_0384 Assim-42 lkugler PD 0:00 1 (Dependency)
1710277 mem_0384 IC-prior lkugler PD 0:00 1 (Dependency)
1710278 mem_0384 IC-updat lkugler PD 0:00 1 (Dependency)
1710279 mem_0384 preWRF2- lkugler PD 0:00 1 (Dependency)
1710280_[1-10] mem_0384 runWRF2- lkugler PD 0:00 1 (Dependency)
1710281 mem_0384 pRTTOV-6 lkugler PD 0:00 1 (Dependency)
1710282 mem_0384 Assim-3a lkugler PD 0:00 1 (Dependency)
1710283 mem_0384 IC-prior lkugler PD 0:00 1 (Dependency)
1710284 mem_0384 IC-updat lkugler PD 0:00 1 (Dependency)
1710285 mem_0384 preWRF2- lkugler PD 0:00 1 (Dependency)
1710286_[1-10] mem_0384 runWRF2- lkugler PD 0:00 1 (Dependency)
1710287 mem_0384 pRTTOV-7 lkugler PD 0:00 1 (Dependency)
```
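The `(Dependency)` entries above are ordinary SLURM job dependencies. DART-WRF manages the submission internally; conceptually, chaining jobs with `depends_on` amounts to something like this hypothetical sketch (`submit`, `prepare_ic.sh`, and `run_wrf.sh` are placeholders, not DART-WRF's actual API):
```python
import subprocess

def submit(script, depends_on=None):
    """Submit a SLURM job and return its job id (placeholder helper)."""
    cmd = ['sbatch', '--parsable']
    if depends_on:
        # only start after the earlier job finished successfully
        cmd.append('--dependency=afterok:' + str(depends_on))
    cmd.append(script)
    return subprocess.check_output(cmd).decode().strip()

id1 = submit('prepare_ic.sh')               # starts when resources are free
id2 = submit('run_wrf.sh', depends_on=id1)  # waits for id1 to succeed
```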
%% Cell type:code id:400244f1-098b-46ea-b29d-2226c7cbc827 tags:
``` python
```