"[`cycled_exp.py`](https://github.com/lkugler/DART-WRF/blob/master/generate_free.py) contains an example which will be explained here:\n",
"\n",
"#### Configure your experiment\n",
"See tutorial 1.\n",
"We start again by configuring `config/cfg.py`.\n",
"\n",
"Then we write a script (or edit an existing one) in the main directory `DART-WRF/`.\n",
"`nano new_experiment.py`\n",
"\n",
"#### Prepare initial conditions from previous forecasts\n",
"Before starting, let's set up the directory structure with\n",
"---\n",
"Any script needs to import some modules:\n",
"```python\n",
"begin = dt.datetime(2008, 7, 30, 6)\n",
"prepare_WRFrundir(begin)\n",
"import os, sys, shutil\n",
"import datetime as dt\n",
"\n",
"from dartwrf import utils\n",
"from config.cfg import exp\n",
"from config.clusters import cluster\n",
"```\n",
"\n",
"#### Run a forecast\n",
"Let's say you\n",
"- want to run a forecast starting at 6 UTC until 12 UTC\n",
"- do not want WRF restart files\n",
"---\n",
"Now there are two options:\n",
"- To start a forecast from an existing forecast, i.e. from WRF restart files\n",
"- To start a forecast from defined thermodynamic profiles, i.e. from a `wrf_profile`\n",
"\n",
"\n",
"#### Run a forecast from initial conditions of a previous forecasts\n",
"Let's say you want to run a forecast starting at 9 UTC until 12 UTC.\n",
"Initial conditions shall be taken from a previous experiment in `/user/test/data/sim_archive/exp_abc` which was initialized at 6 UTC and there are WRF restart files for 9 UTC.\n",
"After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using function `run_ENS()`.\n",
"\n",
"\n",
"\n",
"#### Cycled data assimilation code\n",
"\n",
"Then the loop looks like\n",
"\n",
"\n",
"\n",
"#### Job scheduling status\n",
"The script submits jobs into the SLURM queue with dependencies so that SLURM starts the jobs itself as soon as resources are available. Most jobs need only a few cores, but model integration is done across many nodes:\n",
"```bash\n",
"$ squeue -u `whoami` --sort=i\n",
" JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)\n",
"After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using function `run_ENS()`."
**Goal**: To run a cycled data assimilation experiment.
[`cycled_exp.py`](https://github.com/lkugler/DART-WRF/blob/master/cycled_exp.py) contains an example which will be explained here:
#### Configure your experiment
We start again by configuring `config/cfg.py` (see tutorial 1).
Then we write a script (or edit an existing one) in the main directory `DART-WRF/`:
`nano new_experiment.py`

---
Any script needs to import some modules:
```python
import os, sys, shutil
import datetime as dt

from dartwrf import utils
from config.cfg import exp
from config.clusters import cluster
```

#### Prepare initial conditions from previous forecasts
Before starting, let's set up the directory structure with
```python
begin = dt.datetime(2008, 7, 30, 6)
prepare_WRFrundir(begin)
```
#### Run a forecast
Let's say you
- want to run a forecast from 6 UTC until 12 UTC
- do not want WRF restart files
---
Now there are two options (a sketch of the corresponding `run_ENS()` call follows this list):
- To start a forecast from an existing forecast, i.e. from WRF restart files
- To start a forecast from defined thermodynamic profiles, i.e. from a `wrf_profile`
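For the scenario above, the `run_ENS()` call could look roughly like the following sketch. The keyword arguments mirror the call in the cycling loop further below; treating a very large `output_restart_interval` (in minutes) as "no restart files" follows the comment there, and a `depends_on=<job id>` argument can be added if the forecast has to wait for an earlier job.

```python
begin = dt.datetime(2008, 7, 30, 6)   # forecast start
end = dt.datetime(2008, 7, 30, 12)    # forecast end

id = run_ENS(begin=begin,                   # start integration from here
             end=end,                       # integrate until here
             output_restart_interval=9999)  # large value (minutes): effectively no restart files
```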
#### Run a forecast from initial conditions of a previous forecast
Let's say you want to run a forecast from 9 UTC until 12 UTC.
Initial conditions shall be taken from a previous experiment in `/user/test/data/sim_archive/exp_abc`, which was initialized at 6 UTC and provides WRF restart files for 9 UTC.
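Pointing the new run at those restart files could look roughly like this; the function name `prepare_IC_from_prior()` and its argument order are modelled on the cycled example script and should be treated as assumptions:

```python
prior_path_exp = '/user/test/data/sim_archive/exp_abc'  # experiment that provides the restart files
prior_init_time = dt.datetime(2008, 7, 30, 6)           # when that experiment was initialized
prior_valid_time = dt.datetime(2008, 7, 30, 9)          # time of the wrfrst files to start from

# copy the prior members' restart files into this experiment (posterior = prior; assumed helper)
id = prepare_IC_from_prior(prior_path_exp, prior_init_time, prior_valid_time)
```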
To update the model state with assimilation increments, you need to update the WRF restart files by running
`id = update_IC_from_DA(time, depends_on=id)`
After this, the wrfrst files are updated with the assimilation increments (filter_restart) and copied to WRF's run directories, so you can continue to run the ensemble after assimilation using the function `run_ENS()`.
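Continuing the ensemble from 9 to 12 UTC then uses the same `run_ENS()` pattern as in the cycling loop below, for example:

```python
time = dt.datetime(2008, 7, 30, 9)
timedelta_integrate = dt.timedelta(hours=3)       # integrate until 12 UTC

id = run_ENS(begin=time,                          # start integration from here
             end=time + timedelta_integrate,      # integrate until here
             output_restart_interval=9999,        # no further restart files needed
             depends_on=id)                       # wait for update_IC_from_DA to finish
```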
#### Cycled data assimilation code
The cycling loop assumes that a few time-control variables (`time`, `timedelta_btw_assim`, `timedelta_integrate`, `prior_path_exp`, `prior_init_time`) have been initialized beforehand, for example:
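```python
# illustrative initialization before the first cycle; names follow the loop excerpt,
# values follow the example above, and the real cycled_exp.py may use different ones
time = dt.datetime(2008, 7, 30, 9)                # first assimilation time
last_assim_time = dt.datetime(2008, 7, 30, 12)    # assumed name for the final analysis time
timedelta_btw_assim = dt.timedelta(minutes=15)    # assimilation interval (illustrative)
timedelta_integrate = dt.timedelta(minutes=15)    # integrate at least until the next analysis

prior_path_exp = '/user/test/data/sim_archive/exp_abc'  # the first prior comes from this experiment
prior_init_time = dt.datetime(2008, 7, 30, 6)
```

With these in place, the loop itself looks like the following excerpt; the assimilation and initial-condition update described above run at the top of each cycle: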
```python
while time <= last_assim_time:      # cycle until the final analysis time (assumed loop condition)

    # ... assimilation and update of the initial conditions for this cycle (see above) ...

    output_restart_interval = 9999  # no restart file after last assim

    # 3) Run WRF ensemble
    id = run_ENS(begin=time,                      # start integration from here
                 end=time + timedelta_integrate,  # integrate until here
                 output_restart_interval=output_restart_interval,
                 depends_on=id)

    # as we have WRF output, we can use our own exp path as prior
    prior_path_exp = cluster.archivedir

    id_sat = create_satimages(time, depends_on=id)

    # increment time
    time += timedelta_btw_assim

    # update time variables
    prior_init_time = time - timedelta_btw_assim

# verification, once the cycling is done
verify_sat(id_sat)
verify_wrf(id)
verify_fast(id)
```

Each of these calls submits one or more SLURM jobs and hands the resulting job id to the next call via `depends_on`; this is what builds the dependency chain described next.
#### Job scheduling status
The script submits jobs into the SLURM queue with dependencies so that SLURM starts the jobs itself as soon as resources are available. Most jobs need only a few cores, but model integration is done across many nodes:
```bash
$ squeue -u `whoami` --sort=i
   JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
```