"**Goal**: To run a cycled data assimilation experiment.\n",
"[`cycled_exp.py`](https://github.com/lkugler/DART-WRF/blob/master/generate_free.py) contains an example which will be explained here:\n",
"\n",
"Now there are two options:\n",
"1) To start a forecast from an existing forecast, i.e. from WRF restart files\n",
"2) To start a forecast from defined thermodynamic profiles, i.e. from a `wrf_profile`\n",
"\n",
"\n",
"### Restart a forecast\n",
"To run a forecast from initial conditions of a previous forecasts, we import these modules\n",
"```python\n",
"import datetime as dt\n",
"from dartwrf.workflows import WorkFlows\n",
"```\n",
"\n",
"Let's say you want to run a forecast starting at 9 UTC until 12 UTC.\n",
"Initial conditions shall be taken from a previous experiment in `/user/test/data/sim_archive/exp_abc` which was initialized at 6 UTC and there are WRF restart files for 9 UTC.\n",
"2) Update posterior with increments from assimilation\n",
"After this, the wrfrst files are updated with assimilation increments from DART output and copied to the WRF's run directories so you can continue to run the forecast ensemble.\n",
**Goal**: To run a cycled data assimilation experiment.
[`cycled_exp.py`](https://github.com/lkugler/DART-WRF/blob/master/generate_free.py) contains an example, which will be explained here.

There are two options to start the forecast:
1) To start a forecast from an existing forecast, i.e. from WRF restart files
2) To start a forecast from defined thermodynamic profiles, i.e. from a `wrf_profile`
### Restart a forecast
To run a forecast from the initial conditions of a previous forecast, we import these modules:
```python
import datetime as dt
from dartwrf.workflows import WorkFlows
```
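All steps of the experiment are methods of a `WorkFlows` object, called `w` in the snippets below. A minimal sketch of how it could be created; the two config file names here are placeholder assumptions, not part of this example:

```python
# Sketch: create the workflow object `w` used in the snippets below.
# The config file names are placeholder assumptions.
w = WorkFlows(exp_config='exp_template.py', server_config='jet.py')
```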
Let's say you want to run a forecast from 9 UTC until 12 UTC.
Initial conditions shall be taken from a previous experiment in `/user/test/data/sim_archive/exp_abc`, which was initialized at 6 UTC and has WRF restart files for 9 UTC.
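For this scenario, the prior settings could look like the following sketch. The path and the hours are taken from the text above; the calendar date is an assumption for illustration, and `prior_valid_time` is a variable name assumed here (the other names appear in the code further below):

```python
# Sketch: prior settings for the scenario described above.
# The date (30 July 2008) is an assumption; only the hours are given in the text.
prior_path_exp = '/user/test/data/sim_archive/exp_abc'  # previous experiment
prior_init_time = dt.datetime(2008, 7, 30, 6)           # previous run initialized at 6 UTC
prior_valid_time = dt.datetime(2008, 7, 30, 9)          # restart files valid at 9 UTC

begin = dt.datetime(2008, 7, 30, 9)                     # forecast start
end = dt.datetime(2008, 7, 30, 12)                      # forecast end
```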
In order to continue the cycle after assimilation, you need the posterior initial conditions, i.e. the prior plus the increments from the assimilation:

1) Set posterior = prior
2) Update the posterior with the increments from the assimilation

After these two steps, the `wrfrst` files are updated with the assimilation increments from the DART output and copied to WRF's run directories, so you can continue to run the forecast ensemble.
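A sketch of the first step, assuming DART-WRF's `prepare_IC_from_prior` workflow method (its exact signature is an assumption), the `prior_*` variables from the sketch above, and `time` as the current assimilation time:

```python
# 1) Set posterior = prior (sketch; the call signature is an assumption)
id = w.prepare_IC_from_prior(prior_path_exp, prior_init_time, prior_valid_time)
```

The second step is a single call: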
```python
id = w.update_IC_from_DA(time, depends_on=id)
```
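Note the `id` pattern: each workflow call submits a job and returns an identifier, and passing that identifier as `depends_on` makes the next job wait for it. This is how the SLURM dependencies described at the end of this tutorial are chained.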
3) Define how long you want to run the forecast and when you want WRF restart files to be written. Since restart files take a lot of disk space, we want as few as possible.
```python
timedelta_integrate = dt.timedelta(hours=5)
output_restart_interval = 9999  # any value larger than the forecast duration: no restart file after the last assimilation
" prior_init_time = time - timedelta_btw_assim\n",
" \n",
"w.verify_sat(id_sat)\n",
"w.verify_wrf(id)\n",
"w.verify_fast(id)\n",
"```\n",
"\n",
"#### Job scheduling status\n",
"If you work on a server with a queueing system, the script submits jobs into the SLURM queue with dependencies so that SLURM starts the jobs itself as soon as resources are available. Most jobs need only a few cores, but model integration is done across many nodes. You can look at the status with\n",
"```bash\n",
"$ squeue -u `whoami` --sort=i\n",
" JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)\n",
# 3) Run WRF ensemble
id = w.run_ENS(begin=time,  # start integration from here
               end=time + timedelta_integrate,  # integrate until here
               output_restart_interval=output_restart_interval,
               depends_on=id)

# since we now have WRF output, we can use our own experiment path as the prior
prior_path_exp = cluster.archivedir

id_sat = w.create_satimages(time, depends_on=id)

# increment time
time += timedelta_btw_assim

# update time variables
prior_init_time = time - timedelta_btw_assim

w.verify_sat(id_sat)
w.verify_wrf(id)
w.verify_fast(id)
```
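During cycling, you instead want exactly one restart file, valid at the next assimilation time, so that the next cycle can start from it. A sketch of that alternative setting, under the assumption that `output_restart_interval` is interpreted in minutes and that `timedelta_btw_assim` is the cycling interval used above:

```python
# Sketch: during cycling, write a restart file exactly at the next assimilation time.
# Assumes output_restart_interval is given in minutes.
timedelta_integrate = timedelta_btw_assim
output_restart_interval = timedelta_btw_assim.total_seconds() / 60
```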
#### Job scheduling status
If you work on a server with a queueing system, the script submits the jobs into the SLURM queue with dependencies, so that SLURM starts each job on its own as soon as its dependencies are fulfilled and resources are available. Most jobs need only a few cores, but the model integration runs across many nodes. You can check the job status with
```bash
$ squeue -u `whoami` --sort=i
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)