"3) Create 3D initial conditions from input_sounding etc.:\n",
"`id = run_ideal(depends_on=id)`\n",
"\n",
"#### Run free forecast\n",
"Let's say you want to run a free forecast starting at 6z, which you want to use as prior for an assimilation at 9z. Then you need can use the above defined 3 steps to create initial conditions.\n",
"Then you can run an ensemble forecast using:\n",
"#### Prepare initial conditions from previous forecasts\n",
"Before starting, let's set up the directory structure with\n",
"```python\n",
"begin = dt.datetime(2008, 7, 30, 6)\n",
"prepare_WRFrundir(begin)\n",
"```\n",
"\n",
"#### Run a forecast\n",
"Let's say you\n",
"- want to run a forecast starting at 6 UTC until 12 UTC\n",
"- do not want WRF restart files\n",
"\n",
"then the required code is\n",
"```python\n",
"begin = dt.datetime(2008, 7, 30, 6)\n",
"end = dt.datetime(2008, 7, 30, 12)\n",
"\n",
"id = run_ENS(begin=begin, # start integration from here\n",
"After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using\n",
"After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using function `run_ENS()`.\n",
"\n",
"\n",
"\n",
"#### Cycled data assimilation code\n",
"\n",
"Then the loop looks like\n",
"\n",
"```\n",
"id = run_ENS(begin=time, # start integration from here\n",
" end=time + timedelta_integrate, # integrate until here\n",
"where times are `dt.datetime`; `timedelta` variables are `dt.timedelta`.\n",
"\n",
"\n",
"#### Job scheduling status\n",
"The script submits jobs into the SLURM queue with dependencies so that SLURM starts the jobs itself as soon as resources are available. Most jobs need only a few cores, but model integration is done across many nodes:\n",
"```\n",
"```bash\n",
"$ squeue -u `whoami` --sort=i\n",
" JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)\n",
3) Create 3D initial conditions from input_sounding etc.:
`id = run_ideal(depends_on=id)`
#### Run free forecast
Let's say you want to run a free forecast starting at 6z, which you want to use as the prior for an assimilation at 9z. Then you can use the three steps defined above to create initial conditions.
Then you can run an ensemble forecast with `run_ENS()`, as shown below.
#### Prepare initial conditions from previous forecasts
Before starting, let's set up the directory structure with
```python
import datetime as dt

begin = dt.datetime(2008, 7, 30, 6)
prepare_WRFrundir(begin)  # set up the run directories for this start time
```
#### Run a forecast
Let's say you

- want to run a forecast from 6 UTC until 12 UTC
- do not want WRF restart files

then the required code is
```python
begin = dt.datetime(2008, 7, 30, 6)
end = dt.datetime(2008, 7, 30, 12)

id = run_ENS(begin=begin,  # start integration from here
             end=end,      # integrate until here
             depends_on=id)
```
To update the model state with assimilation increments, you need to update the WRF restart files by running
`id = update_IC_from_DA(time, depends_on=id)`
After this, the wrfrst files are updated with the assimilation increments (filter_restart) and copied to the WRF run directories, so you can continue to run the ensemble after assimilation using `run_ENS()`.
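Conceptually, the update step replaces each member's restart file with its assimilation-updated counterpart. A minimal sketch, assuming a hypothetical directory layout and file naming (the real `update_IC_from_DA` differs in detail):

```python
import shutil
from pathlib import Path

def update_IC_from_DA_sketch(time, ens_size, archive, runs):
    """Conceptual sketch of the update step described above: for each
    ensemble member, copy the assimilation-updated restart file
    (filter_restart) over that member's wrfrst file in its WRF run
    directory. Paths and file names here are hypothetical."""
    for iens in range(1, ens_size + 1):
        src = Path(archive) / f"{time:%Y-%m-%d_%H:%M}" / f"filter_restart_{iens:04d}"
        dst = Path(runs) / f"{iens:04d}" / "wrfrst_d01"
        shutil.copy(src, dst)
```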
#### Cycled data assimilation code
The loop over assimilation cycles then looks like

```python
id = run_ENS(begin=time,                      # start integration from here
             end=time + timedelta_integrate,  # integrate until here
             depends_on=id)
```

where times are `dt.datetime` objects and `timedelta` variables are `dt.timedelta` objects.
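Putting the pieces together, one full cycling loop can be sketched end to end. The sketch below is runnable thanks to stand-in stubs for `run_ENS` and `update_IC_from_DA` (the real functions submit SLURM jobs and return a job id that is passed along via `depends_on`); the stub bodies and the 3-hour cycle length are illustrative only:

```python
import datetime as dt

def run_ENS(begin, end, depends_on=None):      # stub standing in for the real function
    return f"forecast_{begin:%H}"              # pretend SLURM job id

def update_IC_from_DA(time, depends_on=None):  # stub standing in for the real function
    return f"update_{time:%H}"                 # pretend SLURM job id

time = dt.datetime(2008, 7, 30, 6)         # first forecast start
end_time = dt.datetime(2008, 7, 30, 12)    # end of the experiment
timedelta_integrate = dt.timedelta(hours=3)

id = None
while time < end_time:
    # integrate the ensemble one cycle forward
    id = run_ENS(begin=time, end=time + timedelta_integrate, depends_on=id)
    time += timedelta_integrate
    # after assimilation at `time`, fold the increments back into the wrfrst files
    id = update_IC_from_DA(time, depends_on=id)
```

Each call returns a job id, so every step waits for the previous one in the SLURM queue.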
#### Job scheduling status
The script submits jobs into the SLURM queue with dependencies, so that SLURM itself starts each job as soon as its dependencies are fulfilled and resources are available. Most jobs need only a few cores, but model integration runs across many nodes:
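Under the hood, this kind of chaining can be expressed with `sbatch --dependency`. A minimal sketch of a command builder (a hypothetical helper, not part of this package):

```python
def sbatch_command(script, depends_on=None):
    """Build an sbatch command line; if depends_on is a SLURM job id,
    the new job starts only after that job has finished successfully."""
    cmd = ["sbatch", "--parsable"]  # --parsable makes sbatch print just the job id
    if depends_on is not None:
        cmd.append(f"--dependency=afterok:{depends_on}")
    cmd.append(script)
    return cmd
```

Running the returned command (e.g. via `subprocess.run`) prints the new job id, which can then be passed as `depends_on` for the next step.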
```bash
$ squeue -u `whoami` --sort=i
    JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
```