"#### Prepare initial conditions from previous forecasts\n",
"1) Define starting time:\n",
"Before starting, let's set up the directory structure with\n",
"`begin = dt.datetime(2008, 7, 30, 6)`\n",
"```python\n",
"2) WRF needs directories with certain files:\n",
"begin = dt.datetime(2008, 7, 30, 6)\n",
"`id = prepare_WRFrundir(begin)`\n",
"prepare_WRFrundir(begin)\n",
"3) Create 3D initial conditions from input_sounding etc.:\n",
"`id = run_ideal(depends_on=id)`\n",
"\n",
"#### Run free forecast\n",
"Let's say you want to run a free forecast starting at 6z, which you want to use as prior for an assimilation at 9z. Then you need can use the above defined 3 steps to create initial conditions.\n",
"Then you can run an ensemble forecast using:\n",
"```\n",
"```\n",
"\n",
"#### Run a forecast\n",
"Let's say you\n",
"- want to run a forecast starting at 6 UTC until 12 UTC\n",
"- do not want WRF restart files\n",
"\n",
"then the required code is\n",
"```python\n",
"begin = dt.datetime(2008, 7, 30, 6)\n",
"end = dt.datetime(2008, 7, 30, 12)\n",
"\n",
"id = run_ENS(begin=begin, # start integration from here\n",
"id = run_ENS(begin=begin, # start integration from here\n",
" end=end, # integrate until here\n",
" end=end, # integrate until here\n",
" input_is_restart=False,\n",
" output_restart_interval=9999, # do not write WRF restart files\n",
"After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using\n",
"After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using function `run_ENS()`.\n",
"\n",
"\n",
"\n",
"#### Cycled data assimilation code\n",
"\n",
"Then the loop looks like\n",
"\n",
"\n",
"```\n",
"id = run_ENS(begin=time, # start integration from here\n",
" end=time + timedelta_integrate, # integrate until here\n",
"where times are `dt.datetime`; `timedelta` variables are `dt.timedelta`.\n",
"\n",
"\n",
"\n",
"\n",
"#### Job scheduling status\n",
"#### Job scheduling status\n",
"The script submits jobs into the SLURM queue with dependencies so that SLURM starts the jobs itself as soon as resources are available. Most jobs need only a few cores, but model integration is done across many nodes:\n",
"The script submits jobs into the SLURM queue with dependencies so that SLURM starts the jobs itself as soon as resources are available. Most jobs need only a few cores, but model integration is done across many nodes:\n",
"```\n",
"```bash\n",
"$ squeue -u `whoami` --sort=i\n",
"$ squeue -u `whoami` --sort=i\n",
" JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)\n",
" JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)\n",
#### Prepare initial conditions from previous forecasts
1) Define starting time:
`begin = dt.datetime(2008, 7, 30, 6)`
2) WRF needs directories with certain files:
`id = prepare_WRFrundir(begin)`
3) Create 3D initial conditions from input_sounding etc.:
`id = run_ideal(depends_on=id)`
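
Taken together, the three steps chain through the returned job id via `depends_on`. The following is a minimal runnable sketch of that chaining pattern only, with stand-in stubs replacing the real `prepare_WRFrundir` and `run_ideal` (the stubs just record submission order and return fake job ids; they do not touch WRF):

```python
import datetime as dt

submitted = []  # records (job_name, depends_on) in submission order

def _submit(name, depends_on=None):
    """Stub for job submission: remembers the dependency, returns a fake job id."""
    submitted.append((name, depends_on))
    return len(submitted)  # fake job id

def prepare_WRFrundir(begin, depends_on=None):  # stub, not the real function
    return _submit("prepare_WRFrundir", depends_on)

def run_ideal(depends_on=None):  # stub, not the real function
    return _submit("run_ideal", depends_on)

begin = dt.datetime(2008, 7, 30, 6)   # step 1: starting time
id = prepare_WRFrundir(begin)         # step 2: set up run directories
id = run_ideal(depends_on=id)         # step 3: runs only after step 2

print(submitted)  # → [('prepare_WRFrundir', None), ('run_ideal', 1)]
```

Each call returns a job id and accepts the previous one, so the scheduler can enforce the ordering.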

#### Run free forecast
Let's say you want to run a free forecast starting at 6 UTC, which you want to use as prior for an assimilation at 9 UTC. Then you can use the three steps defined above to create initial conditions.
Then you can run an ensemble forecast using `run_ENS()`.

#### Run a forecast
Let's say you
- want to run a forecast starting at 6 UTC until 12 UTC
- do not want WRF restart files

then the required code is
```python
begin = dt.datetime(2008, 7, 30, 6)
end = dt.datetime(2008, 7, 30, 12)

id = run_ENS(begin=begin,  # start integration from here
             end=end,  # integrate until here
             input_is_restart=False,
             output_restart_interval=9999,  # do not write WRF restart files
             depends_on=id)
```

To update the model state with assimilation increments, you need to update the WRF restart files by running
`id = update_IC_from_DA(time, depends_on=id)`
After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF run directories, so you can continue to run the ensemble after assimilation using `run_ENS()`.

#### Cycled data assimilation code

Then the loop looks like
```python
id = run_ENS(begin=time,                       # start integration from here
             end=time + timedelta_integrate,   # integrate until here
             depends_on=id)
```
where times are `dt.datetime`; `timedelta` variables are `dt.timedelta`.
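
As a reminder of the time arithmetic involved, this standalone snippet builds the list of assimilation times for such a loop using only the standard `datetime` module (the one-hour cycle length and the 9 UTC end time are illustrative values, not taken from the workflow above):

```python
import datetime as dt

time = dt.datetime(2008, 7, 30, 6)           # first assimilation time
last_time = dt.datetime(2008, 7, 30, 9)      # illustrative last cycle
timedelta_btw_assim = dt.timedelta(hours=1)  # illustrative cycle length

assim_times = []
while time <= last_time:
    assim_times.append(time)
    time += timedelta_btw_assim

print([t.strftime("%H:%M") for t in assim_times])  # → ['06:00', '07:00', '08:00', '09:00']
```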

#### Job scheduling status
The script submits jobs into the SLURM queue with dependencies, so that SLURM starts the jobs itself as soon as resources are available. Most jobs need only a few cores, but model integration is done across many nodes:
```bash
$ squeue -u `whoami` --sort=i
   JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
```