diff --git a/docs/source/tutorial1.ipynb b/docs/source/tutorial1.ipynb
index 2916ade793923559f1823ce387db7bc410d8e69a..06a5e08a535fc3d9bc407a4fc57899370c18d3cc 100644
--- a/docs/source/tutorial1.ipynb
+++ b/docs/source/tutorial1.ipynb
@@ -1,11 +1,12 @@
 {
  "cells": [
   {
+   "attachments": {},
    "cell_type": "markdown",
    "id": "fd5c3005-f237-4495-9185-2d4d474cafd5",
    "metadata": {},
    "source": [
-    "# DART-WRF introductory example\n",
+    "# Tutorial 1: The assimilation step\n",
     "DART-WRF is a python package which automates many things like configuration, saving configuration and output, handling computing resources, etc.\n",
     "\n",
     "**Goal**: Using a predefined configuration file, run an example of Data Assimilation.\n",
diff --git a/docs/source/tutorial2.ipynb b/docs/source/tutorial2.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..054e068ffc482dc2e112ca583a8f388fc71a5612
--- /dev/null
+++ b/docs/source/tutorial2.ipynb
@@ -0,0 +1,161 @@
+{
+ "cells": [
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "id": "fd5c3005-f237-4495-9185-2d4d474cafd5",
+   "metadata": {},
+   "source": [
+    "# Tutorial 2: Cycled experiment\n",
+    "\n",
+    "\n",
+    "**Goal**: To run a cycled data assimilation experiment.\n",
+    "[`cycled_exp.py`](https://github.com/lkugler/DART-WRF/blob/master/generate_free.py) contains an example which will be explained here:\n",
+    "\n",
+    "#### Configure your experiment\n",
+    "See tutorial 1.\n",
+    "\n",
+    "#### Prepare initial conditions (from input_sounding)\n",
+    "1) Define starting time:\n",
+    "`begin = dt.datetime(2008, 7, 30, 6)`\n",
+    "2) WRF needs directories with certain files:\n",
+    "`id = prepare_WRFrundir(begin)`\n",
+    "3) Create 3D initial conditions from input_sounding etc.:\n",
+    "`id = run_ideal(depends_on=id)`\n",
+    "\n",
+    "#### Run free forecast\n",
+    "Let's say you want to run a free forecast starting at 6z, which you want to use as prior for an assimilation at 9z. Then you need can use the above defined 3 steps to create initial conditions.\n",
+    "Then you can run an ensemble forecast using:\n",
+    "```\n",
+    "id = run_ENS(begin=begin,  # start integration from here\n",
+    "             end=end,      # integrate until here\n",
+    "             input_is_restart=False,\n",
+    "             output_restart_interval=(end-begin).total_seconds()/60,\n",
+    "             depends_on=id)\n",
+    "```\n",
+    "where `begin` & `end` are `dt.datetime` objects.\n",
+    "\n",
+    "#### Assimilate\n",
+    "To assimilate observations at dt.datetime `time` use this command:\n",
+    "\n",
+    "`id = assimilate(time, prior_init_time, prior_valid_time, prior_path_exp, depends_on=id)`\n",
+    "\n",
+    "#### Update initial conditions from Data Assimilation\n",
+    "In order to continue after assimilation you need the posterior = prior (1) + increments (2)\n",
+    "\n",
+    "1. Set prior with this function:\n",
+    "\n",
+    "`id = prepare_IC_from_prior(prior_path_exp, prior_init_time, prior_valid_time, depends_on=id)`\n",
+    "\n",
+    "where path is `str`, times are `dt.datetime`.\n",
+    "\n",
+    "2. To update the model state with assimilation increments, you need to update the WRF restart files by running\n",
+    "\n",
+    "`id = update_IC_from_DA(time, depends_on=id)`\n",
+    "\n",
+    "After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using\n",
+    "\n",
+    "```\n",
+    "id = run_ENS(begin=time,  # start integration from here\n",
+    "             end=time + timedelta_integrate,  # integrate until here\n",
+    "             restart_path=cluster.archivedir+prior_init_time.strftime('/%Y-%m-%d_%H:%M/'),\n",
+    "             output_restart_interval=timedelta_btw_assim.total_seconds()/60,\n",
+    "             depends_on=id)\n",
+    "```\n",
+    "where times are `dt.datetime`; `timedelta` variables are `dt.timedelta`.\n",
+    "\n",
+    "\n",
+    "#### Job scheduling status\n",
+    "The script submits jobs into the SLURM queue with dependencies so that SLURM starts the jobs itself as soon as resources are available. Most jobs need only a few cores, but model integration is done across many nodes:\n",
+    "```\n",
+    "$ squeue -u `whoami` --sort=i\n",
+    "             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)\n",
+    "           1710274  mem_0384 prepwrfr  lkugler PD       0:00      1 (Priority)\n",
+    "           1710275  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710276  mem_0384 Assim-42  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710277  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710278  mem_0384 IC-updat  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710279  mem_0384 preWRF2-  lkugler PD       0:00      1 (Dependency)\n",
+    "    1710280_[1-10]  mem_0384 runWRF2-  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710281  mem_0384 pRTTOV-6  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710282  mem_0384 Assim-3a  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710283  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710284  mem_0384 IC-updat  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710285  mem_0384 preWRF2-  lkugler PD       0:00      1 (Dependency)\n",
+    "    1710286_[1-10]  mem_0384 runWRF2-  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710287  mem_0384 pRTTOV-7  lkugler PD       0:00      1 (Dependency)\n",
+    "```\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "400244f1-098b-46ea-b29d-2226c7cbc827",
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "nwp 2023.1 - 3.10",
+   "language": "python",
+   "name": "nwp2023.1"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.9"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/source/tutorial2.rst b/docs/source/tutorial2.rst
deleted file mode 100644
index 254ae4b037d2a70e743689102d4865cb3f38aded..0000000000000000000000000000000000000000
--- a/docs/source/tutorial2.rst
+++ /dev/null
@@ -1,99 +0,0 @@
-Cycled experiment
-=================
-
-#### Configure your experiment
-Define simulation specific variables in [`config/cfg.py`](https://github.com/lkugler/DART-WRF/blob/master/config/cfg.py).
-Define paths for python, ncks, etc. in [`config/clusters.py`](https://github.com/lkugler/DART-WRF/blob/master/config/clusters.py).
-Dependencies are `numpy, pandas, scipy, xarray, netCDF4`. Install non-standard packages with `pip install docopt slurmpy --user`.
-Workflow is defined using meta-routines (functions) like `run_ENS` which are defined in `scheduler.py`.
-
-#### Prepare initial conditions (from input_sounding)
-1) Define starting time:
-`begin = dt.datetime(2008, 7, 30, 6)`
-2) WRF needs directories with certain files:
-`id = prepare_WRFrundir(begin)`
-3) Create 3D initial conditions from input_sounding etc.:
-`id = run_ideal(depends_on=id)`
-
-### Run free forecast
-Let's say you want to run a free forecast starting at 6z, which you want to use as prior for an assimilation at 9z. Then you need can use the above defined 3 steps to create initial conditions.
-Then you can run an ensemble forecast using:
-```
-id = run_ENS(begin=begin,  # start integration from here
-             end=end,      # integrate until here
-             input_is_restart=False,
-             output_restart_interval=(end-begin).total_seconds()/60,
-             depends_on=id)
-```
-where `begin` & `end` are `dt.datetime` objects.
-
-### Assimilation experiment
-#### Assimilate
-To assimilate observations at dt.datetime `time` use this command:
-
-`id = assimilate(time, prior_init_time, prior_valid_time, prior_path_exp, depends_on=id)`
-
-#### Update initial conditions from Data Assimilation
-In order to continue after assimilation you need the posterior = prior (1) + increments (2)
-
-1. Set prior with this function:
-
-`id = prepare_IC_from_prior(prior_path_exp, prior_init_time, prior_valid_time, depends_on=id)`
-
-where path is `str`, times are `dt.datetime`.
-
-2. To update the model state with assimilation increments, you need to update the WRF restart files by running
-
-`id = update_IC_from_DA(time, depends_on=id)`
-
-After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using
-
-```
-id = run_ENS(begin=time,  # start integration from here
-             end=time + timedelta_integrate,  # integrate until here
-             restart_path=cluster.archivedir+prior_init_time.strftime('/%Y-%m-%d_%H:%M/'),
-             output_restart_interval=timedelta_btw_assim.total_seconds()/60,
-             depends_on=id)
-```
-where times are `dt.datetime`; `timedelta` variables are `dt.timedelta`.
-
-### Examples
-[`scheduler.py`](https://github.com/lkugler/DART-WRF/blob/master/scheduler.py)
-[`generate_free.py`](https://github.com/lkugler/DART-WRF/blob/master/generate_free.py)
-
-## Finally
-
-### SLURM submissions
-`scheduler.py` submits jobs into the SLURM queue with dependencies, so that SLURM starts the jobs itself as soon as resources are available. Most jobs need only one node, but model integration is done in a SLURM job array across e.g. 10 nodes:
-```
-$ squeue -u `whoami` --sort=i
-             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
-           1710274  mem_0384 prepwrfr  lkugler PD       0:00      1 (Priority)
-           1710275  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)
-           1710276  mem_0384 Assim-42  lkugler PD       0:00      1 (Dependency)
-           1710277  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)
-           1710278  mem_0384 IC-updat  lkugler PD       0:00      1 (Dependency)
-           1710279  mem_0384 preWRF2-  lkugler PD       0:00      1 (Dependency)
-    1710280_[1-10]  mem_0384 runWRF2-  lkugler PD       0:00      1 (Dependency)
-           1710281  mem_0384 pRTTOV-6  lkugler PD       0:00      1 (Dependency)
-           1710282  mem_0384 Assim-3a  lkugler PD       0:00      1 (Dependency)
-           1710283  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)
-           1710284  mem_0384 IC-updat  lkugler PD       0:00      1 (Dependency)
-           1710285  mem_0384 preWRF2-  lkugler PD       0:00      1 (Dependency)
-    1710286_[1-10]  mem_0384 runWRF2-  lkugler PD       0:00      1 (Dependency)
-           1710287  mem_0384 pRTTOV-7  lkugler PD       0:00      1 (Dependency)
-```
-
-### Easily switch between clusters
-Define cluster specific variables in `config/clusters.py `:
-```python
-
-clusterA = ClusterConfig()
-clusterA.name = 'vsc'
-clusterA.userdir = '/home/pathA/myuser/'
-...
-clusterB = ClusterConfig()
-clusterB.name = 'jet'
-clusterB.userdir = '/home/pathB/myuser/'
-```
-