diff --git a/docs/source/index.rst b/docs/source/index.rst
index 50d81839ec4fbb6eeec2a8f9286683e48dede0fe..ef7c001a48a5e058f533b6791c256cd78d346c6b 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -22,6 +22,7 @@ At the command line:
    self
    tutorial1
    tutorial2
+   tutorial3
 
 
 
diff --git a/docs/source/tutorial2.ipynb b/docs/source/tutorial2.ipynb
index f8bcae0a68381ffa4b48478bf0f001e3ddc6bced..79f0df098d96c938b9a3d6e29b3d716fd9c8ccdc 100644
--- a/docs/source/tutorial2.ipynb
+++ b/docs/source/tutorial2.ipynb
@@ -3,7 +3,9 @@
   {
    "cell_type": "markdown",
    "id": "fd5c3005-f237-4495-9185-2d4d474cafd5",
-   "metadata": {},
+   "metadata": {
+    "tags": []
+   },
    "source": [
     "# Tutorial 2: Cycled experiment\n",
     "\n",
@@ -12,36 +14,67 @@
     "[`cycled_exp.py`](https://github.com/lkugler/DART-WRF/blob/master/generate_free.py) contains an example which will be explained here:\n",
     "\n",
     "#### Configure your experiment\n",
-    "See tutorial 1.\n",
+    "We start again by configuring `config/cfg.py`.\n",
     "\n",
-    "#### Prepare initial conditions from previous forecasts\n",
-    "Before starting, let's set up the directory structure with\n",
+    "Then we write a script (or edit an existing one) in the main directory `DART-WRF/`, e.g.\n",
+    "`nano new_experiment.py`\n",
+    "\n",
+    "---\n",
+    "Any script needs to import some modules:\n",
     "```python\n",
-    "begin = dt.datetime(2008, 7, 30, 6)\n",
-    "prepare_WRFrundir(begin)\n",
+    "import os, sys, shutil\n",
+    "import datetime as dt\n",
+    "\n",
+    "from dartwrf import utils\n",
+    "from config.cfg import exp\n",
+    "from config.clusters import cluster\n",
     "```\n",
     "\n",
-    "#### Run a forecast\n",
-    "Let's say you\n",
-    "- want to run a forecast starting at 6 UTC until 12 UTC\n",
-    "- do not want WRF restart files\n",
+    "---\n",
+    "Now there are two options:\n",
+    "- To start a forecast from an existing forecast, i.e. from WRF restart files\n",
+    "- To start a forecast from defined thermodynamic profiles, i.e. from a `wrf_profile`\n",
+    "\n",
+    "\n",
+    "#### Run a forecast from initial conditions of a previous forecast\n",
+    "Let's say you want to run a forecast starting at 9 UTC and running until 12 UTC.\n",
+    "Initial conditions are taken from a previous experiment in `/user/test/data/sim_archive/exp_abc`, which was initialized at 6 UTC and provides WRF restart files for 9 UTC.\n",
+    "The code then reads\n",
     "\n",
-    "then the required code is\n",
     "```python\n",
-    "begin = dt.datetime(2008, 7, 30, 6)\n",
+    "prior_path_exp = '/user/test/data/sim_archive/exp_abc'\n",
+    "prior_init_time = dt.datetime(2008, 7, 30, 6)\n",
+    "prior_valid_time = dt.datetime(2008, 7, 30, 9)\n",
+    "\n",
+    "cluster.setup()\n",
+    "\n",
+    "begin = dt.datetime(2008, 7, 30, 9)\n",
     "end = dt.datetime(2008, 7, 30, 12)\n",
     "\n",
-    "id = run_ENS(begin=begin,  # start integration from here\n",
-    "             end=end,      # integrate until here\n",
-    "             output_restart_interval=9999,  # do not write WRF restart files\n",
-    "             depends_on=id)\n",
+    "prepare_WRFrundir(begin)\n",
+    "\n",
+    "prepare_IC_from_prior(prior_path_exp, prior_init_time, prior_valid_time)\n",
+    "\n",
+    "run_ENS(begin=begin,  # start integration from here\n",
+    "        end=end,      # integrate until here\n",
+    "        output_restart_interval=9999,  # do not write WRF restart files\n",
+    "        )\n",
     "```\n",
-    "Note that `begin` and `end` are `dt.datetime` objects.\n",
+    "Note that we use predefined workflow functions like `run_ENS`.\n",
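+    "\n",
+    "Since `begin` and `end` are plain `dt.datetime` objects, the forecast window can be checked with standard-library arithmetic before submitting anything. A minimal sketch (standard library only, independent of DART-WRF):\n",
+    "\n",
+    "```python\n",
+    "import datetime as dt\n",
+    "\n",
+    "begin = dt.datetime(2008, 7, 30, 9)\n",
+    "end = dt.datetime(2008, 7, 30, 12)\n",
+    "\n",
+    "forecast_length = end - begin  # a dt.timedelta\n",
+    "print(forecast_length.total_seconds() / 3600)  # 3.0 (hours)\n",
+    "```\n",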
+    "\n",
     "\n",
-    "#### Assimilate\n",
+    "#### Assimilate observations with a prior given by previous forecasts\n",
     "To assimilate observations at dt.datetime `time` use this command:\n",
     "\n",
-    "`id = assimilate(time, prior_init_time, prior_valid_time, prior_path_exp, depends_on=id)`\n",
+    "```python\n",
+    "prior_path_exp = '/user/test/data/sim_archive/exp_abc'\n",
+    "prior_init_time = dt.datetime(2008, 7, 30, 6)\n",
+    "prior_valid_time = dt.datetime(2008, 7, 30, 9)\n",
+    "time = dt.datetime(2008, 7, 30, 9)  # time of assimilation\n",
+    "\n",
+    "assimilate(time, prior_init_time, prior_valid_time, prior_path_exp)\n",
+    "```\n",
+    "\n",
     "\n",
     "#### Update initial conditions from Data Assimilation\n",
     "In order to continue after assimilation you need the posterior = prior (1) + increments (2)\n",
@@ -56,37 +89,7 @@
     "\n",
     "`id = update_IC_from_DA(time, depends_on=id)`\n",
     "\n",
-    "After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using function `run_ENS()`.\n",
-    "\n",
-    "\n",
-    "\n",
-    "#### Cycled data assimilation code\n",
-    "\n",
-    "Then the loop looks like\n",
-    "\n",
-    "\n",
-    "\n",
-    "#### Job scheduling status\n",
-    "The script submits jobs into the SLURM queue with dependencies so that SLURM starts the jobs itself as soon as resources are available. Most jobs need only a few cores, but model integration is done across many nodes:\n",
-    "```bash\n",
-    "$ squeue -u `whoami` --sort=i\n",
-    "             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)\n",
-    "           1710274  mem_0384 prepwrfr  lkugler PD       0:00      1 (Priority)\n",
-    "           1710275  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)\n",
-    "           1710276  mem_0384 Assim-42  lkugler PD       0:00      1 (Dependency)\n",
-    "           1710277  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)\n",
-    "           1710278  mem_0384 IC-updat  lkugler PD       0:00      1 (Dependency)\n",
-    "           1710279  mem_0384 preWRF2-  lkugler PD       0:00      1 (Dependency)\n",
-    "    1710280_[1-10]  mem_0384 runWRF2-  lkugler PD       0:00      1 (Dependency)\n",
-    "           1710281  mem_0384 pRTTOV-6  lkugler PD       0:00      1 (Dependency)\n",
-    "           1710282  mem_0384 Assim-3a  lkugler PD       0:00      1 (Dependency)\n",
-    "           1710283  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)\n",
-    "           1710284  mem_0384 IC-updat  lkugler PD       0:00      1 (Dependency)\n",
-    "           1710285  mem_0384 preWRF2-  lkugler PD       0:00      1 (Dependency)\n",
-    "    1710286_[1-10]  mem_0384 runWRF2-  lkugler PD       0:00      1 (Dependency)\n",
-    "           1710287  mem_0384 pRTTOV-7  lkugler PD       0:00      1 (Dependency)\n",
-    "```\n",
-    "\n"
+    "After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using function `run_ENS()`."
    ]
   },
   {
diff --git a/docs/source/tutorial3.ipynb b/docs/source/tutorial3.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..9769038bb239d6866d21edc38d703fdec0f560a9
--- /dev/null
+++ b/docs/source/tutorial3.ipynb
@@ -0,0 +1,130 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "fd5c3005-f237-4495-9185-2d4d474cafd5",
+   "metadata": {
+    "jp-MarkdownHeadingCollapsed": true,
+    "tags": []
+   },
+   "source": [
+    "# Tutorial 3: Cycled experiment\n",
+    "\n",
+    "\n",
+    "**Goal**: To run a cycled data assimilation experiment.\n",
+    "[`cycled_exp.py`](https://github.com/lkugler/DART-WRF/blob/master/generate_free.py) contains an example which will be explained here:\n",
+    "\n",
+    "After configuring your experiment, the loop looks like this:\n",
+    "\n",
+    "```python\n",
+    "prior_path_exp = '/jetfs/home/lkugler/data/sim_archive/exp_v1.19_P2_noDA'\n",
+    "\n",
+    "init_time = dt.datetime(2008, 7, 30, 13)\n",
+    "time = dt.datetime(2008, 7, 30, 14)\n",
+    "last_assim_time = dt.datetime(2008, 7, 30, 14)\n",
+    "forecast_until = dt.datetime(2008, 7, 30, 14, 15)\n",
+    "timedelta_btw_assim = dt.timedelta(minutes=15)  # interval between assimilations, used below\n",
+    "\n",
+    "prepare_WRFrundir(init_time)\n",
+    "id = None  # first job has no dependency\n",
+    "id = run_ideal(depends_on=id)\n",
+    "\n",
+    "prior_init_time = init_time\n",
+    "prior_valid_time = time\n",
+    "\n",
+    "while time <= last_assim_time:\n",
+    "\n",
+    "    # usually we take the prior from the current time,\n",
+    "    # but one could use a prior from a different time of another run,\n",
+    "    # e.g. 13z as a prior to assimilate 12z observations\n",
+    "    prior_valid_time = time\n",
+    "\n",
+    "    id = assimilate(time, prior_init_time, prior_valid_time, prior_path_exp, depends_on=id)\n",
+    "\n",
+    "    # 1) Set posterior = prior\n",
+    "    id = prepare_IC_from_prior(prior_path_exp, prior_init_time, prior_valid_time, depends_on=id)\n",
+    "\n",
+    "    # 2) Update posterior += updates from assimilation\n",
+    "    id = update_IC_from_DA(time, depends_on=id)\n",
+    "\n",
+    "    # How long shall we integrate?\n",
+    "    timedelta_integrate = timedelta_btw_assim\n",
+    "    output_restart_interval = timedelta_btw_assim.total_seconds()/60\n",
+    "    if time == last_assim_time:  # run a longer, free forecast after the last assimilation\n",
+    "        timedelta_integrate = forecast_until - last_assim_time\n",
+    "        output_restart_interval = 9999  # no restart file after the last assimilation\n",
+    "\n",
+    "    # 3) Run WRF ensemble\n",
+    "    id = run_ENS(begin=time,  # start integration from here\n",
+    "                end=time + timedelta_integrate,  # integrate until here\n",
+    "                output_restart_interval=output_restart_interval,\n",
+    "                depends_on=id)\n",
+    "\n",
+    "    # now that we have WRF output, we can use our own experiment as the prior\n",
+    "    prior_path_exp = cluster.archivedir\n",
+    "\n",
+    "    id_sat = create_satimages(time, depends_on=id)\n",
+    "\n",
+    "    # increment time\n",
+    "    time += timedelta_btw_assim\n",
+    "\n",
+    "    # update time variables\n",
+    "    prior_init_time = time - timedelta_btw_assim\n",
+    "\n",
+    "verify_sat(id_sat)\n",
+    "verify_wrf(id)\n",
+    "verify_fast(id)\n",
+    "```\n",
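+    "\n",
+    "The scheduling arithmetic inside the loop is plain `datetime` math. A standalone sketch (standard library only, no DART-WRF needed) of the restart-interval conversion and the time stepping at the end of each iteration:\n",
+    "\n",
+    "```python\n",
+    "import datetime as dt\n",
+    "\n",
+    "timedelta_btw_assim = dt.timedelta(minutes=15)\n",
+    "\n",
+    "# WRF takes the restart interval in minutes\n",
+    "output_restart_interval = timedelta_btw_assim.total_seconds() / 60\n",
+    "print(output_restart_interval)  # 15.0\n",
+    "\n",
+    "# step the cycle forward, as at the end of each loop iteration\n",
+    "time = dt.datetime(2008, 7, 30, 14)\n",
+    "time += timedelta_btw_assim\n",
+    "prior_init_time = time - timedelta_btw_assim\n",
+    "print(time)             # 2008-07-30 14:15:00\n",
+    "print(prior_init_time)  # 2008-07-30 14:00:00\n",
+    "```\n",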
+    "\n",
+    "#### Job scheduling status\n",
+    "The script submits jobs into the SLURM queue with dependencies, so that SLURM starts each job automatically as soon as its dependencies have finished and resources are available. Most jobs need only a few cores, but model integration runs across many nodes:\n",
+    "```bash\n",
+    "$ squeue -u `whoami` --sort=i\n",
+    "             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)\n",
+    "           1710274  mem_0384 prepwrfr  lkugler PD       0:00      1 (Priority)\n",
+    "           1710275  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710276  mem_0384 Assim-42  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710277  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710278  mem_0384 IC-updat  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710279  mem_0384 preWRF2-  lkugler PD       0:00      1 (Dependency)\n",
+    "    1710280_[1-10]  mem_0384 runWRF2-  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710281  mem_0384 pRTTOV-6  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710282  mem_0384 Assim-3a  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710283  mem_0384 IC-prior  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710284  mem_0384 IC-updat  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710285  mem_0384 preWRF2-  lkugler PD       0:00      1 (Dependency)\n",
+    "    1710286_[1-10]  mem_0384 runWRF2-  lkugler PD       0:00      1 (Dependency)\n",
+    "           1710287  mem_0384 pRTTOV-7  lkugler PD       0:00      1 (Dependency)\n",
+    "```\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "400244f1-098b-46ea-b29d-2226c7cbc827",
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "nwp 2023.1 - 3.10",
+   "language": "python",
+   "name": "nwp2023.1"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.9"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}