diff --git a/docs/source/tutorial1.ipynb b/docs/source/tutorial1.ipynb
index 06a5e08a535fc3d9bc407a4fc57899370c18d3cc..256149a59150b418fb01e63ce88dde881f2c20d4 100644
--- a/docs/source/tutorial1.ipynb
+++ b/docs/source/tutorial1.ipynb
@@ -1,7 +1,6 @@
 {
  "cells": [
   {
-   "attachments": {},
    "cell_type": "markdown",
    "id": "fd5c3005-f237-4495-9185-2d4d474cafd5",
    "metadata": {},
@@ -22,10 +21,73 @@
     "- Copy the configuration file from `` to your config/ folder.\n"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "24b23d8c-29e6-484c-ad1f-f2bc07e66c66",
+   "metadata": {},
+   "source": [
+    "We start by importing some modules:\n",
+    "```python\n",
+    "import os, sys, shutil\n",
+    "import datetime as dt\n",
+    "\n",
+    "from dartwrf import utils\n",
+    "from config.cfg import exp\n",
+    "from config.clusters import cluster\n",
+    "```\n",
+    "\n",
+    "Then, we set the directory paths and times of the prior ensemble forecasts:\n",
+    "\n",
+    "```python\n",
+    "prior_path_exp = '/mnt/jetfs/scratch/lkugler/data/sim_archive/exp_v1.19_P3_wbub7_noDA'\n",
+    "prior_init_time = dt.datetime(2008,7,30,12)\n",
+    "prior_valid_time = dt.datetime(2008,7,30,12,30)\n",
+    "assim_time = prior_valid_time\n",
+    "```\n",
+    "\n",
+    "Finally, we run the data assimilation by calling\n",
+    "```python\n",
+    "cluster.setup()\n",
+    "\n",
+    "os.system(\n",
+    "    cluster.python+' '+cluster.scripts_rundir+'/assim_synth_obs.py '\n",
+    "                +assim_time.strftime('%Y-%m-%d_%H:%M ')\n",
+    "                +prior_init_time.strftime('%Y-%m-%d_%H:%M ')\n",
+    "                +prior_valid_time.strftime('%Y-%m-%d_%H:%M ')\n",
+    "                +prior_path_exp\n",
+    "    )\n",
+    "\n",
+    "create_satimages(time)\n",
+    "```\n",
+    "\n",
+    "Congratulations! You're done!"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "31b23faf-0986-407f-b07f-d635a71ec2c6",
+   "metadata": {},
+   "source": [
+    "---\n",
+    "#### Queueing systems\n",
+    "Note: In case you have to use a queueing system, use the builtin job scheduler, e.g.:\n",
+    "```python\n",
+    "id = cluster.create_job(\"Assim\", \n",
+    "                        cfg_update={\"ntasks\": \"12\", \"time\": \"60\", \"mem\": \"200G\", \n",
+    "                                    \"ntasks-per-node\": \"12\", \"ntasks-per-core\": \"2\"}\n",
+    "      ).run(cluster.python+' '+cluster.scripts_rundir+'/assim_synth_obs.py '\n",
+    "           +assim_time.strftime('%Y-%m-%d_%H:%M ')\n",
+    "           +prior_init_time.strftime('%Y-%m-%d_%H:%M ')\n",
+    "           +prior_valid_time.strftime('%Y-%m-%d_%H:%M ')\n",
+    "           +prior_path_exp, depends_on=[depends_on])\n",
+    "```\n",
+    "where `depends_on` is either `None` or `int` (a previous job's SLURM id)."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "400244f1-098b-46ea-b29d-2226c7cbc827",
+   "id": "82e809a8-5972-47f3-ad78-6290afe4ae17",
    "metadata": {},
    "outputs": [],
    "source": []
diff --git a/docs/source/tutorial2.ipynb b/docs/source/tutorial2.ipynb
index 054e068ffc482dc2e112ca583a8f388fc71a5612..f8bcae0a68381ffa4b48478bf0f001e3ddc6bced 100644
--- a/docs/source/tutorial2.ipynb
+++ b/docs/source/tutorial2.ipynb
@@ -1,7 +1,6 @@
 {
  "cells": [
   {
-   "attachments": {},
    "cell_type": "markdown",
    "id": "fd5c3005-f237-4495-9185-2d4d474cafd5",
    "metadata": {},
@@ -15,25 +14,29 @@
     "#### Configure your experiment\n",
     "See tutorial 1.\n",
     "\n",
-    "#### Prepare initial conditions (from input_sounding)\n",
-    "1) Define starting time:\n",
-    "`begin = dt.datetime(2008, 7, 30, 6)`\n",
-    "2) WRF needs directories with certain files:\n",
-    "`id = prepare_WRFrundir(begin)`\n",
-    "3) Create 3D initial conditions from input_sounding etc.:\n",
-    "`id = run_ideal(depends_on=id)`\n",
-    "\n",
-    "#### Run free forecast\n",
-    "Let's say you want to run a free forecast starting at 6z, which you want to use as prior for an assimilation at 9z. Then you need can use the above defined 3 steps to create initial conditions.\n",
-    "Then you can run an ensemble forecast using:\n",
+    "#### Prepare initial conditions from previous forecasts\n",
+    "Before starting, let's set up the directory structure with\n",
+    "```python\n",
+    "begin = dt.datetime(2008, 7, 30, 6)\n",
+    "prepare_WRFrundir(begin)\n",
     "```\n",
+    "\n",
+    "#### Run a forecast\n",
+    "Let's say you\n",
+    "- want to run a forecast starting at 6 UTC until 12 UTC\n",
+    "- do not want WRF restart files\n",
+    "\n",
+    "then the required code is\n",
+    "```python\n",
+    "begin = dt.datetime(2008, 7, 30, 6)\n",
+    "end = dt.datetime(2008, 7, 30, 12)\n",
+    "\n",
     "id = run_ENS(begin=begin,  # start integration from here\n",
     "             end=end,      # integrate until here\n",
-    "             input_is_restart=False,\n",
-    "             output_restart_interval=(end-begin).total_seconds()/60,\n",
+    "             output_restart_interval=9999,  # do not write WRF restart files\n",
     "             depends_on=id)\n",
     "```\n",
-    "where `begin` & `end` are `dt.datetime` objects.\n",
+    "Note that `begin` and `end` are `dt.datetime` objects.\n",
     "\n",
     "#### Assimilate\n",
     "To assimilate observations at dt.datetime `time` use this command:\n",
@@ -53,21 +56,19 @@
     "\n",
     "`id = update_IC_from_DA(time, depends_on=id)`\n",
     "\n",
-    "After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using\n",
+    "After this, the wrfrst files are updated with assimilation increments (filter_restart) and copied to the WRF's run directories so you can continue to run the ENS after assimilation using function `run_ENS()`.\n",
+    "\n",
+    "\n",
+    "\n",
+    "#### Cycled data assimilation code\n",
+    "\n",
+    "Then the loop looks like\n",
     "\n",
-    "```\n",
-    "id = run_ENS(begin=time,  # start integration from here\n",
-    "             end=time + timedelta_integrate,  # integrate until here\n",
-    "             restart_path=cluster.archivedir+prior_init_time.strftime('/%Y-%m-%d_%H:%M/'),\n",
-    "             output_restart_interval=timedelta_btw_assim.total_seconds()/60,\n",
-    "             depends_on=id)\n",
-    "```\n",
-    "where times are `dt.datetime`; `timedelta` variables are `dt.timedelta`.\n",
     "\n",
     "\n",
     "#### Job scheduling status\n",
     "The script submits jobs into the SLURM queue with dependencies so that SLURM starts the jobs itself as soon as resources are available. Most jobs need only a few cores, but model integration is done across many nodes:\n",
-    "```\n",
+    "```bash\n",
     "$ squeue -u `whoami` --sort=i\n",
     "             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)\n",
     "           1710274  mem_0384 prepwrfr  lkugler PD       0:00      1 (Priority)\n",