diff --git a/docs/source/conf.py b/docs/source/conf.py
index 4a482c98d9ef5a764939f34283b30f48887a7bb6..399c927e52b89a0f9c3d9f84c266e232e7477b33 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -20,6 +20,7 @@ extensions = [
     'sphinx.ext.doctest',
     'sphinx.ext.autodoc',
     'sphinx.ext.autosummary',
+    'sphinx.ext.autosectionlabel',
     'sphinx.ext.intersphinx',
     'sphinx.ext.napoleon',
     'nbsphinx',
diff --git a/docs/source/custom_scripts.rst b/docs/source/custom_scripts.rst
index 5fe678adcfcda59e3f798fdadd715ae8caf356ad..350a8a6b996e502aba80e89eaed48be75aa87859 100644
--- a/docs/source/custom_scripts.rst
+++ b/docs/source/custom_scripts.rst
@@ -19,8 +19,6 @@ A workflow method is for example :meth:`dartwrf.workflows.WorkFlows.assimilate`,
 
 Calling :meth:`dartwrf.workflows.WorkFlows.assimilate` triggers the execution of the script `dartwrf/assimilate.py`.
 
-- Why do I need a separate script (in this case `assimilate.py`) to execute a script?
-Because some users need to use SLURM, which can only call scripts, not run python code directly.
 
 Recipe to add new functionality
 *******************************
@@ -31,4 +29,5 @@ If you need write a new script, you need to
 #. write a workflow method (`dartwrf/workflows.py`), e.g. copy and modify an existing one, 
 #. therein you call the script with :meth:`dartwrf.utils.ClusterConfig.run_job` available via `self.cluster.run_job`, be careful which command-line arguments you need
 #. write the script and parse the command-line arguments
-#. call whatever python functions you may need
\ No newline at end of file
+#. call whatever Python functions you need (a sketch of such a script is shown below)
+
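+A hypothetical sketch of such a pair (workflow method plus script) is shown below. The names `my_task` and `my_task.py` are placeholders, and the call pattern of `run_job` is an assumption; check :meth:`dartwrf.utils.ClusterConfig.run_job` for its actual signature.
+
+.. code-block:: python
+
+   # dartwrf/workflows.py -- hypothetical workflow method, added to the WorkFlows class
+   def my_task(self, time):
+       """Run dartwrf/my_task.py for a given assimilation time."""
+       cmd = 'python my_task.py ' + time.strftime('%Y-%m-%d_%H:%M')
+       # run_job arguments are an assumption; adapt them to ClusterConfig.run_job
+       self.cluster.run_job(cmd, 'my_task')
+
+.. code-block:: python
+
+   # dartwrf/my_task.py -- hypothetical script, parses its command-line argument
+   import sys
+   import datetime as dt
+
+   if __name__ == '__main__':
+       time = dt.datetime.strptime(sys.argv[1], '%Y-%m-%d_%H:%M')
+       # ... call whatever Python functions you need
+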
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 1f76a8eb7a39602178d1cf85ff64a07f5177af19..e84e10e6d017506eb9eefc944d6149308936eed1 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -1,28 +1,33 @@
-Welcome to the DART-WRF documentation!
-======================================
+DART-WRF documentation
+=======================
+
+**DART-WRF** is a Python package which allows you to
+
+* run the Weather Research and Forecasting model (`WRF <https://www2.mmm.ucar.edu/wrf/users/docs/docs_and_pubs.html>`_),
+* generate (satellite) observations from a nature run,
+* and assimilate these observations with the ensemble data assimilation suite `DART <https://docs.dart.ucar.edu/en/latest/>`_.
+
+Everything can be run on a computing cluster or on your local machine.
 
-**DART-WRF** is a python package to run an Ensemble Data Assimilation system using the data assimilation suite `DART <https://docs.dart.ucar.edu/en/latest/README.html>`_ and the weather research and forecast model `WRF <https://www2.mmm.ucar.edu/wrf/users/docs/docs_and_pubs.html>`_.
 
 Installation
-------------
+*************
 
-DART-WRF is available at `github.com/lkugler/DART-WRF <https://github.com/lkugler/DART-WRF>`_ using the command line. To use it, you don't need to install it, but only its requirements:
+DART-WRF can be downloaded from `github.com/lkugler/DART-WRF <https://github.com/lkugler/DART-WRF>`_. You don't need to install the package itself, only its requirements:
 
 .. code-block::
    
    git clone https://github.com/lkugler/DART-WRF.git
    pip install xarray netCDF4 docopt pysolar==0.10.0
 
-Note that `pysolar` is only necessary if you will be using satellite observations.
-
+Note that `pysolar` is necessary to generate synthetic satellite observations.
 
 
-Other helpful resources
------------------------
-
-**DART documentation** `[here] <https://docs.dart.ucar.edu/en/latest/README.html>`_
-**WRF user guide** `[here] <http://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/v4.2/WRFUsersGuide_v42.pdf>`_
+First steps
+************
 
+To get started, go through the tutorials in the :ref:`tutorials` section.
+:ref:`tutorial1` shows you how to configure DART-WRF, generate observations and assimilate them.
+:ref:`tutorial2` shows you how to run a WRF forecast with the output from data assimilation.
+:ref:`tutorial3` shows you how to run assimilation and forecasts in a cycle.
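+
+In short, an assimilation run boils down to a few lines of Python. The sketch below follows :ref:`tutorial1`; the configuration files `exp1.py` and `srvx1.py` and the prior ensemble path are example values from that tutorial which you adapt to your experiment and machine.
+
+.. code-block:: python
+
+   import datetime as dt
+   from dartwrf.workflows import WorkFlows
+
+   # directory and times of the prior ensemble forecasts (example path from tutorial 1)
+   prior_path_exp = '/users/students/lehre/advDA_s2023/data/sample_ensemble/'
+   prior_init_time = dt.datetime(2008, 7, 30, 12)
+   prior_valid_time = dt.datetime(2008, 7, 30, 12, 30)
+   assim_time = prior_valid_time
+
+   # set up the experiment: creates output folders, backs up configuration files and scripts
+   w = WorkFlows(exp_config='exp1.py', server_config='srvx1.py')
+
+   # run the data assimilation
+   w.assimilate(assim_time, prior_init_time, prior_valid_time, prior_path_exp)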
 
 .. toctree::
    :hidden:
@@ -37,6 +42,16 @@ Other helpful resources
    notebooks/tutorial2
    notebooks/tutorial3
    custom_scripts
+
+
+Other helpful resources
+***********************
+
+* **DART documentation** `[here] <https://docs.dart.ucar.edu/en/latest/README.html>`_
+* **WRF user guide** `[here] <http://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/v4.2/WRFUsersGuide_v42.pdf>`_
+
    
 .. toctree::
    :hidden:
diff --git a/docs/source/notebooks/tutorial1.ipynb b/docs/source/notebooks/tutorial1.ipynb
deleted file mode 100644
index b0f99f4513f83bae657e70cec42a6884d4f8d044..0000000000000000000000000000000000000000
--- a/docs/source/notebooks/tutorial1.ipynb
+++ /dev/null
@@ -1,214 +0,0 @@
-{
- "cells": [
-  {
-   "cell_type": "markdown",
-   "id": "fd5c3005-f237-4495-9185-2d4d474cafd5",
-   "metadata": {},
-   "source": [
-    "# Tutorial 1: The assimilation step\n",
-    "DART-WRF is a python package which automates many things like configuration, saving configuration and output, handling computing resources, etc.\n",
-    "\n",
-    "The data for this experiment is accessible for students on the server srvx1.\n"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "93d59d4d-c514-414e-81fa-4ff390290811",
-   "metadata": {},
-   "source": [
-    "## Configuring the hardware\n",
-    "In case you use a cluster which is not supported, copy an existing cluster configuration and modify it, e.g. `config/jet.py`.\n",
-    "\n",
-    "\n",
-    "## Configuring the experiment\n",
-    "Firstly, you need to configure the experiment.\n",
-    "Copy the existing template and modify it `cp config/exp_template.py config/exp1.py`.\n",
-    "\n",
-    "Customize your settings:\n",
-    "\n",
-    "- expname should be a unique identifier and will be used as folder name\n",
-    "- model_dx is the model resolution in meters\n",
-    "- n_ens is the ensemble size\n",
-    "- update_vars are the WRF variables which shall be updated by the assimilation\n",
-    "\n",
-    "```python\n",
-    "exp = utils.Experiment()\n",
-    "exp.expname = \"test_newcode\"\n",
-    "exp.model_dx = 2000\n",
-    "exp.n_ens = 40\n",
-    "exp.update_vars = ['U', 'V', 'W', 'THM', 'PH', 'MU', 'QVAPOR', 'QCLOUD', 'QICE', 'PSFC']\n",
-    "\n",
-    "```\n",
-    "\n",
-    "### Generating observations\n",
-    "In case you want to generate new observations, like for an observing system simulations experiment (OSSE), set \n",
-    "```python\n",
-    "exp.use_existing_obsseq = False\n",
-    "```\n",
-    "in this case, you need to set the path to WRF nature run files from where DART can generate observations:\n",
-    "```python\n",
-    "exp.nature_wrfout_pattern = '/usr/data/sim_archive/exp_v1_nature/*/1/wrfout_d01_%Y-%m-%d_%H:%M:%S'\n",
-    "```\n",
-    "\n",
-    "### Using pre-existing observation files\n",
-    "\n",
-    "You can use pre-existing observation files with\n",
-    "```python\n",
-    "exp.use_existing_obsseq = '/usr/data/sim_archive/exp_ABC/obs_seq_out/%Y-%m-%d_%H:%M_obs_seq.out'\n",
-    "```\n",
-    "where times are filled, depending on the assimilation time.\n",
-    "\n",
-    "### Customizing the DART namelist\n",
-    "By default, the DART namelist of the build directory will be used (copied). \n",
-    "If you want to modify any parameters, specify your changes in a python dictionary like below. For a description of the parameters, see [the official DART documentation](https://docs.dart.ucar.edu/).\n",
-    "```python\n",
-    "exp.dart_nml = {'&assim_tools_nml':\n",
-    "                    dict(filter_kind='1',\n",
-    "                         sampling_error_correction='.true.',\n",
-    "                        ),\n",
-    "                '&filter_nml':\n",
-    "                    dict(ens_size=exp.n_ens,\n",
-    "                         num_output_state_members=exp.n_ens,\n",
-    "                         num_output_obs_members=exp.n_ens,\n",
-    "                         inf_flavor=['0', '4'],\n",
-    "                         output_members='.true.',\n",
-    "                         output_mean='.true.',\n",
-    "                         output_sd='.true.',\n",
-    "                         stages_to_write='output',\n",
-    "                        ),\n",
-    "                '&quality_control_nml':\n",
-    "                    dict(outlier_threshold='-1',\n",
-    "                        ),\n",
-    "                '&location_nml':\n",
-    "                    dict(horiz_dist_only='.false.',\n",
-    "                '&model_nml':\n",
-    "                    dict(wrf_state_variables =\n",
-    "                        [['U',     'QTY_U_WIND_COMPONENT',     'TYPE_U',    'UPDATE','999',],\n",
-    "                         ['V',     'QTY_V_WIND_COMPONENT',     'TYPE_V',    'UPDATE','999',],\n",
-    "                         ['W',     'QTY_VERTICAL_VELOCITY',    'TYPE_W',    'UPDATE','999',],\n",
-    "                         ['PH',    'QTY_GEOPOTENTIAL_HEIGHT',  'TYPE_GZ',   'UPDATE','999',],\n",
-    "                         ['THM',   'QTY_POTENTIAL_TEMPERATURE','TYPE_T',    'UPDATE','999',],\n",
-    "                         ['MU',    'QTY_PRESSURE',             'TYPE_MU',   'UPDATE','999',],\n",
-    "                         ['QVAPOR','QTY_VAPOR_MIXING_RATIO',   'TYPE_QV',   'UPDATE','999',],\n",
-    "                         ['QICE',  'QTY_ICE_MIXING_RATIO',     'TYPE_QI',   'UPDATE','999',],\n",
-    "                         ['QCLOUD','QTY_CLOUDWATER_MIXING_RATIO','TYPE_QC', 'UPDATE','999',],\n",
-    "                         ['CLDFRA','QTY_CLOUD_FRACTION',       'TYPE_CFRAC','UPDATE','999',],\n",
-    "                         ['PSFC',  'QTY_SURFACE_PRESSURE',     'TYPE_PSFC', 'UPDATE','999',],\n",
-    "                         ['T2',    'QTY_2M_TEMPERATURE',       'TYPE_T',    'UPDATE','999',],\n",
-    "                         ['TSK',   'QTY_SKIN_TEMPERATURE',     'TYPE_T',    'UPDATE','999',],\n",
-    "                         ['REFL_10CM','QTY_RADAR_REFLECTIVITY','TYPE_REFL', 'UPDATE','999',]]),\n",
-    "                }\n",
-    "```\n",
-    "Any parameters in this dictionary will be overwritten compared to the default namelist.\n",
-    "\n",
-    "\n",
-    "\n",
-    "### Single observation experiment\n",
-    "If you want to assimilate one observation, use \n",
-    "```python\n",
-    "t = dict(plotname='Temperature', plotunits='[K]',\n",
-    "         kind='RADIOSONDE_TEMPERATURE', \n",
-    "         n_obs=1,                    # number of observations\n",
-    "         obs_locations=[(45., 0.)],  # location of observations\n",
-    "         error_generate=0.2,    # observation error used to generate observations\n",
-    "         error_assimilate=0.2,  # observation error used for assimilation\n",
-    "         heights=[1000,],       # for radiosondes, use range(1000, 17001, 2000)\n",
-    "         loc_horiz_km=50,       # horizontal localization half-width\n",
-    "         loc_vert_km=2.5        # vertical localization half-width\n",
-    "        )  \n",
-    "\n",
-    "exp.observations = [t,]  # select observations for assimilation\n",
-    "```\n",
-    "\n",
-    "### Assimilating multiple observations\n",
-    "To generate a grid of observations, use\n",
-    "```python\n",
-    "vis = dict(plotname='VIS 0.6µm', plotunits='[1]',\n",
-    "           kind='MSG_4_SEVIRI_BDRF', sat_channel=1, \n",
-    "           n_obs=961, obs_locations='square_array_evenly_on_grid',\n",
-    "           error_generate=0.03, error_assimilate=0.03,\n",
-    "           loc_horiz_km=50)\n",
-    "exp.observations = [t, vis,]\n",
-    "```\n",
-    "\n",
-    "Caution, n_obs should only be one of the following:\n",
-    "\n",
-    "- 22500 for 2km observation density/resolution \n",
-    "- 5776 for 4km; \n",
-    "- 961 for 10km; \n",
-    "- 256 for 20km; \n",
-    "- 121 for 30km\n",
-    "\n",
-    "For vertically resolved data, like radar, `n_obs` is the number of observations at each observation height level."
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "16bd3521-f98f-4c4f-8019-31029fd678ae",
-   "metadata": {},
-   "source": [
-    "\n",
-    "\n",
-    "\n",
-    "## Configuring the assimilation experiment\n",
-    "We start by importing some modules:\n",
-    "```python\n",
-    "import datetime as dt\n",
-    "from dartwrf.workflows import WorkFlows\n",
-    "```\n",
-    "\n",
-    "To assimilate observations at dt.datetime `time` we set the directory paths and times of the prior ensemble forecasts:\n",
-    "\n",
-    "```python\n",
-    "prior_path_exp = '/users/students/lehre/advDA_s2023/data/sample_ensemble/'\n",
-    "prior_init_time = dt.datetime(2008,7,30,12)\n",
-    "prior_valid_time = dt.datetime(2008,7,30,12,30)\n",
-    "assim_time = prior_valid_time\n",
-    "```\n",
-    "\n",
-    "To set up the experiment, call\n",
-    "```python\n",
-    "w = WorkFlows(exp_config='exp1.py', server_config='srvx1.py')\n",
-    "```\n",
-    "It will also create the output folders and backup the configuration files and scripts.\n",
-    "\n",
-    "\n",
-    "Finally, we run the data assimilation by calling\n",
-    "```python\n",
-    "w.assimilate(assim_time, prior_init_time, prior_valid_time, prior_path_exp)\n",
-    "```\n",
-    "\n",
-    "Congratulations! You're done!"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "82e809a8-5972-47f3-ad78-6290afe4ae17",
-   "metadata": {},
-   "outputs": [],
-   "source": []
-  }
- ],
- "metadata": {
-  "kernelspec": {
-   "display_name": "nwp 2023.1 - 3.10",
-   "language": "python",
-   "name": "nwp2023.1"
-  },
-  "language_info": {
-   "codemirror_mode": {
-    "name": "ipython",
-    "version": 3
-   },
-   "file_extension": ".py",
-   "mimetype": "text/x-python",
-   "name": "python",
-   "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3",
-   "version": "3.10.9"
-  }
- },
- "nbformat": 4,
- "nbformat_minor": 5
-}