diff --git a/docs/source/custom_scripts.rst b/docs/source/custom_scripts.rst
new file mode 100644
index 0000000000000000000000000000000000000000..1b25e60e8ec90b6154e541bd6e5005ee5e5acd52
--- /dev/null
+++ b/docs/source/custom_scripts.rst
@@ -0,0 +1,32 @@
+Adding and modifying scripts
+============================
+
+Workflow methods are defined in `dartwrf/workflows.py`.
+For example, :meth:`dartwrf.workflows.assimilate` is a workflow method. It can be run like this:
+
+.. code-block:: python
+
+    import datetime as dt
+    from dartwrf.workflows import WorkFlows
+
+    prior_path_exp = '/users/students/lehre/advDA_s2023/data/sample_ensemble/'
+    prior_init_time = dt.datetime(2008, 7, 30, 12)
+    prior_valid_time = dt.datetime(2008, 7, 30, 13)
+    assim_time = prior_valid_time
+
+    w = WorkFlows(exp_config='exp_template.py', server_config='srvx1.py')
+
+    id = w.assimilate(assim_time, prior_init_time, prior_valid_time, prior_path_exp)
+
+Calling :meth:`dartwrf.workflows.assimilate` triggers the execution of the script `dartwrf/assim_synth_obs.py`.
+
+Why is a separate script (in this case `assim_synth_obs.py`) needed at all?
+Because some users need to use SLURM, which can only submit scripts; it cannot run python code directly.
+
+Recipe to add new functionality
+*******************************
+
+Do you need a new script? If not, use an existing one.
+If you need to write a new script, you need to
+
+1. write a workflow method in `dartwrf/workflows.py`, e.g. copy and modify an existing one,
+2. call the script therein with :meth:`dartwrf.utils.ClusterConfig.run_job`, available via `self.cluster.run_job`; be careful which command-line arguments the script needs,
+3. write the script and parse the command-line arguments,
+4. call whatever python functions you may need.
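+
+As an illustration of steps 3 and 4, a new script could parse its command-line arguments like this. This is only a sketch: the script name `my_new_step.py`, the argument order, and the time format are hypothetical, not part of DART-WRF.
+
+.. code-block:: python
+
+    # dartwrf/my_new_step.py (hypothetical script name)
+    import sys
+    import datetime as dt
+
+    def parse_args(argv):
+        # the workflow method passes the assimilation time as a string,
+        # e.g. '2008-07-30_13:00' (format is an assumption here)
+        assim_time = dt.datetime.strptime(argv[1], '%Y-%m-%d_%H:%M')
+        prior_path_exp = argv[2]
+        return assim_time, prior_path_exp
+
+    if __name__ == '__main__':
+        assim_time, prior_path_exp = parse_args(sys.argv)
+        # ... call whatever python functions you need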
\ No newline at end of file
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 251bf42ee787509d873eb4d790c533352b0d291e..495298c2861e73d300f0906b038f7bd9b32423e8 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -34,6 +34,7 @@ Other helpful resources
    notebooks/tutorial1
    notebooks/tutorial2
    notebooks/tutorial3
+   custom_scripts
    
 .. toctree::
    :hidden:
diff --git a/docs/source/notebooks/tutorial1.ipynb b/docs/source/notebooks/tutorial1.ipynb
index 192bddad849b33102efdbd6aa8bcee30fe836b61..24de18dee6bbd0f0e8f4f4c99a0cca83fc91ab09 100644
--- a/docs/source/notebooks/tutorial1.ipynb
+++ b/docs/source/notebooks/tutorial1.ipynb
@@ -17,38 +17,36 @@
    "metadata": {},
    "source": [
     "### Configuring the experiment\n",
-    "Firstly, you need to configure the experiment in `config/cfg.py`.\n",
+    "Firstly, you need to configure the experiment.\n",
+    "Copy the existing template and modify it: `cp config/exp_template.py config/exp1.py`.\n",
     "\n",
-    "Let's go through the most important settings:\n",
+    "Customize your settings:\n",
     "\n",
     "- expname should be a unique identifier and will be used as folder name\n",
     "- model_dx is the model resolution in meters\n",
     "- n_ens is the ensemble size\n",
     "- update_vars are the WRF variables which shall be updated by the assimilation\n",
-    "- filter_kind is 1 for the EAKF (see the DART documentation for more)\n",
-    "- prior and post_inflation defines what inflation we want (see the DART docs)\n",
-    "- sec is the statistical sampling error correction from Anderson (2012)\n",
     "\n",
     "```python\n",
     "exp = utils.Experiment()\n",
     "exp.expname = \"test_newcode\"\n",
     "exp.model_dx = 2000\n",
-    "exp.n_ens = 10\n",
+    "exp.n_ens = 40\n",
     "exp.update_vars = ['U', 'V', 'W', 'THM', 'PH', 'MU', 'QVAPOR', 'QCLOUD', 'QICE', 'PSFC']\n",
-    "exp.filter_kind = 1\n",
-    "exp.prior_inflation = 0\n",
-    "exp.post_inflation = 4\n",
-    "exp.sec = True\n",
     "\n",
     "```\n",
     "In case you want to generate new observations, like for an observing system simulation experiment (OSSE), set\n",
     "```python\n",
-    "exp.use_existing_obsseq = False`.\n",
+    "exp.use_existing_obsseq = False\n",
+    "```\n",
+    "Otherwise, you can use a pre-existing observation file:\n",
+    "```python\n",
+    "exp.use_existing_obsseq = '/users/students/lehre/advDA_s2023/dartwrf_tutorial/very_cold_observation.out'\n",
     "```\n",
     "\n",
-    "`exp.nature` defines which WRF files will be used to draw observations from, e.g.: \n",
+    "`exp.nature_wrfout_pattern` defines the path from which observations can be generated (necessary if `exp.use_existing_obsseq = False`):\n",
     "```python\n",
-    "exp.nature = '/users/students/lehre/advDA_s2023/data/sample_nature/'\n",
+    "exp.nature_wrfout_pattern = '/usr/data/sim_archive/exp_v1_nature/*/1/wrfout_d01_%Y-%m-%d_%H:%M:%S'\n",
     "```\n",
     "\n",
     "`exp.input_profile` is used if you create initial conditions from a so-called wrf_profile (see the WRF guide).\n",
@@ -57,14 +55,8 @@
     "```\n",
     "\n",
     "\n",
-    "For horizontal localization half-width of 20 km and 3 km vertically, set\n",
-    "```python\n",
-    "exp.cov_loc_vert_km_horiz_km = (3, 20)\n",
-    "```\n",
-    "You can also set it to False for no vertical localization.\n",
-    "\n",
     "#### Single observation\n",
-    "Set your desired observations like this. \n",
+    "If you want to assimilate one observation, use \n",
     "```python\n",
     "t = dict(plotname='Temperature', plotunits='[K]',\n",
     "         kind='RADIOSONDE_TEMPERATURE', \n",
@@ -73,7 +65,9 @@
     "         error_generate=0.2,    # observation error used to generate observations\n",
     "         error_assimilate=0.2,  # observation error used for assimilation\n",
     "         heights=[1000,],       # for radiosondes, use range(1000, 17001, 2000)\n",
-    "         cov_loc_radius_km=50)  # horizontal localization half-width\n",
+    "         loc_horiz_km=50,       # horizontal localization half-width\n",
+    "         loc_vert_km=2.5        # vertical localization half-width\n",
+    "        )  \n",
     "\n",
     "exp.observations = [t,]  # select observations for assimilation\n",
     "```\n",
@@ -85,10 +79,11 @@
     "           kind='MSG_4_SEVIRI_BDRF', sat_channel=1, \n",
     "           n_obs=961, obs_locations='square_array_evenly_on_grid',\n",
     "           error_generate=0.03, error_assimilate=0.03,\n",
-    "           cov_loc_radius_km=20)\n",
+    "           loc_horiz_km=50)\n",
+    "exp.observations = [vis,]\n",
     "```\n",
     "\n",
-    "But caution, n_obs should only be one of the following:\n",
+    "Caution: `n_obs` should only be one of the following:\n",
     "\n",
     "- 22500 for 2km observation density/resolution \n",
     "- 5776 for 4km; \n",
@@ -96,7 +91,7 @@
     "- 256 for 20km; \n",
     "- 121 for 30km\n",
     "\n",
-    "For vertically resolved data, like radar, n_obs is the number of observations at each observation height level."
+    "For vertically resolved data, like radar, `n_obs` is the number of observations at each observation height level."
    ]
   },
   {
@@ -128,7 +123,7 @@
     "\n",
     "To set up the experiment, call\n",
     "```python\n",
-    "w = WorkFlows(exp_config='cfg.py', server_config='srvx1.py')\n",
+    "w = WorkFlows(exp_config='exp1.py', server_config='srvx1.py')\n",
     "```\n",
     "It will also create the output folders and backup the configuration files and scripts.\n",
     "\n",