diff --git a/Documentation/html/Documentation/Api/api_fortran.html b/Documentation/html/Documentation/Api/api_fortran.html
index 850bec622499a72cb76cefc79af44ea21299748b..1ed5ca1d74329ff5c4048805a10703f317425798 100644
--- a/Documentation/html/Documentation/Api/api_fortran.html
+++ b/Documentation/html/Documentation/Api/api_fortran.html
@@ -183,8 +183,7 @@
             
   <div class="section" id="fortran-s-auto-generated-documentation">
 <h1>Fortran’s Auto Generated Documentation<a class="headerlink" href="#fortran-s-auto-generated-documentation" title="Permalink to this headline">¶</a></h1>
-<p>Link to other documentation!</p>
-<p>…. f:autoprogram:: preconvert</p>
+<p><a class="reference external" href="Fortran/index.html">Fortran API</a></p>
 <div class="toctree-wrapper compound">
 </div>
 </div>
diff --git a/For_developers/InstallationParameter.xls b/For_developers/InstallationParameter.xls
index 8a22817ff6c997da9670f3e58ca60e22b32fe13b..5e897bc13c277bfdd7683703dfa7ea816d3c154b 100644
Binary files a/For_developers/InstallationParameter.xls and b/For_developers/InstallationParameter.xls differ
diff --git a/For_developers/Sphinx/source/Documentation/Api/api_fortran.rst b/For_developers/Sphinx/source/Documentation/Api/api_fortran.rst
index 1bb81d5baf120630908ebec4a5bac9a6f4c624e2..f1bea12568b3b0aec245274df3fc82bab8bb6ece 100644
--- a/For_developers/Sphinx/source/Documentation/Api/api_fortran.rst
+++ b/For_developers/Sphinx/source/Documentation/Api/api_fortran.rst
@@ -1,26 +1,14 @@
-**************************************
+****************************************************
-Fortran's Auto Generated Documentation
+Auto-generated documentation for the Fortran program
-**************************************
+****************************************************
 
 .. contents::
     :local:
     
-    
-    
-Link to other documentation!
-
+   
+`Fortran API <Fortran/index.html>`_ 
 
-
-
-.... f:autoprogram:: preconvert    
-    
-    
-    
     
 .. toctree::
     :hidden:
     :maxdepth: 2
-    
-    
-
-    
diff --git a/For_developers/Sphinx/source/Documentation/Api/api_python.rst b/For_developers/Sphinx/source/Documentation/Api/api_python.rst
index 724892ec33987c1c962692b2f10896a6a6641cc1..9fbfa052c28d1f8b75fba33d2468ada7589474ac 100644
--- a/For_developers/Sphinx/source/Documentation/Api/api_python.rst
+++ b/For_developers/Sphinx/source/Documentation/Api/api_python.rst
@@ -1,5 +1,5 @@
-*************************************
+***************************************************
-Python's Auto Generated Documentation
+Auto-generated documentation for the Python scripts
-*************************************
+***************************************************
 
 .. contents::
diff --git a/For_developers/Sphinx/source/Documentation/Input/changes.rst b/For_developers/Sphinx/source/Documentation/Input/changes.rst
index 4f11b051c9cd9d394a88fc99a7e2e9dddc768a8e..0c3b257f5e674fd0c609c1c1106cfeee73f736df 100644
--- a/For_developers/Sphinx/source/Documentation/Input/changes.rst
+++ b/For_developers/Sphinx/source/Documentation/Input/changes.rst
@@ -7,8 +7,8 @@ Changes from version 7.0.4 to version 7.1
     - grid resolution not in 1/1000 degress anymore (but is still available for downward compatibility)
     - comments available with ``#``
     - only parameters which are needed to override the default values are necessary 
-    - number of type/step/time elements do not have to be 24 any more. Just select the interval you need. 
-    - the ``dtime`` parameter needs to be consistent with ``type/step/time``. For example ``dtime`` can be coarser as ``time`` intervals are available, but not finer.
+    - number of type/step/time elements does not have to be 24 anymore. Just provide what you need. 
+    - the ``dtime`` parameter needs to be consistent with ``type/step/time``, for example, ``dtime`` can be coarser than the ``time`` intervals available, but not finer.
 
  
 
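The ``dtime`` consistency rule in the change list above can be illustrated with a hypothetical ``CONTROL`` fragment (the parameter names follow the usual upper-case CONTROL keywords; the values are made up for illustration):

```text
# hypothetical CONTROL fragment: analysis fields every 3 hours
TYPE AN AN AN AN AN AN AN AN
TIME 00 03 06 09 12 15 18 21
STEP 00 00 00 00 00 00 00 00
# DTIME may match the 3-hourly TIME interval (3) or be coarser (6),
# but a finer value such as 1 would be inconsistent
DTIME 3
```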
diff --git a/For_developers/Sphinx/source/Documentation/Input/compilejob.rst b/For_developers/Sphinx/source/Documentation/Input/compilejob.rst
index 9c7c3000a35657801bc7de9b7f1cb51c4d5bb999..cfc12eaa14df253e69dec97c7f826f8ac7d2631f 100644
--- a/For_developers/Sphinx/source/Documentation/Input/compilejob.rst
+++ b/For_developers/Sphinx/source/Documentation/Input/compilejob.rst
@@ -1,14 +1,14 @@
-********************************************
+*********************************************
-The Compilation Jobscript ``compilejob.ksh``
+The compilation job script ``compilejob.ksh``
-********************************************
+*********************************************
 
-The compilejob is a Korn-shell script which will be created during the installation process for the application modes **remote** and **gateway** from a template called ``compilejob.template`` in the template directory.
+The compile job is a Korn-shell script which will be created during the installation process for the application modes **remote** and **gateway** from a template called ``compilejob.template`` in the template directory.
 
-``Flex_extract`` uses the python package `genshi <https://genshi.edgewall.org/>`_ to generate
+``Flex_extract`` uses the Python package `genshi <https://genshi.edgewall.org/>`_ to generate
 the Korn-shell script from the template files by substituting the individual parameters. 
 These individual parameters are marked by a doubled ``$`` sign in ``job.temp``. 
 
-The jobscript has a number of settings for the batch system which are fixed and differentiates between the *ecgate* and the *cca/ccb* 
+The job script has a number of settings for the batch system which are fixed, and it differentiates between the *ecgate* and the *cca/ccb* 
 server system to load the necessary modules for the environment when submitted to the batch queue.
 
 The submission is done by the ``ECaccess`` tool from within ``flex_extract`` with the command ``ecaccess-job-submit``.
@@ -18,13 +18,13 @@ The submission is done by the ``ECaccess`` tool from within ``flex_extract`` wit
 What does the compilation script do?
 ------------------------------------
 
- #. It sets necessary batch system parameters
+ #. It sets the necessary batch-system parameters
  #. It prepares the job environment at the ECMWF servers by loading the necessary library modules
- #. It sets some environment variabels for the single session
+ #. It sets some environment variables for the single session
  #. It creates the ``flex_extract`` root directory in the ``$HOME`` path of the user
- #. It untars the tar-ball into the root directory.
- #. It compiles the Fortran programs's ``Makefile``.
- #. At the end it checks if the script returned an error or not and send the log file via email to the user.
+ #. It untars the tarball into the root directory.
+ #. It compiles the Fortran program using the ``Makefile``.
+ #. At the end, it checks whether the script has returned an error, and emails the log file to the user.
 
 
 
diff --git a/For_developers/Sphinx/source/Documentation/Input/control.rst b/For_developers/Sphinx/source/Documentation/Input/control.rst
index fb770de664ca9a11a62bc68d4b912119711d4817..13e5baa2d05c50773145b0218b05ed9fcaf2c279 100644
--- a/For_developers/Sphinx/source/Documentation/Input/control.rst
+++ b/For_developers/Sphinx/source/Documentation/Input/control.rst
@@ -9,34 +9,34 @@ The CONTROL file
  
  
 This file is an input file for :literal:`flex_extract's` main script :literal:`submit.py`.
-It contains the controlling parameters :literal:`flex_extract` needs to decide on dataset specifications,
-handling of the retrieved data and general bahaviour. The naming convention is usually (but not necessary):
+It contains the controlling parameters which :literal:`flex_extract` needs to decide on data set specifications,
+handling of the data retrieved, and general behaviour. The naming convention is usually (but not necessarily):
 
    :literal:`CONTROL_<Dataset>[.optionalIndications]`
 
-The tested datasets are the operational dataset and the re-analysis datasets CERA-20C, ERA5 and ERA-Interim.
-The optional extra indications for the re-analysis datasets mark the files for *public users* 
-and *global* domain. For the operational datasets (*OD*) the file names contain also information of
-the stream, the field type for forecasts, the method for extracting the vertical coordinate and other things like time or horizontal resolution.
+There are a number of data sets for which the procedures have been tested: the operational data and the re-analysis data sets CERA-20C, ERA5, and ERA-Interim.
+The optional indications for the re-analysis data sets mark the files for *public users* 
+and *global* domain. For the operational data sets (*OD*), the file names also contain information about
+the stream, the field type for forecasts, the method for extracting the vertical wind, and other information such as temporal or horizontal resolution.
 
 
 Format of CONTROL files
 ----------------------------------
 The first string of each line is the parameter name, the following string(s) (separated by spaces) is (are) the parameter values.
-The parameters can be sorted in any order with one parameter per line. 
+The parameters can be listed in any order with one parameter per line. 
 Comments are started with a '#' - sign. Some of these parameters can be overruled by the command line
 parameters given to the :literal:`submit.py` script. 
-All parameters have default values. Only those parameters which have to be changed
-must be listed in the :literal:`CONTROL` files. 
+All parameters have default values; only those parameters which deviate from the defaults
+have to be listed in the :literal:`CONTROL` files. 
 
 
 Example CONTROL files
 --------------------------------
 
 A number of example files can be found in the directory :literal:`flex_extract_vX.X/Run/Control/`.
-They can be used as a template for adaptations and understand what's possible to 
-retrieve from ECMWF's archive.
-For each main dataset there is an example and additionally some variances in resolution, type of field or type of retrieving the vertical coordinate. 
+They can be used as templates for adaptation, and to understand what can be 
+retrieved from ECMWF's archives.
+There is an example for each main data set and, in addition, some variants with respect to resolution, type of field, or the way of retrieving the vertical wind. 
 
 
  
@@ -44,8 +44,9 @@ For each main dataset there is an example and additionally some variances in res
 CONTROL file
 ------------
 The file :literal:`CONTROL.documentation` documents the available parameters
-in grouped sections with their default values. In :doc:`control_params` you can find a more
-detailed description with additional hints, possible values and some useful information about
+in grouped sections together with their default values. 
+In :doc:`control_params`, you can find a more
+detailed description with additional hints, possible values, and further information about
 the setting of these parameters.
 
 .. literalinclude:: ../../../../../Run/Control/CONTROL.documentation 
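The format rules described above (one parameter per line, name first, value(s) after, ``#`` for comments, and only non-default parameters listed) can be sketched with a minimal, hypothetical ``CONTROL`` file; the parameter names follow the usual CONTROL conventions, but the values are purely illustrative:

```text
# minimal hypothetical CONTROL file
# first token: parameter name; remaining tokens: value(s)
START_DATE 20180101
END_DATE 20180102
CLASS EI
GRID 1.0
# domain boundaries (degrees)
LOWER 30.
UPPER 70.
LEFT -20.
RIGHT 40.
```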
diff --git a/For_developers/Sphinx/source/Documentation/Input/control_params.rst b/For_developers/Sphinx/source/Documentation/Input/control_params.rst
index d505c9871963f4773e523aa8e1125e98c789304a..60fa4dc03e5041ff380b1a5ce126af415ceee380 100644
--- a/For_developers/Sphinx/source/Documentation/Input/control_params.rst
+++ b/For_developers/Sphinx/source/Documentation/Input/control_params.rst
@@ -10,7 +10,7 @@ The CONTROL parameters
 User Section
 ************
     
-.. exceltable:: User parameter in CONTROL file 
+.. exceltable:: User parameters in CONTROL file 
    :file: ../../_files/CONTROLparameter.xls 
    :sheet: UserSection 
    :header: 1
@@ -20,7 +20,7 @@ User Section
 General Section
 ***************
 
-.. exceltable:: General parameter in CONTROL file 
+.. exceltable:: General parameters in CONTROL file 
    :file: ../../_files/CONTROLparameter.xls 
    :sheet: GeneralSection 
    :header: 1
@@ -30,7 +30,7 @@ General Section
 Time Section
 ************
    
-.. exceltable:: Time parameter in CONTROL file 
+.. exceltable:: Time parameters in CONTROL file 
    :file: ../../_files/CONTROLparameter.xls 
    :sheet: TimeSection  
    :header: 1      
@@ -41,7 +41,7 @@ Time Section
 Data Section
 ************ 
    
-.. exceltable:: Data parameter in CONTROL file 
+.. exceltable:: Data parameters in CONTROL file 
    :file: ../../_files/CONTROLparameter.xls 
    :sheet: DataSection 
    :header: 1
@@ -52,7 +52,7 @@ Data Section
 Data field Section
 ******************
     
-.. exceltable:: Data field parameter in CONTROL file 
+.. exceltable:: Data field parameters in CONTROL file 
    :file: ../../_files/CONTROLparameter.xls 
    :sheet: DatafieldsSection 
    :header: 1
@@ -63,7 +63,7 @@ Data field Section
 Flux data Section
 *****************
 
-.. exceltable:: Flux data parameter in CONTROL file 
+.. exceltable:: Flux data parameters in CONTROL file 
    :file: ../../_files/CONTROLparameter.xls
    :sheet: FluxDataSection
    :header: 1
@@ -74,7 +74,7 @@ Flux data Section
 Domain Section
 **************
    
-.. exceltable:: Domain parameter in CONTROL file 
+.. exceltable:: Domain parameters in CONTROL file 
    :file: ../../_files/CONTROLparameter.xls
    :sheet: DomainSection
    :header: 1
@@ -98,7 +98,7 @@ Vertical wind Section
 Additional data Section
 ***********************
    
-.. exceltable:: Additional data parameter in CONTROL file 
+.. exceltable:: Additional data parameters in CONTROL file 
    :file: ../../_files/CONTROLparameter.xls 
    :sheet: AddDataSection 
    :header: 1
diff --git a/For_developers/Sphinx/source/Documentation/Input/ecmwf_env.rst b/For_developers/Sphinx/source/Documentation/Input/ecmwf_env.rst
index 057136caee885f01651f2490160f433584883fba..565b5a9c7af88b675defbaed8aff931bd794631e 100644
--- a/For_developers/Sphinx/source/Documentation/Input/ecmwf_env.rst
+++ b/For_developers/Sphinx/source/Documentation/Input/ecmwf_env.rst
@@ -1,5 +1,5 @@
 ****************************************
-ECMWF User Credential file ``ECMWF_ENV``
+ECMWF user credential file ``ECMWF_ENV``
 ****************************************
 
 This file contains the user credentials for working on ECMWF servers and transferring files between the ECMWF servers and the local gateway server. It is located in the ``flex_extract_vX.X/Run`` directory and will be created in the installation process for the application modes **remote** and **gateway**.
@@ -15,7 +15,7 @@ This file is based on the template ``ECMWF_ENV.template`` which is located in th
 Content of ``ECMWF_ENV``
 ------------------------
 
-The following shows an example of the content of an ``ECMWF_ENV`` file:
+An example of the content of an ``ECMWF_ENV`` file is shown below:
   
 .. code-block:: bash
 
diff --git a/For_developers/Sphinx/source/Documentation/Input/examples.rst b/For_developers/Sphinx/source/Documentation/Input/examples.rst
index c61b1a54076329f417b4ebc258b2dbf899db2ff1..6ad28b0f34eedd11ced5e0b9e515fd95de022ae7 100644
--- a/For_developers/Sphinx/source/Documentation/Input/examples.rst
+++ b/For_developers/Sphinx/source/Documentation/Input/examples.rst
@@ -2,12 +2,12 @@
 CONTROL file examples
 **********************
 
-``Flex_extract`` has a couple of example ``CONTROL`` files for a number of different data set constellations in the directory path ``flex_extract_vX.X/Run/Control``. 
+``Flex_extract`` comes with a number of example ``CONTROL`` files for various data set constellations in the directory path ``flex_extract_vX.X/Run/Control``. 
 
-Here is a list of the example files and a description of the data set:
+Here is a list of the example files:
 
 CONTROL.documentation
-   This file is not intended to be used with ``flex_extract``. It has a list of all possible parameters and their default values for a quick overview. 
+   This file is not intended to be used with ``flex_extract``. It just contains a list of all possible parameters and their default values for a quick overview. 
    
 .. code-block:: bash
 
@@ -32,10 +32,10 @@ CONTROL.documentation
 	CONTROL_OD.OPER.FC.eta.highres
 	CONTROL_OD.OPER.FC.gauss.highres
 	CONTROL_OD.OPER.FC.operational
-	CONTROL_OD.OPER.FC.twiceaday.1hourly
-	CONTROL_OD.OPER.FC.twiceaday.3hourly
+	CONTROL_OD.OPER.FC.twicedaily.1hourly
+	CONTROL_OD.OPER.FC.twicedaily.3hourly
 
-    
+ 
 .. toctree::
     :hidden:
     :maxdepth: 2
diff --git a/For_developers/Sphinx/source/Documentation/Input/fortran_makefile.rst b/For_developers/Sphinx/source/Documentation/Input/fortran_makefile.rst
index e176135d4d176861cf1c168a68953ada23851f47..4c6b1a3af17185f7484025f8e2ced36f2a60b9d6 100644
--- a/For_developers/Sphinx/source/Documentation/Input/fortran_makefile.rst
+++ b/For_developers/Sphinx/source/Documentation/Input/fortran_makefile.rst
@@ -1,40 +1,35 @@
-**************************************
+****************************************
-The Fortran Makefile - ``calc_etadot``
+The Fortran makefile for ``calc_etadot``
-**************************************
+****************************************
 
 .. _ref-convert:
 
-``Flex_extract``'s Fortran program will be compiled during 
-the installation process to get the executable named ``calc_etadot``. 
+The Fortran program will be compiled during 
+the installation process to produce the executable called ``calc_etadot``. 
 
-``Flex_extract`` has a couple of ``makefiles`` prepared which can be found in the directory 
-``flex_extract_vX.X/Source/Fortran``, where ``vX.X`` should be substituted with the current version number.
-A list of these ``makefiles`` are shown below: 
+``Flex_extract`` includes several ``makefiles`` which can be found in the directory 
+``flex_extract_vX.X/Source/Fortran``, where ``vX.X`` should be substituted by the current ``flex_extract`` version number.
+A list of these ``makefiles`` is shown below: 
 
 
 | **Remote/Gateway mode**: 
 | Files to be used as they are!
     
-    | **makefile_ecgate**
-    | For the use on ECMWF's server **ecgate**.
-
-    | **makefile_cray**
-    | For the use on ECMWF's server **cca/ccb**. 
+    | **makefile_ecgate**: For use on ECMWF's server **ecgate**.
+    | **makefile_cray**: For use on ECMWF's server **cca/ccb**.
     
 | **Local mode**
-| It is necessary to adapt **ECCODES_INCLUDE_DIR** and **ECCODES_LIB**
+| It is necessary to adapt **ECCODES_INCLUDE_DIR** and **ECCODES_LIB** if they do not correspond to the standard paths pre-set in the makefiles.
  
-    | **makefile_fast**
-    | For the use with gfortran compiler and optimization mode.
-
-    | **makefile_debug**
-    | For the use with gfortran compiler in debugging mode.
+    | **makefile_fast**: For use with the gfortran compiler in optimisation mode.
+    | **makefile_debug**: For use with the gfortran compiler in debugging mode; primarily for developers.
 
+If you want to use a compiler other than gfortran locally, you can still start from ``makefile_fast``
+and adapt everything that is compiler-specific in this file.
 
-For instructions on how to adapt the ``makefiles`` for the local application mode
+For instructions on how to adapt the ``makefile`` (local application mode only),
 please see :ref:`ref-install-local`.
 
-
    
 .. toctree::
     :hidden:
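The local-mode adaptation mentioned above concerns exactly the two variables named in the text. A sketch of what these lines may look like in ``makefile_fast`` is shown below; the paths and the exact linker flags are assumptions and must be adjusted to the local eccodes installation:

```make
# Illustrative fragment only -- adapt to your system.
# ECCODES_INCLUDE_DIR: directory containing the eccodes Fortran module files
# ECCODES_LIB: linker flags for the eccodes libraries
ECCODES_INCLUDE_DIR = /usr/local/include
ECCODES_LIB = -L/usr/local/lib -leccodes_f90 -leccodes -lm
```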
diff --git a/For_developers/Sphinx/source/Documentation/Input/jobscript.rst b/For_developers/Sphinx/source/Documentation/Input/jobscript.rst
index 466c91f34ae0c36d5afbca74fdb276d37a6c8c0d..7510327253a52cd7c3fcc8ba70c1edb8ab9981dd 100644
--- a/For_developers/Sphinx/source/Documentation/Input/jobscript.rst
+++ b/For_developers/Sphinx/source/Documentation/Input/jobscript.rst
@@ -1,33 +1,33 @@
 *************************
-The Jobscript ``job.ksh``
+The job script ``job.ksh``
 *************************
 
-The jobscript is a Korn-shell script which will be created at runtime for each ``flex_extract`` execution in the application modes **remote** and **gateway**.
+The job script is a Korn-shell script which will be created at runtime for each ``flex_extract`` execution in the application modes **remote** and **gateway**.
 
-It is based on the ``job.temp`` template file which is stored in the ``Templates`` directory.
-This template is by itself generated in the installation process from a ``job.template`` template file.
+It is based on the ``job.temp`` template file stored in the ``Templates`` directory.
+This template is generated in the installation process from a ``job.template`` template file.
 
-``Flex_extract`` uses the python package `genshi <https://genshi.edgewall.org/>`_ to generate
+``Flex_extract`` uses the Python package `genshi <https://genshi.edgewall.org/>`_ to generate
 the Korn-shell script from the template files by substituting the individual parameters. 
-These individual parameters are marked by a doubled ``$`` sign in ``job.temp``. 
+These individual parameters are marked by ``$$`` in ``job.temp``. 
 
-The jobscript has a number of settings for the batch system which are fixed and differentiates between the *ecgate* and the *cca/ccb* 
+The job script has a number of settings for the batch system which are fixed, and differentiates between the *ecgate* and the *cca/ccb* 
 server system to load the necessary modules for the environment when submitted to the batch queue.
 
 The submission is done by the ``ECaccess`` tool from within ``flex_extract`` with the command ``ecaccess-job-submit``.
 
 
 
-What does the jobscript do?
+What does the job script do?
 ---------------------------
 
- #. It sets necessary batch system parameters
- #. It prepares the job environment at the ECMWF servers by loading the necessary library modules
- #. It sets some environment variabels for the single session
- #. It creates the directory structure in the users ``$SCRATCH`` file system
- #. It creates a CONTROL file on the ECMWF servers whith the parameters set before creating the ``jobscript.ksh``. ``Flex_extract`` has a set of parameters which are given to the jobscript with its default or the user defined values. It also sets the ``CONTROL`` as an environment variable.
- #. ``Flex_extract`` is started from within the ``work`` directory of the new directory structure by calling the ``submit.py`` script. It sets new pathes for input and output directory and the recently generated ``CONTROL`` file.
- #. At the end it checks if the script returned an error or not and send the log file via email to the user.
+ #. It sets necessary batch system parameters.
+ #. It prepares the job environment at the ECMWF servers by loading the necessary library modules.
+ #. It sets some environment variables for the single session.
+ #. It creates the directory structure in the user's ``$SCRATCH`` file system.
+ #. It creates a CONTROL file on the ECMWF servers with the parameters set before creating the ``jobscript.ksh``. ``Flex_extract`` has a set of parameters which are passed to the job script with their default or the user-defined values. It also sets ``CONTROL`` as an environment variable.
+ #. ``Flex_extract`` is started from within the ``work`` directory of the new directory structure by calling the ``submit.py`` script. It sets new paths for input and output directories and the recently generated ``CONTROL`` file.
+ #. At the end, it checks whether the script has returned an error, and emails the log file to the user.
 
 
 
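``flex_extract`` performs the substitution with genshi, but the ``$$`` escaping convention described above can be sketched with Python's standard-library ``string.Template``, which follows the same rule: a doubled ``$`` survives as a literal ``$``, while single-``$`` placeholders are substituted (the template string below is a made-up miniature, not the real ``job.temp``):

```python
from string import Template

# $$SCRATCH must remain a literal shell variable after substitution,
# while $control_file is a placeholder to be filled in.
template = Template("cd $$SCRATCH/work && export CONTROL=$control_file")

line = template.substitute(control_file="CONTROL_EA5")
print(line)
# cd $SCRATCH/work && export CONTROL=CONTROL_EA5
```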
diff --git a/For_developers/Sphinx/source/Documentation/Input/run.rst b/For_developers/Sphinx/source/Documentation/Input/run.rst
index 658b89275a0eea08a37f2d53b15326a690ef2ce0..5baa5afa0782c38bc2809017f3944e8e5be4aac9 100644
--- a/For_developers/Sphinx/source/Documentation/Input/run.rst
+++ b/For_developers/Sphinx/source/Documentation/Input/run.rst
@@ -1,23 +1,23 @@
 **********************************
-The executable Script - ``run.sh``
+The executable script - ``run.sh``
 **********************************
 
-The execution of ``flex_extract`` is done by the ``run.sh`` Shell script, which is a wrapping script for the top-level Python script ``submit.py``. 
+The execution of ``flex_extract`` is done by the ``run.sh`` shell script, which is a wrapper script for the top-level Python script ``submit.py``. 
 The Python script constitutes the entry point to ECMWF data retrievals with ``flex_extract`` and controls the program flow. 
 
-``submit.py`` has two (three) sources for input parameters with information about program flow and ECMWF data selection, the so-called ``CONTROL`` file,  
-the command line parameters and the so-called ``ECMWF_ENV`` file. Whereby, the command line parameters will override the ``CONTROL`` file parameters. 
+``submit.py`` has two (or three) sources for input parameters with information about program flow and ECMWF data selection: the so-called ``CONTROL`` file,  
+the command line parameters, and the ``ECMWF_ENV`` file. Command line parameters will override parameters specified in the ``CONTROL`` file. 
 
-Based on these input information ``flex_extract`` applies one of the application modes to either retrieve the ECMWF data via a Web API on a local maschine or submit a jobscript to ECMWF servers and retrieve the data there with sending the files to the local system eventually.
+Based on this input information, ``flex_extract`` applies one of the application modes to either retrieve the ECMWF data via a web API on a local machine, or submit a job script to an ECMWF server, retrieve the data there, and finally send the files to the local system.
 
 
 
 
-Submission Parameter
+Submission parameters
---------------------
+---------------------
 
 
-.. exceltable:: Parameter for Submission
+.. exceltable:: Parameters for submission
     :file:  ../../_files/SubmitParameters.xls
     :header: 1  
     :sheet: 0
@@ -38,9 +38,9 @@ Content of ``run.sh``
 Usage of ``submit.py`` (optional)
 ---------------------------------
 
-It is also possible to start ``flex_extract`` directly from command line by using the ``submit.py`` script instead of the wrapping Shell script ``run.sh``.  This top-level script is located in 
-``flex_extract_vX.X/Source/Python`` and is executable. With the ``help`` parameter we see again all possible 
-command line parameter. 
+It is also possible to start ``flex_extract`` directly from the command line by using the ``submit.py`` script instead of the wrapper shell script ``run.sh``. This top-level script is located in 
+``flex_extract_vX.X/Source/Python`` and is executable. The ``--help`` parameter 
+shows all available command line parameters. 
 
 .. code-block:: bash
 
diff --git a/For_developers/Sphinx/source/Documentation/Input/setup.rst b/For_developers/Sphinx/source/Documentation/Input/setup.rst
index 05fce15064106a3afe4b0821d66974bb74cb6867..fd47f4ba15c1b76870751929cb7082c747c53644 100644
--- a/For_developers/Sphinx/source/Documentation/Input/setup.rst
+++ b/For_developers/Sphinx/source/Documentation/Input/setup.rst
@@ -1,25 +1,24 @@
 **************************************
-The Installation Script - ``setup.sh``
+The installation script - ``setup.sh``
 **************************************
 
+The installation of ``flex_extract`` is done by the shell script ``setup.sh`` located in the root directory of ``flex_extract``.
+It calls the top-level Python script ``install.py`` which carries out all the necessary operations to prepare the selected application environment. This includes:
 
-The installation of ``flex_extract`` is done by the Shell script ``setup.sh`` which is located in the root directory of ``flex_extract``.
-It calls the top-level Python script ``install.py`` which does all necessary operations to prepare the selected application environment. This includes:
-
-- preparing the file ``ECMWF_ENV`` with the user credentials for member state access to ECMWF servers (in **remote** and **gateway** mode)
+- preparing the file ``ECMWF_ENV`` with the user credentials for member-state access to ECMWF servers (in **remote** and **gateway** mode)
 - preparation of a compilation Korn-shell script (in **remote** and **gateway** mode)
 - preparation of a job template with user credentials (in **remote** and **gateway** mode)
-- create a tar-ball of all necessary files
-- copying tar-ball to target location (depending on application mode and installation path)
-- submit compilation script to batch queue at ECMWF servers (in **remote** and **gateway** mode) or just untar tar-ball at target location (**local mode**)
-- compilation of the FORTRAN90 program ``calc_etadot``
+- creating a tarball of all necessary files
+- copying the tarball to the target location (depending on application mode and installation path)
+- submitting the compilation script to the batch queue at the ECMWF servers (in **remote** and **gateway** mode), or just untarring the tarball at the target location (**local mode**)
+- compiling the Fortran program ``calc_etadot``
 
 
-The Python installation script ``install.py`` has a couple of command line arguments which are defined in ``setup.sh`` in the section labelled with "*AVAILABLE COMMANDLINE ARGUMENTS TO SET*". The user has to adapt these parameters for his personal use. The parameters are listed and described in :ref:`ref-instparams`. The script also does some checks to guarantee necessary parameters were set.
+The Python installation script ``install.py`` has several command line arguments defined in ``setup.sh``, in the section labelled "*AVAILABLE COMMANDLINE ARGUMENTS TO SET*". Users have to adapt these parameters to their personal needs. The parameters are listed and described in :ref:`ref-instparams`. The script also does some checks to guarantee that the necessary parameters were set.
    
 After the installation process, some tests can be conducted. They are described in section :ref:`ref-testinstallfe`.
 
-The following diagram sketches the involved files and scripts in the installation process:
+The following diagram sketches the files and scripts involved in the installation process:
 
 .. _ref-install-blockdiag:
 
@@ -114,7 +113,7 @@ The following diagram sketches the involved files and scripts in the installatio
 
 
 .. blockdiag::
-   :caption: Diagram of data flow during the installation process. The trapezoids are input files with the light blue area being the template files. The edge-rounded, orange boxes are the executable files which start the installation process and reads the input files. The rectangular, green boxes are the output files. The light green files are files which are only needed in the remota and gateway mode.
+   :caption: Diagram of data flow during the installation process. Trapezoids are input files, with the light blue area being the template files. Round-edged orange boxes are executable files which start the installation process and read the input files. Rectangular green boxes are output files. Light green files are needed only in the remote and gateway modes.
 
    blockdiag {
    
@@ -132,8 +131,8 @@ The following diagram sketches the involved files and scripts in the installatio
 
 .. _ref-instparams:
 
-Installation Parameter
-----------------------
+Installation parameters
+-----------------------
    
-.. exceltable:: Parameter for Installation
+.. exceltable:: Parameters for installation
     :file:  ../../_files/InstallationParameter.xls
@@ -154,9 +153,9 @@ Content of ``setup.sh``
 Usage of ``install.py`` (optional)
 ----------------------------------
 
-It is also possible to start the installation process of ``flex_extract`` directly from command line by using the ``install.py`` script instead of the wrapping Shell script ``setup.sh``.  This top-level script is located in 
-``flex_extract_vX.X/Source/Python`` and is executable. With the ``help`` parameter we see again all possible 
-command line parameter. 
+It is also possible to start the installation process of ``flex_extract`` directly from the command line by using the ``install.py`` script instead of the wrapper shell script ``setup.sh``. This top-level script is located in 
+``flex_extract_vX.X/Source/Python`` and is executable. The ``--help`` parameter 
+shows all available command line parameters. 
 
 .. code-block:: bash
  
diff --git a/For_developers/Sphinx/source/Documentation/Input/templates.rst b/For_developers/Sphinx/source/Documentation/Input/templates.rst
index 109175a39fa211a778cc407bd264f6a0ea593660..786679efaf40d9bb6eaced866ba44a1cb59800e3 100644
--- a/For_developers/Sphinx/source/Documentation/Input/templates.rst
+++ b/For_developers/Sphinx/source/Documentation/Input/templates.rst
@@ -2,19 +2,19 @@
 Templates
 *********
 
-In ``flex_extract`` we use the Python package `genshi <https://genshi.edgewall.org/>`_ to create specific files from templates. It is the most efficient way to be able to quickly adapt e.g. the job scripts send to the ECMWF batch queue system or the namelist file für the Fortran program without the need to change the program code. 
+In ``flex_extract``, the Python package `genshi <https://genshi.edgewall.org/>`_ is used to create specific files from templates. It is the most efficient way to quickly adapt, e.g., the job scripts sent to the ECMWF batch queue system, or the namelist file for the Fortran program, without the need to change the program code. 
 
 .. note::
-   Usually it is not recommended to change anything in these files without being able to understand the effects.
+   Do not change anything in these files unless you understand the effects!
    
-Each template file has its content framework and keeps so-called placeholder variables in the positions where the values needs to be substituted at run time. These placeholders are marked by a leading ``$`` sign. In case of the Kornshell job scripts, where (environment) variables are used the ``$`` sign needs to be doubled to `escape` and keep a single ``$`` sign as it is.
+Each template file has its content framework and keeps so-called placeholder variables in the positions where the values need to be substituted at run time. These placeholders are marked by a leading ``$`` sign. In case of the Kornshell job scripts, where (environment) variables are used, the ``$`` sign needs to be doubled for `escaping`.
    
-The following templates are used and can be found in directory ``flex_extract_vX.X/Templates``:
+The following templates are used; they can be found in the directory ``flex_extract_vX.X/Templates``:
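The ``$``-placeholder mechanics described above can be illustrated with Python's stdlib ``string.Template`` class (used here only as a stand-in for illustration; ``flex_extract`` itself uses genshi): ``$name`` is substituted at run time, while a doubled ``$$`` collapses to a literal ``$``, as needed in the Kornshell job scripts.

```python
from string import Template

# Genshi-style "$placeholder" substitution, sketched with the stdlib
# Template class. "$$HOME" escapes to a literal "$HOME" so that the
# shell variable survives into the generated job script.
template = Template("export USER=$username\necho home is $$HOME\n")
result = template.substitute(username="ecuser")
print(result)
```

The substituted text contains ``ecuser`` in place of ``$username``, and ``$$HOME`` has become the literal shell variable ``$HOME``.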
 
 convert.nl
 ----------
 
-    This is the template for a Fortran namelist file called ``fort.4`` which will be read by ``calc_etadot``.
+    This is the template for a Fortran namelist file called ``fort.4`` read by ``calc_etadot``.
     It contains all the parameters ``calc_etadot`` needs. 
     
     .. code-block:: fortran
@@ -56,13 +56,13 @@ compilejob.template
 
     This template is used to create the job script file called ``compilejob.ksh`` during the installation process for the application modes **remote** and **gateway**. 
 
-    At the beginning some directives for the batch system are set. 
-    On the **ecgate** server the ``SBATCH`` comments are the directives for the SLURM workload manager. A description of the single lines can be found at `SLURM directives <https://confluence.ecmwf.int/display/UDOC/Writing+SLURM+jobs>`_.
-    For the high performance computers **cca** and **ccb** the ``PBS`` comments are necessary and can be view at `PBS directives <https://confluence.ecmwf.int/display/UDOC/Batch+environment%3A++PBS>`_.
+    At the beginning, some directives for the batch system are set. 
+    On the **ecgate** server, the ``SBATCH`` comments are the directives for the SLURM workload manager. A description of the single lines can be found at `SLURM directives <https://confluence.ecmwf.int/display/UDOC/Writing+SLURM+jobs>`_.
+    For the high-performance computers **cca** and **ccb**, the ``PBS`` comments are necessary; for details see `PBS directives <https://confluence.ecmwf.int/display/UDOC/Batch+environment%3A++PBS>`_.
 
-    The software environment requirements mentioned in :ref:`ref-requirements` are prepared by loading the corresponding modules depending in the ``HOST``. It should not be changed without testing.
+    The software environment requirements mentioned in :ref:`ref-requirements` are prepared by loading the corresponding modules depending on the ``HOST``. It should not be changed without testing.
     
-    Afterwards the installation steps as such are done. Including the generation of the root directory, putting files in place, compiling the Fortran program and sending a log file via email.
+    Afterwards, the installation steps as such are done. They include the generation of the root directory, putting files in place, compiling the Fortran program, and sending a log file by email.
 
     .. code-block:: ksh
     
@@ -144,13 +144,14 @@ job.temp
 
     This template is used to create the actual job script file called ``job.ksh`` for the execution of ``flex_extract`` in the application modes **remote** and **gateway**. 
 
-    At the beginning some directives for the batch system are set. 
-    On the **ecgate** server the ``SBATCH`` comments are the directives for the SLURM workload manager. A description of the single lines can be found at `SLURM directives <https://confluence.ecmwf.int/display/UDOC/Writing+SLURM+jobs>`_.
-    For the high performance computers **cca** and **ccb** the ``PBS`` comments are necessary and can be view at `PBS directives <https://confluence.ecmwf.int/display/UDOC/Batch+environment%3A++PBS>`_.
+    At the beginning, some directives for the batch system are set. 
+    On the **ecgate** server, the ``SBATCH`` comments are the directives for the SLURM workload manager. A description of the single lines can be found at `SLURM directives <https://confluence.ecmwf.int/display/UDOC/Writing+SLURM+jobs>`_.
+    For the high-performance computers **cca** and **ccb**, the ``PBS`` comments are necessary; 
+    for details see `PBS directives <https://confluence.ecmwf.int/display/UDOC/Batch+environment%3A++PBS>`_.
 
-    The software environment requirements mentioned in :ref:`ref-requirements` are prepared by loading the corresponding modules depending in the ``HOST``. It should not be changed without testing.
+    The software environment requirements mentioned in :ref:`ref-requirements` are prepared by loading the corresponding modules depending on the ``HOST``. It should not be changed without testing.
     
-    Afterwards the run directory and the ``CONTROL`` file are created and ``flex_extract`` is executed. In the end a log file is send via email.
+    Afterwards, the run directory and the ``CONTROL`` file are created and ``flex_extract`` is executed. In the end, a log file is sent by email.
     
     .. code-block:: ksh
     
@@ -238,7 +239,7 @@ job.temp
 job.template
 ------------
 
-    This template is used to create the template for the execution job script ``job.temp`` for ``flex_extract`` in the installation process. A description of the file can be found under ``job.temp``. A couple of parameters are set in this process, such as the user credentials and the ``flex_extract`` version number.
+    This template is used to create the template for the execution job script ``job.temp`` for ``flex_extract`` in the installation process. A description of the file can be found under ``job.temp``. Several parameters are set in this process, such as the user credentials and the ``flex_extract`` version number.
         
     .. code-block:: ksh
     
@@ -324,21 +325,6 @@ job.template
 
 
 
-
-
-
-
-  
-
-
-
-
-
-   
-   
-
- 
-   
    
 
 .. toctree::
diff --git a/For_developers/Sphinx/source/Documentation/Overview/app_modes.rst b/For_developers/Sphinx/source/Documentation/Overview/app_modes.rst
index 27fd4c1ebaa21a36dd2faee8889e8187c85740ee..5d8d44bb899dbdada34c5c0f98d7ff785a5a934e 100644
--- a/For_developers/Sphinx/source/Documentation/Overview/app_modes.rst
+++ b/For_developers/Sphinx/source/Documentation/Overview/app_modes.rst
@@ -1,5 +1,5 @@
 *****************
-Application Modes
+Application modes
 *****************
 
 .. role:: underline
@@ -12,25 +12,25 @@ Application Modes
     
 .. _ref-app-modes:
 
-Arising from the two user groups described in :doc:`../../Ecmwf/access`, ``flex_extract`` has 4 different :underline:`user application modes`:
+Arising from the two user groups described in :doc:`../../Ecmwf/access`, ``flex_extract`` has four different :underline:`user application modes`:
 
 .. _ref-remote-desc:
 
   1. Remote (member)
-      In the **Remote mode** the user works directly on ECMWF Linux member state server, such as ``ecgate`` or ``cca/ccb``. The software will be installed in the ``$HOME`` directory. The user does not need to install any of the additional third-party libraries mentioned in :ref:`ref-requirements` as ECMWF provides everything with environment modules. The module selection will be done automatically in ``flex_extract``. 
+      In the **Remote mode**, the user works directly on an ECMWF member-state Linux server, such as ``ecgate`` or ``cca/ccb``. The software will be installed in the ``$HOME`` directory. The user does not need to install any of the third-party libraries mentioned in :ref:`ref-requirements`, as ECMWF provides everything with environment modules. The module selection will be done automatically by ``flex_extract``. 
       
 .. _ref-gateway-desc:
       
   2. Gateway (member)
-      The **Gateway mode** can be used if a local member state gateway server is in place. Then the job scripts can be submitted to the ECMWF Linux member state server via the ECMWF web access tool ``ecaccess``. The installation script of ``flex_extract`` must be executed at the local gateway server such that the software will be installed in the ``$HOME`` directory at the ECMWF server and some extra setup is done in the local ``flex_extract`` directory at the local gateway server. For more information about establishing a gateway server please see `ECMWF's instructions on gateway server`_. For the **Gateway mode** the necessary environment has to be established which is described in :ref:`ref-prep-gateway`.
+      The **Gateway mode** can be used if a local member-state gateway server is in place. Then, the job scripts can be submitted to the ECMWF member-state Linux server via the ECMWF web access tool ``ecaccess``. The installation script of ``flex_extract`` must be executed on the local gateway server such that the software will be installed in the ``$HOME`` directory at the ECMWF server and that some extra setup is done in the ``flex_extract`` directory on the local gateway server. For more information about establishing a gateway server, please refer to `ECMWF's instructions on gateway server`_. For the **Gateway mode**, the necessary environment has to be established as described in :ref:`ref-prep-gateway`.
 
 .. _ref-local-desc:
       
   3. Local member
-      Scripts are installed and executed on a local machine, either in the current ``flex_extract`` directory or in a path given to the installation script. Under this scenario a software environment similar to that at ECMWF is required. Additionally, Web API's have to be installed to access ECMWF server. The complete installation process is described in :ref:`ref-local-mode`.
+      Scripts are installed and executed on a local machine, either in the current ``flex_extract`` directory or in a path given to the installation script. Under this scenario, a software environment similar to that at ECMWF is required. Additionally, web APIs have to be installed to access the ECMWF servers. The complete installation process is described in :ref:`ref-local-mode`.
       
   4. Local public
-      Scripts are installed and executed on a local machine, either in the current ``flex_extract`` directory or in a path given to the installation script. Under this scenario a software environment similar to that at ECMWF is required. Additionally, Web API's have to be installed to access ECMWF server. The complete installation process is described in :ref:`ref-local-mode`. In this case a direct registration at ECMWF is necessary and the user has to accept a specific license agreement for each dataset he/she intends to retrieve. 
+      Scripts are installed and executed on a local machine, either in the current ``flex_extract`` directory or in a path given to the installation script. Under this scenario, a software environment similar to that at ECMWF is required. Additionally, web APIs have to be installed to access the ECMWF servers. The complete installation process is described in :ref:`ref-local-mode`. In this case, a direct registration at ECMWF is necessary, and the user has to accept a specific license agreement for each dataset he/she intends to retrieve. 
       
       
 An overview is sketched in figure :ref:`ref-fig-marsaccess`.
diff --git a/For_developers/Sphinx/source/Documentation/Overview/prog_flow.rst b/For_developers/Sphinx/source/Documentation/Overview/prog_flow.rst
index 074d414c625e265e2dadb6d25fc79d0a802beeb9..eeced25f4c7c61afea8542dfdfa4fa240bf89270 100644
--- a/For_developers/Sphinx/source/Documentation/Overview/prog_flow.rst
+++ b/For_developers/Sphinx/source/Documentation/Overview/prog_flow.rst
@@ -1,5 +1,5 @@
 ************
-Program Flow
+Program flow
 ************
 
 
@@ -15,24 +15,24 @@ The following flow diagram shows the general steps performed by ``flex_extract``
 
 .. figure:: ../../_files/submit.png    
     
-    Overview of the call of python's ``submit.py`` script and raw sequence of working steps done in ``flex_extract``.
+    Overview of the call of the ``submit.py`` Python script and the raw sequence of work steps in ``flex_extract``.
 
     
-The ``submit.py`` Python program is called by the Shell script ``run.sh`` or ``run_local.sh`` and accomplish the following steps: 
+The ``submit.py`` Python script is called by the shell script ``run.sh`` or ``run_local.sh`` and accomplishes the following steps: 
 
-    1. Setup the control data:
-        It gets all command-line and ``CONTROL`` file parameters as well as optionally the ECMWF user credentials. Depending the :doc:`app_modes`, it might also prepare a job script which is then send to the ECMWF queue. 
-    2. Retrieves data from MARS:
-        It creates and sends MARS-requests either on the local machine or on ECMWF server, that receives the data and stores them in a specific format in GRIB files. If the parameter ``REQUEST`` was set ``1`` the data are not received but a file ``mars_requests.csv`` is created with a list of MARS requests and their settings. If it is set to ``2`` the file is created in addition to retrieving the data. The requests are created in an optimised way by splitting in time, jobs  and parameters.   
-    3. Post-process data to create final ``FLEXPART`` input files:
-        After all data is retrieved, the disaggregation of flux fields (`see here <../disagg.html>`_ ) is done as well as the calculation of vertical velocity (`see here <../vertco.html>`_) by the Fortran program ``calc_etadot``. Eventually, the GRIB fields are merged together such that a single grib file per time step is available with all fields for ``FLEXPART``. Since model level fields are typically in *GRIB2* format whereas surface level fields are still in *GRIB1* format, they can be converted into GRIB2 if parameter ``FORMAT`` is set to *GRIB2*. Please note, however, that older versions of FLEXPART may have difficulties reading pure *GRIB2* files since some parameter IDs change in *GRIB2*. If the retrieval is executed remotely at ECMWF, the resulting files can be communicated to the local gateway server via the ``ECtrans`` utility if the parameter ``ECTRANS`` is set to ``1`` and the parameters ``GATEWAY``, ``DESTINATION`` have been set properly during installation. The status of the transfer can be checked with the command ``ecaccess-ectrans-list`` (on the local gateway server). If the script is executed locally the progress of the script can be followed with the usual Linux tools.
+    1. Setup of control data:
+        Command-line and ``CONTROL``-file parameters are read, as well as (optionally) the ECMWF user credentials. Depending on the :doc:`app_modes`, a job script might be prepared, which is then sent to the ECMWF queue. 
+    2. Retrieval of data from MARS:
+        MARS requests are created either on the local machine or on the ECMWF server and then submitted; these retrieve the data and store them in GRIB files. If the parameter ``REQUEST`` was set to ``1``, the data are not retrieved; instead, a file ``mars_requests.csv`` is created which contains a list of the MARS requests and their settings. If ``REQUEST`` is set to ``2``, the csv file is created in addition to retrieving the data. The requests are created in an optimised way by splitting with respect to time, jobs, and parameters.   
+    3. Post-processing of data to create final ``FLEXPART`` input files:
+        After all data have been retrieved, flux fields are disaggregated (`see here <../disagg.html>`_ ) and the vertical velocity is calculated (`see here <../vertco.html>`_) by the Fortran program ``calc_etadot``. Finally, the GRIB fields are merged into a single GRIB file per time step containing all the fields for ``FLEXPART``. Since model-level fields are typically in *GRIB2* format, whereas surface-level fields are still in *GRIB1* format, they will be converted into GRIB2 if parameter ``FORMAT`` is set to *GRIB2*. Please note, however, that older versions of FLEXPART may have difficulties reading these *GRIB2* files since some parameter IDs have been changed in *GRIB2*. If the retrieval is executed remotely at ECMWF, the resulting files will be sent to the local gateway server via the ``ECtrans`` utility if the parameter ``ECTRANS`` is set to ``1`` and the parameters ``GATEWAY``, ``DESTINATION`` have been set properly during installation. The status of the transfer can be checked with the command ``ecaccess-ectrans-list`` (on the local gateway server). If the script is executed locally, the progress of the script can be followed with the usual Linux tools.
 
 
 
 Workflows of different application modes
 ========================================
 
-More details on how different the program flow is for the different :doc:`app_modes` is sketched in the following diagrams:  
+The following diagrams show how different the program flow is for the different :doc:`app_modes`:  
 
 +-------------------------------------------------+------------------------------------------------+
 | .. figure:: ../../_files/mode_remote.png        | .. figure:: ../../_files/mode_gateway.png      |
diff --git a/For_developers/Sphinx/source/Documentation/api.rst b/For_developers/Sphinx/source/Documentation/api.rst
index 5c0bff7d88c9354d70df4de575aa4fb853191c2c..4494165114752802da0472037da11916a80e3552 100644
--- a/For_developers/Sphinx/source/Documentation/api.rst
+++ b/For_developers/Sphinx/source/Documentation/api.rst
@@ -1,5 +1,5 @@
 ****************************
-Auto Generated Documentation
+Auto-generated documentation
 ****************************
     
     
diff --git a/For_developers/Sphinx/source/Documentation/disagg.rst b/For_developers/Sphinx/source/Documentation/disagg.rst
index deb420864c82f618b2142df0a954e4816214136c..5a035f8be97175182318fbe97ca1d8f8e4edcd8f 100644
--- a/For_developers/Sphinx/source/Documentation/disagg.rst
+++ b/For_developers/Sphinx/source/Documentation/disagg.rst
@@ -1,12 +1,17 @@
 ***************************
-Disaggregation of Flux Data
+Disaggregation of flux data
 ***************************
     
-``FLEXPART`` interpolates meteorological input data linearly to the position of computational particles in time and space. This method requires point values in the discrete input fields. However, flux data (as listed in table :ref:`ref-table-fluxpar`) from the ECMWF represent cell averages or integrals and are accumulated over a specific time interval, depending on the dataset. Hence, to conserve the integral quantity with ``FLEXPART``'s linear interpolation a pre-processing scheme has to be applied. 
+``FLEXPART`` interpolates meteorological input data linearly to the position of computational 
+particles in time and space. This method requires point values in the discrete input fields. 
+However, flux data (as listed in table :ref:`ref-table-fluxpar` below) from the ECMWF represent cell 
+averages or integrals and are accumulated over a specific time interval, depending on the data 
+set. Hence, to conserve the integral quantity with the linear interpolation used in ``FLEXPART``,
+pre-processing has to be applied. 
 
 .. _ref-table-fluxpar:
 
-.. csv-table:: flux fields
+.. csv-table:: Flux fields
     :header: "Short Name", "Name", "Units", "Interpolation Type"
     :align: center
     :widths: 5,15,5,10
@@ -19,25 +24,25 @@ Disaggregation of Flux Data
     SSR,  "surface net solar radiation",        ":math:`J m^{-2}`",   "bicubic interpolation"
     
 
-The first step is to *de-accumulate* the fields in time so that each value represents an integral in x, y, t space.
-Afterwards, a *disaggregation* scheme is applied which means to break down the integral value into point values. 
-In order to be able to carry out the disaggregation procedure proposed by Paul James, additional flux data is retrieved automatically for one day at the beginning and one day at the end of the period specified. Thus, data for flux computation will be requested for the period START_DATE-1 to END_DATE+1. Note that these (additional) dates are used only for interpolation within ``flex_extract`` and are not communicated to the final ``FLEXPART`` input files.
+The first step is to *de-accumulate* the fields in time so that each value represents non-overlapping integrals in x-, y-, and t-space.
+Afterwards, a *disaggregation* scheme is applied, which means to convert the integral values to corresponding point values to be used later for the interpolation. 
+The disaggregation procedure as proposed by Paul James (currently, the standard) requires additional flux data for one day at the beginning and one day at the end of the period specified.
+They are retrieved automatically. Thus, data for flux computation will be requested for the period START_DATE-1 to END_DATE+1. Note that these (additional) dates are used only for interpolation within ``flex_extract`` and are not contained in the final ``FLEXPART`` input files.
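The de-accumulation step described above can be sketched as differencing successive accumulated values (the numbers below are hypothetical, not real flux data):

```python
# De-accumulation: accumulated flux values (e.g. J m^-2 since forecast
# start) are differenced so that each value covers one interval only.
accumulated = [0.0, 2.0, 5.0, 9.0]          # hypothetical accumulated series
per_interval = [b - a for a, b in zip(accumulated, accumulated[1:])]
print(per_interval)                          # -> [2.0, 3.0, 4.0]
```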
 
-The flux disaggregation produces files named ``fluxYYYYMMDDHH``, where ``YYYYMMDDHH`` is the date format. Note, that the first two and last two flux files do not contain any data.
+The flux disaggregation produces files named ``fluxYYYYMMDDHH``, where ``YYYYMMDDHH`` is the date format. Note that the first two and last two flux files do not contain any data.
 
 .. note::
 
-    Note also that for operational retrievals (``BASETIME`` set to 00 or 12) forecast fluxes are only available until ``BASETIME``, so that no polynomial interpolation is possible in the last two time intervals. This is the reason why setting ``BASETIME`` is not recommended for on demand scripts.        
+    Note also that for operational retrievals (``BASETIME`` set to 00 or 12), forecast fluxes are only available until ``BASETIME``, so that no polynomial interpolation is possible in the last two time intervals. This is the reason why setting ``BASETIME`` is not recommended for on-demand scripts.        
         
 
 Disaggregation for precipitation in older versions
 --------------------------------------------------
 
-In ``flex_extract`` up to version 5 the disaggregation was done with a Fortran program (FLXACC2). In version 6 this part was converted to Python.
+In ``flex_extract`` up to version 5, the disaggregation was done with a Fortran program (FLXACC2). In version 6, this part was recoded in Python.
 
-
-In the old versions (below 7.1) a relatively simple method processes the precipitation fields in a way that is consistent with the scheme applied in ``FLEXPART`` for all variables: linear interpolation between times where input fields are available.
-At first the accumulated values are divided by the number of hours (i.e., 3 or 6).
+In the old versions (below 7.1), a relatively simple method processes the precipitation fields in a way that is consistent with the scheme applied in ``FLEXPART`` for all variables, namely linear interpolation between the times when input fields are available. 
+This scheme (from Paul James) at first divides the accumulated values by the number of hours (i.e., 3 or 6).
 The best option for disaggregation, which was realised, is conservation within the interval under consideration plus the two adjacent ones. 
 Unfortunately, this leads to undesired temporal smoothing of the precipitation time series – maxima are damped and minima are raised. 
 It is even possible to produce non-zero precipitation in dry intervals bordering a precipitation period as shown in Fig. 1.
@@ -52,11 +57,11 @@ However, the supporting points in space are not shifted between precipitation an
 .. figure:: ../_files/old_disagg.png
     :figclass: align-center
 
-    Fig. 1: Example of disaggregation scheme as implemented in older versions for an isolated precipitation event lasting one time interval (thick blue line). The amount of original precipitation after de-accumulation is given by the blue-shaded area. The green circles represent the discrete grid points after disaggregation and linearly interpolate in between them as indicated by the green line and the green-shaded area. Note that supporting points for the interpolation are shifted by a half-time interval compared to the times when other meteorological fields are available (Hittmeir et al. 2018).
+    Fig. 1: Example of disaggregation scheme as implemented in older versions for an isolated precipitation event lasting one time interval (thick blue line). The amount of original precipitation after de-accumulation is given by the blue-shaded area. The green circles represent the discrete grid points after disaggregation and linearly interpolate in between them as indicated by the green line and the green-shaded area. Note that supporting points for the interpolation are shifted by half a time interval compared to the times when other meteorological fields are available (Hittmeir et al. 2018).
 
 
 
-Disaggregation is done for 4 adjacent timespans (:math:`a_0, a_1, a_2, a_3`) which generates a new, disaggregated value which is output at the central point of the 4 adjacent timespans. 
+Disaggregation is done for four adjacent timespans (:math:`a_0, a_1, a_2, a_3`) which generates a new, disaggregated value which is output at the central point of the four adjacent timespans. 
 
 .. math::
 
@@ -68,7 +73,7 @@ Disaggregation is done for 4 adjacent timespans (:math:`a_0, a_1, a_2, a_3`) whi
     p_{bd}(m) &= a_1(m) * a_2(m) / (a_1(m) + a_3(m))\\
 
 
-This new point :math:`p` is used for linear interpolation of the complete timeseries afterwards. If one of the 4 original timespans has a value below 0 it is set to 0 prior to the calculation.
+This new point :math:`p` is used for linear interpolation of the complete timeseries afterwards. If one of the four original timespans has a value below 0, it is set to 0 prior to the calculation.
     
 .. math::
 
@@ -77,14 +82,12 @@ This new point :math:`p` is used for linear interpolation of the complete timese
 
 
 
-
-
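A minimal sketch of the ratio rule of the old scheme (only the :math:`p_{bd}` term is fully visible in this section; the companion terms are omitted here), with negative timespan values clamped to zero as stated above, plus a guard against a vanishing denominator, which is an assumption of this sketch:

```python
def p_bd(a1, a2, a3):
    """Ratio rule p_bd = a1 * a2 / (a1 + a3) of the old disaggregation
    scheme; timespan values below 0 are set to 0 before the calculation."""
    a1, a2, a3 = (max(a, 0.0) for a in (a1, a2, a3))
    denom = a1 + a3
    return a1 * a2 / denom if denom > 0.0 else 0.0

print(p_bd(2.0, 3.0, 1.0))   # -> 2.0  (= 2*3/(2+1))
```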
 Disaggregation for precipitation in version 7.1
 -----------------------------------------------
 
-Due to the problems with generating precipitation in originally dry (or lower) intervals and the temporal smoothing a new algorithm was developed. The approach is based on a one dimensional piecewise linear function with two additional supporting grid points within each grid cell, dividing the interval into three pieces. It fulfils the desired requirements by preserving the integral precipitation in each time interval, guaranteeing continuity at interval boundaries, and maintaining non-negativity. An additional monotonicity filter helps to gain monotonicity. 
-The more natural requirements of symmetry, reality, computational efficiency and easy implementation motivates the linear formulation.
-These requirements on the reconstruction algorithm imply that time intervals with no precipitation remain unchanged, i.e. the reconstructed values vanish throughout this whole time interval, too. 
+Due to the problems mentioned above, a new algorithm was developed. The approach is based on a one-dimensional, piecewise-linear function with two additional supporting grid points within each grid cell, dividing the interval into three pieces. It fulfils the desired requirements of preserving the integral precipitation in each time interval, guaranteeing continuity at interval boundaries, and maintaining non-negativity. An additional filter improves monotonicity. 
+The more natural requirements of symmetry, reality, computational efficiency, and easy implementation motivate the use of a linear formulation.
+These requirements for the reconstruction algorithm imply that time intervals with no precipitation remain unchanged, i.e., the reconstructed values vanish throughout this whole time interval, too. 
 In the simplest scenario of an isolated precipitation event, where in the time interval before and after the data values are zero, the reconstruction algorithm therefore has to vanish at the boundaries of the interval, too. 
 The additional conditions of continuity and conservation of the precipitation amount then require us to introduce sub-grid points if we want to keep a linear interpolation (Fig. 2). 
 The height is thereby determined by the condition of conservation of the integral of the function over the time interval.
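For the isolated-event case just described, the conservation condition can be checked numerically. Placing the two sub-grid points at one and two thirds of the interval (consistent with "dividing the interval into three pieces", but an assumption of this sketch), conservation of the integral fixes their common height at 1.5 times the interval mean:

```python
# Isolated precipitation event on [0, 1] with interval total P: the
# reconstruction vanishes at both boundaries; the two sub-grid points
# (assumed at t = 1/3 and t = 2/3) share a height h fixed by requiring
# the trapezoid area (2*h/3) to equal P.
P = 6.0                                          # hypothetical interval total
h = 1.5 * P                                      # height from conservation
area = (h / 3.0) + 2 * 0.5 * (1.0 / 3.0) * h     # middle piece + two ramps
print(h, area)                                   # -> 9.0 6.0
```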
@@ -141,27 +144,24 @@ The following lists the equations of the new algorithm.
     \textbf{endif}
 
 
-In the case of the new disaggregation method for precipitation, the two new sub grid points are added in the ``flux`` output files. They are identified by the forecast step parameter ``step`` which is 0 for the original time interval and 1 or 2 for the two new sub grid points respectively. The filenames do not change.   
+In the case of the new disaggregation method for precipitation, the two new sub-grid points are added in the ``flux`` output files. They are identified by the forecast step parameter ``step`` which is 0 for the original time interval, and 1 or 2, respectively, for the two new sub-grid points. The filenames do not change.   
 
    
 .. note::
 
-    The new method for disaggregation was published in the Geoscientific Model Development Journal in 2018:
+    The new method for disaggregation was published in the journal Geoscientific Model Development in 2018:
     
     Hittmeir, S., Philipp, A., and Seibert, P.: A conservative reconstruction scheme for the interpolation of extensive quantities in the Lagrangian particle dispersion model FLEXPART, Geosci. Model Dev., 11, 2503-2523, https://doi.org/10.5194/gmd-11-2503-2018, 2018.
 
-      
-   
 
- 
 
 
-Disaggregation for the rest of the flux fields
+Disaggregation for the other flux fields
 ----------------------------------------------
       
 The accumulated values for the other variables are first divided by the number of hours and
-then interpolated to the exact times X using a bicubic interpolation which conserves the integrals of the fluxes within each timespan.
-Disaggregation is done for 4 adjacent timespans (:math:`p_a, p_b, p_c, p_d`) which generates a new, disaggregated value which is output at the central point of the 4 adjacent timespans.
+then interpolated to the exact times using a bicubic interpolation which conserves the integrals of the fluxes within each timespan.
+Disaggregation is done for four adjacent timespans (:math:`p_a, p_b, p_c, p_d`) which produces a new, disaggregated value that is the output at the central point of the four adjacent timespans.
 
 .. math::
     
diff --git a/For_developers/Sphinx/source/Documentation/input.rst b/For_developers/Sphinx/source/Documentation/input.rst
index 1dbfe31f6fbfc5f447fa2bacb2a522023185e5a6..0a0e11d66cd7bdffa7762fdb86eaafe2c59302de 100644
--- a/For_developers/Sphinx/source/Documentation/input.rst
+++ b/For_developers/Sphinx/source/Documentation/input.rst
@@ -1,22 +1,20 @@
 ********************
-Control & Input Data
+Control & input data
 ********************
 
-Input Data
+Input data
     - :doc:`Input/control`
-          ``Flex_extract`` needs a number of controlling parameters to decide on the behaviour and the actual dataset to be retrieved. They are initialized by ``flex_extract`` with their default values and can be overwritten with definitions set in the so called :doc:`Input/control`. 
+          ``Flex_extract`` needs a number of controlling parameters to decide on the behaviour and the actual data set to be retrieved. They are initialised by ``flex_extract`` with certain default values which can be overwritten with definitions set in the so-called :doc:`Input/control`. 
 
-          To be able to successfully retrieve data from the ECMWF Mars archive it is necessary to understand these parameters and set them to proper and consistent values. They are described in :doc:`Input/control_params` section. 
+          For a successful retrieval of data from the ECMWF MARS archive, it is necessary to understand these parameters and to set them to proper and consistent values. They are described in the :doc:`Input/control_params` section. 
 
-          We also have some :doc:`Input/examples` and description of :doc:`Input/changes` changes to previous versions and downward compatibilities.
+          Furthermore, some :doc:`Input/examples` are provided, and in :doc:`Input/changes` changes to previous versions and downward compatibilities are described.
         
     - :doc:`Input/ecmwf_env` 
-         For ``flex_extract`` it is necessary to be able to reach ECMWF servers in the **remote mode** and the **gateway mode**. Therefore a :doc:`Input/ecmwf_env` is created during the installation process.
+         ``flex_extract`` needs to be able to reach ECMWF servers in the **remote mode** and the **gateway mode**. Therefore a :doc:`Input/ecmwf_env` is created during the installation process.
 
     - :doc:`Input/templates` 
-         A number of files which are created by ``flex_extract`` are taken from templates. This makes it easy to adapt for example the jobscripts regarding its settings for the batch jobs.         
-
-
+         A number of files which are created by ``flex_extract`` are taken from templates. This makes it easy to adapt, for example, the job scripts with regard to the settings for the batch jobs.         
 
 
 
@@ -28,8 +26,8 @@ Input Data
 .. _ref-controlling:
 
 Controlling
-    The main tasks and behaviour of ``flex_extract`` are controlled by its Python scripts. There are two top-level scripts, one for installation called install_ and one for execution called submit_. 
-    They can interpret a number of command line arguments which can be seen by typing ``--help`` after the script call. Go to the root directory of ``flex_extract`` to type:
+    The main tasks and the behaviour of ``flex_extract`` are controlled by the Python scripts. There are two top-level scripts, one for installation called install_, and one for execution called submit_. 
+    They interpret a number of command-line arguments which can be seen by typing ``--help`` after the script call. Go to the root directory of ``flex_extract`` to type:
 
     .. code-block:: bash
 
@@ -37,15 +35,14 @@ Controlling
        python3 Source/Python/install.py --help
        python3 Source/Python/submit.py --help
    
-    In this new version we provide also the wrapping Shell scripts setup_ and run_, which sets the command line parameters, do some checks and execute the corresponing Python scripts ``install.py`` and ``submit.py`` respectivley. 
-     
-    It might be faster and easier for beginners. See :doc:`../quick_start` for information on how to use them.
+    With version 7.1, we also provide the wrapper shell scripts setup_ and run_, which set the command-line parameters, do some checks, and execute the corresponding Python scripts ``install.py`` and ``submit.py``, respectively. 
+    This might be faster and easier for beginners. See :doc:`../quick_start` for information on how to use them.
 
-    Additionally, ``flex_extract`` creates the Korn Shell scripts :doc:`Input/compilejob` and :doc:`Input/jobscript` which will be send to the ECMWF serves in the **remote mode** and the **gateway mode** for starting batch jobs.
+    ``flex_extract`` also creates the Korn shell scripts :doc:`Input/compilejob` and :doc:`Input/jobscript` which will be sent to the ECMWF servers in the **remote mode** and the **gateway mode** for starting batch jobs.
 
-    The Fortran program will be compiled during the installation process by the :doc:`Input/fortran_makefile`. 
+    The Fortran program is compiled during the installation process using the :doc:`Input/fortran_makefile`. 
     
-    To sum up, the following scripts controls ``flex_extract``:
+    To sum up, the following scripts control ``flex_extract``:
 
     Installation 
        - :doc:`Input/setup` 
diff --git a/For_developers/Sphinx/source/Documentation/output.rst b/For_developers/Sphinx/source/Documentation/output.rst
index 40e8310376aaa02364364b18498cb9457d3a3328..d8d7586814c2563f471f5e463cbb1a98d78ab131 100644
--- a/For_developers/Sphinx/source/Documentation/output.rst
+++ b/For_developers/Sphinx/source/Documentation/output.rst
@@ -1,13 +1,13 @@
 ***********
-Output Data
+Output data
 ***********
 
-The output data of ``flex_extract`` are separated mainly into temporary files and the final ``FLEXPART`` input files:
+The output data of ``flex_extract`` can be divided into the final ``FLEXPART`` input files and temporary files:
 
 +-----------------------------------------------+----------------------------------------------+   
 |   ``FLEXPART`` input files                    |  Temporary files (saved in debug mode)       | 
 +-----------------------------------------------+----------------------------------------------+
-| - Standard output filenames                   | - MARS request file (opt)                    | 
+| - Standard output file names                  | - MARS request file (optional)               | 
 | - Output for pure forecast                    | - flux files                                 | 
 | - Output for ensemble members                 | - VERTICAL.EC                                |
 | - Output for new precip. disaggregation       | - index file                                 | 
@@ -20,22 +20,22 @@ The output data of ``flex_extract`` are separated mainly into temporary files an
 ``FLEXPART`` input files
 ========================
 
-The final output files of ``flex_extract`` are also the meteorological ``FLEXPART`` input files.
-The naming of these files depend on the kind of data extracted by ``flex_extract``. 
+The final output files of ``flex_extract`` are the meteorological input files for ``FLEXPART``.
+The naming convention for these files depends on the kind of data extracted by ``flex_extract``. 
 
 Standard output files
 ---------------------
  
-In general, there is a file for each time step with the filename format:
+In general, there is one file for each point in time, named:
 
 .. code-block:: bash
 
     <prefix>YYMMDDHH
     
-The ``prefix`` is by default defined as ``EN`` and can be re-defined in the ``CONTROL`` file.
-Each file contains all meteorological fields needed by ``FLEXPART`` for all selected model levels for a specific time step. 
+where YY are the last two digits of the year, MM the month, DD the day, and HH the hour (UTC). ``<prefix>`` is ``EN`` by default and can be re-defined in the ``CONTROL`` file.
+Each file contains all meteorological fields at all levels as needed by ``FLEXPART``, valid for the time indicated in the file name. 
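To illustrate the naming scheme, here is a minimal Python sketch (for illustration only, not the code used by ``flex_extract``) that builds such a file name:

```python
from datetime import datetime

def output_name(valid_time: datetime, prefix: str = "EN") -> str:
    """Build a standard output file name of the form <prefix>YYMMDDHH."""
    return prefix + valid_time.strftime("%y%m%d%H")

# the example file CE00010800 below corresponds to:
print(output_name(datetime(2000, 1, 8, 0), prefix="CE"))  # CE00010800
```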
 
-Here is an example output which lists the meteorological fields in a single file called ``CE00010800`` where we extracted only the lowest model level for demonstration reasons:
+Here is an example output which lists the meteorological fields in a single file called ``CE00010800`` (where we extracted only the lowest model level for demonstration purposes):
 
 .. code-block:: bash
 
@@ -83,19 +83,19 @@ Here is an example output which lists the meteorological fields in a single file
 Output files for pure forecast
 ------------------------------
 
-``Flex_extract`` can retrieve forecasts which can be longer than 23 hours. To avoid collisions of time steps for forecasts of more than one day a new scheme for filenames in pure forecast mode is introduced:
+``Flex_extract`` is able to retrieve forecasts with a lead time of more than 23 hours. In order to avoid name collisions between time steps, a new scheme for file names in pure forecast mode was introduced:
 
 .. code-block:: bash
 
     <prefix>YYMMDD.HH.<FORECAST_STEP>
 
-The ``<prefix>`` is, as in the standard output, by default ``EN`` and can be re-defined in the ``CONTROL`` file. ``YYMMDD`` is the date format and ``HH`` the forecast time which is the starting time for the forecasts. The ``FORECAST_STEP`` is a 3 digit number which represents the forecast step in hours. 
+The ``<prefix>`` is, as in the standard output, by default ``EN`` and can be re-defined in the ``CONTROL`` file. ``YYMMDD`` is the date format and ``HH`` the forecast time which is the starting time for the forecasts. The ``FORECAST_STEP`` is a 3-digit number which represents the forecast step in hours. 
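The pure-forecast scheme can likewise be sketched in a few lines of Python (illustrative only; not the actual ``flex_extract`` code):

```python
from datetime import datetime

def forecast_name(start: datetime, step_hours: int, prefix: str = "EN") -> str:
    """Build a pure-forecast file name <prefix>YYMMDD.HH.<FORECAST_STEP>."""
    return "{}{:%y%m%d}.{:02d}.{:03d}".format(prefix, start, start.hour, step_hours)

print(forecast_name(datetime(2000, 1, 8, 0), step_hours=36))  # EN000108.00.036
```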
     
 
 Output files for ensemble predictions
 -------------------------------------
 
-Ensembles can be retrieved and are addressed by the grib message parameter ``number``. The ensembles are saved per file and standard filenames are supplemented by the letter ``N`` and the ensemble member number in a 3 digit format.
+``Flex_extract`` is able to retrieve ensemble data; the members are labelled by the GRIB message parameter ``number``. Each ensemble member is saved in a separate file, and the standard file names are supplemented by the letter ``N`` and the ensemble member number in 3-digit format.
 
 .. code-block:: bash
 
@@ -105,10 +105,11 @@ Ensembles can be retrieved and are addressed by the grib message parameter ``num
 Additional fields with new precipitation disaggregation
 -------------------------------------------------------
 
-The new disaggregation method for precipitation fields produces two additional precipitation fields for each time step and precipitation type. They serve as sub-grid points in the original time interval. For details of the method see :doc:`disagg`.
-The two additional fields are marked with the ``step`` parameter in the Grib messages and are set to "1" and "2" for sub-grid point 1 and 2 respectively.
-The output filenames do not change in this case.  
-Below is an example list of precipitation fields in an output file generated with the new disaggregation method:
+The new disaggregation method for precipitation fields produces two additional precipitation fields for each time step and precipitation type (large-scale and convective). They serve as sub-grid points in the original time interval. For details of the method see :doc:`disagg`.
+The two additional fields are addressed using the ``step`` parameter in the GRIB messages, which
+is set to "1" or "2", for sub-grid points 1 and 2, respectively.
+The output file names are not altered.  
+An example of the list of precipitation fields in an output file generated with the new disaggregation method is given below:
 
 .. code-block:: bash 
 
@@ -128,13 +129,16 @@ Below is an example list of precipitation fields in an output file generated wit
 Temporary files
 ===============
 
-``Flex_extract`` works with a number of temporary data files which are usually deleted after a successful data extraction. They are only stored if the ``DEBUG`` mode is switched on (see :doc:`Input/control_params`). 
+``Flex_extract`` creates a number of temporary data files which are usually deleted at the end of a successful run. They are preserved only if the ``DEBUG`` mode is switched on (see :doc:`Input/control_params`). 
 
 MARS grib files
 ---------------
 
 ``Flex_extract`` retrieves all meteorological fields from MARS and stores them in files ending with ``.grb``.
-Since the request times and data transfer of MARS access are limited and ECMWF asks for efficiency in requesting data from MARS, ``flex_extract`` splits the overall data request in several smaller requests. Each request is stored in an extra ``.grb`` file and the file names are put together by several pieces of information:
+Since there are limits implemented by ECMWF for the time per request and data transfer from MARS, 
+and as ECMWF asks for efficient MARS retrievals, ``flex_extract`` splits the overall data request 
+into several smaller requests. Each request is stored in its own ``.grb`` file, and the file 
+names are composed of several pieces of information:
 
     .. code-block:: bash
     
@@ -143,21 +147,20 @@ Since the request times and data transfer of MARS access are limited and ECMWF a
 Description:
        
 Field type: 
-    ``AN`` - Analysis, ``FC`` - Forecast, ``4V`` - 4d variational analysis, ``CV`` - Validation forecast, ``CF`` - Control forecast, ``PF`` - Perturbed forecast
+    ``AN`` - Analysis, ``FC`` - Forecast, ``4V`` - 4D variational analysis, ``CV`` - Validation forecast, ``CF`` - Control forecast, ``PF`` - Perturbed forecast
 Grid type: 
-   ``SH`` - Spherical Harmonics, ``GG`` - Gaussian Grid, ``OG`` - Output Grid (typically lat/lon), ``_OROLSM`` - Orography parameter
+   ``SH`` - Spherical Harmonics, ``GG`` - Gaussian Grid, ``OG`` - Output Grid (typically lat/lon), ``_OROLSM`` - Orography parameter
 Temporal property:
     ``__`` - instantaneous fields, ``_acc`` - accumulated fields
 Level type: 
-    ``ML`` - Model Level, ``SL`` - Surface Level
+    ``ML`` - model level, ``SL`` - surface level
 ppid:
-    The process number of the parent process of submitted script.
+    The process number of the parent process of the script submitted.
 pid:
-    The process number of the submitted script.
+    The process number of the script submitted.
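As an illustration only, a name with this structure could be assembled as follows; note that the order of the components here is an assumption for demonstration purposes, the authoritative pattern is the one given in the code block above:

```python
import os

def grb_name(field_type: str, grid_type: str, temporal: str,
             level_type: str, date: str) -> str:
    """Sketch of a MARS .grb file name; the component order is assumed."""
    ppid = os.getppid()  # process number of the parent of the submitted script
    pid = os.getpid()    # process number of the submitted script
    return f"{field_type}{grid_type}{temporal}{level_type}.{date}.{ppid}.{pid}.grb"

print(grb_name("AN", "OG", "__", "ML", "2000090800"))
```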
 
-The process ids should avoid mixing of fields if several ``flex_extract`` jobs are performed in parallel (which is, however, not recommended). The date format is YYYYMMDDHH.
 
-Example ``.grb`` files for a day of CERA-20C data:
+Example ``.grb`` files for one day of CERA-20C data:
 
     .. code-block:: bash
 
@@ -171,12 +174,14 @@ Example ``.grb`` files for a day of CERA-20C data:
 MARS request file 
 -----------------
 
-This file is a ``csv`` file called ``mars_requests.csv`` with a list of the actual settings of MARS request parameters (one request per line) in a flex_extract job. It is used for documenting the data which were retrieved and for testing reasons.
+This file is a ``csv`` file called ``mars_requests.csv`` listing the actual settings of the MARS 
+request (one request per line) in a flex_extract job. 
+It is used for documenting which data were retrieved, and for testing.
 
-Each request consist of the following parameters, whose meaning mainly can be taken from :doc:`Input/control_params` or :doc:`Input/run`: 
+Each request consists of the following parameters, whose meaning can mostly be taken from :doc:`Input/control_params` or :doc:`Input/run`: 
 request_number, accuracy, area, dataset, date, expver, gaussian, grid, levelist, levtype, marsclass, number, param, repres, resol, step, stream, target, time, type
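The file can be read, for instance, with Python's ``csv`` module (a sketch; the column names are taken from the parameter list above):

```python
import csv

def summarise_requests(path: str = "mars_requests.csv") -> list:
    """Return (request_number, dataset, date, type) tuples, one per MARS request."""
    with open(path, newline="") as f:
        return [(row["request_number"], row["dataset"], row["date"], row["type"])
                for row in csv.DictReader(f)]
```

Each line of the file then yields one tuple summarising a single request.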
   
-Example output of a one day retrieval of CERA-20c data: 
+Example output of a one-day retrieval of CERA-20C data: 
 
 .. code-block:: bash
 
@@ -191,29 +196,31 @@ Example output of a one day retrieval of CERA-20c data:
 VERTICAL.EC
 -----------
 
-The vertical discretization of model levels. This file contains the ``A`` and ``B`` parameters to calculate the model level height in meters.
+This file contains information describing the vertical discretisation (model levels) 
+in the form of the ``A`` and ``B`` parameters, which allow the calculation of the actual pressure at a model level from the surface pressure.
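The relation is p = A + B · p_s at each half level; a minimal sketch, with illustrative (not actual) coefficient values:

```python
def half_level_pressure(A: float, B: float, p_surf: float) -> float:
    """Pressure (Pa) at a model half level, from the A (Pa) and B coefficients
    and the surface pressure p_surf (Pa)."""
    return A + B * p_surf

# illustrative values only; the real A and B are read from VERTICAL.EC
print(half_level_pressure(A=2000.0, B=0.8, p_surf=101325.0))  # 83060.0
```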
 
 
 Index file
 ----------
 
-This file is usually called ``date_time_stepRange.idx``. It contains indices pointing to specific grib messages from one or more grib files. The messages are selected with a composition of grib message keywords. 
+This file is called ``date_time_stepRange.idx``. It contains indices pointing to specific grib messages from one or more grib files. The messages are selected with a composition of grib message keywords. 
 
 
-flux files
+Flux files
 ----------
 
-The flux files contain the de-accumulated and dis-aggregated flux fields of large scale and convective precipitation, eastward turbulent surface stress, northward turbulent surface stress, surface sensible heat flux and the surface net solar radiation. 
+The flux files contain the de-accumulated and dis-aggregated flux fields of large-scale and convective precipitation, east- and northward turbulent surface stresses, the surface sensible heat flux, and the surface net solar radiation. 
 
 .. code-block:: bash
 
     flux<date>[.N<xxx>][.<xxx>]
 
-The date format is YYYYMMDDHH. The optional block ``[.N<xxx>]`` marks the ensemble forecast number, where ``<xxx>`` is the ensemble member number. The optional block ``[.<xxx>]`` marks a pure forecast with ``<xxx>`` being the forecast step.
+The date format is YYYYMMDDHH, as explained before. The optional block ``[.N<xxx>]`` marks an ensemble member, where ``<xxx>`` is the ensemble member number. The optional block ``[.<xxx>]`` marks a pure forecast, with ``<xxx>`` being the forecast step.
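This naming scheme can be parsed, for example, with a regular expression (a sketch based on the pattern above):

```python
import re

# flux<date>[.N<xxx>][.<xxx>]: date, optional ensemble member, optional step
FLUX_RE = re.compile(
    r"^flux(?P<date>\d{10})"      # YYYYMMDDHH
    r"(?:\.N(?P<member>\d{3}))?"  # optional ensemble member number
    r"(?:\.(?P<step>\d{3}))?$"    # optional forecast step in hours
)

m = FLUX_RE.match("flux2000090812.N001")
print(m.group("date"), m.group("member"), m.group("step"))  # 2000090812 001 None
```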
 
 .. note::
 
-    In the case of the new dis-aggregation method for precipitation, two new sub-intervals are added in between each time interval. They are identified by the forecast step parameter which is ``0`` for the original time interval and ``1`` or ``2`` for the two new intervals respectively. 
+    In the case of the new dis-aggregation method for precipitation, two new sub-intervals are added in between each time interval. They are identified by the forecast step parameter, which is ``0`` for the original time interval, and ``1`` or ``2``, respectively, for the two new intervals. 
 
     
 fort files
@@ -225,9 +232,9 @@ There are a number of input files for the ``calc_etadot`` Fortran program named
 
     fort.xx
     
-where ``xx`` is the number which defines the meteorological fields stored in these files. 
-They are generated by the Python part of ``flex_extract`` by just splitting the meteorological fields for a unique time stamp from the ``*.grb`` files into the ``fort`` files. 
-The following table defines the numbers with their corresponding content.   
+where ``xx`` is a number which defines the meteorological fields stored in these files. 
+They are generated by the Python part of ``flex_extract`` by splitting the meteorological fields for a single time stamp from the ``*.grb`` files into the individual ``fort.<xx>`` files. 
+The following table defines the numbers and the corresponding content:   
 
 .. csv-table:: Content of fort - files
     :header: "Number", "Content"
@@ -239,12 +246,12 @@ The following table defines the numbers with their corresponding content.
     "13", "divergence (optional)" 
     "16", "surface fields"
     "17", "specific humidity"
-    "18", "surface specific humidity (reduced gaussian)"
-    "19", "vertical velocity (pressure) (optional)" 
+    "18", "surface specific humidity (reduced Gaussian grid)"
+    "19", "omega (vertical velocity in pressure coordinates) (optional)" 
     "21", "eta-coordinate vertical velocity (optional)" 
-    "22", "total cloud water content (optional)"
+    "22", "total cloud-water content (optional)"
 
-Some of the fields are solely retrieved with specific settings, e.g. the eta-coordinate vertical velocity is not available in ERA-Interim datasets and the total cloud water content is an optional field for ``FLEXPART v10`` and newer. 
+Some of the fields are solely retrieved with specific settings; e.g., the eta-coordinate vertical velocity is not available in ERA-Interim datasets, and the total cloud-water content is an optional field which is useful for ``FLEXPART v10`` and newer. 
 
 The ``calc_etadot`` program saves its results in file ``fort.15`` which typically contains:
 
@@ -258,7 +265,7 @@ More details about the content of ``calc_etadot`` can be found in :doc:`vertco`.
     
 .. note::
  
-    The ``fort.4`` file is the namelist file to drive the Fortran program ``calc_etadot``. It is therefore also an input file.
+    The ``fort.4`` file is the namelist file to control the Fortran program ``calc_etadot``. It is therefore also an input file.
     
     Example of a namelist:
     
diff --git a/For_developers/Sphinx/source/Documentation/overview.rst b/For_developers/Sphinx/source/Documentation/overview.rst
index 1e9af1b4f5f8aa4bd6f92b237d8dbcbeee82a436..0409ac22fe138183d9f7c47352e9ec001bd55ef0 100644
--- a/For_developers/Sphinx/source/Documentation/overview.rst
+++ b/For_developers/Sphinx/source/Documentation/overview.rst
@@ -2,21 +2,21 @@
 Overview
 ========
 
-``Flex_extract`` is an open-source software to retrieve meteorological fields from the European Centre for Medium-Range Weather Forecasts (ECMWF) Mars archive to serve as input files for the ``FLEXTRA``/``FLEXPART`` Atmospheric Transport Modelling system.
-``Flex_extract`` was created explicitly for ``FLEXPART`` users who wants to use meteorological data from ECMWF to drive the ``FLEXPART`` model. 
-The software retrieves the minimal number of parameters ``FLEXPART`` needs to work and provides the data in the explicity format ``FLEXPART`` understands.
+``Flex_extract`` is an open-source software to retrieve meteorological fields from the European Centre for Medium-Range Weather Forecasts (ECMWF) MARS archive to serve as input files for the ``FLEXTRA``/``FLEXPART`` atmospheric transport modelling system.
+``Flex_extract`` was created explicitly for ``FLEXPART`` users who want to use meteorological data from ECMWF to drive the ``FLEXPART`` model. 
+The software retrieves the minimum set of parameters needed by ``FLEXPART`` to work, and provides the data in the specific format required by ``FLEXPART``.
 
-``Flex_extract`` consists of 2 main parts:
-    1. a Python part, where the reading of parameter settings, retrieving data from MARS and preparing the data for ``FLEXPART`` is done and 
-    2. a Fortran part, where the calculation of the vertical velocity is done and if necessary the conversion from spectral to regular latitude/longitude grids.
+``Flex_extract`` consists of two main parts:
+    1. a Python part which reads the parameter settings, retrieves the data from MARS, and prepares them for ``FLEXPART``, and 
+    2. a Fortran part which calculates the vertical velocity and, if necessary, converts variables from the spectral representation to regular latitude/longitude grids.
 
-Additionally, it has some Korn shell scripts which are used to set the environment and batch job features on ECMWF servers for the *gateway* and *remote* mode. See :doc:`Overview/app_modes` for information of application modes.   
+In addition, there are some Korn shell scripts to set the environment and batch job features on ECMWF servers for the *gateway* and *remote* modes. See :doc:`Overview/app_modes` for information on the application modes.   
 
 A number of Shell scripts are wrapped around the software package for easy installation and fast job submission. 
 
-The software depends on a number of third-party libraries which can be found in :ref:`ref-requirements`.
+The software depends on some third-party libraries as listed in :ref:`ref-requirements`.
 
-Some details on the tasks and program worksteps are described in :doc:`Overview/prog_flow`.
+Details of the tasks and program work steps are described in :doc:`Overview/prog_flow`.
 
 
 ..  - directory structure (new diagramm!)
diff --git a/For_developers/Sphinx/source/Documentation/vertco.rst b/For_developers/Sphinx/source/Documentation/vertco.rst
index e0d1d6633392a6d31ac9f34002d86fe8b132d8b5..e750454d1955780ec15e0b404bec675824b0ab41 100644
--- a/For_developers/Sphinx/source/Documentation/vertco.rst
+++ b/For_developers/Sphinx/source/Documentation/vertco.rst
@@ -1,13 +1,13 @@
 *******************
-Vertical Coordinate
+Vertical wind
 *******************
         
-Calculation of vertical velocity and preparation of Output-files
+Calculation of vertical velocity and preparation of output files
 ================================================================
 
-``flex_extract`` has two ways to calculate the vertical velocity for ``FLEXTRA``/``FLEXPART``: 
+Two methods are provided in ``flex_extract`` for the calculation of the vertical velocity for ``FLEXTRA``/``FLEXPART``: 
     (i) from the horizontal wind field, 
-    (ii) from the MARS parameter 77, which is available for operational forecasts and analyses since September 2008 and for reanalysis datasets **ERA5** and **CERA-20C**.
+    (ii) from the MARS parameter 77, which contains the vertical velocity directly in the eta coordinate system of the ECMWF model, and is available for operational forecasts and analyses since September 2008 as well as for the reanalysis datasets **ERA5** and **CERA-20C**.
 
 Especially for high resolution data, use of the ``MARS`` parameter 77 is recommended,
 since the computational cost (measured in ECMWF HPC units) is reduced by 90-95% at
@@ -19,39 +19,38 @@ Calculation from the horizontal wind field is still required for historical case
 **ERA-40**, **ERA-Interim** or operational data prior to September 2008.    
     
     
-Calculation of vertical velocity from horizontal wind using the continuity equation
+Calculation of the vertical velocity from the horizontal wind using the continuity equation
 ===================================================================================
 
-The vertical velocity is computed by the FORTRAN90 program ``calc_etadot`` in the ECMWF
-vertical coordinate system by applying the equation of continuity and thereby ensuring mass consistent 3D wind fields. A detailed description of ``calc_etadot`` can be found in the
+The vertical velocity in the ECMWF eta vertical coordinate system is computed by the Fortran program ``calc_etadot``, using the continuity equation and thereby ensuring mass-consistent 3D wind fields. A detailed description of ``calc_etadot`` can be found in the
 documents v20_update_protocol.pdf, V30_update_protocol.pdf and
 V40_update_protocol.pdf. The computational demand and accuracy of ``calc_etadot`` is highly
 dependent on the specification of parameters ``GAUSS``, ``RESOL`` and ``SMOOTH``. The
 following guidance can be given for choosing the right parameters:
 
-    * For very fine output grids (0.25 degree or finer) the full resolution T799 or even T1279 of the operational model is required (``RESOL=799``, ``SMOOTH=0``). The highest available resolution (and the calculation of vertical velocity on the Gaussian grid (``GAUSS=1``) is, however, rather demanding and feasible only for resolutions up to T799. Higher resolutions are achievable on the HPC. If data retrieval at T1279  needs to be performed on *ecgate*, the computation of the vertical velocity is feasible only on the lat/lon grid (``GAUSS=0``), which also yields very good results. Please read document v20_update_protocol.pdf-v60_update_protocol.pdf to see if the errors incurred are acceptable for the planned application.
+    * For very fine output grids (0.25 degree or finer), the full resolution T799 or even T1279 of the operational model is required (``RESOL=799``, ``SMOOTH=0``). The calculation of the vertical velocity on the Gaussian grid (``GAUSS=1``) at the highest available resolution is, however, rather demanding and feasible only for resolutions up to T799. Higher resolutions are achievable on the HPC. If data retrieval at T1279 needs to be performed on *ecgate*, the computation of the vertical velocity is feasible only on the lat/lon grid (``GAUSS=0``), which also yields very good results. Please read documents v20_update_protocol.pdf to v60_update_protocol.pdf to see whether the errors incurred are acceptable for the planned application.
     * For lower resolution (often global) output grids, calculation of vertical velocities with lower than operational spectral resolution is recommended. For global grids the following settings appear optimal:
         - For 1.0 degree grids: ``GAUSS=1``, ``RESOL=255``, ``SMOOTH=179``
         - For 0.5 degree grids: ``GAUSS=1``, ``RESOL=399``, ``SMOOTH=359``
         - Calculation on the lat/lon grid is not recommended for less than the operational (T1279) resolution.    
-        - If ``GAUSS`` is set to 1, only the following choices are possible for ``RESOL`` on *ecgate*: 159,255,319,399,511,799, (on the HPC also 1279, 2047 in future models). This choice is restricted because a reduced Gaussian grid is defined in then ECMWF EMOSLIB only for these spectral resolutions. For ``GAUSS=0``, ``RESOL`` can be any value below the operational resolution.
-        - For ``SMOOTH`` any resolution lower than ``RESOL`` is possible. If no smoothing is desired, ``SMOOTH=0`` should be chosen. ``SMOOTH`` has no effect if vertical velocity is calculated on lat\/lon grid (``GAUSS=0``).
-    * The on demand scripts send an error message for settings where ``SMOOTH`` (if set) and ``RESOL`` are larger than 360./``GRID``/2, since in this case, the output grid cannot resolve the highest wave numbers. The scripts continue operations, however.
+        - If ``GAUSS`` is set to 1, only the following choices are possible for ``RESOL`` on *ecgate*: 159, 255, 319, 399, 511, 799 (on the HPC also 1279; 2047 in future model versions). This choice is restricted because a reduced Gaussian grid is defined in the ECMWF EMOSLIB only for these spectral resolutions. For ``GAUSS=0``, ``RESOL`` can be any value below the operational resolution.
+        - For ``SMOOTH``, any resolution lower than ``RESOL`` is possible. If no smoothing is desired, ``SMOOTH=0`` should be chosen. ``SMOOTH`` has no effect if the vertical velocity is calculated on a lat\/lon grid (``GAUSS=0``).
+    * The on-demand scripts send an error message for settings where ``SMOOTH`` (if set) and ``RESOL`` are larger than 360./``GRID``/2, since in this case, the output grid cannot resolve the highest wave numbers. The scripts continue operations, however.
     * Regional grids are not cyclic in zonal directions, but global grids are. The software assumes a cyclic grid if ``RIGHT``-``LEFT`` is equal to ``GRID`` or is equal to ``GRID``-360. 
-    * Finally, model and flux data as well as the vertical velocity computed are written to files ``<prefix>yymmddhh`` for application in ATM modelling. If the parameters ``OMEGA`` or ``OMEGADIFF`` are set, also files ``OMEGAyymmddhh`` are created, containing the pressure vertical velocity (omega) and the difference between omega from ``MARS`` and the surface pressure tendency. ``OMEGADIFF`` should be zero except for debugging, since it triggers expensive calculations on the Gaussian grid.
+    * Finally, model and flux data as well as the vertical velocity computed are written to files ``<prefix>yymmddhh`` (the standard ``flex_extract`` output files). If the parameters ``OMEGA`` or ``OMEGADIFF`` are set, files ``OMEGAyymmddhh`` are also created, containing the pressure vertical velocity (omega) and the difference between omega from ``MARS`` and from the surface pressure tendency. ``OMEGADIFF`` should be set to zero except for debugging, since it triggers expensive calculations on the Gaussian grid.
     
     
-Calculation of vertical velocity from pre-calculated MARS parameter 77
+Calculation of the vertical velocity from the pre-calculated MARS parameter 77
 ======================================================================
 
-Since November 2008, the parameter 77 (deta/dt) is stored in ``MARS`` on full model levels. ``FLEXTRA``/``FLEXPART`` in its current version requires ``deta/dt`` on model half levels, multiplied by ``dp/deta``. In ``flex_extract``, the program ``calc_etadot`` assumes that this parameter is available if the ``CONTROL`` parameter ``ETA`` is set to 1. 
+Since November 2008, the parameter 77 (deta/dt) is stored in ``MARS`` on full model levels. ``FLEXTRA``/``FLEXPART`` in its current version requires ``deta/dt`` on model half levels, multiplied by ``dp/deta``. In ``flex_extract``, the program ``calc_etadot`` assumes that parameter 77 is available if the ``CONTROL`` parameter ``ETA`` is set to 1. 
 
 It is recommended to use the pre-calculated parameter 77 by setting ``ETA`` to 1 whenever possible.
 
-Setting parameter ``ETA`` to 1 normally disables calculation of vertical velocity from the horizontal wind field, which saves a lot of computational time. 
+Setting the parameter ``ETA`` to 1 disables calculation of vertical velocity from the horizontal wind field, which saves a lot of computational time. 
 
 .. note::
-   However, the calculation on the Gaussian grid are avoided only if both ``GAUSS`` and ``ETADIFF`` are set to 0. Please set ``ETADIFF`` to 1 only if you are really need it for debugging since this is a very expensive option. In this case ``ETAyymmddhh`` files are produced that contain the vertical velocity from horizontal winds and the difference to the pre-calculated vertical velocity.
+   However, the calculations on the Gaussian grid are avoided only if both ``GAUSS`` and ``ETADIFF`` are set to 0. Please set ``ETADIFF`` to 1 only if you really need it for debugging, since this is a very expensive option. In this case, ``ETAyymmddhh`` files are produced that contain the vertical velocity from horizontal winds and the difference to the pre-calculated vertical velocity.
 
 The parameters ``RESOL``, ``GRID``, ``UPPER``, ``LOWER``, ``LEFT``, ``RIGHT`` still apply. As for calculations on the Gaussian grid, the spectral resolution parameter ``RESOL`` should be compatible with the grid resolution (see previous subsection).
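+
+The recommended configuration discussed above can be sketched as a ``CONTROL``
+file fragment. The values are illustrative only, not a complete ``CONTROL``
+file, and the inline comments are explanatory; they would not appear in a real
+``CONTROL`` file.
+
+.. code-block:: bash
+
+   ETA 1        # use the pre-calculated MARS parameter 77 (deta/dt)
+   GAUSS 0      # skip the expensive calculation on the Gaussian grid
+   ETADIFF 0    # set to 1 only for debugging (produces ETAyymmddhh files)
+   OMEGADIFF 0  # set to 1 only for debugging (produces OMEGAyymmddhh files)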
     
diff --git a/For_developers/Sphinx/source/Ecmwf/access.rst b/For_developers/Sphinx/source/Ecmwf/access.rst
index 6c2930e90bf32d0a9586b907fee286dc5f9b8be7..4edd8ca954282a6cff5e5020bb51d8b56fc302b9 100644
--- a/For_developers/Sphinx/source/Ecmwf/access.rst
+++ b/For_developers/Sphinx/source/Ecmwf/access.rst
@@ -1,5 +1,5 @@
 ************
-Access Modes
+Access modes
 ************
 
 .. _public datasets: https://confluence.ecmwf.int/display/WEBAPI/Available+ECMWF+Public+Datasets
@@ -7,18 +7,18 @@ Access Modes
 .. _Climate Data Store: https://cds.climate.copernicus.eu
 .. _CDS API: https://cds.climate.copernicus.eu/api-how-to
 
-Access to the ECMWF Mars archive is divided into two groups: **member state** users and **public** users.
+Access to the ECMWF MARS archive is divided into two groups: **member state** users and **public** users.
 
-**Member state user**: 
-    This access mode allows the user to work directly on the ECMWF Linux Member State Servers or via a Web Access Toolkit ``ecaccess`` through a local Member State Gateway Server. This enables the user to have direct and full access to the Mars archive. There might be some limitations in user rights such as the declined access to the latest forecasts. This has to be discussed with the `Computing Representative`_. This user group is also able to work from their local facilities without a gateway server in the same way a **public** user would. The only difference is the connection with the Web API. However, this is automatically selected by ``flex_extract``.
+**Member-state user**: 
+    This access mode allows the user to work directly on an ECMWF member-state Linux server or via the ``ecaccess`` Web-Access Toolkit through a local member-state Gateway server. This enables the user to have direct and full access to the MARS archive. There might be some limitations in user rights, such as no access to the latest forecasts. In case such data are needed, this has to be agreed upon with the national `Computing Representative`_. This user group is also able to work from their local facilities without a gateway server in the same way a **public** user would. The only difference is the connection with the Web API, which, however, is automatically selected by ``flex_extract``.
     
 
 **Public user**: 
-    This access mode allows every user to access the ECMWF `public datasets`_ from their local facilities. ``Flex_extract`` is able (tested for the use with ``FLEXPART``) to extract the re-analysis datasets such as ERA-Interim and CERA-20C. The main difference to the **member state user** is the method of access with the Web API and the availability of data. For example, in ERA-Interim there is only a 6-hourly temporal resolution instead of 3 hours. The access method is selected by providing the command line argument "public=1" and providing the MARS keyword "dataset" in the ``CONTROL`` file. Also, the user has to explicitly accept the license of the dataset to be retrieved. This can be done as described in the installation process at section :ref:`ref-licence`.   
+    This access mode allows every user to access the ECMWF `public datasets`_ from their local facilities. ``Flex_extract`` is able to extract re-analysis data sets such as ERA-Interim and CERA-20C for use with ``FLEXPART`` (tested). The main difference to the **member-state user** is the method of access with the Web API and the availability of data. For example, in ERA-Interim, only a 6-hourly temporal resolution is available instead of 3 hours. The access method is selected by providing the command line argument "public=1" and providing the MARS keyword "dataset" in the ``CONTROL`` file. Also, the user has to explicitly accept the licence of the data set to be retrieved. This can be done as described in the installation process at section :ref:`ref-licence`.
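+
+    For example, a **public** retrieval could be started as sketched below. This is only an illustration: the control file name ``CONTROL_EI.public`` and the keyword value ``interim`` stand for whichever sample ``CONTROL`` file and data set you actually use.
+
+    .. code-block:: bash
+
+       # the CONTROL file must contain the MARS keyword "dataset", e.g.
+       #   DATASET interim
+       # then start the retrieval in public access mode:
+       python3 submit.py --controlfile=CONTROL_EI.public --public=1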
      
 .. note::
     
-   The availability of the public dataset *ERA5* with the ECMWF Web API was cancelled in March 2019. The oportunity of local retrieval of this dataset was moved to the `Climate Data Store`_ which uses another Web API named `CDS API`_. This Data Store stores the data on explicit webservers for faster and easier access. Unfortunately, for *ERA5* there are only surface level and pressure level data available for *public users*. In the case of a *member user* it is possible to bypass the request to the MARS archive from ECMWF to retrieve the data. ``Flex_extract`` is already modified to use this API so *member user* can already retrieve *ERA5* data while *public users* have to wait until model level are available. 
+   The availability of the public dataset *ERA5* with the ECMWF Web API was cancelled by ECMWF in March 2019. Local retrieval of this dataset now has to use the `Climate Data Store`_ (CDS) with a different Web API called `CDS API`_. CDS stores the data on dedicated web servers for faster and easier access. Unfortunately, for *ERA5*, only surface-level and pressure-level data are available for *public users*, which is not enough to run ``FLEXPART``. For a *member user*, it is possible to pass the request on to the MARS archive to retrieve the data. ``Flex_extract`` is already modified to use this API, so a *member user* can already retrieve *ERA5* data for ``FLEXPART``, while *public users* have to wait until model levels are made available. 
         
 For information on how to register see :ref:`ref-registration`. 
 
diff --git a/For_developers/Sphinx/source/Ecmwf/ec-links.rst b/For_developers/Sphinx/source/Ecmwf/ec-links.rst
index 4095813c39f14a3449beb1fdd9d5d3e8fb34b6de..c596a00571e931ec6aa1e7013c0a59c5afb9975c 100644
--- a/For_developers/Sphinx/source/Ecmwf/ec-links.rst
+++ b/For_developers/Sphinx/source/Ecmwf/ec-links.rst
@@ -1,53 +1,53 @@
 ################################
-Link Collection for Quick finder
+Link collection
 ################################
 
 
-ECMWF - General Overview   
+ECMWF - General overview   
     `ECMWF Home <https://www.ecmwf.int/>`_
     
-    `ECMWF Training <https://www.ecmwf.int/en/learning>`_
+    `ECMWF training <https://www.ecmwf.int/en/learning>`_
     
-    `General User Documentation <https://software.ecmwf.int/wiki/display/UDOC/User+Documentation>`_
+    `General user documentation <https://software.ecmwf.int/wiki/display/UDOC/User+Documentation>`_
     
-    `Software Support <https://confluence.ecmwf.int/display/SUP>`_
+    `Software support <https://confluence.ecmwf.int/display/SUP>`_
 
 MARS
     `MARS user documentation <https://confluence.ecmwf.int//display/UDOC/MARS+user+documentation>`_
     
-    `MARS Keywords <https://software.ecmwf.int/wiki/display/UDOC/MARS+keywords>`_
+    `MARS keywords <https://software.ecmwf.int/wiki/display/UDOC/MARS+keywords>`_
     
-    `MARS Content <https://confluence.ecmwf.int/display/UDOC/MARS+content>`_
+    `MARS content <https://confluence.ecmwf.int/display/UDOC/MARS+content>`_
     
-    `MARS Actions <https://confluence.ecmwf.int/display/UDOC/MARS+actions>`_
+    `MARS actions <https://confluence.ecmwf.int/display/UDOC/MARS+actions>`_
     
-    `Parameter Database <https://apps.ecmwf.int/codes/grib/param-db>`_
+    `Parameter database <https://apps.ecmwf.int/codes/grib/param-db>`_
   
 Registration
-    `Contact of Computing Representative's <https://www.ecmwf.int/en/about/contact-us/computing-representatives>`_
+    `Contacts of Computing Representatives <https://www.ecmwf.int/en/about/contact-us/computing-representatives>`_
 
     `Public registration for ECMWF Web API <https://software.ecmwf.int/wiki/display/WEBAPI/Access+MARS>`_
         
-    `CDS Registration <https://cds.climate.copernicus.eu/user/register>`_
+    `CDS registration <https://cds.climate.copernicus.eu/user/register>`_
 
-Available Member State Datasets
-    `Web Interface for accessing member state datasets <http://apps.ecmwf.int/archive-catalogue/>`_
+Member-State data sets available
+    `Web interface for accessing member-state data sets <http://apps.ecmwf.int/archive-catalogue/>`_
     
-    `Available datasets for member state users <https://www.ecmwf.int/en/forecasts/datasets>`_
+    `Data sets available for member-state users <https://www.ecmwf.int/en/forecasts/datasets>`_
     
-Available Public Datasets
-    `Web Interface for accessing public datasets <http://apps.ecmwf.int/datasets/>`_
+Public data sets available
+    `Web interface for accessing public data sets <http://apps.ecmwf.int/datasets/>`_
     
-    `ECMWF's public datasets <https://confluence.ecmwf.int/display/WEBAPI/Available+ECMWF+Public+Datasets>`_
+    `ECMWF's public data sets <https://confluence.ecmwf.int/display/WEBAPI/Available+ECMWF+Public+Datasets>`_
     
-    `Public dataset Licences <https://software.ecmwf.int/wiki/display/WEBAPI/Available+ECMWF+Public+Datasets>`_
+    `Public data set licences <https://software.ecmwf.int/wiki/display/WEBAPI/Available+ECMWF+Public+Datasets>`_
     
-    `ERA5 public dataset Licence <https://cds.climate.copernicus.eu/cdsapp#!/search?type=dataset>`_
+    `ERA5 public dataset licence <https://cds.climate.copernicus.eu/cdsapp#!/search?type=dataset>`_
 
 
 Datasets
     Overview
-        `Complete list of datasets <https://www.ecmwf.int/en/forecasts/datasets>`_
+        `Complete list of data sets <https://www.ecmwf.int/en/forecasts/datasets>`_
                 
         `What is climate reanalysis <https://www.ecmwf.int/en/research/climate-reanalysis>`_
         
@@ -56,7 +56,7 @@ Datasets
     Real-time (Operational)        
         `List of real_time datasets <https://www.ecmwf.int/en/forecasts/datasets/catalogue-ecmwf-real-time-products>`_
 
-        `Atmospheric model - HRES (our typical operational dataset) <https://www.ecmwf.int/en/forecasts/datasets/set-i>`_
+        `Atmospheric model - HRES (typical operational dataset) <https://www.ecmwf.int/en/forecasts/datasets/set-i>`_
         
         `Atmospheric model - ENS (15-day ensemble forecast) <https://www.ecmwf.int/en/forecasts/datasets/set-iii>`_
 
@@ -65,12 +65,12 @@ Datasets
         
         `ERA-Interim documentation <https://www.ecmwf.int/en/elibrary/8174-era-interim-archive-version-20>`_
     
-        `ERA-Interim dataset <https://www.ecmwf.int/en/forecasts/datasets/archive-datasets/reanalysis-datasets/era-interim>`_
+        `ERA-Interim data set <https://www.ecmwf.int/en/forecasts/datasets/archive-datasets/reanalysis-datasets/era-interim>`_
     
     CERA-20C
         `What is CERA-20C <https://software.ecmwf.int/wiki/display/CKB/What+is+CERA-20C>`_
         
-        `CERA-20C dataset <https://www.ecmwf.int/en/forecasts/datasets/archive-datasets/reanalysis-datasets/cera-20c>`_
+        `CERA-20C data set <https://www.ecmwf.int/en/forecasts/datasets/archive-datasets/reanalysis-datasets/cera-20c>`_
             
     ERA5
         `What is ERA5 <https://software.ecmwf.int/wiki/display/CKB/What+is+ERA5>`_
@@ -83,8 +83,8 @@ Datasets
         
         `ERA5 Documentation <https://software.ecmwf.int/wiki/display/CKB/ERA5+data+documentation>`_        
 
-Third Party Libraries
-    `ECMWF Web API Home <https://software.ecmwf.int/wiki/display/WEBAPI/ECMWF+Web+API+Home>`_
+Third-party libraries
+    `ECMWF Web API home <https://software.ecmwf.int/wiki/display/WEBAPI/ECMWF+Web+API+Home>`_
 
     `Building ECMWF software with gfortran <https://software.ecmwf.int/wiki/display/SUP/2015/05/11/Building+ECMWF+software+with+gfortran>`_
     
@@ -102,18 +102,18 @@ Plotting GRIB fields:
     `Example Python script to plot GRIB files <https://software.ecmwf.int/wiki/display/CKB/How+to+plot+GRIB+files+with+Python+and+matplotlib>`_
     
 
-Scientific Information
-    `Octahedral reduced Gaussian Grid <https://confluence.ecmwf.int/display/FCST/Introducing+the+octahedral+reduced+Gaussian+grid>`_
+Scientific information
+    `Octahedral reduced Gaussian grid <https://confluence.ecmwf.int/display/FCST/Introducing+the+octahedral+reduced+Gaussian+grid>`_
     
     `Precipitation <https://www.ecmwf.int/en/newsletter/147/meteorology/use-high-density-observations-precipitation-verification>`_
 
 
-Technical Information of ECMWF serves
+Technical information for ECMWF servers
 
-    `Introduction presentation to SLURM  <https://confluence.ecmwf.int/download/attachments/73008494/intro-slurm-2017.pdf?version=1&modificationDate=1488574096323&api=v2>`_
+    `Introductory presentation of SLURM <https://confluence.ecmwf.int/download/attachments/73008494/intro-slurm-2017.pdf?version=1&modificationDate=1488574096323&api=v2>`_
 
-Troubleshooting
-    `ECMWF Web API Troubleshooting <https://confluence.ecmwf.int/display/WEBAPI/Web-API+Troubleshooting>`_
+Troubleshooting
+    `ECMWF Web API troubleshooting <https://confluence.ecmwf.int/display/WEBAPI/Web-API+Troubleshooting>`_
 
 
 
diff --git a/For_developers/Sphinx/source/Ecmwf/hintsecmwf.rst b/For_developers/Sphinx/source/Ecmwf/hintsecmwf.rst
index e601cffa1e17f1976883f8baa2d1969ad3e579cf..ac93b8ca1dca31c26c6db9810e9a5f12ab491a18 100644
--- a/For_developers/Sphinx/source/Ecmwf/hintsecmwf.rst
+++ b/For_developers/Sphinx/source/Ecmwf/hintsecmwf.rst
@@ -1,5 +1,5 @@
 ##################################
-Hints to specify dataset retrieval
+Hints for data set selection
 ##################################
 
 .. contents::
@@ -7,13 +7,13 @@ Hints to specify dataset retrieval
 
 
 
-How can I find out what data is available?
+How can I find out what data are available?
 ==========================================
 
-Go to the `Web Interface for accessing member state datasets <http://apps.ecmwf.int/archive-catalogue/>`_
+Go to the `Web Interface for accessing member-state data sets <http://apps.ecmwf.int/archive-catalogue/>`_
 and click yourself through the steps to define your set of data and see what is available to you.
 
-For public users there is  the `Web Interface for accessing public datasets <http://apps.ecmwf.int/datasets/>`_.
+For public users, there is the `Web Interface for accessing public data sets <http://apps.ecmwf.int/datasets/>`_.
 
 
 
diff --git a/For_developers/Sphinx/source/Ecmwf/msdata.rst b/For_developers/Sphinx/source/Ecmwf/msdata.rst
index da321a9807365d90f16d463601f1381c390718da..009dbc2ccea125686c1dd61d87d5bb496c0318b8 100644
--- a/For_developers/Sphinx/source/Ecmwf/msdata.rst
+++ b/For_developers/Sphinx/source/Ecmwf/msdata.rst
@@ -1,16 +1,16 @@
-#########################################
-Available Datasets for Member State users
-#########################################
+##########################################
+Available data sets for member-state users
+##########################################
 
 
 
-Model level data
+Model-level data
 ================
 
 .. figure:: ../_files/ECMWF_FPparameter_ml.png
 
 
-Surface level data
+Surface data
 ==================
 
 .. figure:: ../_files/ECMWF_FPparameter_sfc-0.png
diff --git a/For_developers/Sphinx/source/Ecmwf/pubdata.rst b/For_developers/Sphinx/source/Ecmwf/pubdata.rst
index 277b09106f662789f45d2d578b1d25390175b0d5..51a09bbcb991ebb0b8a8249f593c54f0c7447250 100644
--- a/For_developers/Sphinx/source/Ecmwf/pubdata.rst
+++ b/For_developers/Sphinx/source/Ecmwf/pubdata.rst
@@ -1,7 +1,7 @@
-Available Datasets for Public users
-***********************************
+Available data sets for public users
+************************************
 
-  IN PREPARATION
+  UNDER PREPARATION
 
 
 .. toctree::
diff --git a/For_developers/Sphinx/source/Evaluation/staticcode.rst b/For_developers/Sphinx/source/Evaluation/staticcode.rst
index 52e54138a51157463e6603d3fa443d0c7b1f580e..a90b9bfcb77af67cbb33e6d417231d064a73c5fd 100644
--- a/For_developers/Sphinx/source/Evaluation/staticcode.rst
+++ b/For_developers/Sphinx/source/Evaluation/staticcode.rst
@@ -1,5 +1,5 @@
 ********************
-Static Code Analysis
+Static code analysis
 ********************
 
    UNDER CONSTRUCTION
diff --git a/For_developers/Sphinx/source/Evaluation/testcases.rst b/For_developers/Sphinx/source/Evaluation/testcases.rst
index 4855aa703ec7129d8ee536afc7dab6f715314152..825995612f9e9199b1c52a8a4f9c5fbcb72d9b18 100644
--- a/For_developers/Sphinx/source/Evaluation/testcases.rst
+++ b/For_developers/Sphinx/source/Evaluation/testcases.rst
@@ -1,5 +1,5 @@
 ********************
-Testcases
+Test cases
 ********************
 
 
@@ -10,7 +10,7 @@ Comparison of MARS requests
 
 
 
-Comparison of grib files
+Comparison of GRIB files
 ========================
 
   UNDER CONSTRUCTION
diff --git a/For_developers/Sphinx/source/Installation/local.rst b/For_developers/Sphinx/source/Installation/local.rst
index cf86470b625d135acfea6f15db98b58daf924f0a..6a7cecca50ad076461827a9ffc7898e1370965d8 100644
--- a/For_developers/Sphinx/source/Installation/local.rst
+++ b/For_developers/Sphinx/source/Installation/local.rst
@@ -50,33 +50,34 @@ Local mode - dependencies
 
 The installation is the same for the access modes **member** and **public**.
 
-The environment on your local system has to provide these software packages
+The environment on your local system has to provide the following software packages
 and libraries, since the preparation of the extraction and the post-processing is done on the local machine:
 
-+------------------------------------------------+-----------------+
-|  Python part                                   | Fortran part    |
-+------------------------------------------------+-----------------+
-| * `Python3`_                                   | * `gfortran`_   |
-| * `numpy`_                                     | * `fftw3`_      |
-| * `genshi`_                                    | * `eccodes`_    |
-| * `eccodes for python`_                        | * `emoslib`_    |
-| * `ecmwf-api-client`_ (everything except ERA5) |                 |
-| * `cdsapi`_ (just for ERA5 and member user)    |                 |
-+------------------------------------------------+-----------------+
++-------------------------------------------------+-----------------+
+|  Python part                                    | Fortran part    |
++-------------------------------------------------+-----------------+
+| 1. `Python3`_                                   | 1. `gfortran`_  |
+| 2. `numpy`_                                     | 2. `fftw3`_     |
+| 3. `genshi`_                                    | 3. `eccodes`_   |
+| 4. `eccodes for python`_                        | 4. `emoslib`_   |
+| 5. `ecmwf-api-client`_ (everything except ERA5) |                 |
+| 6. `cdsapi`_ (just for ERA5 and member user)    |                 |
++-------------------------------------------------+-----------------+
 
 
 .. _ref-prep-local:
 
-Prepare local environment
-=========================
+Preparing the local environment
+===============================
 
-The easiest way to install all required packages is to use the package management system of your Linux distribution  which requires admin rights.
+The easiest way to install all required packages is to use the package management system of your Linux distribution, which requires admin rights.
 The installation was tested on a *Debian GNU/Linux buster* and an *Ubuntu 18.04 Bionic Beaver* system.
 
 .. code-block:: sh
 
-  # On a Debian or Debian-derived sytem (e. g. Ubuntu) system you may use the following commands (or equivalent commands of your preferred package manager):
-  # (if not already available):
+  # On a Debian or Debian-derived (e. g. Ubuntu) system,
+  # you may use the following commands (or equivalent commands of your preferred package manager):
+  # (if respective packages are not already available):
    apt-get install python3 (usually already available on GNU/Linux systems)
    apt-get install python3-eccodes
    apt-get install python3-genshi
@@ -85,46 +86,47 @@ The installation was tested on a *Debian GNU/Linux buster* and an *Ubuntu 18.04
    apt-get install fftw3-dev 
    apt-get install libeccodes-dev
    apt-get install libemos-dev 
-  # Some of these packages will pull in further packages as dependencies. This is fine, and some are even needed by ``flex_extract''.
-  
+  # Some of these packages will pull in further packages as dependencies. 
+  # This is fine, and some are even needed by flex_extract.
 
-  # As currently the CDS and ECMWF API packages are not available as Debian packages, they need to be installed outside of the Debian (Ubuntu etc.) package management system. The recommended way is:
+  # As currently the CDS and ECMWF API packages are not available as Debian packages,
+  # they need to be installed outside of the Debian (Ubuntu etc.) package management system. 
+  # The recommended way is:
    apt-get install pip
    pip install cdsapi 
    pip install ecmwf-api-client 
    
 .. note::
 
-    In case you would like to use Anaconda Python we recommend you follow the installation instructions of 
-    `Anaconda Python Installation for Linux <https://docs.anaconda.com/anaconda/install/linux/>`_ and then install the
-    ``eccodes`` package from ``conda`` with:
+    If you are using Anaconda Python, we recommend to follow the installation instructions of 
+    `Anaconda Python Installation for Linux <https://docs.anaconda.com/anaconda/install/linux/>`_ 
+    and then install the ``eccodes`` package from ``conda`` with:
 
     .. code-block:: bash
 
        conda install conda-forge::python-eccodes   
    
-The CDS API (cdsapi) is required for ERA5 data and the ECMWF Web API (ecmwf-api-client) for all other public datasets.   
+The CDS API (``cdsapi``) is required for ERA5 data, and the ECMWF Web API (``ecmwf-api-client``) for all other public data sets.
     
 .. note:: 
 
-    Since **public users** currently don't have access to the full *ERA5* dataset they can skip the installation of the ``CDS API``. 
+    Since **public users** currently don't have access to the full *ERA5* dataset, they can skip the installation of the CDS API. 
 
-Both user groups have to provide keys with their credentials for the Web API's in their home directory. Therefore, follow these instructions:
+Both user groups have to provide keys with their credentials for the Web APIs in their home directory, following these instructions:
        
 ECMWF Web API:
-   Go to `MARS access`_ website and log in with your credentials. Afterwards, on this site in section "Install ECMWF KEY" the key for the ECMWF Web API should be listed. Please follow the instructions in this section under 1 (save the key in a file `.ecmwfapirc` in your home directory). 
+   Go to the `MARS access`_ website and log in with your credentials. Afterwards, go to the section "Install ECMWF KEY", where the key for the ECMWF Web API should be listed. Please follow the instructions in this section under 1 (save the key in a file ``.ecmwfapirc`` in your home directory). 
      
 CDS API:
-   Go to `CDS API registration`_ and register there too. Log in at the `cdsapi`_ website and follow the instructions at section "Install the CDS API key" to save your credentials in a `.cdsapirc` file.
+   Go to `CDS API registration`_ and register there, too. Log in on the `cdsapi`_ website and follow the instructions in the section "Install the CDS API key" to save your credentials in file ``.cdsapirc``.
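+
+After following these instructions, the two key files should look roughly as
+follows. The values shown are placeholders, not working credentials; copy the
+exact content from the respective website.
+
+.. code-block:: bash
+
+   # ~/.ecmwfapirc
+   {
+       "url"   : "https://api.ecmwf.int/v1",
+       "key"   : "<your-api-key>",
+       "email" : "<your-registration-email>"
+   }
+
+   # ~/.cdsapirc
+   url: https://cds.climate.copernicus.eu/api/v2
+   key: <UID>:<your-api-key>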
 
    
 .. _ref-test-local:
    
-Test local environment
-======================
-
-Check the availability of the python packages by typing ``python3`` in a terminal window and run the ``import`` commands in the python shell. If there are no error messages, you succeeded in setting up the environment.
+Testing the local environment
+=============================
 
+Check the availability of the Python packages by typing ``python3`` in a terminal window and run the ``import`` commands in the Python shell:
+
 .. code-block:: python
     
    # check in python3 console
@@ -134,10 +136,11 @@ Check the availability of the python packages by typing ``python3`` in a termina
    import cdsapi
    import ecmwfapi
    
+If there are no error messages, you succeeded in setting up the environment.
 
 
-Test the Web API's
-------------------
+Testing the Web APIs
+--------------------
 
 You can start very simple test retrievals for both Web APIs to be sure that everything works. This is recommended to minimise the range of possible errors using ``flex_extract`` later on.
 
@@ -147,7 +150,7 @@ ECMWF Web API
 
 
 +----------------------------------------------------------+----------------------------------------------------------+
-|Please use this piece of Python code for **Member user**: |Please use this piece of Python code for **Public user**: |
+|Please use this Python code snippet as a **Member user**: |Please use this Python code snippet as a **Public user**: |
 +----------------------------------------------------------+----------------------------------------------------------+
 |.. code-block:: python                                    |.. code-block:: python                                    |
 |                                                          |                                                          |
@@ -177,7 +180,7 @@ CDS API
 
 Extraction of ERA5 data via CDS API might take time as currently there is a high demand for ERA5 data. Therefore, as a simple test for the API just retrieve pressure-level data (even if that is NOT what we need for FLEXPART), as they are stored on disk and don't need to be retrieved from MARS (which is the time-consuming action): 
 
-Please use this piece of Python code to retrieve a small sample of *ERA5* pressure levels:
+Please use the following Python code snippet to retrieve a small sample of *ERA5* pressure level data:
 
 .. code-block:: python
 
@@ -203,7 +206,7 @@ If you know that your CDS API works, you can try to extract some data from MARS.
 
 .. **Member-state user**
 
-Please use this piece of Python code to retrieve a small *ERA5* data sample as a **member-state user**! The **Public user** do not have access to the full *ERA5* dataset!
+Please use the following Python code snippet to retrieve a small *ERA5* data sample as a **member-state user**. **Public users** do not have access to the full *ERA5* dataset!
 
 .. code-block:: python
 
@@ -267,26 +270,22 @@ Please use this piece of Python code to retrieve a small *ERA5* data sample as a
 Local installation
 ==================
 
-First prepare the Fortran ``makefile`` for your environment and set it in the ``setup.sh`` script. (See section :ref:`Fortran Makefile <ref-convert>` for more information.)
-``flex_extract`` comes with two ``makefiles`` prepared for the ``gfortran`` compiler. One for the normal use ``makefile_fast`` and one for debugging ``makefile_debug`` which is usually only resonable for developers.
- 
-They assume that ``eccodes`` and ``emoslib`` are installed as distribution packages and can be found at ``flex_extract_vX.X/Source/Fortran``, where ``vX.X`` should be substituted with the current version number.
+First, adapt the Fortran ``makefile`` for your environment (if necessary) and set its name in the ``setup.sh`` script (see :ref:`Fortran Makefile <ref-convert>` for more information).
+The makefiles can be found at ``flex_extract_vX.X/Source/Fortran/``, where ``vX.X`` should be substituted by the current ``flex_extract`` version number.
 
 .. caution::   
    It is necessary to adapt **ECCODES_INCLUDE_DIR** and **ECCODES_LIB** in these
    ``makefiles`` if other than standard paths are used.
 
-So starting from the root directory of ``flex_extract``, 
-go to the ``Fortran`` source directory and open the ``makefile`` of your 
-choice to modify with an editor of your choice. We use the ``nedit`` in this case.
+Thus, go to the ``Fortran`` source directory, open the ``makefile`` of your 
+choice, and check / modify it with an editor:
 
 .. code-block:: bash 
 
    cd flex_extract_vX.X/Source/Fortran
    nedit makefile_fast
  
-Edit the paths to the ``eccodes`` library on your local machine. 
-
+Set the paths to the ``eccodes`` library on your local machine, if necessary.
 
 .. caution::
    This can vary from system to system. 
@@ -301,22 +300,22 @@ Edit the paths to the ``eccodes`` library on your local machine.
       
    to find out the path to the ``eccodes`` library.
    
-Substitute these paths in the ``makefile`` for parameters **ECCODES_INCLUDE_DIR**
-and **ECCODES_LIB** and save it.
+Assign these paths to the parameters **ECCODES_INCLUDE_DIR**
+and **ECCODES_LIB** in the makefile, and save it.
 
 .. code-block:: bash
 
-   # these are the paths on a current Debian 10 Testing system (May 2019)
+   # these are the paths on Debian Buster:
    ECCODES_INCLUDE_DIR=/usr/lib/x86_64-linux-gnu/fortran/gfortran-mod-15/
    ECCODES_LIB= -L/usr/lib -leccodes_f90 -leccodes -lm  
    
     
 The Fortran program called ``calc_etadot`` will be compiled during the 
-installation process.Therefore the name of the ``makefile`` to be used needs to be given in  ``setup.sh``.
+installation process. Therefore, the name of the ``makefile`` to be used needs to be given in  ``setup.sh``.
 
 In the root directory of ``flex_extract``, open the ``setup.sh`` script 
-and adapt the installation parameters in the section labelled with 
-"AVAILABLE COMMANDLINE ARGUMENTS TO SET" like shown below.
+with an editor and adapt the installation parameters in the section labelled with 
+"AVAILABLE COMMANDLINE ARGUMENTS TO SET" as shown below:
 
 
 .. code-block:: bash
diff --git a/For_developers/Sphinx/source/dev_guide.rst b/For_developers/Sphinx/source/dev_guide.rst
index 05cde2cccd2032d6d12935156d3b5259a7f58e2d..69f585e106015565edfcee0ddf105b02b00dd4ae 100644
--- a/For_developers/Sphinx/source/dev_guide.rst
+++ b/For_developers/Sphinx/source/dev_guide.rst
@@ -5,9 +5,9 @@ Developer Guide
     
 .. note::
 
-  This section still needs to be done.
+  This section still needs to be written.
     
-.. repository (how /who manages the code, where to get)
+.. repository (how / who manages the code, where to get)
     
     
 .. toctree::
diff --git a/For_developers/Sphinx/source/documentation.rst b/For_developers/Sphinx/source/documentation.rst
index b6f78f596f5a67e82f85ee490e9c231e9f8e07e6..c7c8fc782cb867a7705224d60087031aeba39e2b 100644
--- a/For_developers/Sphinx/source/documentation.rst
+++ b/For_developers/Sphinx/source/documentation.rst
@@ -2,21 +2,21 @@
 Documentation
 *************
         
-    Overview (Under construction)
+    Overview (under construction)
       
-    Control & Input Data 
+    Control & input data 
     
-    Output Data (Under construction)
+    Output data (under construction)
     
-    Disaggregation of Flux Data (Under construction)
+    Disaggregation of flux data (under construction)
     
-    Vertical Coordinate (Under construction)
-      - Methods (GAUSS, ETA, OMEGA)
+    Vertical coordinate (under construction)
+      - methods (GAUSS, ETA, OMEGA)
       - calc_etadot 
     
-    Auto Generated Documentation
+    Auto-generated documentation
       - Python
-      - Fortran (Under construction)
+      - Fortran (under construction)
 
     
 .. toctree::
diff --git a/For_developers/Sphinx/source/ecmwf_data.rst b/For_developers/Sphinx/source/ecmwf_data.rst
index 35ddb57bb0566255586d1cd9088d6d48b215dd71..cfaacf79fdafa1132b08baf751dee3cecec4ee04 100644
--- a/For_developers/Sphinx/source/ecmwf_data.rst
+++ b/For_developers/Sphinx/source/ecmwf_data.rst
@@ -6,29 +6,29 @@ ECMWF Data
 .. _Member States: https://www.ecmwf.int/en/about/who-we-are/member-states
 
 
-The European Centre for Medium-Range Weather Forecasts (`ECMWF`_), based in Reading, UK, is an independent intergovernmental organisation supported by 34 states. It is both a research institute and a full time operational service. It produces global numerical weather predictions and some other data which is fully available to the national meteorological services in the `Member States`_, Co-operating States and the broader community. Especially, the published re-analysis datasets are made available to the public with some limits in specific datasets.
+The European Centre for Medium-Range Weather Forecasts (`ECMWF`_), based in Reading, UK, is an independent intergovernmental organisation supported by 34 states. It is both a research institute and a 24/7 operational service. It produces global numerical weather predictions and other data which are fully available to the national meteorological services in the `Member States`_, Co-operating States, and to some extent to the broader community. Specifically, re-analysis data sets are made available to the public, albeit with some limitations for specific data sets.
 
-The amount and structure of the available data from ECMWF is very complex. The operational data changes regularly in time and spatial resolution, physics and parameter. This has to be taken into account carefully and each user has to investigate his dataset of interest carefully before selecting and retrieving it with ``flex_extract``.
-The re-analysis datasets are consistent in all the above mentioned topics over their whole period but they have each their own specialities which makes treatment with ``flex_extract`` special in some way. For example, they have different starting times for their forecasts or different parameter availability. They also have differences in time and spatial resolution and most importantly for ``flex_extract`` they are different in the way of providing the vertical coordinate. 
+There is a vast amount of data with a complex structure available from ECMWF. The operational data undergo changes with respect to temporal and spatial resolution, model physics, and the parameters available. This has to be taken into account carefully, and every user should have a clear idea of the data set intended to be used before retrieving it with ``flex_extract``.
+Each re-analysis data set is homogeneous with respect to resolution etc., but the different re-analyses all have specific properties which require a corresponding treatment with ``flex_extract``. For example, the starting times of the forecasts may be different, or the availability of parameters (model output variables) may vary. They also differ in their temporal and spatial resolution, and - most importantly for ``flex_extract`` - in the way the vertical wind component can be accessed. 
 
-There is much to learn from ECMWF and their datasets and data handling and this might be confusing at first. We therefore collected the most important information for ``flex_extract`` users. In the following sections the user can use them to get to know enough to understand how ``flex_extract`` is best used and to select the parameters of the ``CONTROL`` files. 
+As there is much to learn about ECMWF and its data sets and data handling, it might be confusing at first. Therefore, we have here collected the information which is most important for ``flex_extract`` users. Study the following sections to learn how ``flex_extract`` is best used, and to select the right parameters in the ``CONTROL`` files. 
 
 
 :doc:`Ecmwf/access`
-    Description of available access methods to the ECMWF data.
+    Description of available methods to access the ECMWF data.
 
 :doc:`Ecmwf/msdata`
-    Information about available data and parameters for member state users which can be retrieved with ``flex_extract``
+    Information about available data and parameters for member-state users which can be retrieved with ``flex_extract``
 
 :doc:`Ecmwf/pubdata`
-    Information about available data and parameters for the public datasets which can be retrieved with ``flex_extract``
+    Information about available data and parameters for the public data sets which can be retrieved with ``flex_extract``
 
 :doc:`Ecmwf/hintsecmwf`
-    Collection of hints to best find information to define the dataset for retrievement and
-    to define the ``CONTROL`` files.
+    Collection of hints to best find information to define the data set for retrieval, and
+    to define the content of the ``CONTROL`` files.
 
 :doc:`Ecmwf/ec-links`
-    Link collection for additional and useful information as well as references to specific dataset publications.
+    Link collection for additional and useful information as well as references to publications on specific data sets.
 
 
 .. toctree::
diff --git a/For_developers/Sphinx/source/evaluation.rst b/For_developers/Sphinx/source/evaluation.rst
index 65ef79934adc72117f61376f7af16e32c0486879..2987b908c7eefad56f844ae4bb4253c5c40c2400 100644
--- a/For_developers/Sphinx/source/evaluation.rst
+++ b/For_developers/Sphinx/source/evaluation.rst
@@ -5,8 +5,8 @@ Evaluation
     
 .. note::
 
-  This section in the online documentation still needs to be done.
-  Currently, evaluation methods and information can be found in the `flex_extract discussion paper <https://www.geosci-model-dev-discuss.net/gmd-2019-358/>`_ of the Geoscientific Model Development journal.
+  This section still needs to be written.
+  Currently, evaluation methods can be found in the `flex_extract discussion paper <https://www.geosci-model-dev-discuss.net/gmd-2019-358/>`_ of the journal Geoscientific Model Development.
   
   
     
diff --git a/For_developers/Sphinx/source/quick_start.rst b/For_developers/Sphinx/source/quick_start.rst
index 97d70b4d88a68ab5cac9c1dde5094e6a51dcbaa3..64394d0b4c038a83813087d7baab0dd7b97fb5e1 100644
--- a/For_developers/Sphinx/source/quick_start.rst
+++ b/For_developers/Sphinx/source/quick_start.rst
@@ -262,8 +262,8 @@ The next level of differentiation would be the field type, level type and time p
     CONTROL_OD.OPER.FC.eta.highres  
     CONTROL_OD.OPER.FC.gauss.highres  
     CONTROL_OD.OPER.FC.operational            
-    CONTROL_OD.OPER.FC.twiceaday.1hourly
-    CONTROL_OD.OPER.FC.twiceaday.3hourly
+    CONTROL_OD.OPER.FC.twicedaily.1hourly
+    CONTROL_OD.OPER.FC.twicedaily.3hourly
     
     
 
@@ -276,21 +276,20 @@ The main differences and features in the datasets are listed in the table shown
 
 
                     
-A common problem for beginners in retrieving ECMWF datasets is the mismatch in the definition of these parameters. For example, if you would like to retrieve operational data before ``June 25th 2013`` and set the maximum level to ``137`` you will get an error because this number of levels was first introduced at this effective day. So, be cautious in the combination of space and time resolution as well as the field types which are not available all the time. 
+A common problem for beginners in retrieving ECMWF datasets is a mismatch in the choice of values for these parameters. For example, if you try to retrieve operational data for 24 June 2013 or earlier and set the maximum level to 137, you will get an error because this number of levels was introduced only on 25 June 2013. Thus, be careful in the combination of space and time resolution as well as the field types. 
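As an illustration of such a date-dependent constraint, consider the following sketch (a hypothetical helper, not part of ``flex_extract``; it assumes the 91-level model version was in use immediately before the switch to 137 levels):

```python
from datetime import date

# Hypothetical helper, not part of flex_extract: the 137-level model
# version became operational on 25 June 2013 (91 levels in the years
# immediately before).
L137_START = date(2013, 6, 25)

def max_model_levels(d: date) -> int:
    """Maximum number of model levels available for operational data on date d."""
    return 137 if d >= L137_START else 91

print(max_model_levels(date(2013, 6, 24)))  # 91
print(max_model_levels(date(2013, 6, 25)))  # 137
```

A request with ``LEVEL 137`` for a date before 25 June 2013 would therefore be rejected by MARS.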
 
 
 .. note::
 
-    Sometimes it might not be clear how specific parameters in the control file must be set in terms of format. Please see the description of the parameters in section `CONTROL parameters <Documentation/Input/control_params.html>`_ or have a look at the ECMWF user documentation for `MARS keywords <https://confluence.ecmwf.int/display/UDOC/MARS+keywords>`_
+    Sometimes it might not be clear how specific parameters in the control file must be set in terms of format. Please consult the description of the parameters in section `CONTROL parameters <Documentation/Input/control_params.html>`_ or have a look at the ECMWF user documentation for `MARS keywords <https://confluence.ecmwf.int/display/UDOC/MARS+keywords>`_.
 
-
-In the following we shortly discuss the main retrieval opportunities of the different datasets and  categoize the ``CONTROL`` files.    
+In the following, we briefly discuss the typical retrievals for the different datasets and point to the respective ``CONTROL`` files.    
                     
      
 Public datasets
 ---------------         
 
-The main difference in the definition of a ``CONRTOL`` file for a public dataset is the setting of the parameter ``DATASET``. This specification enables the selection of a public dataset in MARS. Otherwise the request would not find the dataset.
+The main characteristic in the definition of a ``CONTROL`` file for a public dataset is the parameter ``DATASET``. Its specification enables the selection of a public dataset in MARS. Without this parameter, the request would not find the dataset.
 For the two public datasets *CERA-20C* and *ERA-Interim* an example file with the ending ``.public`` is provided and can be used straightaway. 
 
 .. code-block:: bash
@@ -298,50 +297,50 @@ For the two public datasets *CERA-20C* and *ERA-Interim* an example file with th
     CONTROL_CERA.public  
     CONTROL_EI.public      
 
-For *CERA-20C* it seems that there are no differences in the dataset against the full dataset, while the *public ERA-Interim* has only analysis fields every 6 hour without filling forecasts in between for model levels. Therefore it is only possible to retrieve 6-hourly data for *public ERA-Interim*.
+For *CERA-20C*, it seems that there are no differences compared to the full dataset, whereas the *public ERA-Interim* has only 6-hourly analysis fields on model levels, without forecasts to fill in between. Therefore, it is only possible to retrieve 6-hourly data for *public ERA-Interim*.
 
 .. note:: 
 
-    In general, *ERA5* is a public dataset. However, since the model levels are not yet publicly available, it is not possible to retrieve *ERA5* data to drive the ``FLEXPART`` model. As soon as this is possible it will be announced at the community website and per newsletter. 
+    In principle, *ERA5* is a public dataset. However, since the model levels are not yet publicly available, it is not possible to retrieve *ERA5* data to drive the ``FLEXPART`` model. As soon as this is possible it will be announced at the community website and on the FLEXPART user email list. 
                      
 
 CERA
 ----
 
-For this dataset it is important to keep in mind that the dataset is available for the period 09/1901 until 12/2010 and the temporal resolution is limited to 3-hourly fields. 
-It is also a pure ensemble data assimilation dataset and is stored under the ``enda`` stream. It has ``10`` ensemble members. The example ``CONTROL`` files will only select the first member (``number=0``). You may change this to another number or a list of numbers (e.g. ``NUMBER 0/to/10``).
-Another important difference to all other datasets is the forecast starting time which is 18 UTC. Which means that the forecast in *CERA-20C* for flux fields is  12 hours long. Since the forecast extends over a single day we need to extract one day in advance and one day subsequently. This is automatically done in ``flex_extract``. 
+For this dataset, it is important to keep in mind that it is available for the period 09/1901 until 12/2010, and that the temporal resolution is limited to 3 h. 
+It is also a pure ensemble data assimilation dataset and is stored under the ``enda`` stream. There are 10 ensemble members. The example ``CONTROL`` files will only select the first member (``number=0``). You may change this to another number or a list of numbers (e.g. ``NUMBER 0/to/10``).
+Another important difference to all other datasets is that the forecast starting time is 18 UTC. This means that the forecasts for flux fields cover 12 hours and extend across the date boundary into the next day. Therefore, one day before and one day after the period of interest have to be extracted; this is done automatically in ``flex_extract``. 
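For instance, to retrieve the control run together with all ensemble members, the corresponding ``CONTROL`` file entry could look like this (fragment only; see the provided ``CONTROL_CERA`` example files for the full set of parameters):

```bash
# control run (0) plus all 10 ensemble members
NUMBER 0/to/10
```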
 
 
 ERA 5
 -----
 
-This is the newest re-analysis dataset and has a temporal resolution of 1-hourly analysis fields. Up to date it is available until April 2019 with regular release of new months. 
-The original horizontal resolution is ``0.28125°`` which needs some caution in the definition of the domain, since the length of the domain in longitude or latitude direction  must be an exact multiple of the resolution. It might be easier for users to use ``0.25`` for the resolution which MARS will automatically interpolate. 
-The forecast starting time is ``06/18 UTC`` which is important for the flux data. This should be set in the ``CONTROL`` file via the ``ACCTIME 06/18`` parameter in correspondence with ``ACCMAXSTEP 12`` and ``ACCTYPE FC``. 
+This is the latest re-analysis dataset, and has a temporal resolution of 1 hour (analysis fields). At the time of writing, it is available until April 2019, with new months being released regularly. 
+The original horizontal resolution is 0.28125°, which requires some caution in the definition of the domain, since the length of the domain in longitude or latitude direction must be an integer multiple of the resolution. It is also possible to use ``0.25`` for the resolution; MARS will then automatically interpolate to this resolution, which is still close enough to be acceptable.
+The forecast starting time is ``06/18 UTC`` which is important for the flux data. Correspondingly, one should set in the ``CONTROL`` file ``ACCTIME 06/18``, ``ACCMAXSTEP 12``, and ``ACCTYPE FC``. 
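The corresponding ``CONTROL`` file entries for the flux fields would then read (fragment only; the remaining parameters are as in the provided *ERA5* example file):

```bash
# accumulated flux fields: forecasts starting at 06 and 18 UTC,
# with steps of up to 12 hours
ACCTYPE FC
ACCTIME 06/18
ACCMAXSTEP 12
```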
 
 .. note::
 
-    We know that *ERA5* also has an ensemble data assimilation system but this is not yet retrievable with ``flex_extract`` since the deaccumulation of the flux fields works differently in this stream. Ensemble retrieval for *ERA5* is a future ToDo.
+    *ERA5* also includes an ensemble data assimilation system but related fields are not yet retrievable with ``flex_extract`` since the deaccumulation of the flux fields works differently in this stream. Ensemble field retrieval for *ERA5* is a *to-do* for the future.
 
 
 
 ERA-Interim
 -----------
 
-This re-analysis dataset will exceed its end of production at 31st August 2019!
-It is then available from 1st January 1979 to 31st August 2019. The ``etadot`` is not available in this dataset. Therefore ``flex_extract`` must select the ``GAUSS`` parameter to retrieve the divergence field in addition. The vertical velocity is the calculated with the continuity equation in the Fortran program ``calc_etadot``. Since the analysis fields are only available for every 6th hour, the dataset can be made 3 hourly by adding forecast fields in between. No ensemble members are available.
-
+The production of this re-analysis dataset ended on 31 August 2019!
+It is available for the period from 1 January 1979 to 31 August 2019. The ``etadot`` parameter is not available in this dataset. Therefore, one must use the ``GAUSS`` parameter, which retrieves the divergence field in addition and calculates the vertical velocity from the continuity equation in the Fortran program ``calc_etadot``. While the analysis fields are only available for every 6th hour, the dataset can be made 3-hourly by adding forecast fields in between. No ensemble members are available.
 
     
 Operational data
 ----------------
 
-This is the real time atmospheric model in high resolution with a 10-day forecast. This means it underwent regular adaptations and improvements over the years. Hence, retrieving data from this dataset needs extra attention in selecting correct settings of parameter. See :ref:`ref-tab-dataset-cmp` for the most important parameters. 
-Nowadays, it is available 1 hourly by filling the gaps of the 6 hourly analysis fields with 1 hourly forecast fields. Since 4th June 2008 the eta coordinate is directly available so that ``ETA`` should be set to ``1`` to save computation time. The horizontal resolution can be up to ``0.1°`` and in combination with ``137`` vertical levels can lead to troubles in retrieving this high resolution dataset in terms of job duration and quota exceedence. 
-It is recommended to submit such high resolution cases for single day retrievals (see ``JOB_CHUNK`` parameter in ``run.sh`` script) to avoid job failures due to exceeding limits.   
+This data set provides the output of the real-time atmospheric model runs in high resolution, including 10-day forecasts. The model undergoes frequent adaptations and improvements. Thus, retrieving data from this dataset requires extra attention in selecting correct settings of the parameters. See :ref:`ref-tab-dataset-cmp` for the most important parameters. 
+Currently, fields can be retrieved at 1 h temporal resolution by filling the gaps between analysis fields with 1-hourly forecast fields. Since 4 June 2008, the eta-coordinate vertical velocity has been directly available from MARS; therefore, ``ETA`` should be set to ``1`` to save computation time. The horizontal resolution can be up to ``0.1°``, which in combination with ``137`` vertical levels can lead to problems in terms of job duration and disk space quota.
+It is recommended to submit such high resolution cases as single day retrievals (see ``JOB_CHUNK`` parameter in ``run.sh`` script) to avoid job failures due to exceeding limits.   
 
-``CONTROL`` files for normal daily retrievals with a mix of analysis and forecast fields are listed below:
+``CONTROL`` files for standard retrievals with a mix of analysis and forecast fields are listed below:
 
 .. code-block:: bash
 
@@ -350,20 +349,18 @@ It is recommended to submit such high resolution cases for single day retrievals
     CONTROL_OD.OPER.FC.eta.highres  
     CONTROL_OD.OPER.FC.gauss.highres  
     
-These files defines the minimum number of parameters necessary to retrieve a daily subset. The setup of field types is optimal and should only be changed if the user understands what he does. The grid, domain and temporal resolution can be changed according to availability.      
+These files define the minimum number of parameters necessary to retrieve a daily subset. The given settings for the ``TYPE`` parameter are already optimised and should only be changed if you know what you are doing. Grid, domain, and temporal resolution may be changed according to availability.
     
 
-
 .. note:: 
 
-     Please see `Information about MARS retrievement <https://confluence.ecmwf.int/display/UDOC/Retrieve#Retrieve-Retrievalefficiency>`_ to get to know hints about retrieval efficiency and troubleshooting. 
-  
+     Please see `Information about MARS retrieval <https://confluence.ecmwf.int/display/UDOC/Retrieve#Retrieve-Retrievalefficiency>`_ for hints about retrieval efficiency and troubleshooting. 
     
 
 Pure forecast
-    It is possible to retrieve pure forecasts exceeding a day. The forecast period available depends on the date and forecast field type. Please use MARS catalogue to check the availability. Below are some examples for 36 hour forecast of *Forecast (FC)*, *Control forecast (CF)* and *Calibration/Validation forecast (CV)*. 
-    The *CV* field type was only available 3-hourly from 2006 up to 2016. It is recommended to use the *CF* type since this is available from 1992 (3-hourly) on up to today in 1-hourly temporal resolution. *CV* and *CF* field types belong to the *Ensemble prediction system (ENFO)* which contain 50 ensemble members. 
-    Please be aware that in this case it is necessary to set the specific type for flux fields explicitly, otherwise it could select a default value which might be different from what you expect!
+    It is possible to retrieve pure forecasts exceeding one day. The forecast period available depends on the date and forecast field type. Please use the MARS catalogue to check the availability. Below are some examples for 36-hour forecasts of *Forecast (FC)*, *Control forecast (CF)* and *Calibration/Validation forecast (CV)* type. 
+    The *CV* field type was only available 3-hourly from 2006 to 2016. It is recommended to use the *CF* type, since it is available from 1992 (3-hourly) up to today (1-hourly). The *CV* and *CF* field types belong to the *Ensemble prediction system (ENFO)*, which currently works with 50 ensemble members. 
+    Please be aware that in this case it is necessary to set the type for flux fields explicitly, otherwise a default value might be selected, different from what you expect!
     
     .. code-block:: bash
 
@@ -372,62 +369,57 @@ Pure forecast
         CONTROL_OD.OPER.FC.36hours  
 
 
-
 Half-day retrievals
-    If a forecast for just half a day is wanted it can be done by substituting the analysis fields also by forecast fields as shown in files with ``twiceaday`` in it. They produce a full day retrieval with pure 12 hour forecasts twice a day. It is also possible to use the operational version which would get the time information from ECMWF's environmental variables and therefore get the newest forecast per day. This version uses a ``BASETIME`` parameter which tells MARS to extract the exact 12 hours upfront to the selected date. If the ``CONTROL`` file with ``basetime`` in the filename is used this can be done for any other date too.
+    If a forecast is wanted for half a day only, this can be done by substituting the analysis fields by forecast fields as shown in files with ``twicedaily`` in their name. They produce a full-day retrieval with pure 12 hour forecasts, twice a day. It is also possible to use the operational version which would obtain the time information from ECMWF's environment variables and therefore use the newest forecast for each day. This version uses a ``BASETIME`` parameter which tells MARS to extract the exact 12 hours up to the selected date. If the ``CONTROL`` file with ``basetime`` in the filename is used, this can be done for any other date, too.
     
     .. code-block:: bash
 
         CONTROL_OD.OPER.FC.eta.basetime
         CONTROL_OD.OPER.FC.operational            
-        CONTROL_OD.OPER.FC.twiceaday.1hourly
-        CONTROL_OD.OPER.FC.twiceaday.3hourly
-
-
+        CONTROL_OD.OPER.FC.twicedaily.1hourly
+        CONTROL_OD.OPER.FC.twicedaily.3hourly
 
 
 Ensemble members
-    The retrieval of ensemble members were already mentioned in the pure forecast section and for *CERA-20C* data. 
-    In this ``flex_extract`` version there is an additional possibility to retrieve the *Ensemble Long window Data Assimilation (ELDA)* stream from the real-time dataset. This model version has (up to May 2019) 25 ensemble members and a control run (``number 0``). Starting from June 2019 it has 50 ensemble members. Therefore we created the possibility to double up the 25 ensemble members (before June 2019) to 50 members by taking the original 25 members from MARS and subtracting 2 times the difference between the member value and the control value. This is done by selecting the parameter ``DOUBLEELDA`` and set it to ``1``. 
-     
+    The retrieval of ensemble members was already mentioned in the pure forecast section and for *CERA-20C* data. 
+    This ``flex_extract`` version makes it possible to retrieve the *Ensemble Long window Data Assimilation (ELDA)* stream from the operational dataset. Until May 2019, there were 25 ensemble members and a control run (``number 0``). Starting with June 2019, the number of ensemble members has been increased to 50. Therefore, we created the option to create 25 additional "pseudo-ensemble members" for periods before June 2019. The original 25 members from MARS are taken, and the difference between the member value and the control value is subtracted twice. This is done if the parameter ``DOUBLEELDA`` is included and set to ``1``. 
     
     .. code-block:: bash
 
         CONTROL_OD.ELDA.FC.eta.ens.double   
         CONTROL_OD.ENFO.PF.ens
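The doubling described above amounts to mirroring each member about the control run. As an illustrative sketch (not the actual ``flex_extract`` implementation):

```python
# Illustrative sketch, not the actual flex_extract code: a pseudo-member
# is obtained by subtracting twice the difference between member and
# control, i.e. mirroring the member value about the control value.
def pseudo_member(member: float, control: float) -> float:
    return member - 2.0 * (member - control)  # equivalent to 2*control - member

print(pseudo_member(3.0, 1.0))  # -1.0
```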
-
-
     
     
 Specific features
 -----------------
 
 rrint
-    Decides if the precipitation flux data uses the old (``0``) or new (``1``) disaggregation scheme. See :doc:`Documentation/disagg` for explanaition. 
+    Selects the disaggregation scheme for precipitation flux: old (``0``) or new (``1``). See :doc:`Documentation/disagg` for explanation. 
 cwc
-    Decides if the total cloud water content will be retrieved (set to ``1``) in addition. This is the sum of cloud liquid and cloud ice water content.
+    If present and set to ``1``, the total cloud water content will be retrieved in addition. This is the sum of cloud liquid and cloud ice water content.
 addpar
-    With this parameter an additional list of 2-dimensional, non-flux parameters can be retrieved. Use format ``param1/param2/.../paramx`` to list the parameters. Please be consistent in using either the parameter IDs or the short names.
+    With this parameter, an additional list of 2-dimensional, non-flux parameters can be retrieved. Use the format ``param1/param2/.../paramx`` to list the parameters. Please be consistent in using either the parameter IDs or the short names as defined by MARS.
 doubleelda
-    Use this to double the ensemble member number by adding further disturbance to each member. 
+    Use this to double the ensemble member number by adding further disturbance to each member (to be used with 25 members). 
 debug
-    If set to ``1`` all temporary files were kept at the end. Otherwise everything except the final output files will be deleted.
+    If set to ``1``, all temporary files are preserved. Otherwise, everything except the final output files will be deleted.
 request
     This produces an extra *csv* file ``mars_requests.csv`` where the content of each mars request of the job is stored. Useful for debugging and documentation.
 mailfail
-    At default the mail is send to the mail connected with the user account. Add additional email addresses if you want. But as soon as you enter a new mail, the default will be overwritten. If you would like to keep the mail from your user account, please add ``${USER}`` to the list ( comma seperated ) or mail addresses.
-      
+    As a default, e-mails are sent to the mail address connected with the user account. It is possible to overwrite this by specifying one or more e-mail addresses (comma-separated list). In order to include the e-mail associated with the user account, add ``${USER}`` to the list.
         
         
-Hints for definition of some parameter combinations
----------------------------------------------------
+Hints for proper definition of certain parameter combinations
+-------------------------------------------------------------
 
-Field types and times
-    This combination is very important. It defines the temporal resolution and which field type is extracted per time step. 
-    The time declaration for analysis (AN) fields uses the times of the specific analysis and (forecast time) steps have to be ``0``. The forecast field types (e.g. FC, CF, CV, PF) need to declare a combination of (forescast start) times and the (forecast) steps. Both of them together defines the actual time step. It is important to know the forecast starting times for the dataset to be retrieved, since they are different. In general it is enough to give information for the exact time steps, but it is also possible to have more time step combinations of ``TYPE``, ``TIME`` and ``STEP`` because the temporal (hourly) resolution with the ``DTIME`` parameter will select the correct combinations. 
+Field type and time
+    This combination is very important. It defines the temporal resolution and which field type is extracted on each time step. 
+    The time declaration for analysis (AN) fields uses the times of the specific analysis while the (forecast time) step has to be ``0``. 
+    The forecast field types (e.g. FC, CF, CV, PF) need to declare a combination of (forecast start) time and the (forecast) step. Together, they define the actual time. It is important to know the forecast starting times of the dataset to be retrieved, since they differ between datasets. In general, it is sufficient to list the exact time steps, but it is also possible to specify more combinations of ``TYPE``, ``TIME`` and ``STEP``; the temporal (hourly) resolution given by the ``DTIME`` parameter will then select the correct combinations. 
 
     .. code-block:: bash
-       :caption: Example of a setting for the field types and temporal resolution.
+       :caption: Example of a setting for the field types and temporal resolution. It will retrieve 3-hourly fields, with analyses at 00 and 12 UTC and the corresponding forecasts in between.
 
         DTIME 3
         TYPE AN FC FC FC AN FC FC FC
@@ -436,10 +428,12 @@ Field types and times
     
  
 Vertical velocity           
-    The vertical velocity for ``FLEXPART`` is not directly available from MARS. Therefore it has to be calculated. There are a couple of different options. The following parameters are responsible for the selection. See :doc:`Documentation/vertco` for a detailed explanation. The ``ETADIFF``, ``OMEGA`` and ``OMEGADIFF`` versions are only recommended for debugging and testing reasons. Usually it is a decision between ``GAUSS`` and ``ETA``, where for ``GAUSS`` spectral fields of the horizontal wind fields and the divergence are to be retrieved and used with the continuity equation to calculate the vertical velocity. For ``ETA`` the latitude/longitude fields of horizontal wind fields and eta-coordinate are to be retrieved. It is recommended to use ``ETA`` where possible due to a reduced computation time.  
+    The vertical velocity for ``FLEXPART`` is not directly available from MARS and has to be calculated. 
+    There are several options for this, and the following parameters are responsible for the selection. See :doc:`Documentation/vertco` for a detailed explanation. Using ``ETADIFF 1``, ``OMEGA 1`` and ``OMEGADIFF 1`` is recommended for debugging and testing only. 
+    Usually, one has to decide between ``GAUSS 1`` and ``ETA 1``. ``GAUSS 1`` means that spectral fields of the horizontal wind and the divergence are retrieved, and that the vertical velocity is calculated using the continuity equation. ``ETA 1`` means that the horizontal wind fields and etadot are retrieved on a regular lat-lon grid. It is recommended to use ``ETA 1`` where possible, as there is a substantial computational overhead for solving the continuity equation.
 
     .. code-block:: bash
-        :caption: Example setting for the vertical coordinate retrieval.
+        :caption: Example setting for the vertical coordinate retrieval (recommended if etadot fields are available).
         
         GAUSS 0
         ETA 1
@@ -450,12 +444,13 @@ Vertical velocity
         
 
 Grid resolution and domain
-    The grid and domain selection depends on each other. The grid can be defined in the format of normal degrees (e.g. ``1.``) or as in older versions by 1/1000. degrees (e.g. ``1000`` for ``1°``).
-    After selecting the grid, the domain has to be defined in a way that the length of the domain in longitude or latitude direction  must be an exact multiple of the grid. 
-    The horizontal resolution for spectral fields will be set by the parameter ``RESOL``. For information about how to select an appropriate value you can read the explanation of the MARS keyword `here <https://confluence.ecmwf.int/display/UDOC/Post-processing+keywords#Post-processingkeywords-resol>`_ and in `this table  <https://confluence.ecmwf.int/display/UDOC/Retrieve#Retrieve-Truncationbeforeinterpolation>`_.
+    The grid and domain parameters depend on each other. ``GRID`` defines the grid resolution. It can be given in decimal degrees (e.g., ``1.`` meaning 1.0°) or, as in previous versions of flex_extract, as an integer value referring to 1/1000 degrees (e.g., ``1000`` also means 1°). The code uses a plausibility check to determine which format was given.
+    After selecting the grid resolution, the domain has to be defined. Its extent in the longitude and latitude directions must be an integer multiple of ``GRID``. 
+    The horizontal resolution for spectral fields is set by the parameter ``RESOL``. For guidance on selecting an appropriate value, please read the explanation of the MARS keyword RESOL as found `in this entry of the ECMWF on-line documentation <https://confluence.ecmwf.int/display/UDOC/Post-processing+keywords#Post-processingkeywords-resol>`_ and `this table (also ECMWF documentation) <https://confluence.ecmwf.int/display/UDOC/Retrieve#Retrieve-Truncationbeforeinterpolation>`_.
     
     .. code-block:: bash
-        :caption: Example setting for a northern hemisphere domain with a grid of ``0.25°``.
+        :caption: Example setting for a domain covering the northern hemisphere with a grid resolution of ``0.25°``.
     
         GRID 0.25
         RESOL 799
@@ -467,17 +462,17 @@ Grid resolution and domain
     
 
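+    The plausibility check for the two ``GRID`` formats can be sketched as follows (a hypothetical illustration, not flex_extract's actual code; the threshold is an assumption):

```python
def normalise_grid(value: str) -> float:
    """Interpret a GRID value either as degrees (e.g. '0.25')
    or, for large values, as 1/1000 degrees (e.g. '250')."""
    v = float(value)
    # Grid spacings of more than a few degrees are implausible,
    # so larger values are assumed to be given in 1/1000 degrees.
    if v > 10.0:  # hypothetical threshold
        v /= 1000.0
    return v
```

With this sketch, ``normalise_grid("1000")`` and ``normalise_grid("1.")`` both yield 1.0.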
 Flux data
-    The flux fields are accumulated forecast fields all the time. Since some re-analysis dataset nowadays have complete set of analysis fields in their temporal resolution it was important to define a new parameter set to define the flux fields since the information could not be taken from ``TYPE``, ``TIME`` and ``STEP`` any longer. Select a forecast field type ``ACCTYPE``, the forecast starting time ``ACCTIME`` and the maximum forecast step ``ACCMAXSTEP``. The ``DTIME`` parameter defines the temporal resolution for the whole period. 
+    Flux fields are always forecast fields and contain the fluxes accumulated since the start of the respective forecast. Because some re-analysis datasets provide analysis fields at every time step, a separate parameter set was needed to define the flux fields; the information can no longer be taken from ``TYPE``, ``TIME``, and ``STEP``. The following parameters are used specifically for flux fields, if provided: ``ACCTYPE`` is the field type (must be a forecast type), ``ACCTIME`` the forecast starting time, ``ACCMAXSTEP`` the maximum forecast step, and ``DTIME`` the temporal resolution. ``ACCTYPE`` is assumed to be the same throughout the whole period given by ``ACCTIME`` and ``ACCMAXSTEP``. 
     
     .. code-block:: bash
        :caption: Example setting for the definition of flux fields.
     
         DTIME 3
         ACCTYPE FC
         ACCTIME 00/12
         ACCMAXSTEP 36
 
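+    Because the retrieved flux values are accumulated from the forecast start, they must later be converted into per-interval rates. Conceptually (a minimal sketch only; flex_extract's actual disaggregation of flux data is more elaborate and is described in its own documentation section):

```python
def deaccumulate(acc, dtime_hours):
    """Convert values accumulated since forecast start into
    mean rates per output interval (in units per second)."""
    rates = []
    prev = 0.0
    for a in acc:
        # difference between consecutive accumulations, divided
        # by the interval length in seconds
        rates.append((a - prev) / (dtime_hours * 3600.0))
        prev = a
    return rates
```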
-
     
 .. toctree::
     :hidden: