diff --git a/Jet-Cluster.md b/Jet-Cluster.md
index 012a27cf4f94db995664d4810ba1eaa437116fc7..25bb1c6429b0d387551ffeae48d4da3e320696c2 100644
--- a/Jet-Cluster.md
+++ b/Jet-Cluster.md
@@ -3,50 +3,22 @@
 
 [[_TOC_]]
 
-# Getting Started
+## Getting Started
 
 Welcome to the HPC @IMG @UNIVIE. Please follow these steps to become a productive member of our department and make good use of the computing resources. 
 **Efficiency is key.**
-1. Connect to Jet
-2. Load environment (libraries, compilers, interpreter, tools)
-3. Checkout Code, Program, Compile, Test
-4. Submit to compute Nodes
 
-[How to SSH / VNC / VPN](SSH-VPN-VNC/README.md)
+Steps:
+1. Request access
+2. Connect to Jet 
+    - [How to SSH / VNC / VPN](SSH-VPN-VNC/README.md)
+    - [Research Hub](https://jet01.img.univie.ac.at/)
+3. Load environment (libraries, compilers, interpreter, tools)
+4. Checkout Code, Program, Compile, Test
+5. Submit jobs to the compute nodes using Slurm ([Slurm Tutorial](https://gitlab.phaidra.org/imgw/slurm))
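+
+The steps above might look like this in a terminal (a sketch only; `<user>` is a placeholder and the module name is just an example, check `module avail` for what is actually installed):
+
+```bash
+ssh <user>@jet01.img.univie.ac.at   # connect to a login node (from VPN or UNI network)
+module avail                        # list the available software environments
+module load miniconda3              # load an environment (example module)
+sbatch my-job.sh                    # submit a batch script to the compute nodes
+```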
 
-## Jupyterhub
-<img src="https://jupyter.org/assets/hublogo.svg" width="300px">
-
-The Jet-cluster serves a [jupyterhub](https://jupyterhub.readthedocs.io/en/stable/) with a [jupyterlab](https://jupyterlab.readthedocs.io/en/stable/) that launches on the jet-cluster compute nodes and allows users to work directly on the cluster as well as submit jobs.
-
-Goto: [](https://jet01.img.univie.ac.at) from within the VPN or UNI-Network.
-Login with your jet-Credentials, choose a job and the jupyterlab will be launched.
-
-<img src="Documentation/jet-login.png" width="500px">
 
-<img src="Documentation/jet-job1.png" width="500px">
-
-<img src="Documentation/jet-job2.png" width="500px">
-
-There are several kernels available and some help can be found:
- - [Python/](Python/)
- - [Tutorial on Jet](Python/Your-First-Notebook-onJet_v2.ipynb)
-
-## User Quotas and Restrictions
-
-Memory limit / slurm requests
-
-:construction:
-
-### Network drives
-Currently there are two network drives mounted on the jet cluster. These are connected to the [SRVX8](SRVX8.md) and data can be transfered or accessed like this on the jet cluster. Be aware that the data needs to be transfered via the network before and latencies are higher.
-```
-131.130.157.8:/raid61        400T  378T   22T  95% /raid61
-131.130.157.8:/raid60        319T  309T   11T  97% /raid60
-jetfs                        1.1P   81T  975T   8% /jetfs
-```
-
-# System Information
+## System Information
 Last Update: 4.12.2020
 Node Setup
  - 2x Login Nodes
@@ -55,8 +27,6 @@ Node Setup
 ![GPFS](Documentation/GPFS-jet.png)
 
 
-## Node Information
-
 | Name | Value |
 | --- | --- |
 | Product | ThinkSystem SR650 |
@@ -69,7 +39,7 @@ Node Setup
 
 Global file system (GPFS) is present on all nodes with about 1 PB (~1000 TB) of storage.
 
-# Software
+## Software
 
 The typical installation of an Intel cluster has the INTEL Compiler suite (`intel-parallel-studio`) and the open source GNU Compilers installed. Based on these two different compilers (`intel`, `gnu`), there are usually two versions of each scientific software.
 Major Libraries:
@@ -122,5 +92,68 @@ nco/4.9.3-gcc-8.3.1-g7o6lao
 on how to use environment modules go to [Using Environment Modules](Misc/Environment-Modules.md)
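+
+A typical module workflow looks like this (a sketch; the version string is taken from the module list above):
+
+```bash
+module avail                              # list all available modules
+module load nco/4.9.3-gcc-8.3.1-g7o6lao  # load a specific version
+module list                               # show currently loaded modules
+module purge                              # unload everything again
+```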
 
 
+## Jupyterhub
+<img src="Documentation/jupyterhub-logo.svg" width="100px">
+
+The Jet-cluster serves a [jupyterhub](https://jupyterhub.readthedocs.io/en/stable/) with a [jupyterlab](https://jupyterlab.readthedocs.io/en/stable/) that launches on the jet-cluster compute nodes and allows users to work directly on the cluster as well as submit jobs.
+
+Go to [https://jet01.img.univie.ac.at](https://jet01.img.univie.ac.at) from within the VPN or UNI network.
+Log in with your Jet credentials, choose a job and the jupyterlab will be launched.
+
+<img src="Documentation/jet-login.png" width="500px">
+
+<img src="Documentation/jet-job1.png" width="500px">
+
+<img src="Documentation/jet-job2.png" width="500px">
+
+There are several kernels available, and some help can be found here:
+ - [Python/](Python/)
+ - [Tutorial on Jet](Python/Your-First-Notebook-onJet_v2.ipynb)
+
+## User Quotas and Restrictions
+
+Please try to use the compute nodes in a responsible manner. Currently there are no restrictions on the duration or the resources you can request. However, please follow these rules of collaboration:
+
+Jobs
+- CPU, keyword: `ntasks`, e.g. 1 node == 2x20 physical cores
+- Memory, keyword: `mem`, e.g. each node has up to 754 GB
+- Runtime, keyword: `time`, e.g. try to split jobs into shorter pieces
+
+Consider the following example: you can easily use one node for more than 3 days with your jobs running, but do not occupy all nodes and block them for all other users for 3 days. If you need multiple nodes, split the jobs into shorter runtimes. In general it is better to have many smaller jobs that are processed in a chain. Also try not to request more resources than your jobs actually use. Have a look at the resources used by your jobs with the `/usr/bin/time` command or look [here](https://gitlab.phaidra.org/imgw/slurm).
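+
+One way to split a long run into a chain of shorter jobs is Slurm's dependency mechanism (a sketch; `part1.sh` and `part2.sh` are hypothetical job scripts):
+
+```bash
+jid=$(sbatch --parsable part1.sh)           # submit the first piece, capture its job id
+sbatch --dependency=afterok:$jid part2.sh   # second piece starts only if the first succeeds
+```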
+
+Sample Job
+```
+#!/bin/bash
+# SLURM specific commands
+#SBATCH --job-name=test-run
+#SBATCH --output=test-run.log
+#SBATCH --ntasks=1
+#SBATCH --mem=1MB
+#SBATCH --time=05:00
+#SBATCH --mail-type=BEGIN    # type of event that triggers a mail
+#SBATCH --mail-user=<email@address.at>   # your email address
+
+# Your Code below here
+module load miniconda3
+# Execute the miniconda Python
+# use /usr/bin/time -v [program]
+# gives statistics on the resources the program uses
+# nice for testing
+/usr/bin/time -v python3 -v
+```
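+
+Assuming the script above is saved as `test-run.sh`, submitting and monitoring it could look like this (replace `<jobid>` with the id that `sbatch` prints):
+
+```bash
+sbatch test-run.sh                               # prints: Submitted batch job <jobid>
+squeue -u $USER                                  # show your queued and running jobs
+sacct -j <jobid> --format=JobID,Elapsed,MaxRSS   # resource usage after completion
+```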
+
+Storage limitations apply mainly to the HOME directory (default: 100 GB), but there are some general restrictions as well.
+
+On the login nodes (jet01/jet02) jobs can only use 20 GB of memory, as the rest needs to be reserved for the file system and distribution services. Jobs exceeding this limit will be automatically killed after a safety margin.
+
+
+## Network drives
+Currently there are two network drives mounted on the jet cluster. They are connected to the [SRVX8](SRVX8.md), and data can be transferred or accessed this way on the jet cluster. Be aware that the data needs to be transferred via the network first, so latencies are higher.
+```
+131.130.157.8:/mnt/scratch        400T  378T   22T  95% /mnt/scratch
+131.130.157.8:/mnt/users          319T  309T   11T  97% /mnt/users
+jetfs                             1.1P   81T  975T   8% /jetfs
+```
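+
+Because of the higher latency it usually pays off to stage input data onto the fast GPFS (`/jetfs`) before a job and copy results back afterwards (a sketch; `<user>` and the subdirectories are hypothetical):
+
+```bash
+rsync -av /mnt/users/<user>/input/  /jetfs/<user>/input/     # stage data onto GPFS
+# ... run the job reading/writing on /jetfs ...
+rsync -av /jetfs/<user>/results/ /mnt/users/<user>/results/  # copy results back
+```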
+
 ## Slurm
 [Slurm Tutorial on Gitlab](https://gitlab.phaidra.org/imgw/slurm) 🔒
diff --git a/README.md b/README.md
index aa870fb6b560b7e5ad6501ec6ff29178a2a344f5..78a766851bcb12bf26d3482ff13a7f3d06441f39 100644
--- a/README.md
+++ b/README.md
@@ -19,13 +19,10 @@ If you care to participate please do so. Raise an Issue or give some feedback.
 
 Missing solutions are very welcome.
 
-## Information on some Systems
-- [Jet Cluster Description](Jet-Cluster.md) 📄
-- [Jupyterhub on Jet](https://jet01.img.univie.ac.at) 🤖
-- [Teaching on SRVX1](SRVX1.md) 📄
-- [Jupyterhub on SRVX1](https://srvx1.img.univie.ac.at) 🤖
-- [Instructions for Jupyterhub on SRVX1](TeachingHub.md) 📄
-- [Research on SRVX2](SRVX2.md) 📄
-- [Research on SRVX8](SRVX8.md) 📄
-- [Vienna Scientific Cluster - Training Courses](https://vsc.ac.at/training)
-- [Vienna Scientific Cluster - Research - Private Node Information](VSC.md) 📄
+# Available Servers
+
+- [SRVX1](SRVX1.md), [SRVX8](SRVX8.md) @ UZA2
+- [Jet Cluster @ Arsenal](Jet-Cluster.md)
+- [Vienna Scientific Cluster (VSC)](VSC.md)
+    - [VSC Training](https://vsc.ac.at/training)
+    - [VSC Trainings @ IMGW](https://gitlab.phaidra.org/imgw/trainings-course)
diff --git a/SRVX1.md b/SRVX1.md
index 08272b05efe593672f0f6043b53f38e0f8f45a4d..9cf5ed77bb44473860213f6bed3cd57630eebbdd 100644
--- a/SRVX1.md
+++ b/SRVX1.md
@@ -5,7 +5,16 @@
 
 ## Getting Started
 
-[How to SSH / VNC / VPN](SSH-VPN-VNC/README.md)
+Steps:
+1. Request access
+2. Access using SSH - [How to SSH / VNC / VPN](SSH-VPN-VNC/README.md)
+3. Access using Teaching Hub - [How to connect using the TeachingHub](TeachingHub.md)
+4. Access Services:
+  - [Webdata](https://srvx1.img.univie.ac.at/webdata) - Hosting files
+  - [Yopass](https://srvx1.img.univie.ac.at/secure) - Encrypted message service
+  - [ECGateway](https://srvx1.img.univie.ac.at/ecmwf/ecmwf) (UNIVIE Network, [VPN](SSH-VPN-VNC/VPN.md))
+  - [Transfer.sh](https://srvx1.img.univie.ac.at/filetransfer)
+  - [iLibrarian](https://srvx1.img.univie.ac.at/)
 
 ## System Information
 | Name | Value |
@@ -30,7 +39,6 @@
 ----------------------------------------------
 ```
 
-
 ## Services
 The SRVX1 is the central access point to IMG services:
 goto: [srvx.img.univie.ac.at](https://srvx1.img.univie.ac.at)
@@ -38,6 +46,8 @@ Currently running:
 - TeachingHub (Jupyterhub)
 - Webdata - File hosting
 - YoPass - Password Sharing
+- iLibrarian - Paper Management (development)
+- Filetransfer.sh - File sharing service (development)
 
 ## Jupyterhub
 <img src="Documentation/jupyterhub-logo.svg" width="300px">
@@ -120,6 +130,30 @@ netcdf-fortran/4.5.2-intel-20.0.4-MPI3.1.6
 ```
 on how to use environment modules go to [Using Environment Modules](Misc/Environment-Modules.md)
 
+## User services
+There is a script collection accessible via the `userservices` command, e.g. running
+```bash
+$ userservices
+
+Usage: userservices [service] [Options]
+Available Services:
+------------------------------------------------------------------------
+ archive              --- Submit files/folders to ZID Archive
+ fetch-sysinfo        --- Display system information
+ filesender           --- Transfer files to ACONET filesender (requires account)
+ fix-permissions      --- fix file/directory permissions
+ home-dir-check       --- Check home directory/configuration
+ modules              --- Pretty print environment modules
+ transfersh           --- Transfer files/directories (IMGW subnet)
+ weather              --- Retrieve weather information
+ yopass               --- Send messages/small files to YoPass (encrypted)
+------------------------------------------------------------------------
+These scripts are intended to help with certain known problems.
+Report problems to: michael.blaschek@univie.ac.at
+
+```
+These are scripts in a common directory. Feel free to copy or edit as you like. Note that some services like filesender require an ACONET account (accessible via your u:account).
+
 
 ## Container Hub
 
diff --git a/SRVX8.md b/SRVX8.md
index 98be6e9e336ebc15eba1cf3e92001b23cd1b93e3..1108107331bf4e3affa88573404877b8f3661fb6 100644
--- a/SRVX8.md
+++ b/SRVX8.md
@@ -5,7 +5,9 @@
 
 ## Getting Started
 
-[How to SSH / VNC / VPN](SSH-VPN-VNC/README.md)
+Steps:
+1. Request access
+2. Access using SSH - [How to SSH / VNC / VPN](SSH-VPN-VNC/README.md)
 
 ## System Information
 
@@ -92,11 +94,40 @@ python/3.8.9-gcc-4.8.5
 micromamba/latest
 ```
 
+## User services
+There is a script collection accessible via the `userservices` command, e.g. running
+```bash
+$ userservices
+
+Usage: userservices [service] [Options]
+Available Services:
+------------------------------------------------------------------------
+ archive              --- Submit files/folders to ZID Archive
+ fetch-sysinfo        --- Display system information
+ filesender           --- Transfer files to ACONET filesender (requires account)
+ fix-permissions      --- fix file/directory permissions
+ home-dir-check       --- Check home directory/configuration
+ modules              --- Pretty print environment modules
+ sysinfo              --- Display system information
+ transfersh           --- Transfer files/directories (IMGW subnet)
+ vnc                  --- VNC Server Setup/Launcher/Stopper
+ vnc-geometry         --- Change geometry of VNC display
+ weather              --- Retrieve weather information
+------------------------------------------------------------------------
+
+These scripts are intended to help with certain known problems.
+Report problems to: michael.blaschek@univie.ac.at
+
+```
+These are scripts in a common directory. Feel free to copy or edit as you like. Note that some services like filesender require an ACONET account (accessible via your u:account). Please note the available VNC services.
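+
+For example, the VNC helpers from the list above can be used like this (run them without arguments to see their options):
+
+```bash
+userservices vnc            # set up, launch or stop a VNC server
+userservices vnc-geometry   # change the geometry of an existing VNC display
+```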
+
+
 ## Virtual Machine Hub
 
 Currently the system acts as a virtual machine host.
 Active:
  - VERA
+ - Ubuntu Geographic
 
 
 ## Container Hub
@@ -125,4 +156,4 @@ containers:
         - rtcoef
         - rthelp
         - rttest
-```
\ No newline at end of file
+```