Commit 2c4478ff authored by Michael Blaschek

documentation update, restructure

parent 826b695d
```yaml title="GitLab CI build job"
build:
  rules:
    # only run pipeline when build is in the commit message
    - if: $CI_COMMIT_MESSAGE =~ /.*build.*/
    - if: $CI_COMMIT_MESSAGE =~ /.*2wolke.*/
    - if: $UPDATEWOLKE
  before_script:
    # Install all required packages
    - apt-get install -y -qq graphviz
    - pip install -r requirements.txt
  script:
    # --strict is too strict :)
```
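For example, a commit message containing one of those keywords starts the pipeline (hypothetical message; any wording containing `build` or `2wolke` works):

```bash title="Triggering the pipeline (example)"
# "build" in the commit message matches the first rule above
git commit -m "documentation update, build"
git push
```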
```yaml title="mkdocs.yml navigation"
nav:
  - README.md
  - Getting Started.md
  - Servers
  - VSC.md
  - ECMWF.md
```
# European Center for Medium-Range Weather Forecast

<img src="./mkdocs/img/logo_ecmwf.png" width="400px">
[website](https://www.ecmwf.int) / [service status](https://www.ecmwf.int/en/service-status) / [confluence](https://confluence.ecmwf.int) / [support](https://support.ecmwf.int) / [accounting](https://www.ecmwf.int/user)
If you need access, talk to your supervisor to create an account for you. You will get a username and a password as well as an OTP device (hardware or smartphone). Accounts are handled via [www.ecmwf.int](https://www.ecmwf.int/user/login).
Available Services @ IMGW:
- [ecaccess](https://confluence.ecmwf.int/display/ECAC/ECaccess+Home)
- [srvx1.gateway](https://srvx1.img.univie.ac.at/ecmwf/ecmwf) / [gateway.docs](https://confluence.ecmwf.int/display/ECAC/Releases+-+Gateway+package) / [boaccess](https://boaccess.ecmwf.int)
## Connecting to ECMWF Services
An ECMWF user can connect to ECS/ATOS using teleport. First load the teleport module and start the ssh-agent:
```bash title="Using teleport"
module load teleport
startagent
# Check if it is running
ssh-add -l
```
Once you have a running ssh-agent, run a browserless login via Python:
```bash title="Connecting to ECMWF"
# Login to the default teleport jump host (shell.ecmwf.int) in Reading
python3 -m teleport.login
# Check Status
tsh status
# run ssh agent again
ssh-add -l
# now there should be two keys!!!
# Login to ECAccess in Bologna
ssh -J [user]@jump.ecmwf.int [user]@ecs-login
# Login to HPC ATOS
ssh -J [user]@jump.ecmwf.int [user]@hpc-login
```
Environment variables configuration:
- `TSH_EXEC` - The Teleport binary tsh path
- `TSH_PROXY` - The ECMWF Teleport proxy
You can set these variables in your `~/.bashrc` file to avoid typing them at every login. Please do not save your `ECMWF_PASSWORD` like this!
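A minimal sketch of such a `~/.bashrc` block; the values here are assumptions based on the examples above and need to be adjusted to your installation:

```bash title="~/.bashrc (sketch)"
# Teleport configuration (example values)
export TSH_EXEC=$(which tsh)      # path to the Teleport binary tsh
export TSH_PROXY=jump.ecmwf.int   # the ECMWF Teleport proxy
```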
### SSH-agent
It is required to have an SSH-agent running in order to connect to the ECMWF servers. The teleport module includes a `startagent` function that allows you to reconnect to an existing ssh-agent. Do not start too many agents!
There is an issue with ssh-keys, which can be fixed by generating a new key on ECS:

```bash title="ECS fix ssh-key issue"
# connect to ECS following the teleport login procedure above
ssh -J [user]@jump.ecmwf.int [user]@ecs-login
# Generate a new SSH key on ECS, no passphrase.
ssh-keygen -t ed25519
# Add the public key to your own authorized_keys on ECS/HPC
cat .ssh/id_ed25519.pub >> .ssh/authorized_keys
```
This will solve some `ecaccess` issues.
## Connecting via ECaccess
A local installation of the ecaccess tools can be used to submit and monitor jobs from a remote location.
```bash title="ECAccess module"
# load the ecaccess module
module load ecaccess-webtoolkit/6.3.1
# all available tools
ecaccess ecaccess-file-delete
ecaccess-association-delete ecaccess-file-dir
ecaccess-association-get ecaccess-file-get
ecaccess-association-list ecaccess-file-mdelete
ecaccess-association-protocol ecaccess-file-mget
ecaccess-association-put ecaccess-file-mkdir
ecaccess-certificate-create ecaccess-file-modtime
ecaccess-certificate-list ecaccess-file-move
ecaccess-cosinfo ecaccess-file-mput
ecaccess-ectrans-delete ecaccess-file-put
ecaccess-ectrans-list ecaccess-file-rmdir
ecaccess-ectrans-request ecaccess-file-size
ecaccess-ectrans-restart ecaccess-gateway-connected
ecaccess-event-clear ecaccess-gateway-list
ecaccess-event-create ecaccess-gateway-name
ecaccess-event-delete ecaccess-job-delete
ecaccess-event-grant ecaccess-job-get
ecaccess-event-list ecaccess-job-list
ecaccess-event-send ecaccess-job-restart
ecaccess-file-chmod ecaccess-job-submit
ecaccess-file-copy ecaccess-queue-list
# First get a valid certificate to get access
ecaccess-certificate-create
```
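A typical workflow with these tools might look like the sketch below; the queue name and file names are placeholders, and the options follow the ECaccess documentation:

```bash title="ECaccess workflow (sketch)"
# get or renew the access certificate (prompts for password and OTP)
ecaccess-certificate-create
# submit a job script to a batch queue and check on it
ecaccess-job-submit -queueName hpc myjob.sh
ecaccess-job-list
# retrieve a result file (placeholder paths)
ecaccess-file-get ec:output.tar output.tar
```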
# VIM

Check out the handy References:
- [VIMDIFF Reference](VIMDIFF-Reference.pdf)
# Configs
```vim
filetype plugin indent on
" show existing tab with 4 spaces width
set tabstop=4
set expandtab
```
Need to install powerline with pip
Config:
```vim
" Allow Powerline style
set rtp+=.local/lib/python3.7/site-packages/powerline/bindings/vim/
" These lines setup the environment to show graphics and colors correctly.
set t_Co=256
```
# YAML
Maybe use this config:
```vim
" Allow YAML tab styling
autocmd FileType yaml setlocal ts=2 sts=2 sw=2 expandtab
let g:indentLine_char_list = ['|', '¦', '┆', '┊']
```
In vim, [backspace] or backwards delete leaves a weird symbol `^?` and does not delete anything?
Add this to your `~/.vimrc`:
```vim
" Allow backspace
noremap! <C-?> <C-h>
```
# VSCode

There is a package called `Remote Development` in VSCode which includes:

- Remote SSH
- Remote WSL
- Remote Containers
Although all might be useful, we try to make use of the first one, Remote-SSH.
### Kernel version is too old?
We are going to use a [singularity container](https://sylabs.io/guides/3.7/user-guide/) to run a more recent version of `node` and allow VSCode to use `srvx8` as well. It will complain a bit, but work.
Here is how:
1. Login to `srvx8`
2. run `/opt/containers/vscode-server/setup.sh`
3. Look inside `$HOME/.vscode-server/singularity.bash` and make sure that the paths you require are available in the `SINGULARITY_BIND` variable (see the sketch after this list). Your home directory is always available; other system paths are not.
- Maybe it is necessary to change the user shell of your account to bash. Run `chsh -s /bin/bash` on the remote host (`srvx8`, ...).
4. Switch `remote.SSH.localServerDownload` to `off` in the Remote SSH package.
5. Setup a new host in Remote SSH on your VScode and connect to `srvx8`.
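For step 3, a hypothetical example of the bind variable; the actual file is generated by `setup.sh`, and the extra paths are only illustrations of system paths you might need:

```bash title="$HOME/.vscode-server/singularity.bash (sketch)"
# bind your home directory plus any data paths you need inside the container
export SINGULARITY_BIND="$HOME,/mnt/users,/mnt/scratch"
```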
Configure a Python interpreter in VSCode according to the installed module:
- SRVX8 : `/home/opt/spack/opt/spack/linux-centos6-haswell/gcc-5.3.0/anaconda3-2020.07-opjqtspow2mjqthtdxvx7epz6rntkv2p/bin/python`
- SRVX1 : `/home/opt/spack/opt/spack/linux-centos6-sandybridge/gcc-5.3.0/anaconda3-2020.07-6p2eqnrd2mu4masynsdqvrbfdit7fyuk/bin/python`
#### Updates
Whenever the vscode-server reports an error, it might be that it automatically updated and a new version has been installed on the remote host. If so, run `~/vscode-server/fix.sh` to apply the changes to the run scripts.
Then it should work again.
# Getting Started
**Welcome to the Department of Meteorology and Geophysics @ University of Vienna.**
Tasks to complete for newcomers (it is recommended that you print this page and tick off your steps):
- [ ] Request a server account via your supervisor
- [ ] Receive the initial user account information via mail.
- [ ] Change your initial password, with a:
    - browser ([https://wolke.img.univie.ac.at/ipa](https://wolke.img.univie.ac.at/ipa))
    - ssh terminal: `ssh [username]@srvx1.img.univie.ac.at`, then follow the instructions in the command shell. Windows users need to install an SSH client first, e.g. Bitvise.
    - Optional: set up a password manager. [Tips](https://zid.univie.ac.at/en/it-worlds/it-security/it-security-tips/password-manager/)
- [ ] Install an SSH client (Windows only), e.g. Bitvise or MobaXterm
- [ ] Optional: set up an ssh-key
- [ ] Familiarize yourself with the environment
## Environment
Please do the following steps to get a better idea of what is where:
- [ ] login to srvx1 using ssh
- [ ] run `userpaths` to understand where different data resides:
- HOME, SCRATCH (personal), DATA, SHARED, WEBDATA, ?_JET
- [ ] check available modules `module av` and load anaconda3 `module load anaconda3`. This should allow you to run some python programs.
- [ ] run `userservices` to get some IMGW special tools.
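Put together, a first session on srvx1 could look like this (only commands from the checklist above):

```bash title="First session on srvx1"
ssh [username]@srvx1.img.univie.ac.at
userpaths             # where HOME, SCRATCH, DATA, SHARED, ... reside
module av             # list available modules
module load anaconda3 # make python available
userservices          # list the IMGW special tools
```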
## Summary of Computing Resources
The Department of Meteorology and Geophysics has access to the following computing resources:
- Teaching and Development Server ([SRVX1](Servers/SRVX1.md))
- Remote Desktop Server ([SRVX8](Servers/SRVX8.md))
- Computing Cluster ([JET](Servers/JET.md))
- Vienna Scientific Cluster ([VSC](VSC.md))
- European Center for Medium-Range Weather Forecast ([ECMWF](ECMWF.md))
<img src="./mkdocs/img/logo_uniwien.jpg" width="300px">
<img src="./mkdocs/img/logo_img2_color.png" width="300px">
<img src="./mkdocs/img/rocket.png" width="100px">
# Welcome to the Department of Meteorology and Geophysics
[@University of Vienna](https://univie.ac.at)
Find help here with your computer related problems.
Search with the top bar or go through the directories:
- Data availability and location
- Git related problems
If you care to participate, please do so. Raise an [Issue on Gitlab](https://gitlab.phaidra.org/imgw/computer-resources/-/issues) or give some feedback ([mail](mailto:it.img-wien@univie.ac.at), [mattermost](https://discuss.phaidra.org/imgw/channels/bugs)) or write to [individual members of the department](https://img.univie.ac.at/en/about-us/staff/).
# Secure Shell (SSH)
## Clients
On Linux and Mac, all tools are present. On Windows use one of these:
- [Bitvise SSH Client](https://www.bitvise.com/ssh-client-download) (for the SSH tunnel)
- [MobaXterm](https://mobaxterm.mobatek.net)
- Windows Subsystem for Linux (WSL): [install](https://learn.microsoft.com/en-us/windows/wsl/install), then install e.g. Ubuntu and the openssh client.
- [VSCode](../Editors/vscode.md)
- Putty, Kitty, ...
## Connect
[How to connect from the Office](../Servers/README.md#how-to-connect-from-the-office) or [How to connect from abroad](../Servers/README.md#how-to-connect-from-home-or-abroad)
**Connect from the office** by typing either of the following in a terminal. Replace `[USERNAME]` with your own.
```bash title="SSH commands"
ssh -X [USERNAME]@srvx1.img.univie.ac.at
ssh -X [USERNAME]@jet01.img.univie.ac.at
ssh -X [USERNAME]@jet02.img.univie.ac.at
ssh -X [USERNAME]@131.130.157.216
```
The `-X` option enables X11 forwarding via ssh, i.e., it permits opening graphical windows. On Windows you need to enter these details into the SSH client.
Consider using a `~/.ssh/config` configuration file to allow easier access like this:
```sh title="~/.ssh/config"
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 2

Host a?-* a??-* hpc-* hpc2020-* ecs-*
    User [ECMWF USERNAME]
    ProxyJump jump.ecmwf.int
```
and replacing `[USERNAME]` and `[u:account USERNAME]` with your usernames. Using such a file allows you to connect like `ssh srvx1` with the correct server address and the specified username. Copy this file to `login.univie.ac.at` as well and you can use commands like `ssh -t login ssh jet` to connect directly to `jet` via the `login` gateway.
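For illustration, the host entry that makes `ssh srvx1` work could look like this; a sketch using standard ssh_config keywords, not the complete file:

```sh title="~/.ssh/config (host entry sketch)"
Host srvx1
    HostName srvx1.img.univie.ac.at
    User [USERNAME]
    ForwardX11 yes
```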
Please note the special algorithms for ecaccess, and of course ECMWF uses [teleport](../ECMWF.md#connecting-to-ecmwf-services) now.
**From eduroam**: You should be able to log in as above.
**From the outer world**: use the [VPN](VPN.md) or `srvx1.img.univie.ac.at` as jump host.
If you are a guest, you can apply for a [guest u:account](https://zid.univie.ac.at/en/uaccount/#c14154). This will give you access to eduroam and to the VPN. Your application needs to be endorsed by a staff member, who also determines the expiration date of the account. **Please ask the sponsor first!**
## SSH Authentication with keys
```bash
connect2jet -g [U:Account-Username]@login.univie.ac.at [Jet-Username]@jet01.img.univie.ac.at
```
??? note "connect2jet"
    ```bash title="Connect to Jet"
    --8<-- "SSH-VPN-VNC/connect2jet"
    ```
## VNC

On Linux, start [Remmina](https://remmina.org/), then:
* Move to the "SSH Tunnel" tab, check "Enable SSH Tunnel" and "Same server at port 22", and specify your favourite SSH authentication method.
* Save and connect.
On Windows, you can use [Bitvise SSH Client](https://www.bitvise.com/ssh-client-download) (for the SSH tunnel) and the [RealVNC VNC Viewer](https://www.realvnc.com/en/connect/download/viewer/windows/) or [MobaXterm](https://mobaxterm.mobatek.net).
* Set "VNC Host Display" to `jet01.img.univie.ac.at:[DISPLAY]`
* Ser "Proxy/Gateway" to `[USERNAME]@jet01.img.univie.ac.at`
* Select "Use SSH".
* Connect. You'll be asked first for the SSH password, then for the VNC password.
Setup might be a bit different for different clients, but all need this information:
Option Bitvise SSH Client/MobaXterm and RealVNC:
* Start the SSH client
* Go to tab "C2S" or SSH tunnels (port forwarding)
* Set "Listen Interface" to `127.0.0.1`
* Set "Listening Port" to `5900+[DISPLAY]`, e.g., `5905`
* Set "Destination Host" to `jet01.img.univie.ac.at`
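Once the tunnel is up, point the VNC viewer at the local end of it; a sketch using the example port from above:

```bash title="Connecting through the tunnel (sketch)"
# in the VNC viewer enter 127.0.0.1:5905 as the server address;
# on Linux/Mac the command-line equivalent would be
vncviewer 127.0.0.1:5905
```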
# Jet Cluster
> Research Cluster for Staff
> :rocket: 🔒
<img src="../mkdocs/img/rocket.png" width="100px">
## Getting Started
Welcome to the HPC @IMG @UNIVIE and please follow these steps to become a productive member of our department and make good use of the computer resources.
Steps:
1. [Getting Started](../Getting%20Started.md)
2. Connect to Jet
    - [How to SSH / VNC / VPN](SSH-VPN-VNC/README.md)
    - [Research Hub](https://jet01.img.univie.ac.at/)
3. Load environment (libraries, compilers, interpreter, tools)
4. Checkout Code, Program, Compile, Test
5. Submit to compute nodes using [slurm](JET.md#slurm)
## System Information
Last Update: 4.12.2020
Node Setup:
- 2x Login Nodes (JET01, JET02)
- 7x Compute Nodes (03-09)
![GPFS](../mkdocs/img/GPFS-jet.png)
### Example of one Compute Node
| Type | Detail |
| --- | --- |
| Product | ThinkSystem SR630 |
| Processor | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz |
| Cores | 2 CPU, 20 physical cores per CPU, total 80 logical CPU units |
| CPU Time | 350 kh |
| Memory/Core | 18.9 GB |
| Network | 40 Gbit/s (Infiniband) |
All nodes are connected to a global file system (GPFS) with about 1.5 PB (~1500 TB) of storage. There is no need to copy files to the compute nodes; your HOME and SCRATCH directories are available under the same path as on the login nodes.
## Software
Major Libraries:
- Interpreters: Python, Julia
- Tools: cdo, ncl, nco, ncview
These software libraries are usually handled by environment modules. *Need another library? Mail to <a href="mailto:it.img-wien@univie.ac.at?subject=Missing Library on JET">IT</a>.*
## Currently installed modules

![](../mkdocs/img/envmodules.png)
```bash
$ module av
```

On how to use environment modules, go to [Using Environment Modules](../Misc/Environment-Modules.md).
## Jupyterhub
<img src="./mkdocs/img/jupyterhub-logo.svg" width="100px">
<img src="../mkdocs/img/jupyterhub-logo.svg" width="100px">
The Jet Cluster serves a [jupyterhub](https://jupyterhub.readthedocs.io/en/stable/) with a [jupyterlab](https://jupyterlab.readthedocs.io/en/stable/) that launches on the JET cluster compute nodes and allows users to work directly on the cluster as well as submit jobs.
Steps:
- [https://jet01.img.univie.ac.at](https://jet01.img.univie.ac.at) from within the VPN or UNI-Network.
- Login with your Jet Credentials
- Choose a job
- The jupyterlab will be launched and will be available to you until you log out or the walltime is exceeded (depends on the job you launch).
**Please use the resources responsibly. We trust that you apply a fair-share policy and collaborate with your colleagues.**
<img src="./mkdocs/img/jet-login.png" width="500px">
<img src="../mkdocs/img/jet-login.png" width="500px">
<img src="./mkdocs/img/jet-job1.png" width="500px">
<img src="../mkdocs/img/jet-job1.png" width="500px">
<img src="./mkdocs/img/jet-job2.png" width="500px">
<img src="../mkdocs/img/jet-job2.png" width="500px">
There are several kernels available as modules and how to use other kernels can be found here:
- [Tutorial on Jet](../Python/Your-First-Notebook-onJet_v2.ipynb)
- [A conda environment](../Python/QA-003-Conda-Environment.ipynb)
- [Remote kernels](../Python/QA-005-Remote-Kernels.ipynb)
## User Quotas and Restrictions
Currently there are no restrictions on the duration or the resources you can request. On JET the nodes can be shared between jobs, whereas on VSC nodes are job-exclusive. Please follow these rules of collaboration:
Jobs:
- Memory, keyword: `mem` e.g. each Node up to 754 GB
- Runtime, keyword: `time` e.g. try to split jobs into pieces.
Consider the following example:

> You can use 1 node relatively easily for more than 3 days with your jobs running, but do not use all nodes and block them for all other users for 3 days. If you need multiple nodes, split the jobs into shorter runtimes. In general it is better to have more smaller jobs that are processed in a chain. Also try not to request resources that go unused.
Have a look at resources used in your jobs using the `/usr/bin/time` command or look [here](https://gitlab.phaidra.org/imgw/slurm).
Sample Job
```slurm title="Slurm example on JET"
#!/bin/bash
# SLURM specific commands
#SBATCH --job-name=test-run
# ...
module load miniconda3
```
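A sketch of submitting and monitoring such a job with standard Slurm commands (the script name is just an example):

```bash title="Submitting the sample job (sketch)"
sbatch test-run.sh    # submit the job script
squeue -u $USER       # check its state in the queue
scancel [jobid]       # cancel a job if needed
```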
Storage Limitations are set mainly to the HOME directory (default: 100 GB), but there are some general restrictions as well.
### Login nodes

On the login nodes (JET01/JET02) processes can only use 20 GB of memory and there is a limit of 500 processes per user.
These nodes are really just for login and not for anything else.
The global file system and distribution services run on JET01. **Jobs on the login nodes that violate these limits will be automatically killed.**
## Network drives
Transfer of files between SRVX1 and JET is **not** necessary. The file system from SRVX1 is mounted on the JET cluster and vice versa. These mounted drives transfer the data via the network, so latencies might be higher.
```bash title="Mounted files systems"
df -h
131.130.157.11:/mnt/scratch 400T 364T 37T 91% /mnt/scratch
131.130.157.11:/mnt/users 319T 232T 88T 73% /mnt/users
jetfs 1.4P 1.2P 205T 86% /jetfs
```
## Slurm
There is some more information about how to use slurm:
- [Summary](../Misc/Slurm.md)
- a more advanced [Slurm Tutorial on Gitlab (🔒 staff only)](https://gitlab.phaidra.org/imgw/slurm)
- [VSC Slurm introduction](https://wiki.vsc.ac.at/doku.php?id=doku:slurm&rev=1643751174)
- [VSC SLURM basics (PDF)](https://vsc.ac.at/fileadmin/user_upload/vsc/online-courses/VSC-Intro/vsc_intro_slurm_basics.pdf)
- [Slurm Quick Start Guide - Manual Page](https://slurm.schedmd.com/quickstart.html)
# Servers
The Department of Meteorology and Geophysics has access to the following computing resources:
- Teaching and Development Server ([SRVX1](SRVX1.md))
- Remote Desktop Server ([SRVX8](SRVX8.md))
- Computing Cluster ([JET](JET.md))
- Vienna Scientific Cluster ([VSC](../VSC.md))
<img src="../mkdocs/img/chip.png" width="50px">
## Available Work Environments
Locations:
- Staff + Students [SRVX1](SRVX1.md)
- Staff + Remote Desktop [SRVX8](SRVX8.md)
- Staff + Remote Desktop + Jupyterhub [Jet Cluster](JET.md)
- Staff + Students, Jupyterhub called [TeachingHub](../TeachingHub.md)
- Staff [Vienna Scientific Cluster (VSC)](../VSC.md)
- [VSC Training](https://vsc.ac.at/training)
- [VSC Trainings @ IMGW](https://gitlab.phaidra.org/imgw/trainings-course)
## How to connect from Home or Abroad?
```graphviz dot attack_plan.png
digraph Servers {
users [shape=none image="./mkdocs/img/student_sm.png" label="User" labelloc="b" height="1" imagepos="tc"]
srvx1 [shape=none image="./mkdocs/img/server_sm.png" label="SRVX1" labelloc="b" height="1" imagepos="tc"]
srvx8 [shape=none image="./mkdocs/img/server_sm.png" label="SRVX8" labelloc="b" height="1" imagepos="tc"]
jet [shape=none image="./mkdocs/img/server_sm.png" label="JET" labelloc="b" height="1" imagepos="tc"]
VPN [shape=none image="./mkdocs/img/local-area-network_sm.png" label="VPN" labelloc="b" height="1" imagepos="tc"]
vsc [shape=none image="./mkdocs/img/logo_vsc_sm.png" label="VSC" labelloc="b" height=".8" imagepos="tc"]
users -> srvx1
users -> VPN
srvx1 -> srvx8
srvx1 -> jet
srvx1 -> vsc
VPN -> jet
VPN -> vsc
VPN -> srvx8
}
```
## How to connect from the Office?
```graphviz dot attack_plan.png
digraph Servers {
users [shape=none image="./mkdocs/img/student_sm.png" label="User" labelloc="b" height="1" imagepos="tc"]
srvx1 [shape=none image="./mkdocs/img/server_sm.png" label="SRVX1" labelloc="b" height="1" imagepos="tc"]
srvx8 [shape=none image="./mkdocs/img/server_sm.png" label="SRVX8" labelloc="b" height="1" imagepos="tc"]
jet [shape=none image="./mkdocs/img/server_sm.png" label="JET" labelloc="b" height="1" imagepos="tc"]
vsc [shape=none image="./mkdocs/img/logo_vsc_sm.png" label="VSC" labelloc="b" height=".8" imagepos="tc"]
users -> srvx1
users -> srvx8
users -> jet
users -> vsc
}
```
# Containers & Services
Summary of services and containerized applications available @IMGW.
| name | type | description | url/path | host |
|---|---|---|---|---|
| teachinghub | service | Jupyterlab for Teaching & Development | [teaching-wolke.img.univie.ac.at](https://teaching-wolke.img.univie.ac.at) | srvx1 |
Containerization is done via singularity/apptainer. A somewhat incomplete introduction is given [here](https://gitlab.phaidra.org/imgw/singularity).
# SRVX1
Steps:
1. Request access / will be done for you by your supervisor.
2. As Staff, access using SSH - [How to SSH / VNC / VPN](../SSH-VPN-VNC/README.md)
3. As Student, access using Teaching Hub - [How to connect using the TeachingHub](../TeachingHub.md)
## System Information
## Jupyterhub
<img src="./mkdocs/img/jupyterhub-logo.svg" width="150px">
<img src="../mkdocs/img/jupyterhub-logo.svg" width="150px">
SRVX1 serves a teaching [jupyterhub](https://jupyterhub.readthedocs.io/en/stable/) with a [jupyterlab](https://jupyterlab.readthedocs.io/en/stable/). It allows easy access for students and teachers. Access: [https://srvx1.img.univie.ac.at/hub](https://srvx1.img.univie.ac.at/hub)
## Software
These software libraries are usually handled by environment modules.
![](../mkdocs/img/envmodules.png)
## Currently installed modules
Please note that new versions might already be installed.
```bash title="available modules"
module av
------- /home/swd/spack/share/spack/modules/linux-rhel8-skylake_avx512 -------
anaconda2/2019.10-gcc-8.5.0 proj/7.1.0-gcc-8.5.0
anaconda3/2020.11-gcc-8.5.0 proj/8.1.0-gcc-8.5.0
anaconda3/2021.05-gcc-8.5.0 python/3.8.12-gcc-8.5.0
autoconf/2.69-oneapi-2021.2.0
autoconf/2.71-oneapi-2021.2.0
cdo/1.9.10-gcc-8.5.0
eccodes/2.19.1-gcc-8.5.0
eccodes/2.19.1-gcc-8.5.0-MPI3.1.6
eccodes/2.19.1-intel-20.0.4
eccodes/2.19.1-intel-20.0.4-MPI3.1.6
eccodes/2.21.0-gcc-8.5.0
eccodes/2.21.0-gcc-8.5.0-MPI3.1.6
eccodes/2.21.0-intel-20.0.4
fftw/3.3.10-gcc-8.5.0
fftw/3.3.10-gcc-8.5.0-MPI3.1.6
gcc/8.5.0-gcc-8.5rhel8
geos/3.8.1-gcc-8.5.0
hdf5/1.10.7-gcc-8.5.0
hdf5/1.10.7-gcc-8.5.0-MPI3.1.6
hdf5/1.10.7-intel-20.0.4-MPI3.1.6
hdf5/1.12.0-intel-20.0.4
hdf5/1.12.0-oneapi-2021.2.0
intel-mkl/2020.4.304-intel-20.0.4
intel-oneapi-compilers/2021.2.0-oneapi-2021.2.0
intel-oneapi-dal/2021.2.0-oneapi-2021.2.0
intel-oneapi-mkl/2021.2.0-oneapi-2021.2.0
intel-oneapi-mpi/2021.2.0-oneapi-2021.2.0
intel-parallel-studio/composer.2020.4-intel-20.0.4
libemos/4.5.9-gcc-8.5.0-MPI3.1.6
libemos/4.5.9-intel-20.0.4
libgeotiff/1.6.0-intel-20.0.4
matlab/R2020b-gcc-8.5.0
miniconda2/4.7.12.1-gcc-8.5.0
miniconda3/4.10.3-gcc-8.5.0
nco/4.9.3-intel-20.0.4
nco/5.0.1-gcc-8.5.0
ncview/2.1.8-gcc-8.5.0
ncview/2.1.8-intel-20.0.4-MPI3.1.6
netcdf-c/4.6.3-gcc-8.5.0-MPI3.1.6
netcdf-c/4.6.3-intel-20.0.4-MPI3.1.6
netcdf-c/4.7.4-gcc-8.5.0
netcdf-c/4.7.4-intel-20.0.4
netcdf-fortran/4.5.2-gcc-8.5.0-MPI3.1.6
netcdf-fortran/4.5.2-intel-20.0.4-MPI3.1.6
netcdf-fortran/4.5.3-gcc-8.5.0
netlib-lapack/3.9.1-gcc-8.5.0
netlib-lapack/3.9.1-intel-20.0.4
netlib-lapack/3.9.1-oneapi-2021.2.0
netlib-scalapack/2.1.0-gcc-8.5.0
netlib-scalapack/2.1.0-gcc-8.5.0-MPI3.1.6
openblas/0.3.18-gcc-8.5.0
opencoarrays/2.7.1-gcc-8.5.0
opencoarrays/2.7.1-intel-20.0.4
openmpi/3.1.6-gcc-8.5.0
openmpi/3.1.6-intel-20.0.4
openmpi/4.0.5-gcc-8.5.0
openmpi/4.0.5-intel-20.0.4
parallel-netcdf/1.12.2-gcc-8.5.0
parallel-netcdf/1.12.2-gcc-8.5.0-MPI3.1.6
----------------------------- /home/swd/modules ------------------------------
anaconda3/leo-current-gcc-8.3.1 intelpython/2021.4.0.3353 pypy/7.3.5
ecaccess-webtoolkit/4.0.2 intelpython/2022.0.2.155 shpc/0.0.33
ecaccess-webtoolkit/6.3.1 micromamba/0.15.2 teleport/10.1.4
enstools/v2021.11 micromamba/0.27.0 teleport/10.3.3
idl/8.2-sp1 ncl/6.6.2 xconv/1.94
```
On how to use environment modules, go to [Using Environment Modules](../Misc/Environment-Modules.md).
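In short, loading one of the modules listed above works like this (using anaconda3 as in the Getting Started checklist):

```bash title="Loading a module"
module load anaconda3   # make the software available in this shell
module list             # show currently loaded modules
module unload anaconda3 # remove it again
```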
## User services
There is a script collection that is accessible via the `userservices` command. These are scripts in a common directory. Feel free to copy or edit as you like.
## Container Hub
Currently there is the possibility to run [singularity/apptainer](https://singularity.hpcng.org/) containers on all our Servers. This is really similar to docker, but much more secure for multi-user servers. Almost every docker container can be converted into a singularity container. Some of the build recipes use docker.
There are a number of prepared containers but more can be added. If you have a wish or an existing container useful for others, please share.
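Running a prepared container is then a one-liner; a sketch using the TEXLIVE image from the list below (the pdflatex call is just an example):

```bash title="Running a prepared container (sketch)"
# execute a command inside the container image
singularity exec /home/swd/containers/texlive/texlive-2022.sif pdflatex paper.tex
```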
```yaml
containers:
- eccodes:2.6.0
- libemos
- description: Use for building flexextract with working libemos dependencies.
- TEXLIVE: TeX Live 2022
- compiler: gcc:12.2.0
- path: /home/swd/containers/texlive/texlive-2022.sif
- os: debian 12
- singularity: 3.8.7
- packages:
- texlive from gitlab
```
# Services
Information on available services:
- [TeachingHub (Jupyterhub)](#teachinghub)
- [Webdata - File hosting](#webdata)
- [YoPass - Password Sharing](#yopass)
- [iLibrarian - Paper Management](#ilibrarian)
- [Filetransfer.sh - File sharing service](#filetransfer)
- [Uptime - Monitoring](#uptime)
- [Gitlab Runner - Gitlab CI](#gitlab-runner)
## TeachingHub
A multi-user version of the notebook designed for companies, classrooms and research labs.
[website](https://jupyter.org/hub) access: [teachinghub](https://teaching-wolke.img.univie.ac.at)
You can use this link:
- https://teaching-wolke.img.univie.ac.at
[Documentation](../TeachingHub.md) for using the TeachingHub at IMGW.
## Webdata
A modern HTTP web server index for Apache httpd, lighttpd, and nginx.
[website](https://larsjung.de/h5ai/) access: [webdata](https://srvx1.img.univie.ac.at/webdata)
You can use two links:
- https://srvx1.img.univie.ac.at/webdata
- https://img.univie.ac.at/webdata
## YoPass
Yopass is created to reduce the amount of clear text passwords stored in email and chat conversations by encrypting and generating a short lived link which can only be viewed once.
[website](https://yopass.se) access: [secure](https://srvx1.img.univie.ac.at/secure)
## iLibrarian
I, Librarian is an online service that will organize your collection of PDF papers and office documents.
[website](https://i-librarian.net) access: [library](https://srvx1.img.univie.ac.at/library)
## Filetransfer
Easy and fast file sharing from the command-line. This code contains the server with everything you need to create your own instance.
[website](https://transfer.sh) access: [filetransfer](https://srvx1.img.univie.ac.at/filetransfer)
## Uptime
Uptime Kuma is an easy-to-use self-hosted monitoring tool.
[website](https://uptime.kuma.pet) access: [uptime](https://uptime-wolke.img.univie.ac.at)
## Gitlab Runner
GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline.
[website](https://docs.gitlab.com/runner/) access: via [gitlab](https://gitlab.phaidra.org/groups/imgw/-/runners)
Example project on gitlab showing how to implement a gitlab CI.
# TeachingHub

Address: [https://srvx1.img.univie.ac.at/hub](https://srvx1.img.univie.ac.at/hub)
Access to the Teaching Hub in a browser (mobile or desktop)
## How to get access?
### Students
Usually **your teacher** is going to **sign you up** and you will receive one or two emails containing information on how to login.
You will receive access to the Teaching Hub usually for 6 months. This can be extended for good reasons, or until a teacher signs you up again.
### Staff
Request an account by writing to the [IT](mailto:it.img-wien@univie.ac.at), which will extend your existing server account.
The email you received contains credentials for the Teaching Hub: your username, a new password (only for the Teaching Hub) and a secret (16 digit code). It looks like this:
```
Dear TeachingHub User Monkey
```
```yaml title="mkdocs.yml (excerpts)"
repo_name: IMGW/Computer-Resources
use_directory_urls: false
# this adds the feature to directly edit the file on gitlab
edit_uri: edit/master/
copyright: Copyright &copy; 2022 - 2022 IMGW, Michael Blaschek
theme:
  name: material
  # ...
  features:
    - navigation.indexes
    - navigation.top
  logo: mkdocs/img/favicon.ico
  favicon: mkdocs/img/favicon.ico
  custom_dir: mkdocs/overrides
plugins:
  # ...
extra:
  # ...
  - icon: fontawesome/solid/cloud
    link: https://img.univie.ac.at
    name: Institut für Meteorologie und Geophysik
  - icon: fontawesome/solid/star
    link: https://www.flaticon.com/free-icons
    name: Icons from flaticon
extra_css:
  - mkdocs/stylesheets/extra.css
# ...
markdown_extensions:
  # ...
  - pymdownx.details
  - pymdownx.emoji
  - mkdocs_graphviz
```
New images: mkdocs/img/cpu.png (18.4 KiB), mkdocs/img/local-area-network.png (13.4 KiB), mkdocs/img/local-area-network_sm.png (5.54 KiB), mkdocs/img/logo_vsc.png (12.9 KiB), mkdocs/img/logo_vsc_sm.png (4.39 KiB), mkdocs/img/pdf-icon.png (1.16 KiB).