Commit 4d1b2484 authored by Michael Blaschek

Update Jet-Cluster.md, README.md, SRVX1.md, SRVX8.md, SRVX2.md, Documentation/GPFS-jet.png, Documentation/logo_uniwien.jpg, Documentation/logo_img2_color.png, Documentation/jet-job2.png, Documentation/jet-job1.png, Documentation/jet-login.png files
parent d2bbe4b1
Documentation/GPFS-jet.png (110 KiB)
Documentation/jet-job1.png (16.6 KiB)
Documentation/jet-job2.png (27.8 KiB)
Documentation/jet-login.png (145 KiB)
Documentation/logo_img2_color.png (42.6 KiB)
Documentation/logo_uniwien.jpg (55.2 KiB)

# The Jet Cluster
[@IMG University of Vienna](https://jet01.img.univie.ac.at) :rocket:
[[_TOC_]]
## Getting Started
Welcome to the HPC @IMG @UNIVIE. Please follow these steps to become a productive member of our department and make good use of the computing resources. Efficiency is key.
### SSH
- [x] Terminal, Putty, ...
- [x] Network access @UNIVIE
Use a terminal or [PuTTY](https://www.putty.org/) to connect to the server via SSH:
```bash
ssh [user]@jet01.img.univie.ac.at
```
Please replace `[user]` with the username given by the [sysadmin](mailto:michael.blaschek@univie.ac.at) and supply your password when asked. This gives you access to the login node. If you are outside the university network, please go to [VPN](#vpn) below first.
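If you log in frequently, key-based authentication saves typing the password every time. A minimal sketch using standard OpenSSH tools (assuming the server permits public-key authentication; `[user]` again stands for your Jet username):
```bash
# generate a key pair locally (skip if you already have one)
ssh-keygen -t ed25519
# copy the public key to the login node; afterwards ssh works without a password prompt
ssh-copy-id [user]@jet01.img.univie.ac.at
```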
### VPN
- [x] `u:account`
The Jet cluster is only accessible from within the network of the University of Vienna. Access from outside therefore has to go through the [VPN-Service](https://vpn.univie.ac.at). Log in there with your `u:account` and download the *Big-IP Edge* client for your system.
![](https://zid.univie.ac.at/fileadmin/user_upload/d_zid/zid-open/daten/datennetz/vpn/Windows/01_download_neu.png)
Links:
* [ZID-VPN](https://vpn.univie.ac.at/f5-w-68747470733a2f2f7a69642e756e697669652e61632e6174$$/vpn/)
* Linux (Ubuntu, Generic), Windows, Mac: [VPN user guides](https://vpn.univie.ac.at/f5-w-68747470733a2f2f7a69642e756e697669652e61632e6174$$/vpn/anleitungen/)
* Arch based AUR package [AUR f5fpc](https://aur.archlinux.org/packages/f5fpc/)
Follow the install instructions for Windows, Mac and Linux and make sure the software works.
![](https://zid.univie.ac.at/fileadmin/user_upload/d_zid/zid-open/daten/datennetz/vpn/Windows/08_verbinden.png)
On Windows and Mac you get a GUI that asks for the VPN server (`vpn.univie.ac.at`) and the username and password of your `u:account`. On Linux execute the following:
```
f5fpc -s -t vpn.univie.ac.at -u [user]
```
The status can be checked with `f5fpc --info`.
### VNC
The login nodes (`jet01` and `jet02`) allow running a `VNC` server.
:construction:
### Jupyterhub
<img src="https://jupyter.org/assets/hublogo.svg" width="300px">
The Jet cluster serves a [JupyterHub](https://jupyterhub.readthedocs.io/en/stable/) with a [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) that launches on the compute nodes and allows users to work directly on the cluster as well as to submit jobs.
Go to [https://jet01.img.univie.ac.at](https://jet01.img.univie.ac.at) from within the VPN or the university network.
Log in with your Jet credentials and choose a job profile; the JupyterLab will then be launched.
![Login](Documentation/jet-login.png)
![Job](Documentation/jet-job1.png)
![Jobs](Documentation/jet-job2.png)
:construction:
### User Quotas and Restrictions
Memory limits / Slurm resource requests
:construction:
#### Network drives
Currently two network drives are mounted on the Jet cluster. They are served by [SRVX8](SRVX8.md), so data stored there can be accessed from or transferred to the Jet cluster. Be aware that this data travels over the network, so latencies and transfer times are higher than on the local GPFS file system. The current mounts (as reported by `df -h`):
```
131.130.157.8:/raid61 400T 378T 22T 95% /raid61
131.130.157.8:/raid60 319T 309T 11T 97% /raid60
jetfs 1.1P 81T 975T 8% /jetfs
```
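If you want to stage a larger data set from one of these network drives onto the faster GPFS file system before processing it, `rsync` is a good choice because it preserves file attributes and can resume interrupted transfers. A minimal sketch (the directory layout under `/jetfs` is an assumption, adjust the paths to your own data):
```bash
# copy a data directory from the NFS-mounted raid to GPFS;
# -a keeps permissions and timestamps, --progress shows the transfer status
rsync -a --progress /raid61/[user]/mydata/ /jetfs/[user]/mydata/
```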
## System Information
Last Update: 4.12.2020
Node Setup
- 2x Login Nodes
- 7x Compute Nodes
![GPFS](Documentation/GPFS-jet.png)
### Node Information
| Name | Value |
| --- | --- |
| Product | ThinkSystem SR650 |
| Distro | RedHatEnterprise 8.2 Ootpa |
| Kernel | 4.18.0-147.el8.x86_64 GNU/Linux |
| Processor | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz |
| Cores | 2 CPU, 20 physical cores per CPU, total 80 logical CPU units |
| BaseBoard | Lenovo -\[7X06CTO1WW\]- Intel Corporation C624 Series Chipset |
| CPU Time | 350 kh |
| Memory | 755 GB Total |
| Memory/Core | 18.9 GB |
| Network | 40 Gbit/s (Infiniband) |
Global file system (GPFS) is present on all nodes with about 1 PB (~1000 TB) of storage.
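To check the current usage of the GPFS file system from any node, request the same `df` listing as shown above for the mount point:
```bash
df -h /jetfs
```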
## Software
The typical installation of an Intel cluster has the Intel compiler suite (`intel-parallel-studio`) and the open-source GNU compilers installed. Based on these two compilers (`intel`, `gnu`), there are usually two versions of each scientific software package.
Major Libraries:
- OpenMPI
- HDF5
- NetCDF (C, Fortran)
- ECCODES from [ECMWF](https://confluence.ecmwf.int/display/ECC)
- Math libraries, e.g. intel-mkl, lapack, scalapack
- Interpreters: Python, Julia
- Tools: cdo, ncl, nco, ncview
These software libraries are usually handled by environment modules.
<img src="http://modules.sourceforge.net/modules_red.svg" width="300px">
### Currently installed modules
```bash
>>> module av
---------------------------------- /jetfs/spack/share/spack/modules/linux-rhel8-skylake_avx512 -----------------------------------
anaconda2/2019.10-gcc-8.3.1-5pou6ji ncview/2.1.8-gcc-8.3.1-s2owtzw
anaconda3/2019.10-gcc-8.3.1-tmy5mgp netcdf-c/4.7.4-gcc-8.3.1-fh4nn6k
anaconda3/2020.07-gcc-8.3.1-weugqkf netcdf-c/4.7.4-intel-20.0.2-337uqtc
cdo/1.9.8-gcc-8.3.1-ipgvzeh netcdf-fortran/4.5.3-gcc-8.3.1-kfd2vkj
eccodes/2.18.0-gcc-8.3.1-s7clum3 netcdf-fortran/4.5.3-intel-20.0.2-irdm5gq
eccodes/2.18.0-intel-20.0.2-6tadpgr netlib-lapack/3.8.0-gcc-8.3.1-ue37lic
enstools/2020.11.dev-gcc-8.3.1-fes7kgo netlib-scalapack/2.1.0-gcc-8.3.1-pbkjymd
gcc/8.3.1-gcc-8.3.1-pp3wjou openblas/0.3.10-gcc-8.3.1-ncess5c
geos/3.8.1-gcc-8.3.1-o76leir openmpi/3.1.6-gcc-8.3.1-rk5av53
hdf5/1.12.0-gcc-8.3.1-awl4atl openmpi/3.1.6-intel-20.0.2-ubasrpk
hdf5/1.12.0-intel-20.0.2-ezeotzr openmpi/4.0.5-gcc-8.3.1-773ztsv
intel-mkl/2020.3.279-gcc-8.3.1-5xeezjw openmpi/4.0.5-intel-20.0.2-4wfaaz4
intel-mkl/2020.3.279-intel-20.0.2-m7bxged parallel-netcdf/1.12.1-gcc-8.3.1-gng2jcu
intel-parallel-studio/composer.2020.2-intel-20.0.2-zuot22y parallel-netcdf/1.12.1-gcc-8.3.1-xxrhtxn
julia/1.5.2-gcc-8.3.1-3iwgkf7 parallel-netcdf/1.12.1-intel-20.0.2-sgz3yqs
libemos/4.5.9-gcc-8.3.1-h3lqu2n proj/7.1.0-gcc-8.3.1-xcjaco5
miniconda2/4.7.12.1-gcc-8.3.1-zduqggv zlib/1.2.11-gcc-8.3.1-bbbpnzp
miniconda3/4.8.2-gcc-8.3.1-3m7b6t2 zlib/1.2.11-intel-20.0.2-3h374ov
nco/4.9.3-gcc-8.3.1-jtokrle
```
Using [environment modules](https://modules.readthedocs.io/en/latest/) it is possible to have different software libraries (versions, compilers) side-by-side and ready to be loaded. Be aware that some libraries depend on others. It is recommended to load the highest-level library first and check which dependencies are loaded along with it, e.g.:
```
>>> module load eccodes/2.18.0-intel-20.0.2-6tadpgr
```
loads the `ECCODES` library and all its dependencies, built with the Intel compilers as indicated by the module name.
```
>>> module list
Currently Loaded Modulefiles:
1) zlib/1.2.11-intel-20.0.2-3h374ov 3) hdf5/1.12.0-intel-20.0.2-ezeotzr 5) netcdf-c/4.7.4-intel-20.0.2-337uqtc
2) openmpi/4.0.5-intel-20.0.2-4wfaaz4 4) parallel-netcdf/1.12.1-intel-20.0.2-sgz3yqs 6) eccodes/2.18.0-intel-20.0.2-6tadpgr
```
`module list` shows the currently loaded modules: six in total, namely `ECCODES` itself and its five dependencies. It is therefore not necessary to load these libraries manually, as they come in automatically with `ECCODES`.
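With the modules loaded, the configuration helpers shipped with the libraries are on the `PATH` and report the correct compile and link flags. A minimal sketch of building a small NetCDF program with the Intel variant (the source file name is a placeholder, and note that the compiler module itself is not pulled in automatically and has to be loaded explicitly):
```bash
# load the matching Intel compiler and the Intel build of netcdf-c
module load intel-parallel-studio/composer.2020.2-intel-20.0.2-zuot22y \
            netcdf-c/4.7.4-intel-20.0.2-337uqtc
# nc-config is shipped with netcdf-c and reports the correct flags
icc my_program.c -o my_program $(nc-config --cflags --libs)
```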
## Best Practice
1. Example on jet
2. Example job
[Slurm Tutorial on Gitlab]()
:construction:
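In the meantime, a minimal sketch of a Slurm batch script as a starting point; the resource numbers are placeholders and the loaded module is just one example from the list above:
```bash
#!/bin/bash
# minimal Slurm job sketch -- adjust resources to your needs
#SBATCH --job-name=example
#SBATCH --ntasks=1
#SBATCH --mem=2G
#SBATCH --time=01:00:00

module load anaconda3/2020.07-gcc-8.3.1-weugqkf
python my_script.py
```
Submit with `sbatch job.sh` and check the queue with `squeue -u $USER`.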
README.md
@@ -14,4 +14,37 @@ Search with the top bar or go through the directories:
If you care to participate, please do so: raise an issue or give some feedback.
Missing solutions are very welcome.
## Information on the Jet Cluster
[Description](Jet-Cluster.md)
**Summary**
- Architecture
- Software
- [JET01-Jupyterhub](https://jet01.img.univie.ac.at)
## Information on SRVX1
Teaching and web-access server
[Description](SRVX1.md)
**Summary**
- Architecture
- Software
- [SRVX1-Jupyterhub](https://srvx1.img.univie.ac.at)
## Information on SRVX2
Computing Node @UZA2
[Description](SRVX2.md)
**Summary**
- Architecture
- Software
## Information on SRVX8
Researchers @UZA2
[Description](SRVX8.md)
**Summary**
- Architecture
- Software
SRVX1.md 0 → 100644
# S R V X 1
[[_TOC_]]
## Getting Started
### SSH
### VPN
### Jupyterhub
## System Information
| Name | Value |
| --- | --- |
| Product | PowerEdge R720xd |
| Distro | CentOS 6.10 Final |
| Kernel | 2.6.32-754.29.2.el6.centos.plus.x86_64 GNU/Linux |
| Processor | Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz |
| Cores | 2 CPU, 8 physical cores per CPU, total 32 logical CPU units |
| BaseBoard | Dell Inc. 0C4Y3R Intel Corporation C600/X79 series chipset |
| CPU time | 140 kh |
| Memory | 190 GB Total |
| Memory/Core | 11.9 GB |
| Network | 10 Gbit/s |
## Software
## Best Practice
## Networking
SRVX2.md 0 → 100644
# S R V X 2
[[_TOC_]]
## Getting Started
### SSH
### VPN
### Jupyterhub
## System Information
| Name | Value |
| --- | --- |
| Product | PowerEdge R940 |
| Distro | CentOS 8.2.2004 Core |
| Kernel | 4.18.0-147.el8.x86_64 GNU/Linux |
| Processor | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz |
| Cores | 4 CPU, 20 physical cores per CPU, total 160 logical CPU units |
| BaseBoard | Dell Inc. 0D41HC Intel Corporation C621 Series Chipset |
| CPU time | 700 kh |
| Memory | 376 GB Total |
| Memory/Core | 9.4 GB |
## Software
## Best Practice
## Networking
SRVX8.md 0 → 100644
# S R V X 8
[[_TOC_]]
## Getting Started
### SSH
### VPN
### Jupyterhub
## System Information
| Name | Value |
| --- | --- |
| Product | PowerEdge R730xd |
| Distro | CentOS 6.10 Final |
| Kernel | 2.6.32-754.29.2.el6.centos.plus.x86_64 GNU/Linux |
| Processor | Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz |
| Cores | 2 CPU, 14 physical cores per CPU, total 56 logical CPU units |
| BaseBoard | Dell Inc. 0599V5 Intel Corporation C610/X99 series chipset |
| CPU time | 245 kh |
| Memory | 504 GB Total |
| Memory/Core | 18 GB |
| Network | 10 Gbit/s |
## Software
## Best Practice
## Networking