Commit 0e460638 authored by Michael Blaschek

Merged data update

parents 08fa2757 f0e4d868
Showing 496 additions and 29 deletions
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
  "name": "Python 3",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/python:0-3.9",
  // Features to add to the dev container. More info: https://containers.dev/features.
  // "features": {},
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [3000],
  // Use 'postCreateCommand' to run commands after the container is created.
  "postCreateCommand": "pip3 install --user -r requirements.txt",
  // Configure tool-specific properties.
  // "customizations": {},
  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  // for podman as rootless
  "runArgs": ["--userns=keep-id"],
  "remoteUser": "vscode",
  "containerUser": "vscode"
}
testing/*
*/.ipynb_checkpoints/*
*/*.log
image: python:3.9-buster

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

# Cache between jobs in the same branch
cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - .cache/pip

stages:
  - build
  - deploy

build:
  stage: build
  rules:
    # only run the pipeline when "build" is in the commit message
    - if: $CI_COMMIT_MESSAGE =~ /.*build.*/
    - if: $MY_VARIABLE
  before_script:
    # Install all required packages
    - pip install -r requirements.txt
  script:
    # --strict is too strict :)
    - mkdocs build -c --verbose
  artifacts:
    paths:
      - mkdocs.log
  cache:
    key: build-cache
    paths:
      - site/

deploy:
  stage: deploy
  needs:
    - build
  before_script:
    - apt-get update -qq && apt-get install -y -qq sshpass openssh-client rsync
  script:
    # - sshpass -p "$WOLKE_PASSWORD" scp -oStrictHostKeyChecking=no -r ./site/* $WOLKE_USER@wolke.img.univie.ac.at:/var/www/html/documentation/general/
    - sshpass -p "$WOLKE_PASSWORD" rsync -atv --delete -e "ssh -o StrictHostKeyChecking=no" ./site/ $WOLKE_USER@wolke.img.univie.ac.at:/var/www/html/documentation/general
  cache:
    key: build-cache
    paths:
      - site/
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
  "name": "Python 3",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/python:0-3.9",
  "features": {
    "ghcr.io/devcontainers-contrib/features/mkdocs:2": {}
  }
  // Features to add to the dev container. More info: https://containers.dev/features.
  // "features": {},
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "pip3 install --user -r requirements.txt",
  // Configure tool-specific properties.
  // "customizations": {},
  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "root"
}
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
  "name": "Python 3",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/python:0-3.9",
  // Features to add to the dev container. More info: https://containers.dev/features.
  // "features": {},
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "pip3 install --user -r requirements.txt",
  // Configure tool-specific properties.
  // "customizations": {},
  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "root"
}
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
  "name": "Python 3",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/python:0-3.9"
  // Features to add to the dev container. More info: https://containers.dev/features.
  // "features": {},
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "pip3 install --user -r requirements.txt",
  // Configure tool-specific properties.
  // "customizations": {},
  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "root"
}
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
  "name": "Python 3",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/python:0-3.9",
  // Features to add to the dev container. More info: https://containers.dev/features.
  // "features": {},
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "pip3 install --user -r requirements.txt",
  // Configure tool-specific properties.
  // "customizations": {},
  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "root"
}
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
  "name": "Python 3",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/python:0-3.9",
  // Features to add to the dev container. More info: https://containers.dev/features.
  // "features": {},
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  "postCreateCommand": "pip3 install --user -r requirements.txt",
  // Configure tool-specific properties.
  // "customizations": {},
  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "root"
}
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/python
{
  "name": "Python 3",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/python:0-3.9",
  // Features to add to the dev container. More info: https://containers.dev/features.
  // "features": {},
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  "postCreateCommand": "pip3 install --user -r requirements.txt",
  // Configure tool-specific properties.
  // "customizations": {},
  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "root"
  "containerUser": "vscode"
}
.pages 0 → 100644
nav:
  - README.md
  - SRVX1.md
  - SRVX8.md
  - Jet-Cluster.md
  - VSC.md
  - ECMWF.md
  - TeachingHub.md
  - SSH-VPN-VNC
  - ...
@@ -5,10 +5,12 @@
@phone: 53715
@date: Fri Sep 25 09:15:10 CEST 2020
Data currently shared on the jet cluster:
`PATH /jetfs/shared-data`
Steps:
1. **Be careful.** Do not remove data unless you are certain that this is OK for everyone.
2. Create directories with clear names. **Add read permissions for others.**
3. Ask for help if needed.
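As a sketch of steps 2 and 3, creating a clearly named, world-readable dataset directory could look like this (the path `/tmp/shared-demo` and the dataset name are hypothetical; on the cluster you would work under `/jetfs/shared-data`):

```shell
# create a clearly named dataset directory, readable by everyone (755)
mkdir -p -m 755 /tmp/shared-demo/ERA5_precip_2020
# verify the permissions: drwxr-xr-x
ls -ld /tmp/shared-demo/ERA5_precip_2020
```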
@@ -24,7 +26,6 @@ Typical Permissions are:
folders: 755 rwxr-xr-x allows all users to read
files: 644 rw-r--r-- allows all users to read
Change permissions for all directories below a given `DIR`:
`find [DIR] -type d -exec chmod 755 {} \;`
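The same pattern works for files; a self-contained sketch on a hypothetical demo tree (`/tmp/perm-demo`), setting directories to 755 and files to 644:

```shell
# build a small demo tree with restrictive permissions
DIR=/tmp/perm-demo
mkdir -p $DIR/sub
touch $DIR/sub/data.txt
chmod 700 $DIR/sub
chmod 600 $DIR/sub/data.txt
# open it up: directories -> 755 (rwxr-xr-x), files -> 644 (rw-r--r--)
find $DIR -type d -exec chmod 755 {} \;
find $DIR -type f -exec chmod 644 {} \;
```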
......
@@ -7,9 +7,6 @@ Edit this file here or on [gitlab](https://gitlab.phaidra.org/imgw/computer-resources)
Fill in the appropriate table and add a README to the directory/dataset as well. Use `Data-template.md` as an example.
# Reanalysis data
The following reanalyses are currently available:
| Name | Time period | Resolution | Domain | Variables | Vertical Resolution | Location | Contact | Comments | Source |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CERA 20C | 01-1900 to 12-2010 | 2.0 deg x 2.0 deg; 3-hourly | Global | --- | 91 layers | /jetfs/shared-data/ECMWF/CERA_glob_2deg_3h | Flexpart group | the data is in Flexpart format! | extracted from ECMWF via flex_extract |
......
Documentation/GPFS-jet.png (110 KiB)
Documentation/teachinghubnotebooks.png (76.9 KiB)
ECMWF.md 0 → 100644
# European Centre for Medium-Range Weather Forecasts (ECMWF)
![](./mkdocs/img/logo_ecmwf.png)
[website](https://www.ecmwf.int) / [service status](https://www.ecmwf.int/en/service-status) / [confluence](https://confluence.ecmwf.int) / [support](https://support.ecmwf.int) / [accounting](https://www.ecmwf.int/user)
Available Services:
- [ecaccess](https://confluence.ecmwf.int/display/ECAC/ECaccess+Home)
- [srvx1.gateway](https://srvx1.img.univie.ac.at/ecmwf/ecmwf) / [gateway.docs](https://confluence.ecmwf.int/display/ECAC/Releases+-+Gateway+package) / [boaccess](https://boaccess.ecmwf.int)
## Connecting to ECMWF Services
An ECMWF user can connect to ECS/ATOS using Teleport:
```bash title="Using teleport"
module load teleport
# ** INFO: Default jumphost now: jump.ecmwf.int
# ** INFO: Module loaded. SSH Agent required for login, run 'startagent',
# **       run 'ssh-agent -k' to kill the agent.
# Login using 'python3 -m teleport.login' and your ECMWF credentials.

# Activate the ssh-agent (required to store the key/certificate)
startagent
# Check if it is running
ssh-add -l
```
```bash title="Connecting to ECMWF"
# Login to the default teleport jump host (shell.ecmwf.int)
python3 -m teleport.login
tsh status
# list the agent keys again
ssh-add -l
# now there should be two keys

# Login to the jump host in Bologna
python3 -m teleport.login
# Check the status
tsh status
# ssh to the login nodes
ssh -J [user]@jump.ecmwf.int [user]@ecs-login
ssh -J [user]@jump.ecmwf.int [user]@hpc-login
```
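The two `ssh -J` commands above can also be stored in `~/.ssh/config`, so that a plain `ssh ecs` works. A sketch, assuming hypothetical host aliases `ecs` and `hpc` (replace `[user]` with your ECMWF user id):

```
# ~/.ssh/config (sketch; the aliases "ecs" and "hpc" are arbitrary)
Host ecs
    HostName ecs-login
    User [user]
    ProxyJump [user]@jump.ecmwf.int

Host hpc
    HostName hpc-login
    User [user]
    ProxyJump [user]@jump.ecmwf.int
```

The agent keys from `startagent` are picked up automatically by `ssh`.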
Environment variables configuration:
- `ECMWF_USERNAME` - The ECMWF Username
- `ECMWF_PASSWORD` - The ECMWF Password
- `TSH_EXEC` - The Teleport binary tsh path
- `TSH_PROXY` - The ECMWF Teleport proxy
### SSH-agent
An SSH agent must be running in order to connect to the ECMWF servers. The teleport module includes a `startagent` function that reconnects to an existing ssh-agent. Do not start too many agents!
```bash title="start ssh-agent"
# load the module
module load teleport
# start a new agent or reconnect
startagent
```
## ECMWF Access Server (ECS)
There is a known issue with ssh keys on ECS/HPC; adding your own public key to your `authorized_keys` works around it:
```bash
# Generate a new SSH key
ssh-keygen -t ed25519
# Add the public key to your own authorized_keys on ECS/HPC
cat .ssh/id_ed25519.pub >> .ssh/authorized_keys
```
This will solve some `ecaccess` issues.
## Connecting via ECaccess
A local installation of the ECaccess tools can be used to submit and monitor jobs from a remote location.
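A rough sketch of what a remote-submission session could look like with the ECaccess command-line tools (assuming they are installed locally and a certificate has been created; exact command options may vary with the toolkit version):

```bash
# create/refresh the ECaccess certificate (asks for your credentials)
ecaccess-certificate-create
# copy a job script to ECMWF and submit it
ecaccess-file-put job.sh ec:job.sh
ecaccess-job-submit ec:job.sh
# monitor submitted jobs
ecaccess-job-list
```

See the ECaccess documentation linked above for the full command set.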
# Editors for development

## Recommendations
Here you can find help on getting started with several editors, along with useful configurations and packages.
@@ -6,7 +6,6 @@ If you have a nice addition please submit it (create an issue).
Thanks. :elephant:
[[_TOC_]]
## Vim
![](https://upload.wikimedia.org/wikipedia/commons/thumb/9/9f/Vimlogo.svg/64px-Vimlogo.svg.png)
@@ -20,7 +19,7 @@ Alternative: [VIM](https://www.vim.org/)
Some useful VIM modifications in `~/.vimrc`:
```vim title="Configuration file"
filetype plugin indent on
" show existing tab with 4 spaces width
set tabstop=4
......
File added
@@ -6,6 +6,7 @@ As a GUI version there is also `gvim`
Check out the handy References:
- [VIM Keyboard DE](VIM-Keyboard.pdf)
- [VIM Reference](VIM-Reference.pdf)
- [VIMDIFF Reference](VIMDIFF-Reference.pdf)
# Configs
```
@@ -56,3 +57,12 @@ or maybe
# fix for vim backspace
stty erase '^?'
```
# VIM Diff
Visually compare two files, highlighting the differences. You can switch between the files (`CTRL+w CTRL+w`) and move parts from one file to the other (`do` = diff obtain, `dp` = diff put).
```bash
vimdiff file1 file2
```
[VIMDIFF Reference](VIMDIFF-Reference.pdf)
# Debugging
Please have a look at the debugging options of your compiler (e.g. `-g`), which add debugging information to the executable. This makes the executable larger, but allows a debugger to point at the source code where a problem happens. Depending on your code, the compiler may also transform it through optimization flags; consider removing those for debugging.
## Coredump
What is a coredump?
*A core dump is a file containing a process's address space (memory) when the process terminates unexpectedly. Core dumps may be produced on-demand (such as by a debugger), or automatically upon termination. Core dumps are triggered by the kernel in response to program crashes, and may be passed to a helper program (such as systemd-coredump) for further processing. A core dump is not typically used by an average user, but may be passed on to developers upon request where it can be invaluable as a post-mortem snapshot of the program's state at the time of the crash, especially if the fault is hard to reliably reproduce.*
[coredump@ArchWiki](https://wiki.archlinux.org/title/Core_dump)
Most of our servers and the VSC have the coredump service available. You can check this simply by running `coredumpctl`, which should be available if the service is installed.
On most systems the core dump size is limited; run `ulimit -c` to see how large your core dump may be. Some systems allow users to change this limit with `ulimit -c [number]`. It needs to be set before the core file is dumped.
Core dumps are configured to persist for at least 3 days before they are automatically cleaned.
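For example, to check and raise the limit in the current shell before running your program (a sketch; whether `unlimited` is permitted depends on the hard limit set by the administrators):

```shell
# show the current core-size limit ("0" disables core files)
ulimit -c
# raise the soft limit for this shell session
ulimit -c unlimited
# confirm, then run the crashing program from the same shell
ulimit -c
```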
### coredump utilities
As a user you can only access your own coredump information; the available dumps can be listed like this:
```bash
[user@srvx1 ~]$ coredumpctl list
TIME PID UID GID SIG COREFILE EXE
Thu 2022-08-18 09:58:55 CEST 1869359 12345 100 11 none /usr/lib64/firefox/firefox
Wed 2022-08-24 14:33:49 CEST 1603205 12345 100 6 none /jetfs/home/user/Documents/test_coredump.x
Wed 2022-08-24 14:36:11 CEST 1608700 12345 100 6 truncated /jetfs/home/user/Documents/test_coredump.x
Wed 2022-08-24 14:47:47 CEST 1640330 12345 100 6 none /jetfs/home/user/Documents/test_coredump.x
Wed 2022-08-24 14:57:01 CEST 1664822 12345 100 6 present /jetfs/home/user/Documents/test_coredump.x
```
Especially relevant are the `SIG` and `COREFILE` columns, which tell you why your process was killed. Please find some useful information on the signals in the table below. If `COREFILE` is `none`, the system has probably disabled core dumps or the ulimit is 0. If it is `truncated`, the ulimit is too small for your core dump. If it is `present`, the file can be used for debugging.
![Linux Signal](linux-signals.png)
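The mapping between signal numbers and names shown in the table can also be checked directly in the shell:

```shell
# list all signal names with their numbers
kill -l
# translate a single number, e.g. signal 11
kill -l 11
```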
## Test a coredump
Use the following C program to create a coredump and look at it. The program does something wrong. Maybe you can figure it out.
```c
#include <stdio.h>
#include <stdlib.h>

int main(void){
    int x;
    free(&x);
    return 0;
}
```
Write it to a file called `test_coredump.c` and compile:
<pre>
# compile (with -g for debugging information)
[user@srvx1 ~]$ gcc -g -o test_coredump.x test_coredump.c
# execute
[user@srvx1 ~]$ ./test_coredump.x
Segmentation fault (core dumped)
# check the coredump
[user@srvx1 ~]$ coredumpctl
TIME PID UID GID SIG COREFILE EXE
Wed 2022-08-24 14:09:10 CEST 512174 1234 100 11 present /home/user/test_coredump.x
# inspect the core dump
[user@srvx1 ~]$ coredumpctl info 512174
Hint: You are currently not seeing messages from other users and the system.
Users in groups 'adm', 'systemd-journal', 'wheel' can see all messages.
Pass -q to turn off this notice.
PID: 512174 (test_coredump.x)
UID: 1234 (user)
GID: 100 (users)
Signal: 6 (ABRT)
Timestamp: Wed 2022-08-24 14:57:00 CEST (9min ago)
Command Line: ./test_coredump.x
Executable: /home/user/Documents/test_coredump.x
Control Group: /user.slice/user-1234.slice/session-257306.scope
Unit: session-257306.scope
Slice: user-1234.slice
Session: 257306
Owner UID: 1234 (user)
Boot ID: 521d3ca4537d4cdb92bc4eefba12072a
Machine ID: e9055dc0f93045278fcbdde4b6828bc8
Hostname: srvx1.img.univie.ac.at
Storage: /var/lib/systemd/coredump/core.test_coredump\x2ex.1234.521d3ca4537d4cdb92bc4eefba12072a.512174.1661345820000>
Message: Process 512174 (test_coredump.x) of user 1234 dumped core.
Stack trace of thread 512174:
#0 0x00007f637fc4737f raise (libc.so.6)
#1 0x00007f637fc31db5 abort (libc.so.6)
#2 0x00007f637fc8a4e7 __libc_message (libc.so.6)
#3 0x00007f637fc915ec malloc_printerr (libc.so.6)
#4 0x00007f637fc9189c munmap_chunk (libc.so.6)
#5 0x000000000040059a main (test_coredump.x)
#6 0x00007f637fc33493 __libc_start_main (libc.so.6)
#7 0x00000000004004ce _start (test_coredump.x)
</pre>
This tells you where the core dump is and a bit of a stack trace as well.
Let's have a look at the dump file.
<pre>
# run gdb with the core dump file
[user@srvx1 ~]$ coredumpctl gdb 512174
...
This GDB was configured as "x86_64-redhat-linux-gnu". Type "show configuration" for configuration details.
...
Reading symbols from /home/user/Documents/test_coredump.x...done.
Core was generated by `./test_coredump.x'.
Program terminated with signal SIGABRT, Aborted.
#0 0x00007f1a84fd137f in raise () from /lib64/libc.so.6
(gdb)
# now let's have a look at where we are.
(gdb) l
1 #include <stdio.h>
2 #include <stdlib.h>
3 void main(){
4 int x;
5 free(&x);
6 }
# let's run the program and see what problems it has
(gdb) r
Starting program: /home/user/Documents/test_coredump.x
...
munmap_chunk(): invalid pointer
Program received signal SIGABRT, Aborted.
0x00007ffff7a4237f in raise () from /lib64/libc.so.6
(gdb)
# so we ask the debugger where that happens:
(gdb) where
#0 0x00007ffff7a4237f in raise () from /lib64/libc.so.6
#1 0x00007ffff7a2cdb5 in abort () from /lib64/libc.so.6
#2 0x00007ffff7a854e7 in __libc_message () from /lib64/libc.so.6
#3 0x00007ffff7a8c5ec in malloc_printerr () from /lib64/libc.so.6
#4 0x00007ffff7a8c89c in munmap_chunk () from /lib64/libc.so.6
#5 0x000000000040059a in main () at test_coredump.c:5
# and because that is not totally clear, we can do a backtrace
(gdb) bt full
#0 0x00007ffff7a4237f in raise () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007ffff7a2cdb5 in abort () from /lib64/libc.so.6
No symbol table info available.
#2 0x00007ffff7a854e7 in __libc_message () from /lib64/libc.so.6
No symbol table info available.
#3 0x00007ffff7a8c5ec in malloc_printerr () from /lib64/libc.so.6
No symbol table info available.
#4 0x00007ffff7a8c89c in munmap_chunk () from /lib64/libc.so.6
No symbol table info available.
#5 0x000000000040059a in main () at test_coredump.c:5
x = 0
# x is an integer on the stack, not allocated with malloc, thus it must not be freed
</pre>
Problem solved: we cannot free memory that was not allocated on the heap.
# Coding
Most code is written in Python or Fortran. Here, some information is given on Fortran and a little bit on C as well.
Have a look at [Debugging](./Debugging.md)
# Fortran
Fortran is quite popular in Meteorology and Geophysics.
@@ -6,39 +12,33 @@ Please find some help on solving common problems.
Get some information on Fortran:
- [Fortran Language, Learning, Compilers](https://fortran-lang.org)
# Compilers
There are a few compilers, but most commonly GNU (`gfortran`) and Intel (`ifort`) are used on our servers.

| | GNU Fortran | Intel Fortran |
|---|---|---|
| double precision real | `-fdefault-real-8` | `-r8` |
| check array bounds | `-fbounds-check` | `-check` |
| call chain traceback | `-fbacktrace` | `-traceback` |
| convert little/big endian | `-fconvert=big-endian/little-endian` | `-convert big_endian/little_endian` |
| default optimisation | `-O0` | `-O2` |
| highest recommended optimisation | `-O3` | `-O2`, maybe `-O3` or `-fast` |
## Intel Compiler
From P. Seibert, using `ifort` for the fastest code (srvx1):

```makefile
# get the GRIB_API paths from the environment modules
INCPATH = GRIB_API/include
LIBPATH = GRIB_API/lib
FFLAGS = -cpp -xAVX -ipo -O3 -no-prec-div -opt-prefetch -m64 -mcmodel=medium -I$(INCPATH)
LDFLAGS = $(FFLAGS) -L$(LIBPATH) -Bstatic -lgrib_api_f90 -lgrib_api -lm -ljasper
```

Remarks:

- these are settings for FLEXPART; in general you won't need the `grib_api` and `jasper` libraries or `-mcmodel=medium`
- `-cpp -mcmodel=medium -I$(INCPATH)`
- `-lm` includes the Intel math library; it is important that the linking step looks like `$(FC) $(FFLAGS) *.o -o a.out $(LDFLAGS)`, or you may lose the math library
## Tricky Issues
......