Commit 034922d8 authored by Michael Blaschek

smaller update, gettingstarted

parent 2c4478ff
```json title='Dev container configuration'
{
    "name": "Python 3",
    // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
    "image": "mcr.microsoft.com/devcontainers/python:0-3.10",
    // Features to add to the dev container. More info: https://containers.dev/features.
    // "features": {},
    // ...
    "forwardPorts": [3000],
    // Use 'postCreateCommand' to run commands after the container is created.
    "postCreateCommand": "sudo rm /etc/apt/sources.list.d/yarn.list && sudo apt-get update -y && sudo apt-get install -y graphviz && pip3 install --user -r requirements.txt",
    "postStartCommand": "mkdocs serve -a localhost:3000 --dirtyreload",
    // Configure tool-specific properties.
    // ...
}
```
```yaml title='GitLab CI configuration'
image: python:3.9-buster

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

# Cache between jobs in the same branch
cache:
  - key: $CI_COMMIT_REF_SLUG
    paths:
      - .cache/pip

stages:
  - build
  - deploy

build:
  stage: build
  rules:
    # only run the pipeline when '2wolke' is in the commit message or $UPDATEWOLKE is set
    - if: $CI_COMMIT_MESSAGE =~ /.*2wolke.*/
    - if: $UPDATEWOLKE
  before_script:
    # Install all required packages
    - apt-get install -y -qq graphviz
    - pip install -r requirements.txt
  script:
    # --strict is too strict :)
    - mkdocs build -c --verbose
  artifacts:
    paths:
      - mkdocs.log
  cache:
    key: build-cache
    paths:
      - site/

deploy:
  stage: deploy
  needs:
    - build
  before_script:
    - apt-get update -qq && apt-get install -y -qq sshpass openssh-client rsync
  script:
    # - sshpass -p "$WOLKE_PASSWORD" scp -oStrictHostKeyChecking=no -r ./site/* $WOLKE_USER@wolke.img.univie.ac.at:/var/www/html/documentation/general/
    - sshpass -p "$WOLKE_PASSWORD" rsync -atv --delete -e "ssh -o StrictHostKeyChecking=no" ./site/ $WOLKE_USER@wolke.img.univie.ac.at:/var/www/html/documentation/general
  cache:
    key: build-cache
    paths:
      - site/
```
**Welcome to the Department of Meteorology and Geophysics @ University of Vienna.**
🥳

Tasks to complete for newcomers. It is recommended that you print this page and tick off your steps:

- [ ] Request a server account via your supervisor
- [ ] Receive the initial user account information via mail
- [ ] Change your initial password, via
    - browser ([https://wolke.img.univie.ac.at/ipa/ui](https://wolke.img.univie.ac.at/ipa/ui))
    - ssh terminal: `ssh [username]@srvx1.img.univie.ac.at`, then follow the instructions in the command shell (Windows users: install an SSH client first)
    - Optional: set up a password manager ([tips](https://zid.univie.ac.at/en/it-worlds/it-security/it-security-tips/password-manager/)), e.g. Bitwarden
- [ ] (optional, Windows) Install an SSH client, e.g. Bitvise, MobaXterm, PuTTY, ...
- [ ] (optional) Set up an SSH key via [IPA](https://wolke.img.univie.ac.at/ipa/ui), as sketched below
- [ ] Familiarize yourself with the shell environment @SRVX1
- [ ] Apply for your first VSC training course
- [ ] Log in to all servers:
    * [ ] srvx1.img.univie.ac.at
    * [ ] srvx8
    * [ ] jet01/jet02
    * [ ] (optional) connect to vsc5.vsc.ac.at
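For the optional SSH key, a minimal sketch (assuming an ed25519 key and uploading the public key via the IPA web interface):

```bash
# generate a key pair on your local machine (accept the defaults or set a passphrase)
ssh-keygen -t ed25519
# print the public key, then paste it into the IPA web interface
cat ~/.ssh/id_ed25519.pub
# test the connection afterwards
ssh [username]@srvx1.img.univie.ac.at
```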
## Environment
Please do the following steps to get a better idea of what is where:
Steps:
- [ ] login to srvx1 using ssh: `ssh [user]@srvx1.img.univie.ac.at` :earth_africa:
- [ ] run `userpaths` to understand where different data resides, e.g.
    - HOME, SCRATCH (personal), DATA, SHARED, WEBDATA, ?_JET
- [ ] check available modules by running `module av`, and load the anaconda3 module by running `module load anaconda3`. This should allow you to run some python programs.
- [ ] run `userservices` to get some IMGW special tools. Maybe check the weather! A combined sketch of these steps follows below.
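Put together, a first session on srvx1 might look like this (a sketch; output omitted):

```bash
ssh [user]@srvx1.img.univie.ac.at   # log in
userpaths                           # see where HOME, SCRATCH, DATA, ... reside
module av                           # list available modules
module load anaconda3               # make python available
userservices                        # list the IMGW special tools
```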
Please find a useful summary of commands [here](./mkdocs/imgw-cheatsheet.pdf)
## Summary of Computing Resources
The Department of Meteorology and Geophysics has access to the following computing resources:
- Computing Cluster ([JET](Servers/JET.md))
- Vienna Scientific Cluster ([VSC](VSC.md))
- European Center for Medium-Range Weather Forecast ([ECMWF](ECMWF.md))
Please read about access, hardware and quotas of these different resources. A good starting point is [here](./Servers/).
# What to use?
**5 Reasons for Latex**
1. Professional PDF with printing quality
2. Lots of mathematical, physical, chemical or other equations, which need to be in Latex anyway.
3. Need to use a fixed layout given to you by the university or journal
5. Readable and shareable
**5 Reasons for Markdown**
1. Markdown is much easier to learn and more visual
2. Readable and shareable, easy to collaborate
3. Markdown can be converted into HTML, PDF, DOCX, ...
# Markdown and Latex
An easier way to start is Markdown, yet another plain-text markup language, which supports a lot of nice features:
* latex rendering
* code highlighting
* Gitlab, GitHub integration
It is easy to write, share and version control. Markdown files can be converted into many other formats.
You can find an example ...
What you need to get started to convert to PDF:
1. [Pandoc](https://pandoc.org/installing.html)
2. Latex (note the tips when installing Pandoc)
* fonts-extra
```bash
pandoc -d eisvogel.tex -o MyPaper.pdf
```
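Without a defaults file, a plain conversion also works (a minimal sketch; assumes `pandoc` and a LaTeX engine are installed):

```bash
# convert a Markdown manuscript straight to PDF
pandoc MyPaper.md -o MyPaper.pdf
```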
Links:
- [Jabref - Reference manager (opensource, all platforms)](https://www.jabref.org)
- [Zotero - Reference manager (all platforms)](https://www.zotero.org)
- [Herbert Voß - Latex Referenz der Umgebungen (Makros, Längen, Zähler)](http://www.lehmanns.de/pdf/latexreferenz.pdf)
Find help here with your computer-related problems.
Search with the top bar or go through the directories:
- [Python related Problems](./Python/)
- [SSH, VNC, VPN related Problems](./SSH-VPN-VNC/)
- [Editors](./Editors/) and [remote connection](./SSH-VPN-VNC/)
- [Data availability and location](./Data/)
- [Git related problems](https://gitlab.phaidra.org/imgw/computer-resources/-/issues)
New employees and students can start with the [Getting Started](./Getting%20Started.md) section.

If you would like to participate, please do so:
- Raise an [Issue on Gitlab](https://gitlab.phaidra.org/imgw/computer-resources/-/issues) :earth_africa:
- Give some feedback ([mail](mailto:it.img-wien@univie.ac.at), [mattermost](https://discuss.phaidra.org/imgw/channels/bugs)) :snowman:
- Write to [individual members of the department](https://img.univie.ac.at/en/about-us/staff/). :frog:
Locations:
- [VSC Training](https://vsc.ac.at/training)
- [VSC Trainings @ IMGW](https://gitlab.phaidra.org/imgw/trainings-course)
*Note: please take a look at the training courses @VSC; there might be a beginners' course about Linux, clusters or Python!* It is highly recommended to use these existing courses to get up to speed. Most materials are available as PDFs right away, but taking the course is still the preferred way.
## How to connect from Home or Abroad?
> part of EuroCC
>
![vsc](mkdocs/img/logo_vsc.png)
Links:
- [VSC](https://vsc.ac.at/home/)
- [VSC-Wiki](https://wiki.vsc.ac.at)
- [EuroCC - Austria](https://eurocc-austria.at)
We have the privilege of being part of the VSC, with private nodes at VSC-5 (since 2022), VSC-4 (since 2020) and VSC-3 (since 2014; retired in 2022).
Access is primarily via SSH:
```bash title='ssh to VSC'
$ ssh user@vsc5.vsc.ac.at
$ ssh user@vsc4.vsc.ac.at
$ ssh user@vsc3.vsc.ac.at   # retired, does not work anymore
```
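Optionally, a host alias in your local `~/.ssh/config` saves some typing (a sketch; `vsc5` is a hypothetical alias, replace `user` with your VSC username):

```txt title='~/.ssh/config (sketch)'
Host vsc5
    HostName vsc5.vsc.ac.at
    User user
```

Afterwards, `ssh vsc5` is enough.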
Please follow the connection instructions on the [wiki](https://wiki.vsc.ac.at); the procedure is similar on all other servers (e.g. [SRVX1](Servers/SRVX1.md)).
We have private nodes at our disposal, and in order for you to use these you need ...
If you want, you can use some shared shell scripts that provide information about the VSC system for users.
```bash title='Load IMGW environment settings'
# run the install script, that just appends to your PATH variable.
/gpfs/data/fs71386/imgw/install_imgw.sh
```
Please find the following commands available:
Please find a shared folder in `/gpfs/data/fs71386/imgw/shared` and add data there that needs to be used by multiple people. Please make sure that things are removed again as soon as possible. Thanks.
## Node Information VSC-5
```txt title='VSC-5 Compute Node'
CPU model: AMD EPYC 7713 64-Core Processor
1 CPU, 64 physical cores per CPU, total 128 logical CPU units
```
We have access to 11 private nodes of that kind, as well as to 1 GPU node with Nvidia A100 accelerators. Find the partition information with:
```txt title='VSC-5 Quality of Service'
$ sqos
            qos_name   total   used   free     walltime  priority  partitions
=========================================================================
         p71386_0512      11      0     11  10-00:00:00    100000  zen3_0512
     p71386_a100dual       0      0      0  10-00:00:00    100000  zen3_0512_a100x2
```
## Storage on VSC-5
The HOME and DATA partitions are the same as on [VSC-4](#storage-on-vsc-4).
## Node Information VSC-4
```txt title='VSC-4 Compute Node'
CPU model: Intel(R) Xeon(R) Platinum 8174 CPU @ 3.10GHz
2 CPU, 24 physical cores per CPU, total 96 logical CPU units
```
We have access to 5 private nodes of that kind. We also have access to the JupyterHub on VSC. Check with:
```txt title='VSC-4 Quality of Service'
$ sqos
qos_name total used free walltime priority partitions
=========================================================================
```

All quotas are **shared between all** IMGW/Project users:
Check quotas by running the following commands yourself (including your PROJECTID), or use the `imgw-quota` command from the [imgw shell extensions](#imgw-customizations-in-the-shell):
```bash title='Check VSC-4 IMGW quotas'
$ mmlsquota --block-size auto -j data_fs71386 data
Block Limits | File Limits
Filesystem type blocks quota limit in_doubt grace | files quota limit in_doubt grace Remarks
$ mmlsquota --block-size auto -j home_fs71386 home
Block Limits | File Limits
Filesystem type blocks quota limit in_doubt grace | files quota limit in_doubt grace Remarks
home FILESET 62.17G 100G 100G 207.8M none | 631852 1000000 1000000 287 none vsc-storage.vsc4.opa
```
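With the [imgw shell extensions](#imgw-customizations-in-the-shell) loaded, the same information is available in one step (a sketch):

```bash
# wraps the mmlsquota calls shown above
imgw-quota
```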
## Other Storage
These datasets can be found directly via `/eodc/products/`.
We are given a private data storage location (`/eodc/private/uniwien`), where we can store up to 22 TB on VSC-4. However, that might change in the future.
## Run time limits
The private queues allow up to 10 days of run time, the normal queues 3 days, and the devel queue only 10 minutes (for testing), as the listings below show.

VSC-5 queues and limits:

```bash title='VSC-5 Queues'
$ sacctmgr show qos format=name%20s,priority,grpnodes,maxwall,description%40s
                Name   Priority GrpNodes     MaxWall                                    Descr
-------------------- ---------- -------- ----------- ----------------------------------------
              normal          0           1-00:00:00 Normal QOS default
           idle_0512          1           1-00:00:00 vsc-5 idle nodes
           idle_1024          1           1-00:00:00 vsc5 idle nodes
           idle_2048          1           1-00:00:00 vsc5 idle nodes
     zen2_0256_a40x2       2000           3-00:00:00 24 x a40 nodes with 32 cores each
zen2_0256_a40x2_tra+    1000000           1-00:00:00 qos for training on a40 gpu nodes
    zen3_0512_a100x2       1000           3-00:00:00 public qos for a100 gpu nodes
zen3_0512_a100x2_tr+    1000000           1-00:00:00 qos for training on a100 gpu nodes
           zen3_0512       1000           3-00:00:00 vsc-5 regular cpu nodes with 512 gb of +
     zen3_0512_devel    5000000             00:10:00 fast short qos for dev jobs
           zen3_1024       1000           3-00:00:00 vsc-5 regular cpu nodes with 1024 gb of+
           zen3_2048       1000           3-00:00:00 vsc-5 regular cpu nodes with 2048 gb of+
```
VSC-4 queues and limits:

```bash title='VSC-4 Queues'
$ sacctmgr show qos format=name%20s,priority,grpnodes,maxwall,description%40s
                Name   Priority GrpNodes     MaxWall                                    Descr
-------------------- ---------- -------- ----------- ----------------------------------------
           idle_0096          1           1-00:00:00 vsc-4 idle nodes
           idle_0384          1           1-00:00:00 vsc-4 idle nodes
           idle_0768          1           1-00:00:00 vsc-4 idle nodes
             jupyter       1000           3-00:00:00 nodes for jupyterhub on vsc4
                long       1000          10-00:00:00 long running jobs on vsc-4
         p71386_0384     100000          10-00:00:00 private nodes haimberger
    cascadelake_0384       2000           3-00:00:00 intel cascadelake nodes on vsc-4
            mem_0096       1000           3-00:00:00 vsc-4 regular nodes with 96 gb of memory
            mem_0384       1000           3-00:00:00 vsc-4 regular nodes with 384 gb of memo+
            mem_0768       1000           3-00:00:00 vsc-4 regular nodes with 768 gb of memo+
```
SLURM allows setting a run time limit below the default limit of the QOS. After the specified time has elapsed, the job is killed.
Acceptable time formats include `minutes`, `minutes:seconds`, `hours:minutes:seconds`, `days-hours`, `days-hours:minutes` and `days-hours:minutes:seconds`.
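For example, to stay below the QOS maximum (a sketch):

```bash
# inside a job file: request a 12 hour limit
#SBATCH --time=12:00:00
# or at submission time: request two days
sbatch --time=2-00:00:00 check.slrm
```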
- [VSC Wiki private Queue](https://wiki.vsc.ac.at/doku.php?id=doku:vsc3_queue)
### Example Job on VSC
We have to use the following keywords to make sure that the correct partitions are used:
- `--partition=mem_xxxx` (per email)
- `--qos=xxxxxx` (see below)
- `--account=xxxxxx` (see below)
The core hours will be charged to the specified account. If not specified, the default account will be used.
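To see which account and QOS your user can submit with, query the SLURM accounting database:

```bash title='Check your account and QOS'
sacctmgr show user `id -u` withassoc format=user,defaultaccount,account,qos%40s,defaultqos%20s
```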
Put this in the Job file (e.g. for the VSC-5 nodes):
```bash
#!/bin/bash
#
#SBATCH -J TEST_JOB
#SBATCH -N 2
#SBATCH --ntasks-per-node=48
#SBATCH --ntasks-per-core=1
#SBATCH --mail-type=BEGIN # first have to state the type of event to occur
#SBATCH --mail-user=<email@address.at> # and then your email address
#SBATCH --partition=zen3_0512
#SBATCH --qos=p71386_0512
#SBATCH --account=p71386
#SBATCH --time=<time>
# when srun is used, you need to set (Different from Jet):
<srun -l -N2 -n32 a.out >
# or
<mpirun -np 32 a.out>
```
* **--mail-user** sends an email to this address
```bash
sbatch check.slrm     # to submit the job
squeue -u `whoami`    # to check the status of own jobs
scancel JOBID         # for premature removal, where JOBID
                      # is obtained from the previous command
```
## Software
The VSC clusters use the same software system as Jet and have environment modules available to the user:
- [VSC Wiki Software](https://wiki.vsc.ac.at/doku.php?id=doku:software)
- VSC-4 has `miniconda3` modules for GNU and INTEL ;)
```bash
module avail          # lists the **available** Application-Software,
                      # Compilers, Parallel-Environment, and Libraries
module list           # shows currently loaded packages of your session
module unload <xyz>   # unload a particular package <xyz> from your session
module load <xyz>     # load a particular package <xyz> into your session
```
Loading, for example, the intel compiler suite will add variables to your environment.
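For example (a sketch; the exact module name and version on VSC may differ):

```bash
# load the intel compiler suite and inspect what changed in the environment
module load intel
env | grep -i intel | head
```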
For more on how to use environment modules, go to [Using Environment Modules](Misc/Environment-Modules.md).
### Import user-site packages
It is possible to install user site packages into your `.local/lib/python3.*` directory:
```bash
# installing a user site package
/gpfs/data/fs71386/imgw/run.sh pip install --user [package]
```
```python
import sys, site
sys.path.append(site.getusersitepackages())
# This will add the correct path.
```
Then you will be able to load all packages that are located in the user site.
## Containers
We can use complex software that is contained in [singularity](https://singularity.hpcng.org/) containers [(doc)](https://singularity.hpcng.org/user-docs/master/) and can be executed on VSC-4. Please consider using one of the following containers:
- `py3centos7anaconda3-2020-07-dev`
located in the `$DATA` directory of IMGW: `/gpfs/data/fs71386/imgw`
### How to use?
Currently there is only one container with a run script.
```bash
# The directory of the containers
# ...
/gpfs/data/fs71386/imgw/run.sh python analysis.py
```
### Understanding the container
In principle, a run script needs to do only 3 things:
1. load the module `singularity`
2. set `SINGULARITY_BIND` environment variable
3. execute the container with your arguments
Without the bind mount set, files outside the container are not accessible, e.g.:

```txt
ls: cannot access /gpfs/data/fs71386/mblasch: No such file or directory
```
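A minimal run script along these lines might look as follows (a sketch; the actual `run.sh` in the IMGW directory may differ):

```bash
#!/bin/bash
# 1. load the module
module load singularity
# 2. make the project storage visible inside the container
export SINGULARITY_BIND="/gpfs/data/fs71386"
# 3. execute the container with all given arguments
singularity exec /gpfs/data/fs71386/imgw/py3centos7anaconda3-2020-07-dev.sif "$@"
```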
### What is inside the container?
In principle, you can check what is inside by using:
```bash
module load singularity
singularity inspect py3centos7anaconda3-2020-07-dev.sif
author: M.Blaschek
dist: anaconda2020.07
glibc: 2.17
```

which shows something like this:

```txt
zstd 1.5.0 ha95c52a_0 conda-forge
```
## Debugging on VSC-4
Currently (6.2021) there is no development queue on VSC-4, and the support suggests the following:
```bash
# Request resources from slurm (-N 1, a full Node)
salloc -N 1 -p mem_0384 --qos p71386_0384 --no-shell
# Once the node is assigned / job is running
# Check with
squeue -u $USER
# connect to the Node with ssh
ssh [Node]
# test and debug the model there.
```
```yaml title='MkDocs configuration'
theme:
  # ...
  palette:
    scheme: uniwien
  features:
    - navigation.indexes
    - navigation.top
  logo: mkdocs/img/favicon.ico
  favicon: mkdocs/img/favicon.ico

# ...
plugins:
  - same-dir
  - search
  - mkdocs-jupyter:
      include_source: true
  - git-revision-date-localized:
      enable_creation_date: true
  - awesome-pages
```