
- Vienna Scientific Cluster
- IMGW customizations in the shell
- Node Information VSC-5
- Storage on VSC-5
- Node Information VSC-4
- Storage on VSC-4
- Other Storage
- Node Information VSC-3
- Storage on VSC-3
- Run time limits
- Example Job
- Example Job on VSC-3
- Example Job on VSC-4
- Software
- Containers
- How to use?
- Understanding the container
- What is inside the container?
- Import user-site packages
- Debugging on VSC-4
Vienna Scientific Cluster
- High Performance Computing available to staff
- Austrian HPC effort, part of EuroCC
We have the privilege to be part of the VSC and have private nodes at VSC-5 (since 2022), VSC-4 (since 2020) and VSC-3 (since 2014), which was retired in 2022.
Access is primarily via SSH:
$ ssh user@vsc5.vsc.ac.at
$ ssh user@vsc4.vsc.ac.at
$ ssh user@vsc3.vsc.ac.at (retired, does not work anymore)
Please follow the connection instructions on the wiki, which are similar to those for all other servers (e.g. SRVX1). The VSC is only reachable from within UNINET (use the VPN when outside). Authentication requires a mobile phone.
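Optionally, you can add entries to your ~/.ssh/config so that a short alias is enough; this is just a convenience sketch, and the username is a placeholder:

```
# ~/.ssh/config -- optional convenience entries (username is a placeholder)
Host vsc5
    HostName vsc5.vsc.ac.at
    User <your-vsc-username>

Host vsc4
    HostName vsc4.vsc.ac.at
    User <your-vsc-username>
```

Afterwards `ssh vsc5` is sufficient.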
We have private nodes at our disposal; in order to use them you need to specify the correct account in the jobs you submit to the queueing system (SLURM). The correct information is given to you in the registration email.
IMGW customizations in the shell
If you want, you can use some shared shell scripts that provide information about the VSC system for users.
# run the install script, that just appends to your PATH variable.
/gpfs/data/fs71386/imgw/install_imgw.sh
Please find the following commands available:
- `imgw-quota` shows the current quota on VSC for both HOME and DATA
- `imgw-container` singularity/apptainer container run script, see below
- `imgw-transfersh` Transfer-sh service on srvx1, easily share small files
- `imgw-cpuinfo` show CPU information
Please find a shared folder in /gpfs/data/fs71386/imgw/shared and add data there that needs to be used by multiple people. Please make sure that things are removed again as soon as possible. Thanks.
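For example (the file name is hypothetical):

```
# share a file with colleagues
cp myfile.nc /gpfs/data/fs71386/imgw/shared/
# remove it again when it is no longer needed
rm /gpfs/data/fs71386/imgw/shared/myfile.nc
```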
Node Information VSC-5
CPU model: AMD EPYC 7713 64-Core Processor
1 CPU, 64 physical cores per CPU, total 128 logical CPU units
512 GB Memory
We have access to 11 private Nodes of that kind. We also have access to 1 GPU node with Nvidia A100 accelerators. Find the partition information with:
$ sqos
qos_name total used free walltime priority partitions
=========================================================================
p71386_0512 11 0 11 10-00:00:00 100000 zen3_0512
p71386_a100dual 1 0 0 10-00:00:00 100000 gpu_a100_dual
Storage on VSC-5
The HOME and DATA partitions are the same as on VSC-4.
Node Information VSC-4
CPU model: Intel(R) Xeon(R) Platinum 8174 CPU @ 3.10GHz
2 CPU, 24 physical cores per CPU, total 96 logical CPU units
378 GB Memory
We have access to 5 private Nodes of that kind. We also have access to the jupyterhub on VSC. Check with
$ sqos
qos_name total used free walltime priority partitions
=========================================================================
jupyter 20 1 19 3-00:00:00 1000 jupyter
p71386_0384 5 0 5 10-00:00:00 100000 mem_0384
Storage on VSC-4
All quotas are shared between all IMGW/Project users:
- `$HOME` (up to 100 GB, all home directories)
- `$DATA` (up to 10 TB, backed up)
- `$BINFL` (up to 1 TB, fast scratch), will be retired
- `$BINFS` (up to 2 GB, SSD fast), will be retired
- `$TMPDIR` (50% of main memory, deleted after the job finishes)
- `/local` (compute nodes, 480 GB SSD, deleted after the job finishes)
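As a rough sketch of how the job-local scratch can be used inside a job script (program and file names are hypothetical):

```
# copy the input to the fast node-local scratch, run there, copy the results back
cp $DATA/input.nc $TMPDIR/
cd $TMPDIR
./my_model input.nc output.nc
cp output.nc $DATA/
# $TMPDIR (and /local) are deleted automatically when the job finishes
```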
Check the quotas by running the following commands yourself (insert your PROJECTID), or use the `imgw-quota` command from the IMGW shell extensions:
$ mmlsquota --block-size auto -j data_fs71386 data
Block Limits | File Limits
Filesystem type blocks quota limit in_doubt grace | files quota limit in_doubt grace Remarks
data FILESET 4.027T 9.766T 9.766T 482.4M none | 176664 1000000 1000000 65 none vsc-storage.vsc4.opa
$ mmlsquota --block-size auto -j home_fs71386 home
Block Limits | File Limits
Filesystem type blocks quota limit in_doubt grace | files quota limit in_doubt grace Remarks
home FILESET 62.17G 100G 100G 207.8M none | 631852 1000000 1000000 287 none vsc-storage.vsc4.opa
Other Storage
We have access to the Earth Observation Data Center EODC, where one can find primarily the following data sets:
- Sentinel-1, 2, 3
- Wegener Center GPS RO
These datasets can be found directly under /eodc/products/. We are also given a private data storage location (/eodc/private/uniwien), where we can store up to 22 TB on VSC-4. However, that might change in the future.
Node Information VSC-3
CPU model: Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
2 CPU, 8 physical cores per CPU, total 32 logical CPU units
126 GB Memory
Storage on VSC-3
All quotas are shared between users:
- `$HOME` (up to 5 TB, all home directories)
- `$GLOBAL` (up to 12 TB)
- `$BINFL` (up to 10 GB, fast scratch)
- `$BINFS` (up to 2 GB, SSD fast)
Check quotas:
$ beegfs-ctl --getquota --cfgFile=/etc/beegfs/global3.d/beegfs-client.conf --gid 70653
user/group || size || chunk files
name | id || used | hard || used | hard
--------------|------||------------|------------||---------|---------
p70653| 70653|| 5.62 TiB| 12.00 TiB|| 175886| 1000000
Run time limits
On VSC-3 the private queue has a maximum runtime of 10 days. The normal queues allow 3 days, and the devel queue only 10 minutes (for testing).
$ sacctmgr show qos format=name%20s,priority,grpnodes,maxwall,description%40s
                Name   Priority GrpNodes     MaxWall                                    Descr
-------------------- ---------- -------- ----------- ----------------------------------------
         p70653_0128     100000       10 10-00:00:00           priv nodes univie meteorologie
         normal_0064       2000     1286  3-00:00:00                                 all user
         normal_0256       2000        6  3-00:00:00                                 all user
         normal_0128       2000       11  3-00:00:00                                 all user
          devel_0128    5000000       10    00:10:00         for developing and testing codes
On VSC-4 accordingly:
$ sacctmgr show qos format=name%20s,priority,grpnodes,maxwall,description%40s
                Name   Priority GrpNodes     MaxWall                                    Descr
-------------------- ---------- -------- ----------- ----------------------------------------
         p71386_0384     100000          10-00:00:00                private nodes haimberger
                long       1000          10-00:00:00               long running jobs on vsc-4
           fast_vsc4    1000000           3-00:00:00                     high priority access
           idle_0096          1           1-00:00:00                         vsc-4 idle nodes
           idle_0384          1           1-00:00:00                         vsc-4 idle nodes
           idle_0768          1           1-00:00:00                         vsc-4 idle nodes
            mem_0096       1000           3-00:00:00 vsc-4 regular nodes with 96 gb of memory
            mem_0384       1000           3-00:00:00 vsc-4 regular nodes with 384 gb of memo+
            mem_0768       1000           3-00:00:00 vsc-4 regular nodes with 768 gb of memo+
SLURM allows for setting a run time limit below the default QOS's run time limit. After the specified time is elapsed, the job is killed:
#SBATCH --time=<time>
Acceptable time formats include `minutes`, `minutes:seconds`, `hours:minutes:seconds`, `days-hours`, `days-hours:minutes` and `days-hours:minutes:seconds`.
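For example, to limit a job to 2 days and 12 hours:

```
#SBATCH --time=2-12:00:00
```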
Example Job
Example Job on VSC-3
We have 16 physical cores per node. In order to fill a node, specify:
- `--partition=mem_xxxx` (per email)
- `--qos=xxxxxx` (see below)
- `--account=xxxxxx` (see below)
The core hours will be charged to the specified account. If not specified, the default account will be used.
On VSC-3 our account is called:
$ sacctmgr show user `id -u` withassoc format=user,defaultaccount,account,qos%40s,defaultqos%20s
      User   Def Acct    Account                                      QOS              Def QOS
---------- ---------- ---------- ---------------------------------------- --------------------
     71633     p70653     p70653                   devel_0128,p70653_0128          p70653_0128
Put this in the Job file:
#!/bin/bash
#
#SBATCH -J TEST_JOB
#SBATCH -N 2
#SBATCH --ntasks-per-node=16
#SBATCH --ntasks-per-core=1
#SBATCH --mail-type=BEGIN # first have to state the type of event to occur
#SBATCH --mail-user=<email@address.at> # and then your email address
#SBATCH --partition=mem_xxxx
#SBATCH --qos=p70653_0128
#SBATCH --account=p70653
#SBATCH --time=<time>
# when srun is used, you need to set (different from Jet):
srun -l -N2 -n32 a.out
# or
mpirun -np 32 a.out
- -J job name
- -N number of nodes requested (16 cores per node available)
- -n, --ntasks= specifies the number of tasks to run,
- --ntasks-per-node number of processes run in parallel on a single node
- --ntasks-per-core number of tasks a single core should work on
- srun is an alternative command to mpirun. It provides direct access to SLURM inherent variables and settings.
- -l adds task-specific labels to the beginning of all output lines.
- --mail-type sends an email at specific events. The SLURM documentation lists the following valid mail-type values: "BEGIN, END, FAIL, REQUEUE, ALL (equivalent to BEGIN, END, FAIL and REQUEUE), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent of time limit), TIME_LIMIT_80 (reached 80 percent of time limit), and TIME_LIMIT_50 (reached 50 percent of time limit). Multiple type values may be specified in a comma separated list." (cited from the SLURM documentation)
- --mail-user sends an email to this address
$ sbatch check.slrm # to submit the job
$ squeue -u `whoami` # to check the status of own jobs
$ scancel JOBID # for premature removal, where JOBID
# is obtained from the previous command
Example Job on VSC-4
We have 48 physical cores per node. In order to fill a node, specify:
- `--partition=mem_xxxx` (per email)
- `--qos=xxxxxx` (see below)
- `--account=xxxxxx` (see below)
The core hours will be charged to the specified account. If not specified, the default account will be used.
On VSC-4 our account is called:
$ sacctmgr show user `id -u` withassoc format=user,defaultaccount,account,qos%40s,defaultqos%20s
      User   Def Acct    Account                                      QOS              Def QOS
---------- ---------- ---------- ---------------------------------------- --------------------
     74101     p71386     p71386                              p71386_0384          p71386_0384
Put this in the Job file:
#!/bin/bash
#
#SBATCH -J TEST_JOB
#SBATCH -N 2
#SBATCH --ntasks-per-node=48
#SBATCH --ntasks-per-core=1
#SBATCH --mail-type=BEGIN # first have to state the type of event to occur
#SBATCH --mail-user=<email@address.at> # and then your email address
#SBATCH --partition=mem_xxxx
#SBATCH --qos=p71386_0384
#SBATCH --account=p71386
#SBATCH --time=<time>
# when srun is used, you need to set (different from Jet):
srun -l -N2 -n96 a.out
# or
mpirun -np 96 a.out
Submit the job:
$ sbatch check.slrm # to submit the job
$ squeue -u `whoami` # to check the status of own jobs
$ scancel JOBID # for premature removal, where JOBID
# is obtained from the previous command
Software
The VSC uses the same software system as Jet and has environment modules available to the user:
- VSC Wiki Software
- VSC-3 has an `anaconda3` module
- VSC-4 has `miniconda3` modules for GNU and INTEL ;)
$ module avail # lists the **available** Application-Software,
# Compilers, Parallel-Environment, and Libraries
$ module list # shows currently loaded package of your session
$ module unload <xyz> # unload a particular package <xyz> from your session
$ module load <xyz> # load a particular package <xyz> into your session
Loading a module (e.g. the Intel compiler suite) adds the relevant variables to your environment. Please do not forget to add the module load statements to your jobs. For more information on how to use environment modules, go to Using Environment Modules.
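As a sketch, a job script could load its software environment like this; the module names are only examples, check `module avail` for the exact names and versions on the respective cluster:

```
# load the required software before running the model
module purge              # start from a clean environment
module load intel         # e.g. the Intel compiler suite
module load miniconda3    # e.g. a Python environment on VSC-4
```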
Containers
We can use complex software that is contained in singularity containers (doc) and can be executed on VSC-4. Please consider using one of the following containers:
- py3centos7anaconda3-2020-07-dev, located in the $DATA directory of IMGW: /gpfs/data/fs71386/imgw
How to use?
Currently there is only one container with a run script.
# The directory of the containers
/gpfs/data/fs71386/imgw/run.sh [arguments]
# executing the python inside
/gpfs/data/fs71386/imgw/run.sh python
# or ipython
/gpfs/data/fs71386/imgw/run.sh ipython
# with other arguments
/gpfs/data/fs71386/imgw/run.sh python analysis.py
Understanding the container
In principle, a run script needs to do only 3 things:
- load the `singularity` module
- set the `SINGULARITY_BIND` environment variable
- execute the container with your arguments
It is necessary to set SINGULARITY_BIND because the $HOME and $DATA (or $BINFS) paths are not standard Linux paths; the Linux inside the container does not know about them, so files there cannot be accessed from within the container. If you have problems accessing other paths in the future, adding them to SINGULARITY_BIND might fix the issue.
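A minimal run script along these lines could look like the sketch below; the bind paths are assumptions based on the IMGW project paths above, and the actual run.sh may differ:

```
#!/bin/bash
# load the singularity module
module load singularity
# make the non-standard GPFS paths visible inside the container (assumed paths)
export SINGULARITY_BIND="/gpfs/data/fs71386,/gpfs/home/fs71386"
# run the container, passing on all arguments
exec /gpfs/data/fs71386/imgw/py3centos7anaconda3-2020-07-dev.sif "$@"
```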
In principle, one can execute the container like this:
# check if the module is loaded
module load singularity
# just run the container, initiating the built-in runscript (running ipython):
[mblasch@l44 imgw]$ /gpfs/data/fs71386/imgw/py3centos7anaconda3-2020-07-dev.sif
Python 3.8.3 (default, Jul 2 2020, 16:21:59)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.16.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]:
In [2]: %env DATA
Out[2]: '/gpfs/data/fs71386/mblasch'
In [3]: ls /gpfs/data/fs71386/mblasch
ls: cannot access /gpfs/data/fs71386/mblasch: No such file or directory
# Note that the path is not available because we did not set SINGULARITY_BIND
What is inside the container?
In principle, you can check what is inside by using:
$ module load singularity
$ singularity inspect py3centos7anaconda3-2020-07-dev.sif
author: M.Blaschek
dist: anaconda2020.07
glibc: 2.17
org.label-schema.build-arch: amd64
org.label-schema.build-date: Thursday_7_October_2021_14:37:23_CEST
org.label-schema.schema-version: 1.0
org.label-schema.usage.singularity.deffile.bootstrap: docker
org.label-schema.usage.singularity.deffile.from: centos:7
org.label-schema.usage.singularity.deffile.stage: final
org.label-schema.usage.singularity.version: 3.8.1-1.el8
os: centos7
python: 3.8
which shows you some information about the container, e.g. that CentOS 7 is installed, with Python 3.8 and glibc 2.17.
You can also check the applications inside:
# List all executables inside the container
py3centos7anaconda3-2020-07-dev.sif ls /opt/view/bin
# or using conda for the environment
py3centos7anaconda3-2020-07-dev.sif conda info
# for the package list
py3centos7anaconda3-2020-07-dev.sif conda list
which shows something like this
??? note "anaconda environment list"
```
# packages in environment at /opt/software/linux-centos7-haswell/gcc-4.8.5/anaconda3-2020.07-xl53rxqkccbjdufemaupvtuhs3wsj5d2:
#
# Name Version Build Channel
_anaconda_depends 2020.07 py38_0
_ipyw_jlab_nb_ext_conf 0.1.0 py38_0
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 1_gnu conda-forge
alabaster 0.7.12 py_0
alsa-lib 1.2.3 h516909a_0 conda-forge
anaconda custom py38_1
anaconda-client 1.7.2 py38_0
anaconda-navigator 1.9.12 py38_0
anaconda-project 0.8.4 py_0
appdirs 1.4.4 pyh9f0ad1d_0 conda-forge
argh 0.26.2 py38_0
asciitree 0.3.3 py_2 conda-forge
asn1crypto 1.3.0 py38_0
astroid 2.4.2 py38_0
astropy 4.0.1.post1 py38h7b6447c_1
atomicwrites 1.4.0 py_0
attrs 19.3.0 py_0
autopep8 1.5.3 py_0
babel 2.8.0 py_0
backcall 0.2.0 py_0
backports 1.0 py_2
backports.functools_lru_cache 1.6.1 py_0
backports.shutil_get_terminal_size 1.0.0 py38_2
backports.tempfile 1.0 py_1
backports.weakref 1.0.post1 py_1
beautifulsoup4 4.9.1 py38_0
bitarray 1.4.0 py38h7b6447c_0
bkcharts 0.2 py38_0
blas 1.0 mkl
bleach 3.1.5 py_0
blosc 1.21.0 h9c3ff4c_0 conda-forge
bokeh 2.1.1 py38_0
boto 2.49.0 py38_0
bottleneck 1.3.2 py38heb32a55_1
brotlipy 0.7.0 py38h7b6447c_1000
bzip2 1.0.8 h7b6447c_0
c-ares 1.17.2 h7f98852_0 conda-forge
ca-certificates 2021.7.5 h06a4308_1
cached-property 1.5.2 hd8ed1ab_1 conda-forge
cached_property 1.5.2 pyha770c72_1 conda-forge
cairo 1.16.0 h6cf1ce9_1008 conda-forge
cartopy 0.20.0 py38hf9a4893_2 conda-forge
cdo 1.9.10 h25e7f74_6 conda-forge
certifi 2021.5.30 py38h578d9bd_0 conda-forge
cffi 1.14.0 py38he30daa8_1
cftime 1.5.1 py38h6c62de6_0 conda-forge
chardet 3.0.4 py38_1003
click 7.1.2 py_0
cloudpickle 1.5.0 py_0
clyent 1.2.2 py38_1
colorama 0.4.3 py_0
conda 4.10.3 py38h578d9bd_2 conda-forge
conda-build 3.18.11 py38_0
conda-env 2.6.0 1
conda-package-handling 1.6.1 py38h7b6447c_0
conda-verify 3.4.2 py_1
contextlib2 0.6.0.post1 py_0
cryptography 2.9.2 py38h1ba5d50_0
curl 7.79.1 h2574ce0_1 conda-forge
cycler 0.10.0 py38_0
cython 0.29.21 py38he6710b0_0
cytoolz 0.10.1 py38h7b6447c_0
dask 2.20.0 py_0
dask-core 2.20.0 py_0
dbus 1.13.16 hb2f20db_0
decorator 4.4.2 py_0
defusedxml 0.6.0 py_0
diff-match-patch 20200713 py_0
distributed 2.20.0 py38_0
docutils 0.16 py38_1
eccodes 2.23.0 h11d1a29_2 conda-forge
entrypoints 0.3 py38_0
et_xmlfile 1.0.1 py_1001
expat 2.4.1 h9c3ff4c_0 conda-forge
fastcache 1.1.0 py38h7b6447c_0
fasteners 0.16.3 pyhd3eb1b0_0
fftw 3.3.10 nompi_hcdd671c_101 conda-forge
filelock 3.0.12 py_0
flake8 3.8.3 py_0
flask 1.1.2 py_0
font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge
font-ttf-inconsolata 3.000 h77eed37_0 conda-forge
font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge
font-ttf-ubuntu 0.83 hab24e00_0 conda-forge
fontconfig 2.13.1 hba837de_1005 conda-forge
fonts-conda-ecosystem 1 0 conda-forge
fonts-conda-forge 1 0 conda-forge
freeglut 3.2.1 h9c3ff4c_2 conda-forge
freetype 2.10.4 h0708190_1 conda-forge
fribidi 1.0.10 h516909a_0 conda-forge
fsspec 0.7.4 py_0
future 0.18.2 py38_1
geos 3.9.1 h9c3ff4c_2 conda-forge
get_terminal_size 1.0.0 haa9412d_0
gettext 0.21.0 hf68c758_0
gevent 20.6.2 py38h7b6447c_0
glib 2.68.4 h9c3ff4c_0 conda-forge
glib-tools 2.68.4 h9c3ff4c_0 conda-forge
glob2 0.7 py_0
gmp 6.1.2 h6c8ec71_1
gmpy2 2.0.8 py38hd5f6e3b_3
graphite2 1.3.14 h23475e2_0
greenlet 0.4.16 py38h7b6447c_0
gst-plugins-base 1.18.5 hf529b03_0 conda-forge
gstreamer 1.18.5 h76c114f_0 conda-forge
h5netcdf 0.11.0 pyhd8ed1ab_0 conda-forge
h5py 3.4.0 nompi_py38hfbb2109_101 conda-forge
harfbuzz 3.0.0 h83ec7ef_1 conda-forge
hdf4 4.2.15 h10796ff_3 conda-forge
hdf5 1.12.1 nompi_h2750804_101 conda-forge
heapdict 1.0.1 py_0
html5lib 1.1 py_0
icu 68.1 h58526e2_0 conda-forge
idna 2.10 py_0
imageio 2.9.0 py_0
imagesize 1.2.0 py_0
importlib-metadata 1.7.0 py38_0
importlib_metadata 1.7.0 0
importlib_resources 5.2.2 pyhd8ed1ab_0 conda-forge
intel-openmp 2020.1 217
intervaltree 3.0.2 py_1
ipykernel 5.3.2 py38h5ca1d4c_0
ipython 7.16.1 py38h5ca1d4c_0
ipython_genutils 0.2.0 py38_0
ipywidgets 7.5.1 py_0
isort 4.3.21 py38_0
itsdangerous 1.1.0 py_0
jasper 2.0.14 ha77e612_2 conda-forge
jbig 2.1 hdba287a_0
jdcal 1.4.1 py_0
jedi 0.17.1 py38_0
jeepney 0.4.3 py_0
jinja2 2.11.2 py_0
joblib 0.16.0 py_0
jpeg 9d h516909a_0 conda-forge
json5 0.9.5 py_0
jsonschema 3.2.0 py38_0
jupyter 1.0.0 py38_7
jupyter_client 6.1.6 py_0
jupyter_console 6.1.0 py_0
jupyter_core 4.6.3 py38_0
jupyterlab 2.1.5 py_0
jupyterlab_server 1.2.0 py_0
keyring 21.2.1 py38_0
kiwisolver 1.2.0 py38hfd86e86_0
krb5 1.19.2 hcc1bbae_0 conda-forge
lazy-object-proxy 1.4.3 py38h7b6447c_0
lcms2 2.11 h396b838_0
ld_impl_linux-64 2.33.1 h53a641e_7
lerc 2.2.1 h9c3ff4c_0 conda-forge
libaec 1.0.6 h9c3ff4c_0 conda-forge
libarchive 3.5.2 hccf745f_1 conda-forge
libblas 3.9.0 1_h86c2bf4_netlib conda-forge
libcblas 3.9.0 5_h92ddd45_netlib conda-forge
libclang 11.1.0 default_ha53f305_1 conda-forge
libcurl 7.79.1 h2574ce0_1 conda-forge
libdeflate 1.7 h7f98852_5 conda-forge
libedit 3.1.20191231 h14c3975_1
libev 4.33 h516909a_1 conda-forge
libevent 2.1.10 h9b69904_4 conda-forge
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1d223b6_9 conda-forge
libgfortran-ng 11.2.0 h69a702a_9 conda-forge
libgfortran5 11.2.0 h5c6108e_9 conda-forge
libglib 2.68.4 h3e27bee_0 conda-forge
libglu 9.0.0 he1b5a44_1001 conda-forge
libgomp 11.2.0 h1d223b6_9 conda-forge
libiconv 1.16 h516909a_0 conda-forge
liblapack 3.9.0 5_h92ddd45_netlib conda-forge
liblief 0.10.1 he6710b0_0
libllvm11 11.1.0 hf817b99_2 conda-forge
libllvm9 9.0.1 h4a3c616_1
libnetcdf 4.8.1 nompi_hb3fd0d9_101 conda-forge
libnghttp2 1.43.0 h812cca2_1 conda-forge
libogg 1.3.5 h27cfd23_1
libopus 1.3.1 h7f98852_1 conda-forge
libpng 1.6.37 hbc83047_0
libpq 13.3 hd57d9b9_0 conda-forge
libsodium 1.0.18 h7b6447c_0
libsolv 0.7.16 h8b12597_0 conda-forge
libspatialindex 1.9.3 he6710b0_0
libssh2 1.10.0 ha56f1ee_2 conda-forge
libstdcxx-ng 11.2.0 he4da1e4_9 conda-forge
libtiff 4.3.0 hf544144_1 conda-forge
libtool 2.4.6 h7b6447c_5
libuuid 2.32.1 h14c3975_1000 conda-forge
libvorbis 1.3.7 he1b5a44_0 conda-forge
libwebp-base 1.2.1 h7f98852_0 conda-forge
libxcb 1.14 h7b6447c_0
libxkbcommon 1.0.3 he3ba5ed_0 conda-forge
libxml2 2.9.12 h72842e0_0 conda-forge
libxslt 1.1.33 h15afd5d_2 conda-forge
libzip 1.8.0 h4de3113_1 conda-forge
libzlib 1.2.11 h36c2ea0_1013 conda-forge
llvmlite 0.33.0 py38hc6ec683_1
locket 0.2.0 py38_1
lxml 4.6.3 py38hf1fe3a4_0 conda-forge
lz4-c 1.9.3 h9c3ff4c_1 conda-forge
lzo 2.10 h7b6447c_2
magics 4.9.1 hb6e17df_1 conda-forge
magics-python 1.5.6 pyhd8ed1ab_0 conda-forge
mamba 0.5.1 py38h6fd9b40_0 conda-forge
markupsafe 1.1.1 py38h7b6447c_0
matplotlib 3.4.3 py38h578d9bd_1 conda-forge
matplotlib-base 3.4.3 py38hf4fb855_0 conda-forge
mccabe 0.6.1 py38_1
metpy 1.1.0 pyhd8ed1ab_0 conda-forge
mistune 0.8.4 py38h7b6447c_1000
mkl 2020.1 217
mkl-service 2.3.0 py38he904b0f_0
mkl_fft 1.1.0 py38h23d657b_0
mkl_random 1.1.1 py38h0573a6f_0
mock 4.0.2 py_0
more-itertools 8.4.0 py_0
mpc 1.1.0 h10f8cd9_1
mpfr 4.0.2 hb69a4c5_1
mpmath 1.1.0 py38_0
msgpack-python 1.0.0 py38hfd86e86_1
multipledispatch 0.6.0 py38_0
mysql-common 8.0.25 ha770c72_2 conda-forge
mysql-libs 8.0.25 hfa10184_2 conda-forge
navigator-updater 0.2.1 py38_0
nbconvert 5.6.1 py38_0
nbformat 5.0.7 py_0
ncurses 6.2 he6710b0_1
netcdf4 1.5.7 nompi_py38h2823cc8_103 conda-forge
networkx 2.4 py_1
nltk 3.5 py_0
nose 1.3.7 py38_2
notebook 6.0.3 py38_0
nspr 4.30 h9c3ff4c_0 conda-forge
nss 3.69 hb5efdd6_1 conda-forge
numba 0.50.1 py38h0573a6f_1
numcodecs 0.9.1 py38h709712a_0 conda-forge
numexpr 2.7.1 py38h423224d_0
numpy 1.19.2 py38h54aff64_0
numpy-base 1.19.2 py38hfa32c7d_0
numpydoc 1.1.0 py_0
olefile 0.46 py_0
openpyxl 3.0.4 py_0
openssl 1.1.1l h7f98852_0 conda-forge
ossuuid 1.6.2 hf484d3e_1000 conda-forge
packaging 20.4 py_0
pandas 1.0.5 py38h0573a6f_0
pandoc 2.10 0
pandocfilters 1.4.2 py38_1
pango 1.48.10 h54213e6_2 conda-forge
parso 0.7.0 py_0
partd 1.1.0 py_0
patchelf 0.11 he6710b0_0
path 13.1.0 py38_0
path.py 12.4.0 0
pathlib2 2.3.5 py38_0
pathtools 0.1.2 py_1
patsy 0.5.1 py38_0
pcre 8.45 h9c3ff4c_0 conda-forge
pep8 1.7.1 py38_0
pexpect 4.8.0 py38_0
pickleshare 0.7.5 py38_1000
pillow 7.2.0 py38hb39fc2d_0
pint 0.17 pyhd8ed1ab_1 conda-forge
pip 20.1.1 py38_1
pixman 0.40.0 h7b6447c_0
pkginfo 1.5.0.1 py38_0
pluggy 0.13.1 py38_0
ply 3.11 py38_0
pooch 1.5.1 pyhd8ed1ab_0 conda-forge
proj 8.1.1 h277dcde_2 conda-forge
prometheus_client 0.8.0 py_0
prompt-toolkit 3.0.5 py_0
prompt_toolkit 3.0.5 0
psutil 5.7.0 py38h7b6447c_0
ptyprocess 0.6.0 py38_0
py 1.9.0 py_0
py-lief 0.10.1 py38h403a769_0
pycodestyle 2.6.0 py_0
pycosat 0.6.3 py38h7b6447c_1
pycparser 2.20 py_2
pycurl 7.43.0.5 py38h1ba5d50_0
pydocstyle 5.0.2 py_0
pyflakes 2.2.0 py_0
pygments 2.6.1 py_0
pylint 2.5.3 py38_0
pyodbc 4.0.30 py38he6710b0_0
pyopenssl 19.1.0 py_1
pyparsing 2.4.7 py_0
pyproj 3.2.1 py38h80797bf_2 conda-forge
pyqt 5.12.3 py38h578d9bd_7 conda-forge
pyqt-impl 5.12.3 py38h7400c14_7 conda-forge
pyqt5-sip 4.19.18 py38h709712a_7 conda-forge
pyqtchart 5.12 py38h7400c14_7 conda-forge
pyqtwebengine 5.12.1 py38h7400c14_7 conda-forge
pyrsistent 0.16.0 py38h7b6447c_0
pyshp 2.1.3 pyh44b312d_0 conda-forge
pysocks 1.7.1 py38_0
pytables 3.6.1 py38hdb04529_4 conda-forge
pytest 5.4.3 py38_0
python 3.8.3 hcff3b4d_2
python-dateutil 2.8.1 py_0
python-jsonrpc-server 0.3.4 py_1
python-language-server 0.34.1 py38_0
python-libarchive-c 2.9 py_0
python_abi 3.8 2_cp38 conda-forge
pytz 2020.1 py_0
pywavelets 1.1.1 py38h7b6447c_0
pyxdg 0.26 py_0
pyyaml 5.3.1 py38h7b6447c_1
pyzmq 19.0.1 py38he6710b0_1
qdarkstyle 2.8.1 py_0
qt 5.12.9 hda022c4_4 conda-forge
qtawesome 0.7.2 py_0
qtconsole 4.7.5 py_0
qtpy 1.9.0 py_0
readline 8.1 h46c0cb4_0 conda-forge
regex 2020.6.8 py38h7b6447c_0
requests 2.24.0 py_0
ripgrep 11.0.2 he32d670_0
rope 0.17.0 py_0
rtree 0.9.4 py38_1
ruamel_yaml 0.15.87 py38h7b6447c_1
scikit-image 0.16.2 py38h0573a6f_0
scikit-learn 0.23.1 py38h423224d_0
scipy 1.7.1 py38h56a6a73_0 conda-forge
seaborn 0.10.1 py_0
secretstorage 3.1.2 py38_0
send2trash 1.5.0 py38_0
setuptools 49.2.0 py38_0
shapely 1.7.1 py38hb7fe4a8_5 conda-forge
simplegeneric 0.8.1 py38_2
simplejson 3.17.5 py38h497a2fe_0 conda-forge
singledispatch 3.4.0.3 py38_0
sip 4.19.13 py38he6710b0_0
six 1.15.0 py_0
snappy 1.1.8 he6710b0_0
snowballstemmer 2.0.0 py_0
sortedcollections 1.2.1 py_0
sortedcontainers 2.2.2 py_0
soupsieve 2.0.1 py_0
sphinx 3.1.2 py_0
sphinxcontrib 1.0 py38_1
sphinxcontrib-applehelp 1.0.2 py_0
sphinxcontrib-devhelp 1.0.2 py_0
sphinxcontrib-htmlhelp 1.0.3 py_0
sphinxcontrib-jsmath 1.0.1 py_0
sphinxcontrib-qthelp 1.0.3 py_0
sphinxcontrib-serializinghtml 1.1.4 py_0
sphinxcontrib-websupport 1.2.3 py_0
spyder 4.1.4 py38_0
spyder-kernels 1.9.2 py38_0
sqlalchemy 1.3.18 py38h7b6447c_0
sqlite 3.36.0 h9cd32fc_2 conda-forge
statsmodels 0.11.1 py38h7b6447c_0
sympy 1.6.1 py38_0
tbb 2020.0 hfd86e86_0
tblib 1.6.0 py_0
terminado 0.8.3 py38_0
testpath 0.4.4 py_0
threadpoolctl 2.1.0 pyh5ca1d4c_0
tk 8.6.10 hbc83047_0
toml 0.10.1 py_0
toolz 0.10.0 py_0
tornado 6.0.4 py38h7b6447c_1
tqdm 4.47.0 py_0
traitlets 4.3.3 py38_0 conda-forge
typing_extensions 3.7.4.2 py_0
udunits2 2.2.27.27 hc3e0081_2 conda-forge
ujson 1.35 py38h7b6447c_0
unicodecsv 0.14.1 py38_0
unixodbc 2.3.7 h14c3975_0
urllib3 1.25.9 py_0
watchdog 0.10.3 py38_0
wcwidth 0.2.5 py_0
webencodings 0.5.1 py38_1
werkzeug 1.0.1 py_0
wheel 0.34.2 py38_0
widgetsnbextension 3.5.1 py38_0
wrapt 1.11.2 py38h7b6447c_0
wurlitzer 2.0.1 py38_0
xarray 0.19.0 pyhd8ed1ab_1 conda-forge
xlrd 1.2.0 py_0
xlsxwriter 1.2.9 py_0
xlwt 1.3.0 py38_0
xmltodict 0.12.0 py_0
xorg-fixesproto 5.0 h14c3975_1002 conda-forge
xorg-inputproto 2.3.2 h14c3975_1002 conda-forge
xorg-kbproto 1.0.7 h14c3975_1002 conda-forge
xorg-libice 1.0.10 h516909a_0 conda-forge
xorg-libsm 1.2.3 hd9c2040_1000 conda-forge
xorg-libx11 1.7.2 h7f98852_0 conda-forge
xorg-libxau 1.0.9 h14c3975_0 conda-forge
xorg-libxext 1.3.4 h7f98852_1 conda-forge
xorg-libxfixes 5.0.3 h7f98852_1004 conda-forge
xorg-libxi 1.7.10 h7f98852_0 conda-forge
xorg-libxrender 0.9.10 h7f98852_1003 conda-forge
xorg-renderproto 0.11.1 h14c3975_1002 conda-forge
xorg-xextproto 7.3.0 h14c3975_1002 conda-forge
xorg-xproto 7.0.31 h14c3975_1007 conda-forge
xz 5.2.5 h7b6447c_0
yaml 0.2.5 h7b6447c_0
yapf 0.30.0 py_0
zarr 2.10.1 pyhd8ed1ab_0 conda-forge
zeromq 4.3.2 he6710b0_2
zict 2.0.0 py_0
zipp 3.1.0 py_0
zlib 1.2.11 h36c2ea0_1013 conda-forge
zope 1.0 py38_1
zope.event 4.4 py38_0
zope.interface 4.7.1 py38h7b6447c_0
zstd 1.5.0 ha95c52a_0 conda-forge
```
Import user-site packages
It is possible to install user-site packages into your .local/lib/python3.* directory:
# installing a user site package
/gpfs/data/fs71386/imgw/run.sh pip install --user [package]

# then, inside Python in the container, add the user site directory to the path:
import sys, site
sys.path.append(site.getusersitepackages())  # this adds the correct path
Then you will be able to load all packages that are located in the user site.
Debugging on VSC-4
Currently (June 2021) there is no development queue on VSC-4; support suggested doing the following:
# Request resources from slurm (-N 1, a full Node)
$ salloc -N 1 -p mem_0384 --qos p71386_0384 --no-shell
# Once the node is assigned / job is running
# Check with
$ squeue -u $USER
# connect to the Node with ssh
$ ssh [Node]
# test and debug the model there.
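When you are done, release the allocation again (otherwise the node stays reserved until the walltime runs out):

```
$ scancel JOBID     # JOBID as shown by squeue -u $USER
```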