diff --git a/Documentation/screen-cheatsheet.png b/Documentation/screen-cheatsheet.png
new file mode 100644
index 0000000000000000000000000000000000000000..08c3225987814a82657f11d1de420b8a43f387cc
Binary files /dev/null and b/Documentation/screen-cheatsheet.png differ
diff --git a/Editors/Vim.md b/Editors/Vim.md
index 6d23e9e75b8fd032bbaf057eccee6f4d13cb12a3..f6f76c24c2e3d723be3648b15a5779e2f54c2ae6 100644
--- a/Editors/Vim.md
+++ b/Editors/Vim.md
@@ -45,3 +45,10 @@ noremap! <C-?> <C-h>
 ```
 The source of this error relates to stty and maybe VNC. [on stackoverflow](https://stackoverflow.com/questions/9701366/vim-backspace-leaves)
+
+Alternatively, set the erase character directly in your shell:
+```bash
+# in .bashrc
+# fix for vim backspace
+stty erase '^?'
+```
\ No newline at end of file
diff --git a/Fortran/Compilers.md b/Fortran/Compilers.md
new file mode 100644
index 0000000000000000000000000000000000000000..8d4aa29e7cf5b39ce1e031fc2a89a8dc19e1d107
--- /dev/null
+++ b/Fortran/Compilers.md
@@ -0,0 +1,23 @@
+# Compiling & Building
+
+under development :)
+
+## Makefile
+
+
+### Environment Modules & Makefile
+
+It is quite handy to use environment modules to load different versions of libraries, but how do you make use of these ever-changing **PATHs**? Take a look at the following example to help make your `Makefile` ready for modules.
+
+```
+# use the environment variable $INCLUDE
+# split the paths separated by :
+INC = $(subst :, ,$(INCLUDE))
+# add a -I/path/to/include
+INC := $(INC:%=-I%)
+# use the environment variable $LIBRARY_PATH
+LIBS = $(subst :, ,$(LIBRARY_PATH))
+LIBS := $(LIBS:%=-L%)
+```
+
+With this code snippet in your `Makefile` you should be able to use environment variables such as `$INCLUDE` or `$LIBRARY_PATH` efficiently; the resulting flags adapt to your loaded modules.
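+
+As a minimal sketch of how these variables could then be used in a rule (`myprog` and `main.f90` are placeholder names, not part of the snippet above):
+
+```
+# compile and link against the module-provided include and library paths
+myprog: main.f90
+	$(FC) $(INC) -o $@ $< $(LIBS)
+```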
diff --git a/SRVX1.md b/SRVX1.md
index 8f24efc2bdf92f18f52bb1d958f1361f3f34a8ff..754e1a855b9aa6dfa42c2fe4cd1300e7ac6ca1f0 100644
--- a/SRVX1.md
+++ b/SRVX1.md
@@ -7,35 +7,58 @@
 
 [How to SSH / VNC / VPN](SSH-VPN-VNC/README.md)
 
-### Jupyterhub
+## System Information
+| Name | Value |
+| --- | --- |
+| Product | PowerEdge R940 |
+| Processor | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz |
+| Cores | 4 CPU, 20 physical cores per CPU, total 160 logical CPU units |
+| CPU time | 700 kh |
+| Memory | 754 GB Total |
+| Memory/Core | 9.4 GB |
+
+```
+----------------------------------------------
+ 131.130.157.11 _ . , . .
+ * / \_ * / \_ _ * * /\'_
+ / \ / \, (( . _/ /
+ . /\/\ /\/ :' __ \_ ` _^/ ^/
+ / \/ \ _/ \-'\ * /.' ^_ \
+ /\ .- `. \/ \ /==~=-=~=-=-;. _/ \ -
+ / `-.__ ^ / .-'.--\ =-=~_=-=~=^/ _ `--./
+/SRVX1 `. / / `.~-^=-=~=^=.-' '
+----------------------------------------------
+```
+
+
+## Services
+SRVX1 is the central access point to IMG services:
+Goto: [srvx1.img.univie.ac.at](https://srvx1.img.univie.ac.at)
+Currently running:
+- TeachingHub
+- ResearchHub
+- Webdata
+
+## Jupyterhub
 <img src="https://jupyter.org/assets/hublogo.svg" width="300px">
 
 SRVX1 serves a teaching [jupyterhub](https://jupyterhub.readthedocs.io/en/stable/) with a [jupyterlab](https://jupyterlab.readthedocs.io/en/stable/). It allows easy access for students and teachers.
 
-Goto: [](https://srvx1.img.univie.ac.at)
+Goto: [srvx1.img.univie.ac.at/hub](https://srvx1.img.univie.ac.at/hub)
 
 Signup is only granted by teachers and requires a srvx1 user account. A new password is needed and a TOTP (time base one-time password) will be created.
 Download/Use any of these required Authenticator Apps:
+ - [2FAS (Mobile, recommended)](https://2fas.com/)
+ - [KeepassX (Desktop)](https://www.keepassx.org/)
 - [Authy (Mobile, Desktop)](https://authy.com/download/)
 - [FreeOTP (Mobile)](https://freeotp.github.io/)
 - [Google Auth (Mobile)](https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2)
 
 After registering the teacher/admin has to grant you access and you can login.
 
-## System Information
-| Name | Value |
-| --- | --- |
-| Product | PowerEdge R720xd |
-| Processor | Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz |
-| Cores | 2 CPU, 8 physical cores per CPU, total 32 logical CPU units |
-| CPU time | 140 kh |
-| Memory | 190 GB Total |
-| Memory/Core | 11.9 GB |
-| Network | 10 Gbit/s |
-
 ## Software
-The typcial installation of a intel-cluster has the INTEL Compiler suite (`intel-parallel-studio`) and the open source GNU Compilers installed. Based on these two different compilers (`intel`, `gnu`), there are usually two version of each scientific software.
+The typical installation of an Intel server has the Intel compiler suite (`intel-parallel-studio`, `intel-oneapi`) and the open-source GNU compilers installed. Based on these two compiler families (`intel`, `gnu`), there are usually two versions of each scientific software package.
 Major Libraries:
 - OpenMPI (3.1.6, 4.0.5)
 - HDF5
@@ -50,15 +73,77 @@ These software libraries are usually handled by environment modules.
 
 ## Currently installed modules
+Please note that new versions might already be installed.
 ```bash
 $ module av
--------------------------------- /home/opt/spack/share/spack/modules/linux-centos6-sandybridge --------------------------------
-anaconda3/2020.07-gcc-5.3.0 gcc/5.3.0-gcc-5.3.0 netcdf-c/4.7.4-gcc-5.3.0 zlib/1.2.11-gcc-5.3.0
-cdo/1.9.9-gcc-5.3.0 git/2.29.0-gcc-5.3.0 netcdf-fortran/4.5.3-gcc-5.3.0
-eccodes/2.18.0-gcc-5.3.0 hdf5/1.10.7-gcc-5.3.0 openmpi/3.1.6-gcc-5.3.0
-enstools/2020.11.dev-gcc-5.3.0 miniconda3/4.8.2-gcc-5.3.0 proj/7.1.0-gcc-5.3.0
+--------------- /home/swd/spack/share/spack/modules/linux-rhel8-skylake_avx512 ----------------
+anaconda2/2019.10-gcc-8.4.1 netcdf-fortran/4.5.3-gcc-8.4.1
+anaconda3/2020.07-gcc-8.4.1 netlib-lapack/3.9.1-gcc-8.4.1
+anaconda3/2020.11-gcc-8.4.1 netlib-lapack/3.9.1-intel-20.0.4
+anaconda3/2021.05-gcc-8.4.1 netlib-lapack/3.9.1-oneapi-2021.2.0
+autoconf/2.69-oneapi-2021.2.0 netlib-scalapack/2.1.0-gcc-8.4.1
+autoconf/2.71-oneapi-2021.2.0 netlib-scalapack/2.1.0-gcc-8.4.1-MPI3.1.6
+eccodes/2.19.1-gcc-8.4.1 openblas/0.3.17-gcc-8.4.1
+eccodes/2.19.1-intel-20.0.4 openmpi/3.1.6-gcc-8.4.1
+eccodes/2.21.0-gcc-8.4.1 openmpi/3.1.6-intel-20.0.4
+eccodes/2.21.0-intel-20.0.4 openmpi/4.0.5-gcc-8.4.1
+geos/3.9.1-gcc-8.4.1 openmpi/4.0.5-intel-20.0.4
+hdf5/1.10.7-gcc-8.4.1-MPI3.1.6 perl/5.32.0-intel-20.0.4
+hdf5/1.10.7-intel-20.0.4-MPI3.1.6 proj/8.1.0-gcc-8.4.1
+hdf5/1.12.0-gcc-8.4.1 python/3.8.9-gcc-8.4.1
+hdf5/1.12.0-intel-20.0.4
+hdf5/1.12.0-intel-20.0.4-MPI3.1.6
+hdf5/1.12.0-oneapi-2021.2.0
+intel-oneapi-compilers/2021.2.0-oneapi-2021.2.0
+intel-oneapi-dal/2021.2.0-oneapi-2021.2.0
+intel-oneapi-mkl/2021.2.0-oneapi-2021.2.0
+intel-oneapi-mpi/2021.2.0-oneapi-2021.2.0
+intel-parallel-studio/composer.2020.4-intel-20.0.4
+libemos/4.5.9-gcc-8.4.1
+libemos/4.5.9-intel-20.0.4
+matlab/R2020b-gcc-8.4.1
+miniconda2/4.7.12.1-gcc-8.4.1
+miniconda3/4.10.3-gcc-8.4.1
+ncl/6.5.0-gcc-8.4.1-MPI3.1.6
+ncl/6.6.2-gcc-8.4.1-MPI3.1.6
+nco/4.9.3-gcc-8.4.1
+nco/4.9.3-intel-20.0.4
+ncview/2.1.8-gcc-8.4.1
+netcdf-c/4.6.3-gcc-8.4.1-MPI3.1.6
+netcdf-c/4.6.3-intel-20.0.4-MPI3.1.6
+netcdf-c/4.7.4-gcc-8.4.1
+netcdf-c/4.7.4-intel-20.0.4
+netcdf-fortran/4.5.2-gcc-8.4.1-MPI3.1.6
+netcdf-fortran/4.5.2-intel-20.0.4-MPI3.1.6
 ```
 
 on how to use environment modules go to [Using Environment Modules](Misc/Environment-Modules.md)
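+
+A quick sketch of the typical workflow with the modules listed above (pick versions built with the same compiler):
+```bash
+# load a compiler-consistent toolchain
+module load openmpi/4.0.5-gcc-8.4.1 netcdf-fortran/4.5.3-gcc-8.4.1
+# check what is currently loaded
+module list
+```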
+
+## Container Hub
+
+Currently it is possible to run [singularity](https://singularity.hpcng.org/) containers on all our servers. Singularity is quite similar to Docker, but much better suited to secure multi-user servers. Almost every Docker container can be converted into a singularity container, and some of the build recipes use Docker.
+
+There are a number of prepared containers, and more can be added. If you need a container, or have an existing container that could be useful for others, please share it.
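+
+As a minimal sketch of such a conversion with standard singularity commands (the image name is only an example):
+```bash
+# pull a public Docker image and convert it into a .sif container
+singularity pull docker://rockylinux:8
+# run the resulting container
+singularity run rockylinux_8.sif
+```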
+```yaml
+containers:
+  - root: /home/swd/containers
+  - available:
+    - RTTOV:
+      - RTTOV: 12.3
+      - compiler: gcc:7.3.0 (anaconda)
+      - path: /home/swd/containers/rttov-jupyter/jup3rttov.sif
+      - os: centos:6.10
+      - python: 3.7.4
+      - singularity: 3.5.2
+      - packages:
+        - anaconda3
+        - jupyter jupyterlab numpy matplotlib pandas xarray bottleneck dask numba scipy netcdf4 cartopy h5netcdf nc-time-axis cfgrib eccodes nodejs
+      - apps:
+        - atlas
+        - lab
+        - notebook
+        - rtcoef
+        - rthelp
+        - rttest
+```
\ No newline at end of file
diff --git a/SRVX2.md b/SRVX2.md
deleted file mode 100644
index 8056ca3adadf0e6f42aaa572e79ed99e32367701..0000000000000000000000000000000000000000
--- a/SRVX2.md
+++ /dev/null
@@ -1,51 +0,0 @@
-
-# S R V X 2
-
-[[_TOC_]]
-
-## Getting Started
-
-[How to SSH / VNC / VPN](SSH-VPN-VNC/README.md)
-
-## System Information
-| Name | Value |
-| --- | --- |
-| Product | PowerEdge R940 |
-| Processor | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz |
-| Cores | 4 CPU, 20 physical cores per CPU, total 160 logical CPU units |
-| CPU time | 700 kh |
-| Memory | 376 GB Total |
-| Memory/Core | 9.4 GB |
-
-## Software
-
-The typcial installation of a intel-server has the INTEL Compiler suite (`intel-parallel-studio`) and the open source GNU Compilers installed. Based on these two different compilers (`intel`, `gnu`), there are usually two version of each scientific software.
-Major Libraries:
- - OpenMPI (3.1.6, 4.0.5)
- - HDF5
- - NetCDF (C, Fortran)
- - ECCODES from [ECMWF](https://confluence.ecmwf.int/display/ECC)
- - Math libraries e.g. intel-mkl, lapack,scalapack
- - Interpreters: Python, Julia
- - Tools: cdo, ncl, nco, ncview
-
-These software libraries are usually handled by environment modules.
-
-
-
-## Currently installed modules
-```bash
-$ module av
------------------------------- /home/spack-root/share/spack/modules/linux-centos8-skylake_avx512 ------------------------------
-anaconda3/2020.07-gcc-8.3.1 intel-parallel-studio/composer.2020.2-intel-20.0.2 netcdf-fortran/4.5.3-gcc-8.3.1
-eccodes/2.18.0-intel-20.0.2 libemos/4.5.9-gcc-8.3.1 netcdf-fortran/4.5.3-intel-20.0.2
-eccodes/2.19.1-gcc-8.3.1 miniconda3/4.8.2-gcc-8.3.1 netlib-lapack/3.8.0-gcc-8.3.1
-eccodes/2.19.1-intel-20.0.2 miniconda3/4.9.2-gcc-8.3.1 netlib-scalapack/2.1.0-gcc-8.3.1
-eccodes/2.21.0-intel-20.0.2 ncl/6.6.2-gcc-8.3.1 openblas/0.3.12-gcc-8.3.1
-hdf5/1.10.7-gcc-8.3.1 netcdf-c/4.7.4-gcc-8.3.1 openmpi/3.1.6-gcc-8.3.1
-hdf5/1.10.7-intel-20.0.2 netcdf-c/4.7.4-intel-20.0.2 openmpi/3.1.6-intel-20.0.2
-```
-
-on how to use environment modules go to [Using Environment Modules](Misc/Environment-Modules.md)
-
-
diff --git a/SRVX8.md b/SRVX8.md
index 03ecd349fccf75113c12f52c68918bbf3b7e3da9..98be6e9e336ebc15eba1cf3e92001b23cd1b93e3 100644
--- a/SRVX8.md
+++ b/SRVX8.md
@@ -17,11 +17,112 @@
 | CPU time | 245 kh |
 | Memory | 504 GB Total |
 | Memory/Core | 18 Gb |
-| Network | 10 Gbit/s |
+
+```
+----------------------------------------------
+ _
+ (` ).
+ ( ).
+) _( SRVX8 '`.
+ .=(`( . ) .--
+ (( (..__.:'-' .+( )
+`. `( ) ) ( . )
+ ) ` __.:' ) ( ( ))
+) ) ( ) --' `- __.'
+.-' (_.' .')
+ (_ )
+ 131.130.157.8
+--..,___.--,--'`,---..-.--+--.,,-,,.-..-._.-.-
+----------------------------------------------
+```
 
 ## Software
-Software is installed in numerous places. This is a legency system with no software controller.
-A reinstall is planed for summer 2021.
+The typical installation of an Intel server has the Intel compiler suite (`intel-parallel-studio`, `intel-oneapi`) and the open-source GNU compilers installed. Based on these two compiler families (`intel`, `gnu`), there are usually two versions of each scientific software package.
+Major Libraries:
+ - OpenMPI (3.1.6, 4.0.5)
+ - HDF5
+ - NetCDF (C, Fortran)
+ - ECCODES from [ECMWF](https://confluence.ecmwf.int/display/ECC)
+ - Math libraries e.g. intel-mkl, lapack, scalapack
+ - Interpreters: Python, Julia
+ - Tools: cdo, ncl, nco, ncview
+
+These software libraries are usually handled by environment modules.
+
+
+
+## Currently installed modules
+On how to use environment modules, see [Using Environment Modules](Misc/Environment-Modules.md).
+```bash
+$ module av
+------------------- /home/swd/spack/share/spack/modules/linux-rhel7-haswell -------------------
+anaconda2/2019.10-gcc-8.4.0
+anaconda3/2021.05-gcc-8.4.0
+eccodes/2.21.0-gcc-8.4.0
+gcc/8.4.0-gcc-4.8.5
+git/1.8.3.1-gcc-8.4.0
+git/2.31.1-gcc-8.4.0
+hdf5/1.10.7-gcc-8.4.0
+hdf5/1.12.0-gcc-8.4.0
+intel-oneapi-compilers/2021.3.0-oneapi-2021.3.0
+intel-oneapi-mkl/2021.3.0-oneapi-2021.3.0
+intel-oneapi-mpi/2021.3.0-oneapi-2021.3.0
+miniconda2/4.7.12.1-gcc-8.4.0
+miniconda3/4.10.3-gcc-8.4.0
+ncl/6.5.0-gcc-8.4.0
+ncl/6.6.2-gcc-8.4.0
+nco/4.9.3-gcc-8.4.0
+ncview/2.1.8-gcc-8.4.0
+netcdf-c/4.6.3-gcc-8.4.0
+netcdf-c/4.7.4-gcc-8.4.0
+netcdf-fortran/4.5.2-gcc-8.4.0
+netcdf-fortran/4.5.3-gcc-8.4.0
+netlib-lapack/3.9.1-gcc-8.4.0
+netlib-scalapack/2.1.0-gcc-8.4.0
+openblas/0.3.17-gcc-8.4.0
+openmpi/3.1.6-gcc-8.4.0
+openmpi/4.0.5-gcc-8.4.0
+openmpi/4.1.1-gcc-8.4.0
+proj/8.1.0-gcc-8.4.0
+python/3.8.9-gcc-4.8.5
+
+-------------------------------------- /home/swd/modules --------------------------------------
+micromamba/latest
+```
+
+## Virtual Machine Hub
+
+Currently the system acts as a virtual machine host.
+Active:
+ - VERA
+
+
+## Container Hub
+
+Currently it is possible to run [singularity](https://singularity.hpcng.org/) containers on all our servers. Singularity is quite similar to Docker, but much better suited to secure multi-user servers. Almost every Docker container can be converted into a singularity container, and some of the build recipes use Docker.
+
+There are a number of prepared containers, and more can be added. If you need a container, or have an existing container that could be useful for others, please share it.
+```yaml
+containers:
+  - root: /home/swd/containers
+  - available:
+    - RTTOV:
+      - RTTOV: 12.3
+      - compiler: gcc:7.3.0 (anaconda)
+      - path: /home/swd/containers/rttov-jupyter/jup3rttov.sif
+      - os: centos:6.10
+      - python: 3.7.4
+      - singularity: 3.5.2
+      - packages:
+        - anaconda3
+        - jupyter jupyterlab numpy matplotlib pandas xarray bottleneck dask numba scipy netcdf4 cartopy h5netcdf nc-time-axis cfgrib eccodes nodejs
+      - apps:
+        - atlas
+        - lab
+        - notebook
+        - rtcoef
+        - rthelp
+        - rttest
+```
\ No newline at end of file
diff --git a/SSH-VPN-VNC/README.md b/SSH-VPN-VNC/README.md
index e6a9cc5907c1680c30c296ffac507b26ef515134..0c8896925a8b77bb04e04c406f95c29e5e82f247 100644
--- a/SSH-VPN-VNC/README.md
+++ b/SSH-VPN-VNC/README.md
@@ -26,10 +26,20 @@ This starts a new session
 ```bash
 $ screen -S longjob
 ```
-You can detach from this session with `CTRL + A + D` and reconnect again with `screen -r`.
+You can detach from this session with `CTRL + A D` and reconnect again with `screen -r`.
 
 Multiple Sessions can be created and the output saved (`-L` Option).
 
+
+## Tmux
+[Tmux](https://wiki.ubuntuusers.de/tmux/) is a terminal multiplexer that allows you to open multiple consoles and to detach sessions. It is more complex and more powerful than screen.
+```bash
+$ tmux
+```
+This launches a new virtual terminal; with `CTRL + B D` it can be detached and with `tmux a` it can be reconnected.
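+
+A short sketch of the workflow with named sessions (the session name is arbitrary):
+```bash
+# start a named session
+tmux new -s longjob
+# detach with CTRL + B D, then later reattach by name
+tmux attach -t longjob
+```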
+
+
 ## Questions and Answers
 - [Q: How to use ssh-key authentication?](Questions.md#q-how-to-use-ssh-key-authentication)
 - [Q: How to use an ssh-agent?](Questions.md#q-how-to-use-an-ssh-agent)
@@ -37,3 +47,9 @@ Multiple Sessions can be created and the output saved (`-L` Option).
 - [Q: How to connect to Jet, SRVX8, SRVX2?](Questions.md#q-how-to-connect-to-jet-srvx8-srvx2)
 - [Q: How to mount a remote file system on Linux (MAC)?](Questions.md#q-how-to-mount-a-remote-file-system-on-Linux-mac)
 
+## Tools
+Please find below some useful tools for connecting to IMGW servers and the University of Vienna VPN.
+- BASH script using SSH to connect via a gateway, [SSH](SSH.md#connect-script) [connect2jet](connect2jet)
+- BASH script for the f5fpc VPN client, [VPN](VPN.md#connect-script) [connect2vpn](connect2vpn)
+- Change the VNC resolution, [VNC](VNC.md#xrandr) [add_xrandr_resolution](add_xrandr_resolution.sh)
+- Mount server directories via sshfs, [SSHFS](SSH.md#sshfs)
\ No newline at end of file
diff --git a/SSH-VPN-VNC/SSH.md b/SSH-VPN-VNC/SSH.md
index 7a8dcbf04127978a55499b550602b2de8d262cde..c501b23ad77161660db9f6e6a08bc6a78963be4d 100644
--- a/SSH-VPN-VNC/SSH.md
+++ b/SSH-VPN-VNC/SSH.md
@@ -20,21 +20,21 @@ Host *
 Host srvx1
   HostName srvx1.img.univie.ac.at
   User [USERNAME]
-
-Host srvx2
-  HostName srvx2.img.univie.ac.at
-  User [USERNAME]
 Host srvx8
   HostName srvx8.img.univie.ac.at
   User [USERNAME]
 Host jet
   HostName jet01.img.univie.ac.at
   User [USERNAME]
+Host srvx2jet
+  HostName jet01.img.univie.ac.at
+  User [USERNAME]
+  ProxyJump srvx1.img.univie.ac.at
 Host login
   HostName login.univie.ac.at
   User [U:Account USERNAME]
 ```
-and replacing `[USERNAME]` and `[U:Account USERNAME]` with your usernames. Using such a file allows to connect like this `ssh srvx2` using the correct server adress and specified username. Copy this file as well on `login.univie.ac.at` and you can use commands like this: `ssh -t login ssh jet` to connect directly to `jet` via the `login` gateway.
+and replacing `[USERNAME]` and `[U:Account USERNAME]` with your usernames. Using such a file allows you to connect with just `ssh srvx1`, using the correct server address and the specified username. Copy this file to `login.univie.ac.at` as well; then you can use commands like `ssh -t login ssh jet` to connect directly to `jet` via the `login` gateway.
 
 If you want to use ssh-keys you can also use different keys in `.ssh/config` per server with `IdentityFile ~/.ssh/id_rsa_for_server`.
@@ -99,3 +99,15 @@ Option 2: [Bitvise SSH Client](https://www.bitvise.com/ssh-client-download) and
 * Set "Destination Host" to `jet01.img.univie.ac.at`
 * Set "Destination Port" to `5900+[DISPLAY]`
 * Now start VncViewer and connect to `127.0.0.1:5900+[DISPLAY]`
+
+## SSHFS
+On Linux it is possible to mount your home directory on your personal computer via `sshfs`, or of course to use a dedicated remote file browser such as FileZilla or Cyberduck.
+
+On Linux you need to install `fuse2` and `sshfs`; the package names might vary between distributions, but both are in the default repositories.
+```bash
+# connect to srvx1, mounting your remote home directory into a local srvx1 directory
+# mountserver [host] [remotedir] [localdir]
+mkdir -p $HOME/srvx1
+mountserver [USER]@srvx1.img.univie.ac.at /users/staff/[USER] $HOME/srvx1
+```
+Note that the remote directories might vary depending on your membership (staff, external, students).
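+
+If the `mountserver` helper is not available on your machine, plain `sshfs` does essentially the same thing (a sketch; adjust the user and paths):
+```bash
+# mount the remote home directory below $HOME/srvx1
+mkdir -p $HOME/srvx1
+sshfs [USER]@srvx1.img.univie.ac.at:/users/staff/[USER] $HOME/srvx1
+# unmount again when done
+fusermount -u $HOME/srvx1
+```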
diff --git a/SSH-VPN-VNC/VNC.md b/SSH-VPN-VNC/VNC.md
index 6e1e708f5daeed8c59853f1e6f12ad1ed9bdc372..29181177ce52529d011c86be7f92a9f03b3f40e7 100644
--- a/SSH-VPN-VNC/VNC.md
+++ b/SSH-VPN-VNC/VNC.md
@@ -2,13 +2,48 @@
 
 **Be aware! Everyone with the VNC password will get access to your account**
 
-It is recommended not to use VNC. Use **jupyterhub** or **screen** instead.
+It is recommended not to use VNC. Use **jupyterhub**, **screen**, or **tmux** instead. However, for GUI applications there is no other way.
 
 The VNC (Virtual Network Computing) allows to view a graphical user interface (GUI) from a remote server in an viewer application. This can be used to launch GUI programs on the servers.
 
 Xvnc is the Unix VNC server. Applications can display themselves on Xvnc as if it were a normal display, but they will appear on any connected VNC viewers rather than on a physical screen.
 
 The VNC protocol uses the TCP/IP ports 5900+N, where N is the display number.
 
-### Setup
+Currently VNC is installed on:
+- SRVX8, mainly Staff
+- JET01, mainly Researchers
+
+## Userservices
+It is highly recommended to use the userservices scripts, available on all IMGW servers, to configure VNC.
+```bash
+$ userservices vnc -h
+################################################################################
+User Services - VNC Server Setup/Launcher/Stopper
+
+ vnc -h -s -x -d -l
+
+Options:
+ -h Help
+ -c Check for vnc server(s) running
+ -s Stop vnc server(s)
+ -x Write xstartup in /home/spack/.vnc
+ -d Prepare vncserver Service
+ -p [] Port: 1 - 99
+ -l Launch vnc server/service
+ -w [] Desktop Session: icewm, xfce
+################################################################################
+Author: MB
+Date: 25.01.2021
+Path: /home/swd/userservices/userservices.d
+################################################################################
+Installed Desktops: icewm-session
+################################################################################
+```
+Running the script without any options will run all necessary steps. In case of errors, try removing your `.vnc` directory first, as older configurations might be in the way. There should be at least two desktop options: icewm and xfce. You can specify the desktop directly with the `-w [DESKTOP]` option.
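+
+For example, preparing and launching a server on display 3 with the XFCE desktop might look like this (a sketch combining the options above; the exact flag combination may differ):
+```bash
+# prepare the service (-d) and launch it (-l) on port 3 with the xfce desktop
+userservices vnc -p 3 -w xfce -d -l
+```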
+
+
+## Setup - Manual
+Please consider using the `userservices vnc` script to do this setup.
+
 First of all check if a VNC server is already running or not. Depending on the results you have two options:
 1. Use an existing. (Note the Port/Display Number)
 2. Stop all and start a new VNC server
@@ -28,7 +63,7 @@ vncserver -kill :[DISPLAY]
 vncserver
 ```
 
-#### Jet Cluser
+### Jet Cluster
 on Jet there are the user services available to you:
 ```bash
 # Help information on VNC userservice
 $ userservices vnc -h
@@ -70,11 +105,12 @@ vncconfig -iconic &
 xterm -geometry -sb -sl 500 -fn 9x15bold -title "$VNCDESKTOP Desktop" &
 icewm &
 ```
+Some information on what could be put into `.Xresources` is given [here](https://wiki.archlinux.org/title/x_resources). It might be possible to replace `icewm` here with `startxfce4` to choose the XFCE desktop environment.
 
 ### VNC as a Service
 This is only here for reference, on SRVX2 and Jet use the `userservices vnc`.
 
-Setup, replace `[DISPLAY]` with an appropriate number:
+Setup, replace `[DISPLAY]` with an appropriate number, e.g. `3`:
 ```bash
 mkdir -p ~/.config/systemd/user
 cp /usr/lib/systemd/user/vncserver@.service ~/.config/systemd/user/
@@ -115,14 +151,24 @@ systemctl --user status vncserver.slice
 ...
 ```
 
-### Change the resolution of your VNC Session
+## Change the resolution of your VNC Session
+
+`xrandr` gives you a list of available resolutions that can be used. It requires a `$DISPLAY` variable to be set; using your VNC display number does the trick, e.g. `:3`.
 
-`xrandr` gives you a list of available resolutions, that can be used.
+```bash
+# Change VNC display resolution [width x height]
+$ userservices vnc-geometry 1920x1080
+```
 
 Change the resolution to e.g. 1920x1080 (HD):
 ```bash
 xrandr -s 1920x1080 -d $DISPLAY
 ```
-Adding resolutions according to your display's resolution have a look here: [add_xrandr_resolution.sh](add_xrandr_resolution.sh)
+To add resolutions matching your display, have a look at [add_xrandr_resolution.sh](add_xrandr_resolution.sh):
+```bash
+# run the script with the resolution you require, in pixels
+$ add_xrandr_resolution [width] [height]
+```
 
 Note: `$DISPLAY` is an environment variable that is usually set to your VNC server port.
 
+
diff --git a/SSH-VPN-VNC/VPN.md b/SSH-VPN-VNC/VPN.md
index 6553ca8877b91e21095c4e2a0196153608d1402f..226187173e6183aad6ec2411a0c010561c61aa07 100644
--- a/SSH-VPN-VNC/VPN.md
+++ b/SSH-VPN-VNC/VPN.md
@@ -19,4 +19,17 @@ On Windows and Mac you get a nice gui that requires you to fill in the VPN serve
 ```
 f5fpc -s -t vpn.univie.ac.at -u [user]
 ```
-The status can be checked with `f5fpc --info`.
\ No newline at end of file
+The status can be checked with `f5fpc --info`.
+
+## Connect script
+
+One can use the commands above, or use the [connect2vpn](connect2vpn) script to connect to the University VPN service. Especially on Linux, the interface is much more primitive than on Mac or Windows.
+
+```bash
+$ connect2vpn [u:account username]
+[VPN] Using [u:account username] as username
+[VPN] BIG-IP Edge Command Line Client version 7213.2021.0526.1
+[VPN] Full (1) or split (None) tunnel? (1/None):
+```
+Answer the prompt and wait until you get a response that the tunnel is connected.
+The status display stays visible.
\ No newline at end of file
diff --git a/SSH-VPN-VNC/connect2vpn.desktop b/SSH-VPN-VNC/connect2vpn.desktop
index 2b08097ba86dbf3d74fcb8992fe58e30c4ea87b4..09fc88131f2bf462c4c0c0eb81878ed1f6a040fb 100644
--- a/SSH-VPN-VNC/connect2vpn.desktop
+++ b/SSH-VPN-VNC/connect2vpn.desktop
@@ -1,5 +1,5 @@
 [Desktop Entry]
-Exec=gnome-terminal --class univpn --name univpn -t UNIVPN -x bash -c "connect2vpn; exec $SHELL"
+Exec=gnome-terminal --class univpn --name univpn -t UNIVPN -x bash -c "connect2vpn"
 Name=vpn.univie
 Icon=../Documention/logo_uniwien.png
 Type=Application
diff --git a/VSC.md b/VSC.md
index 0d65fcea4ddd9a6950147b47578a20e2bf2cbb27..c2da056a2a200e6c4e36ca0e1400b4dafde46a46 100644
--- a/VSC.md
+++ b/VSC.md
@@ -260,6 +260,505 @@ will load the intel compiler suite and add variables to your environment.
 
 on how to use environment modules go to [Using Environment Modules](Misc/Environment-Modules.md)
 
+### Containers
+
+We can use complex software that is packaged in [singularity](https://singularity.hpcng.org/) containers [(doc)](https://singularity.hpcng.org/user-docs/master/) and can be executed on VSC-4. Please consider using one of the following containers:
+- `py3centos7anaconda3-2020-07-dev`
+
+located in the `$DATA` directory of IMGW: `/gpfs/data/fs71386/imgw`
+
+#### How to use?
+Currently there is only one container, which comes with a run script.
+```bash
+# the run script in the container directory takes your command as arguments
+/gpfs/data/fs71386/imgw/run.sh [arguments]
+# executing the python inside
+/gpfs/data/fs71386/imgw/run.sh python
+# or ipython
+/gpfs/data/fs71386/imgw/run.sh ipython
+# with other arguments
+/gpfs/data/fs71386/imgw/run.sh python analysis.py
+```
+
+#### Understanding the container
+In principle, a run script needs to do only 3 things (sketched below):
+1. load the module `singularity`
+2. set the `SINGULARITY_BIND` environment variable
+3. execute the container with your arguments
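+
+A sketch of what such a run script might look like (an illustration, not the exact `run.sh`):
+```bash
+#!/bin/bash
+# 1. make the singularity command available
+module load singularity
+# 2. make non-standard paths like $HOME and $DATA visible inside the container
+export SINGULARITY_BIND="$HOME,$DATA"
+# 3. run the container, passing along all arguments
+singularity run /gpfs/data/fs71386/imgw/py3centos7anaconda3-2020-07-dev.sif "$@"
+```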
+
+It is necessary to set `SINGULARITY_BIND` because `$HOME`, `$DATA` and `$BINFS` are not standard Linux paths; the Linux system inside the container does not know about them, so files under these paths cannot otherwise be accessed from within the container. If you have problems accessing other paths in the future, adding them to `SINGULARITY_BIND` might fix the issue.
+
+In principle one can execute the container like this:
+```bash
+# check if the module is loaded
+module load singularity
+# just run the container, initiating the builtin runscript (running ipython):
+[mblasch@l44 imgw]$ /gpfs/data/fs71386/imgw/py3centos7anaconda3-2020-07-dev.sif
+Python 3.8.3 (default, Jul 2 2020, 16:21:59)
+Type 'copyright', 'credits' or 'license' for more information
+IPython 7.16.1 -- An enhanced Interactive Python. Type '?' for help.
+
+In [1]:
+In [2]: %env DATA
+Out[2]: '/gpfs/data/fs71386/mblasch'
+
+In [3]: ls /gpfs/data/fs71386/mblasch
+ls: cannot access /gpfs/data/fs71386/mblasch: No such file or directory
+# Please note that the path is not available here, because we did not set SINGULARITY_BIND
+
+```
+
+#### What is inside the container?
+In principle you can check what is inside by using
+```bash
+$ module load singularity
+$ singularity inspect py3centos7anaconda3-2020-07-dev.sif
+author: M.Blaschek
+dist: anaconda2020.07
+glibc: 2.17
+org.label-schema.build-arch: amd64
+org.label-schema.build-date: Thursday_7_October_2021_14:37:23_CEST
+org.label-schema.schema-version: 1.0
+org.label-schema.usage.singularity.deffile.bootstrap: docker
+org.label-schema.usage.singularity.deffile.from: centos:7
+org.label-schema.usage.singularity.deffile.stage: final
+org.label-schema.usage.singularity.version: 3.8.1-1.el8
+os: centos7
+python: 3.8
+```
+which shows some information on the container, e.g. that CentOS 7 is installed, with Python 3.8 and glibc 2.17.
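+
+Such details can also be verified directly with standard singularity commands (a sketch):
+```bash
+module load singularity
+# run single commands inside the container
+singularity exec py3centos7anaconda3-2020-07-dev.sif python --version
+singularity exec py3centos7anaconda3-2020-07-dev.sif cat /etc/centos-release
+```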
+ +But you can also check the applications inside + +```bash +# List all executables inside the container +py3centos7anaconda3-2020-07-dev.sif ls /opt/view/bin + +# or using conda for the environment +py3centos7anaconda3-2020-07-dev.sif conda info +# for the package list +py3centos7anaconda3-2020-07-dev.sif conda list +``` + +which shows something like this + +``` +# packages in environment at /opt/software/linux-centos7-haswell/gcc-4.8.5/anaconda3-2020.07-xl53rxqkccbjdufemaupvtuhs3wsj5d2: +# +# Name Version Build Channel +_anaconda_depends 2020.07 py38_0 +_ipyw_jlab_nb_ext_conf 0.1.0 py38_0 +_libgcc_mutex 0.1 conda_forge conda-forge +_openmp_mutex 4.5 1_gnu conda-forge +alabaster 0.7.12 py_0 +alsa-lib 1.2.3 h516909a_0 conda-forge +anaconda custom py38_1 +anaconda-client 1.7.2 py38_0 +anaconda-navigator 1.9.12 py38_0 +anaconda-project 0.8.4 py_0 +appdirs 1.4.4 pyh9f0ad1d_0 conda-forge +argh 0.26.2 py38_0 +asciitree 0.3.3 py_2 conda-forge +asn1crypto 1.3.0 py38_0 +astroid 2.4.2 py38_0 +astropy 4.0.1.post1 py38h7b6447c_1 +atomicwrites 1.4.0 py_0 +attrs 19.3.0 py_0 +autopep8 1.5.3 py_0 +babel 2.8.0 py_0 +backcall 0.2.0 py_0 +backports 1.0 py_2 +backports.functools_lru_cache 1.6.1 py_0 +backports.shutil_get_terminal_size 1.0.0 py38_2 +backports.tempfile 1.0 py_1 +backports.weakref 1.0.post1 py_1 +beautifulsoup4 4.9.1 py38_0 +bitarray 1.4.0 py38h7b6447c_0 +bkcharts 0.2 py38_0 +blas 1.0 mkl +bleach 3.1.5 py_0 +blosc 1.21.0 h9c3ff4c_0 conda-forge +bokeh 2.1.1 py38_0 +boto 2.49.0 py38_0 +bottleneck 1.3.2 py38heb32a55_1 +brotlipy 0.7.0 py38h7b6447c_1000 +bzip2 1.0.8 h7b6447c_0 +c-ares 1.17.2 h7f98852_0 conda-forge +ca-certificates 2021.7.5 h06a4308_1 +cached-property 1.5.2 hd8ed1ab_1 conda-forge +cached_property 1.5.2 pyha770c72_1 conda-forge +cairo 1.16.0 h6cf1ce9_1008 conda-forge +cartopy 0.20.0 py38hf9a4893_2 conda-forge +cdo 1.9.10 h25e7f74_6 conda-forge +certifi 2021.5.30 py38h578d9bd_0 conda-forge +cffi 1.14.0 py38he30daa8_1 +cftime 1.5.1 py38h6c62de6_0 conda-forge +chardet 3.0.4 py38_1003 +click 7.1.2 py_0 +cloudpickle 1.5.0 py_0 +clyent 1.2.2 py38_1 +colorama 0.4.3 py_0 +conda 4.10.3 py38h578d9bd_2 conda-forge +conda-build 3.18.11 py38_0 +conda-env 2.6.0 1 +conda-package-handling 1.6.1 py38h7b6447c_0 +conda-verify 3.4.2 py_1 +contextlib2 0.6.0.post1 py_0 +cryptography 2.9.2 py38h1ba5d50_0 +curl 7.79.1 h2574ce0_1 conda-forge +cycler 0.10.0 py38_0 +cython 0.29.21 py38he6710b0_0 +cytoolz 0.10.1 py38h7b6447c_0 +dask 2.20.0 py_0 +dask-core 2.20.0 py_0 +dbus 1.13.16 hb2f20db_0 +decorator 4.4.2 py_0 +defusedxml 0.6.0 py_0 +diff-match-patch 20200713 py_0 +distributed 2.20.0 py38_0 +docutils 0.16 py38_1 +eccodes 2.23.0 h11d1a29_2 conda-forge +entrypoints 0.3 py38_0 +et_xmlfile 1.0.1 py_1001 +expat 2.4.1 h9c3ff4c_0 conda-forge +fastcache 1.1.0 py38h7b6447c_0 +fasteners 0.16.3 pyhd3eb1b0_0 +fftw 3.3.10 nompi_hcdd671c_101 conda-forge +filelock 3.0.12 py_0 +flake8 3.8.3 py_0 +flask 1.1.2 py_0 +font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge +font-ttf-inconsolata 3.000 h77eed37_0 conda-forge +font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge +font-ttf-ubuntu 0.83 hab24e00_0 conda-forge +fontconfig 2.13.1 hba837de_1005 conda-forge +fonts-conda-ecosystem 1 0 conda-forge +fonts-conda-forge 1 0 conda-forge +freeglut 3.2.1 h9c3ff4c_2 conda-forge +freetype 2.10.4 h0708190_1 conda-forge +fribidi 1.0.10 h516909a_0 conda-forge +fsspec 0.7.4 py_0 +future 0.18.2 py38_1 +geos 3.9.1 h9c3ff4c_2 conda-forge +get_terminal_size 1.0.0 haa9412d_0 +gettext 0.21.0 hf68c758_0 +gevent 20.6.2 py38h7b6447c_0 +glib 
2.68.4 h9c3ff4c_0 conda-forge +glib-tools 2.68.4 h9c3ff4c_0 conda-forge +glob2 0.7 py_0 +gmp 6.1.2 h6c8ec71_1 +gmpy2 2.0.8 py38hd5f6e3b_3 +graphite2 1.3.14 h23475e2_0 +greenlet 0.4.16 py38h7b6447c_0 +gst-plugins-base 1.18.5 hf529b03_0 conda-forge +gstreamer 1.18.5 h76c114f_0 conda-forge +h5netcdf 0.11.0 pyhd8ed1ab_0 conda-forge +h5py 3.4.0 nompi_py38hfbb2109_101 conda-forge +harfbuzz 3.0.0 h83ec7ef_1 conda-forge +hdf4 4.2.15 h10796ff_3 conda-forge +hdf5 1.12.1 nompi_h2750804_101 conda-forge +heapdict 1.0.1 py_0 +html5lib 1.1 py_0 +icu 68.1 h58526e2_0 conda-forge +idna 2.10 py_0 +imageio 2.9.0 py_0 +imagesize 1.2.0 py_0 +importlib-metadata 1.7.0 py38_0 +importlib_metadata 1.7.0 0 +importlib_resources 5.2.2 pyhd8ed1ab_0 conda-forge +intel-openmp 2020.1 217 +intervaltree 3.0.2 py_1 +ipykernel 5.3.2 py38h5ca1d4c_0 +ipython 7.16.1 py38h5ca1d4c_0 +ipython_genutils 0.2.0 py38_0 +ipywidgets 7.5.1 py_0 +isort 4.3.21 py38_0 +itsdangerous 1.1.0 py_0 +jasper 2.0.14 ha77e612_2 conda-forge +jbig 2.1 hdba287a_0 +jdcal 1.4.1 py_0 +jedi 0.17.1 py38_0 +jeepney 0.4.3 py_0 +jinja2 2.11.2 py_0 +joblib 0.16.0 py_0 +jpeg 9d h516909a_0 conda-forge +json5 0.9.5 py_0 +jsonschema 3.2.0 py38_0 +jupyter 1.0.0 py38_7 +jupyter_client 6.1.6 py_0 +jupyter_console 6.1.0 py_0 +jupyter_core 4.6.3 py38_0 +jupyterlab 2.1.5 py_0 +jupyterlab_server 1.2.0 py_0 +keyring 21.2.1 py38_0 +kiwisolver 1.2.0 py38hfd86e86_0 +krb5 1.19.2 hcc1bbae_0 conda-forge +lazy-object-proxy 1.4.3 py38h7b6447c_0 +lcms2 2.11 h396b838_0 +ld_impl_linux-64 2.33.1 h53a641e_7 +lerc 2.2.1 h9c3ff4c_0 conda-forge +libaec 1.0.6 h9c3ff4c_0 conda-forge +libarchive 3.5.2 hccf745f_1 conda-forge +libblas 3.9.0 1_h86c2bf4_netlib conda-forge +libcblas 3.9.0 5_h92ddd45_netlib conda-forge +libclang 11.1.0 default_ha53f305_1 conda-forge +libcurl 7.79.1 h2574ce0_1 conda-forge +libdeflate 1.7 h7f98852_5 conda-forge +libedit 3.1.20191231 h14c3975_1 +libev 4.33 h516909a_1 conda-forge +libevent 2.1.10 h9b69904_4 conda-forge +libffi 3.3 he6710b0_2 +libgcc-ng 11.2.0 h1d223b6_9 conda-forge +libgfortran-ng 11.2.0 h69a702a_9 conda-forge +libgfortran5 11.2.0 h5c6108e_9 conda-forge +libglib 2.68.4 h3e27bee_0 conda-forge +libglu 9.0.0 he1b5a44_1001 conda-forge +libgomp 11.2.0 h1d223b6_9 conda-forge +libiconv 1.16 h516909a_0 conda-forge +liblapack 3.9.0 5_h92ddd45_netlib conda-forge +liblief 0.10.1 he6710b0_0 +libllvm11 11.1.0 hf817b99_2 conda-forge +libllvm9 9.0.1 h4a3c616_1 +libnetcdf 4.8.1 nompi_hb3fd0d9_101 conda-forge +libnghttp2 1.43.0 h812cca2_1 conda-forge +libogg 1.3.5 h27cfd23_1 +libopus 1.3.1 h7f98852_1 conda-forge +libpng 1.6.37 hbc83047_0 +libpq 13.3 hd57d9b9_0 conda-forge +libsodium 1.0.18 h7b6447c_0 +libsolv 0.7.16 h8b12597_0 conda-forge +libspatialindex 1.9.3 he6710b0_0 +libssh2 1.10.0 ha56f1ee_2 conda-forge +libstdcxx-ng 11.2.0 he4da1e4_9 conda-forge +libtiff 4.3.0 hf544144_1 conda-forge +libtool 2.4.6 h7b6447c_5 +libuuid 2.32.1 h14c3975_1000 conda-forge +libvorbis 1.3.7 he1b5a44_0 conda-forge +libwebp-base 1.2.1 h7f98852_0 conda-forge +libxcb 1.14 h7b6447c_0 +libxkbcommon 1.0.3 he3ba5ed_0 conda-forge +libxml2 2.9.12 h72842e0_0 conda-forge +libxslt 1.1.33 h15afd5d_2 conda-forge +libzip 1.8.0 h4de3113_1 conda-forge +libzlib 1.2.11 h36c2ea0_1013 conda-forge +llvmlite 0.33.0 py38hc6ec683_1 +locket 0.2.0 py38_1 +lxml 4.6.3 py38hf1fe3a4_0 conda-forge +lz4-c 1.9.3 h9c3ff4c_1 conda-forge +lzo 2.10 h7b6447c_2 +magics 4.9.1 hb6e17df_1 conda-forge +magics-python 1.5.6 pyhd8ed1ab_0 conda-forge +mamba 0.5.1 py38h6fd9b40_0 conda-forge +markupsafe 1.1.1 py38h7b6447c_0 +matplotlib 
3.4.3 py38h578d9bd_1 conda-forge +matplotlib-base 3.4.3 py38hf4fb855_0 conda-forge +mccabe 0.6.1 py38_1 +metpy 1.1.0 pyhd8ed1ab_0 conda-forge +mistune 0.8.4 py38h7b6447c_1000 +mkl 2020.1 217 +mkl-service 2.3.0 py38he904b0f_0 +mkl_fft 1.1.0 py38h23d657b_0 +mkl_random 1.1.1 py38h0573a6f_0 +mock 4.0.2 py_0 +more-itertools 8.4.0 py_0 +mpc 1.1.0 h10f8cd9_1 +mpfr 4.0.2 hb69a4c5_1 +mpmath 1.1.0 py38_0 +msgpack-python 1.0.0 py38hfd86e86_1 +multipledispatch 0.6.0 py38_0 +mysql-common 8.0.25 ha770c72_2 conda-forge +mysql-libs 8.0.25 hfa10184_2 conda-forge +navigator-updater 0.2.1 py38_0 +nbconvert 5.6.1 py38_0 +nbformat 5.0.7 py_0 +ncurses 6.2 he6710b0_1 +netcdf4 1.5.7 nompi_py38h2823cc8_103 conda-forge +networkx 2.4 py_1 +nltk 3.5 py_0 +nose 1.3.7 py38_2 +notebook 6.0.3 py38_0 +nspr 4.30 h9c3ff4c_0 conda-forge +nss 3.69 hb5efdd6_1 conda-forge +numba 0.50.1 py38h0573a6f_1 +numcodecs 0.9.1 py38h709712a_0 conda-forge +numexpr 2.7.1 py38h423224d_0 +numpy 1.19.2 py38h54aff64_0 +numpy-base 1.19.2 py38hfa32c7d_0 +numpydoc 1.1.0 py_0 +olefile 0.46 py_0 +openpyxl 3.0.4 py_0 +openssl 1.1.1l h7f98852_0 conda-forge +ossuuid 1.6.2 hf484d3e_1000 conda-forge +packaging 20.4 py_0 +pandas 1.0.5 py38h0573a6f_0 +pandoc 2.10 0 +pandocfilters 1.4.2 py38_1 +pango 1.48.10 h54213e6_2 conda-forge +parso 0.7.0 py_0 +partd 1.1.0 py_0 +patchelf 0.11 he6710b0_0 +path 13.1.0 py38_0 +path.py 12.4.0 0 +pathlib2 2.3.5 py38_0 +pathtools 0.1.2 py_1 +patsy 0.5.1 py38_0 +pcre 8.45 h9c3ff4c_0 conda-forge +pep8 1.7.1 py38_0 +pexpect 4.8.0 py38_0 +pickleshare 0.7.5 py38_1000 +pillow 7.2.0 py38hb39fc2d_0 +pint 0.17 pyhd8ed1ab_1 conda-forge +pip 20.1.1 py38_1 +pixman 0.40.0 h7b6447c_0 +pkginfo 1.5.0.1 py38_0 +pluggy 0.13.1 py38_0 +ply 3.11 py38_0 +pooch 1.5.1 pyhd8ed1ab_0 conda-forge +proj 8.1.1 h277dcde_2 conda-forge +prometheus_client 0.8.0 py_0 +prompt-toolkit 3.0.5 py_0 +prompt_toolkit 3.0.5 0 +psutil 5.7.0 py38h7b6447c_0 +ptyprocess 0.6.0 py38_0 +py 1.9.0 py_0 +py-lief 0.10.1 py38h403a769_0 +pycodestyle 2.6.0 py_0 +pycosat 0.6.3 py38h7b6447c_1 +pycparser 2.20 py_2 +pycurl 7.43.0.5 py38h1ba5d50_0 +pydocstyle 5.0.2 py_0 +pyflakes 2.2.0 py_0 +pygments 2.6.1 py_0 +pylint 2.5.3 py38_0 +pyodbc 4.0.30 py38he6710b0_0 +pyopenssl 19.1.0 py_1 +pyparsing 2.4.7 py_0 +pyproj 3.2.1 py38h80797bf_2 conda-forge +pyqt 5.12.3 py38h578d9bd_7 conda-forge +pyqt-impl 5.12.3 py38h7400c14_7 conda-forge +pyqt5-sip 4.19.18 py38h709712a_7 conda-forge +pyqtchart 5.12 py38h7400c14_7 conda-forge +pyqtwebengine 5.12.1 py38h7400c14_7 conda-forge +pyrsistent 0.16.0 py38h7b6447c_0 +pyshp 2.1.3 pyh44b312d_0 conda-forge +pysocks 1.7.1 py38_0 +pytables 3.6.1 py38hdb04529_4 conda-forge +pytest 5.4.3 py38_0 +python 3.8.3 hcff3b4d_2 +python-dateutil 2.8.1 py_0 +python-jsonrpc-server 0.3.4 py_1 +python-language-server 0.34.1 py38_0 +python-libarchive-c 2.9 py_0 +python_abi 3.8 2_cp38 conda-forge +pytz 2020.1 py_0 +pywavelets 1.1.1 py38h7b6447c_0 +pyxdg 0.26 py_0 +pyyaml 5.3.1 py38h7b6447c_1 +pyzmq 19.0.1 py38he6710b0_1 +qdarkstyle 2.8.1 py_0 +qt 5.12.9 hda022c4_4 conda-forge +qtawesome 0.7.2 py_0 +qtconsole 4.7.5 py_0 +qtpy 1.9.0 py_0 +readline 8.1 h46c0cb4_0 conda-forge +regex 2020.6.8 py38h7b6447c_0 +requests 2.24.0 py_0 +ripgrep 11.0.2 he32d670_0 +rope 0.17.0 py_0 +rtree 0.9.4 py38_1 +ruamel_yaml 0.15.87 py38h7b6447c_1 +scikit-image 0.16.2 py38h0573a6f_0 +scikit-learn 0.23.1 py38h423224d_0 +scipy 1.7.1 py38h56a6a73_0 conda-forge +seaborn 0.10.1 py_0 +secretstorage 3.1.2 py38_0 +send2trash 1.5.0 py38_0 +setuptools 49.2.0 py38_0 +shapely 1.7.1 py38hb7fe4a8_5 conda-forge 
+simplegeneric 0.8.1 py38_2 +simplejson 3.17.5 py38h497a2fe_0 conda-forge +singledispatch 3.4.0.3 py38_0 +sip 4.19.13 py38he6710b0_0 +six 1.15.0 py_0 +snappy 1.1.8 he6710b0_0 +snowballstemmer 2.0.0 py_0 +sortedcollections 1.2.1 py_0 +sortedcontainers 2.2.2 py_0 +soupsieve 2.0.1 py_0 +sphinx 3.1.2 py_0 +sphinxcontrib 1.0 py38_1 +sphinxcontrib-applehelp 1.0.2 py_0 +sphinxcontrib-devhelp 1.0.2 py_0 +sphinxcontrib-htmlhelp 1.0.3 py_0 +sphinxcontrib-jsmath 1.0.1 py_0 +sphinxcontrib-qthelp 1.0.3 py_0 +sphinxcontrib-serializinghtml 1.1.4 py_0 +sphinxcontrib-websupport 1.2.3 py_0 +spyder 4.1.4 py38_0 +spyder-kernels 1.9.2 py38_0 +sqlalchemy 1.3.18 py38h7b6447c_0 +sqlite 3.36.0 h9cd32fc_2 conda-forge +statsmodels 0.11.1 py38h7b6447c_0 +sympy 1.6.1 py38_0 +tbb 2020.0 hfd86e86_0 +tblib 1.6.0 py_0 +terminado 0.8.3 py38_0 +testpath 0.4.4 py_0 +threadpoolctl 2.1.0 pyh5ca1d4c_0 +tk 8.6.10 hbc83047_0 +toml 0.10.1 py_0 +toolz 0.10.0 py_0 +tornado 6.0.4 py38h7b6447c_1 +tqdm 4.47.0 py_0 +traitlets 4.3.3 py38_0 conda-forge +typing_extensions 3.7.4.2 py_0 +udunits2 2.2.27.27 hc3e0081_2 conda-forge +ujson 1.35 py38h7b6447c_0 +unicodecsv 0.14.1 py38_0 +unixodbc 2.3.7 h14c3975_0 +urllib3 1.25.9 py_0 +watchdog 0.10.3 py38_0 +wcwidth 0.2.5 py_0 +webencodings 0.5.1 py38_1 +werkzeug 1.0.1 py_0 +wheel 0.34.2 py38_0 +widgetsnbextension 3.5.1 py38_0 +wrapt 1.11.2 py38h7b6447c_0 +wurlitzer 2.0.1 py38_0 +xarray 0.19.0 pyhd8ed1ab_1 conda-forge +xlrd 1.2.0 py_0 +xlsxwriter 1.2.9 py_0 +xlwt 1.3.0 py38_0 +xmltodict 0.12.0 py_0 +xorg-fixesproto 5.0 h14c3975_1002 conda-forge +xorg-inputproto 2.3.2 h14c3975_1002 conda-forge +xorg-kbproto 1.0.7 h14c3975_1002 conda-forge +xorg-libice 1.0.10 h516909a_0 conda-forge +xorg-libsm 1.2.3 hd9c2040_1000 conda-forge +xorg-libx11 1.7.2 h7f98852_0 conda-forge +xorg-libxau 1.0.9 h14c3975_0 conda-forge +xorg-libxext 1.3.4 h7f98852_1 conda-forge +xorg-libxfixes 5.0.3 h7f98852_1004 conda-forge +xorg-libxi 1.7.10 h7f98852_0 conda-forge +xorg-libxrender 0.9.10 h7f98852_1003 conda-forge +xorg-renderproto 0.11.1 h14c3975_1002 conda-forge +xorg-xextproto 7.3.0 h14c3975_1002 conda-forge +xorg-xproto 7.0.31 h14c3975_1007 conda-forge +xz 5.2.5 h7b6447c_0 +yaml 0.2.5 h7b6447c_0 +yapf 0.30.0 py_0 +zarr 2.10.1 pyhd8ed1ab_0 conda-forge +zeromq 4.3.2 he6710b0_2 +zict 2.0.0 py_0 +zipp 3.1.0 py_0 +zlib 1.2.11 h36c2ea0_1013 conda-forge +zope 1.0 py38_1 +zope.event 4.4 py38_0 +zope.interface 4.7.1 py38h7b6447c_0 +zstd 1.5.0 ha95c52a_0 conda-forge +``` + + ## Debugging on VSC-4 Currently (6.2021) there is no development queue on VSC-4 and the support suggested to do the following: