Commit 3cd958e3 authored by Michael Blaschek

server update, userservices

parent 1bfb04ac
Documentation/screen-cheatsheet.png (133 KiB)

@@ -45,3 +45,10 @@ noremap! <C-?> <C-h>
```
The source of this error relates to `stty` and possibly VNC, see [this question on Stack Overflow](https://stackoverflow.com/questions/9701366/vim-backspace-leaves). Alternatively, the following might help:
```bash
# in .bashrc
# fix for vim backspace
stty erase '^?'
```
@@ -7,35 +7,58 @@
[How to SSH / VNC / VPN](SSH-VPN-VNC/README.md)
## System Information
| Name | Value |
| --- | --- |
| Product | PowerEdge R940 |
| Processor | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz |
| Cores | 4 CPU, 20 physical cores per CPU, total 160 logical CPU units |
| CPU time | 700 kh |
| Memory | 754 GB Total |
| Memory/Core | 9.4 GB |
```
----------------------------------------------
131.130.157.11 _ . , . .
* / \_ * / \_ _ * * /\'_
/ \ / \, (( . _/ /
. /\/\ /\/ :' __ \_ ` _^/ ^/
/ \/ \ _/ \-'\ * /.' ^_ \
/\ .- `. \/ \ /==~=-=~=-=-;. _/ \ -
/ `-.__ ^ / .-'.--\ =-=~_=-=~=^/ _ `--./
/SRVX1 `. / / `.~-^=-=~=^=.-' '
----------------------------------------------
```
## Services
SRVX1 is the central access point to IMG services: go to [srvx1.img.univie.ac.at](https://srvx1.img.univie.ac.at).
Currently running:
- TeachingHub
- ResearchHub
- Webdata
## Jupyterhub
<img src="https://jupyter.org/assets/hublogo.svg" width="300px">
SRVX1 serves a teaching [jupyterhub](https://jupyterhub.readthedocs.io/en/stable/) with a [jupyterlab](https://jupyterlab.readthedocs.io/en/stable/). It allows easy access for students and teachers.
Go to: [srvx1.img.univie.ac.at](https://srvx1.img.univie.ac.at)
Go to: [srvx1.img.univie.ac.at/hub](https://srvx1.img.univie.ac.at/hub)
Signup is only granted by teachers and requires an srvx1 user account. A new password is needed, and a TOTP (time-based one-time password) will be created.
Download/use one of the following authenticator apps (one is required):
- [2FAS (Mobile, recommended)](https://2fas.com/)
- [KeepassX (Desktop)](https://www.keepassx.org/)
- [Authy (Mobile, Desktop)](https://authy.com/download/)
- [FreeOTP (Mobile)](https://freeotp.github.io/)
- [Google Auth (Mobile)](https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2)
After registering, the teacher/admin has to grant you access before you can log in.
## Software
The typical installation of an Intel server includes the Intel compiler suite (`intel-parallel-studio`, `intel-oneapi`) and the open source GNU compilers. Based on these two compilers (`intel`, `gnu`), there are usually two versions of each scientific software package.
Major Libraries:
- OpenMPI (3.1.6, 4.0.5)
- HDF5
@@ -50,15 +73,77 @@ These software libraries are usually handled by environment modules.
![](https://upload.wikimedia.org/wikipedia/en/thumb/0/0a/Environment_Modules_logo.svg/320px-Environment_Modules_logo.svg.png)
## Currently installed modules
Please note that new versions might already be installed.
```bash
$ module av
-------------------------------- /home/opt/spack/share/spack/modules/linux-centos6-sandybridge --------------------------------
anaconda3/2020.07-gcc-5.3.0 gcc/5.3.0-gcc-5.3.0 netcdf-c/4.7.4-gcc-5.3.0 zlib/1.2.11-gcc-5.3.0
cdo/1.9.9-gcc-5.3.0 git/2.29.0-gcc-5.3.0 netcdf-fortran/4.5.3-gcc-5.3.0
eccodes/2.18.0-gcc-5.3.0 hdf5/1.10.7-gcc-5.3.0 openmpi/3.1.6-gcc-5.3.0
enstools/2020.11.dev-gcc-5.3.0 miniconda3/4.8.2-gcc-5.3.0 proj/7.1.0-gcc-5.3.0
--------------- /home/swd/spack/share/spack/modules/linux-rhel8-skylake_avx512 ----------------
anaconda2/2019.10-gcc-8.4.1 netcdf-fortran/4.5.3-gcc-8.4.1
anaconda3/2020.07-gcc-8.4.1 netlib-lapack/3.9.1-gcc-8.4.1
anaconda3/2020.11-gcc-8.4.1 netlib-lapack/3.9.1-intel-20.0.4
anaconda3/2021.05-gcc-8.4.1 netlib-lapack/3.9.1-oneapi-2021.2.0
autoconf/2.69-oneapi-2021.2.0 netlib-scalapack/2.1.0-gcc-8.4.1
autoconf/2.71-oneapi-2021.2.0 netlib-scalapack/2.1.0-gcc-8.4.1-MPI3.1.6
eccodes/2.19.1-gcc-8.4.1 openblas/0.3.17-gcc-8.4.1
eccodes/2.19.1-intel-20.0.4 openmpi/3.1.6-gcc-8.4.1
eccodes/2.21.0-gcc-8.4.1 openmpi/3.1.6-intel-20.0.4
eccodes/2.21.0-intel-20.0.4 openmpi/4.0.5-gcc-8.4.1
geos/3.9.1-gcc-8.4.1 openmpi/4.0.5-intel-20.0.4
hdf5/1.10.7-gcc-8.4.1-MPI3.1.6 perl/5.32.0-intel-20.0.4
hdf5/1.10.7-intel-20.0.4-MPI3.1.6 proj/8.1.0-gcc-8.4.1
hdf5/1.12.0-gcc-8.4.1 python/3.8.9-gcc-8.4.1
hdf5/1.12.0-intel-20.0.4
hdf5/1.12.0-intel-20.0.4-MPI3.1.6
hdf5/1.12.0-oneapi-2021.2.0
intel-oneapi-compilers/2021.2.0-oneapi-2021.2.0
intel-oneapi-dal/2021.2.0-oneapi-2021.2.0
intel-oneapi-mkl/2021.2.0-oneapi-2021.2.0
intel-oneapi-mpi/2021.2.0-oneapi-2021.2.0
intel-parallel-studio/composer.2020.4-intel-20.0.4
libemos/4.5.9-gcc-8.4.1
libemos/4.5.9-intel-20.0.4
matlab/R2020b-gcc-8.4.1
miniconda2/4.7.12.1-gcc-8.4.1
miniconda3/4.10.3-gcc-8.4.1
ncl/6.5.0-gcc-8.4.1-MPI3.1.6
ncl/6.6.2-gcc-8.4.1-MPI3.1.6
nco/4.9.3-gcc-8.4.1
nco/4.9.3-intel-20.0.4
ncview/2.1.8-gcc-8.4.1
netcdf-c/4.6.3-gcc-8.4.1-MPI3.1.6
netcdf-c/4.6.3-intel-20.0.4-MPI3.1.6
netcdf-c/4.7.4-gcc-8.4.1
netcdf-c/4.7.4-intel-20.0.4
netcdf-fortran/4.5.2-gcc-8.4.1-MPI3.1.6
netcdf-fortran/4.5.2-intel-20.0.4-MPI3.1.6
```
For information on how to use environment modules, see [Using Environment Modules](Misc/Environment-Modules.md).
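As a minimal usage sketch (the module name is one of those listed above; see the linked guide for details):
```bash
# show available modules
module avail
# load a specific netCDF-Fortran build and verify
module load netcdf-fortran/4.5.3-gcc-8.4.1
module list
# unload everything again
module purge
```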
## Container Hub
Currently it is possible to run [singularity](https://singularity.hpcng.org/) containers on all our servers. This is quite similar to Docker, but much more secure for multi-user servers. Almost every Docker container can be converted into a Singularity container, and some of the build recipes use Docker.
There are a number of prepared containers, and more can be added. If you need one, or have an existing container that could be useful for others, please share it.
```yaml
containers:
- root: /home/swd/containers
- available:
- RTTOV:
- RTTOV: 12.3
- compiler: gcc:7.3.0 (anaconda)
- path: /home/swd/containers/rttov-jupyter/jup3rttov.sif
- os: centos:6.10
- python: 3.7.4
- singularity: 3.5.2
- packages:
- anaconda3
- jupyter jupyterlab numpy matplotlib pandas xarray bottleneck dask numba scipy netcdf4 cartopy h5netcdf nc-time-axis cfgrib eccodes nodejs
- apps:
- atlas
- lab
- notebook
- rtcoef
- rthelp
- rttest
```
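A quick, hedged sketch of how such a container could be used (the path is the one listed above; the `--app` names correspond to the listed apps):
```bash
# start the JupyterLab app from the RTTOV container
singularity run --app lab /home/swd/containers/rttov-jupyter/jup3rttov.sif
# or open an interactive shell inside the container
singularity shell /home/swd/containers/rttov-jupyter/jup3rttov.sif
```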
# S R V X 2
[[_TOC_]]
## Getting Started
[How to SSH / VNC / VPN](SSH-VPN-VNC/README.md)
## System Information
| Name | Value |
| --- | --- |
| Product | PowerEdge R940 |
| Processor | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz |
| Cores | 4 CPU, 20 physical cores per CPU, total 160 logical CPU units |
| CPU time | 700 kh |
| Memory | 376 GB Total |
| Memory/Core | 9.4 GB |
## Software
The typical installation of an Intel server includes the Intel compiler suite (`intel-parallel-studio`) and the open source GNU compilers. Based on these two compilers (`intel`, `gnu`), there are usually two versions of each scientific software package.
Major Libraries:
- OpenMPI (3.1.6, 4.0.5)
- HDF5
- NetCDF (C, Fortran)
- ECCODES from [ECMWF](https://confluence.ecmwf.int/display/ECC)
- Math libraries e.g. intel-mkl, lapack, scalapack
- Interpreters: Python, Julia
- Tools: cdo, ncl, nco, ncview
These software libraries are usually handled by environment modules.
![](https://upload.wikimedia.org/wikipedia/en/thumb/0/0a/Environment_Modules_logo.svg/320px-Environment_Modules_logo.svg.png)
## Currently installed modules
```bash
$ module av
------------------------------ /home/spack-root/share/spack/modules/linux-centos8-skylake_avx512 ------------------------------
anaconda3/2020.07-gcc-8.3.1 intel-parallel-studio/composer.2020.2-intel-20.0.2 netcdf-fortran/4.5.3-gcc-8.3.1
eccodes/2.18.0-intel-20.0.2 libemos/4.5.9-gcc-8.3.1 netcdf-fortran/4.5.3-intel-20.0.2
eccodes/2.19.1-gcc-8.3.1 miniconda3/4.8.2-gcc-8.3.1 netlib-lapack/3.8.0-gcc-8.3.1
eccodes/2.19.1-intel-20.0.2 miniconda3/4.9.2-gcc-8.3.1 netlib-scalapack/2.1.0-gcc-8.3.1
eccodes/2.21.0-intel-20.0.2 ncl/6.6.2-gcc-8.3.1 openblas/0.3.12-gcc-8.3.1
hdf5/1.10.7-gcc-8.3.1 netcdf-c/4.7.4-gcc-8.3.1 openmpi/3.1.6-gcc-8.3.1
hdf5/1.10.7-intel-20.0.2 netcdf-c/4.7.4-intel-20.0.2 openmpi/3.1.6-intel-20.0.2
```
For information on how to use environment modules, see [Using Environment Modules](Misc/Environment-Modules.md).
@@ -17,11 +17,112 @@
| CPU time | 245 kh |
| Memory | 504 GB Total |
| Memory/Core | 18 GB |
| Network | 10 Gbit/s |
```
----------------------------------------------
_
(` ).
( ).
) _( SRVX8 '`.
.=(`( . ) .--
(( (..__.:'-' .+( )
`. `( ) ) ( . )
) ` __.:' ) ( ( ))
) ) ( ) --' `- __.'
.-' (_.' .')
(_ )
131.130.157.8
--..,___.--,--'`,---..-.--+--.,,-,,.-..-._.-.-
----------------------------------------------
```
## Software
Software is installed in numerous places. This is a legacy system with no central software management.
A reinstall is planned for summer 2021.
The typical installation of an Intel server includes the Intel compiler suite (`intel-parallel-studio`, `intel-oneapi`) and the open source GNU compilers. Based on these two compilers (`intel`, `gnu`), there are usually two versions of each scientific software package.
Major Libraries:
- OpenMPI (3.1.6, 4.0.5)
- HDF5
- NetCDF (C, Fortran)
- ECCODES from [ECMWF](https://confluence.ecmwf.int/display/ECC)
- Math libraries e.g. intel-mkl, lapack, scalapack
- Interpreters: Python, Julia
- Tools: cdo, ncl, nco, ncview
These software libraries are usually handled by environment modules.
![](https://upload.wikimedia.org/wikipedia/en/thumb/0/0a/Environment_Modules_logo.svg/320px-Environment_Modules_logo.svg.png)
## Currently installed modules
For information on how to use environment modules, see [Using Environment Modules](Misc/Environment-Modules.md).
```bash
$ module av
------------------- /home/swd/spack/share/spack/modules/linux-rhel7-haswell -------------------
anaconda2/2019.10-gcc-8.4.0
anaconda3/2021.05-gcc-8.4.0
eccodes/2.21.0-gcc-8.4.0
gcc/8.4.0-gcc-4.8.5
git/1.8.3.1-gcc-8.4.0
git/2.31.1-gcc-8.4.0
hdf5/1.10.7-gcc-8.4.0
hdf5/1.12.0-gcc-8.4.0
intel-oneapi-compilers/2021.3.0-oneapi-2021.3.0
intel-oneapi-mkl/2021.3.0-oneapi-2021.3.0
intel-oneapi-mpi/2021.3.0-oneapi-2021.3.0
miniconda2/4.7.12.1-gcc-8.4.0
miniconda3/4.10.3-gcc-8.4.0
ncl/6.5.0-gcc-8.4.0
ncl/6.6.2-gcc-8.4.0
nco/4.9.3-gcc-8.4.0
ncview/2.1.8-gcc-8.4.0
netcdf-c/4.6.3-gcc-8.4.0
netcdf-c/4.7.4-gcc-8.4.0
netcdf-fortran/4.5.2-gcc-8.4.0
netcdf-fortran/4.5.3-gcc-8.4.0
netlib-lapack/3.9.1-gcc-8.4.0
netlib-scalapack/2.1.0-gcc-8.4.0
openblas/0.3.17-gcc-8.4.0
openmpi/3.1.6-gcc-8.4.0
openmpi/4.0.5-gcc-8.4.0
openmpi/4.1.1-gcc-8.4.0
proj/8.1.0-gcc-8.4.0
python/3.8.9-gcc-4.8.5
-------------------------------------- /home/swd/modules --------------------------------------
micromamba/latest
```
## Virtual Machine Hub
Currently the system acts as a virtual machine host.
Active:
- VERA
## Container Hub
Currently it is possible to run [singularity](https://singularity.hpcng.org/) containers on all our servers. This is quite similar to Docker, but much more secure for multi-user servers. Almost every Docker container can be converted into a Singularity container, and some of the build recipes use Docker.
There are a number of prepared containers, and more can be added. If you need one, or have an existing container that could be useful for others, please share it.
```yaml
containers:
- root: /home/swd/containers
- available:
- RTTOV:
- RTTOV: 12.3
- compiler: gcc:7.3.0 (anaconda)
- path: /home/swd/containers/rttov-jupyter/jup3rttov.sif
- os: centos:6.10
- python: 3.7.4
- singularity: 3.5.2
- packages:
- anaconda3
- jupyter jupyterlab numpy matplotlib pandas xarray bottleneck dask numba scipy netcdf4 cartopy h5netcdf nc-time-axis cfgrib eccodes nodejs
- apps:
- atlas
- lab
- notebook
- rtcoef
- rthelp
- rttest
```
@@ -26,10 +26,20 @@ This starts a new session
$ screen -S longjob
```
You can detach from this session with `CTRL + A D` and reconnect again with `screen -r`.
Multiple sessions can be created and their output saved (`-L` option).
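A minimal sketch (the session name is just an example):
```bash
# start a named session and log its output to screenlog.0
screen -S longjob -L
```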
![](../Documentation/screen-cheatsheet.png)
## Tmux
[Tmux](https://wiki.ubuntuusers.de/tmux/) is a terminal multiplexer that allows you to open multiple consoles and to detach the session. It is more complex and powerful than screen.
```bash
$ tmux
```
This launches a new virtual terminal; with `CTRL + B D` it can be detached and with `tmux a` it can be reattached.
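Analogous to screen, tmux sessions can be named and reattached by name (the session name is just an example):
```bash
# start a named tmux session
tmux new -s longjob
# list sessions and reattach later
tmux ls
tmux attach -t longjob
```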
![](https://linuxacademy.com/site-content/uploads/2016/08/tmux.png)
## Questions and Answers
- [Q: How to use ssh-key authentication?](Questions.md#q-how-to-use-ssh-key-authentication)
- [Q: How to use an ssh-agent?](Questions.md#q-how-to-use-an-ssh-agent)
@@ -37,3 +47,9 @@ Multiple Sessions can be created and the output saved (`-L` Option).
- [Q: How to connect to Jet, SRVX8, SRVX2?](Questions.md#q-how-to-connect-to-jet-srvx8-srvx2)
- [Q: How to mount a remote file system on Linux (MAC)?](Questions.md#q-how-to-mount-a-remote-file-system-on-Linux-mac)
## Tools
Please find some useful tools for connecting to IMGW servers and University of Vienna VPN.
- BASH script using SSH to connect via a gateway, [SSH](SSH.md#connect-script) [connect2jet](connect2jet)
- BASH script for the f5fpc VPN client, [VPN](VPN.md#connect-script) [connect2vpn](connect2vpn)
- Change VNC resolution, [VNC](VNC.md#xrandr) [add_xrandr_resolution](add_xrandr_resolution.sh)
- Mount Server directories via sshfs, [SSHFS](SSH.md#sshfs)
@@ -20,21 +20,21 @@ Host *
Host srvx1
HostName srvx1.img.univie.ac.at
User [USERNAME]
Host srvx2
HostName srvx2.img.univie.ac.at
User [USERNAME]
Host srvx8
HostName srvx8.img.univie.ac.at
User [USERNAME]
Host jet
HostName jet01.img.univie.ac.at
User [USERNAME]
Host srvx2jet
HostName jet01.img.univie.ac.at
User [USERNAME]
ProxyJump srvx1.img.univie.ac.at
Host login
HostName login.univie.ac.at
User [U:Account USERNAME]
```
and replacing `[USERNAME]` and `[U:Account USERNAME]` with your usernames. Using such a file allows you to connect with `ssh srvx1`, using the correct server address and the specified username. Copy this file to `login.univie.ac.at` as well, and you can use commands like `ssh -t login ssh jet` to connect directly to `jet` via the `login` gateway.
If you want to use SSH keys, you can also specify a different key per server in `.ssh/config` with `IdentityFile ~/.ssh/id_rsa_for_server`, for example:
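A sketch of such a host entry (the key file name is just a placeholder):
```
Host srvx1
    HostName srvx1.img.univie.ac.at
    User [USERNAME]
    IdentityFile ~/.ssh/id_rsa_for_server
```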
@@ -99,3 +99,15 @@ Option 2: [Bitvise SSH Client](https://www.bitvise.com/ssh-client-download) and
* Set "Destination Host" to `jet01.img.univie.ac.at`
* Set "Destination Port" to `5900+[DISPLAY]`
* Now start VncViewer and connect to `127.0.0.1:5900+[DISPLAY]`
## SSHFS
It is possible to mount your home directory on your personal computer via `sshfs`, or of course to use a dedicated remote file browser such as FileZilla or Cyberduck.
On Linux you need to install `fuse2` and `sshfs`; the package names might vary between distributions, but both are available in the default repositories.
```bash
# connect to srvx1 using your home directory and a srvx1 directory on your local computer
# mountserver [host] [remotedir] [localdir]
mkdir -p $HOME/srvx1
mountserver [USER]@srvx1.img.univie.ac.at /users/staff/[USER] $HOME/srvx1
```
Note that the directories might vary depending on your membership (staff, external, students).
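If you prefer calling `sshfs` directly instead of the helper above, a rough equivalent (same placeholders; assuming `sshfs` is installed) would be:
```bash
# mount the remote home directory locally
mkdir -p $HOME/srvx1
sshfs [USER]@srvx1.img.univie.ac.at:/users/staff/[USER] $HOME/srvx1
# unmount again when done
fusermount -u $HOME/srvx1
```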
@@ -2,13 +2,48 @@
**Be aware! Everyone with the VNC password will get access to your account**
It is recommended not to use VNC; use **jupyterhub**, **screen**, or **tmux** instead. However, for GUI applications there is no other way.
VNC (Virtual Network Computing) allows you to view a graphical user interface (GUI) from a remote server in a viewer application. This can be used to launch GUI programs on the servers.
Xvnc is the Unix VNC server. Applications can display themselves on Xvnc as if it were a normal display, but they will appear on any connected VNC viewers rather than on a physical screen. The VNC protocol uses the TCP/IP ports 5900+N, where N is the display number.
### Setup
Currently VNC is installed on:
- SRVX8, mainly Staff
- JET01, mainly Researchers
## Userservices
It is highly recommended to use the `userservices` scripts, available on all IMGW servers, to configure VNC.
```bash
$ userservices vnc -h
################################################################################
User Services - VNC Server Setup/Launcher/Stopper
vnc -h -s -x -d -l
Options:
-h Help
-c Check for vnc server(s) running
-s Stop vnc server(s)
-x Write xstartup in /home/spack/.vnc
-d Prepare vncserver Service
-p [] Port: 1 - 99
-l Launch vnc server/service
-w [] Desktop Session: icewm, xfce
################################################################################
Author: MB
Date: 25.01.2021
Path: /home/swd/userservices/userservices.d
################################################################################
Installed Desktops: icewm-session
################################################################################
```
Running the script without any options will run all necessary steps. In case of an error, try removing your `.vnc` directory, as older configurations might be in the way. There are at least two desktop options: icewm and xfce; you can select one directly with the `-w [DESKTOP]` option.
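For example, based on the help text above (a sketch, not a definitive invocation):
```bash
# run the setup and pick the xfce desktop session
userservices vnc -w xfce
```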
## Setup - Manual
Please consider using the `userservices vnc` script to do this setup.
First of all, check whether a VNC server is already running. Depending on the result you have two options:
1. Use an existing one (note the port/display number).
2. Stop all running servers and start a new VNC server.
@@ -28,7 +63,7 @@ vncserver -kill :[DISPLAY]
vncserver
```
### Jet Cluster
On Jet the user services are available to you:
```bash
# Help information on VNC userservice
@@ -70,11 +105,12 @@ vncconfig -iconic &
xterm -geometry -sb -sl 500 -fn 9x15bold -title "$VNCDESKTOP Desktop" &
icewm &
```
Some information on what could be put into `.Xresources` is given [here](https://wiki.archlinux.org/title/x_resources). It might be possible to replace `icewm` here with `startxfce4` to select the XFCE desktop environment.
### VNC as a Service
This is only here for reference; on SRVX2 and Jet use `userservices vnc`.
Setup, replace `[DISPLAY]` with an appropriate number, e.g. `3`:
```bash
mkdir -p ~/.config/systemd/user
cp /usr/lib/systemd/user/vncserver@.service ~/.config/systemd/user/
@@ -115,14 +151,24 @@ systemctl --user status vncserver.slice
...
```
## Change the resolution of your VNC Session
`xrandr` gives you a list of available resolutions that can be used.
```bash
# Change VNC display resolution [width x height]
$ userservices vnc-geometry 1920x1080
```
Change the resolution to e.g. 1920x1080 (HD):
```bash
xrandr -s 1920x1080 -d $DISPLAY
```
To add resolutions matching your display's native resolution, have a look at [add_xrandr_resolution.sh](add_xrandr_resolution.sh):
```bash
# running the script and adding a resolution you require, in pixel
$ add_xrandr_resolution [width] [height]
```
Note: `$DISPLAY` is an environment variable that is usually set to your VNC display number, e.g. `:3`.
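A minimal sketch (display number `:3` is just an example):
```bash
# point DISPLAY at your VNC display before calling xrandr
export DISPLAY=:3
xrandr -s 1920x1080 -d $DISPLAY
```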
@@ -19,4 +19,17 @@ On Windows and Mac you get a nice gui that requires you to fill in the VPN server
```
f5fpc -s -t vpn.univie.ac.at -u [user]
```
The status can be checked with `f5fpc --info`.
## Connect script
One can use the commands above or the [connect2vpn](connect2vpn) script to connect to the University VPN service. Especially on Linux, the interface is much more primitive than on Mac or Windows.
```bash
$ connect2vpn [u:account username]
[VPN] Using [u:account username] as username
[VPN] BIG-IP Edge Command Line Client version 7213.2021.0526.1
[VPN] Full (1) or split (None) tunnel? (1/None):
```
Continue and wait until you get a response that it's connected.
The status stays visible.
[Desktop Entry]
Exec=gnome-terminal --class univpn --name univpn -t UNIVPN -x bash -c "connect2vpn"
Name=vpn.univie
Icon=../Documentation/logo_uniwien.png
Type=Application