We have the privilege to be part of the VSC and have private nodes at VSC-4 (since 2020) and VSC-3 (since 2014).

[[__TOC__]]

To get access, write to MB and request an account on VSC-4. Please send the following details:
- Name
- Email
- Mobile number (required)
- Purpose / Group
We can then give you access to our private nodes.
Access is primarily via SSH:
```
ssh user@vsc4.vsc.ac.at
ssh user@vsc3.vsc.ac.at
```
Please follow the connection instructions on the [wiki](https://wiki.vsc.ac.at), which are similar to all other servers (e.g. [SRVX1](SRVX1.md)).
The VSC is only reachable from within UNINET; from outside, connect via VPN first.
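To save typing the full host name each time, you can add an entry to `~/.ssh/config`; a minimal sketch, assuming your VSC username is `user`:

```
Host vsc4
    HostName vsc4.vsc.ac.at
    User user
```

After that, `ssh vsc4` is enough.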
We have private nodes at our disposal, and in order for you to use them you need to specify the correct account in the jobs you submit to the queueing system (SLURM). The correct information will be given to you in the registration email. This is very similar to Jet.
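For illustration, this is the relevant line in a job script; `pXXXXX` is a placeholder for the account name from the registration email:

```
#SBATCH --account=pXXXXX   # account name from the registration email
```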
## Node Information VSC-4
```
CPU model: Intel(R) Xeon(R) Platinum 8174 CPU @ 3.10GHz
2 CPU, 24 physical cores per CPU, total 96 logical CPU units
378 GB Memory
```
## Node Information VSC-3
```
CPU model: Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
2 CPU, 8 physical cores per CPU, total 32 logical CPU units
126 GB Memory
```
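The same numbers can also be queried from SLURM on the login nodes; a sketch using standard `sinfo` format options (partition, CPUs and memory per node):

```
sinfo -o "%P %c %m"
```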
### Run time limits
On VSC-3 the private queue has a maximum run time of 10 days. The normal queues allow 3 days, and the devel queue only 10 minutes (for testing).
```
sacctmgr show qos format=name%20s,priority,grpnodes,maxwall,description%40s
mem_0096 1000 3-00:00:00 vsc-4 regular nodes with 96 gb of memory
mem_0384 1000 3-00:00:00 vsc-4 regular nodes with 384 gb of memo+
mem_0768 1000 3-00:00:00 vsc-4 regular nodes with 768 gb of memo+
```
SLURM allows setting a run time limit below the default QOS run time limit. After the specified time has elapsed, the job is killed:
```
#SBATCH --time=<time>
```
Acceptable time formats include `minutes`, `minutes:seconds`, `hours:minutes:seconds`, `days-hours`, `days-hours:minutes` and `days-hours:minutes:seconds`.
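For example, a limit of one and a half days can be written in any of these equivalent forms:

```
#SBATCH --time=2160          # minutes
#SBATCH --time=36:00:00      # hours:minutes:seconds
#SBATCH --time=1-12:00:00    # days-hours:minutes:seconds
```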
## Example Job
:construction:
- [VSC Wiki Slurm](https://wiki.vsc.ac.at/doku.php?id=doku:slurm)
- [VSC Wiki private Queue](https://wiki.vsc.ac.at/doku.php?id=doku:vsc3_queue)
## Software
The VSC uses the same software system as Jet and has environment modules available to the user. Running `module av` will show you all available modules (many more than on Jet).
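A typical module workflow, sketched with a hypothetical module name (replace `intel` with the compiler/MPI module you actually need):

```
module av            # list all available modules
module load intel    # load a module (hypothetical name)
module list          # show currently loaded modules
```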
### Example Job on VSC-3
We have 16 physical cores per node. In order to run on our private nodes, set:
- `--partition=mem_xxxx` (as given in the registration email)
- `--qos=xxxxxx` (see below)
- `--account=xxxxxx` (see below)
The core hours will be charged to the specified account. If not specified, the default account will be used.
On VSC-3 you can look up the account and QOS names with:
```
sacctmgr show user `id -u` withassoc format=user,defaultaccount,account,qos%40s,defaultqos%20s
```
A job script then sets these options, for example:
```
#SBATCH -J <job name>
#SBATCH -N <number of nodes>
#SBATCH --mail-type=BEGIN                # first have to state the type of event to occur
#SBATCH --mail-user=<email@address.at>   # and then your email address
#SBATCH --partition=mem_xxxx
#SBATCH --qos=p70653_0128
#SBATCH --account=p70653
#SBATCH --time=<time>
# when srun is used, you need to set (different from Jet):
srun -l -N2 -n32 a.out
# or
mpirun -np 32 a.out
```
* **-J** job name
* **-N** number of nodes requested (16 cores per node available)
* **-n, --ntasks=<number>** specifies the number of tasks to run
* **--ntasks-per-node** number of processes run in parallel on a single node
* **--ntasks-per-core** number of tasks a single core should work on
* **srun** is an alternative command to **mpirun**. It provides direct access to SLURM-inherent variables and settings.
* **-l** adds task-specific labels to the beginning of all output lines.
* **--mail-type** sends an email at specific events. The SLURM documentation lists the following valid mail-type values: *"BEGIN, END, FAIL, REQUEUE, ALL (equivalent to BEGIN, END, FAIL and REQUEUE), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent of time limit), TIME_LIMIT_80 (reached 80 percent of time limit), and TIME_LIMIT_50 (reached 50 percent of time limit). Multiple type values may be specified in a comma separated list."* [cited from the SLURM documentation](http://slurm.schedmd.com)
* **--mail-user** sends an email to this address
Submit and manage the job with:
```
[username@l31 ~]$ sbatch check.slrm   # to submit the job
[username@l31 ~]$ squeue -u `whoami`  # to check the status of own jobs
[username@l31 ~]$ scancel JOBID       # for premature removal, where JOBID
                                      # is obtained from the previous command
```
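Once a job has finished, it no longer shows up in `squeue`; `sacct` (standard SLURM accounting) can still report on it, for example:

```
sacct -j JOBID --format=JobID,JobName,Elapsed,State
```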
### Example Job on VSC-4
We have 48 physical cores per node. In order to run on our private nodes, set:
- `--partition=mem_xxxx` (as given in the registration email)
- `--qos=xxxxxx` (see below)
- `--account=xxxxxx` (see below)
The core hours will be charged to the specified account. If not specified, the default account will be used.
On VSC-4 you can look up the account and QOS names with:
```
sacctmgr show user `id -u` withassoc format=user,defaultaccount,account,qos%40s,defaultqos%20s
```
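A complete VSC-4 job script might then look like this sketch; the account, QOS and partition values are placeholders and must be replaced with those from the registration email:

```
#!/bin/bash
#SBATCH -J myjob
#SBATCH -N 1
#SBATCH --ntasks-per-node=48
#SBATCH --partition=mem_0384       # placeholder, see registration email
#SBATCH --qos=pXXXXX_0384          # placeholder, see registration email
#SBATCH --account=pXXXXX           # placeholder, see registration email
#SBATCH --time=01:00:00

srun -l ./a.out
```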