diff --git a/.docs/deployment-docker-compose.md b/.docs/deployment-docker-compose.md
index 761feccfc17587db9d26517a6cf7a689a90a2b47..9769675597ace5f624999f87ad4a8b24634801b0 100644
--- a/.docs/deployment-docker-compose.md
+++ b/.docs/deployment-docker-compose.md
@@ -20,8 +20,8 @@ For this small, local, test deployment any modern hardware would suffice, we rec
 the following settings.
 
 - min. 8 vCPU cores
-- min. 16GB RAM memory
-- min. 200GB SSD storage
+- min. 8GB of free RAM
+- min. 200GB free SSD storage
 - min. 100Mbit/s connection
 
 *Optional*: public IP-address if you want to secure the deployment with a (free) TLS-certificate from Let's Encrypt.
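+
+You can quickly check these values on a Linux machine with standard tools (`nproc`, `free`, `df`); other operating
+systems need different commands:
+
+```bash
+nproc       # number of available vCPU cores
+free -g     # total and free RAM in GiB
+df -h .     # free disk space on the current filesystem
+```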
@@ -134,6 +134,22 @@ In case the deployment is unsuccessful, we have explanations on their origin and
                 `sudo netstat -tulpn` (sudo is necessary to show the process id) and then stop the service or
                 application gracefully via `kill -15 PID` or force a stop via `kill -9 PID` (not recommended).
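+
+                For example, the two steps can be sketched as follows (port `80` and PID `12345` are placeholders for
+                your actual values):
+
+                ```bash
+                sudo netstat -tulpn | grep ':80 '   # identify the PID listening on the blocked port
+                kill -15 12345                      # ask that process to terminate gracefully
+                ```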
 
+**IllegalArgumentException: values less than -1 bytes are not supported**
+
+:   *Origin*:   Your deployment machine (e.g. laptop, virtual machine) does not appear to have enough RAM assigned.
+:   *Solution*: Assign more RAM to the deployment machine (e.g. increase the memory allocated to the virtual machine).
+
+## Next Steps
+
+You should now be able to view the front end at [http://localhost](http://localhost).
+
+Please be warned that the default configuration is not intended for public deployments. It is only intended to provide
+a running system within minutes so you can play around with the system and explore its features. It is strongly advised
+to change the default `.env` environment variables.
+
+Next, create a [user account](../usage-overview/#create-user-account) and 
+then [create a database](../usage-overview/#create-database) to [import a dataset](../usage-overview/#import-dataset).
+
 ## Security
 
 !!! warning "Known security issues with the default configuration"
diff --git a/.docs/images/screenshots/import-database-dump-step-1.png b/.docs/images/screenshots/import-database-dump-step-1.png
new file mode 100644
index 0000000000000000000000000000000000000000..6078687b34d755d2770d0a64243d5aeb903a10b0
Binary files /dev/null and b/.docs/images/screenshots/import-database-dump-step-1.png differ
diff --git a/.docs/images/screenshots/import-database-dump-step-2.png b/.docs/images/screenshots/import-database-dump-step-2.png
new file mode 100644
index 0000000000000000000000000000000000000000..b827109ad6368aa8e258a9a573b35c79410a62c0
Binary files /dev/null and b/.docs/images/screenshots/import-database-dump-step-2.png differ
diff --git a/.docs/images/screenshots/import-database-dump-step-3.png b/.docs/images/screenshots/import-database-dump-step-3.png
new file mode 100644
index 0000000000000000000000000000000000000000..a51f10fa5986ec1e1a188c9ee34426018367e11d
Binary files /dev/null and b/.docs/images/screenshots/import-database-dump-step-3.png differ
diff --git a/.docs/usage-overview.md b/.docs/usage-overview.md
index af1ce9cbf8ea18645c2795b3d390e49ab41d6208..075de96375e6a1a65e70659bd4d5773f53ae1e31 100644
--- a/.docs/usage-overview.md
+++ b/.docs/usage-overview.md
@@ -11,7 +11,7 @@ We give usage examples of the most important use-cases we identified.
 | [Create User Account](#create-user-account)               | :white_check_mark: | :white_check_mark: |        :x:         | :white_check_mark: | 
 | [Create Database](#create-database)                       | :white_check_mark: | :white_check_mark: |        :x:         | :white_check_mark: | 
 | [Import Dataset](#import-dataset)                         | :white_check_mark: | :white_check_mark: |        :x:         | :white_check_mark: | 
-| [Import Database Dump](#import-database-dump)             |        :x:         |        :x:         | :white_check_mark: |        :x:         | 
+| [Import Database Dump](#import-database-dump)             | :white_check_mark: | :white_check_mark: |        :x:         |        :x:         | 
 | [Import Live Data](#import-live-data)                     |        :x:         | :white_check_mark: | :white_check_mark: | :white_check_mark: | 
 | [Export Subset](#export-subset)                           | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | 
 | [Assign Database PID](#assign-database-pid)               | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | 
@@ -541,19 +541,58 @@ access to. This is the default for self-created databases like above in [Create
 
 A user wants to import a database dump in `.sql` (or in `.sql.gz`) format into DBRepo.
 
-=== "JDBC API"
+=== "UI"
+
+    First, create a new database as described in the [Create Database](#create-database) use-case above. Then, import
+    the database dump `dump.sql` via the [MySQL Workbench](https://www.mysql.com/products/workbench/) client, which is
+    semi-compatible with MariaDB databases, i.e. core features work while some status/performance features do not.
+
+    Set up a new connection in the MySQL Workbench (cf. Figure 14) by clicking the small
+    ":material-plus-circle-outline:" button :material-numeric-1-circle-outline: to open the dialog. In the opened dialog
+    fill out the connection parameters (for local deployments the hostname is `127.0.0.1` and the port is `3307` for
+    the [Data Database](../system-databases-data/)) :material-numeric-2-circle-outline:.
+
+    The default credentials are username `root` and password `dbrepo`; type the password in
+    :material-numeric-3-circle-outline: and click the "OK" button. Then finish the setup of the new connection by
+    clicking the "OK" button :material-numeric-4-circle-outline:.
+
+    <figure markdown>
+    ![Setup New Connection in MySQL Workbench](images/screenshots/import-database-dump-step-1.png)
+    <figcaption>Figure 14: Setup New Connection in MySQL Workbench.</figcaption>
+    </figure>
+
+    Now you should be able to see some statistics for the Data Database (cf. Figure 15), especially that it is running
+    :material-numeric-1-circle-outline:, along with basic connection and version information :material-numeric-2-circle-outline:.
+
+    <figure markdown>
+    ![Server status of the Data Database in MySQL Workbench](images/screenshots/import-database-dump-step-2.png)
+    <figcaption>Figure 15: Server status of the Data Database in MySQL Workbench.</figcaption>
+    </figure>
+
+    Then proceed to import the database dump `dump.sql` by clicking "Data Import/Restore"
+    :material-numeric-1-circle-outline: and select "Import from Self-Contained File" :material-numeric-2-circle-outline:
+    in the Import Options. Then select the `dump.sql` file in the file path selection
+    :material-numeric-3-circle-outline:. Last, select the database you want to import this `dump.sql` into
+    :material-numeric-4-circle-outline: (you can also create a new database for the import by clicking "New...").
+    The import starts after clicking "Start Import" :material-numeric-5-circle-outline:.
+
+    <figure markdown>
+    ![Data Import/Restore in MySQL Workbench](images/screenshots/import-database-dump-step-3.png)
+    <figcaption>Figure 16: Data Import/Restore in MySQL Workbench.</figcaption>
+    </figure>
+
+=== "Terminal"
 
     First, create a new database as described in the [Create Database](#create-database) use-case above. Then, import
     the database dump `dump.sql` via the `mariadb` client.
 
     ```bash
-    mariadb -u USERNAME -pYOURPASSWORD db_name < dump.sql
+    mariadb -h127.0.0.1 -P3307 -uUSERNAME -pYOURPASSWORD db_name < dump.sql
     ```
 
     Alternatively, if your database dump is compressed, import the `dump.sql.gz` by piping it through `gunzip`.
 
     ```bash
-    gunzip < dump.sql.gz | mysql -u root -pYOURPASSWORD db_name
+    gunzip < dump.sql.gz | mariadb -h127.0.0.1 -P3307 -uUSERNAME -pYOURPASSWORD db_name
     ```
 
     The [Metadata Service](../system-services-metadata) periodically (by default configuration every 60 seconds) checks
diff --git a/install.sh b/install.sh
index da07dd9482681299a47356e55bbd1097ea338729..8c436d2e3c5c2d9f24e903d08bb1e336a49dd877 100644
--- a/install.sh
+++ b/install.sh
@@ -1,29 +1,59 @@
 #!/bin/bash
+MIN_CPU=8
+MIN_RAM=8
+SKIP_CHECKS=${SKIP_CHECKS:-0}
 
-# dependency
-docker info > /dev/null
-if [ $? -ne 0 ]; then
-  echo "Docker is not installed (or accessible in bash) on your system:"
-  echo ""
-  echo "  - install docker from https://docs.docker.com/desktop/install/linux-install/"
-  echo "  - make sure the docker executable is in \$PATH"
-  exit 2
+# checks
+if [[ $SKIP_CHECKS -eq 0 ]]; then
+  echo "[✨] Startup check ..."
+  docker info > /dev/null
+  if [[ $? -ne 0 ]]; then
+    echo "Docker is not installed (or accessible in bash) on your system:"
+    echo ""
+    echo "  - install docker from https://docs.docker.com/desktop/install/linux-install/"
+    echo "  - make sure the docker executable is in \$PATH"
+    exit 2
+  else
+    echo "$(docker --version) OK"
+  fi
+  CPU=$(grep -c ^processor /proc/cpuinfo)
+  if [[ $CPU -lt $MIN_CPU ]]; then
+    echo "You do not have enough vCPU resources available:"
+    echo ""
+    echo "  - we found ${CPU} vCPU cores instead of the necessary ${MIN_CPU}"
+    echo "  - if you believe this is a mistake, skip startup checks with the SKIP_CHECKS=1 flag"
+    exit 3
+  else
+    echo "vCPU ${CPU} OK"
+  fi
+  RAM=$(free -g -t | awk 'NR==2 {print $4}')
+  if [[ $RAM -lt $MIN_RAM ]]; then
+    echo "You do not have enough free RAM resources:"
+    echo ""
+    echo "  - we found ${RAM}GB free RAM instead of the necessary ${MIN_RAM}GB"
+    echo "  - if you believe this is a mistake, skip startup checks with the SKIP_CHECKS=1 flag"
+    exit 4
+  else
+    echo "RAM ${RAM}GB OK"
+  fi
+else
+  echo "[✨] Skipping checks ..."
 fi
 
 # environment
 echo "[🚀] Gathering environment ..."
 mkdir -p ./dist
-curl -sSL -o ./docker-compose.yml https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/docker-compose.prod.yml
-curl -sSL -o ./dist/setup-schema_local.sql https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/dbrepo-metadata-db/setup-schema_local.sql
-curl -sSL -o ./dist/rabbitmq.conf https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/dbrepo-broker-service/rabbitmq.conf
-curl -sSL -o ./dist/enabled_plugins https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/dbrepo-broker-service/enabled_plugins
-curl -sSL -o ./dist/cert.pem https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/dbrepo-broker-service/cert.pem
-curl -sSL -o ./dist/pubkey.pem https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/dbrepo-broker-service/pubkey.pem
-curl -sSL -o ./dist/definitions.json https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/dbrepo-broker-service/definitions.json
-curl -sSL -o ./dist/dbrepo.conf https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/dbrepo-gateway-service/dbrepo.conf
-curl -sSL -o ./dist/opensearch_dashboards.yml https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/dbrepo-search-db/opensearch_dashboards.yml
-curl -sSL -o ./dist/dbrepo.config.json https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/dbrepo-ui/dbrepo.config.json
-curl -sSL -o ./dist/s3_config.json https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/dbrepo-storage-service/s3_config.json
+curl -sSL -o ./docker-compose.yml https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/docker-compose.prod.yml
+curl -sSL -o ./dist/setup-schema_local.sql https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/dbrepo-metadata-db/setup-schema_local.sql
+curl -sSL -o ./dist/rabbitmq.conf https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/dbrepo-broker-service/rabbitmq.conf
+curl -sSL -o ./dist/enabled_plugins https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/dbrepo-broker-service/enabled_plugins
+curl -sSL -o ./dist/cert.pem https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/dbrepo-broker-service/cert.pem
+curl -sSL -o ./dist/pubkey.pem https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/dbrepo-broker-service/pubkey.pem
+curl -sSL -o ./dist/definitions.json https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/dbrepo-broker-service/definitions.json
+curl -sSL -o ./dist/dbrepo.conf https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/dbrepo-gateway-service/dbrepo.conf
+curl -sSL -o ./dist/opensearch_dashboards.yml https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/dbrepo-search-db/opensearch_dashboards.yml
+curl -sSL -o ./dist/dbrepo.config.json https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/dbrepo-ui/dbrepo.config.json
+curl -sSL -o ./dist/s3_config.json https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/dbrepo-storage-service/s3_config.json
 
 echo "[📦] Pulling images ..."
 docker compose pull
@@ -45,4 +75,5 @@ if [ $? -eq 0 ]; then
   echo ""
   echo "  docker compose logs -f"
   echo ""
+  echo "Read about next steps online: https://www.ifs.tuwien.ac.at/infrastructures/dbrepo/latest/deployment-docker-compose/#next-steps"
 fi