diff --git a/docs/deployment-kubernetes-azure.md b/docs/deployment-kubernetes-azure.md
index 48903673c55bca9cc553cace5bf4c04a57caa7ba..e1039dc1f0a407181af1e77ed1cc0a75edd91e0f 100644
--- a/docs/deployment-kubernetes-azure.md
+++ b/docs/deployment-kubernetes-azure.md
@@ -12,7 +12,7 @@ with Microsoft Azure as infrastructure provider.
 ### Hardware
 
 For this small cloud test deployment, any public cloud provider would suffice; we recommend a 
-small [:simple-microsoftazure: Azure Kubernetes Service](https://azure.microsoft.com/en-us/products/kubernetes-service)
+small [:simple-microsoftazure: Kubernetes Service](https://azure.microsoft.com/en-us/products/kubernetes-service)
 with Kubernetes version *1.24.10* and node size *Standard_B4ms*:
 
 - 4 vCPU cores
@@ -35,15 +35,20 @@ recommend to at least deploy the Metadata Database as high-available, managed da
     Microsoft decided to still maintain MariaDB 10.3
     until [September 2025](https://learn.microsoft.com/en-us/azure/mariadb/concepts-supported-versions).
 
-### Shared Volume
+### Fileshare
 
-For the shared volume PersistentVolumeClaim `dbrepo-shared-volume-claim`, select an appropriate StorageClass that 
-supports `ReadWriteMany` access modes and modify the `premiumStorageClassName` variable accordingly.
+For the shared volume *PersistentVolumeClaim* `dbrepo-shared-volume-claim`, select an appropriate *StorageClass* that 
+supports:
 
-It is sufficient, to select the cost-efficient `azurefile` StorageClass for Azure:
+1. Access mode `ReadWriteMany`
+2. Hardlinks (TUSd creates lockfiles during upload)
 
-```yaml title="values.yaml"
-...
-premiumStorageClassName: azurefile
-...
-```
+You will need to use a *StorageClass* of either `managed-*` or `azureblob-*` (after enabling the 
+proprietary [:simple-microsoftazure: CSI driver for BLOB storage](https://learn.microsoft.com/en-us/azure/aks/azure-blob-csi?tabs=NFS#azure-blob-storage-csi-driver-features)
+in your Kubernetes Cluster).
+
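+A minimal `values.yaml` sketch under these constraints (the class name `azureblob-nfs-premium` is one of the classes
+installed by the BLOB CSI driver; verify the classes actually available in your cluster with `kubectl get storageclass`):
+
+```yaml title="values.yaml"
+...
+premiumStorageClassName: azureblob-nfs-premium
+...
+```
+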
+We recommend creating
+a [Container](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction#containers) for the
+[Upload Service](/infrastructures/dbrepo/latest/system-services-upload/) to deposit files and mounting the BLOB storage
+via the CSI driver into the *Deployment*. This greatly increases the available interfaces (see below) for file uploads
+and provides a highly available filesystem for the many deployments that need to access the files.
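+
+As a sketch, a statically provisioned *PersistentVolume* for such a container could look like the following (the
+resource group, storage account and container names are placeholders; `blob.csi.azure.com` is the driver name from the
+Azure documentation linked above):
+
+```yaml title="pv.yaml"
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: dbrepo-upload-pv
+spec:
+  capacity:
+    storage: 100Gi
+  accessModes:
+    - ReadWriteMany
+  csi:
+    driver: blob.csi.azure.com
+    volumeHandle: dbrepo-upload-pv
+    volumeAttributes:
+      containerName: dbrepo-upload
+      protocol: nfs
+      resourceGroup: my-resource-group
+      storageAccount: mystorageaccount
+```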
diff --git a/docs/deployment-kubernetes-minikube.md b/docs/deployment-kubernetes-minikube.md
index f6dd6dc730ebbca4337ff1175f662ebdcf98a48d..bd64c05af3ee3e50150c11372d1b7c7316013aa9 100644
--- a/docs/deployment-kubernetes-minikube.md
+++ b/docs/deployment-kubernetes-minikube.md
@@ -10,7 +10,7 @@ suitable for test-deployments.
 
 ## Requirements
 
-### Hardware
+### Virtual Machine
 
 For this small, local test deployment any modern hardware would suffice; we recommend a dedicated virtual machine with
 the following settings. Note that most of the vCPU and RAM resources will be needed for starting the infrastructure,
@@ -20,7 +20,7 @@ this is because of Docker. During idle times, the deployment will use significan
 - 16GB RAM memory
 - 100GB SSD storage
 
-### Software
+### Minikube
 
 First, install the minikube virtualization tool that provides a single-node Kubernetes environment, e.g. on a virtual
 machine. We do not regularly check these instructions; they are provided on a best-effort basis. Check 
@@ -41,6 +41,8 @@ minikube kubectl -- get po -A
 minikube addons enable ingress
 ```
 
+### NGINX
+
 Deploy a NGINX reverse proxy on the virtual machine to reach your minikube cluster from the public Internet:
 
 ```nginx title="/etc/nginx/conf.d/dbrepo.conf"
@@ -79,6 +81,36 @@ Replace `CLUSTER_IP` with the result of:
 Replace `DOMAIN_NAME` with the domain name. You will need also a valid TLS certificate with private key for TLS enabled
 in the cluster. In our test deployment we obtained a certificate from Let's Encrypt.
 
+### Fileshare
+
+Since the Upload Service uses a shared filesystem with the [Analyst Service](/infrastructures/dbrepo/latest/system-services-analyse/),
+[Metadata Service](/infrastructures/dbrepo/latest/system-services-metadata/) and
+[Data Database](/infrastructures/dbrepo/latest/system-databases-data/), the dynamic provision of the *PersistentVolume* 
+by the *PersistentVolumeClaim* 
+of [`pvc.yaml`](https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-deployment/-/blob/master/charts/dbrepo-core/templates/upload-service/pvc.yaml)
+needs to happen statically. You can make use of the host's filesystem and mount it in each of those deployments.
+
+For example, mount the *hostPath* directly in
+the [`deployment.yaml`](https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-deployment/-/blob/master/charts/dbrepo-core/templates/analyse-service/deployment.yaml).
+
+```yaml title="deployment.yaml"
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: analyse-service
+  ...
+spec:
+  template:
+    spec:
+      containers:
+        - name: analyse-service
+          volumeMounts:
+            - name: shared
+              mountPath: /mnt/shared
+      volumes:
+        - name: shared
+          hostPath:
+            path: /path/of/host
+      ...
+```
+
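+Alternatively, the shared volume can be provisioned statically with a *hostPath*-backed *PersistentVolume* that the
+*PersistentVolumeClaim* then binds to (the storage class name and capacity below are illustrative; align them with the
+values in `pvc.yaml`):
+
+```yaml title="pv.yaml"
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: dbrepo-shared-volume
+spec:
+  capacity:
+    storage: 10Gi
+  accessModes:
+    - ReadWriteMany
+  storageClassName: manual
+  hostPath:
+    path: /path/of/host
+```
+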
 ## Deployment
 
 To install the DBRepo Helm Chart, download and edit 
diff --git a/docs/system-services-upload.md b/docs/system-services-upload.md
index e1e370ce650eab5ff321a6960ffd1469839bdd29..dbae245540663c7dfa095dca0c83c4f80dee4f73 100644
--- a/docs/system-services-upload.md
+++ b/docs/system-services-upload.md
@@ -9,6 +9,8 @@ author: Martin Weise
 !!! debug "Debug Information"
 
     * Ports: 1080/tcp
+    * TUSd: `http://:1080/api/upload/files`
+    * Prometheus: `http://:1080/metrics`
 
 ## Overview
 
 Upload files using one of the official TUSd clients:
 * [Java](https://github.com/tus/tus-java-client)
 * [Python](https://github.com/tus/tus-py-client)
 
+The [TUS](https://tus.io/) protocol allows for flexible file uploads that, when interrupted, can be resumed at a later
+point. It builds on plain HTTP: a new upload is created with a `POST` request, the file content is transferred in one
+or more `PATCH` requests, and a `HEAD` request retrieves the current upload offset when resuming an interrupted upload.
+
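+For example, resuming an interrupted upload could look like the following exchange (the file identifier, offsets and
+lengths are illustrative; the endpoint is the TUSd route listed above):
+
+```
+HEAD /api/upload/files/24e533e0 HTTP/1.1
+Tus-Resumable: 1.0.0
+
+HTTP/1.1 200 OK
+Upload-Offset: 1048576
+Upload-Length: 8388608
+
+PATCH /api/upload/files/24e533e0 HTTP/1.1
+Tus-Resumable: 1.0.0
+Upload-Offset: 1048576
+Content-Type: application/offset+octet-stream
+
+[remaining bytes]
+
+HTTP/1.1 204 No Content
+Upload-Offset: 8388608
+```
+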
+For more information, see the [official Docker image](https://hub.docker.com/r/tusproject/tusd).
+
 ## Limitations
 
-(none)
+* No support for authentication
 
 ## Security
 
-(none)
+1. Since authentication is not supported, use IP-based ingress rules to limit access to the upload endpoint.
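+
+With the ingress-nginx controller used in the deployment guides, such a rule can be expressed with the
+`nginx.ingress.kubernetes.io/whitelist-source-range` annotation (the host, CIDR range and service name below are
+placeholders):
+
+```yaml title="ingress.yaml"
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: upload-service
+  annotations:
+    nginx.ingress.kubernetes.io/whitelist-source-range: "192.0.2.0/24"
+spec:
+  rules:
+    - host: example.com
+      http:
+        paths:
+          - path: /api/upload
+            pathType: Prefix
+            backend:
+              service:
+                name: upload-service
+                port:
+                  number: 1080
+```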