Verified Commit bc835f2f authored by Martin Weise

Updated to make sidecar working

parent 835cecdf
Showing with 628 additions and 843 deletions
@@ -4,60 +4,30 @@ author: Martin Weise
## TL;DR
-To install DBRepo in your existing cluster, download the sample [`values.yaml`](https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-deployment/-/raw/master/charts/dbrepo-core/values.yaml?inline=false)
-for your deployment and update the variables, especially `hostname`. The chart depends on
-installed [Keycloak Operator](https://www.keycloak.org/operator/installation) that can be installed following the
-official guide.
+To install DBRepo in your existing cluster, download the sample [`values.yaml`](https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-deployment/-/raw/dev/charts/dbrepo-core/values.yaml?inline=false)
+for your deployment and update the variables, especially `hostname`.
```shell
helm upgrade --install dbrepo \
  -n dbrepo \
  "oci://dbrepo.azurecr.io/helm/dbrepo-core" \
  --values ./values.yaml \
-  --version "0.1.3" \
+  --version "0.1.4-RC2" \
  --create-namespace \
  --cleanup-on-fail
```
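If you prefer to start from the chart's bundled defaults instead of the linked sample, the default values can also be dumped straight from the OCI registry; a minimal sketch, assuming Helm ≥ 3.8 with OCI registry support:

```shell
# write the chart's default values to values.yaml as a starting point
helm show values "oci://dbrepo.azurecr.io/helm/dbrepo-core" \
  --version "0.1.4-RC2" > values.yaml
```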
## Dependencies
-The helm chart depends on four components:
-1. [Ingress NGINX Controller](https://kubernetes.github.io/ingress-nginx/) for basic ingress.
-2. [Cert-Manager Controller](https://cert-manager.io/) for TLS certificate management with Let's Encrypt.
-3. [MariaDB Operator](https://github.com/mariadb-operator/mariadb-operator/) for creation of databases.
-4. [Keycloak Operator](https://www.keycloak.org/operator/installation) for creation of the authentication service.
+Our chart depends on seven other charts which will be automatically resolved when installing our `dbrepo-core` chart:
+* Keycloak (Bitnami, v17.3.3) for [Authentication Service](../system-services-authentication)
+* MariaDB Galera (Bitnami, v10.1.3) for [Data Database](../system-databases-data) & [Metadata Database](../system-databases-metadata)
+* MinIO (Bitnami, v12.9.4) for [Storage Service](../system-services-storage)
+* OpenSearch (OpenSearch Project, v2.16.0) for [Search Database](../system-databases-search)
+* OpenSearch Dashboards (OpenSearch Project, v2.14.0) for [Search Dashboard](../system-other-search-dashboard)
+* PostgreSQL HA (Bitnami, v12.1.7) for [Auth Database](../system-databases-auth)
+* RabbitMQ (Bitnami, v12.5.1) for [Broker Service](../system-services-broker)
-## Configuration before the installation
-Define an admin user that the services can use to communicate with
-the [Authentication Service](../system-services-authentication). You will need to manually create this user after
-the installation.
-## Configuration after the installation
-After installing, get the initial administrator password created by the [Keycloak operator](https://www.keycloak.org/operator/basic-deployment):
-```shell
-kubectl -n dbrepo \
-  get \
-  secret \
-  auth-service-initial-admin \
-  -o jsonpath='{.data.password}' | base64 --decode
-```
-On success, the output should look like this: `1f5581a01d8e8f47f2dae08cc88f56fd`, which is the initial password for the
-user `admin`. This password should be considered *temporary* and changed immediately! Log into
-the [authentication service](../system-services-authentication) as `admin` and:
-1. Create a new user in the `master` realm.
-2. Create credentials (non-temporary) for this user in the `master` realm.
-3. Assign this user the role `admin`.
-4. Delete the user `admin`.
-Then import the DBRepo realm by clicking the dropdown "master" > Create Realm and import
-the [`dbrepo-realm.json`](https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/dev/dbrepo-authentication-service/dbrepo-realm.json)
-by uploading the file *or* copying its contents, then click "Create".
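For reference, the four manual steps listed above can also be scripted with Keycloak's admin CLI; a rough sketch, assuming `kcadm.sh` is available inside the auth-service pod (the binary path differs between Keycloak images), that the server listens on port 8080, and that `NEW_PASSWORD` is chosen by you:

```shell
# authenticate against the master realm with the initial admin credentials
kcadm.sh config credentials --server http://localhost:8080 \
  --realm master --user admin --password "1f5581a01d8e8f47f2dae08cc88f56fd"
# create a replacement administrator with non-temporary credentials
kcadm.sh create users -r master -s username=dbrepo-admin -s enabled=true
kcadm.sh set-password -r master --username dbrepo-admin --new-password "NEW_PASSWORD"
kcadm.sh add-roles -r master --uusername dbrepo-admin --rolename admin
# finally, look up the id of the default admin user and delete it
kcadm.sh get users -r master -q username=admin
kcadm.sh delete users/<admin-user-id> -r master
```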
### Backup
...
---
author: Martin Weise
---
# Special Instructions for Azure Cloud
You can use our pre-built Helm chart for deploying DBRepo in your Kubernetes Cluster
with Microsoft Azure as infrastructure provider.
## Requirements
### Hardware
For this small cloud test deployment any public cloud provider would suffice; we recommend a
small [Kubernetes Service](https://azure.microsoft.com/en-us/products/kubernetes-service)
with Kubernetes version *1.24.10* and nodes of size *Standard_B4ms* providing:
- 4 vCPU cores
- 16GB RAM memory
- 200GB SSD storage
This is roughly met by selecting the *Standard_B4ms* flavor and three worker nodes.
## Deployment
### Databases
Since Azure offers a managed [Azure Database for MariaDB](https://azure.microsoft.com/en-us/products/mariadb), we
recommend deploying at least the Metadata Database as a highly available, managed database.
!!! warning "End of Life software"

    Unfortunately, Azure does not (yet) support managed MariaDB 10.5; the latest version supported by Azure is 10.3,
    which is End of Life (EOL) from [May 2023 onwards](https://mariadb.com/kb/en/changes-improvements-in-mariadb-10-3/).
    Microsoft decided to still maintain MariaDB 10.3
    until [September 2025](https://learn.microsoft.com/en-us/azure/mariadb/concepts-supported-versions).
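A managed instance can be created with the Azure CLI; a sketch with placeholder resource group, server name and credentials (`GP_Gen5_2` is just one possible SKU):

```shell
# create a managed Azure Database for MariaDB server (10.3 is the latest version offered)
az mariadb server create \
  --resource-group dbrepo-rg \
  --name dbrepo-metadata-db \
  --location westeurope \
  --admin-user dbrepo \
  --admin-password "<strong-password>" \
  --sku-name GP_Gen5_2 \
  --version 10.3
```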
### Fileshare
For the shared volume *PersistentVolumeClaim* `dbrepo-shared-volume-claim`, select an appropriate *StorageClass* that
supports:
1. Access mode `ReadWriteMany`
2. Hardlinks (TUSd creates lockfiles during upload)
You will need to use a *StorageClass* of either `managed-*` or `azureblob-*` (after enabling the
proprietary [CSI driver for BLOB storage](https://learn.microsoft.com/en-us/azure/aks/azure-blob-csi?tabs=NFS#azure-blob-storage-csi-driver-features)
in your Kubernetes Cluster).
We recommend creating
a [Container](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction#containers) for the
[Upload Service](../system-services-upload) to deposit files and mounting the BLOB storage
via the CSI driver into the *Deployment*. This greatly increases the available interfaces (see below) for file uploads and
provides a highly available filesystem for the many deployments that need to use the files.
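A matching *PersistentVolumeClaim* for the shared volume could then look roughly like this; the storage class name assumes the BLOB CSI driver's built-in `azureblob-nfs-premium` class, and size and namespace are placeholders:

```shell
# claim a ReadWriteMany volume backed by Azure BLOB storage (NFS protocol supports hardlinks)
cat <<EOF | kubectl -n dbrepo apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dbrepo-shared-volume-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azureblob-nfs-premium
  resources:
    requests:
      storage: 200Gi
EOF
```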
---
author: Martin Weise
---
# Special Instructions for Minikube
You can use our Helm chart for deploying DBRepo in your Kubernetes Cluster
using [minikube](https://minikube.sigs.k8s.io/docs/start/) as infrastructure provider, which deploys a single-node Kubernetes cluster on your machine
and is suitable for test deployments.
## Requirements
### Virtual Machine
For this small, local test deployment any modern hardware would suffice; we recommend a dedicated virtual machine with
the following settings. Note that most of the vCPU and RAM resources are needed for starting the infrastructure (this
is because of Docker); during idle times, the deployment uses significantly fewer resources.
- 4 vCPU cores
- 16GB RAM memory
- 200GB SSD storage
### Minikube
First, install the minikube virtualization tool that provides a single-node Kubernetes environment, e.g. on a virtual
machine. We do not regularly check these instructions; they are provided on a best-effort basis. Check
the [official documentation](https://minikube.sigs.k8s.io/docs/start/) for up-to-date information.
For Debian:
```shell
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
```
Start the cluster and enable basic plugins:
```shell
minikube start --driver='docker'
minikube kubectl -- get po -A
minikube addons enable ingress
```
### NGINX
Deploy an NGINX reverse proxy on the virtual machine to reach your minikube cluster from the public Internet:
```nginx title="/etc/nginx/conf.d/dbrepo.conf"
resolver 127.0.0.11 valid=30s;
server {
  listen 80;
  server_name _;
  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://CLUSTER_IP;
  }
}
server {
  listen 443 ssl;
  server_name DOMAIN_NAME;
  ssl_certificate /etc/nginx/certificate.crt;
  ssl_certificate_key /etc/nginx/certificate.key;
  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass https://CLUSTER_IP;
  }
}
```
Replace `CLUSTER_IP` with the output of `minikube ip`, e.g. `192.168.49.2`.
Replace `DOMAIN_NAME` with your domain name. You will also need a valid TLS certificate with private key, since TLS is enabled
in the cluster. In our test deployment we obtained a certificate from Let's Encrypt.
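One way to obtain such a certificate is certbot in standalone mode, run on the virtual machine while port 80 is free; the domain and target paths below are examples matching the NGINX configuration above:

```shell
sudo apt install -y certbot
sudo certbot certonly --standalone -d subdomain.example.com
# certbot places the files under /etc/letsencrypt/live/<domain>/
sudo cp /etc/letsencrypt/live/subdomain.example.com/fullchain.pem /etc/nginx/certificate.crt
sudo cp /etc/letsencrypt/live/subdomain.example.com/privkey.pem /etc/nginx/certificate.key
```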
### Fileshare
Since the Upload Service uses a shared filesystem with the [Analyse Service](../system-services-analyse),
[Metadata Service](../system-services-metadata) and
[Data Database](../system-databases-data), the *PersistentVolume* behind the *PersistentVolumeClaim*
of [`pvc.yaml`](https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-deployment/-/blob/master/charts/dbrepo-core/templates/upload-service/pvc.yaml)
needs to be provisioned statically instead of dynamically. You can make use of the host's filesystem and mount it in each of those deployments.
For example, mount the *hostPath* directly in
the [`deployment.yaml`](https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-deployment/-/blob/master/charts/dbrepo-core/templates/analyse-service/deployment.yaml).
```yaml title="deployment.yaml"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analyse-service
  ...
spec:
  template:
    spec:
      containers:
        - name: analyse-service
          volumeMounts:
            - name: shared
              mountPath: /mnt/shared
          ...
      volumes:
        - name: shared
          hostPath:
            path: /path/of/host
```
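Alternatively, a statically provisioned *PersistentVolume* matching the claim could be sketched like this; the name, capacity and path are assumptions, and the claim in `pvc.yaml` must reference the same (here empty) storage class:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dbrepo-shared-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  hostPath:
    path: /path/of/host
EOF
```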
## Deployment
To install the DBRepo Helm Chart, download and edit
the [`values.yaml`](https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-deployment/-/raw/master/charts/dbrepo-minikube/values.yaml?inline=false)
file. At minimum you need to change the values for:
* `hostname`, set to your domain, e.g. `subdomain.example.com`
* `authAdminApiUrl`, set similarly but with `https` and the path to the Keycloak API, e.g. `https://subdomain.example.com/api/auth`
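For example, assuming both are top-level keys in the downloaded `values.yaml`, they can be set with [`yq`](https://github.com/mikefarah/yq) (v4 syntax):

```shell
# set the two minimum values in values.yaml in place
yq -i '.hostname = "subdomain.example.com"' values.yaml
yq -i '.authAdminApiUrl = "https://subdomain.example.com/api/auth"' values.yaml
```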
It is advised to also change the usernames and passwords for all credentials. Next, install the chart using your edited
`values.yaml` file:
!!! info "Documentation of values.yaml"

    We documented all values of the `values.yaml` file [here](http://127.0.0.1:8000/deployment-helm/#chart-values), with
    default values and a description for each value.
```shell
helm upgrade --install dbrepo \
-n dbrepo \
"oci://dbrepo.azurecr.io/helm/dbrepo-core" \
--values ./values.yaml \
--version "0.1.3" \
--create-namespace \
--cleanup-on-fail
```
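After the release is installed, a quick way to check that the release deployed and that all pods come up (pod names depend on the chart's defaults):

```shell
# show the release status and watch the pods start
helm -n dbrepo status dbrepo
kubectl -n dbrepo get pods --watch
```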
(draw.io source of the architecture diagrams, 7 pages including "docker-compose" and "data-db". The updated "data-db" page shows the `data-db` database and the Data DB Sidecar on a shared filesystem `/tmp`, with a `jdbc` connection to the database, an `http` connection to the sidecar, and an `S3` connection from the sidecar to the Storage Service (MinIO).)
...
.docs/images/exchange-binding.png (4.04 KiB)
.docs/images/queue-quorum.png (15.3 KiB)
@@ -21,7 +21,7 @@ It holds exchanges and topics responsible for holding AMQP messages for later co
use [RabbitMQ](https://www.rabbitmq.com/) in the implementation. By default, the endpoint listens to the insecure port `5672` for incoming
AMQP tuples and insecure port `15672` for the management UI.
-The default configuration creates a user with administrative privileges:
+The default configuration creates a user with administrative privileges on the default virtual host `dbrepo`:
* Username: `fda`
* Password: `fda`
@@ -35,6 +35,22 @@ The Broker Service allows two ways of authentication:
For detailed examples of how to authenticate with the Broker Service see
the [usage](/usage-broker) page.
+The architecture of the Broker Service is very simple. There is only one durable topic exchange `dbrepo` and one quorum
+queue `dbrepo`, connected with a binding of `dbrepo.#` which routes all tuples with routing key prefix `dbrepo.` (mind
+the dot!) to this queue.
+<figure markdown>
+![Data ingest](images/queue-quorum.png)
+<figcaption>Replicated quorum queue dbrepo in a cluster with three nodes</figcaption>
+</figure>
+The consumer takes care of writing the tuples to the correct table in the [Data Service](../system-services-data).
+<figure markdown>
+![Data ingest](images/exchange-binding.png)
+<figcaption>Architecture of the Broker Service</figcaption>
+</figure>
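To illustrate the binding: any message published to the `dbrepo` exchange with a routing key starting with `dbrepo.` ends up in the `dbrepo` quorum queue. A sketch using `rabbitmqadmin` with the default credentials above; the hostname and routing key are examples only:

```shell
# publish one AMQP tuple to the dbrepo exchange on the dbrepo virtual host;
# the dbrepo.# binding routes it into the dbrepo quorum queue
rabbitmqadmin --host broker-service --username fda --password fda --vhost dbrepo \
  publish exchange=dbrepo routing_key=dbrepo.database_abcd.table_efgh \
  payload='{"column_a": 1}'
```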
## Limitations
* No support for MQTT in the [Metadata Service](../system-services-metadata)
...
@@ -349,9 +349,9 @@ scan-data-db:
    - master
  allow_failure: true
  script:
-    - trivy image --insecure --exit-code 0 --format template --template "@.trivy/gitlab.tpl" -o ./.trivy/trivy-data-db-report.json docker.io/bitnami/mariadb:10.5
-    - trivy image --insecure --exit-code 0 docker.io/bitnami/mariadb:10.5
-    - trivy image --insecure --exit-code 1 --severity CRITICAL docker.io/bitnami/mariadb:10.5
+    - trivy image --insecure --exit-code 0 --format template --template "@.trivy/gitlab.tpl" -o ./.trivy/trivy-data-db-report.json docker.io/bitnami/mariadb:11.1.3
+    - trivy image --insecure --exit-code 0 docker.io/bitnami/mariadb:11.1.3
+    - trivy image --insecure --exit-code 1 --severity CRITICAL docker.io/bitnami/mariadb:11.1.3
  cache:
    paths:
      - .trivycache/
...
@@ -57,7 +57,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Changed
-- Metadata database to use a system-versioned MariaDB 10.5 database
+- Metadata database to use a system-versioned MariaDB 11.1.3 database
- Query Store to use trigger for query result count and query hash as well as result hash calculation
- Query service to allow XML/CSV export for PIDs
- Query service to support subsets of views
...
@@ -211,10 +211,10 @@ scan-search-dashboard:
  trivy image --insecure --exit-code 1 --severity CRITICAL "opensearchproject/opensearch-dashboards:2.10.0"
scan-data-db:
-  docker pull "bitnami/mariadb:10.5"
-  trivy image --insecure --exit-code 0 --format template --template "@.trivy/gitlab.tpl" -o ./.trivy/trivy-data-db-report.json "bitnami/mariadb:10.5"
-  trivy image --insecure --exit-code 0 "bitnami/mariadb:10.5"
-  trivy image --insecure --exit-code 1 --severity CRITICAL "bitnami/mariadb:10.5"
+  docker pull "bitnami/mariadb:11.1.3"
+  trivy image --insecure --exit-code 0 --format template --template "@.trivy/gitlab.tpl" -o ./.trivy/trivy-data-db-report.json "bitnami/mariadb:11.1.3"
+  trivy image --insecure --exit-code 0 "bitnami/mariadb:11.1.3"
+  trivy image --insecure --exit-code 1 --severity CRITICAL "bitnami/mariadb:11.1.3"
scan-ui:
  trivy image --insecure --exit-code 0 --format template --template "@.trivy/gitlab.tpl" -o ./.trivy/trivy-ui-report.json dbrepo-ui:latest
...
@@ -2,7 +2,7 @@ FROM python:3.10-alpine
RUN apk add bash curl jq
-WORKDIR /app
+ENV PYTHONFAULTHANDLER=1
COPY Pipfile Pipfile.lock ./
@@ -10,18 +10,18 @@ RUN pip install pipenv && \
    pipenv install gunicorn && \
    pipenv install --system --deploy
-USER 1000
+USER 1001
+WORKDIR /app
-COPY ./clients ./clients
-COPY ./ds-yml ./ds-yml
-COPY ./app.py ./app.py
+COPY --chown=1001 ./clients ./clients
+COPY --chown=1001 ./ds-yml ./ds-yml
+COPY --chown=1001 ./app.py ./app.py
ENV S3_STORAGE_ENDPOINT="http://storage-service:9000"
ENV S3_ACCESS_KEY_ID="minioadmin"
ENV S3_SECRET_ACCESS_KEY="minioadmin"
+RUN ls -la ./clients
EXPOSE 3305
-ENTRYPOINT [ "gunicorn", "-w", "4", "-b", ":3305", "app:app" ]
+ENTRYPOINT [ "gunicorn", "--log-level", "DEBUG", "--workers", "4", "--bind", ":3305", "app:app" ]
@@ -55,6 +55,7 @@ class MinioClient:
    def file_exists(self, bucket, filename):
        try:
            self.client.head_object(Bucket=bucket, Key=filename)
+            logging.debug(f"file with name {filename} exists in bucket with name {bucket}")
        except ClientError as e:
            if e.response["Error"]["Code"] == "404":
                logging.error("Failed to find key %s in bucket %s", filename, bucket)
@@ -66,6 +67,7 @@ class MinioClient:
    def bucket_exists_or_exit(self, bucket):
        try:
            self.client.head_bucket(Bucket=bucket)
+            logging.debug(f"bucket {bucket} exists.")
        except ClientError as e:
            if e.response["Error"]["Code"] == "404":
                logging.error("Failed to find bucket %s", bucket)
...
@@ -534,7 +534,7 @@ VALUES ('MIT', 'https://opensource.org/licenses/MIT'),
       ('CC-BY-4.0', 'https://creativecommons.org/licenses/by/4.0/legalcode');
INSERT INTO `mdb_images` (name, version, default_port, dialect, driver_class, jdbc_method)
-VALUES ('mariadb', '10.5', 3306, 'org.hibernate.dialect.MariaDBDialect', 'org.mariadb.jdbc.Driver', 'mariadb');
+VALUES ('mariadb', '11.1.3', 3306, 'org.hibernate.dialect.MariaDBDialect', 'org.mariadb.jdbc.Driver', 'mariadb');
INSERT INTO `mdb_images_date` (iid, database_format, unix_format, example, has_time)
VALUES (1, '%Y-%c-%d %H:%i:%S.%f', 'yyyy-MM-dd HH:mm:ss.SSSSSS', '2022-01-30 13:44:25.499', true),
...
@@ -2,6 +2,6 @@ BEGIN;
INSERT INTO `mdb_containers` (name, internal_name, image_id, host, port, sidecar_host, sidecar_port,
                              privileged_username, privileged_password)
-VALUES ('MariaDB 10.5', 'mariadb_10_5', 1, 'data-db', 3306, 'data-db-sidecar', 3305, 'root', 'dbrepo');
+VALUES ('MariaDB 11.1.3', 'mariadb_11_1_3', 1, 'data-db', 3306, 'data-db-sidecar', 3305, 'root', 'dbrepo');
COMMIT;
@@ -157,7 +157,7 @@ public interface QueryMapper {
    }
    default PreparedStatement pathToRawInsertQuery(Connection connection, Table table, ImportDto data) throws QueryMalformedException {
-        final StringBuilder statement = new StringBuilder("LOAD DATA LOCAL INFILE '/tmp/")
+        final StringBuilder statement = new StringBuilder("LOAD DATA INFILE '/tmp/")
                .append(data.getLocation())
                .append("' INTO TABLE `")
                .append(table.getDatabase().getInternalName())
...
import axios from 'axios'
-import config from '../dbrepo.config.json'
-const protocol = config.api.useSsl ? 'https' : 'http'
-const baseUrl = `${protocol}://${config.api.endpoint}:${config.api.port}`
+const baseUrl = `${location.protocol}//${location.host}`
+console.debug('base url', baseUrl)
const instance = axios.create({
  timeout: 10000,
...
import Vue from 'vue'
-import config from '../dbrepo.config'
const tus = require('tus-js-client')
class UploadService {
  upload (file) {
    return new Promise((resolve, reject) => {
-      const protocol = config.api.useSsl ? 'https' : 'http'
-      const baseUrl = `${protocol}://${config.api.endpoint}:${config.api.port}`
+      const endpoint = `${location.protocol}//${location.host}/api/upload/files`
+      console.debug('upload endpoint', endpoint)
      if (!tus.isSupported) {
        console.error('Your browser does not support uploads!')
        Vue.$toast.error('Your browser does not support uploads!')
        return
      }
      const upload = new tus.Upload(file, {
-        endpoint: `${baseUrl}/api/upload/files`,
+        endpoint,
        retryDelays: [0, 3000, 5000, 10000, 20000],
        metadata: {
          filename: file.name,
...
@@ -11,13 +11,10 @@
    "path": "/favicon.ico"
  },
  "api": {
-    "endpoint": "localhost",
-    "port": 80,
    "useSsl": false
  },
  "broker": {
    "connection": {
-      "host": "localhost",
      "ports": [
        5672
      ],
@@ -25,6 +22,9 @@
    }
  },
  "storage": {
+    "endpoint": "storage-service",
+    "port": 9000,
+    "useSsl": false,
    "accessKey": {
      "id": "minioadmin",
      "secret": "minioadmin"
...
@@ -4,9 +4,8 @@ import config from './dbrepo.config.json'
const proxy = {}
-const api = 'http://localhost'
if (process.env.NODE_ENV === 'development') {
+  const api = 'http://localhost'
  proxy['/api'] = api
  proxy['/pid'] = {
    target: api + '/api',
...
@@ -272,14 +272,6 @@ export default {
    token () {
      return this.$store.state.token
    },
-    config () {
-      if (this.token === null) {
-        return {}
-      }
-      return {
-        headers: { Authorization: `Bearer ${this.token}` }
-      }
-    },
    user () {
      return this.$store.state.user
    },
...