Commit 46aad66c authored by Martin Weise

Merge branch 'dev'

parents e3576675 92d3fe6e
Showing changes with 276 additions and 43 deletions
# Application Developer Guide

## Dependencies

Local development depends on the following packages for Debian 12:

```shell
apt install -y maven openjdk-17-jdk make
```

Required tools with their own installation guides:

* [Docker Engine](https://docs.docker.com/engine/install/) 24+
* [Minikube](https://minikube.sigs.k8s.io/docs/start/) 1.32.0
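Before building, a quick sanity check along these lines can confirm the toolchain is complete (a sketch only; the tool list mirrors the dependencies above):

```shell
# Report which of the required build tools are missing from the PATH
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  echo "missing:${missing:- none}"
}
check_tools mvn java docker make
```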
## Getting Started
...@@ -25,11 +26,13 @@ mvn -f ./dbrepo-metadata-service/pom.xml clean install -DskipTests
We practice test-driven development and require contributors to test their code with at least 90% code coverage.
```shell
make test
```

The Java-based services have coverage reports generated by `JaCoCo` in the `report/site/` subdirectory; the Python-based services have coverage reports generated by `coverage` in the `.coverage` SQLite3 database and the `coverage.txt` log file, respectively.
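JaCoCo report generation is typically wired into a Maven build via the `jacoco-maven-plugin`; a minimal, illustrative sketch (the services' actual `pom.xml` files are authoritative, and the version shown is an assumption):

```xml
<!-- Illustrative only: bind the JaCoCo agent and report goals to the test phase -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <id>report</id>
      <phase>test</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```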
### Swagger Docs
...@@ -41,20 +44,20 @@ bash .swagger/swagger-generate.sh
### Branching Strategy
<figure markdown>
![Branching strategy from the master-dev-feature branches and release branches](images/branching-strategy.png)
<figcaption>Figure 1: Branching strategy of the source code development.</figcaption>
</figure>
### CI/CD

We get compute resources in-kind from [dataLAB](https://www.it.tuwien.ac.at/en/services/network-and-servers/datalab) to run our pipeline:
<figure markdown>
![Gitlab runner configuration in the cluster](images/gitlab-runner.png)
<figcaption>Figure 2: Gitlab runner configuration in the cluster.</figcaption>
</figure>
The pipeline runs on a Minikube cluster with 6 vCPUs and 28 GB RAM. The CI runner is configured as follows in the `config.toml`:
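As an illustrative sketch only, a Kubernetes-executor runner registration in `config.toml` typically has this shape (all values below are hypothetical; the repository's actual `config.toml` is authoritative):

```toml
# Hypothetical values; see the repository's config.toml for the real configuration
concurrent = 4

[[runners]]
  name = "minikube-runner"
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab"
    cpu_limit = "6"
    memory_limit = "28Gi"
```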
...
# Infrastructure Developer Guide
## tl;dr
```shell
make cluster-start cluster-image-pull cluster-install
```
## Dependencies
Local development depends on the following packages for Debian 12:
```shell
apt install -y make
```
Required tools with their own installing guides:
* [Docker Engine](https://docs.docker.com/engine/install/) 24+
* [Minikube](https://minikube.sigs.k8s.io/docs/start/) 1.32.0
## Getting Started
Start the local development cluster with the Docker driver (requires at least 8 vCPUs and 12 GB RAM). This installs a single-node Minikube Kubernetes cluster with the Ingress and Dashboard addons enabled:
```shell
make cluster-start
```
Build the local images with `make build-docker` and copy them to the cluster image cache:
```shell
make cluster-image-pull
```
Build and install the Helm chart:
```shell
make cluster-install
```
## Debug
Open the Minikube (Kubernetes) Dashboard:
```shell
make cluster-dashboard
```
<figure markdown>
![Minikube Dashboard](images/screenshots/minikube-dashboard.png)
<figcaption>Figure 1: Minikube Dashboard</figcaption>
</figure>
Optionally enable the Prometheus metrics addon with:
```shell
minikube addons enable metrics-server
```
## Test
Test whether the Helm chart deploys without errors (the script aborts automatically after 5 minutes if any pods fail to start or report errors):
```shell
make cluster-test
```
## Uninstall
To uninstall DBRepo from the local Minikube cluster, removing all data:
```shell
make cluster-uninstall
```
# Overview
## Guides
* The [application developer guide](../dev-guide-app) walks you through building DBRepo from scratch and customizing the application.
* The [infrastructure developer guide](../dev-guide-infra) walks you through building and customizing the operation environment.
## Organization
* Monthly sprints with patch releases (e.g. `1.4.2` in February, `1.4.3` in March, ...).
* Branching from `dev` for feature development, with one release branch per patch (e.g. `release-1.4.2` for release version `1.4.2`).
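In day-to-day terms, the branching scheme translates to something like the following (a self-contained sketch against a throwaway repository; branch and commit names are illustrative):

```shell
# Sketch of the feature workflow: branch from dev, work, then open a merge request
git init -q demo-repo && cd demo-repo
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "init"
git checkout -q -b dev                 # integration branch
git checkout -q -b feature/my-change   # feature branches fork off dev
git branch --show-current              # prints: feature/my-change
# ...commit work, push, and open a merge request targeting dev
```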
## Roadmap
- [x] Q1: Python library, versioning in every component, bumping frontend versions, i18n
- [ ] Q2: Kubernetes deployment guidelines for OpenShift
- [ ] Q3: TBD
- [ ] Q4: Release of 2.0.0
\ No newline at end of file
File moved
File moved
.docs/images/screenshots/minikube-dashboard.png (541 KiB)
...@@ -10,6 +10,7 @@ schema.xsd
final/
build/
swagger/
*.tar
# docs
.docs/.swagger/dist/
...
.PHONY: all
TAG ?= latest
APP_VERSION ?= 1.4.2
CHART_VERSION ?= 1.4.2
REPOSITORY_1_URL ?= docker.io/dbrepo
REPOSITORY_2_URL ?= s210.dl.hpc.tuwien.ac.at/dbrepo
...@@ -231,5 +233,54 @@ teardown:
build-api:
bash .docs/.swagger/swagger-generate.sh
helm-build:
cp ./helm-charts/dbrepo/Chart.tpl.yaml ./helm-charts/dbrepo/Chart.yaml
sed -i -e "s/__CHART_VERSION__/\"${CHART_VERSION}\"/g" ./helm-charts/dbrepo/Chart.yaml
sed -i -e "s/__APP_VERSION__/\"${APP_VERSION}\"/g" ./helm-charts/dbrepo/Chart.yaml
#helm dependency update ./helm-charts/dbrepo
helm package ./helm-charts/dbrepo --destination ./build
cluster-start:
minikube start --driver="docker" --memory="12g" --cpus="8" # 2 vCPUs for the control plane + 6 for workloads
minikube addons disable metrics-server
minikube addons enable ingress && minikube addons enable dashboard
./helm-charts/dbrepo/hack/add-hosts.sh
#CERT_MANAGER_VERSION=1.14.4 ./helm-charts/dbrepo/hack/install-cert-manager.sh
cluster-test: cluster-start cluster-image-pull cluster-install
bash ./helm-charts/dbrepo/test.sh
minikube stop
cluster-stop:
minikube stop
cluster-image-pull:
docker image save -o ui.tar dbrepo-ui:latest
docker image save -o data-service.tar dbrepo-data-service:latest
docker image save -o search-db-init.tar dbrepo-search-db-init:latest
docker image save -o search-service.tar dbrepo-search-service:latest
docker image save -o analyse-service.tar dbrepo-analyse-service:latest
docker image save -o data-db-sidecar.tar dbrepo-data-db-sidecar:latest
docker image save -o metadata-service.tar dbrepo-metadata-service:latest
echo "[INFO] Saved local images"
minikube image load ui.tar
minikube image load data-service.tar
minikube image load search-db-init.tar
minikube image load search-service.tar
minikube image load analyse-service.tar
minikube image load data-db-sidecar.tar
minikube image load metadata-service.tar
echo "[INFO] Imported local images"
rm -f ./ui.tar ./data-service.tar ./search-db-init.tar ./search-service.tar ./analyse-service.tar ./data-db-sidecar.tar ./metadata-service.tar
cluster-install: helm-build
helm upgrade --install dbrepo -n dbrepo ./build/dbrepo-${CHART_VERSION}.tgz --create-namespace --cleanup-on-fail
cluster-uninstall:
helm uninstall -n dbrepo dbrepo
cluster-dashboard:
minikube dashboard
docs:
bash ./build-docs.sh
# generated
*.crt
*.key
*.srl
*.csr
\ No newline at end of file
apiVersion: v2
name: dbrepo
description: Helm Chart for installing DBRepo
sources:
  - https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services
type: application
version: __CHART_VERSION__
appVersion: __APP_VERSION__
keywords:
  - dbrepo
maintainers:
  - name: Martin Weise
    email: martin.weise@tuwien.ac.at
home: https://www.ifs.tuwien.ac.at/infrastructures/dbrepo/
icon: https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services/-/raw/master/.docs/images/signet_white.png
dependencies:
  - name: opensearch
    alias: searchdb
    version: 2.15.0 # app version 2.10.0
    repository: https://opensearch-project.github.io/helm-charts/
  - name: opensearch-dashboards
    alias: searchDbDashboard
    version: 2.13.0 # app version 2.10.0
    repository: https://opensearch-project.github.io/helm-charts/
  - name: keycloak
    alias: authService
    version: 17.3.3
    repository: https://charts.bitnami.com/bitnami
  - name: mariadb-galera
    alias: dataDb
    version: 11.0.1
    repository: https://charts.bitnami.com/bitnami
  - name: mariadb-galera
    alias: metadataDb
    version: 11.0.1
    repository: https://charts.bitnami.com/bitnami
  - name: postgresql-ha
    alias: authDb
    version: 12.1.7
    repository: https://charts.bitnami.com/bitnami
  - name: rabbitmq
    alias: brokerService
    version: 12.5.1
    repository: https://charts.bitnami.com/bitnami
  - name: fluent-bit
    alias: logservice
    version: 0.40.0
    repository: https://fluent.github.io/helm-charts
  - name: seaweedfs
    alias: storageservice
    version: 3.59.4
    repository: https://seaweedfs.github.io/seaweedfs/helm
apiVersion: v2
name: dbrepo
description: Helm Chart for installing DBRepo
category: Database
sources:
  - https://gitlab.phaidra.org/fair-data-austria-db-repository/fda-services
type: application
version: "1.4.2"
appVersion: "1.4.2"
keywords:
  - dbrepo
...
...@@ -17,9 +17,9 @@ helm install my-release "oci://s210.dl.hpc.tuwien.ac.at/dbrepo/helm/dbrepo" --va
* Kubernetes 1.24+
* Helm 3.8.0+
* Optional PV provisioner support in the underlying infrastructure (for persistence).
* Optional ingress support in the underlying infrastructure: e.g. [NGINX](https://docs.nginx.com/nginx-ingress-controller/) (for the UI).
* Optional certificate provisioner support in the underlying infrastructure: e.g. [cert-manager](https://cert-manager.io/) (for production use).

## Installing the Chart
...
#!/bin/bash
if ! grep -q "dbrepo.local" /etc/hosts; then
  echo "$(minikube ip) dbrepo.local" | sudo tee -a /etc/hosts
fi
\ No newline at end of file
#!/bin/bash
HOSTNAME="dbrepo.local"
openssl genrsa -out ./tls/ca.key 2048
openssl req -new -x509 -days 365 -key ./tls/ca.key -subj "/C=AT/O=Acme, Inc./CN=Acme Root CA" -out ./tls/ca.crt
openssl req -newkey rsa:2048 -nodes -keyout ./tls/tls.key -subj "/C=AT/O=DBRepo/CN=${HOSTNAME}" -out ./tls/tls.csr
openssl x509 -req -extfile <(printf "subjectAltName=DNS:${HOSTNAME},DNS:www.${HOSTNAME}") -days 365 -in ./tls/tls.csr \
-CA ./tls/ca.crt -CAkey ./tls/ca.key -CAcreateserial -out ./tls/tls.crt
\ No newline at end of file
#!/bin/bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v${CERT_MANAGER_VERSION}/cert-manager.yaml
if [ $? -ne 0 ]; then
echo "ERROR: Failed to install cert-manager" >&2
exit 1
else
echo "SUCCESS: Installed cert-manager"
fi
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: selfsigned-cluster-issuer
spec:
selfSigned: {}
EOF
{{- if .Values.ingress.enabled }}
1. Get the application URL by running these commands:
https://{{ .Values.hostname }}
{{- else }}
Enable ingress to access the UI with `ingress.enabled: true`.
{{- end }}
...@@ -4,14 +4,14 @@ apiVersion: apps/v1
kind: Deployment
metadata:
  name: analyse-service
  namespace: {{ .Values.namespace }}
  labels:
    app: analyse-service
    service: analyse-service
spec:
  replicas: {{ .Values.analyseService.replicaCount }}
  strategy:
    type: {{ .Values.strategyType }}
  selector:
    matchLabels:
      app: analyse-service
...@@ -29,7 +29,7 @@ spec:
      runAsGroup: 1000
      containers:
        - name: analyse-service
          image: {{ .Values.analyseService.image.name }}
          imagePullPolicy: {{ .Values.analyseService.image.pullPolicy | default "IfNotPresent" }}
          ports:
            - containerPort: 5000
...
...@@ -3,7 +3,6 @@ kind: PersistentVolumeClaim
metadata:
  name: data-db-shared
spec:
  storageClassName: {{ .Values.dataDb.persistence.sharedStorageClass }}
  accessModes:
    - ReadWriteMany
  resources:
...
...@@ -4,14 +4,14 @@ apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-service
  namespace: {{ .Values.namespace }}
  labels:
    app: data-service
    service: data-service
spec:
  replicas: {{ .Values.dataService.replicaCount }}
  strategy:
    type: {{ .Values.strategyType }}
  selector:
    matchLabels:
      app: data-service
...@@ -28,7 +28,7 @@ spec:
      runAsGroup: 1000
      containers:
        - name: data-service
          image: {{ .Values.dataService.image.name }}
          imagePullPolicy: {{ .Values.dataService.image.pullPolicy | default "IfNotPresent" }}
          ports:
            - containerPort: 9093
...