DBRepo (FAIR Data Austria DB Repository)
Merge request !363: Resolve "Hotfix query execution"
Merged: Martin Weise requested to merge 476-hotfix-query-execution into dev, 7 months ago
Commits: 23 · Pipelines: 0 · Changes: 95
Closes #476
95 files changed: +1816 −2854
.docs/api/data-service.md (+15 −0) @ 4b205cf5
@@ -28,8 +28,23 @@

The Data Service is responsible for inserting AMQP tuples from the Broker Service via [Spring AMQP](https://docs.spring.io/spring-amqp/reference/html/). To increase the number of consumers, scale the Data Service up.
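The consumer code itself is not part of this documentation change; a minimal sketch of what a Spring AMQP listener for such tuples could look like is shown below. The queue name `dbrepo.tuples`, the class name, and the logging are illustrative assumptions, not the actual DBRepo configuration.

```java
// Minimal sketch of an AMQP consumer, assuming Spring Boot with Spring AMQP on the
// classpath. Queue name and class name are placeholders for illustration only.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class TupleConsumer {

    private static final Logger log = LoggerFactory.getLogger(TupleConsumer.class);

    @RabbitListener(queues = "dbrepo.tuples")
    public void receive(Message message) {
        // Each message carries one tuple to be written to the Data Database. Scaling
        // the service up adds replicas, and each replica registers its own listener.
        log.debug("received tuple of {} bytes", message.getBody().length);
        // ... map the payload and hand it to the persistence layer ...
    }
}
```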
## Data Processing

The Data Service uses [Apache Spark](https://spark.apache.org/), a data engine that loads data from/into the [Data Database](../data-db) through a wide range of open-source connectors. The default deployment runs Spark in local mode, embedded directly in the service, until a [Bitnami Chart](https://artifacthub.io/packages/helm/bitnami/spark) for Spark 4 exists.
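As a rough illustration of the embedded local mode, the sketch below reads a table through Spark's generic JDBC connector with a `local[2]` master. The JDBC URL, credentials, and table name are placeholders, and the MariaDB driver is assumed to be on the classpath.

```java
// Sketch of loading a table with Spark running embedded in local[2] mode, as the
// default deployment does. Connection details are placeholders, not DBRepo settings.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkLoadSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("data-service-load")
                .master("local[2]")  // embedded processing, two worker threads
                .getOrCreate();

        // Read a table from the Data Database through the generic JDBC connector.
        Dataset<Row> df = spark.read()
                .format("jdbc")
                .option("url", "jdbc:mariadb://data-db:3306/example")
                .option("dbtable", "some_table")
                .option("user", "user")
                .option("password", "password")
                .load();

        df.show();
        spark.stop();
    }
}
```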
Retrieving data from a subset internally generates a view named after the 64-character hash of the query. This view is currently not deleted automatically.
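The documentation does not show how the view name is derived; purely as an illustration, a 64-character identifier can be obtained from a SHA-256 hex digest of the query text, since such a digest is exactly 64 characters long. Class and method names below are assumptions.

```java
// Illustration only: a SHA-256 hex digest has exactly 64 characters, matching the
// length of the internal view names described above. Names here are hypothetical.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public final class QueryHash {

    public static String of(String query) throws NoSuchAlgorithmException {
        final MessageDigest digest = MessageDigest.getInstance("SHA-256");
        final byte[] hash = digest.digest(query.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(hash); // 64 hex characters
    }
}
```

A 64-character name like this exceeds the 63-character limit for user-defined views noted below, which is why only internal views are assumed to reach the maximum length.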
## Limitations

* Views in DBRepo are limited to a name length of 63 characters (it is assumed that only internal views reach the maximum length of 64 characters).
* Apache Spark runs in local mode, embedded directly in the service, with a [`local[2]`](https://spark.apache.org/docs/latest/#running-the-examples-and-shell) configuration.
!!! question "Do you miss functionality? Do these limitations affect you?"

    We strongly encourage you to help us implement it as we are welcoming contributors to open-source software and get in touch with us.