Added scheduled scraping that removes stale datasets

Merged: Martin Weise requested to merge `513-scrape-old-datasets` into `dev`
@@ -46,12 +46,18 @@ every time e.g. a sensor measurement is inserted. By default, this information is
cached; administrators can disable this behavior by setting `CREDENTIAL_CACHE_TIMEOUT=0` (the cache is deleted after 0 seconds).
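For example, with Docker Compose the cache timeout could be tuned like this (a minimal sketch; the service name and image are placeholders, only `CREDENTIAL_CACHE_TIMEOUT` comes from the text above):

```yaml
services:
  data-service:                   # hypothetical service name
    image: dbrepo/data-service    # placeholder image
    environment:
      CREDENTIAL_CACHE_TIMEOUT: "600"  # cache credentials for 10 minutes; 0 disables the cache
```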
## Storage
The Data Service is also capable of uploading files to the S3 backend. The default limit
of [`Tomcat`](https://spring.io/guides/gs/uploading-files#_tuning_file_upload_limits) in Spring Boot is configured to be
`2GB`. You can provide your own limit by setting `MAX_UPLOAD_SIZE`.
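A minimal sketch of raising the limit, again as a Docker Compose fragment (service name and image are placeholders; the assumption that the value is given in bytes is not confirmed by the text):

```yaml
services:
  data-service:                   # hypothetical service name
    image: dbrepo/data-service    # placeholder image
    environment:
      MAX_UPLOAD_SIZE: "4000000000"  # assumption: bytes (~4 GB), raising the 2 GB Tomcat default
```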
By default, the Data Service removes datasets older than 24 hours every 60 minutes. You can set
`MAX_AGE` (in seconds) and `S3_STALE_CRON` to fit your use case. You can disable this feature by setting `S3_STALE_CRON`
to `-`, but this may eventually lead to storage issues, as stale datasets will no longer be removed to free up space. Note
that [Spring Boot uses its own flavor](https://spring.io/blog/2020/11/10/new-in-spring-5-3-improved-cron-expressions#usage)
of cron syntax.
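The staleness check behind the scheduled job can be sketched as follows. This is a hedged illustration, not the actual DBRepo code: `StaleCheck`, `isStale`, and all values are hypothetical. The real job runs on the `S3_STALE_CRON` schedule (Spring's six-field cron syntax, e.g. `0 0 * * * *` for the top of every hour) and compares object age against `MAX_AGE`:

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the staleness check behind the scheduled cleanup job.
// All names are hypothetical; the actual Data Service logic may differ.
public class StaleCheck {

    // An object is stale when it is older than MAX_AGE seconds.
    static boolean isStale(Instant lastModified, Instant now, long maxAgeSeconds) {
        return Duration.between(lastModified, now).getSeconds() > maxAgeSeconds;
    }

    public static void main(String[] args) {
        long maxAge = 86_400L; // 24 hours, the documented default
        Instant now = Instant.parse("2024-01-02T01:00:00Z");
        // Uploaded 25 hours ago -> stale, would be removed by the job.
        System.out.println(isStale(Instant.parse("2024-01-01T00:00:00Z"), now, maxAge)); // prints "true"
        // Uploaded 30 minutes ago -> kept.
        System.out.println(isStale(Instant.parse("2024-01-02T00:30:00Z"), now, maxAge)); // prints "false"
    }
}
```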
## Limitations
* View names in DBRepo are limited to 63 characters (it is assumed that only internal views use the maximum length of 64 characters).