Long batch job runtimes since yesterday


We experienced significantly longer runtimes for even very simple batch jobs yesterday and today while connected to the VITO back-end via openEO Platform.
Any idea why?
Related to that, is there a way to assess the back-end’s current workload? Knowing that, we could run non-urgent or larger jobs at times of low capacity utilization.
Thanks for any advice.

Hi Hendrik,
We’re also seeing this for Sentinel Hub-based layers, and it seems to be related to work going on in that web service. It’s not necessarily the service being overloaded.
We’re trying to resolve this as soon as possible; in the meantime, using non-Sentinel Hub layers can be a workaround. (In general, this can be a good idea if you want fast interactive testing.)
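To find alternative layers, one approach is to inspect each collection's provider metadata and skip the ones backed by Sentinel Hub. A minimal sketch, assuming STAC-style collection dicts with a `providers` field (such as those returned by the openeo Python client's `Connection.list_collections()`); the collection ids and provider names below are purely illustrative:

```python
def non_sentinelhub_collections(collections):
    """Return ids of collections whose providers do not mention Sentinel Hub.

    `collections`: list of STAC-style collection metadata dicts.
    The matching on provider name is a heuristic, not an official flag.
    """
    def uses_sentinelhub(meta):
        providers = meta.get("providers") or []
        # Normalize "SentinelHub" / "Sentinel-Hub" / "Sentinel Hub" spellings.
        return any(
            "sentinel hub" in p.get("name", "").lower().replace("-", " ").replace("sentinelhub", "sentinel hub")
            for p in providers
        )
    return [c["id"] for c in collections if not uses_sentinelhub(c)]

# Illustrative metadata (not real back-end output):
sample = [
    {"id": "COLL_A", "providers": [{"name": "Sentinel Hub"}]},
    {"id": "COLL_B", "providers": [{"name": "Terrascope"}]},
]
print(non_sentinelhub_collections(sample))  # → ['COLL_B']
```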

Apologies for the inconvenience!

We improved the rate limiting towards Sentinel Hub, so it now allows higher throughput. We also identified the jobs that caused the high number of requests and implemented an improvement that reduced the number of requests by about 50%. Please let us know if you still see issues.

There is an upper limit to the number of requests we can do per minute, so some variability in performance is certainly expected.
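For jobs that issue many requests themselves, client-side pacing can help stay under a per-minute limit. A minimal sketch of a sliding-window limiter; this is a hypothetical helper, not part of the openeo client or Sentinel Hub APIs, and the limit values are illustrative:

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most `max_calls` calls per `period` seconds (sliding window)."""

    def __init__(self, max_calls, period=60.0, clock=time.monotonic):
        self.max_calls = max_calls
        self.period = period
        self.clock = clock  # injectable for testing
        self.calls = deque()  # timestamps of recent calls

    def wait_time(self):
        """Seconds to wait before the next call is allowed (0.0 if ready)."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            return 0.0
        return self.period - (now - self.calls[0])

    def record(self):
        """Register that a call was just made."""
        self.calls.append(self.clock())
```

Before each request, call `wait_time()`, sleep that long if it is positive, then `record()`. This smooths bursts so the back-end's per-minute cap is less likely to be hit.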