Hey all,
These are examples of processing IDs that failed multiple times:
j-03fb0ef385fd4e0abd809c740416db7c
j-fd9d5d47599541999e73456928f4e353
The local error message is: **JobFailedException**: Batch job 'j-03fb0ef385fd4e0abd809c740416db7c' didn't finish successfully. Status: error (after 3:11:54).
and the web editor shows this error information: Your batch job failed because workers used too much Python memory. The same task was attempted multiple times. Consider increasing executor-memoryOverhead or contact the developers to investigate.
Thus, I believe this is purely a memory problem. The workflow was tested successfully using a synchronous download for a smaller spatial and temporal extent. Could you please confirm that the failure is only memory-related?
It is worth mentioning that I ran the exact same batch job (j-ef42883d631f4784812fc9116b5e6a86) without one extra step, and it finished perfectly. The extra step that was added at the end of the failing workflow is: `cube_threshold = s2_cube.mask(s2_cube.apply(lambda x: lt(x, 0.75)))`. Is this process problematic?
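
For context, here is a minimal, self-contained sketch of how that masking step sits in a workflow. The backend URL, collection name, extents, and bands are placeholders I made up for illustration, not the values from my actual job:

```python
import openeo
from openeo.processes import lt

# Placeholder connection and collection; substitute your own backend and extents.
connection = openeo.connect("openeo.vito.be").authenticate_oidc()
s2_cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 5.0, "south": 51.0, "east": 5.1, "north": 51.1},
    temporal_extent=["2022-06-01", "2022-06-30"],
    bands=["B04", "B08"],
)

# apply() evaluates the predicate per pixel, producing a boolean cube that is
# true wherever the value is below 0.75; mask() then replaces those pixels
# with no-data.
cube_threshold = s2_cube.mask(s2_cube.apply(lambda x: lt(x, 0.75)))

job = cube_threshold.create_job(title="threshold-mask-test")
job.start_and_wait()
```

My understanding is that this adds a second full pass over the cube (the `apply` for the mask plus the `mask` itself), which could plausibly explain the higher memory use compared to the job without this step, but I would appreciate confirmation.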