When I ran the following code, I got error logs:
segmentation_job = segmentationband.create_job(
    title="segmentation_onnx_job", out_format="NetCDF", job_options=job_options
)
segmentation_job.start_and_wait()
segmentation_job.download_result(base_path / "delineation.nc")
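For context, `job_options` was defined earlier in the notebook. A minimal sketch of how it is typically set up for ONNX UDFs, following the public openEO ONNX examples (the archive URL and memory setting here are illustrative, not necessarily my exact values):

```python
# Sketch of typical job_options for running an ONNX UDF in an openEO batch job.
# NOTE: the archive URL and executor-memory below are illustrative, taken from
# the public openEO ONNX examples; the actual values in my notebook may differ.
job_options = {
    # The back-end extracts the dependency archive into a folder named
    # "onnx_deps" on the workers; the UDF then puts that folder on sys.path
    # before `import onnxruntime`.
    "udf-dependency-archives": [
        "https://artifactory.vgt.vito.be/artifactory/auxdata-public/openeo/onnx_dependencies_1.16.3.zip#onnx_deps"
    ],
    "executor-memory": "3G",
}
```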
Error logs in my Jupyter notebook:
0:00:00 Job 'j-2602100202294d54bc4ae0387f7c692b': send 'start'
0:00:15 Job 'j-2602100202294d54bc4ae0387f7c692b': created (progress 0%)
0:00:20 Job 'j-2602100202294d54bc4ae0387f7c692b': created (progress 0%)
0:00:27 Job 'j-2602100202294d54bc4ae0387f7c692b': created (progress 0%)
0:00:35 Job 'j-2602100202294d54bc4ae0387f7c692b': created (progress 0%)
0:00:46 Job 'j-2602100202294d54bc4ae0387f7c692b': running (progress N/A)
0:00:58 Job 'j-2602100202294d54bc4ae0387f7c692b': running (progress N/A)
0:01:14 Job 'j-2602100202294d54bc4ae0387f7c692b': running (progress N/A)
0:01:33 Job 'j-2602100202294d54bc4ae0387f7c692b': running (progress N/A)
0:01:57 Job 'j-2602100202294d54bc4ae0387f7c692b': running (progress N/A)
0:02:28 Job 'j-2602100202294d54bc4ae0387f7c692b': running (progress N/A)
0:03:05 Job 'j-2602100202294d54bc4ae0387f7c692b': running (progress N/A)
0:03:52 Job 'j-2602100202294d54bc4ae0387f7c692b': error (progress N/A)
Your batch job 'j-2602100202294d54bc4ae0387f7c692b' failed. Error logs:
[{'id': '[1770689136161, 332748]', 'time': '2026-02-10T02:05:36.161Z', 'level': 'error', 'message': 'Task 6 in stage 32.0 failed 4 times; aborting job'}, {'id': '[1770689136171, 677265]', 'time': '2026-02-10T02:05:36.171Z', 'level': 'error', 'message': 'Stage error: Job aborted due to stage failure: Task 6 in stage 32.0 failed 4 times, most recent failure: Lost task 6.3 in stage 32.0 (TID 398) (10.42.38.191 executor 15): org.apache.spark.api.python.PythonException: Traceback (most recent call last):\n File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 2044, in main\n process()\n File "/usr/local/spark/python/lib/pyspark.zip/pyspark/worker.py", line 2036, in process\n serializer.dump_stream(out_iter, outfile)\n File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 145, in dump_stream\n for obj in iterator:\n File "/usr/local/spark/python/lib/pyspark.zip/pyspark/util.py", line 131, in wrapper\n return f(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^\n File "/opt/venv/lib64/python3.11/site-packages/openeogeotrellis/utils.py", line 66, in memory_logging_wrapper\n return function(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/opt/venv/lib64/python3.11/site-packages/epsel.py", line 44, in wrapper\n return _FUNCTION_POINTERS[key](*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/opt/venv/lib64/python3.11/site-packages/epsel.py", line 37, in first_time\n return f(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^\n File "/opt/venv/lib64/python3.11/site-packages/openeogeotrellis/geopysparkdatacube.py", line 585, in tile_function\n result_data = run_udf_code(code=udf_code, data=data)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/opt/venv/lib64/python3.11/site-packages/epsel.py", line 44, in wrapper\n return _FUNCTION_POINTERS[key](*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/opt/venv/lib64/python3.11/site-packages/epsel.py", line 37, in first_time\n return f(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^\n File 
"/opt/venv/lib64/python3.11/site-packages/openeogeotrellis/udf/__init__.py", line 70, in run_udf_code\n return openeo.udf.run_udf_code(code=code, data=data)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/opt/venv/lib64/python3.11/site-packages/openeo/udf/run_code.py", line 152, in run_udf_code\n module = load_module_from_string(code)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File "/opt/venv/lib64/python3.11/site-packages/openeo/udf/run_code.py", line 64, in load_module_from_string\n exec(code, globals)\n File "<string>", line 12, in <module>\n File "/opt/spark/work-dir/onnx_deps/onnxruntime/__init__.py", line 55, in <module>\n raise import_capi_exception\n File "/opt/spark/work-dir/onnx_deps/onnxruntime/__init__.py", line 23, in <module>\n from onnxruntime.capi._pybind_state import (\n File "/opt/spark/work-dir/onnx_deps/onnxruntime/capi/_pybind_state.py", line 33, in <module>\n from .onnxruntime_pybind11_state import * # noqa\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nModuleNotFoundError: No module named 'onnxruntime.capi.onnxruntime_pybind11_state'\n\n\tat org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:581)\n\tat org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:940)\n\tat org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:925)\n\tat org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:532)\n\tat org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)\n\tat scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:583)\n\tat scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:583)\n\tat scala.collection.Iterator$$anon$6.hasNext(Iterator.scala:477)\n\tat scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:583)\n\tat scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:601)\n\tat org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:197)\n\tat 
org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)\n\tat org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:57)\n\tat org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:111)\n\tat org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:54)\n\tat org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:171)\n\tat org.apache.spark.scheduler.Task.run(Task.scala:147)\n\tat org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$5(Executor.scala:647)\n\tat org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:80)\n\tat org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:77)\n\tat org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:99)\n\tat org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:650)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n\nDriver stacktrace:'}, {'id': '[1770689137878, 983728]', 'time': '2026-02-10T02:05:37.878Z', 'level': 'error', 'message': 'OpenEO batch job failed: UDF exception while evaluating processing graph. Please check your user defined functions. stacktrace:\n File "<string>", line 12, in <module>\n File "/opt/spark/work-dir/onnx_deps/onnxruntime/__init__.py", line 55, in <module>\n raise import_capi_exception\n File "/opt/spark/work-dir/onnx_deps/onnxruntime/__init__.py", line 23, in <module>\n from onnxruntime.capi._pybind_state import (\n File "/opt/spark/work-dir/onnx_deps/onnxruntime/capi/_pybind_state.py", line 33, in <module>\n from .onnxruntime_pybind11_state import * # noqa\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nModuleNotFoundError: No module named 'onnxruntime.capi.onnxruntime_pybind11_state''}]
Full logs can be inspected in an openEO (web) editor or with connection.job('j-2602100202294d54bc4ae0387f7c692b').logs().
JobFailedException Traceback (most recent call last)
Cell In[11], line 4
1 segmentation_job = segmentationband.create_job(
2 title="segmentation_onnx_job", out_format="NetCDF", job_options=job_options
3 )
----> 4 segmentation_job.start_and_wait()
5 segmentation_job.download_result(base_path / "delineation.nc")
File ~\.conda\envs\openeo\Lib\site-packages\openeo\rest\job.py:382, in BatchJob.start_and_wait(self, print, max_poll_interval, connection_retry_interval, soft_error_max, show_error_logs, require_success)
378 print(self.logs(level=logging.ERROR))
379 print(
380 f"Full logs can be inspected in an openEO (web) editor or with connection.job({self.job_id!r}).logs()."
381 )
--> 382 raise JobFailedException(
383 f"Batch job {self.job_id!r} didn't finish successfully. Status: {status} (after {elapsed()}).",
384 job=self,
385 )
387 return self
JobFailedException: Batch job 'j-2602100202294d54bc4ae0387f7c692b' didn't finish successfully. Status: error (after 0:03:53).
It seems the job failed on the server (back-end) side. Any help is greatly appreciated.
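In case it helps, this is roughly how I extracted just the error-level entries from the full log listing. The sample entry below simply mirrors the dict structure of the log entries printed above; in the real notebook the list comes from `connection.job('j-2602100202294d54bc4ae0387f7c692b').logs()`.

```python
# Filter an openEO job log listing down to the error-level entries.
# `sample_logs` is a hard-coded stand-in shaped like the entries above;
# the real list is returned by BatchJob.logs().
sample_logs = [
    {"id": "[1770689136161, 332748]", "time": "2026-02-10T02:05:36.161Z",
     "level": "error", "message": "Task 6 in stage 32.0 failed 4 times; aborting job"},
    {"id": "[1770689130000, 1]", "time": "2026-02-10T02:05:30.000Z",
     "level": "info", "message": "(illustrative non-error entry)"},
]

errors = [entry for entry in sample_logs if entry["level"] == "error"]
for entry in errors:
    print(entry["time"], entry["message"])
```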