Unfortunately, having all that information up front already requires us to read a lot of data. For instance, if a workflow masks out clouds, the availability of observations really differs per pixel, so we cannot easily make a prediction that tells you which dates you'll get.
It does seem, however, that you are using batch jobs, which indeed have a large overhead. What I personally do is work on a small area and use a direct download call in combination with netCDF output. That returns a lot faster, and gives you a good view of the datacube you're working on.
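With the openEO Python client, that could look roughly like the sketch below. The backend URL, collection name, bands, and extents are just placeholders for your own use case, and running it requires a backend account, so treat it as an illustration of the pattern rather than a drop-in script:

```python
import openeo

# Connect and authenticate (URL shown is the Copernicus Data Space
# Ecosystem backend; substitute whichever backend you use).
connection = openeo.connect("openeo.dataspace.copernicus.eu").authenticate_oidc()

# Keep the spatial and temporal extent small so the synchronous call stays fast.
cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 5.05, "south": 51.21, "east": 5.10, "north": 51.25},
    temporal_extent=["2024-06-01", "2024-07-01"],
    bands=["B04", "B08"],
)

# Direct (synchronous) download: no batch job is created,
# the result comes back immediately as a netCDF file.
cube.download("subset.nc")
```

Once you have `subset.nc` locally, you can open it with e.g. xarray to inspect exactly which dates made it into the datacube.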
There are also some other general recommendations for interactive working. I actually made a detailed overview in this notebook:
It’s more than 10 months old already, but should still be relevant enough.
We do get this question more often, and we're regularly thinking about how to improve this ourselves, so let us know if you have any ideas, or if these suggestions are already helpful!