Faster get_unprocessed_files via parallelisation and restarting of stuck database requests
This MR provides a solution analogous to what was done in !429 (merged) for the initial fetching.
Here too, the database accesses can get stuck, which happens in a significant fraction of attempts. As a result, I was not able to run this script on a JSON file containing many datasets.
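
In case it helps the review, here is a minimal sketch of the pattern (names like `query_fn`, `QUERY_TIMEOUT`, and `MAX_RETRIES` are illustrative, not the actual code of this MR): each per-dataset database request runs in a worker thread with a timeout and is restarted when it gets stuck, and the requests for all datasets are dispatched in parallel.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

QUERY_TIMEOUT = 30  # seconds before a database request counts as stuck
MAX_RETRIES = 3     # how often a stuck request is restarted

def query_with_restart(dataset, query_fn):
    """Run query_fn(dataset), restarting it whenever it exceeds QUERY_TIMEOUT."""
    for _ in range(MAX_RETRIES):
        executor = ThreadPoolExecutor(max_workers=1)
        future = executor.submit(query_fn, dataset)
        try:
            result = future.result(timeout=QUERY_TIMEOUT)
            executor.shutdown(wait=False)
            return result
        except FutureTimeout:
            # Stop waiting for the stuck worker and retry with a fresh one
            # (the abandoned thread keeps running in the background).
            executor.shutdown(wait=False, cancel_futures=True)
    raise RuntimeError(f"database request for {dataset} stuck {MAX_RETRIES} times")

def get_unprocessed_files(datasets, query_fn, n_workers=8):
    """Query all datasets in parallel; each call is guarded by the restart wrapper."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        per_dataset = list(pool.map(lambda d: query_with_restart(d, query_fn), datasets))
    return [f for files in per_dataset for f in files]
```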
With these changes, the script runs in a few minutes even for large numbers of datasets.
@pdevouge, it would be great if you could have a look, since you developed this script. Thanks.