How to control the number of running subworkchains submitted by a bigger workchain

I tried by increasing the number of workers, but it did not change the result.

My bad, you are right: it is possible that the worker running the main workchain also picked up the subprocesses, in which case it would still be stuck. Forget this approach, it is indeed not viable.

My problem is that the subworkchains have significantly different running times: I am running DFT calculations with Siesta, and a workchain can take anywhere from hours to literally days. This is why it is essential for me not to wait for a whole batch of N subworkchains to finish.

I am afraid that if you want to keep the top-level workchain, there is currently not really a better approach than the one suggested by @t-reents. Is there any way you can “guess” the runtime of the subprocesses from their inputs? If so, you could group them by expected runtime and still run them in batches without losing too much time.
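As a concrete illustration of the grouping idea, here is a minimal plain-Python sketch. The runtime estimator and the job inputs are hypothetical placeholders; in practice you would estimate runtime from something in your actual inputs, such as the number of atoms or k-points:

```python
def make_batches(all_inputs, estimate_runtime, batch_size):
    """Group jobs by estimated runtime so each batch has similar durations."""
    # Sort by the (hypothetical) runtime estimate so that fast and slow
    # jobs do not end up in the same batch.
    ordered = sorted(all_inputs, key=estimate_runtime)
    # Slice the sorted list into batches of at most `batch_size` jobs.
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

# Example: estimate runtime from the number of atoms in each structure.
jobs = [{'natoms': 120}, {'natoms': 8}, {'natoms': 64}, {'natoms': 10}]
batches = make_batches(jobs, lambda job: job['natoms'], batch_size=2)
# The fast jobs land together in the first batch and the slow ones in the
# second, so no batch is held up waiting on a single long outlier.
```

Each batch can then be submitted and collected with the usual batched approach; the point is only that a batch no longer mixes hour-long and day-long jobs.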

In the end, it would be great if AiiDA supported your use case, because it is not the first time we have come across it, but it would require new functionality. One solution would be for the workchain interface to become asynchronous instead of the current synchronous design. Back in the day, we explicitly decided against this, because writing workchain code was already quite complicated for novice users, and forcing them to write asynchronous code would have made it worse. But the situation has improved a lot since then, and writing async code in Python is now relatively easy. We are considering whether we can allow workchains to optionally be written as async code, but that is a feature for the future, so don’t expect it to arrive very soon.

Alternatively, we could come up with a different solution that relies purely on synchronous code and allows a workchain step to temporarily “yield” control, so the worker can run other processes and come back later, but I am not sure yet whether that is possible. I would have to think about it a bit.
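To illustrate what an async interface would buy you, here is a plain `asyncio` sketch. This is not AiiDA API, and the job names and durations are made up: a fixed number of “subworkchains” run concurrently, and a new one starts the moment any running one finishes, with no batch barrier.

```python
import asyncio

async def run_job(name: str, duration: float) -> str:
    # Stand-in for a subworkchain; the sleep mimics varying DFT runtimes.
    await asyncio.sleep(duration)
    return name

async def run_all(jobs, max_running: int):
    # The semaphore caps how many jobs run at once. As soon as one job
    # releases its slot, the next queued job starts -- no batch barrier.
    sem = asyncio.Semaphore(max_running)

    async def guarded(name, duration):
        async with sem:
            return await run_job(name, duration)

    tasks = [asyncio.create_task(guarded(n, d)) for n, d in jobs]
    order = []
    for finished in asyncio.as_completed(tasks):
        order.append(await finished)  # collect results in completion order
    return order

# One slow outlier among fast jobs: the fast jobs all complete while the
# slow one is still running, instead of blocking a whole batch behind it.
jobs = [('slow', 0.2), ('fast1', 0.01), ('fast2', 0.02), ('fast3', 0.03)]
order = asyncio.run(run_all(jobs, max_running=2))
print(order)
```

With batches of size 2 the fast jobs would have had to wait for `slow` to finish; here they flow through as slots free up, which is exactly the behaviour you are after.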