Documentation on how to write tests for a WorkChain

There are general docs on how to use the AiiDA plugin test fixtures, but there seems to be no information on how to write a test for a workchain. Users need to look at the tests of aiida-core or aiida-quantumespresso for examples. Writing tests for a workchain is necessary but can sometimes be difficult. It would be helpful to add documentation on this.


Fully agree! Maybe we should write a tutorial for this, based on the workflows that we use to teach users how to write workflows in AiiDA?


A tutorial is great! I assume it should not go in tutorial-2021, but rather in the AiiDA Tutorials site, where it can be quickly found, or in the How-To Guides of the AiiDA documentation?

Big +1, my current workflows are completely untested! :scream:

There is also the aiida-testing package (The aiida-testing pytest plugin — aiida-testing 0.1.0.dev1 documentation), and I have a vague memory that there was also something else.

aiida-testing is not for the unit tests that I think @Xing is asking about in this topic; it is more like black-box testing (Black-box testing - Wikipedia). There is an open issue about renaming it (consider name change · Issue #60 · aiidateam/aiida-testing · GitHub).


The aiida-testing package was mostly created to simplify the testing of CalcJob plugins. Typically the codes that are run through CalcJob plugins are costly and are not the actual subject of the test, so running an actual CalcJob as one would in production is not an effective manner of testing the plugin in a CI pipeline.

Essentially what one wants to test is that the plugin creates the correct input files from the inputs it receives. In addition, in AiiDA the parsing of outputs is done by a separate plugin (a subclass of Parser) but is conceptually part of a calculation job. The aiida-testing package circumvents the problem by mapping a set of inputs for a particular CalcJob plugin to a directory of output files that would have been created by the target executable if it had been run normally. These outputs can then be passed to the Parser plugin and parsed.

The first time a new test is run, when the reference output files do not yet exist, the package will automatically try to run the executable to generate the reference files. This requires the executable to be runnable on the machine where the test is run, of course. This approach is more of an integration test than a unit test, as it integrates the complete running of the CalcJob through AiiDA's engine, but just mocks the execution of the underlying executable.
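The caching idea described above can be sketched in plain Python, without any AiiDA or aiida-testing specifics (all names below are hypothetical illustrations, not the actual aiida-testing API): key a cache on a stable hash of the inputs, reuse the reference output directory when it exists, and fall back to running the real executable only when it does not.

```python
import hashlib
import json
import subprocess
from pathlib import Path
from typing import Optional


def run_mocked(inputs: dict, cache_root: Path, executable: Optional[str] = None) -> Path:
    """Return a directory of output files for the given ``inputs``.

    Hypothetical sketch of the aiida-testing caching concept: if reference
    outputs for this exact set of inputs already exist in the cache, reuse
    them; otherwise run the real executable once to generate them.
    """
    # Key the cache on a stable hash of the (JSON-serializable) inputs.
    key = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    ref_dir = cache_root / key

    if not ref_dir.exists():
        if executable is None:
            # No cached reference and no executable available to create one.
            raise FileNotFoundError(f'no reference outputs for inputs {inputs}')
        ref_dir.mkdir(parents=True)
        # First run: generate the reference outputs with the real code.
        subprocess.run([executable], cwd=ref_dir, check=True)

    return ref_dir
```

On CI machines where the executable is unavailable, only the cached branch is ever taken, which is what makes the approach fast and reproducible.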

I have personally used a different approach in my plugins, where I test the CalcJob and Parser plugins separately in a more unit-test-like fashion. For calculation jobs, I call the prepare_for_submission method directly and check that the input files written to the sandbox are as expected and that the CalcInfo object that is returned is as expected. The Parser plugins I test by having a number of directories that contain reference output files for different scenarios. A pytest fixture then automatically creates a FolderData out of these and attaches it as the retrieved output to a CalcJobNode, from which the Parser is then called.
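The general shape of this unit-test style can be illustrated in plain Python (again a hypothetical sketch, not AiiDA's actual API): test the input-writing step against a temporary sandbox folder and assert on the files it produced, and test the parsing step by feeding it a directory of reference output files.

```python
from pathlib import Path


def prepare_inputs(sandbox: Path, parameters: dict) -> None:
    """Write the input file for a hypothetical code into ``sandbox``.

    Stand-in for what a CalcJob's prepare_for_submission would do.
    """
    lines = [f'{key} = {value}' for key, value in sorted(parameters.items())]
    (sandbox / 'aiida.in').write_text('\n'.join(lines) + '\n')


def parse_outputs(retrieved: Path) -> dict:
    """Parse a hypothetical 'key = value' output file into a dict.

    Stand-in for what a Parser plugin would do with the retrieved folder.
    """
    results = {}
    for line in (retrieved / 'aiida.out').read_text().splitlines():
        key, _, value = line.partition(' = ')
        results[key] = float(value)
    return results


def test_prepare_inputs(tmp_path):
    # Analogue of calling prepare_for_submission and checking the sandbox.
    prepare_inputs(tmp_path, {'ecutwfc': 30, 'kpoints': 4})
    assert (tmp_path / 'aiida.in').read_text() == 'ecutwfc = 30\nkpoints = 4\n'


def test_parse_outputs(tmp_path):
    # Analogue of a fixture that turns a reference directory into the
    # retrieved output that the Parser consumes.
    (tmp_path / 'aiida.out').write_text('energy = -1.5\n')
    assert parse_outputs(tmp_path) == {'energy': -1.5}
```

The two tests never talk to a scheduler or a database, which is exactly why this style stays fast: each plugin is exercised in isolation against files on disk.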


I’d propose some tangible actions here. I can imagine that writing very detailed documentation for this would take a lot of effort. Maybe @Xing can summarize from your use case (1) what basic test example you need, and (2) where the best place to put this documentation is, e.g. the “How-To” section or a tutorial.

Then, rather than writing very detailed documentation, we could just add some links pointing to examples that people can use to start writing “standard” unit tests (I think we could use aiida-quantumespresso as an example). A small PR to the documentation should be enough and will get the ball rolling.

@sphuber thanks for the explanation! Are your CalcJob tests in aiida-shell? I think you also used this approach a bit in aiida-quantumespresso. If it all works and lives in public aiidateam repositories, we can add links to those as examples of how to write unit tests.

I use it pretty much everywhere. It is a generic concept, so it is not tied to any particular kind of CalcJob plugin. I think we can add a section to the plugin topic in the docs (Plugins — AiiDA 2.4.0.post0 documentation) giving examples of how to test various plugins, such as CalcJob and Parser plugins. We can give an example of the “unit-test” approach and of the aiida-testing approach, which is more of a full integration test.


I think this is very useful indeed. Writing tests can be confusing sometimes, and having some well-detailed examples would help a lot to make sure that we cover the basics in most plugins. In the aiida-lammps plugin we look very much at what aiida-quantumespresso does, but sometimes it is not easy to understand, and then one ends up with many fixtures that do more or less the same thing, when they could probably be condensed into fewer, more complete ones.
