AiiDA is not running VASP calculations in the database folder

Dear Users/Developers,

I have installed AiiDA, created a profile, and set up the computer and the pre-installed VASP code. VASP and Slurm are installed locally on my workstation, and both work fine individually. I am running AiiDA through the API, following the tutorial (5. Test a VASP run — AiiDA-VASP 3.1.0 documentation).
I have also uploaded the POTCAR files, and the verdi computer and code tests pass. When I launch the Python script below, AiiDA launches the VASP job, but it does not generate any data in the database directory; it does not even create _scheduler-stderr.txt or _scheduler-stdout.txt there.

import numpy as np

from aiida import load_profile
from aiida.common.extendeddicts import AttributeDict
from aiida.engine import submit
from aiida.orm import Bool, Code, Str
from aiida.plugins import DataFactory, WorkflowFactory

# code_to_use = load_code(128)

def get_structure():
    """
    Set up Si primitive cell

       0.0000000000000000    0.5000000000000000    0.5000000000000000
       0.5000000000000000    0.0000000000000000    0.5000000000000000
       0.5000000000000000    0.5000000000000000    0.0000000000000000
      0.8750000000000000  0.8750000000000000  0.8750000000000000
      0.1250000000000000  0.1250000000000000  0.1250000000000000
    """
    structure_data = DataFactory('core.structure')
    alat = 5.431
    lattice = np.array([[.5, 0, .5], [.5, .5, 0], [0, .5, .5]]) * alat
    structure = structure_data(cell=lattice)
    for pos_direct in ([0.875, 0.875, 0.875], [0.125, 0.125, 0.125]):
        pos_cartesian = np.dot(pos_direct, lattice)
        structure.append_atom(position=pos_cartesian, symbols='Si')
    return structure
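As a quick sanity check outside AiiDA, the direct-to-Cartesian conversion that get_structure performs can be verified with plain NumPy (a standalone sketch using the Si cell values above):

```python
import numpy as np

alat = 5.431
lattice = np.array([[.5, 0, .5], [.5, .5, 0], [0, .5, .5]]) * alat

# Fractional (direct) coordinates of the two Si atoms
for pos_direct in ([0.875, 0.875, 0.875], [0.125, 0.125, 0.125]):
    pos_cartesian = np.dot(pos_direct, lattice)
    print(pos_cartesian)
# First atom lands at (4.752125, 4.752125, 4.752125) Å,
# second at (0.678875, 0.678875, 0.678875) Å
```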

def main(code_string, incar, kmesh, structure, potential_family, potential_mapping, options):
    """Main method to setup the calculation."""

    # First, we need to fetch the AiiDA datatypes which will
    # house the inputs to our calculation
    dict_data = DataFactory('core.dict')
    kpoints_data = DataFactory('core.array.kpoints')

    # Then, we set the workchain you would like to call
    workchain = WorkflowFactory('vasp.vasp')

    # And finally, we declare the options, settings and input containers
    settings = AttributeDict()
    inputs = AttributeDict()

    # Organize settings
    settings.parser_settings = {'output_params': ['total_energies', 'maximum_force']}

    # Set inputs for the following WorkChain execution
    # Set code
    inputs.code = Code.get_from_string(code_string)
    # Set structure
    inputs.structure = structure
    # Set k-points grid density
    kpoints = kpoints_data()
    kpoints.set_kpoints_mesh(kmesh)
    inputs.kpoints = kpoints
    # Set parameters
    inputs.parameters = dict_data(dict=incar)
    # Set potentials and their mapping
    inputs.potential_family = Str(potential_family)
    inputs.potential_mapping = dict_data(dict=potential_mapping)
    # Set options
    inputs.options = dict_data(dict=options)
    # Set settings
    inputs.settings = dict_data(dict=settings)
    # Set workchain related inputs, in this case, give more explicit output to report
    inputs.verbose = Bool(True)
    # Submit the requested workchain with the supplied inputs
    submit(workchain, **inputs)

if __name__ == '__main__':
    load_profile()

    # Code_string is chosen among the list given by 'verdi code list'
    CODE_STRING = 'vasp-ws@workstation'

    # POSCAR equivalent
    # Set the silicon structure
    STRUCTURE = get_structure()

    # INCAR equivalent
    # Set input parameters
    INCAR = {'incar': {'prec': 'NORMAL', 'encut': 200, 'ediff': 1E-4, 'ialgo': 38, 'ismear': -5, 'sigma': 0.1}}

    # KPOINTS equivalent
    # Set kpoint mesh
    KMESH = [9, 9, 9]

    # POTCAR equivalent
    # Potential_family is chosen among the list given by
    # 'verdi data vasp-potcar listfamilies'
    # The potential mapping selects which potential to use, here we use the standard
    # for silicon, this could for instance be {'Si': 'Si_GW'} to use the GW ready
    # potential instead
    POTENTIAL_FAMILY = 'pbe'  # replace with a family from 'verdi data vasp-potcar listfamilies'
    POTENTIAL_MAPPING = {'Si': 'Si'}

    # Jobfile equivalent
    # In options, we typically set scheduler options.
    # See
    # AttributeDict is just a special dictionary with the extra benefit that
    # you can set and get the key contents with mydict.mykey, instead of mydict['mykey']
    OPTIONS = AttributeDict()
    OPTIONS.account = ''
    OPTIONS.qos = ''
    OPTIONS.resources = {'num_machines': 1, 'tot_num_mpiprocs': 1}
    OPTIONS.queue_name = 'workstation'
    OPTIONS.max_wallclock_seconds = 3600
    OPTIONS.max_memory_kb = 10240000

    main(CODE_STRING, INCAR, KMESH, STRUCTURE, POTENTIAL_FAMILY, POTENTIAL_MAPPING, OPTIONS)
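For reference, these options correspond to the #SBATCH directives AiiDA writes into its submit script (shown further below). The following is a simplified, hypothetical sketch of that mapping for the values used here; it is an illustration, not the code AiiDA itself uses:

```python
def options_to_sbatch(options):
    """Illustrative translation of scheduler options into Slurm directives.

    A sketch mirroring what the generated submit script contains; the
    function name and structure are invented for illustration only.
    """
    res = options['resources']
    hours, rem = divmod(options['max_wallclock_seconds'], 3600)
    minutes, seconds = divmod(rem, 60)
    return "\n".join([
        f"#SBATCH --partition={options['queue_name']}",
        f"#SBATCH --nodes={res['num_machines']}",
        f"#SBATCH --ntasks-per-node={res['tot_num_mpiprocs'] // res['num_machines']}",
        f"#SBATCH --time={hours:02d}:{minutes:02d}:{seconds:02d}",
        # Slurm's --mem takes megabytes, while max_memory_kb is in kilobytes
        f"#SBATCH --mem={options['max_memory_kb'] // 1024}",
    ])

opts = {
    'resources': {'num_machines': 1, 'tot_num_mpiprocs': 1},
    'queue_name': 'workstation',
    'max_wallclock_seconds': 3600,
    'max_memory_kb': 10240000,
}
print(options_to_sbatch(opts))
```

This reproduces the --partition, --nodes, --ntasks-per-node, --time and --mem lines of the job script quoted below.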


While struggling with Slurm and job IDs, I found that AiiDA does momentarily generate the working directory (I can see it with scontrol show job <id> for as long as Slurm reports the job as running). But then it vanishes, including the directory "1b6f-e832-455d-82dd-1e6791ddd6be" given in the path below.

cat /home/msajjad/gautam/project/aiida/c5/dc/1b6f-e832-455d-82dd-1e6791ddd6be/
#SBATCH --no-requeue
#SBATCH --job-name="aiida-1299"
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --partition=workstation
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=01:00:00
#SBATCH --mem=10000

source /opt/intel/oneapi/

'mpirun' '-np' '1' '/home/msajjad/gautam/software/vasp/vasp.6.2.0/bin/vasp_std' > 'vasp_output' 2>&1

I took this job script and ran the VASP calculation with it separately; it runs fine. But through AiiDA, no results are produced.

Please suggest what the issue could be here. I would be grateful.

With Regards,
Gautam Sharma,
Postdoctoral fellow,
Khalifa University of Science and Technology,
Abu Dhabi, UAE.


It would be helpful to see the report of the CalcJob and/or WorkChain. You can use
verdi process list -a (or verdi process list -a -p 1 to limit the output to processes submitted within the last day) to list all processes; see also the tutorial you were referring to.

First off, you will already see whether any processes exited with a non-zero exit status. To get more detailed information about a particular process, run verdi process report <pk> (replace <pk> with the pk of the job you are interested in; you find it in the process list).
Maybe that output will already give you an idea of what went wrong. Please feel free to share it here in case you need further help.

Another option might be the following: the VaspWorkChain sets the clean_workdir input to True by default (aiida-vasp/src/aiida_vasp/workchains/ at develop · aiida-vasp/aiida-vasp · GitHub). This means that the working directory is cleaned after a successful run. So if the process list mentioned above shows your WorkChain with exit status 0, everything went well, and you can inspect the outputs via the Python API or the command line, as indicated in the tutorial.

To keep the working directory after completion, you can simply set

inputs.clean_workdir = Bool(False)

In this way, you should be able to still go to the working directory after completion using
verdi calcjob gotocomputer <pk> (where pk is the pk of your CalcJob which was launched by the VaspWorkChain).

Don’t hesitate to ask for additional support.


I am grateful to you. It worked.