Hello,
I have installed AiiDA on a local institutional cluster and ran the following commands in order (a quick sanity check of what ended up stored is sketched right after the list):
- verdi computer setup --config baseline.yml
baseline.yml:
label: "localhost"
hostname: "localhost"
transport: "core.local"
scheduler: "core.slurm"
work_dir: "/gpfs/wolf2/cades/phy191/scratch/ksu/aiida"
mpirun_command: "srun -n {tot_num_mpiprocs}"
mpiprocs_per_machine: 128
prepend_text: |
  #SBATCH -p batch
  #SBATCH -A PHY191
  module purge; module load DefApps intel/20.0.4 openmpi/4.0.4 hdf5/1.14.3
  export OMP_NUM_THREADS=1
I also chose the following options in the interactive setup after invoking the command above:
Description []:
Shebang line (first line of each script, starting with #!) [#!/bin/bash]:
Default amount of memory per machine (kB).: 10000000
Escape CLI arguments in double quotes [y/N]: N
- verdi -p ks computer configure core.local localhost
Use login shell when executing command [Y/n]: N
Connection cooldown time (s) [0.0]:
- verdi code create core.code.installed -n --computer localhost --label pw --default-calc-job-plugin quantumespresso.pw --filepath-executable pw.x
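For reference, this is how I sanity-check what ended up in the database after the three commands above (a minimal sketch, run in verdi shell or as a script; the expected values in the comments are just what I typed during setup):

# Check the stored computer and code (sketch)
from aiida import load_profile
from aiida.orm import load_computer, load_code

load_profile('ks')

computer = load_computer('localhost')
print(computer.get_mpirun_command())                # ['srun', '-n', '{tot_num_mpiprocs}']
print(computer.get_default_mpiprocs_per_machine())  # 128
print(computer.is_configured)                       # True after `verdi computer configure`

code = load_code('pw@localhost')
print(code.filepath_executable)                     # pw.x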
Then I ran the example calculation from the aiida-quantumespresso plugin:
aiida-quantumespresso calculation launch pw -X pw@localhost -F SSSP/1.1/PBE/efficiency
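For completeness, my understanding is that this CLI one-liner is roughly equivalent to building the calculation by hand through the Python API, along these lines (a sketch only: the silicon structure, cutoffs and k-point mesh are illustrative placeholders, not necessarily what the CLI uses):

# Rough Python-API equivalent of the CLI launch above (sketch)
from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import CalculationFactory

load_profile('ks')

PwCalculation = CalculationFactory('quantumespresso.pw')

# fcc silicon, purely for illustration
structure = orm.StructureData(cell=[[2.715, 2.715, 0.0],
                                    [2.715, 0.0, 2.715],
                                    [0.0, 2.715, 2.715]])
structure.append_atom(position=(0.0, 0.0, 0.0), symbols='Si')
structure.append_atom(position=(1.3575, 1.3575, 1.3575), symbols='Si')

kpoints = orm.KpointsData()
kpoints.set_kpoints_mesh([4, 4, 4])

pseudo_family = orm.load_group('SSSP/1.1/PBE/efficiency')

builder = PwCalculation.get_builder()
builder.code = orm.load_code('pw@localhost')
builder.structure = structure
builder.kpoints = kpoints
builder.parameters = orm.Dict({
    'CONTROL': {'calculation': 'scf'},
    'SYSTEM': {'ecutwfc': 30.0, 'ecutrho': 240.0},
})
builder.pseudos = pseudo_family.get_pseudos(structure=structure)
builder.metadata.options.resources = {'num_machines': 1}
builder.metadata.options.max_wallclock_seconds = 30 * 60
# `withmpi` controls whether the computer's mpirun_command (srun ...) is
# prepended to the executable in the submission script
builder.metadata.options.withmpi = True

node = submit(builder)
print(f'Submitted PwCalculation<{node.pk}>')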
In the folder where the calculation is submitted, this is the Slurm submission script that was generated:
#!/bin/bash
#SBATCH --no-requeue
#SBATCH --job-name="aiida-221"
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=_scheduler-stderr.txt
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --time=00:30:00
#SBATCH --mem=9765
#SBATCH -p batch
#SBATCH -A PHY191
module purge; module load DefApps intel/20.0.4 openmpi/4.0.4 hdf5/1.14.3
export OMP_NUM_THREADS=1
'pw.x' '-in' 'aiida.in' > 'aiida.out'
For some reason, the srun prefix from mpirun_command is not picked up (the last line calls pw.x directly, without srun), and that causes the calculation to fail. If I add the srun command to the script by hand and submit it manually, it runs with no error.
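In case it is useful, this is how I inspect what is stored on the failed calculation node and on its computer (a sketch run in verdi shell; PK 221 is taken from the job name above):

# Inspect the failed calculation (sketch)
from aiida import load_profile
from aiida.orm import load_node

load_profile('ks')

calc = load_node(221)  # PK from the job name "aiida-221"
print(calc.computer.get_mpirun_command())  # ['srun', '-n', '{tot_num_mpiprocs}']
print(calc.get_options().get('withmpi'))   # option that decides whether the mpirun command is prepended
print(calc.get_options().get('resources'))

Thank you for the help!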
Kayahan