Hello everyone!
I've been a Molcas user for years, and I'm familiar with installing and configuring the software on local machines. Now I had to install OpenMolcas on a cluster with SLURM (it's my first time). Although the software is properly installed and configured to run in parallel, I'm struggling to get it to run across the different nodes. My job script looks like this:
#!/bin/bash
#SBATCH -J h
#SBATCH -p partition
#SBATCH -t 09:00:00
#SBATCH -N 1
#SBATCH --tasks-per-node=4
#SBATCH --mem-per-cpu=16000
#SBATCH --exclusive
#SBATCH -o job-%j.stdout
#SBATCH -e job-%j.stderr
# Loading OpenMOLCAS
module load mpi/gcc/openmpi-4.1.0
module load openmolcas/pymolcas
# OpenMP settings
export OMP_NUM_THREADS=1
#OpenMOLCAS Settings
export MOLCAS_MEM=12000
export MOLCAS_NNODES=$SLURM_NNODES
export MOLCAS_NPROCS=$SLURM_NPROCS
export MOLCAS_WORKDIR=/tmp/openmolcas
#### start the calculation ####
pymolcas h.inp -oe h.log -b 1
When I run this script, the following error is displayed:
"/apps/mpi/gcc/openmpi-4.1.0/bin/mpiexec: error while loading shared libraries: libevent_core-2.1.so.6: cannot open shared object file: No such file or directory
parnell failed to create a WorkDir at /tmp/openmolcas/h"
The molcas.rte file contains the following lines:
# molcas runtime environment
OS='Linux-x86_64'
PARALLEL='ON'
DEFMOLCASMEM='2048'
DEFMOLCASDISK='20000'
RUNSCRIPT='$program $input'
RUNBINARY='/apps/mpi/gcc/openmpi-4.1.0/bin/mpiexec -n $MOLCAS_NPROCS $program'
RUNBINARYSER='$program'
I'd appreciate some help here. I know that after the calculation is done I have to clean up the tmp files, but I can't even get the calculation to run yet.
Best
Offline
Does the directory /tmp/openmolcas exist on all the nodes that are running the job? Typically you submit your calculation from a "login" node, and then the job runs on "compute" nodes. Each node has a local disk, and there's probably also access to some shared storage. My guess is that your /home is on the shared storage, but /tmp is on the local disk, so whatever you create in /tmp on the login node is not visible to the compute nodes.
(By the way, I don't think MOLCAS_NNODES does anything.)
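As a minimal sketch (untested), you could create the scratch directory from inside the job script, right before calling pymolcas, so it exists on the compute node(s) that actually run the job. This assumes /tmp is writable on the compute nodes and reuses the path from your script:

export MOLCAS_WORKDIR=/tmp/openmolcas
# create the directory on every node allocated to the job
srun --ntasks-per-node=1 mkdir -p "$MOLCAS_WORKDIR"
pymolcas h.inp -oe h.log -b 1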
Offline
Since version 23.06 of OpenMolcas, it always gives the same error in parallel calculations. It may be a bug. Please fix it.
Offline
I have solved it. The pymolcas file that gets created may have an error. One can use the old pymolcas created by an older version of OpenMolcas.
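For example, a minimal sketch assuming you still have access to an older OpenMolcas installation (the path below is hypothetical; adapt it to wherever your old driver lives):

# put the pymolcas driver from the older installation first in PATH
export PATH=/path/to/old-openmolcas/bin:$PATH
which pymolcas   # should now resolve to the old driver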
Offline