Hi everybody,
I want to do a CASSCF geometry optimization for the ground and first singlet excited states. I'm using the job script below to run it, but judging from the output file it seems to run in serial rather than in parallel: nothing in the output indicates a parallel run, and adding more cores does not speed up the calculation. Maybe I need to add some other lines to my script to make it run in parallel. Has anyone done a CASSCF optimization in parallel and could help me? Thank you so much.
#!/bin/sh
# embedded options to qsub - start with #PBS
# -- Name of the job ---
#PBS -N molcas
# -- specify queue --
#PBS -q hpc
# -- estimated wall clock time (execution time): hh:mm:ss --
#PBS -l walltime=72:00:00
# -- number of processors/cores/nodes --
#PBS -l nodes=1:ppn=20,mem=30gb
module load molcas/80.openmpi
cd $PBS_O_WORKDIR
# here follow the commands you want to execute
export Project=structure
export WorkDir=/my/directory/$Project
export MOLCAS_MEM=30000
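# MOLCAS_NPROCS tells the molcas driver how many MPI processes to launch;
# $PBS_NP should hold the total number of cores PBS allocated to the job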
export MOLCAS_NPROCS=$PBS_NP
# Run Gateway
molcas gateway -f
Best,
Mostafa
First of all, you need Molcas compiled with parallel (MPI) support. Refer to the "configuration info" section that should appear in your output file after the banner. This is what mine looks like:
configuration info
------------------
C Compiler ID: GNU
C flags: -std=gnu99 -Wall -Werror
Fortran Compiler ID: GNU
Fortran flags: -cpp -fno-aggressive-loop-optimizations -fdefault-integer-8 -Wall -Werror
Definitions: _MOLCAS_;_I8_;_LINUX_;_GA_;_MOLCAS_MPP_;_DELAYED_;_FDE_
Parallel: ON (GA=ON)
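A quick way to check is to grep for that line in the output (assuming it ends up in a file like structure.log; the exact name depends on how you run the driver):
grep "Parallel:" structure.log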
Then, at the beginning of each program you should have a header. For example:
Molcas compiled without parallel support:
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
&GATEWAY
only a single process is used
available to each process: 2.0 GB of memory, 1 thread
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
Molcas compiled with parallel support, but MOLCAS_NPROCS=1:
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
&GATEWAY
only a single process is used, running in SERIAL mode
available to each process: 2.0 GB of memory, 1 thread
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
Actual parallel run:
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
&GATEWAY
launched 2 MPI processes, running in PARALLEL mode (work-sharing enabled)
available to each process: 2.0 GB of memory, 1 thread
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
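If you just want to confirm the run mode without scrolling through the output, grepping the module headers works too (same assumption about the output file name):
grep -E "SERIAL mode|PARALLEL mode" structure.log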
Other than that, please note that MOLCAS_MEM is the memory requested per process, so it looks like you are requesting 20 × 30 GB = 600 GB of memory...
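For example, something along these lines (just a sketch based on the 30 GB / 20-core request in your script; adjust to what your node actually has) keeps the total within the node's memory:
TOTAL_MEM_MB=30000                            # total memory requested from PBS, in MB
NPROCS=$PBS_NP                                # number of MPI processes, 20 in your script
export MOLCAS_MEM=$((TOTAL_MEM_MB / NPROCS))  # per-process memory, here 1500 MB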
Dear Ignacio,
Thank you so much for your fast and helpful answer. I asked our IT people to compile the MPI version of Molcas with OpenMPI, but when they ran a test calculation they didn't see any configuration info in the log file, and it crashed after a few minutes. We have Molcas 8.0 update 1. I wonder whether it's a bug in Molcas, whether we are doing something wrong, or whether we should use the newest version.
Thanks
Best,
Mostafa
There are many things that can go wrong in parallel. Make sure to read the sticky post. For further support, I advise you to try the support at molcas.org.