Hello everyone,
As I was trying to estimate the memory requirements for a future CASPT2 job, I failed to understand the relation between the estimated memory requirements printed below and the assigned value of MOLCAS_MEM (112500 MB in the present case, using only one thread on one process).
Estimated memory requirements:
POLY3 : 12590917656
RHS: 1324472
SIGMA : 2359744
PRPCTL: 0
Available workspace: 12647109326
In which units are these quantities expressed? I would have assumed bytes if nothing else is specified, but in that case the value of the MOLCAS_MEM variable would be about 10 times greater than the reported available workspace. I do not understand the difference here.
By the way, I am using an OpenMP build of OpenMolcas 21.02 without GA.
I'd guess it's in 8-byte words (the space a single 64-bit integer/real takes).
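As a quick back-of-the-envelope check (just a sketch: it assumes the printed numbers are 8-byte words and reads MOLCAS_MEM=112500 as decimal megabytes):

    # Unit check: convert the reported workspace (in 8-byte words) to GB
    # and compare with MOLCAS_MEM (assumed to be decimal MB here).
    available_words = 12_647_109_326        # "Available workspace" from the output
    bytes_per_word = 8

    available_gb = available_words * bytes_per_word / 1e9   # ~101.2 GB
    molcas_mem_gb = 112500 * 1e6 / 1e9                      # 112.5 GB

    print(f"Available workspace: {available_gb:.1f} GB")
    print(f"MOLCAS_MEM:          {molcas_mem_gb:.1f} GB")
    # Read as bytes, MOLCAS_MEM would indeed look ~10x too large;
    # read as 8-byte words, the two values agree to within roughly 10%.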
Indeed, the order of magnitude checks out with 8-byte words. Thank you, I should have checked that to begin with.
I have an additional question, perhaps a little off-topic. For such jobs, where using multiple cores is usually not an option due to the large amount of memory needed, I wondered whether there is a significant gain from using a serial build of OpenMolcas instead of the OpenMP build on one core. I read something pointing in that direction in an old post, but I was curious to know whether that is still relevant for the current version.
OpenMP parallelization (multiple threads) does not use more memory, at least not as far as MOLCAS_MEM is concerned; the memory is shared by all threads. It happens within a single process/node, and (so far) only in the linear algebra libraries (e.g. MKL). It should mostly be safe and faster than a single thread.
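If you want to control the threading explicitly, here is a minimal launcher sketch (the input file name and thread counts are illustrative; the environment variables are the standard OpenMP/MKL ones):

    # Launch a single OpenMolcas process whose linear algebra libraries
    # may use several threads; all threads share one MOLCAS_MEM allocation.
    import os
    import subprocess

    env = os.environ.copy()
    env["MOLCAS_MEM"] = "112500"      # workspace for this one process, in MB
    env["OMP_NUM_THREADS"] = "4"      # threads available to OpenMP regions
    env["MKL_NUM_THREADS"] = "4"      # same limit for MKL specifically

    # "caspt2.input" is a placeholder input file name.
    subprocess.run(["pymolcas", "caspt2.input"], env=env, check=True)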
MPI/GA parallelization (multiple processes) runs a separate copy of the program on each process, which can be on different physical nodes. The MOLCAS_MEM value applies to each process, and each process uses its own separate memory.
Using a parallel (MPI) build of OpenMolcas with MOLCAS_NPROCS=1 should be equivalent to a serial build.
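To put the per-process point in numbers, a small sketch (the process count is purely illustrative):

    # Each MPI process allocates its own MOLCAS_MEM workspace, so the total
    # footprint grows linearly with the number of processes.
    molcas_mem_mb = 112500   # MOLCAS_MEM, allocated by *each* process
    nprocs = 2               # MOLCAS_NPROCS, illustrative value

    print(f"Total workspace requested: {molcas_mem_mb * nprocs} MB")
    # With MOLCAS_NPROCS=1 there is only one such allocation, which is why
    # the MPI build run on a single process behaves like a serial build.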
Thanks for clarifying these few points.