Support and discussions for Molcas and OpenMolcas users and developers
Dear Users,
Is there a simple rule of thumb (e.g. "10 MB per basis function") to estimate the peak memory requirements of SEWARD?
With such a rule, there would be no trial-and-error search for the optimal MOLCASMEM.
In some cases, if the memory limit is very low, the code complains right from the start.
Sometimes, however, the failure happens late in the calculation, resulting in a lot of wasted time.
(Some of my SEWARD runs take a day.)
Also, is there a precise way to make this estimate from the start, e.g. by performing all of the memory allocation calls before doing any actual work?
Thank you.
Andrew
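As a rough illustration of why such a rule is hard to state per basis function, here is a back-of-the-envelope sketch of conventional two-electron integral storage. It assumes 8-fold permutational symmetry and 8 bytes per value, and ignores point-group symmetry, screening, and batching; for conventional runs this bounds the integral file size more than the in-core footprint. The function name is illustrative, not a Molcas routine.

# Crude estimate of conventional two-electron integral storage.
# Assumes 8-fold permutational symmetry and 8 bytes (one double)
# per integral; ignores point-group symmetry and screening, so it
# is an upper bound, not the exact SEWARD working memory.

def eri_storage_bytes(n_basis: int) -> int:
    """Storage for the unique (ij|kl) integrals, in bytes."""
    n_pairs = n_basis * (n_basis + 1) // 2    # unique (ij) pairs
    n_unique = n_pairs * (n_pairs + 1) // 2   # unique (ij|kl) quadruples
    return n_unique * 8                       # 8 bytes per double

if __name__ == "__main__":
    for n in (100, 500, 1000):
        gib = eri_storage_bytes(n) / 2**30
        print(f"{n:5d} basis functions -> ~{gib:,.1f} GiB")

The steep quartic growth is one reason a single "MB per basis function" figure does not exist, and why CD/RI take a different approach entirely.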
*If* you use conventional integrals, yes, such a rule can be suggested. But [unless something has changed recently] for CD and RI the code tries to figure out the maximum available memory and use all of it. This is why the routine computing MAXMEM cheats and returns a slightly smaller value. And I can imagine that F90 allocations help to use memory outside the Molcas memory manager.
So, I would suggest a very simple trick: locate the place where MAXMEM is computed and reduce the amount of available memory it reports. CD/RI will consume a bit less, which will (not significantly) reduce the speed, but it will leave more memory for other uses, so the code stops crashing.
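To illustrate the principle behind that trick (this is not the actual Molcas code): shrink whatever the memory probe reports by a fixed safety margin before handing it to the integral code. A minimal Python sketch, assuming the psutil package is available; the function name and the 10% margin are hypothetical choices, not values from the Molcas source.

import psutil

def max_usable_bytes(headroom_fraction: float = 0.10) -> int:
    """Report available memory minus a safety margin.

    Mirrors the suggested hack: make the MAXMEM-style probe
    claim slightly less memory than is really free, so CD/RI
    grabs a bit less and leaves room for allocations made
    outside the memory manager.
    """
    available = psutil.virtual_memory().available
    return int(available * (1.0 - headroom_fraction))

print(f"usable: {max_usable_bytes() / 2**30:.1f} GiB")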
Thank you very much. I'll try to implement such a hack. Also, sorry for the delayed reply.
Last edited by andrewshyichuk (2023-09-12 15:02:47)