Molcas Forum

Support and discussions for Molcas and OpenMolcas users and developers


#1 2018-12-12 10:08:30

LucaBabetto
Member
Registered: 2018-11-21
Posts: 31

CASPT2 - segmentation fault

Hello,

I'm trying to run a CASPT2 calculation on a Eu(III) complex, but I keep getting segmentation fault errors.

I'm running OpenMolcas on a cluster, with GA/OpenMPI and OpenBLAS, configured as follows:

OpenMPI v2.1.5
OpenBLAS v0.2.20 (make flags: USE_OPENMP=1 NO_LAPACK=0 INTERFACE64=1 BINARY=64 NO_AVX2=1 DYNAMIC_ARCH=1 libs netlib shared)
GA v5.7 (configured with ../configure --enable-i8 --with-blas8="-L/path/to/openblas/lib -lopenblas")
OpenMolcas (configured with CC=mpicc FC=mpifort cmake -DMPI=ON -DGA=ON -DLINALG=OpenBLAS; I then enabled OpenMP from ccmake so it could use multithreading)
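
In shell form, the build sequence was roughly the following; this is a reconstruction of the steps above, with placeholder paths, and the install commands are assumed rather than copied from my notes:

# OpenBLAS v0.2.20: 64-bit integer interface, OpenMP threading
make USE_OPENMP=1 NO_LAPACK=0 INTERFACE64=1 BINARY=64 NO_AVX2=1 DYNAMIC_ARCH=1 libs netlib shared
make PREFIX=/path/to/openblas install

# GA v5.7: 64-bit integers (--enable-i8) to match INTERFACE64=1
../configure --enable-i8 --with-blas8="-L/path/to/openblas/lib -lopenblas"
make && make install

# OpenMolcas: MPI + GA parallelization, OpenBLAS for linear algebra
CC=mpicc FC=mpifort cmake -DMPI=ON -DGA=ON -DLINALG=OpenBLAS /path/to/OpenMolcas
make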

The input file for the calculation is:

&GATEWAY
  Coord = $Project.xyz
  Basis = ANO-RCC-VTZP
  Group = NoSym
  Douglas-Kroll
  AMFI
  ANGMOM; 2.709713 13.865543 27.340890
&SEWARD
  Cholesky
> COPY $CurrDir/CASSCF_QUINTUPLETS $Project.JobIph
&CASPT2
  Title = CASPT2 | Quintuplets
  Multistate = all
  MaxIter = 300
> COPY $Project.JobMix $CurrDir/CASPT2_QUINTUPLETS

Obviously, I had previously run a CAS(6,7) calculation, with the same input in the &GATEWAY section, which produced the CASSCF_QUINTUPLETS file; a sketch of that input follows.
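
The &RASSCF part of that input looked roughly like this (the Inactive and CiRoot values here are placeholders, not my actual ones):

&RASSCF
  Title = CASSCF | Quintuplets
  Spin = 5
* 6 active electrons, no holes in RAS1, no electrons in RAS3
  Nactel = 6 0 0
* 7 active orbitals (the Eu 4f shell)
  Ras2 = 7
* number of doubly occupied orbitals (placeholder)
  Inactive = nn
* state-average over the quintet roots (root count here is a placeholder)
  CiRoot = 5 5 1
> COPY $Project.JobIph $CurrDir/CASSCF_QUINTUPLETS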

This is the PBS job script (I removed uninformative details such as my cluster username, my email, the queue name, and the scratch directory path):

#!/bin/sh
#
#PBS -l nodes=1:ppn=4
#PBS -l mem=240gb
#

export MOLCAS_NPROCS=4
export OMP_NUM_THREADS=4
export MOLCAS_MEM=60000
export MOLCAS_PRINT=2

pymolcas $JOB_NAME.input > $JOB_NAME.log

So, I'm using 4 processes each with 60GB of RAM available, and I reserved 240GB of RAM on the node, which has 256GB of total RAM.

Despite each process having 60GB available, I keep getting this message:

--------------------------------------------------------------------------
mpiexec noticed that process rank 2 with PID 0 on node avogadro-86 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

I've already tried using only 1 process and allocating more RAM to it, which does avoid the crash, but the calculation then becomes far too slow: the last time I tried this, it ran for 12 days and then failed because it reached the maximum number of iterations in the CASPT2 module.

Is this calculation supposed to consume so much memory? Is there a way to reduce the RAM usage? Have I configured the program sub-optimally, so that it is using more RAM than it should?

The .xyz file for this calculation is:

43

Eu         2.70971       13.86554       27.34089
O          3.94998       14.17110       25.32917
C          3.58630       12.33434       23.86167
C          4.18674       13.55320       24.23740
O          2.27178       12.00571       25.82876
C          2.65528       11.64400       24.66514
O          1.92585       14.67313       29.44960
O          2.02164       16.10312       26.71049
C          2.53563       17.27118       26.77338
O          0.29452       14.08419       26.72441
H          0.15802       15.02412       26.49864
O          1.94748       11.98820       28.69925
O          4.47417       15.41579       28.05998
C          4.68259       16.66955       27.92840
C          1.42163       14.21928       30.53057
C          1.17477       12.85492       30.78698
O          4.90385       12.84030       28.09349
H          5.30547       13.67974       28.39930
C          1.44999       11.82325       29.86400
C          3.79335       17.58548       27.32888
C         -0.77104       13.25541       26.21892
H         -1.67264       13.42116       26.83120
H         -0.42829       12.22720       26.38296
C         -1.05149       13.50830       24.74841
H         -1.37035       14.54849       24.57852
H         -1.85964       12.84859       24.39997
H         -0.15734       13.31306       24.14112
C          5.32882       11.71950       28.90055
H          4.73158       11.68661       29.82612
H          6.38578       11.86409       29.17465
C          5.14732       10.44769       28.09898
H          5.76427       10.46887       27.19066
H          4.09546       10.32796       27.80861
H          5.43905        9.57808       28.70446
H          1.23152       10.85715       30.15376
H          0.77029       12.59627       31.70050
H          1.17755       14.89820       31.26848
H          2.25067       10.77551       24.28166
H          3.84184       11.92512       22.94938
H          4.86507       13.97549       23.58428
H          5.57427       17.04233       28.29030
H          1.98327       18.05135       26.38445
H          4.08925       18.57352       27.29352

Thank you for your help.


#2 2018-12-12 11:22:50

LucaBabetto
Member
Registered: 2018-11-21
Posts: 31

Re: CASPT2 - segmentation fault

Ah, also: I noticed that CASPT2 uses only 1 thread per process, while SEWARD, for instance, uses all the threads allowed by OMP_NUM_THREADS. Is this normal?


#3 2018-12-12 14:01:39

Ignacio
Administrator
From: Uppsala
Registered: 2015-11-03
Posts: 1,085

Re: CASPT2 - segmentation fault

OpenMolcas itself does not use multithreading; all multithreading occurs in the linear algebra library (OpenBLAS). For the rest, I'm afraid I cannot be of much help, but one of the recommendations is not to use more than ~75% of the total physical memory. In this case I'd assume "total physical memory" means the amount you reserve with PBS -l.
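
As a rough worked example under that guideline (my numbers, and assuming MOLCAS_MEM counts per process, as the 4 × 60 GB arithmetic in #1 suggests): 75% of the 240 GB reservation is 180 GB, so with 4 processes that would be about 45 GB each:

export MOLCAS_NPROCS=4
# ~75% of the 240 GB reservation, split over 4 processes (MOLCAS_MEM is in MB)
export MOLCAS_MEM=45000
# OpenBLAS built with USE_OPENMP=1 takes its thread count from OMP_NUM_THREADS
export OMP_NUM_THREADS=4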
