Molcas Forum

Support and discussions for Molcas and OpenMolcas users and developers



#1 2018-12-14 15:57:37

David
Member
Registered: 2017-05-16
Posts: 85

problem in numerical_gradient module

Hi everyone! I am running a QM/MM CASPT2 optimization in parallel, interfaced with Tinker, and the numerical_gradient module keeps aborting with an error. It is worth pointing out that the same input runs fine in serial. The MOLCAS version is 8.0. Has anyone encountered this problem before? Thanks very much!

&GATEWAY
 tinker

 basis
 ANO-S-MB

 group
 nosym
END OF INPUT

>>> Export MOLCAS_MAXITER=100

>>>>>>>>>>>>> DO WHILE <<<<<<<<<<<<<
&SEWARD
END OF INPUT

&ESPF
 external
 tinker mulliken
END OF INPUT

>>>>>>>>>>>>> IF ( ITER = 1 ) <<<<<<
&SCF
END OF INPUT
>>>>>>>>>>>>> ENDIF <<<<<<<<<<<<<<<<

&RASSCF
 title
 test for qmmm

 nactel
 4 0 0

 inactive
 36

 ras2
 4

 symmetry
 1

 spin
 1

 ciroot
 2 2 1

 rlxroot
 1

 lumorb
END OF INPUT

&CASPT2
 multistate
 2 1 2

 ipea
 0.0

 imag
 0.2
END OF INPUT

&ALASKA
END OF INPUT

&SLAPAF
 RHIDden = 10 Angstrom
END OF INPUT
>>> ENDDO <<<
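
For reference, the only difference between the working and the failing run is how the job is launched; a minimal sketch (the input file name here is a placeholder, and I am assuming the usual MOLCAS_NPROCS variable for the process count, so adjust to your installation):

  # serial run: finishes normally
  export MOLCAS_NPROCS=1
  molcas qmmm_caspt2.input    # placeholder file name

  # parallel run: numerical_gradient aborts (see the log in the next post)
  export MOLCAS_NPROCS=4
  molcas qmmm_caspt2.input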



#2 2018-12-14 15:59:21

David
Member
Registered: 2017-05-16
Posts: 85

Re: problem in numerical_gradient module

Maximal available memory for Molcas = 31456687232
--- Stop Module:  caspt2 at Thu Dec 13 21:50:25 2018 /rc=0 ---
--- Module caspt2 spent 49 seconds
***
--- Start Module: alaska at Thu Dec 13 21:50:38 2018
----------------------------------------------------------------------------------------------------
 DGA/MPI-2 Parallel Environment:    4 Molcas's processes are running on   1 node(s) x  4 cores each
----------------------------------------------------------------------------------------------------
--- Stop Module:  alaska at Thu Dec 13 21:50:39 2018 /rc= _INVOKED_OTHER_MODULE_ ---
***
--- Start Module: numerical_gradient at Thu Dec 13 21:50:50 2018
----------------------------------------------------------------------------------------------------
 DGA/MPI-2 Parallel Environment:    4 Molcas's processes are running on   1 node(s) x  4 cores each
----------------------------------------------------------------------------------------------------


()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
                           MOLCAS executing module NUMERICAL_GRADIENT with 30000 MB of memory
                                              at 21:50:50 Thu Dec 13 2018
                                Parallel run using   4 nodes, running replicate-data mode
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()



     Gradient is translational variant!
     Gradient is rotational variant!

 Root to use:                     1
 Number of internal degrees                               48
 Number of constraints                                     0
 Number of "hard" constraints                              0
 Number of displacements                                  96
 Effective number of displacements are                    96

--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.

The process that invoked fork was:

  Local host:          node6_ib (PID 25252)
  MPI_COMM_WORLD rank: 3

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD
with errorcode 128.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
MOLCAS error: Terminating!, code = 128
--- Stop Module:  numerical_gradient at Thu Dec 13 21:52:04 2018 /rc=                   -1 (Unknown) ---
--- Module numerical_gradient spent 1 minute and 14 seconds
 Aborting..
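
Note that the fork() message above is only a warning; the actual failure is the MPI_ABORT with errorcode 128. If one only wants to silence the warning, Open MPI accepts the MCA parameter named in the message; a minimal sketch (this hides the warning but does not fix the abort):

  # set it as an environment variable before launching the job...
  export OMPI_MCA_mpi_warn_on_fork=0
  # ...or pass it on the mpirun command line:
  # mpirun --mca mpi_warn_on_fork 0 <application>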


#3 2018-12-17 13:46:02

niko
Member
From: Marseille
Registered: 2015-11-08
Posts: 59
Website

Re: problem in numerical_gradient module

Hi,
Even though Tinker should be called only by the master process, I have never succeeded in running a QM/MM numerical_gradient in parallel. I have no clue what is going on. Sorry about that.

