Dear all,
I am trying to do a geometry optimization in RASSCF with Cholesky decomposition,
and I am wondering whether Molcas provides analytical gradients for it.
I found a topic from 2011 that said:
One of the new features in Molcas 7.x is the 1-center Cholesky decomposition, for which analytical gradients are available in CASSCF geometry optimization.
But I cannot find a related topic in the manual, nor can I get the analytical gradient to work.
Molcas automatically falls back to calculating the numerical gradient.
Can I do this with Molcas?
Best,
M. H.
Offline
You have to use RICD (no "Cholesky" keyword) in GATEWAY and add the keyword "DoAnalytical" in SEWARD (this should disappear in the future).
Also remember that it only works for CASSCF wavefunctions, i.e., RAS1=0, RAS3=0.
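For reference, a minimal optimization loop along these lines might look like the following (the coordinate file, basis set, and active space are placeholders I made up, not values from this thread):
&GATEWAY
Coord = molecule.xyz
Basis = ANO-RCC-VDZP
RICD
>>> do while
&SEWARD
DoAnalytical
&RASSCF
Nactel = 2 0 0
Ras2 = 2
&ALASKA
&SLAPAF
>>> enddo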
Offline
Thanks for your kind reply; RICD works for a water molecule.
However, when I tried to optimize my own system (39 atoms, 507 basis functions), an error occurred in the ALASKA module.
Here's the output:
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
&ALASKA
launched 15 MPI processes, running in PARALLEL mode (work-sharing enabled)
available to each process: 5.5 GB of memory, 1 thread
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
###############################################################################
###############################################################################
### ###
### ###
### Error in Alaska_Super_Driver ###
### ###
### ###
###############################################################################
###############################################################################
RI SA-CASSCF analytical gradients do not work correctly in parallel (yet).
[ process 0]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 13]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 1]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 4]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 5]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 11]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 14]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 7]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 2]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 9]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 3]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 6]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 10]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 8]: xquit (rc = 128): _INTERNAL_ERROR_
[ process 12]: xquit (rc = 128): _INTERNAL_ERROR_
forrtl: severe (174): SIGSEGV, segmentation fault occurred
.....
.....
.....
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[20663,1],1]
Exit code: 174
--------------------------------------------------------------------------
--- Stop Module: alaska at Tue Feb 27 18:57:45 2018 /rc= _INTERNAL_ERROR_ ---
*** files: xmldump
saved to directory /temp/
And here is part of my input:
>export MOLCAS_PRINT=Verbose
&GATEWAY
RICD
>>> do while
&seward
DoAnalytical
&RASSCF
lumorb
symmetry=1
charge=0
spin=1
Nactel = 12 0 0
CiRoot= 2 2 1
&alaska
root=1
&slapaf
>>> enddo
I also tried using 8 threads and 15 GB of memory per core, and the same error messages were printed.
Is this a technical problem? For the water molecule at the same level of calculation, ALASKA and the subsequent modules run normally.
Thanks!
Offline
Just read the output:
RI SA-CASSCF analytical gradients do not work correctly in parallel (yet).
I guess you ran the water molecule in serial, which works.
Offline
I see.
I did the water calculation with a single state, not state-averaged.
So that's why it could run in parallel.
By the way, can I set the ALASKA program to use just 1 core while the other modules run in parallel?
Thanks again.
Offline
By the way, can I set the ALASKA program to use just 1 core while the other modules run in parallel?
No, the parallel environment must be constant throughout the calculation.
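For context, the number of MPI processes is normally fixed once for the whole run, e.g. through the MOLCAS_NPROCS environment variable (the value 15 is just illustrative, matching the run above), so it cannot be changed for a single module:
export MOLCAS_NPROCS=15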
Offline
Got it, thanks for your clear explanation.
Best,
M. H.
Last edited by MHHsieh (2018-03-01 05:45:48)
Offline
Dear Ignacio, I was wondering whether OpenMolcas supports RI SA-CASSCF analytical gradients now. In my case, I need to calculate gradients for several comparatively large molecules (more than 500 basis functions). Without the RICD keyword the computational efficiency is quite low: the SEWARD module alone takes almost an hour. Is there any other solution to this problem?
&GATEWAY
coord
test.xyz
basis
ano-rcc-vdzp
group
nosym
&SEWARD
doanalytical
END OF INPUT
&SCF
END OF INPUT
&RASSCF
spin
1
nactel
12 0 0
charge
0
ras2
10
ciroot
3 3 1
lumorb
END OF INPUT
&ALASKA
root
1
END OF INPUT
Last edited by David (2020-06-09 13:07:39)
Offline
OpenMolcas has supported analytical RI SA-CASSCF gradients since the beginning, as described above. Just not in parallel, and that hasn't changed.
Offline
You can compile the OpenMP version of OpenMolcas and run analytical RI SA-CASSCF gradients with OpenMP parallelism. Although it is not as fast as MPI parallelism, it is still much faster than the serial version.
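If it helps, my understanding is that this is selected at configure time, something along these lines (I am assuming the OPENMP and LINALG CMake options here, so please check the installation guide for your OpenMolcas version):
cmake -DOPENMP=ON -DLINALG=MKL /path/to/OpenMolcas
make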
Offline
Note that OpenMP parallelization only happens in the linear algebra library (MKL, OpenBLAS). You need a multithreaded version of the library.
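For example, with a multithreaded MKL or OpenBLAS linked in, the thread count is controlled by the usual environment variables (8 threads is just an example value):
export OMP_NUM_THREADS=8
export MKL_NUM_THREADS=8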
Offline