Dear all,
I am trying to optimize a transition-state structure on S1. First, I would like to ask whether this input can accomplish such a task:
> export MOLCAS_MAXITER = 100
> Do while
&SEWARD
DoAnalytical
&ESPF
External = Tinker
lamorok
&RASSCF; spin=1; nActEl=12 0 0; Inactive=76; Ras2=12
JobIph; ciroot = 3 3 1; rlxroot = 2
> COPY $WorkDir/$Project.JobIph $InpDir
> COPY $Project.JobIph $Project.JobOld
> COPY $WorkDir/$Project.RunFile $InpDir
&ALASKA
&SLAPAF
prfc
FindTS
TSConstraints
d1 = Dihedral C26 C27 C28 C29
d2 = Dihedral H46 C27 C28 C34
Values
d1 = -89.9 degrees
d2 = -89.9 degrees
End of TSConstraints
maxstep = 0.1
cartesian
rHidden = 10.0
> End Do
I thought that computing the gradient for root 2 would drive FindTS to look for a 2nd-order transition state.
I also have another question: what is the meaning of the numbers in brackets () next to the index of the Hessian matrix?
**********************************************************************************************************************
* Energy Statistics for Geometry Optimization *
**********************************************************************************************************************
Energy Grad Grad Step Estimated Geom Hessian
Iter Energy Change Norm Max Element Max Element Final Energy Update Update Index
1 -868.83000127 0.00000000 2.002153 1.033024 dEdx185 0.048489* lnm020 -868.93006523 RS-RFO None 0
2 -868.82749150 0.00250977 1.252956 0.778178 dEdx177 -0.053951* lnm119 -868.85880292 RS-RFO MSP 0
3 -868.82375596 0.00373554 0.070123 -0.040238 lnm185 -0.015692* lnm185 -868.82530410 RSIRFO MSP 1
4 -868.82484163 -0.00108567 0.050050 0.024209 lnm119 -0.026242* lnm020 -868.82651679 RSIRFO MSP 1
5 -868.82676886 -0.00192723 0.040982 0.022486 lnm119 0.020318* lnm119 -868.82777774 RSIRFO MSP 1
6 -868.82803368 -0.00126482 0.039985 0.021132 lnm119 0.039123* lnm006 -868.82996827 RSIRFO MSP 1
7 -868.82877549 -0.00074181 0.036540 0.019804 lnm119 -0.037504* lnm006 -868.83057403 RSIRFO MSP 1
8 -868.82647334 0.00230215 0.082184 0.046695 lnm185 0.045556* lnm119 -868.82949129 RSIRFO MSP 1
9 -868.82940517 -0.00293183 0.035694 -0.023072 lnm185 0.025676* lnm119 -868.83010522 RSIRFO MSP 1
10 -868.83033149 -0.00092631 0.121776 0.083780 lnm185 -0.030940* lnm006 -868.83265257 RSIRFO MSP 1(2)
11 -868.82880642 0.00152507 0.028781 -0.018277 lnm185 0.019724* lnm119 -868.82908249 RSIRFO MSP 1(2)
12 -868.83123804 -0.00243162 0.042171 0.023064 lnm185 0.026470 lnm001 -868.83153444 RSIRFO MSP 1(3)
As expected, once the Hessian has a negative eigenvalue, the constraints are released and the transition-state optimization should take place, but
I haven't found in the manual what kind of information is encapsulated in the numbers in brackets.
Thank you for your help in advance!
Leo wrote: I thought that computing the gradient for root 2 would drive FindTS to look for a 2nd-order transition state.
Why? By a second-order TS I guess you mean a second-order saddle point (two negative Hessian eigenvalues). When you ask for the gradient for root 2, you are simply exploring the potential energy surface of the second electronic state, but the TS is still a first-order saddle point.
Leo wrote: I also have another question: what is the meaning of the numbers in brackets () next to the index of the Hessian matrix?
The program will "fix" the approximate Hessian it has available in order to obtain the correct number of negative eigenvalues (0 for minima, 1 for a TS). The number in brackets is the number of negative eigenvalues before this fixing (printed only when it differs from the desired number). For example, the "1(2)" at iteration 10 means two negative eigenvalues were found, but the Hessian was fixed to an index of 1. You should not be too worried that the number in brackets is larger than 1, since this is an approximate Hessian, but you should always compute the real Hessian at convergence; it may be that you arrived at a higher-order saddle point.
Ignacio wrote: …you should always compute the real Hessian at convergence; it may be that you arrived at a higher-order saddle point.
I would also add that the QM/MM analytic Hessian is not available. A numerical Hessian should be feasible, albeit at a huge computational cost.
Dear Ignacio,
Thank you for your reply. Sorry for the mistake; I actually meant a first-order saddle point (transition state).
Can you explain to me, or point me to some reference on, how the algorithm works in "fixing" the Hessian?
Thanks a lot for your help.
Dear Niko,
niko wrote: I would also add that the QM/MM analytic Hessian is not available. A numerical Hessian should be feasible, albeit at a huge computational cost.
It might be a stupid question, but is it possible to have a more precise idea of what "huge computational cost" means?
Leo wrote: It might be a stupid question, but is it possible to have a more precise idea of what "huge computational cost" means?
Using a two-point formula, you need 2 analytic gradients per active (i.e., non-frozen) coordinate, that is, 2*3*N gradient calculations for N active atoms. In any case, if N is smaller than the total number of QM and MM atoms, the normal-mode analysis is valid only in the subspace spanned by the active nuclear coordinates.
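To make the bookkeeping concrete, here is a minimal NumPy sketch of that two-point scheme. It is an illustration only, not Molcas code, and the gradient callback is a hypothetical stand-in for whatever produces the analytic QM/MM gradient:

import numpy as np

# Minimal sketch (not Molcas code): a numerical Hessian from analytic
# gradients with a two-point central-difference formula. Each of the 3N
# active coordinates costs two gradient calls, i.e. 2*3*N calls in total.
# `gradient(x)` is a hypothetical callback returning the 3N-dimensional
# analytic gradient at geometry x (a flat NumPy array).
def numerical_hessian(gradient, coords, step=1.0e-3):
    n = coords.size                      # 3*N active coordinates
    hessian = np.empty((n, n))
    for i in range(n):
        displaced = coords.copy()
        displaced[i] += step
        g_plus = gradient(displaced)     # gradient call 1 for coordinate i
        displaced[i] -= 2.0 * step
        g_minus = gradient(displaced)    # gradient call 2 for coordinate i
        hessian[:, i] = (g_plus - g_minus) / (2.0 * step)
    # Finite-difference noise breaks exact symmetry, so symmetrize.
    return 0.5 * (hessian + hessian.T)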
Offline
Leo wrote: Can you explain to me, or point me to some reference on, how the algorithm works in "fixing" the Hessian?
I think it just changes the sign of the eigenvalue and proceeds with the "fixed" Hessian.
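If that is indeed the scheme, the idea would look something like this minimal NumPy sketch (an assumption about the idea, not SLAPAF's actual code); the count it records before fixing is the number that ends up in brackets:

import numpy as np

# Hedged sketch of the "fixing" described above: diagonalize the
# approximate Hessian and flip surplus negative eigenvalues so that
# exactly `target` remain (0 for a minimum, 1 for a TS). This mirrors
# the idea only; it is not necessarily SLAPAF's exact algorithm, and a
# real implementation would also handle n_negative < target (e.g.,
# forcing a negative mode for a TS search), omitted here for brevity.
def fix_hessian(hessian, target=1):
    eigvals, eigvecs = np.linalg.eigh(hessian)   # ascending eigenvalues
    n_negative = int(np.sum(eigvals < 0.0))      # index before fixing
    if n_negative > target:
        # Keep the `target` most negative modes, make the rest positive.
        eigvals[target:n_negative] = np.abs(eigvals[target:n_negative])
    fixed = eigvecs @ np.diag(eigvals) @ eigvecs.T
    return fixed, n_negative                     # n_negative = number in brackets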
Dear all,
Excuse me for re-opening this topic; I was not sure whether I should open a new one. And excuse me in advance for the large number of questions I will bring up, but I think they are somehow linked together, and I am clearly missing something.
The calculation I was talking about has recently converged to a minimum-energy structure. The final approximate Hessian has three negative eigenvalues.
Do I still need to compute the real Hessian? By that I mean, is there any chance that such a calculation will end up being an actual transition state with one imaginary frequency? If not, should I try to set different constraints for the TS optimization?
Could you explain why the algorithm can converge to an nth-order saddle point?
Leo wrote: Do I still need to compute the real Hessian? By that I mean, is there any chance that such a calculation will end up being an actual transition state with one imaginary frequency?
If you want to be sure, yes, you should compute the "exact" Hessian. It is quite possible that the approximate Hessian is wrong; even if it had only one negative eigenvalue, it could still be wrong, so you should compute the Hessian.
On the other hand, it may be enough for you to have a stationary point that links your desired reactants and products (which you can verify by running an IRC), and, since any lower-order saddle point would have a lower energy, you at least have an "upper bound".
Leo wrote: If not, should I try to set different constraints for the TS optimization?
Different constraints, a different starting structure, different settings (max step, Cartesian/internal coordinates, etc.), a different method (check "saddle" in GATEWAY)...
Leo wrote: Could you explain why the algorithm can converge to an nth-order saddle point?
The optimization will typically find the "closest" stationary point, regardless of its order. Minima are relatively easy to enforce, but saddle points are much trickier.