Molcas Forum

Support and discussions for Molcas and OpenMolcas users and developers



#1 2019-05-22 01:24:33

luohancfd
Member
Registered: 2019-05-22
Posts: 3

A guide for compilation with Intel toolchain

I hope this can help someone having problems compiling with the Intel toolchain. By "Intel toolchain" I mean you have Intel MPI (impi), MKL and the Intel compilers installed on the machine. You should be able to run

mpiifort, mpif77, mpiicc, and mpiicpc

before reading further.

  1. Compile Global Arrays (GA)

    First, download the source code from https://github.com/GlobalArrays/ga/releases and unpack the archive. Enter the folder and run the following command to configure the code:

    MPICXX=mpiicpc  MPICC=mpiicc MPIF77=mpiifort ./configure --with-blas=-mkl --prefix=${GA_ROOT} 

    Replace ${GA_ROOT} with the directory where you want GA to be installed. Next, run "make && make install" to build and install GA.
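
    For reference, the whole GA step then looks roughly like this (a sketch only: the release version and download URL below are examples, use whatever release you actually picked):

    # adjust the version/URL to the release you downloaded; ${GA_ROOT} is your install prefix
    wget https://github.com/GlobalArrays/ga/releases/download/v5.7/ga-5.7.tar.gz
    tar xzf ga-5.7.tar.gz && cd ga-5.7
    MPICXX=mpiicpc MPICC=mpiicc MPIF77=mpiifort ./configure --with-blas=-mkl --prefix=${GA_ROOT}
    make && make install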

  2. Compile OpenMolcas

    export GAROOT=${GA_ROOT}  # replace ${GA_ROOT} with the path used in the previous step
    cd ${SOME_DIRECTORY}
    git clone https://gitlab.com/Molcas/OpenMolcas.git

    Now, use a text editor to comment out lines 732 to 743 in CMakeLists.txt, i.e. the block starting with "# Intel versions prior to 15 used -openmp" (https://gitlab.com/Molcas/OpenMolcas/bl … s.txt#L732). On our cluster, CMAKE_CXX_COMPILER_VERSION does not report the Intel compiler version but that of the GNU compilers, which is why this block has to be disabled.
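
    If you prefer doing this from the command line, a sed one-liner like the one below should work. It assumes the line numbers still match your checkout, so check that lines 732 to 743 really are the "-openmp" block before running it:

    sed -i '732,743 s/^/#/' CMakeLists.txt   # prefix lines 732-743 with '#'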

    Then run the following

    mkdir -p build
    CC=mpiicc FC=mpiifort CXX=mpiicpc cmake -DCMAKE_Fortran_COMPILER_ID=Intel -DCMAKE_C_COMPILER_ID=Intel -DLINALG=MKL -DMPI=ON -DOPENMP=ON -DGA=ON -DHDF5=ON -DCMAKE_INSTALL_PREFIX=~/Soft/OpenMolcas ../OpenMolcas

    Use a text editor to edit the file CMakeCache.txt. You may need to remove entries like /compilers_and_libraries_2019.1.144/linux/mpi/intel64/include/gfortran; if they are left in, the compiler cannot find the correct MPI library for the Fortran code.
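
    To spot such stale entries quickly, a simple grep works (just a suggestion, your cache may look different):

    grep -n 'gfortran' CMakeCache.txt   # inspect any MPI include paths pointing at the gfortran variant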

    Finally, you can compile

    make


#2 2019-05-22 08:27:01

Ignacio
Administrator
From: Uppsala
Registered: 2015-11-03
Posts: 1,012

Re: A guide for compilation with Intel toolchain

Thank you. Doesn't this help with the CMake compiler detection? (from the wiki)

Known problems: Sometimes CMake may use wrong MPI wrappers to detect the configuration, which may result in a faulty compilation. To override the MPI wrappers used you can specify them with the options MPI_Fortran_COMPILER, MPI_C_COMPILER and MPI_CXX_COMPILER.

This means running:

cmake -D MPI_Fortran_COMPILER=mpiifort -D MPI_C_COMPILER=mpicc -D MPI_CXX_COMPILER=mpiicpc [your other options]


#3 2019-05-22 20:05:06

luohancfd
Member
Registered: 2019-05-22
Posts: 3

Re: A guide for compilation with Intel toolchain

Ignacio wrote:

Thank you. Doesn't this help with the CMake compiler detection? (from the wiki)

Known problems: Sometimes CMake may use wrong MPI wrappers to detect the configuration, which may result in a faulty compilation. To override the MPI wrappers used you can specify them with the options MPI_Fortran_COMPILER, MPI_C_COMPILER and MPI_CXX_COMPILER.

This means running:

cmake -D MPI_Fortran_COMPILER=mpiifort -D MPI_C_COMPILER=mpicc -D MPI_CXX_COMPILER=mpiicpc [your other options]

Thanks Ignacio! I just learned about these options and they work. But I still have an issue when I compile with MPI only, as follows:

cmake -DMPI_Fortran_COMPILER=mpiifort -DMPI_C_COMPILER=mpiicc -DLINALG=MKL -DMPI=ON -DGA=ON -DHDF5=ON -DCMAKE_INSTALL_PREFIX=~/Soft/OpenMolcas/impi ../OpenMolcas

The program builds successfully, but when I run pymolcas verify, only case 0 passes and an error like the following is thrown for the other cases, no matter what I set MOLCAS_NPROCS to.

 Error in `/scratch/rice/l/xxxx/Molcas/build/bin/seward.exe': free(): invalid next size (fast): 0x000000000f885cc0 ***
======= Backtrace: =========
/usr/lib64/libc.so.6(+0x81489)[0x2ac29e61b489]
/scratch/rice/l/xxxx/Molcas/build/bin/seward.exe[0x9f6088]
/scratch/rice/l/xxxx/Molcas/build/bin/seward.exe[0x4cbbe1]
/scratch/rice/l/xxxx/Molcas/build/bin/seward.exe[0x71d992]
/scratch/rice/l/xxxx/Molcas/build/bin/seward.exe[0x64e1e5]
/scratch/rice/l/xxxx/Molcas/build/bin/seward.exe[0x47c118]
/scratch/rice/l/xxxx/Molcas/build/bin/seward.exe[0x40dfa9]
/scratch/rice/l/xxxx/Molcas/build/bin/seward.exe[0x407c00]
/scratch/rice/l/xxxx/Molcas/build/bin/seward.exe[0x407b9e]
/usr/lib64/libc.so.6(__libc_start_main+0xf5)[0x2ac29e5bc3d5]
/scratch/rice/l/xxxx/Molcas/build/bin/seward.exe[0x407aa9]
======= Memory map: ========
00400000-00c05000 r-xp 00000000 af2:9f396 144139386374637884             /scratch/rice/l/xxxx/Molcas/build/bin/seward.exe
00e05000-00e06000 r-xp 00805000 af2:9f396 144139386374637884             /scratch/rice/l/xxxx/Molcas/build/bin/seward.exe
00e06000-00ff0000 rwxp 00806000 af2:9f396 144139386374637884             /scratch/rice/l/xxx/Molcas/build/bin/seward.exe

I have to add -DOPENMP=ON to make it work. Do you have any idea? Thank you!


#4 2019-05-22 21:40:31

luohancfd
Member
Registered: 2019-05-22
Posts: 3

Re: A guide for compilation with Intel toolchain

luohancfd wrote:

    The program builds successfully, but when I run pymolcas verify, only case 0 passes [...] I have to add -DOPENMP=ON to make it work. Do you have any idea? Thank you!


I figured out the problem by switching from Intel 16.x to Intel 19.x. It seems to be an issue with the compiler.


#5 2019-06-04 09:53:54

lise
Member
From: Orsay
Registered: 2019-02-14
Posts: 11

Re: A guide for compilation with Intel toolchain

In Chapter 2.2, Parallel Installation, of ManualOpenMolcas18_09.pdf it is written:

NOTE: Open MPI versions older than v1.6.5 are not supported. More specifically, only Open MPI v1.6.5 and v1.8.1 are tested and known to work correctly with Molcas.

Following the documentation, do I have to use my openmpi/1.6.5/intel/14.0.2 module (and build GA with it) for the parallel version of OpenMolcas? I would rather use openmpi/1.8.4/intel/14.0.5-64bit (maybe this version has been tested by now).

Thank you

Lise


#6 2019-06-04 10:04:27

Ignacio
Administrator
From: Uppsala
Registered: 2015-11-03
Posts: 1,012

Re: A guide for compilation with Intel toolchain

There haven't been any further (official) parallel tests recently, as far as I know.

You can of course compile with any version you like, but in any case it is advised that you run the verification (with MOLCAS_NPROCS > 1) and evaluate any possible failures.
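
For example, something along these lines (the process count is just an illustration):

export MOLCAS_NPROCS=2   # any value > 1 exercises the parallel code paths
pymolcas verify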


#7 2019-06-04 10:32:22

lise
Member
From: Orsay
Registered: 2019-02-14
Posts: 11

Re: A guide for compilation with Intel toolchain

I prefer to be on the safe side!! I will use Open MPI v1.6.5.
Best regards


#8 2019-06-04 10:51:14

Ignacio
Administrator
From: Uppsala
Registered: 2015-11-03
Posts: 1,012

Re: A guide for compilation with Intel toolchain

That's a false sense of security. You should always test your compilation. Whatever testing is referred to in the manual was done long ago; many things have changed in the code since then.


#9 2019-06-06 16:47:25

nikolay
Member
From: Stuttgart
Registered: 2016-03-21
Posts: 54

Re: A guide for compilation with Intel toolchain

I have everything compiled with the Intel toolchain using the following commands (most probably with some overkill). As stated in the wiki, GA should be compiled with the --enable-i8 option:

export GA_ROOT= # Where it should be installed
./configure MPICC=mpiicc MPIFC=mpiifort MPICXX=mpiicpc --enable-i8 --with-blas8=$MKL_ROOT/lib --with-mpi --prefix=$GA_ROOT
make
make install

HDF5:

export HDF5_SRC= # The HDF5 source
export HDF5_ROOT= # Where it should be installed
FC=mpiifort F9X=mpiifort CC=mpicc $HDF5_SRC/configure --prefix=$HDF5_ROOT --enable-fortran --enable-fortran2003 --enable-parallel
make
make install

OpenMolcas:

export FC=mpiifort CC=mpiicc CXX=mpiicpc I_MPI_F77=ifort I_MPI_F90=ifort I_MPI_FC=ifort GAROOT=$GA_ROOT HDF5_ROOT=$HDF5_ROOT
cmake -DHDF5=ON -DLINALG=MKL -DGA=ON -DMPI=ON  -DOPENMP=ON -DPYTHON_EXECUTABLE:FILEPATH=~/anaconda3/bin/python -DMPI_LAUNCHER="$(which mpiexec.hydra) -np \$MOLCAS_NPROCS" $MOLCAS_SRC
make -j
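
Note that $MKL_ROOT above is not set by these snippets; on a typical Intel installation you can get it from the MKL environment script (the path and the variable name it exports depend on your installation, so treat this as a sketch):

source /opt/intel/mkl/bin/mklvars.sh intel64   # exports MKLROOT
export MKL_ROOT=$MKLROOT                       # map it to the name used in the configure line above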

Last edited by nikolay (2019-06-07 20:01:19)

