I want to compile OpenMolcas with IntelMPI, which I've selected on our HPC cluster using mpi-select.
Then I run
cmake -D LINALG=MKL -D MKLROOT=/opt/intel/composerxe/mkl -D MPI=ON -D OPENMP=ON -D GA=ON -D PYTHON_EXECUTABLE=/opt/shared/anaconda/anaconda3/bin/python ../OpenMolcas-master
from the build directory.
CMake gives me (the rest of the output is omitted for clarity):
Configuring with MPI parallellization:
-- Found MPI_C: /usr/lib64/mpi/gcc/openmpi/lib64/libmpi.so (found version "3.0")
-- Found MPI_Fortran: /usr/lib64/mpi/gcc/openmpi/lib64/libmpi_usempi.so (found version "3.0")
-- Found MPI: TRUE (found version "3.0")
-- MPI_C_INCLUDE_PATH: /usr/lib64/mpi/gcc/openmpi/include
-- MPI_Fortran_INCLUDE_PATH: /usr/lib64/mpi/gcc/openmpi/include;/usr/lib64/mpi/gcc/openmpi/lib64
-- MPI_C_LIBRARIES: /usr/lib64/mpi/gcc/openmpi/lib64/libmpi.so
-- MPI_Fortran_LIBRARIES: /usr/lib64/mpi/gcc/openmpi/lib64/libmpi_usempi.so;/usr/lib64/mpi/gcc/openmpi/lib64/libmpi_mpifh.so;/usr/lib64/mpi/gcc/openmpi/lib64/libmpi.so
-- MPIEXEC: /usr/lib64/mpi/gcc/openmpi/bin/mpiexec
-- MPI_IMPLEMENTATION: openmpi
even though IntelMPI is active (I've checked it using
which mpirun
and it gave me the IntelMPI directory).
So, is there a CMake variable to specify which MPI implementation to use?
Offline
Try setting MPI_C_COMPILER and MPI_Fortran_COMPILER to the IntelMPI wrappers (probably mpiicc & mpiifort).
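For example, a sketch based on the command you posted (the wrapper names and paths may differ on your cluster):
cmake -D LINALG=MKL -D MKLROOT=/opt/intel/composerxe/mkl -D MPI=ON -D OPENMP=ON -D GA=ON -D MPI_C_COMPILER=mpiicc -D MPI_Fortran_COMPILER=mpiifort -D PYTHON_EXECUTABLE=/opt/shared/anaconda/anaconda3/bin/python ../OpenMolcas-master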
Offline
I've tried it; it picks up the right MPI libraries, but it still identifies the MPI implementation incorrectly and uses the wrong mpiexec:
-- MPI_C_INCLUDE_PATH: /mnt/storage/opt/intel/impi/5.0.1.035/intel64/include
-- MPI_Fortran_INCLUDE_PATH: /mnt/storage/opt/intel/impi/5.0.1.035/intel64/include
-- MPI_C_LIBRARIES: /mnt/storage/opt/intel/impi/5.0.1.035/intel64/lib/libmpifort.so;/mnt/storage/opt/intel/impi/5.0.1.035/intel64/lib/release/libmpi.so;/mnt/storage/opt/intel/impi/5.0.1.035/intel64/lib/libmpigi.a;/usr/lib64/libdl.so;/usr/lib64/librt.so;/usr/lib64/libpthread.so
-- MPI_Fortran_LIBRARIES: /mnt/storage/opt/intel/impi/5.0.1.035/intel64/lib/libmpifort.so;/mnt/storage/opt/intel/impi/5.0.1.035/intel64/lib/release/libmpi.so;/mnt/storage/opt/intel/impi/5.0.1.035/intel64/lib/libmpigi.a;/usr/lib64/libdl.so;/usr/lib64/librt.so;/usr/lib64/libpthread.so
-- MPIEXEC: /usr/lib64/mpi/gcc/openmpi/bin/mpiexec
-- MPI_IMPLEMENTATION: openmpi
I'll try substituting the mpiexec in the configuration files.
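Something like this, maybe (a guess: judging by the output above, the cache entry is called MPIEXEC, and I'm assuming the IntelMPI mpiexec lives under intel64/bin):
sed -i 's|^MPIEXEC:FILEPATH=.*|MPIEXEC:FILEPATH=/mnt/storage/opt/intel/impi/5.0.1.035/intel64/bin/mpiexec|' CMakeCache.txt
and then re-run cmake.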
Offline
You could also try setting MPIEXEC_EXECUTABLE and/or MPI_HOME or other variables. See https://cmake.org/cmake/help/latest/module/FindMPI.html
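For example, something like this (untested; the MPI_HOME value is just the IntelMPI install prefix visible in your output):
cmake -D MPIEXEC_EXECUTABLE=/mnt/storage/opt/intel/impi/5.0.1.035/intel64/bin/mpiexec -D MPI_HOME=/mnt/storage/opt/intel/impi/5.0.1.035/intel64 [other options as before] ../OpenMolcas-master
Note that MPIEXEC_EXECUTABLE requires a reasonably recent CMake (3.10+; older versions call it MPIEXEC), and you may have to start from a clean build directory (or delete CMakeCache.txt) so the cached OpenMPI results aren't reused.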
Offline
After a lot of research and trial and error, I've found that on our cluster IntelMPI was built with GCC, not with the Intel compilers.
That means I cannot compile OpenMolcas with the Intel compilers, so I'll stick with GCC and the OpenMPI installation.
Offline