code_aster 14.4 parallel version with PETSc

This page is a note on the work to build the parallel version.

The Linux box is Xubuntu 18.04 LTS on VMware.

0.Preparation

Preparation is the same as for the serial Code_Aster 14.4 build.
Some packages are installed as follows.

$sudo apt-get install  gfortran g++ python-dev python-numpy liblapack-dev libblas-dev tcl tk zlib1g-dev bison flex checkinstall openmpi-bin libx11-dev cmake grace gettext libboost-all-dev swig

And install the SuperLU package.

$sudo apt-get install libsuperlu-dev
1.OpenBLAS

Get the source code from the author’s page, then execute the following procedure.

$ tar xfvz OpenBLAS-0.2.20.tar.gz
$ cd OpenBLAS-0.2.20
$ make NO_AFFINITY=1 USE_OPENMP=1
$ make PREFIX=/opt/OpenBLAS install
$ echo /opt/OpenBLAS/lib | sudo tee -a /etc/ld.so.conf.d/openblas.conf
$ sudo ldconfig
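
To verify that the dynamic linker now sees the library, a quick check is the following (only the shared libopenblas.so appears in the cache; the static libopenblas.a used later does not):

$ ldconfig -p | grep openblas
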
2.Code_Aster with OpenBLAS

First, the source code of code_aster is unpacked. Next, setup.cfg is modified according to the reference.
$ cd aster-full-src-14.4.0
$ sed -i "s:PREFER_COMPILER\ =\ 'GNU':PREFER_COMPILER\ =\ 'GNU_without_MATH'\nMATHLIB=\ '/opt/OpenBLAS/lib/libopenblas.a':g" setup.cfg
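
After the sed command runs, setup.cfg should contain these two lines:

PREFER_COMPILER = 'GNU_without_MATH'
MATHLIB= '/opt/OpenBLAS/lib/libopenblas.a'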

And

$ python3 setup.py install

After the build completes, make a host file for parallel calculation.

$ echo "$HOSTNAME cpu=$(cat /proc/cpuinfo | grep processor | wc -l)" > /opt/aster/etc/codeaster/mpi_hostfile
3.ScaLAPACK

$ tar xfvz scalapack_installer.tgz
$ cd scalapack_installer
$ ./setup.py --lapacklib=/opt/OpenBLAS/lib/libopenblas.a --mpicc=mpicc --mpif90=mpif90 --mpiincdir=/usr/lib/x86_64-linux-gnu/openmpi/include --ldflags_c=-fopenmp --ldflags_fc=-fopenmp --prefix=/opt/scalapack-n
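
If the installer finishes cleanly, the library should sit under the prefix given above; a quick check:

$ ls /opt/scalapack-n/lib/libscalapack.a
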
This part is finished.

4.ParMETIS

Get the source code and unpack it. The integer size is 8 bytes (64-bit) in this work.
Write “#define IDXTYPEWIDTH 64” and “#define REALTYPEWIDTH 32” in ~/parmetis-4.0.3/metis/include/metis.h, or add the following to metis.h:

#ifdef INTSIZE32
#define IDXTYPEWIDTH 32
#else
#define IDXTYPEWIDTH 64
#endif
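
To confirm which widths are active after editing, the header can simply be grepped:

$ grep -E "IDXTYPEWIDTH|REALTYPEWIDTH" ~/parmetis-4.0.3/metis/include/metis.h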

Then build:

$ tar xvf parmetis-4.0.3.tar.gz
$ cd parmetis-4.0.3
$ make config prefix=/opt/parmetis-4.0.3
$ make

Next, check the build.

$ cd Graphs
$ mpirun -np 8 ptest rotor.graph rotor.graph.xyz

If no error is reported, go to ~/parmetis-4.0.3 and execute the following:

$ make install

This part is finished.

5.Scotch and PT-Scotch

Unpack scotch-6.0.4-aster5.tar.gz, which is included in the code_aster package, into /opt.

$ cd /opt/scotch-6.0.4/src

The Makefile.inc used for this build is here.

$ make scotch esmumps ptscotch ptesmumps CCD=mpicc

After building, check it.

$ make check
$ make ptcheck
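
If both checks pass, the built libraries, including the esmumps interfaces that MUMPS links against, should be visible in the lib folder of the scotch tree (assuming the usual in-tree layout of this package):

$ ls /opt/scotch-6.0.4/lib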

This part is finished.

6.MUMPS

Pick mumps-5.1.2-aster6.tar.gz out of the code_aster package and unpack it. The folder name is changed to mumps-5.1.2_mob.

$ cd /opt/mumps-5.1.2_mob

The Makefile.inc used for this build is here.

$ make all

Next, go to ~/examples and check MUMPS, as shown below.
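
A minimal check, following the README under examples (the simpletest drivers and their input files ship with MUMPS):

$ cd /opt/mumps-5.1.2_mob/examples
$ mpirun -np 2 ./ssimpletest < input_simpletest_real
$ mpirun -np 2 ./csimpletest < input_simpletest_cmplx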

7.PETSc

petsc-3.9.4.tar.gz is downloaded from the PETSc site and unpacked in /opt. First, a few lines (lines 43 to 48) of metis.py in /opt/petsc-3.9.4/config/BuildSystem/config/packages are commented out:

  def configureLibrary(self):
    config.package.Package.configureLibrary(self)
    oldFlags = self.compilers.CPPFLAGS
    self.compilers.CPPFLAGS += ' '+self.headers.toString(self.include)
    # if not self.checkCompile('#include "metis.h"', '#if (IDXTYPEWIDTH != '+ str(self.getDefaultIndexSize())+')\n#error incompatible IDXTYPEWIDTH\n#endif'):
    #   if self.defaultIndexSize == 64:
    #     msg= '--with-64-bit-indices option requires a metis build with IDXTYPEWIDTH=64.\n'
    #   else:
    #     msg= 'IDXTYPEWIDTH=64 metis build appears to be specified for a default 32-bit-indices build of PETSc.\n'
    #   raise RuntimeError('Metis specified is incompatible!\n'+msg+'Suggest using --download-metis for a compatible metis')

When Open MPI is used, the path to the Open MPI libraries is added to LD_LIBRARY_PATH. The next line is a sample:

$ export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/openmpi/lib/:$LD_LIBRARY_PATH

Then, “configure” is run.

$ ./configure --with-debugging=0 COPTFLAGS=-O CXXOPTFLAGS=-O FOPTFLAGS=-O --with-shared-libraries=0 --with-scalapack-dir=/opt/scalapack-n --PETSC_ARCH=linux-metis-mumps --with-metis-dir=/opt/aster/public/metis-5.1.0 --with-parmetis-dir=/opt/parmetis-4.0.3 --with-ptscotch-dir=/opt/scotch-6.0.4 --LIBS="-lgomp" --with-mumps-dir=/opt/mumps-5.1.2_mob --with-x=0 --with-blas-lapack-lib=[/opt/OpenBLAS/lib/libopenblas.a] --download-hypre=yes --download-ml=yes

and “make”

$ make PETSC_DIR=/opt/petsc-3.9.4 PETSC_ARCH=linux-metis-mumps all
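
When the build finishes, the standard PETSc sanity tests can be run with the check target:

$ make PETSC_DIR=/opt/petsc-3.9.4 PETSC_ARCH=linux-metis-mumps check
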
8.Parallel version of Code_Aster

Go to the folder where the source code of code_aster is stored, and unpack aster-14.4.0.

$ cd ~/Install_Files
$ cd aster-full-src-14.4.0/SRC
$ tar xfvz aster-14.4.0.tgz
$ cd aster-14.4.0

Next, a part of ~/waftools/mathematics.py is modified to skip the check of BLACS: the BLACS calls in the test fragment are commented out (with Fortran “!” comments) so that the test program compiles trivially.

# program testing a blacs call, output is 0 and 1
blacs_fragment = r"""
program test_blacs
    integer iam, nprocs
!    call blacs_pinfo (iam, nprocs)
!    print *,iam
!    print *,nprocs
end program test_blacs
"""

Then copy Ubuntu_gnu.py and Ubuntu_gnu_mpi.py to the folder. Now the preparation for the build is complete.

$ export ASTER_ROOT=/opt/aster
$ export PYTHONPATH=$ASTER_ROOT/lib/python3.6/site-packages/:$PYTHONPATH
$ ./waf configure --use-config-dir=$ASTER_ROOT/14.4/share/aster --use-config=Ubuntu_gnu_mpi --prefix=$ASTER_ROOT/PAR14.4MUPT
$ ./waf install -p --jobs=1

When the build is finished, register the parallel version in /opt/aster/etc/codeaster/aster:

# Code_Aster versions
# versions can be absolute paths or relative to ASTER_ROOT
# examples : NEW11, /usr/lib/codeaster/NEW11

# default version (overridden by --vers option)
default_vers : stable

# available versions
# DO NOT EDIT FOLLOWING LINE !
#?vers : VVV?
vers : stable:/opt/aster/14.4/share/aster
vers : 14.4MUPT:/opt/aster/PAR14.4MUPT/share/aster
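
As a quick smoke test of the registered version, one of the shipped test cases can be run (the testcase name here, forma01a, is just an example of a small test):

$ /opt/aster/bin/as_run --vers=14.4MUPT --test forma01a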

The whole work is finished.
I got a lot of information from the Code_Aster forum. I appreciate the forum members.



8 Replies to “code_aster 14.4 parallel version with PETSc”

  1. Dear Mr. Hitori,
    I found your advice about Code-Aster with MPI while reading the Code-Aster Forum.
    Reading your procedure to install CA 13.6 with MPI was very interesting, and now I’m reading the new suggestions up here.
    Before beginning to try the compilation, I want to emphasize that I’m not a programmer, but only a structural analyst, so please be patient with me.
    My workstation is equipped with a single AMD Ryzen 1900X (8 cores, 64 GB RAM, SATA3 disks) and Ubuntu x64 18.04.4.
    Some questions:
    1) I could install OpenBLAS 0.2.20 using Synaptic: is that correct, or is it better to build it from sources? And what about the release: 0.2.20 or the latest 0.3.8?
    2) The same for ScaLAPACK (rel. 2.0.2-4), Scotch (6.0.4), PTScotch (6.0.4), PETSc (3.7.7), ParMETIS (4.0.3-5) and MUMPS (5.1.2, both parallel and non-parallel): could I simply install all of them using Synaptic?
    3) Could you explain why aster-full-src-14.4.0 has to be compiled just after OpenBLAS and BEFORE the other prerequisite software?

    Thank you very much for your willingness to help the Code-Aster community.
    Best greetings,
    Piero

    1. Hello Mr. Cirrottola-san,

      Thank you for your comment.
      About your 1st and 2nd questions:
      my work is based on https://sites.google.com/site/codeastersalomemeca/home/code_asterno-heiretuka/parallel-code_aster-12-6 .
      According to that article, many special settings are necessary to build Code_Aster.
      Therefore, I think building from source code is the better way.
      For the third question, my ideas are as follows:
      to check the output step by step;
      to use the configure data built by setup.py.

      Thank you again,

      Hitori

  2. Hello Mr. Hitori-san,
    thank you very much for your fast reply and for your suggestions.
    I’m now reading the article you are citing.
    Because of our big time difference, I’m already proceeding following your instructions, and now I have a result I don’t understand: I have installed MUMPS without any error, but testing some examples in mumps-5.1.2_mob/examples, I obtain this:

    $ /opt/mumps-5.1.2_mob/examples/csimpletest
    /opt/mumps-5.1.2_mob/examples/csimpletest: symbol lookup error: /usr/lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas

    does it mean that something went wrong?

    Many thanks again,
    Piero

    1. Hello Cirrottola-san,

      Did you build the parallel version?
      If yes, please try
      “mpirun -np 2 ./csimpletest < input_simpletest_cmplx”
      as written in the README under ~/examples.

      thank you,

      Hitori

  3. Hello Mr. Hitori-san,
    unfortunately no, I wasn’t able to build the parallel version.
    I discovered (oh, my memory!) that I have a complete OpenFOAM 1906 environment active, added to my .bashrc in the recent past: obviously it loads a special PATH containing some SCOTCH references. I think that was the principal reason for all my errors.
    Now I’m cleaning my environment and I will try again a.s.a.p.
    Thank you a lot,
    Piero

  4. Hello Mr. Hitori-san,
    now I’ve finished building Code-Aster with OpenMPI.
    It works, and I’m testing whether it really works fine.

    I found some minor problems:
    1) OpenBLAS: I wasn’t able to understand which release to use, among 3.7, 3.8 and the last stable 3.9; so I decided to install the official Ubuntu 18.04 version through Synaptic: it seems to work properly;
    2) Code_Aster with OpenBLAS: MATHLIB=’/usr/lib/x86_64-linux-gnu/libopenblas.a’ is the right address for the OpenBLAS lib, in all files where it is used;
    3) ScaLAPACK: no problem, but I received this
    Running BLACS test routines…
    BLACS: error running BLACS test routines xCbtest
    BLACS: Command -np 4 ./xCbtest
    stderr:
    ****************************************
    /bin/sh: 1: -np: not found
    ****************************************
    so I don’t know if it is OK, but it seems to work;
    4) ParMETIS, Scotch, PT-Scotch and MUMPS: all OK;
    5) PETSc: I was prompted to install Valgrind, which I did with Synaptic; but the ml package was missing, and I downloaded it with
    $ git clone https://bitbucket.org/petsc/pkg-ml.git git.ml
    But it wasn’t installed, and the subsequent Code_Aster configuration (via waf) failed because ml was missing.
    So I reinstalled PETSc without the ml package. Now all is OK.
    6) Code_Aster Parallel: installation via waf is now OK, and I’m testing Code_Aster Parallel.

    I want to thank you again for your courtesy and willingness.
    best regards
    Piero
