Ph.D. thesis of Marek Felšöci

Table of Contents

This site features all the publicly available materials related to the Ph.D. thesis of Marek Felšöci. For more information, feel free to get in touch by sending an e-mail to marek.felsoci@inria.fr!

1. Task schedule

Features work progress as well as upcoming plans.

2. Experimental environment

Literate description of the experimental environment of the thesis.

3. Writings

3.1. A comparison of selected solvers for coupled FEM/BEM linear systems arising from discretization of aeroacoustic problems

When the discretization of an aeroacoustic physical model relies on both the Finite Element Method (FEM) and the Boundary Element Method (BEM), it leads to coupled FEM/BEM linear systems combining sparse and dense parts. In this preliminary study, we compare a set of sparse and dense solvers applied to the solution of such linear systems, with the aim of identifying the best-performing configurations of existing solvers.
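As an illustration (the notation is mine, not taken from the report), such a coupled system can be written in block form, with a sparse FEM part and a dense BEM part:

```latex
\[
\begin{pmatrix}
  A_{vv} & A_{vs} \\
  A_{sv} & A_{ss}
\end{pmatrix}
\begin{pmatrix} x_v \\ x_s \end{pmatrix}
=
\begin{pmatrix} b_v \\ b_s \end{pmatrix}
\]
```

Here \(A_{vv}\) is the sparse FEM block (volume unknowns), \(A_{ss}\) is the dense BEM block (surface unknowns), and \(A_{vs}\), \(A_{sv}\) are the coupling blocks.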

3.2. A comparison of selected solvers for coupled FEM/BEM linear systems arising from discretization of aeroacoustic problems: literate and reproducible environment

This is an accompanying technical report for the Inria research report №9412, A comparison of selected solvers for coupled FEM/BEM linear systems arising from discretization of aeroacoustic problems. Based on the principles of literate programming, this technical report aims to provide detailed guidelines for reproducing the experiments of that research report. We use Org mode for literate programming and GNU Guix for software environment reproducibility. Note that part of the software involved is proprietary.

3.3. Comparison of coupled solvers for FEM/BEM linear systems arising from discretization of aeroacoustic problems

When the discretization of an aeroacoustic physical model relies on both the Finite Element Method (FEM) and the Boundary Element Method (BEM), it leads to coupled FEM/BEM linear systems combining sparse and dense parts. In this work, we propose and compare a set of implementation schemes relying on the coupling of the open-source sparse direct solver MUMPS with the proprietary direct solvers from Airbus Central R&T, namely the ScaLAPACK-like dense solver SPIDO and the hierarchical \(\mathcal{H}\)-matrix compressed solver HMAT. For this preliminary study, we limit ourselves to a single 24-core computational node.

3.4. Direct solution of larger coupled sparse/dense linear systems using low-rank compression on single-node multi-core machines in an industrial aeroacoustic context

While hierarchically low-rank compression methods are now commonly available in both dense and sparse direct solvers, their use for the direct solution of coupled sparse/dense linear systems has been little investigated. The solution of such systems is nevertheless central to the simulation of many important physics problems, such as the propagation of acoustic waves around aircraft. Indeed, the heterogeneity of the jet flow created by the engines often requires a Finite Element Method (FEM) discretization, leading to a sparse linear system, while it may be reasonable to assume the rest of the space is homogeneous and hence model it with a Boundary Element Method (BEM) discretization, leading to a dense system.

In an industrial context, these simulations are often run on modern multicore workstations with fully-featured linear solvers. Exploiting their low-rank compression techniques is thus very appealing for solving larger coupled sparse/dense systems (hence ensuring a finer solution) on a given multicore workstation, and, of course, possibly doing so fast. The standard method for efficiently coupling sparse and dense direct solvers is to rely on the Schur complement functionality of the sparse direct solver. However, to the best of our knowledge, modern fully-featured sparse direct solvers offering this functionality return the Schur complement as a non-compressed matrix. In this paper, we study the opportunity to process larger systems in spite of this constraint. To that end, we propose two classes of algorithms, namely multi-solve and multi-factorization, which compose existing parallel sparse and dense methods on well-chosen submatrices.

An experimental study conducted on a 24-core machine equipped with 128 GB of RAM shows that these algorithms, implemented on top of state-of-the-art sparse and dense direct solvers, together with proper low-rank assembly schemes, can process systems of 9 million and 2.5 million total unknowns, respectively, instead of the 1.3 million unknowns reachable with a standard coupling of compressed sparse and dense solvers. Additionally, for a given problem size, we are able to exploit all of the available memory to accelerate the time to solution, following a memory-aware approach.
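The standard Schur complement coupling mentioned above can be sketched as follows. This is a minimal illustration using SciPy and NumPy with small random stand-ins for the FEM and BEM blocks; it is not the MUMPS/SPIDO/HMAT implementation used in the paper, and the matrix sizes and values are purely illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative sizes: n_s sparse (FEM) unknowns, n_d dense (BEM) unknowns.
n_s, n_d = 50, 10
rng = np.random.default_rng(0)

# Block system A = [[A_ss, A_sd], [A_ds, A_dd]] with a sparse leading block
# and a dense trailing block (random stand-ins, diagonally shifted to be
# safely nonsingular).
A_ss = sp.eye(n_s, format="csc") * 4 + sp.random(n_s, n_s, density=0.05, random_state=0)
A_sd = rng.standard_normal((n_s, n_d)) * 0.01
A_ds = rng.standard_normal((n_d, n_s)) * 0.01
A_dd = np.eye(n_d) * 2 + rng.standard_normal((n_d, n_d)) * 0.1

b_s = rng.standard_normal(n_s)
b_d = rng.standard_normal(n_d)

# Step 1: factorize the sparse block with a sparse direct method.
lu = spla.splu(sp.csc_matrix(A_ss))

# Step 2: form the Schur complement S = A_dd - A_ds A_ss^{-1} A_sd.
# This is the object that fully-featured sparse solvers return as a
# non-compressed dense matrix.
S = A_dd - A_ds @ lu.solve(A_sd)

# Step 3: solve the reduced dense system for the dense unknowns,
# then back-substitute for the sparse unknowns.
x_d = np.linalg.solve(S, b_d - A_ds @ lu.solve(b_s))
x_s = lu.solve(b_s - A_sd @ x_d)

# Sanity check against the monolithic system.
A = np.block([[A_ss.toarray(), A_sd], [A_ds, A_dd]])
x = np.concatenate([x_s, x_d])
b = np.concatenate([b_s, b_d])
assert np.allclose(A @ x, b)
```

The block elimination is algebraically exact; the memory pressure discussed in the paper comes from the Schur complement being dense and non-compressed, which is precisely what the multi-solve and multi-factorization algorithms work around by operating on well-chosen submatrices.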

3.5. Study of the processor and memory power consumption of coupled sparse/dense solvers

Joint work with Hervé Mathieu, Bastien Tagliaro and Amina Guermouche.

In the aeronautical industry, aeroacoustics is used to model the propagation of acoustic waves in the air flows enveloping an aircraft in flight. This allows one, for instance, to simulate the noise produced at ground level by an aircraft during the takeoff and landing phases, in order to validate that the regulatory environmental standards are met. Unlike most other complex physics simulations, the method resorts to solving coupled sparse/dense systems. In a previous study, we proposed two classes of algorithms for solving such large systems on a relatively small workstation (one or a few multicore nodes) based on compression techniques. The objective of this paper is to assess whether the positive impact of the proposed algorithms on time to solution and memory usage translates to energy consumption as well. Because of the nature of the problem, coupling dense and sparse matrices, and of the underlying solution methods, including dense, sparse direct and compression steps, this yields an interesting processor and memory power profile which we aim to analyze in detail.

4. Presentations

4.1. Fast solvers for high-frequency aeroacoustics

Team days (February 24, 2020)

4.2. Thesis monitoring committee meeting (comité de suivi)

First year (June 29, 2020)

Second year (June 29, 2021)

4.3. A preliminary comparative study of solvers for coupled FEM/BEM linear systems in a reproducible environment

SOLHARIS plenary meeting (December 7-8, 2020)

4.4. Coupled solvers for FEM/BEM linear systems arising from discretization of aeroacoustic problems

Team work group (April 26, 2021)

4.5. Coupled solvers for high-frequency aeroacoustics

Doctoral school day (May 20, 2021)

4.6. Guix and Org mode, two friends of the Ph.D. student on the road to a reproducible thesis

Workshop on software environment reproducibility (May 17-18, 2021)

I started using Guix to ease the deployment of our software stack on various high-performance computing platforms and to foster the reproducibility of the experiments conducted as part of my thesis. Since then, my entire software environment has been managed with Guix, from development and compilation, through the execution of performance tests, to the post-processing and publication of results. All of it is documented and organized with Org mode.

In this presentation, I summarize how this environment works.

4.7. Towards memory-aware multi-solve two-stage solver for coupled FEM/BEM systems

SOLHARIS plenary meeting (July 2021)

4.8. An energy consumption study of coupled solvers for FEM/BEM linear systems: preliminary results

SOLHARIS plenary meeting (February 2022)

4.9. Study of the processor and memory power consumption of coupled sparse/dense solvers

Team work group (February 28, 2022)

5. Miscellaneous

5.1. Building Airbus solver stack manually

A step-by-step guide to building the packages of the Airbus solver stack manually, i.e. without relying on Guix, on different kinds of machines, including the Occigen supercomputer.

5.2. Logs of GNU Guix discussion sessions

An exhaustive summary of a series of discussion sessions on Guix with its main contributor and maintainer, Ludovic Courtès.

Date: 25/02/2022 | 18:08:50

Author: Emmanuel Agullo, Marek Felšöci, Guillaume Sylvand

Email: emmanuel.agullo@inria.fr, marek.felsoci@inria.fr, guillaume.sylvand@airbus.com
