Scientific Computing Seminar

Date and Place: Thursdays, hybrid (live in room 32-349 and online via Zoom). For detailed dates, see below.

Content

In the Scientific Computing Seminar we host talks by guests and members of the SciComp team as well as by students of mathematics, computer science, and engineering. Everyone interested in the topics is welcome.

List of Talks

  • Thu, 22 Apr 2021, 11:30, Online

    Jan Rottmayer, TU Kaiserslautern

    Title: Reduced Order Modeling and Nonlinear System Identification Techniques for Fluid Dynamics

    Abstract:

    Data-driven mathematical methods are increasingly important for characterizing complex dynamical systems across the physical and engineering domains. These methods discover and exploit a relatively small subset of the full high-dimensional state space, where low-dimensional models can capture the dominant system characteristics for control and prediction purposes. Although data-driven methods are often sensitive to noise and require substantial amounts of data, recent work has made immense progress towards robust methods in the low-data limit.
    Emerging dimensionality reduction techniques make it possible to discover low-rank spatio-temporal patterns in the dynamics of the system, provide approximations in terms of linear dynamical systems, and construct reduced order models in low-dimensional embeddings. The reduction in computational cost offered by reduced order models facilitates their use in online model predictive control, providing a computationally efficient and robust control scheme.

    In this talk we will take a look at recent advances in data-driven system identification and reduced order modelling, focusing on the work of Brunton et al., who proposed Dynamic Mode Decomposition with Control (DMDc) and Sparse Identification of Nonlinear Dynamics for Model Predictive Control (SINDy-MPC), and who is a leading researcher in the field of data-driven science and engineering.
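    For readers unfamiliar with these methods, here is a minimal sketch of exact Dynamic Mode Decomposition (the uncontrolled core of DMDc) on synthetic snapshot data; the hidden dynamics, rank, and variable names are illustrative, not taken from the talk.

```python
import numpy as np

# Synthetic snapshot matrix: columns are states x_0 ... x_m (illustrative data).
rng = np.random.default_rng(0)
A_true = np.array([[0.95, 0.10], [-0.10, 0.95]])    # hidden linear dynamics
X = np.empty((2, 51))
X[:, 0] = rng.normal(size=2)
for k in range(50):
    X[:, k + 1] = A_true @ X[:, k]
X1, X2 = X[:, :-1], X[:, 1:]                         # pairs (x_k, x_{k+1})

# Exact DMD: low-rank least-squares fit x_{k+1} ~ A x_k via the SVD of X1.
r = 2
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)   # reduced operator
eigvals, W = np.linalg.eig(A_tilde)                  # DMD eigenvalues
modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W      # DMD modes

print("DMD eigenvalues:", eigvals)                   # close to the eigenvalues of A_true
```

    DMDc extends this regression by appending the control inputs to the snapshot data, while SINDy instead fits a sparse combination of candidate nonlinear library terms.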

    How to join:

    The talk is held online via Jitsi. You can join with the link https://jitsi.uni-kl.de/SciCompSeminar_01. Please follow the rules below:

    • Use a Chrome-based browser (one participant with a different browser can crash the whole meeting).
    • Mute your microphone and disable your camera.
    • If you have a question, raise your hand.

    More information is available at https://www.rhrk.uni-kl.de/dienstleistungen/netz-telefonie/konferenzdienste/jitsi/.

  • Thu, 29 Apr 2021, 11:30, Online

    Félix Givois, Fraunhofer ITWM, Kaiserslautern

    Title: Quantum Computing for Material Characterization

    Abstract:

    Describing the material laws of a complex microstructure is a very hard problem. Since no analytical description exists, the best approach is to approximate the effective material behaviour by simulating a material unit cell on the microscopic scale, based on a tomography image. To do so, one needs to approximate the solution of a homogenization problem described by an elliptic PDE. As the computational domain can be very large, a memory-efficient algorithm was developed by Moulinec and Suquet. This algorithm solves the PDE iteratively with a matrix-free gradient-descent method. But even though it is matrix-free, it can be very costly, as it involves 3D Fourier transforms of large data domains at each iteration to invert the preconditioner.

    During the last decades, the computational power needed for material characterization has skyrocketed with the improvement of material tomography imaging. This improvement in the accuracy of material images has made the computational domains very large (more than a terabyte of data), which leads to much longer computation times, due to the complexity of the Fourier transform, and to memory bottlenecks. Moreover, as the 3D Fourier transform does not scale well on distributed-memory clusters due to data-domain transpositions, the algorithm benefits little from parallelization. Nevertheless, recent leaps in the development of quantum computers by companies such as IBM and Google seem to leave the door open for practical applications of quantum algorithms, and we could take advantage of this new class of methods to improve our solver.

    In this talk we will look at a way of replacing the classical fast Fourier transform by its quantum equivalent: the Quantum Fourier Transform. In particular, we will focus on the main difficulties of quantum algorithms: encoding real or complex data into qubits and reading out the results after the computation. Moreover, we will discuss quantum noise and the limitations of current quantum devices for practical solutions.
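    As a small illustration of the relationship between the two transforms (not part of the talk), the quantum Fourier transform on n qubits acts as the unitary discrete Fourier transform on the 2^n amplitudes, so a classical sketch can compare the two matrices directly; the sizes and names below are illustrative.

```python
import numpy as np

n_qubits = 3
N = 2 ** n_qubits

# QFT matrix on n qubits: F[j, k] = omega^(j*k) / sqrt(N), with omega = exp(2*pi*i/N).
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
qft = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

# It is unitary and matches the normalized classical DFT (inverse sign convention).
assert np.allclose(qft.conj().T @ qft, np.eye(N))
x = np.random.default_rng(1).normal(size=N) + 0j
assert np.allclose(qft @ x, np.fft.ifft(x, norm="ortho"))
```

    The catch discussed in the abstract is that on a quantum device these amplitudes are not directly accessible: loading the data into a quantum state and reading the transformed amplitudes back out are themselves the expensive steps.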

    How to join:

    The talk is held online via Jitsi. You can join with the link https://jitsi.uni-kl.de/SciCompSeminar_02. Please follow the rules below:

    • Use a Chrome-based browser (one participant with a different browser can crash the whole meeting).
    • Mute your microphone and disable your camera.
    • If you have a question, raise your hand.

    More information is available at https://www.rhrk.uni-kl.de/dienstleistungen/netz-telefonie/konferenzdienste/jitsi/.

  • Thu, 06 May 2021, 11:30, Online

    Paula Harder, Fraunhofer ITWM, Kaiserslautern

    Title: Emulating Aerosol Microphysics with Machine Learning

    Abstract:

    Aerosol particles play an important role in the climate system by absorbing and scattering radiation and by influencing cloud properties. They are also one of the biggest sources of uncertainty in climate predictions. Traditional climate models only use the aerosol mass; to achieve higher accuracy, aerosol microphysical properties have to be resolved. This is done, for example, in the ECHAM-HAM global climate aerosol model by using the M7 microphysics model. But the microphysics model is computationally expensive, which makes it impossible to run at a higher resolution or for a longer time. We use the original microphysics model to generate input-output pairs on which we train a machine learning model. We investigate different approaches, deep learning methods as well as ensemble models such as random forests and gradient boosting, with the aim of achieving reasonable accuracy while being faster than the original. One challenge is the importance of mass conservation and how such physical constraints can be encoded in a model.
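    One simple way to encode a mass-conservation constraint (shown here only as an illustrative post-processing step, not as the approach used in the talk) is to project the predicted tendencies onto the subspace whose components sum to zero; the array shapes and names are made up.

```python
import numpy as np

def project_mass_conserving(pred_tendencies: np.ndarray) -> np.ndarray:
    """Remove the mean over species so the predicted mass changes sum to zero.

    pred_tendencies: array of shape (n_samples, n_species) with ML-predicted
    changes of per-species mass. Returns the closest (least-squares) tendencies
    whose per-sample sum is exactly zero, i.e. total mass is conserved.
    """
    correction = pred_tendencies.mean(axis=1, keepdims=True)
    return pred_tendencies - correction

raw = np.array([[0.3, -0.1, 0.1],      # illustrative raw predictions
                [0.0,  0.2, -0.1]])
fixed = project_mass_conserving(raw)
print(fixed.sum(axis=1))               # ~ [0. 0.] up to floating-point rounding
```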

    How to join:

    The talk is held online via Jitsi. You can join with the link https://jitsi.uni-kl.de/SciCompSeminar_03. Please follow the rules below:

    • Use a Chrome-based browser (one participant with a different browser can crash the whole meeting).
    • Mute your microphone and disable your camera.
    • If you have a question, raise your hand.

    More information is available at https://www.rhrk.uni-kl.de/dienstleistungen/netz-telefonie/konferenzdienste/jitsi/.

  • Thu, 20 May 2021, 11:30, Online

    Johannes Blühdorn, Chair for Scientific Computing, TU Kaiserslautern

    Title: OpDiLib, an Open Multiprocessing Differentiation Library

    Abstract:

    Automatic differentiation (AD) comprises techniques and tools for acquiring machine-accurate derivatives of computer codes. AD has a long history of successful applications in areas like sensitivity analysis, simulation-based optimization, and, more recently, machine learning. These codes are usually executed on high performance architectures and exhibit various kinds of parallelism. Typically, multiple paradigms are combined, for example distributed memory parallelism via MPI and shared memory parallelism via OpenMP. The latter has long posed a challenge for operator overloading AD tools due to the inaccessibility of OpenMP directives with overloading techniques. In this talk, we present our new tool OpDiLib, a universal add-on for operator overloading AD tools that enables reverse mode AD of OpenMP parallel codes. It pursues an event-based implementation approach that can be combined with OMPT, a modern OpenMP feature, to achieve differentiation without additional modifications of the source code. Alternatively, it can be applied in a semi-automatic fashion via a macro interface. There are no a priori restrictions of the data access patterns and a parallel reverse pass is deduced in a fully automatic fashion. We explain OpDiLib’s design, discuss how it enables these features, and present performance results in an OpenMP-MPI hybrid parallel environment.
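    To illustrate what "operator overloading AD" means in general (a toy Python sketch, not OpDiLib itself), one can record every arithmetic operation on a tape during the forward run and replay the tape backwards to propagate adjoints:

```python
# Toy tape-based reverse-mode AD via operator overloading (illustrative only).
class Var:
    def __init__(self, value, tape=None):
        self.value, self.grad = value, 0.0
        self.tape = tape if tape is not None else []

    def _record(self, value, pulls):
        out = Var(value, self.tape)
        self.tape.append((out, pulls))          # (result, [(input, local derivative)])
        return out

    def __add__(self, other):
        return self._record(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return self._record(self.value * other.value, [(self, other.value), (other, self.value)])

    def backward(self):
        self.grad = 1.0
        for out, pulls in reversed(self.tape):  # reverse pass over the recorded tape
            for inp, local in pulls:
                inp.grad += local * out.grad

tape = []
x, y = Var(3.0, tape), Var(2.0, tape)
z = x * y + x                                   # z = x*y + x
z.backward()
print(x.grad, y.grad)                           # dz/dx = y + 1 = 3.0, dz/dy = x = 3.0
```

    The challenge OpDiLib addresses is making such a recording and its reversal consistent when the forward evaluation runs inside OpenMP parallel regions.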

    How to join:

    The talk is held online via Jitsi. You can join with the link https://jitsi.uni-kl.de/SciCompSeminar_04. Please follow the rules below:

    • Use a Chrome-based browser (one participant with a different browser can crash the whole meeting).
    • Mute your microphone and disable your camera.
    • If you have a question, raise your hand.

    More information is available at https://www.rhrk.uni-kl.de/dienstleistungen/netz-telefonie/konferenzdienste/jitsi/.

  • Thu, 27 May 2021, 11:30, Online

    Raju Ram, Fraunhofer ITWM, Kaiserslautern

    Title: Hybrid parallel ILU preconditioner to solve sparse linear systems

    Abstract:

    The solution of large sparse linear systems is a ubiquitous problem in chemistry, physics, and engineering applications. Krylov subspace methods are preferred over direct methods for solving large-scale linear systems, as they are faster and use less memory. An effective preconditioner is needed to improve the convergence of the underlying simulation.

    Iterative methods frequently incorporate incomplete LU (ILU) preconditioners because of their robustness, accuracy, and usability as black-box preconditioners. However, the factorization and triangular-solve subroutines in ILU are inherently sequential. Detailed model simulation and high-resolution modelling have increased the demand for solving extremely large linear systems. Therefore, the development of scalable preconditioners has become even more crucial.

    We have developed a hybrid parallel preconditioner. Across the processes, we use an additive Schwarz preconditioner, since it has built-in parallelism that decomposes the original problem into subproblems. These subproblems are then solved within each process, using multiple threads, with the Crout variant of the ILU preconditioner. We use a multilevel nested dissection approach to extract parallelism in the Crout ILU preconditioner, and the restricted additive Schwarz (RAS) method to improve the convergence across processes.

    For a scalable implementation, we use the lightweight communication-based programming model GASPI across processes and task-level parallelism via pthreads within each process.

    In this talk, we present the scalability challenges and preliminary scalability results of the preconditioner on various linear systems. For ill-conditioned and non-diagonally-dominant matrices, our implementation incorporates matching, row- and column-based permutations, and inverse-based dropping to improve the robustness of the serial subroutines.
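    As a purely serial illustration of how an ILU factorization is used as a preconditioner inside a Krylov method, here is a SciPy sketch; the matrix is random and the drop/fill parameters are illustrative, and it does not reflect the parallel GASPI/pthreads implementation discussed in the talk.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative sparse, diagonally dominant system A x = b.
n = 2000
A = sp.random(n, n, density=5.0 / n, random_state=0, format="csc")
row_sums = np.asarray(abs(A).sum(axis=1)).ravel()
A = (A + sp.diags(row_sums + 1.0)).tocsc()          # make it safely nonsingular
b = np.random.default_rng(0).normal(size=n)

# Incomplete LU factorization, used as preconditioner M ~ A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)                     # ILU-preconditioned GMRES
print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))
```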

    How to join:

    The talk is held online via Zoom. You can join with the link https://uni-kl-de.zoom.us/j/94636397127?pwd=Y1g4dGVFQitzUHVRQUFpcFB4WVFKQT09.

  • Thu, 24 Jun 2021, 11:30, Online

    Dr. Mathias J. Krause, Lattice Boltzmann Research Group, Karlsruher Institut für Technologie (KIT)

    Title: Fluid Flow Optimization with Lattice Boltzmann Methods with Applications

    Abstract:

    For many medical as well as technical applications, accurate knowledge of fluid flow dynamics, e.g. flow rates or wall shear stresses, is fundamental to understand and describe the underlying processes. A coupling of simulation and measurement (CFD-MRI) promises significant progress in terms of the accuracy of the obtained flow data, even in situations of low image contrast. Compared to today’s state of the art, the new approach may achieve patient-friendly diagnostics by reducing the amount of contrast agent required. Furthermore, the increase in accuracy allows treating new scenarios, and as a result, unanswered medical questions, e.g. perfusion disorders, can be addressed.

    In the talk, an overall strategy for numerical simulation and optimisation of fluid flow is introduced. The integrative approach takes advantage of numerical simulation, high performance computing and newly developed mathematical optimization techniques, all based on a mesoscopic model description and on Lattice Boltzmann Methods (LBM) as discretisation strategies. The resulting algorithms are implemented in a highly generic way in the framework of the open source library OpenLB (https://www.openlb.net). The approaches and realisations are illustrated by means of various fluid flow simulation and optimisation examples. Thereby, the main focus is placed on the CFD-MRI approach. Details are given for the 3D fluid flow optimisation problem formulation, the derivation of a first order optimality system and the solving process with gradient-based methods using Adjoint Lattice Boltzmann Methods. Validation and first application results (cf. Figure, an illustration of the CFD-MRI approach applied to a human aorta) are presented and discussed.
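    For orientation, the core update of a Lattice Boltzmann Method with the common BGK collision operator can be written as follows (standard textbook form, not specific to the adjoint formulation presented in the talk):

```latex
% LBM-BGK update: stream the populations f_i along the discrete velocities c_i
% and relax them towards the local equilibrium f_i^eq; density and momentum
% are recovered as moments of the populations.
f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\, t + \Delta t)
  = f_i(\mathbf{x}, t)
  - \frac{\Delta t}{\tau}\left( f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right),
\qquad
\rho = \sum_i f_i, \quad \rho\,\mathbf{u} = \sum_i \mathbf{c}_i f_i .
```

    The CFD-MRI and optimisation machinery discussed in the talk builds adjoint equations on top of this discrete dynamics.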

    How to join:

    The talk is held online via Zoom. You can join with the link https://uni-kl-de.zoom.us/j/94636397127?pwd=Y1g4dGVFQitzUHVRQUFpcFB4WVFKQT09.

  • Thu, 08 Jul 2021, 15:00, Online

    Avraam Chatzimichailidis, Fraunhofer ITWM, Kaiserslautern

    Title: Second-Order Methods for Neural Networks/Bridging the Gap between Neural Network Pruning and Neural Architecture Search

    Abstract:

    Optimization in deep learning is still dominated by first-order gradient methods, such as stochastic gradient descent. Second-order optimization provides curvature information about the objective function, effectively reducing the number of training iterations until convergence and the number of hyperparameters. However, despite their strong theoretical properties, second-order optimization methods are far less prevalent in deep learning due to prohibitive computational and memory costs.
    In the first part of the presentation we will discuss how to efficiently calculate the Hessian vector product of a neural network and how to construct new optimizers that can stabilize the training of neural networks. We will see how second-order methods in deep learning can help us gain deeper insight into the training process of neural networks. Specifically, an example will demonstrate how an optimizer that uses second-order information can stabilize the training of generative adversarial networks.
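    The Hessian-vector product mentioned above can be computed without ever forming the Hessian, using two backpropagation passes; a minimal PyTorch sketch (illustrative model and names, not the speaker's code):

```python
import torch

# Tiny model and loss, purely for illustration.
model = torch.nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)

params = list(model.parameters())
v = [torch.randn_like(p) for p in params]           # direction vector

# First backward pass: gradients with the graph kept for a second differentiation.
grads = torch.autograd.grad(loss, params, create_graph=True)
# Dot product g.v, then a second backward pass yields H v = d(g.v)/dparams.
gv = sum((g * vi).sum() for g, vi in zip(grads, v))
hvp = torch.autograd.grad(gv, params)

print([h.shape for h in hvp])                        # same shapes as the parameters
```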

    The second part of this talk will deal with the area of neural architecture search (NAS). The aim of NAS is to find architectures that have superior performance on a given dataset, without having to rely on handcrafted neural networks. Formulating the one-shot NAS problem as a differentiable optimization problem reduces the time complexity compared to evolutionary or reinforcement learning algorithms. We observe that, in its essence, one-shot NAS is a pruning process. Network pruning is a technique to compress neural networks and reduce their computational complexity without a significant performance drop. I will talk about how combining neural network pruning with group sparsity can bridge the gap between the areas of network pruning and neural architecture search by casting the one-shot NAS optimizer as a single-level optimization problem.

    How to join:

    The talk is held online via Jitsi. You can join with the link https://jitsi.uni-kl.de/SciCompSeminar_10. Please follow the rules below:

    • Use a Chrome-based browser (one participant with a different browser can crash the whole meeting).
    • Mute your microphone and disable your camera.
    • If you have a question, raise your hand.

    More information is available at https://www.rhrk.uni-kl.de/dienstleistungen/netz-telefonie/konferenzdienste/jitsi/.

  • Thu, 15 Jul 2021, 10:30, Online

    Angel Adrian Rojas Jimenez, TU Kaiserslautern

    Title: On the stochastic global optimization and numerical implementation for classification problems

    Abstract:

    Dynamic search trajectories such as B. T. Polyak's heavy ball method have been used to speed up the convergence rate to a local minimizer compared to the steepest descent method. Here we focus on the global optimization aspect and weaken the requirement on the objective to Lipschitz continuity instead of twice continuous differentiability. In this sense, we developed a stochastic version of a variation of the heavy ball method, renamed the Savvy Ball method and previously referred to as TOAST. We analyze theoretically the nonsmooth but convex case, where the search trajectory is not obtained from the usual ordinary differential equation but from an ordinary differential inclusion (ODI). Finally, we show numerical results for its implementation using neural networks in machine learning classification problems associated with the MNIST digits, CIFAR10, and Fashion-MNIST datasets. We compare our results with momentum methods such as the Adam optimizer.
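    For reference, the deterministic heavy ball iteration that the talk starts from can be sketched as follows (the stochastic Savvy Ball/TOAST variant is the speaker's contribution and is not reproduced here); the step size, momentum, and test function are illustrative:

```python
import numpy as np

def heavy_ball(grad, x0, step=0.05, momentum=0.9, iters=200):
    """Polyak's heavy ball method:
    x_{k+1} = x_k - step * grad(x_k) + momentum * (x_k - x_{k-1})."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - step * grad(x) + momentum * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Illustrative quadratic f(x) = 0.5 * x^T A x with gradient A x.
A = np.diag([1.0, 10.0])
x_star = heavy_ball(lambda x: A @ x, x0=np.array([5.0, 5.0]))
print(x_star)   # close to the minimizer [0, 0]
```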

    How to join:

    The talk is held online via Jitsi. You can join with the link https://jitsi.uni-kl.de/SciCompSeminar_11. Please follow the rules below:

    • Use a Chrome-based browser (one participant with a different browser can crash the whole meeting).
    • Mute your microphone and disable your camera.
    • If you have a question, raise your hand.

    More information is available at https://www.rhrk.uni-kl.de/dienstleistungen/netz-telefonie/konferenzdienste/jitsi/.

  • Tue, 14 Sep 2021, 15:30, Online

    Kavyashree Renukachari, TU Kaiserslautern

    Title: Estimation of Critical Batch Sizes for Distributed Deep Learning

    Abstract:

    The applications of deep learning in various domains are increasing, and so are the size of the datasets and the complexity of the models used in such applications. This increase creates a need for higher computational power and for strategies that enable faster training of deep learning models. Data parallelism is one such strategy that is extensively used to handle large datasets: the number of compute resources is increased while the workload on each resource is kept constant. Several studies have also illustrated that deep learning models can be trained in a shorter time using larger batch sizes. However, there is no general rule for determining the upper limit on the batch size.

    A recent study introduced a statistic called the Gradient Noise Scale that can help identify the largest efficient batch size for DNN training. The study also illustrated that there is initially a linear scaling rule for the batch size, and that beyond a certain point additional parallelism provides little or no benefit. The gradient noise scale is computed during DNN training, and it was noted that its value depends predominantly on the dataset. Due to these factors, this thesis tries to estimate the gradient noise scale before DNN training. Experiments are carried out to derive a relationship between the statistical properties of the dataset and the gradient noise scale. Once the gradient noise scale is understood as a function of one of these statistical properties, it can be used to obtain the largest efficient batch size for a given dataset before DNN training.
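    For context, one common formulation of the "simple" gradient noise scale in the large-batch training literature is the ratio of the gradient noise (the trace of the per-example gradient covariance) to the squared norm of the true gradient:

```latex
% Simple gradient noise scale: trace of the per-example gradient covariance
% Sigma divided by the squared norm of the true gradient G.
\mathcal{B}_{\mathrm{simple}} \;=\; \frac{\operatorname{tr}(\Sigma)}{\lVert G \rVert^{2}},
\qquad
\Sigma = \operatorname{cov}_{x}\!\left(\nabla_{\theta} L_{x}(\theta)\right),
\quad
G = \mathbb{E}_{x}\!\left[\nabla_{\theta} L_{x}(\theta)\right].
```

    Batch sizes well below this value enjoy near-linear scaling, while batch sizes far above it yield diminishing returns; the thesis asks whether this quantity can be predicted from dataset statistics before training.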

    How to join:

    The talk is held online via Zoom. You can join with the link https://uni-kl-de.zoom.us/j/94636397127?pwd=Y1g4dGVFQitzUHVRQUFpcFB4WVFKQT09.

  • Thu, 16 Sep 2021, 10:00, Online

    Shalini Shalini, TU Kaiserslautern

    Title: Sparse Deep Neural Network and Hyperparameter Optimization

    Abstract:

    Despite the considerable success of deep learning in recent years, it is still challenging to deploy state-of-the-art deep neural networks due to their high computational and memory cost. Recent deep learning research has focused on generating optimally sparse neural networks using nonsmooth regularization such as the L_1 and L_{2,1} norms. However, the resulting training problem is nonsmooth, so convergence is not guaranteed with the conventional stochastic gradient descent (SGD) approach. A recent solution is the Proximal Stochastic Gradient Descent (ProxSGD) optimizer, which solves this nonsmooth optimization problem and ensures much faster convergence. In practice, the performance of ProxSGD can be sensitive to the precise setup of its internal hyperparameters. The main focus of this thesis is to effectively train sparse neural networks through weight pruning and filter pruning using the ProxSGD optimizer and its hyperparameter optimization. A new approach, GSparsity, is introduced for the efficient implementation of filter pruning.
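    To make the role of the proximal operator concrete, here are the closed-form proximal steps for L_1 (soft thresholding) and group L_2 regularization that a proximal gradient method interleaves with gradient steps; this is a generic illustration with made-up numbers, not the ProxSGD or GSparsity implementation from the thesis.

```python
import numpy as np

def prox_l1(w: np.ndarray, thresh: float) -> np.ndarray:
    """Proximal operator of thresh * ||w||_1: soft thresholding of each weight."""
    return np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)

def prox_group_l2(w: np.ndarray, thresh: float) -> np.ndarray:
    """Proximal operator of thresh * ||w||_2 for one group (e.g. one filter):
    shrinks the whole group towards zero and prunes it if its norm is small."""
    norm = np.linalg.norm(w)
    return np.zeros_like(w) if norm <= thresh else (1.0 - thresh / norm) * w

# One proximal gradient step: w <- prox(w - lr * grad), with illustrative numbers.
w    = np.array([0.8, -0.02, 0.3])
grad = np.array([0.1,  0.0, -0.2])
lr, lam = 0.1, 0.5
print(prox_l1(w - lr * grad, lr * lam))   # the small second weight is pruned exactly to zero
```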

    Firstly, Bayesian optimization and evolutionary algorithms are used to optimize the hyperparameters of ProxSGD, resulting in a range of hyperparameters that helps achieve good accuracy and compression rates; for example, with DenseNet-201 on the CIFAR100 dataset, an accuracy of 72.01% is achieved with a compression rate of 27.24x (96.33% of the weights are pruned), and with ResNet-56 on CIFAR10, 93% accuracy is achieved by removing 93.51% of the parameters without any loss relative to the baseline accuracy. Secondly, experiments show that ProxSGD performance (in terms of accuracy and compression rate) improves when the remaining weights are fine-tuned using the Adam optimizer with a cosine learning-rate scheduler. Thirdly, the GSparsity approach via ProxSGD is proposed for filter pruning and is shown empirically to achieve new state-of-the-art results for filter pruning.

    How to join:

    The talk is held online via Zoom. You can join with the link https://uni-kl-de.zoom.us/j/94636397127?pwd=Y1g4dGVFQitzUHVRQUFpcFB4WVFKQT09.