Scientific Computing Seminar

Date and Place: Usually Thursdays, hybrid (live in Room 32-349 and online via Zoom). For detailed dates, see below.

Content

In the Scientific Computing Seminar we host talks by guests, members of the SciComp team, and students of mathematics, computer science, and engineering. Everybody interested in the topics is welcome.

List of Talks

  • Thu 28 Apr 2022, 12:00, Hybrid (Room 32-349 and via Zoom)

    Dr. Long Chen, Chair for Scientific Computing (SciComp), TU Kaiserslautern

    Title:
    A Gradient Descent Akin Method for Constrained Optimization

    Abstract:

Motivated by applications in large-scale shape optimization and inspired by singular value decomposition, we present a “gradient descent akin method” (GDAM) for solving constrained optimization problems. At each iteration, we compute a search direction as a linear combination of the normalized negative objective and constraint gradients, controlled by a parameter ζ. While the underlying idea of GDAM is similar to that of gradient descent, we show its connection to the classical logarithmic barrier interior-point method and argue that it can be considered a first-order interior-point method. The convergence behavior of the method is studied using a dynamical systems approach. In particular, we show that the continuous-time optimization trajectory finds local solutions by asymptotically converging to the central path(s) of the barrier interior-point method. Furthermore, we show that the convergence rate of the method is bounded in terms of ζ. Numerical examples are reported, including both common test problems and real-world applications in shape optimization. Finally, we show recent progress in the practical implementation of GDAM by incorporating Nesterov's accelerated gradient method.
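
    To make the search-direction idea above concrete, here is a minimal, purely illustrative sketch on a hypothetical toy problem with a single smooth inequality constraint. The direction is formed as a linear combination of the negative normalized objective gradient and the normalized constraint gradient, weighted by ζ; the exact formula, step-size rule, and safeguards of GDAM are not reproduced here.

    ```python
    import numpy as np

    # Hypothetical toy problem (for illustration only):
    #   minimize   f(x) = x1^2 + x2^2
    #   subject to g(x) = x1 + x2 - 1 >= 0
    def f_grad(x):
        return 2.0 * x

    def g_val(x):
        return x[0] + x[1] - 1.0

    def g_grad(x):
        return np.array([1.0, 1.0])

    def gdam_direction(x, zeta):
        """Linear combination of the negative normalized objective gradient
        and the normalized constraint gradient, weighted by zeta."""
        nf = f_grad(x) / np.linalg.norm(f_grad(x))
        ng = g_grad(x) / np.linalg.norm(g_grad(x))
        return -nf + zeta * ng

    x = np.array([1.5, 1.0])           # strictly feasible starting point
    zeta, step = 0.9, 1e-2
    for _ in range(2000):
        x_trial = x + step * gdam_direction(x, zeta)
        if g_val(x_trial) <= 0.0:      # keep the iterate strictly feasible
            break
        x = x_trial

    print("approximate solution:", x)  # close to the constrained minimizer (0.5, 0.5)
    ```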

    How to join online

    The talk is held online via Zoom. You can join with the following link:
    https://uni-kl-de.zoom.us/j/62521592603?pwd=VktnbVlrWHhiVmxQTzNWQlkxSy9WZz09

  • Thu 05 May 2022, 12:00, Hybrid (Room 32-349 and via Zoom)

    Vassilios Yfantis, Chair of Machine Tools and Control Systems (WSKL), TU Kaiserslautern

    Title:
    Distributed Optimization of Separable Convex and Integer Programs by Quadratically Approximated Dual Ascent

    Abstract:

In this talk, a new algorithm for dual decomposition-based distributed optimization is presented. It relies on a quadratic approximation of the dual function of the primal optimization problem. The dual variables are updated in each iteration through a maximization of the approximated dual function subject to step-size constraints. Firstly, the updated dual variables are constrained to lie in an ellipsoid around the current dual variables. The ellipsoid is defined by the covariance matrix of the dual variables from previous iterations which have been used for the quadratic approximation. Secondly, the subgradients from previous iterations are stored in order to construct cutting planes, similar to bundle methods for nonsmooth optimization. However, instead of using the cutting planes to formulate a piecewise linear over-approximation of the dual function, they are used to construct valid inequalities for the update step. The algorithm is evaluated on a large set of convex and integer benchmark problems and compared to the subgradient method, the alternating direction method of multipliers, and the quadratic approximation coordination algorithm. The results show that the proposed algorithm outperforms the compared algorithms both in the required number of iterations and in the number of benchmark problems solved.
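
    For orientation, the sketch below implements the plain dual subgradient method that serves as one of the baselines mentioned above, on a hypothetical toy problem with two quadratic subproblems coupled by a single linear constraint. The algorithm presented in the talk instead maximizes a quadratic model of the dual function subject to an ellipsoidal step-size constraint and cutting planes built from stored subgradients, which is not reproduced here.

    ```python
    import numpy as np

    # Hypothetical separable toy problem (not from the talk):
    #   minimize   0.5*||x1 - c1||^2 + 0.5*||x2 - c2||^2
    #   subject to x1 + x2 = b          (coupling constraint)
    c1, c2, b = np.array([3.0, 1.0]), np.array([-1.0, 2.0]), np.array([1.0, 1.0])

    def solve_subproblem(c, lam):
        # argmin_x 0.5*||x - c||^2 + lam @ x has the closed form x = c - lam
        return c - lam

    lam = np.zeros(2)                   # dual variables of the coupling constraint
    alpha = 0.2                         # fixed subgradient step size
    for k in range(100):
        x1 = solve_subproblem(c1, lam)  # subproblems decouple for fixed lam
        x2 = solve_subproblem(c2, lam)
        subgrad = x1 + x2 - b           # subgradient of the dual function at lam
        lam = lam + alpha * subgrad     # dual ascent step

    print("x1 =", x1, "x2 =", x2, "constraint violation =", x1 + x2 - b)
    ```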

    How to join online

    The talk is held online via Zoom. You can join with the following link:
    https://uni-kl-de.zoom.us/j/62521592603?pwd=VktnbVlrWHhiVmxQTzNWQlkxSy9WZz09

  • Thu 12 May 2022, 12:00, Hybrid (Room 32-349 and via Zoom)

    Rohit Pochampalli, Chair for Scientific Computing (SciComp), TU Kaiserslautern

    Title: Scale Separation in Convolutional Neural Networks: A Multigrid Approach

    Abstract:

The remarkable performance of convolutional neural networks on computer vision tasks is closely linked to the properties of the convolutional filter, in particular to the geometric priors it imposes on the neural network, namely translational equivariance and scale separation. By virtue of these properties, convolutional filters, and in turn convolutional neural networks, are able to exploit the geometric structure and symmetries of the underlying domain (the Hilbert space of image representations). Considering the example of image classification, scale separation emerges from the preservation of important characteristics of the image concurrent with the subsampling and coarse-graining that occur as the image is propagated through the layers of the network. As a consequence, convolutional neural networks benefit from a separation of important characteristics of the image across several scales.

A similar principle arises in the multigrid scheme, which is used to speed up the numerical solution of partial differential equations. Here, an iterative method is separated into several levels that smooth errors on different frequency bands. In this work we look at a novel characterization of scale separation within the layers of a convolutional neural network. This is achieved by introducing neural network architectures that utilize ideas from the multigrid method. Explicitly, a two-grid multigrid cycle is incorporated into a convolutional layer, analogous to the restriction and prolongation operations of the multigrid approach. The use of dilation in the convolution operations is shown to enable the representation of image features at multiple scales of abstraction, conducive to the extraction of relevant morphological structures across these scales.
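
    As a purely illustrative sketch of how a two-grid cycle might be embedded in a convolutional layer, consider the following PyTorch block with a fine-scale dilated convolution, a restriction to a coarser grid, a coarse-scale convolution, and a prolongation back to the fine grid. The channel counts and the choice of strided and transposed convolutions are assumptions for illustration, not the architectures investigated in the talk.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoGridConvBlock(nn.Module):
        """Illustrative two-grid convolutional block (hypothetical design):
        fine-scale features are combined with features computed on a
        restricted (coarser) grid and prolongated back to the fine grid."""
        def __init__(self, channels):
            super().__init__()
            # fine-grid smoother: dilation enlarges the receptive field
            self.fine = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
            # restriction: strided convolution halves the spatial resolution
            self.restrict = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
            # coarse-grid operator
            self.coarse = nn.Conv2d(channels, channels, 3, padding=1)
            # prolongation: transposed convolution returns to the fine resolution
            self.prolong = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)

        def forward(self, x):
            fine = F.relu(self.fine(x))
            coarse = F.relu(self.coarse(self.restrict(x)))
            # coarse-grid contribution added to the fine-scale features
            return F.relu(fine + self.prolong(coarse))

    block = TwoGridConvBlock(channels=16)
    out = block(torch.randn(1, 16, 32, 32))   # output shape: (1, 16, 32, 32)
    ```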

    How to join online

    The talk is held online via Zoom. You can join with the following link:
    https://uni-kl-de.zoom.us/j/62521592603?pwd=VktnbVlrWHhiVmxQTzNWQlkxSy9WZz09

  • Thu 02 Jun 2022, 12:00, Online

    Paula Harder, Fraunhofer ITWM, Kaiserslautern

    Title: Physics-Constrained Learning of Aerosol Microphysics

    Abstract:

Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail due to computational constraints. In order to represent key processes, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM global climate aerosol model using the M7 microphysics, but the high computational costs make it very expensive to run with finer resolution or for longer times. We aim to use machine learning to emulate the microphysics model at sufficient accuracy while reducing the computational cost by being fast at inference time. Building on our previous work, we demonstrate a machine learning approach to emulate the M7 microphysics module. We investigated different approaches, neural networks as well as ensemble models such as random forests and gradient boosting, with the aim of achieving the desired accuracy and computational efficiency, and found the neural network (NN) to be the most successful. We use data generated from a realistic ECHAM-HAM simulation and train a model offline to predict one time step of the aerosol microphysics. The underlying data distribution is challenging, as the changes in the variables are often zero or very close to zero. We do not predict the full values, but the tendencies. To incorporate physics into our network, we explore both soft constraints for our emulator, by adding regularization terms to the loss function, and hard constraints, by adding completion and correction layers.
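
    To illustrate the soft-constraint idea only (this is not the authors' code), suppose, hypothetically, that the predicted tendencies of a group of mass variables should sum to zero so that mass is conserved; a regularization term penalizing the violation can then be added to the usual regression loss:

    ```python
    import torch

    def soft_constrained_loss(pred, target, mass_idx, weight=1.0):
        """Illustrative loss: MSE on the predicted tendencies plus a penalty
        on the violation of a hypothetical mass budget (the tendencies of the
        variables listed in mass_idx should sum to zero for each sample)."""
        mse = torch.mean((pred - target) ** 2)
        budget_residual = pred[:, mass_idx].sum(dim=1)   # per-sample violation
        penalty = torch.mean(budget_residual ** 2)
        return mse + weight * penalty

    # usage sketch: batch of 8 samples, 12 output tendencies,
    # variables 0..3 assumed (hypothetically) to form one mass budget
    pred = torch.randn(8, 12, requires_grad=True)
    target = torch.randn(8, 12)
    loss = soft_constrained_loss(pred, target, mass_idx=[0, 1, 2, 3], weight=10.0)
    loss.backward()
    ```

    A hard-constraint variant would instead append a correction layer to the network that, for example, subtracts the mean budget residual from the group of variables so that the budget holds exactly; this, too, is only a sketch of the general idea.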

    How to join online

    The talk is held online via Zoom. You can join with the following link:
    https://uni-kl-de.zoom.us/j/62521592603?pwd=VktnbVlrWHhiVmxQTzNWQlkxSy9WZz09

  • Tue 21 Jun 2022, 12:00, Hybrid (Room 32-349 and via Zoom)

Prof. Nijso Beishuizen, Bosch Deventer and Eindhoven University of Technology

    Title:
    Adjoint-based design optimization for combustion applications

    Abstract:

Combustion is used in many devices in the chemical industry, in transport, heating and power. Design requirements often compete with each other, and Computational Fluid Dynamics is a necessary (but not always sufficient) tool for arriving at a final design. We present a framework for automatic adjoint-based design optimization of combustion devices and focus on emission reduction and heat transfer optimization in a gas boiler. The flow solution is obtained from the preconditioned compressible Navier-Stokes equations and combustion is modeled using a laminar premixed flamelet approach. Fluid properties and reaction source terms are tabulated as a function of two controlling parameters: a progress variable and the total enthalpy. To increase the accuracy of the predicted pollutant emissions, additional transport equations for CO and NOx are solved.
    The combustion model is implemented in SU2, the discrete adjoint method is handled by CoDiPack and the optimization cycle is driven by FADO. Automatic remeshing can optionally be performed using the Pointwise mesh generation software. We will demonstrate the method by simultaneously minimizing CO and NOx emissions as well as the outlet temperature of a steady, laminar, premixed methane-air flame in a simplified 2D model of a gas boiler with strong flue gas cooling.

    How to join online

    The talk is held online via Zoom. You can join with the following link:
    https://uni-kl-de.zoom.us/j/62521592603?pwd=VktnbVlrWHhiVmxQTzNWQlkxSy9WZz09

  • Thu 23 Jun 2022, 12:00, Hybrid (Room 32-349 and via Zoom)

    Prof. Andrea Walther, Department of Mathematics, Humboldt University Berlin

    Title:
    On a semismooth conjugate gradient method

    Abstract:

In machine learning and other large-scale applications, deterministic and stochastic variants of the steepest descent method are nowadays widely used for the minimization of objectives that are only piecewise smooth. As an alternative, in this talk we present a deterministic descent method based on a generalization of the rescaled conjugate gradient method proposed by Phil Wolfe in 1975 for convex objectives. Without the convexity assumption, the new method exploits semismoothness to obtain conjugate pairs of generalized gradients, such that it can converge only to Clarke stationary points. In addition to the theoretical analysis, we present preliminary numerical results.
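
    For background, the sketch below shows the classical smooth nonlinear conjugate gradient iteration (a Polak-Ribière variant with an Armijo backtracking line search on a toy smooth objective). The method presented in the talk generalizes this kind of rescaled conjugate gradient scheme to piecewise smooth objectives via generalized gradients, which is not reproduced here.

    ```python
    import numpy as np

    # toy smooth objective: the Rosenbrock function
    def f(x):
        return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

    def grad(x):
        return np.array([
            -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
            200.0 * (x[1] - x[0]**2),
        ])

    def backtracking(x, d, g, alpha=1.0, rho=0.5, c=1e-4):
        # Armijo backtracking line search along the descent direction d
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= rho
        return alpha

    x = np.array([-1.2, 1.0])
    g = grad(x)
    d = -g
    for k in range(2000):
        alpha = backtracking(x, d, g)
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+ coefficient
        d = -g_new + beta * d
        if g_new @ d >= 0.0:          # restart if d is not a descent direction
            d = -g_new
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-8:
            break

    print("approximate minimizer:", x)   # should approach (1, 1)
    ```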

    How to join online

    The talk is held online via Zoom. You can join with the following link:
    https://uni-kl-de.zoom.us/j/62521592603?pwd=VktnbVlrWHhiVmxQTzNWQlkxSy9WZz09

  • Thu 07 Jul 2022, 12:00, Online

    Tahmineh Zakizadeh Fallahabadi, Aon Solution Germany GmbH, Wiesbaden

    Title:
    Hyperparameter Optimization for Machine Learning Models using Bayesian Optimization

    Abstract:

Hyperparameters are important for machine learning algorithms since they directly control the behaviour of the training algorithm and have a significant effect on the performance of machine learning models. Bayesian optimization is a framework for the global optimization of expensive black-box functions, which has recently gained traction in hyperparameter optimization for machine learning algorithms. In this talk, Bayesian optimization as a method to optimize hyperparameters of machine learning models is reviewed. First, we consider the traditional Bayesian optimization method; then we consider newly suggested methods that build on Bayesian optimization and are intended to be more efficient than the traditional approach.
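
    The following is a minimal, self-contained sketch of the traditional Bayesian optimization loop with a Gaussian process surrogate and the expected improvement acquisition function, applied to a hypothetical one-dimensional hyperparameter whose synthetic "validation loss" stands in for an expensive training run; it is illustrative only and does not reflect the newer methods discussed in the talk.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    # hypothetical black-box objective: validation loss as a function of one
    # hyperparameter theta (e.g., a learning rate on a log scale)
    def validation_loss(theta):
        return np.sin(3.0 * theta) + 0.3 * theta**2

    rng = np.random.default_rng(0)
    low, high = -3.0, 3.0

    # initial design
    X = rng.uniform(low, high, size=(4, 1))
    y = np.array([validation_loss(t) for t in X.ravel()])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)

    for it in range(20):
        gp.fit(X, y)
        # evaluate the acquisition on a candidate grid; in practice it is
        # optimized more carefully
        cand = np.linspace(low, high, 500).reshape(-1, 1)
        mu, sigma = gp.predict(cand, return_std=True)
        imp = y.min() - mu                        # improvement over the incumbent
        z = imp / np.maximum(sigma, 1e-12)
        ei = np.where(sigma > 0, imp * norm.cdf(z) + sigma * norm.pdf(z), 0.0)
        theta_next = cand[np.argmax(ei)]          # next hyperparameter to evaluate
        X = np.vstack([X, theta_next.reshape(1, 1)])
        y = np.append(y, validation_loss(theta_next[0]))

    print("best hyperparameter:", X[np.argmin(y)], "with loss:", y.min())
    ```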

    How to join online

    The talk is held online via Zoom. You can join with the following link:
    https://uni-kl-de.zoom.us/j/62521592603?pwd=VktnbVlrWHhiVmxQTzNWQlkxSy9WZz09

  • Thu 14 Jul 2022, 12:00, Hybrid (Room 32-349 and via Zoom)

    Rozan I. Rosandi, Differential-Algebraic Systems Group, TU Kaiserslautern

    Title:
    A Riemannian Framework for the Isogeometric Shape Optimization of Thin Shells

    Abstract:

    Structural optimization is concerned with finding an optimal design for a structure under mechanical load. In this talk, we consider thin elastic shell structures based on the linearized Koiter model, whose shape can be described by a surface embedded in Euclidean space. We regard the set of all embeddings of the surface as an infinite-dimensional Riemannian manifold and perform optimization in this setting using the Riemannian shape gradient. Non-uniform rational B-splines (NURBS) are employed to parameterize the surface and solve the underlying equations that govern the mechanical behavior of the shell via isogeometric analysis (IGA). By representing NURBS patches as B-spline patches in projective space, NURBS weights can also be incorporated into the optimization routine. We discuss the practical implementation of the method and demonstrate our approach on the compliance minimization of a half-cylindrical shell under static load and fixed area constraint.
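
    As a generic illustration of Riemannian gradient descent on a much simpler manifold than the space of embeddings considered in the talk, the sketch below minimizes a quadratic form over the unit sphere, using tangent-space projection of the Euclidean gradient and renormalization as retraction. It conveys only the structure of such an iteration, not the isogeometric shell setting or the Riemannian shape gradient itself.

    ```python
    import numpy as np

    # illustrative objective on the unit sphere: the Rayleigh quotient x^T A x,
    # whose constrained minimizer is an eigenvector of the smallest eigenvalue
    A = np.diag([3.0, 2.0, 0.5])

    def riemannian_grad(x):
        egrad = 2.0 * A @ x                     # Euclidean gradient of x^T A x
        return egrad - (x @ egrad) * x          # project onto the tangent space at x

    def retract(x, v):
        y = x + v                               # step in the tangent space
        return y / np.linalg.norm(y)            # retraction: map back to the sphere

    x = np.ones(3) / np.sqrt(3.0)               # starting point on the sphere
    step = 0.1
    for _ in range(200):
        x = retract(x, -step * riemannian_grad(x))

    print("x =", x, "f(x) =", x @ A @ x)        # x tends to +-e3, f(x) tends to 0.5
    ```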

    How to join online

    The talk is held online via Zoom. You can join with the following link:
    https://uni-kl-de.zoom.us/j/62521592603?pwd=VktnbVlrWHhiVmxQTzNWQlkxSy9WZz09

  • Thu 06 Oct 2022, 12:00, Room 32-349

    Marcel Sauer, DLR Cologne, Institute AT

    Title:
Development of an optimization-based automatic block-structured grid generation method

    Abstract:

In the development of turbomachinery components, flow simulations evaluate the design based on a discrete representation: the grid. The complexity of a turbomachinery blade passage allows for the usage of block-structured grids and their numerical advantages.

    In this talk, a new method for generating block-structured grids is presented. The method is capable of creating high-quality grids using only a small parameter set of block dimensions and distance prescriptions. The grid generation is formulated as an optimization problem, whose complexity is limited by the usage of a B-spline-based reduced grid description. Within a multi-stage process, the reduced grid description is optimized and a final grid is generated. The method is applied to turbomachinery test cases. Additionally, the method is compared to an algebraic grid generation method in the context of a process chain similar to a design optimization.