All Years Seminars
[INMA] 2023-11-07 (14h) : Modelling, analysis and control of counterflow heat exchangers
At Euler building (room A.002)
Speaker :
Denis Dochain (CESAME)
Abstract :
Heat exchangers are among the most widely used devices in industry. Almost all produced or collected thermal energy passes at least once through a heat exchanger. The objective of this presentation is to give a survey of the results obtained during the PhD thesis of Jacques Kadima. The dynamics of counterflow heat exchangers are described by a set of partial differential equations for both fluids involved in the heat exchange. The presentation will provide identification results for the heat exchanger model, analysis results (including a thermodynamic perspective) and control design results.
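For orientation, a standard distributed-parameter model of a counterflow exchanger (a textbook form, not necessarily the exact model of the thesis) couples the two fluid temperatures through transport equations advecting in opposite directions:

```latex
\frac{\partial T_1}{\partial t}(x,t) + v_1 \frac{\partial T_1}{\partial x}(x,t)
  = h_1 \bigl( T_2(x,t) - T_1(x,t) \bigr),
\qquad
\frac{\partial T_2}{\partial t}(x,t) - v_2 \frac{\partial T_2}{\partial x}(x,t)
  = h_2 \bigl( T_1(x,t) - T_2(x,t) \bigr),
\qquad x \in (0, L),
```

where $T_1, T_2$ are the fluid temperatures, $v_1, v_2 > 0$ the flow velocities (the opposite advection signs encode the counterflow configuration), and $h_1, h_2$ the heat-transfer coefficients; the boundary inputs are the inlet temperatures $T_1(0,t)$ and $T_2(L,t)$.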
[INMA] 2023-10-31 (14h) : On mixed network coordination/anticoordination games
At Euler building (room A.002)
Speaker :
Martina Vanelli (Politecnico di Torino)
Abstract :
Whilst network coordination games and network anti-coordination games have received a considerable amount of attention in the literature, network games with coexisting coordinating and anti-coordinating players are known to exhibit more complex behaviors. In fact, depending on the network structure, such games may even fail to have pure-strategy Nash equilibria. A classical example is the matching pennies (discoordination) game. We derive graph-theoretic conditions for the existence of pure-strategy Nash equilibria in mixed network coordination/anti-coordination games of arbitrary size. For the case where such conditions are met, we study the asymptotic behavior of best-response dynamics and provide sufficient conditions for finite-time convergence to the set of Nash equilibria. These results build on an extension and refinement of the notion of network cohesiveness and on the formulation of the new concept of network indecomposability. The findings are extended to directed graphs and employed to prove necessary and sufficient conditions for global stability of consensus equilibria in linear threshold dynamics, robustly with respect to a (constant or time-varying) external field.
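To make the setting concrete, here is a minimal sketch (hypothetical helper names, binary actions, majority ties broken towards 1) of asynchronous best-response dynamics on a graph with mixed coordinating/anti-coordinating players; a coordinating player matches the majority action of its neighbors, an anti-coordinating player opposes it:

```python
import random

def best_response(a, player, neighbors, is_coordinating):
    """Best response of `player` to action profile `a`: coordinating players
    match the majority action of their neighbors; anti-coordinating players
    pick the opposite action."""
    n_ones = sum(a[j] for j in neighbors[player])
    majority = 1 if 2 * n_ones >= len(neighbors[player]) else 0
    return majority if is_coordinating[player] else 1 - majority

def async_best_response_dynamics(neighbors, is_coordinating, steps=10_000, seed=0):
    """Run asynchronous (one random player per step) best-response dynamics
    from a random initial profile; return the profile reached and whether it
    is a pure-strategy Nash equilibrium, i.e. a fixed point of the dynamics."""
    rng = random.Random(seed)
    players = list(neighbors)
    a = {i: rng.randint(0, 1) for i in players}
    for _ in range(steps):
        i = rng.choice(players)
        a[i] = best_response(a, i, neighbors, is_coordinating)
    is_nash = all(a[i] == best_response(a, i, neighbors, is_coordinating)
                  for i in players)
    return a, is_nash
```

On a triangle of coordinating players the dynamics settle on a consensus profile, which is a Nash equilibrium; with one coordinating and one anti-coordinating player on a single edge (matching pennies), no profile is ever a fixed point.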
[INMA] 2023-10-24 (14h) : Tropical toric maximum likelihood estimation
At Euler building (room A.002)
Speaker :
Karel Devriendt (Max Planck Institute, Leipzig)
Abstract :
Many common statistical models are parametrized by polynomial maps; some examples are log-linear and graphical models. To study such statistical models, applied algebraic geometry can be used in an approach known as algebraic statistics. One well-studied problem in this setting is maximum likelihood estimation (MLE): given a model and some data, which points in your model best explain the data? In this talk, we consider the MLE problem when our data depends on a parameter, and we ask what can be said about the convergence rates of the solution as the parameter goes to zero. This problem was solved for linear models by Agostini et al. (2021) and Ardila-Eur-Penaguiao (2022). Here we consider the problem for log-linear models, also called toric models, where the MLE problem comes down to intersecting a toric variety with a linear space. Using tools from tropical geometry -- a combinatorial shadow of algebraic geometry -- the problem simplifies to intersecting the tropical toric variety (which is a linear space) with a tropical linear space (which is a polyhedral complex). I will present some preliminary results which show that the tropical MLE points, i.e. the convergence rates, are given by simple linear transformations of the data, and that the different MLE points are labeled by simplices in a certain triangulation. This is joint work with Erick Boniface and Serkan Hosten.
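For readers unfamiliar with the toric MLE setup, in standard algebraic-statistics notation (assumed here, not taken from the talk) a toric model with integer matrix $A = (a_{ij})$ parametrizes probabilities by monomials, and Birch's theorem characterizes the MLE as the intersection of the toric variety with an affine-linear space determined by the data $u$:

```latex
p_j(\theta) \;=\; \frac{1}{Z(\theta)} \prod_i \theta_i^{\,a_{ij}},
\qquad
A\,\hat p \;=\; \frac{A\,u}{u_1 + \cdots + u_n},
\qquad \hat p \in \mathcal{M}_A ,
```

which is exactly the "toric variety meets linear space" formulation of the MLE problem mentioned in the abstract.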
[INMA] 2023-10-17 (14h) : Newcomers seminar
At Euler building (room A.002)
Section 1: Derivative-free optimization methods based on finite-difference gradient approximations
Speaker :
Dânâ Davar
Abstract : Many applications require the solution of optimization problems; however, it can be difficult to access the gradient of the objective function. This issue is very common, especially when the function values come from a computer simulation realized through black-box software. In such cases, derivative-free optimization methods are required, i.e., methods that only rely on function evaluations. The purpose of this project is the development and worst-case complexity analysis of derivative-free optimization methods based on finite-difference gradient approximations, for large-scale nonconvex problems with possibly inexact function values.
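The basic building block of such methods can be sketched as follows (a minimal forward-difference gradient estimator, not the project's actual algorithm): each component of the gradient is approximated from function values only, at a cost of n + 1 evaluations per gradient.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference approximation of the gradient of f at x, using
    n + 1 function evaluations (n = dimension of x).  The truncation error
    is O(h); with noisy function values an O(noise / h) term is added,
    which is why the step size h must be tuned in derivative-free methods."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h                      # perturb one coordinate at a time
        g[i] = (f(x + e) - fx) / h
    return g
```

Plugging such an estimator into a standard gradient scheme yields a derivative-free method whose worst-case complexity can then be analyzed in terms of function evaluations rather than gradient evaluations.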
Section 2: Unleashing the power of neural networks for derivative-free optimization
Speaker :
Timothé Taminiau (PhD UCLouvain/INMA)
Abstract : Derivative-free methods are useful in problems where the analytical form of the objective is either hidden or too intricate. In this setup, we consider the objective function as a black box which can only be evaluated at some points. The "Learning-to-Optimize" framework is a promising way of designing such algorithms, where a new method is learned by means of a deep neural network model. These methods have shown good practical performance, although theoretical complexity results have not been proved yet. The purpose of this project is to explore, empirically and theoretically, the possible benefits of deep learning for designing derivative-free optimization algorithms.
Section 3: Chance-Constrained Optimization: Probabilistic Upper Bounds Applied to the JSR
Speaker :
Alexis Vuille (PhD UCLouvain/INMA)
Abstract : Chance-constrained optimization, in which only a sampled subset of the constraints is enforced, allows one, under regularity and structural assumptions, to compute probabilistic upper bounds on the violation probability. This theory can be applied to the joint spectral radius (JSR) for the data-driven stability analysis of switched linear systems.
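The data-driven ingredient can be illustrated by the following sketch (hypothetical helper name; the matrices are known here only so the system can be simulated): one samples random modes and states of a switched system x+ = A_sigma x and records the largest observed one-step growth. Scenario (chance-constrained) theory turns such a sampled maximum into a probabilistic upper bound; the confidence-dependent inflation factor is deliberately omitted from this sketch.

```python
import numpy as np

def sampled_growth_bound(modes, n_samples=1000, seed=0):
    """Largest one-step growth ratio ||A_i x|| / ||x|| observed over
    uniformly sampled modes i and random unit vectors x, for the switched
    linear system x+ = A_sigma x.  This empirical maximum is the raw
    quantity that scenario theory converts into a probabilistic upper
    bound related to the JSR."""
    rng = np.random.default_rng(seed)
    dim = modes[0].shape[0]
    gamma = 0.0
    for _ in range(n_samples):
        A = modes[rng.integers(len(modes))]   # random mode
        x = rng.standard_normal(dim)
        x /= np.linalg.norm(x)                # random unit vector
        gamma = max(gamma, np.linalg.norm(A @ x))  # ||A x||, since ||x|| = 1
    return gamma
```

For instance, with two stable modes 0.5·I and 0.8·I the sampled bound concentrates near 0.8, the worst one-step growth.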
[INMA] 2023-10-10 (14h) : Elliptic PDE learning is provably data-efficient
At Euler building (room A.002)
Speaker :
Nicolas Boulle (University of Cambridge)
Abstract :
PDE learning is an emerging field at the intersection of machine learning, physics, and mathematics, that aims to discover properties of unknown physical systems from experimental data. Popular techniques exploit the approximation power of deep learning to learn solution operators, which map source terms to solutions of the underlying PDE. Solution operators can then produce surrogate data for data-intensive machine learning approaches such as learning reduced order models for design optimization in engineering and PDE recovery. In most deep learning applications, a large amount of training data is needed, which is often unrealistic in engineering and biology. However, PDE learning is shockingly data-efficient in practice. We provide a theoretical explanation for this behavior by constructing an algorithm that recovers solution operators associated with elliptic PDEs and achieves an exponential convergence rate with respect to the size of the training dataset. The proof technique combines prior knowledge of PDE theory and randomized numerical linear algebra techniques and may lead to practical benefits such as improving dataset and neural network architecture designs.
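The randomized numerical linear algebra ingredient mentioned above can be illustrated in a finite-dimensional toy setting (the paper's actual algorithm is more elaborate and exploits the structure of elliptic Green's functions; this sketch only shows the randomized range finder of Halko, Martinsson and Tropp): a symmetric, numerically low-rank discretized solution operator is recovered from a small number of input-output pairs, i.e. PDE solves.

```python
import numpy as np

def recover_symmetric_operator(apply_A, n, rank, oversample=10, seed=0):
    """Recover a symmetric, numerically low-rank n x n operator
    (e.g. a discretized elliptic solution operator) from
    2 * (rank + oversample) input-output pairs x -> A x, using a
    randomized range finder:  A  ~  Q Q^T A  =  Q (A Q)^T  when A = A^T."""
    rng = np.random.default_rng(seed)
    k = rank + oversample
    Omega = rng.standard_normal((n, k))   # random "source terms"
    Y = apply_A(Omega)                    # first k solves: sample range(A)
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for range(A)
    return Q @ apply_A(Q).T               # k more solves, then project
```

Each column of `Omega` plays the role of a random source term, and each application of `apply_A` the role of one experiment or PDE solve, so the number of solves, rather than the grid size, drives the cost.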
INMA Contact Info
Mathematical Engineering (INMA)
L4.05.01
Avenue Georges Lemaître, 4
+32 10 47 80 36
secretaire-inma@uclouvain.be
Mon – Fri 9:00A.M. – 5:00P.M.
