Seminar Details
2023-10-17 (14h) : Newcomers seminar
At the Euler building (room a.002)
Organized by Mathematical Engineering
Section 1: Derivative-free optimization methods based on finite-difference gradient approximations
Speaker:
Dânâ Davar
Abstract:
Many applications require the solution of optimization problems in which the gradient of the objective function is difficult or impossible to access. This issue is very common, in particular when the function values come from a computer simulation run through black-box software. In such cases, derivative-free optimization methods are required, i.e., methods that rely only on function evaluations. The purpose of this project is the development and worst-case complexity analysis of derivative-free optimization methods based on finite-difference gradient approximations, for large-scale nonconvex problems with possibly inexact function values.
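To make the core idea concrete, here is a minimal sketch (not the speaker's actual method) of a forward finite-difference gradient approximation, which estimates the gradient of a black-box objective using n extra function evaluations; the test function and step size h are illustrative choices.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward finite-difference approximation of the gradient of f at x.

    Uses one extra function evaluation per coordinate, so f is treated
    purely as a black box: only function values are queried.
    """
    fx = f(x)
    g = np.empty_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h  # perturb one coordinate at a time
        g[i] = (f(x + e) - fx) / h
    return g

# Illustrative smooth test function (Rosenbrock) and evaluation point.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
x0 = np.array([0.5, 0.5])
g_fd = fd_gradient(rosen, x0)  # close to the analytic gradient (-51, 50)
```

The forward-difference error grows as h shrinks below roundoff scale, which is one reason inexact function values (as mentioned in the abstract) make the choice of h delicate.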
Section 2: Unleashing the power of neural networks for derivative-free optimization
Speaker:
Timothé Taminiau (PhD UCLouvain/INMA)
Abstract:
Derivative-free methods are useful for problems where the analytical form of the objective is either hidden or too intricate. In this setting, we treat the objective function as a black box that can only be evaluated at selected points. The "Learning-to-Optimize" framework is a promising way to design such algorithms, in which a new method is learned by a deep neural network model. These methods have shown good practical performance, although no theoretical complexity results have been proved for them yet. The purpose of this project is to explore, empirically and theoretically, the possible benefits of deep learning for designing derivative-free optimization algorithms.
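The black-box setting described above can be sketched as follows: the solver may only query function values through an oracle that counts evaluations, and any derivative-free method (here a naive coordinate search, chosen purely for illustration) interacts with the objective only through that oracle. All names are hypothetical, not the speaker's framework.

```python
class BlackBox:
    """Hides the objective; only function values can be queried."""
    def __init__(self, f):
        self._f = f          # hidden objective, never exposed to the solver
        self.n_evals = 0     # evaluation-budget accounting

    def __call__(self, x):
        self.n_evals += 1
        return self._f(x)

def coordinate_search(oracle, x, step=1.0, tol=1e-4):
    """Naive derivative-free method: probe each coordinate in both
    directions, keep improvements, halve the step when stuck."""
    x = list(x)
    best = oracle(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = list(x)
                trial[i] += s
                val = oracle(trial)
                if val < best:
                    best, x, improved = val, trial, True
        if not improved:
            step *= 0.5
    return x, best

oracle = BlackBox(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2)
x_best, f_best = coordinate_search(oracle, [0.0, 0.0])
# x_best → [1.0, -2.0]; oracle.n_evals records the total query cost
```

A Learning-to-Optimize approach would replace the fixed update rule inside `coordinate_search` with one produced by a trained neural network, while keeping the same evaluation-only interface.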
Section 3: Probabilistic upper bounds from chance-constrained optimization applied to the JSR
Speaker:
Alexis Vuille (PhD UCLouvain/INMA)
Abstract:
Chance-constrained optimization, in which only a subset of the constraints are sampled, allows, under regularity and structural assumptions, the computation of probabilistic upper bounds on the violation probability. This theory can be applied to the joint spectral radius (JSR) to analyse, in a data-driven fashion, the stability of switched linear systems.
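A toy illustration of the sampled-constraint idea (the scenario approach), with a hypothetical one-dimensional program rather than the JSR application itself: minimise x subject to x >= a for an uncertain a ~ Uniform[0, 1], using only N sampled scenarios. The scenario solution is x* = max of the sampled a's, and scenario theory bounds the expected violation probability by d/(N+1), with d = 1 decision variable here.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Sample N scenarios of the uncertain constraint x >= a.
N = 100
scenarios = [random.random() for _ in range(N)]
x_star = max(scenarios)  # scenario solution: smallest x feasible for all samples

# Monte Carlo estimate of the violation probability P(a > x_star) on
# fresh, unseen scenarios; its expectation is bounded by 1/(N+1) ~ 0.0099.
fresh = [random.random() for _ in range(100_000)]
violation = sum(a > x_star for a in fresh) / len(fresh)
```

The JSR application in the talk replaces this toy constraint with stability constraints of a switched linear system observed from data, but the bound on the violation probability is of the same sampled-constraint type.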
