
All Years Seminars


[INMA] 2024-11-05 (14h) : Addressing data analysis challenges in next-generation gravitational-wave detectors

At Euler building (room A.002)

Speaker : Justin Janquart (UCLouvain, IRMP)
Abstract : The first detection of a gravitational wave signal in 2015 opened a new observational window on the Universe. Since then, several detector upgrades have been made, leading to more routine detections, currently at a rate of one to a few events per week. To continue increasing the number of detected signals and the observational accuracy, the next generation of gravitational wave detectors is planned, with Einstein Telescope in Europe and Cosmic Explorer in the United States. These detectors will see tens to hundreds of thousands of gravitational wave signals, and some will be extremely loud. This will open interesting new scientific avenues such as unprecedented tests of general relativity, the possibility of probing the Universe at previously inaccessible cosmic scales, and searches for potential dark matter candidates. However, these improvements come at a cost: the data analysis systems will be challenged by overlapping signals, strong astrophysical stochastic backgrounds, and the lack of time to characterize the noise, amongst other things. Here, I explain how these problems arise, what their effect could be if not correctly accounted for, and some of the avenues that have been explored in recent years.

[INMA] 2024-10-24 (14h) : Alternating projections and convex infeasibility: theory and applications

At Euler building (room A.002)

Speaker : Luiz-Rafael dos Santos (Universidade Federal de Santa Catarina, Brazil)
Abstract : In this talk, we first demonstrate that inconsistency arising from the infeasibility of closed convex sets can be leveraged to enhance the performance of alternating projections and their corresponding convergence rates, surprisingly leading to finite convergence under suitable error bounds. In the second part, we apply this concept to develop a new and numerically competitive method for solving the basis pursuit problem. Basis pursuit (BP) seeks the vector with the smallest l1-norm among the solutions to a given linear system, and it is a well-known convex relaxation of the sparse affine feasibility (SAF) problem. SAF aims to find sparse solutions to underdetermined systems, a key issue in compressed sensing, a technique used to recover sparse signals from incomplete measurements. Although SAF is NP-hard, there are instances where its solution coincides with that of BP. The importance of basis pursuit led to a great deal of research into efficient methods for solving it, particularly in large-scale settings, often via linear programming reformulations. However, our approach tackles basis pursuit in its original form, employing a scheme that uses alternating projections within subproblems. These subproblems are purposefully inconsistent, involving two disjoint sets. Numerical experiments show that the proposed algorithm is competitive.
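As a toy illustration of the alternating-projections mechanism on an infeasible pair of convex sets (the sets and starting point below are my own illustrative choices, not taken from the talk): with A the x-axis and B the half-plane y >= 1, the composed projection drives the iterate to the point of A closest to B, exhibiting the useful limit behaviour that inconsistency still permits.

```python
# Alternating projections between two disjoint convex sets in the plane:
# A = the x-axis {(x, 0)}, B = the half-plane {(x, y) : y >= 1}.
# The intersection is empty, yet the iterates settle at the point of A
# closest to B, separated from B by the "gap" between the sets.

def proj_A(p):  # nearest point of the x-axis
    return (p[0], 0.0)

def proj_B(p):  # nearest point of the half-plane y >= 1
    return (p[0], max(p[1], 1.0))

x = (3.0, 5.0)  # arbitrary starting point
for _ in range(50):
    x = proj_A(proj_B(x))

print(x)  # -> (3.0, 0.0): the point of A closest to B (gap of 1)
```

Under an error bound relating the distance to each set, the talk's finite-convergence results sharpen this picture considerably; the sketch only shows the basic projection loop.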

[INMA] 2024-10-22 (14h) : Hidden convexity in linear neural networks

At Euler building (room A.002)

Speaker : Benoît Legat
Abstract : Training neural networks involves minimising a loss function that is nonconvex with respect to the network's weights. Despite this nonconvexity, when the optimization converges to a local minimum, it is often close to globally optimal. In optimization, such a transfer from local to global properties is usually achieved through convexity, which neural networks seem to lack. Or is it merely hidden? There are two sources of nonconvexity in neural networks: 1) the nonlinear activation functions and 2) the multilinear product of the weight matrices. Interestingly, recent research has demonstrated that the second source does not, on its own, lead to local minima that are not global when paired with a mean squared error loss. Although this result is promising, the complexity of the proof limits its generalization to more complex models, such as those with nonlinear activation functions or other loss structures. In this talk, we reveal the convexity hidden in the problem and show how it allows for a simpler and more insightful proof. By exposing this underlying structure, we aim to open the door to recognizing which types of models are more likely to train well and to extend this understanding to other machine learning architectures.
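A scalar toy example (my own, not from the talk) makes the two parametrizations concrete: for a two-layer linear network f(x) = w2*w1*x fitted to the identity map, the squared loss is nonconvex in the weights (w1, w2) but convex in the product p = w1*w2.

```python
# Toy "hidden convexity" check: L(w1, w2) = (w1*w2 - 1)^2 is nonconvex
# in the weights, yet g(p) = (p - 1)^2 is convex in the product p.

def loss(w1, w2):
    return (w1 * w2 - 1.0) ** 2

# Nonconvexity in the weights: the midpoint of the two global minima
# (1, 1) and (-1, -1) has HIGHER loss than both endpoints.
mid = loss(0.0, 0.0)
ends = (loss(1.0, 1.0), loss(-1.0, -1.0))
print(mid, ends)  # -> 1.0 (0.0, 0.0)

# Convexity in the product: a parabola never lies above its chords.
g = lambda p: (p - 1.0) ** 2
assert g(0.5 * (0.0 + 2.0)) <= 0.5 * (g(0.0) + g(2.0))
```

Note that the midpoint (0, 0) is a saddle point rather than a local minimum, consistent with the result cited in the abstract that the multilinear product alone creates no spurious local minima under squared loss.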

[INMA] 2024-10-17 (10h) : Recent developments in Direct Multisearch

At Euler building (room A.002)

Speaker : Ana Luisa Custódio (Universidade Nova de Lisboa)
Abstract : Direct Multisearch (DMS) is a well-established class of multiobjective derivative-free optimization methods, widely used by the optimization community in practical applications and as a benchmark for new solvers. In this talk, the key features of DMS will be described, convergence results and worst-case complexity bounds will be provided, and recent developments will be covered, including the definition of a search step based on quadratic polynomial interpolation and strategies to address nonlinear constraints.
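For readers unfamiliar with derivative-free poll-based methods, here is a minimal single-objective direct-search sketch (a drastically simplified toy cousin of DMS, which additionally handles several objectives and maintains a list of nondominated points; the objective and parameters below are arbitrary choices of mine):

```python
# Minimal directional direct search: poll a fixed set of directions,
# accept any improving point, and halve the step size after an
# unsuccessful poll. No derivatives are ever evaluated.

def direct_search(f, x, step=1.0, tol=1e-6):
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # coordinate poll set
    while step > tol:
        for d in dirs:
            y = (x[0] + step * d[0], x[1] + step * d[1])
            if f(y) < f(x):       # successful poll: move, keep the step
                x = y
                break
        else:
            step *= 0.5           # unsuccessful poll: shrink the step
    return x

f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
x = direct_search(f, (0.0, 0.0))
print(x)  # -> (1.0, -2.0), the minimiser
```

The convergence theory and complexity bounds discussed in the talk analyse exactly this poll/step-size mechanism, extended to the multiobjective setting.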

[INMA] 2024-10-15 (14h) : Newcomers' seminars (PhDs)

At Euler building (room A.002)


Section 1: Generalized low-rank plus sparse models in angular and spectral differential imaging for exoplanet detection with regularized implicit neural representations

Speaker : Nicolas Mil-Homens Cavaco (PhD UCLouvain/INMA)
Abstract : Differential imaging is a widespread technique that involves post-processing images captured by ground-based telescopes during an observing campaign in order to make exoplanets in a distant planetary system directly visible. This technique is based on introducing diversity into the observation process, for example by taking advantage of the Earth's rotation in angular differential imaging (ADI), by recording many wavelengths in spectral differential imaging (SDI), or by a combination of both (ASDI). The effect is to increase the signal-to-noise ratio of exoplanet image features relative to unstructured and non-physical data corruption. Direct imaging of exoplanets with ASDI is nevertheless particularly challenging, since an exoplanet is faint compared to its host star and the surrounding data corruption noise. In this context, we propose to develop novel signal representations and inverse problem-solving techniques by incorporating regularized implicit neural representations (INRs), defined as continuous parametric models based on neural network architectures, into dedicated low-rank plus sparse models, in order to address the specific geometric transformations experienced by exoplanets in ASDI and to reduce the interpolation error induced by these transformations. More generally, this work aims to offer innovative solutions for employing INRs in continuous or high-dimensional signal representations for various inverse problems, especially those where low-rank, sparse, or low-rank plus sparse models are typically employed.
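The low-rank-plus-sparse intuition behind ADI can be caricatured in one dimension (this is my own drastically simplified sketch, not the authors' method, and it uses a pixelwise median in place of a true low-rank fit): the stellar speckle background is quasi-static across frames, while the planet moves with the field rotation, so subtracting a per-pixel background estimate isolates the moving source.

```python
# Three 4-pixel "frames": a constant background of 5 plus a planet
# (brightness +1) that shifts by one pixel per frame due to rotation.
import statistics

frames = [
    [5.0, 5.0, 5.0, 6.0],   # frame 1: planet at pixel 3
    [5.0, 5.0, 6.0, 5.0],   # frame 2: planet at pixel 2
    [5.0, 6.0, 5.0, 5.0],   # frame 3: planet at pixel 1
]

n_pix = len(frames[0])
# Quasi-static ("low-rank") part: pixelwise median over frames.
background = [statistics.median(f[j] for f in frames) for j in range(n_pix)]
# Sparse residual: the moving planet stands out after subtraction.
residuals = [[f[j] - background[j] for j in range(n_pix)] for f in frames]

print(background)    # -> [5.0, 5.0, 5.0, 5.0]
print(residuals[0])  # -> [0.0, 0.0, 0.0, 1.0]
```

The proposed work replaces this crude separation with structured low-rank plus sparse models whose sparse component is parametrized by regularized INRs, precisely to handle the geometric transformations that this toy ignores.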

Section 2: Bridging Minds and Movements: Nonlinear Control Models for Human Reaching Movements

Speaker : Alexandre Thyrion (PhD UCLouvain/INMA)
Abstract : The large majority of current research aimed at improving the understanding of the cerebral mechanisms underlying human reaching movements is based on linear approximations of the body's biomechanics, completely neglecting the impact of the system's inherent nonlinearities but allowing the use of linear control models. However, evidence has shown that this simplification, although extremely common, could in many cases be inadequate. This study develops nonlinear control models that allow us to study directly the behavior of a more realistic nonlinear model of the biomechanics. We also aim to study the hypotheses underlying these new models and their implications for the a priori functioning of the brain. Finally, we will compare the movements produced by the model with experimental observations and give some insights into future research.

Section 3: Computer-assisted analysis of inexact and stochastic first-order optimization methods

Speaker : Pierre Vernimmen
Abstract : The increasing complexity of large-scale optimization challenges, particularly in the field of machine learning, requires the development of more efficient algorithms. First-order methods have emerged as a preferred choice due to their simplicity and minimal computational requirements; however, their effectiveness can decrease when information is inexact or when they are subject to stochastic influences. This study aims to extend the Performance Estimation Problem (PEP) methodology, a robust framework that automates the worst-case evaluation of optimization algorithms, to such inexact and stochastic settings. Using PEP, we will examine traditional and new first-order optimization algorithms in scenarios where gradient information is inexact or where randomness affects the decision-making process, a situation frequently encountered in data-driven applications such as machine learning. The main objective is to deepen the theoretical understanding of these algorithms, refine their worst-case performance guarantees, and develop improved methods that demonstrate greater reliability and efficiency in real-world applications.
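The effect that PEP quantifies rigorously can be previewed numerically (this is my own toy experiment, not the PEP framework itself): gradient descent on a simple quadratic, where the gradient oracle carries an adversarial relative error of magnitude delta, contracts strictly more slowly than its exact counterpart.

```python
# Gradient descent on f(x) = x^2 / 2 with a worst-case relative
# gradient error: the inexact oracle returns grad * (1 - delta),
# systematically undershooting the true gradient.

def run(delta, steps=20, eta=1.0, x=1.0):
    for _ in range(steps):
        grad = x                           # exact gradient of x^2/2
        grad_inexact = grad * (1 - delta)  # adversarial relative error
        x = x - eta * grad_inexact
    return abs(x)

exact, inexact = run(0.0), run(0.5)
print(exact, inexact)  # the exact method reaches 0 in one step;
                       # the inexact one only contracts by 1 - eta*(1-delta)
assert exact < inexact
```

A PEP analysis would turn this single trajectory into a provable worst-case bound over all functions in a given class and all admissible error sequences, which is the automation the abstract refers to.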