All Years Seminars
[INMA] 2024-02-13 (14h) : Relationship between sample size and architecture for the estimation of Sobolev functions using deep neural networks
At Euler building (room A.002)
Speaker :
Stéphane Chretien (University of Lyon 2)
Abstract :
Beyond the many successes of Deep Learning based techniques in various branches of data analytics, medicine, business, engineering and the human sciences, a sound understanding of the generalisation properties of these techniques is still elusive. Central to these successes are the availability of huge datasets and of huge computational resources, and some of the most recent trends have placed paramount importance on building huge neural networks with millions of parameters, most often several orders of magnitude more than the size of the training set. This set-up has, however, led to many surprises and counterintuitive discoveries. Overparametrisation was recently shown to favour connectivity, in a weak sense, of the set of stationary points, hence permitting stochastic-gradient-type methods to potentially reach good minimisers in several stages despite the wild nonconvexity of the training problem, as demonstrated by Kuditipudi et al. Relating generalisation to stability, recent theoretical breakthroughs have provided a better understanding of why generalisation cannot even happen without overparametrisation, as shown by Bubeck et al. Following the ideas developed by Belkin, a substantial amount of work has also been undertaken to study the double descent phenomenon and the associated benign overfitting property, which holds for least-norm estimators in linear and mildly non-linear regression, as well as for certain kernel-based methods. In the present work, we aim at studying the generalisation properties of overparametrised deep neural networks using a novel approach based on Neuberger's theorem.
[INMA] 2024-02-06 (14h) : Dynamic ranking and translation synchronization
At Euler building (room A.002)
Speaker :
Hemant Tyagi (INRIA Lille - Nord Europe)
Abstract :
In many applications such as recommendation systems or sports tournaments we are given outcomes of pairwise comparisons within a collection of $n$ items, the goal being to estimate the latent strengths and/or global ranking of the items. The Bradley-Terry-Luce (BTL) model is a popular statistical model which has been studied extensively in the literature from a theoretical perspective. Existing results for this problem predominantly focus on the setting consisting of a single comparison graph $G$. However, there exist scenarios (e.g., sports tournaments) where the pairwise comparison data (both the graph and the outcomes) evolves with time, and the data is made available at $T$ grid points in the time domain. Theoretical results for this dynamic setting are relatively limited in the literature.
In this talk, I will first describe a dynamic BTL model where the latent strengths of the items evolve in a Lipschitz manner over time. Given access to a sequence of $T$ comparison graphs and the associated pairwise outcomes, our goal is to estimate the latent strength of the items ($w_t \in \mathbb{R}^n$) at any given time point $t$. To this end, we propose a simple nearest neighbor based estimator combined with an existing spectral method for ranking (namely Rank Centrality). When the graphs are Erdős–Rényi graphs, $\ell_2$ and $\ell_{\infty}$ error bounds are obtained for estimating $w_t$, which in particular establishes the consistency of this method in terms of $T$.
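For readers unfamiliar with the spectral method mentioned above, here is a minimal sketch of Rank Centrality in the static setting: the pairwise win fractions define a Markov chain over the items, and its stationary distribution recovers the latent BTL strengths. (This illustrates only the standard static algorithm, not the speaker's nearest-neighbour time-smoothing step; the function name and the full-graph assumption are for illustration.)

```python
import numpy as np

def rank_centrality(wins, d_max=None):
    """Estimate latent BTL strengths from a pairwise win-fraction matrix.

    wins[i, j] = fraction of comparisons between i and j won by j
    (so wins[i, j] + wins[j, i] = 1 where i and j were compared).
    """
    n = wins.shape[0]
    if d_max is None:
        d_max = n  # any upper bound on the comparison-graph degree
    P = wins / d_max                           # off-diagonal transition probabilities
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))   # lazy self-loops make rows sum to 1
    # The stationary distribution of this chain is the strength estimate.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.abs(np.real(evecs[:, np.argmax(np.real(evals))]))
    return pi / pi.sum()
```

With exact BTL win probabilities $w_j/(w_i+w_j)$ on a complete graph, the chain is reversible and the stationary distribution is exactly $w/\sum_i w_i$; with finitely many noisy comparisons, the estimate concentrates around it.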
Next, we will look at a dynamic version of a related problem, namely Translation Synchronization, where the latent strengths of the items satisfy a weaker global smoothness assumption over the grid. I will describe two estimators for jointly estimating the latent strengths (over all grid points), and show $\ell_2$ error bounds which establish the (weak) consistency of the estimators with respect to the grid size $T$.
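In its static form, translation synchronization asks for latent values $x \in \mathbb{R}^n$ given noisy pairwise differences $z_{ij} \approx x_i - x_j$ on a graph; the least-squares solution is a graph-Laplacian system, determined up to a global shift. The following is a minimal static sketch (the function name is illustrative, and the dynamic estimators in the talk additionally exploit smoothness across the grid):

```python
import numpy as np

def translation_sync(n, obs):
    """Least-squares recovery of x from noisy differences obs[(i, j)] ~ x_i - x_j.

    Minimises sum_{(i,j)} (x_i - x_j - z_ij)^2; the normal equations are
    L x = b with L the graph Laplacian. The solution is fixed only up to
    a global translation, so we centre it to mean zero.
    """
    L = np.zeros((n, n))
    b = np.zeros(n)
    for (i, j), z in obs.items():
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
        b[i] += z;      b[j] -= z
    x = np.linalg.pinv(L) @ b   # min-norm solution, orthogonal to the all-ones shift
    return x - x.mean()
```

On a connected graph with noiseless observations this recovers the centred ground truth exactly.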
Based on joint work with Eglantine Karle and Ernesto Araya.
[INMA] 2024-01-30 (14h) : Gradient Methods with Memory featuring quadratic functional growth estimation for minimizing composite functions
At Euler building (room A.002)
Speaker :
Mihai Florea (INMA,UCLouvain)
Abstract :
The recently introduced Gradient Methods with Memory use a subset of the past oracle information to create a model of the objective function, whose accuracy enables them to surpass the traditional Gradient Methods in practical performance. The model introduces an overhead that is substantial unless one is dealing with smooth unconstrained problems. In this work, we introduce several Gradient Methods with Memory that can solve composite problems efficiently, including unconstrained problems with non-smooth objectives. The inexactness of the auxiliary problem does not degrade the convergence guarantees. Actually, we dynamically increase the guarantees so as to provably surpass those of their memory-less counterparts. These strengths are preserved when applying acceleration, and the containment of inexactness further prevents error accumulation. Our methods are able to estimate key geometry parameters to attain state-of-the-art worst-case rates on many important subclasses of composite problems, where the smooth part of the objective satisfies a strong convexity condition or a relaxation thereof. In particular, we formulate a restart strategy applicable to optimization methods with sublinear convergence guarantees of any order. Preliminary computational results on a synthetic benchmark of signal processing and machine learning problems show that the combined use of memory and dynamic estimation of quadratic functional growth attains 9 digits of objective accuracy in fewer than 200 iterations, even when strong convexity is absent.
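To illustrate why restarting matters under quadratic functional growth, here is a generic sketch (not the speaker's specific scheme) of Nesterov's accelerated gradient method with a function-value restart: whenever the objective increases, the momentum is reset, which recovers fast linear convergence on growth-conditioned problems without knowing the growth constant.

```python
import numpy as np

def agd_with_restart(f, grad, x0, L, n_iters=500):
    """Accelerated gradient descent with function-value restarts.

    f, grad : objective and its gradient; L : Lipschitz constant of grad.
    The momentum is reset whenever the objective increases, an adaptive
    restart heuristic that needs no knowledge of the growth parameter.
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    f_prev = f(x)
    for _ in range(n_iters):
        x_new = y - grad(y) / L                      # gradient step from extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)  # Nesterov extrapolation
        f_new = f(x_new)
        if f_new > f_prev:                           # restart: kill the momentum
            y, t_new = x_new.copy(), 1.0
        x, t, f_prev = x_new, t_new, f_new
    return x
```

On an ill-conditioned quadratic, the restarted iterates reach high objective accuracy far sooner than the plain sublinear guarantee suggests, which is the behaviour the restart strategy in the talk systematizes.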
This research was mostly performed in the Department of Electronics and Computers, Transilvania University of Brasov, Romania.
[INMA] 2023-12-19 (14h) : Decoding deceptive algorithms: risks, public accountability, and regulation
At Euler building (room A.002)
Speaker :
Luc Rocher (Oxford Internet Institute)
Abstract :
Dramatic claims about the potential and risks of Artificial Intelligence are now legion. Computers could soon replace most jobs or become sentient warlords. In this talk, we will take a step back and examine the inherent brittleness and vulnerabilities of algorithms and data-centric technologies. Luc Rocher, a researcher and lecturer at the Oxford Internet Institute, will present an overview of their research investigating harms in socio-technical systems infused with algorithms, using a human-centered computing lens. Their research in privacy and security helped uncover risks in privacy technologies, from de-identification to query-based algorithms and differential privacy. Their work on adversarial manipulation in algorithmic markets shows that pricing algorithms can be exploited by stronger competitors, leading to sustained collusion that harms consumers and may fall outside of current competition laws. They lead the Observatory of Anonymity (https://ooa.world), an interactive website covering 89 countries, where visitors can find out what makes them more vulnerable to re-identification and where researchers can test the anonymity of their research data.
[INMA] 2023-12-12 (14h) : Closed loop control model of human reaching movements
At Euler building (room A.002)
Speaker :
Frédéric Crevecoeur
Abstract :
Current research in motor control aims to understand how the brain transforms sensory information into motor commands. This question is surprisingly difficult when one considers the complexity of all the components of the biological system: the human body is a complex structure with intricate geometry, non-linear dynamics, delays and noise, while the nervous system consists of distributed processing in a network of billions of cells, dealing with information from multiple senses with their own reference frames and statistical properties. In the face of this complexity, researchers have relied on simplifying assumptions and on the theory of optimal control, enabling simple movements to be modeled and macroscopic properties of behavior to be described. Recently, our lab has focused on how healthy humans respond to changes in the environment that are consistent with model parameters and motor costs. I will present examples of results showing that the human brain updates the online controller following changes in the environment. We will discuss the implications of these experimental results from the point of view of theoretical models of control and present how they can be exploited to gain new insight into the neural basis of motor dysfunctions in clinical populations.
INMA Contact Info
Mathematical Engineering (INMA)
L4.05.01
Avenue Georges Lemaître, 4
+32 10 47 80 36
secretaire-inma@uclouvain.be
Mon–Fri, 9:00 a.m. – 5:00 p.m.
