
All Years Seminars

Page 22 of 152

[INMA] 2024-06-11 (14h) : New results for old MDP algorithms

At Euler building (room A.002)

Speaker : Arsenii Mustafin (Boston University)
Abstract : The Markov Decision Process (MDP) is a fundamental mathematical model for sequential decision problems. While the basic analysis of MDPs dates back to the 1960s, recent successes in reinforcement learning have sparked a new wave of interest and led to significant results. In this talk, I will cover our recent findings in the analysis of classical RL algorithms. The first part is dedicated to the application of variance-reduction techniques to TD-learning (TD-SVRG): we will discuss how a recently introduced gradient-splitting interpretation aids the convergence analysis of the TD-SVRG algorithm. The second part focuses on the convergence of the value iteration algorithm: I will demonstrate how a connectivity assumption on the optimal policy yields an improved convergence rate for the algorithm.
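The baseline the second half of the talk refines is classical value iteration, which is short enough to sketch. Below is a minimal example on a toy two-state, two-action MDP (all transition and reward numbers are invented for illustration); the talk's improved rate under a connectivity assumption sharpens the standard gamma-contraction behaviour this loop exhibits.

```python
import numpy as np

# Standard value iteration on a toy 2-state, 2-action MDP.
# P[a] is the transition matrix under action a; R[a, s] is the reward
# for taking action a in state s. All numbers are illustrative.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.7, 0.3]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor: the classical rate is gamma per iteration

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * P @ V        # Q[a, s]: one-step Bellman backup per action
    V_new = Q.max(axis=0)        # greedy improvement over actions
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
```

At convergence, V satisfies the Bellman optimality equation up to the stopping tolerance.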

[INMA] 2024-05-28 (14h) : Computational Imaging: Restoration Deep Networks as Implicit Priors

At Maxwell building (Shannon room)

Speaker : Ulugbek Kamilov (Washington University in St. Louis)
Abstract : Many interesting computational imaging problems can be formulated as imaging inverse problems. Since these problems are often ill-posed, one needs to integrate all the available prior knowledge to obtain high-quality solutions. This talk will explore a series of techniques that leverage deep neural networks for image restoration as data-driven, implicit priors for images. The methods discussed originate from the well-known plug-and-play (PnP) methodology, known for its effectiveness in addressing imaging inverse problems. We will extend the conversation to generalizations of PnP that move beyond the traditional use of additive white Gaussian noise (AWGN) denoisers to a variety of other restoration networks. This expansion not only enhances imaging performance but also offers the flexibility to train priors in the absence of clean data. Additionally, the talk will cover the theoretical underpinnings of using deep restoration networks and their applications in biomedical image reconstruction.
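The structure of a PnP iteration is easy to sketch: a gradient step on the data-fidelity term followed by a restoration step. The toy example below (the forward operator, signal, and the moving-average stand-in "denoiser" are all invented for illustration; a real PnP method plugs a trained restoration network into the `denoise` slot) shows that structure on a 1-D problem.

```python
import numpy as np

# Toy plug-and-play proximal-gradient iteration for an inverse problem
# y = A x + noise. All problem data is illustrative.
rng = np.random.default_rng(0)
n = 64
A = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)  # mild perturbation of identity
x_true = np.convolve(rng.standard_normal(n), np.ones(5) / 5, mode="same")  # smooth ground truth
y = A @ x_true + 0.01 * rng.standard_normal(n)

def denoise(x):
    # stand-in implicit prior: local averaging (a learned network in real PnP)
    return np.convolve(x, np.ones(3) / 3, mode="same")

step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(200):
    grad = A.T @ (A @ x - y)       # gradient of the data-fidelity term
    x = denoise(x - step * grad)   # proximal step replaced by the denoiser
```

Replacing the AWGN denoiser with other restoration networks, as the talk discusses, changes only the `denoise` slot, not the overall iteration.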

[INMA] 2024-05-21 (14h) : Stability and Performance of Discrete-Time Switched Nonlinear Systems

At Euler building (room A.002)

Speaker : Grace Deaecto (Unicamp and Imperial College London)
Abstract : Switched systems are composed of a set of subsystems and a rule (or function) that orchestrates the switching among them. They present interesting theoretical properties and a wide range of applications in many different areas of science. In this talk, our goal is to study the stabilisation problem for discrete-time switched nonlinear systems under two different scenarios: a time-dependent switching function (open-loop control) and a state-dependent switching function (closed-loop control). In the first, the switching function is subject to a persistent dwell-time constraint; in the second, it is a state-dependent control variable to be designed. In both cases, the goal is to ensure global exponential stability of the zero equilibrium and a guaranteed performance level for the overall system. A numerical method that applies to a class of switched polynomial systems is proposed to validate the results. It checks the sufficient conditions locally through LMIs and takes into account the maximisation of an ellipsoidal set from which no trajectory leaves the stability region. Academic examples illustrate the main features of the proposed methodologies.
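A state-dependent switching function of the closed-loop kind can be illustrated on a switched linear system, a simplification of the nonlinear setting of the talk (the mode matrices and Lyapunov weights below are invented for illustration): at each step the rule selects the mode minimising a predicted quadratic Lyapunov-like value. Neither mode here is stable on its own, yet the switching rule stabilises the origin.

```python
import numpy as np

# Toy discrete-time switched linear system x+ = A_sigma(x) x. Each mode has
# an eigenvalue 1.1, so neither is stable alone; the state-dependent rule
# switches to whichever mode shrinks a quadratic Lyapunov-like value most.
A = [np.array([[1.1, 0.0], [0.0, 0.5]]),
     np.array([[0.5, 0.0], [0.0, 1.1]])]
P = [np.eye(2), np.eye(2)]   # quadratic weights; a real design optimises these

def sigma(x):
    # closed-loop switching function: pick the mode with smallest predicted value
    return min(range(len(A)), key=lambda i: (A[i] @ x) @ P[i] @ (A[i] @ x))

x = np.array([1.0, 1.0])
norms = [float(np.linalg.norm(x))]
for _ in range(30):
    x = A[sigma(x)] @ x
    norms.append(float(np.linalg.norm(x)))
```

The trajectory norm decays geometrically even though each individual mode would diverge, which is the essence of the stabilising switching designs discussed in the talk.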

[INMA] 2024-05-13 (11h) : Adaptive Quasi-Newton and Anderson Acceleration Framework with Explicit Global (Accelerated) Convergence Rates

At Euler building (room A.002)

Speaker : Damien Scieur (Samsung SAIL Montreal)
Abstract : Despite the impressive numerical performance of quasi-Newton and Anderson/nonlinear acceleration methods, their global convergence rates have remained elusive for over 50 years. This study addresses this long-standing issue by introducing a framework that derives novel, adaptive quasi-Newton and nonlinear/Anderson acceleration schemes. Under mild assumptions, the proposed iterative methods exhibit explicit, non-asymptotic convergence rates that blend those of gradient descent and cubic-regularized Newton's method. The approach also includes an accelerated version for convex functions. Notably, these rates are achieved adaptively, without prior knowledge of the function's parameters. The framework is generic, and its special cases include algorithms such as Newton's method with random subspaces, finite differences, or lazy Hessian updates. Numerical experiments demonstrate the efficiency of the proposed framework, even compared to the L-BFGS algorithm with Wolfe line search. See https://arxiv.org/abs/2305.19179 for more information.
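For readers unfamiliar with the mechanism whose rates this work analyses, the textbook type-II Anderson acceleration scheme can be sketched in a few lines, here applied to the gradient-descent fixed-point map of a random strongly convex quadratic (all problem data is invented for illustration; this is the classical scheme, not the adaptive framework of the talk).

```python
import numpy as np

# Type-II Anderson acceleration with a small memory m: extrapolate over the
# last m fixed-point iterates using least-squares mixing weights.
rng = np.random.default_rng(1)
Q = rng.standard_normal((20, 20))
H = Q.T @ Q + np.eye(20)                 # SPD Hessian of f(x) = 0.5 x'Hx - b'x
b = rng.standard_normal(20)
step = 1.0 / np.linalg.norm(H, 2)

def g(x):
    # fixed-point map of gradient descent: its fixed point solves Hx = b
    return x - step * (H @ x - b)

def anderson(x0, m=5, iters=50):
    X, G = [x0], [g(x0)]
    x = G[0]
    for _ in range(iters):
        X.append(x)
        G.append(g(x))
        F = np.array([gi - xi for gi, xi in zip(G[-m:], X[-m:])]).T  # residual history
        # mixing weights: minimise ||F @ alpha|| subject to sum(alpha) = 1
        M = F.T @ F + 1e-12 * np.eye(F.shape[1])
        alpha = np.linalg.solve(M, np.ones(F.shape[1]))
        alpha /= alpha.sum()
        x = np.array(G[-m:]).T @ alpha   # extrapolated iterate
    return x

x_aa = anderson(np.zeros(20))
```

On this quadratic the accelerated iterates reduce the fixed-point residual far faster than plain gradient descent would over the same number of map evaluations.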

[INMA] 2024-05-07 (14h) : System theory for optimization and learning in complex and distributed systems

At Euler building (room A.002)

Speaker : Giuseppe Notarstefano (Univ. of Bologna)
Abstract : In this talk I will address control and learning scenarios that require the solution of (possibly) distributed optimization problems. I will first highlight some key challenges arising in these problems, such as their large-scale nature, the structure of the required policy, local communication requirements, and the online, closed-loop nature of the learning strategy. Then I will show how system-theory tools can be used to design and analyze solution strategies and to gain insight into the performance properties of the algorithms. Energy and robotic systems are key sources of concrete scenarios in which these challenges arise; applications to these domains will be shown along with future perspectives.
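As a concrete instance of the distributed optimization problems mentioned, here is a hypothetical sketch of decentralised gradient descent (DGD) over a ring of agents, each with a private quadratic cost (the mixing matrix and all problem data are invented for illustration). The system-theoretic viewpoint treats exactly this kind of interconnection: local gradient dynamics coupled through neighbour averaging.

```python
import numpy as np

# Decentralised gradient descent over a ring of N agents; agent i privately
# minimises ||x - targets[i]||^2 and communicates only with its 2 neighbours.
N, d = 4, 3
rng = np.random.default_rng(2)
targets = rng.standard_normal((N, d))

# doubly stochastic mixing matrix for the ring graph
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

X = np.zeros((N, d))           # row i holds agent i's local iterate
step = 0.1
for _ in range(500):
    grads = 2 * (X - targets)  # local gradients, computed without coordination
    X = W @ X - step * grads   # neighbour averaging + local gradient step
```

With a constant step size the agents agree only up to a step-size-dependent neighbourhood of the global minimiser (the average of the targets); diminishing steps or gradient-tracking variants recover exact consensus, and analysing such closed-loop behaviour is where system-theory tools come in.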