
All Years Seminars


[INMA] 2025-10-07 (14:00) : Communication-efficient distributed optimization algorithms

At Euler building (room A.002)

Speaker : Laurent Condat (King Abdullah University of Science and Technology (KAUST))
Abstract : In distributed optimization and machine learning, a large number of machines perform computations in parallel and communicate back and forth with a server. In particular, in federated learning, the distributed training process is run on personal devices such as mobile phones. In this context, communication, which can be slow, costly, and unreliable, forms the main bottleneck. To reduce it, two strategies are popular: 1) local training, which consists in communicating less frequently; 2) compression. Also, a robust algorithm should allow for partial participation. I will present several randomized algorithms we developed recently, with proven convergence guarantees and accelerated complexity. Our most recent paper, “LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression”, was presented as a Spotlight at the ICLR conference in April 2025.
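
As background for readers, here is a toy sketch (mine, not the LoCoDL algorithm from the talk) combining the two generic ingredients named in the abstract: local training, where workers run H local gradient steps between communications, and unbiased rand-k compression of the transmitted model updates. The synthetic least-squares problem, sizes, and step sizes are all illustrative.

# Toy sketch of local training + compressed communication (not LoCoDL itself).
import numpy as np

rng = np.random.default_rng(0)
n_workers, d, H, k, lr = 10, 20, 10, 10, 0.05

x_true = rng.normal(size=d)
A = [rng.normal(size=(30, d)) for _ in range(n_workers)]          # local data
b = [A_i @ x_true + 0.1 * rng.normal(size=30) for A_i in A]

def rand_k(v, k):
    """Unbiased rand-k compressor: keep k random coordinates, rescale by d/k."""
    idx = rng.choice(v.size, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (v.size / k)
    return out

x_server = np.zeros(d)
for _ in range(200):                          # communication rounds
    updates = []
    for i in range(n_workers):
        x = x_server.copy()
        for _ in range(H):                    # local training: no communication here
            x -= lr * A[i].T @ (A[i] @ x - b[i]) / len(b[i])
        updates.append(rand_k(x - x_server, k))   # send only a compressed update
    x_server += np.mean(updates, axis=0)      # server aggregates compressed updates

print("distance to x_true:", round(float(np.linalg.norm(x_server - x_true)), 4))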

[INMA] 2025-09-30 (14:00) : Toward Resilient Operation of Large-Scale Cyber-Physical Human Systems

At Euler building (room A.002)

Speaker : Ahmad Al-Dabbagh (University of British Columbia)
Abstract : Large-scale cyber-physical human systems are everywhere in society, for example in manufacturing and energy applications. These systems rely on a high degree of coupling between their cyber, physical, and human components, with operational information exchanged between the components over communication networks. This coupling and the communication networks introduce vulnerabilities that jeopardize the reliability and security of the systems. This seminar provides an overview of decision-making methods for the control and monitoring of large-scale cyber-physical human systems, based on control theory and artificial intelligence, with a focus on practical challenges related to cybersecurity, fault diagnosis, and alarm management.

[INGI] 2025-09-25 (13:00) : Learning from logical constraints with Lower- and Upper-Bound arithmetic circuits

At Shannon, Maxwell A.105

Speaker : Alexandre Dubray (ICTEAM)
Abstract : This work focuses on the field of neuro-symbolic learning (NeSy), which aims to bridge the gap between deep learning methods (neural) and the logical knowledge available in certain domains (symbolic). It has been accepted to the main track of IJCAI 2025, one of the world’s premier conferences on Artificial Intelligence. Standard deep learning struggles with logic-based reasoning. One solution is to encode known constraints as arithmetic circuits to enable gradient-based guidance of the parameters being learned. However, for logical knowledge that is too complex to be fully encoded, existing methods use a single lower-bound approximate circuit, often compromising the quality of the computed gradients. The authors introduce a dual-bound approach, using both a lower- and an upper-bound circuit, to tightly control the error in the gradient approximation. This improves the robustness and trustworthiness of constraint-based learning.
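
As a rough illustration of the dual-bound idea (a toy enumeration of mine, not the authors' circuit compilation), the snippet below brackets the probability that a small constraint holds under independent Bernoulli outputs by summing over only some satisfying assignments (lower bound) and excluding only some falsifying ones (upper bound); a constraint-based loss can then be controlled between the two bounds.

# Toy dual-bound illustration on an "exactly one of three" constraint.
import itertools
import numpy as np

p = np.array([0.7, 0.2, 0.4])           # network outputs P(x_i = 1)

def assignment_prob(a, p):
    """Probability of a full 0/1 assignment under independent Bernoullis."""
    return np.prod(np.where(np.array(a) == 1, p, 1 - p))

def exactly_one(a):
    return sum(a) == 1                   # toy logical constraint

all_assignments = list(itertools.product([0, 1], repeat=3))
models = [a for a in all_assignments if exactly_one(a)]
counter_models = [a for a in all_assignments if not exactly_one(a)]

exact = sum(assignment_prob(a, p) for a in models)
lower = sum(assignment_prob(a, p) for a in models[:2])                # partial model enumeration
upper = 1 - sum(assignment_prob(a, p) for a in counter_models[:3])    # partial counter-model enumeration
print(f"lower={lower:.3f} <= exact={exact:.3f} <= upper={upper:.3f}")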

[INMA] 2025-09-23 (14:00) : Newcomers seminars (PhDs)

At Euler building (room A.002)


Section 1: Random Embeddings for Deep Learning: Improving Scalability and Generalization

Speaker : Roy Makhlouf (PhD UCLouvain/INMA)
Abstract : Increasingly powerful processing units have led to a dramatic surge in the number of parameters of deep neural networks (DNNs), whose training comes with heavy computational costs. It is therefore essential to develop more scalable algorithms for training DNNs. This work explores the possible benefits of low-dimensional embeddings as a dimensionality reduction technique for DNN training. Instead of considering the entire parameter space, the idea is to restrict training to a lower-dimensional subspace, thereby significantly reducing the computational cost. This approach is motivated by prior numerical results, which suggest that training overparameterized neural networks within a subspace of very small dimension still achieves high test accuracy. Building on my Master's thesis results, we will first conduct a deeper investigation of random Gaussian embeddings for DNN training under the assumption that the training loss exhibits anisotropic variability, that is, it varies very slowly along some directions and possibly much faster along others. This setting often occurs in the training of overparameterized neural networks, where not all parameters influence the training loss equally. We will then consider more structured embeddings known as sparse embeddings, which are closer to techniques already used in deep learning. Finally, we will look at how random embeddings can help avoid sharp spurious minima, a class of minima expected to harm model generalization.
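
A minimal sketch of the subspace idea (a toy example of mine, not the speaker's code): fix a random Gaussian embedding P with d much smaller than D, parameterize theta = theta_0 + P z, and update only the d-dimensional vector z via the chain rule grad_z = P^T grad_theta. The model, sizes, and step size below are illustrative.

# Training restricted to a random d-dimensional subspace of a D-dimensional model.
import numpy as np

rng = np.random.default_rng(1)
D, d, n = 2000, 20, 100                    # D parameters, d-dim subspace, n data points

X = rng.normal(size=(n, D))                # overparameterized linear model as a stand-in for a DNN
y = rng.normal(size=n)

theta0 = np.zeros(D)
P = rng.normal(size=(D, d)) / np.sqrt(D)   # random Gaussian embedding, roughly unit-norm columns
z = np.zeros(d)

for _ in range(500):
    theta = theta0 + P @ z
    grad_theta = X.T @ (X @ theta - y) / n # gradient in the full parameter space
    z -= 0.1 * (P.T @ grad_theta)          # chain rule: only d numbers are updated per step

print("training loss:", 0.5 * float(np.mean((X @ (theta0 + P @ z) - y) ** 2)))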

Section 2: Analysis of Hidden Convexity in Neural Networks and Transformers: Toward More Efficient and Robust Deep Learning

Speaker : Adeline Colson (PhD UCLouvain/INMA)
Abstract : While nonconvexity is traditionally viewed as a challenge in optimization, many machine learning models exhibit a phenomenon known as benign nonconvexity, where nonconvex formulations are surprisingly tractable and often more scalable than their convex counterparts. The research project aims to understand and leverage this phenomenon to identify models that are both expressive and efficient to train, develop strategies to escape spurious local minima, and propose new formulations with benign nonconvexity across domains. Hidden convexity refers to a convex structure that is not apparent in the original nonconvex problem. By reformulating the problem using local optimality conditions (first and second order), one can analyze it via a convex program. This allows global properties, like optimality of local minima, to be inferred from local conditions.
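
A classical toy illustration of benign nonconvexity (my own example, not taken from the talk): the factorized loss (u*v - y)^2 is nonconvex in (u, v), but convex in the reparameterized variable w = u*v, and plain gradient descent from a generic initialization still reaches a global minimum.

# Gradient descent on a nonconvex factorized loss that hides a convex problem.
import numpy as np

y, lr = 3.0, 0.05
u, v = 0.5, -0.2                       # generic (non-symmetric) initialization

for _ in range(2000):
    r = u * v - y                      # residual in the convex variable w = u*v
    u, v = u - lr * 2 * r * v, v - lr * 2 * r * u   # simultaneous gradient step
print(f"u*v = {u*v:.4f} (target {y}), loss = {(u*v - y)**2:.2e}")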

Section 3: Scalable Control Design for Networked Systems: Coordination through Local Cooperation

Speaker : Jonas Hansson (Lund University, Sweden)
Abstract : In this talk, I will present a compositional framework for consensus and coordination, with applications to vehicular formations. The approach, called serial consensus, constructs high-order protocols by cascading simpler first-order dynamics, which makes stability transparent and enables scalable performance guarantees such as string stability. I will also discuss extensions to nonlinear settings, where the framework accommodates constraints such as saturation and time-varying topologies. Altogether, the results show how distributed controllers based only on local relative measurements can ensure robust and scalable coordination in large-scale networks.
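
A rough sketch of the cascade idea (my simplification, not the speaker's implementation): for second order, cascading two first-order consensus dynamics gives a closed loop of the form (d/dt I + L2)(d/dt I + L1) x = 0, simulated below with L1 = L2 = L on a 5-agent path graph; the graph, gains, and integration scheme are illustrative.

# Second-order consensus obtained by cascading two first-order consensus dynamics.
import numpy as np

N, dt, T = 5, 0.01, 4000
# Path-graph Laplacian: each agent measures relative positions of its neighbours.
L = np.diag([1, 2, 2, 2, 1]) - (np.eye(N, k=1) + np.eye(N, k=-1))

x = np.arange(N, dtype=float) * 2.0     # initial positions
v = np.zeros(N)                         # initial velocities

for _ in range(T):
    a = -2 * L @ v - L @ L @ x          # xddot = -(L1+L2) xdot - L2 L1 x with L1 = L2 = L
    x, v = x + dt * v, v + dt * a       # forward-Euler integration
print("positions after simulation:", np.round(x, 3))   # all agents near a common value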

[INMA] 2025-09-16 (14:00) : Safety in the Face of Uncertainty: When is a Scenario Decision-Making Algorithm Safe?

At Euler building (room A.002)

Speaker : Guillaume Berger (UCLouvain)
Abstract : Making risk-aware decisions in the face of uncertainty is a central problem in many applications of engineering, such as autonomous transportation, energy planning, and medical devices. Indeed, in these applications, failures or errors come at a high cost, so it is important to bound the probability of such events. Nevertheless, this problem is often very challenging in practice because the probability distribution of the uncertainty is often unknown to the decision maker, who must thus make decisions in a black-box way. Scenario decision-making is a powerful data-driven approach to risk-aware decision-making, which consists in drawing samples (called scenarios) of the uncertainty and making a decision based on these samples. A key question is when such scenario-based decisions have a low risk. In this talk, I will review the main techniques from the literature for providing such bounds on the risk, and will show that they are incomparable, in that none is more general (i.e., non-vacuous on a larger class of problems) or less conservative than the other. I will then present a more general bound, inspired by the connection between scenario decision-making algorithms, set operators, and VC theory. Finally, I will demonstrate the usefulness of the new bound on problems from scenario optimization.
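
As background, here is a minimal scenario-optimization example (not the new bound presented in the talk): with one decision variable, the classical guarantee for convex scenario programs states that the expected risk over the scenario draw is at most d/(N+1); the distribution and sample sizes below are illustrative.

# Scenario decision-making on a 1-D covering problem with the classical expected-risk bound.
import numpy as np

rng = np.random.default_rng(2)
N = 100

scenarios = rng.exponential(scale=1.0, size=N)   # distribution unknown to the decision maker
x = scenarios.max()                              # scenario decision: cover every drawn scenario

# Risk of the decision = probability that a fresh scenario exceeds x (estimated empirically).
fresh = rng.exponential(scale=1.0, size=200_000)
empirical_risk = float(np.mean(fresh > x))

# Classical guarantee: expected risk (over the scenario draw) <= d / (N + 1), here d = 1.
print(f"decision x = {x:.3f}")
print(f"empirical risk = {empirical_risk:.4f}, expected-risk bound = {1 / (N + 1):.4f}")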

INMA Contact Info

Mathematical Engineering (INMA)

L4.05.01
Avenue Georges Lemaître, 4

+32 10 47 80 36
secretaire-inma@uclouvain.be

Mon – Fri 9:00 AM – 5:00 PM
