
Seminar Details


2025-09-23 (14:00) : Newcomers seminars (PhDs)

At Euler building (room a.002)

Organized by Mathematical Engineering


Section 1: Random Embeddings for Deep Learning: Improving Scalability and Generalization

Speaker : Roy Makhlouf (PhD UCLouvain/INMA)
Abstract : Increasingly powerful processing units have led to a dramatic surge in the number of parameters of deep neural networks (DNNs), whose training comes with heavy computational costs. It is therefore essential to develop more scalable algorithms for training DNNs. This work explores the possible benefits of low-dimensional embeddings as a dimensionality reduction technique for DNN training. Instead of considering the entire parameter space, the idea is to restrict training to a lower-dimensional subspace, thereby significantly reducing the computational cost. This approach is motivated by prior numerical results, which suggest that training overparameterized neural networks within a subspace of very small dimension still achieves high test accuracy. Building on my Master's thesis results, we will first conduct a deeper investigation of random Gaussian embeddings for DNN training under the assumption that the training loss exhibits anisotropic variability, that is, that it varies very slowly along some directions and possibly much faster along others. This setting often occurs in the training of overparameterized neural networks, where not all parameters influence the training loss equally. We will then consider more structured embeddings known as sparse embeddings, which are closer to techniques already used in deep learning. Finally, we will look at how random embeddings can help avoid sharp spurious minima, a class of minima expected to harm model generalization.
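
The subspace idea can be sketched in a few lines: fix a random Gaussian matrix P, parameterize the model as theta = theta0 + P z, and run gradient descent on z only. The toy least-squares example below is an illustration of that general idea under assumed dimensions, not the speaker's method, code, or results.

import numpy as np

# Sketch: gradient descent restricted to a random Gaussian subspace.
# The full parameters theta live in R^D, but only z in R^d is trained,
# via theta = theta0 + P z. All dimensions here are illustrative.

rng = np.random.default_rng(0)

D, d, n = 2000, 100, 200                        # full dim, subspace dim, samples
X = rng.standard_normal((n, D))                 # synthetic inputs
y = X @ rng.standard_normal(D) + 0.1 * rng.standard_normal(n)  # synthetic targets

P = rng.standard_normal((D, d)) / np.sqrt(D)    # fixed random Gaussian embedding
theta0 = np.zeros(D)                            # frozen initial parameters
z = np.zeros(d)                                 # low-dimensional trainable variable

def loss(z):
    return 0.5 * np.mean((X @ (theta0 + P @ z) - y) ** 2)

print(f"initial loss: {loss(z):.2f}")
lr = 2e-2
for _ in range(2000):
    theta = theta0 + P @ z                      # map back to the full space
    grad_theta = X.T @ (X @ theta - y) / n      # gradient of the full-space loss
    z -= lr * (P.T @ grad_theta)                # chain rule: only d numbers updated
print(f"final loss: {loss(z):.2f}")

Only the d-dimensional vector z is ever updated, which is where the computational saving comes from; how well the subspace captures the loss landscape is exactly the question the talk investigates.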

Section 2: Analysis of Hidden Convexity in Neural Networks and Transformers: Toward More Efficient and Robust Deep Learning

Speaker : Adeline Colson (PhD UCLouvain/INMA)
Abstract : While nonconvexity is traditionally viewed as a challenge in optimization, many machine learning models exhibit a phenomenon known as benign nonconvexity, where nonconvex formulations are surprisingly tractable and often more scalable than their convex counterparts. This research project aims to understand and leverage this phenomenon: to identify models that are both expressive and efficient to train, to develop strategies for escaping spurious local minima, and to propose new formulations with benign nonconvexity across domains. Hidden convexity refers to a convex structure that is not apparent in the original nonconvex problem. By reformulating the problem using local optimality conditions (first and second order), one can analyze it via a convex program. This allows global properties, such as the global optimality of local minima, to be inferred from local conditions.
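
As a generic textbook-style illustration of hidden convexity (not the project's formulation), consider f(u, v) = (u v - 1)^2: it is nonconvex in (u, v), yet the substitution w = u v turns it into the convex problem (w - 1)^2, and its only non-global critical point is a strict saddle, so gradient descent from random starts reaches a global minimum.

import numpy as np

# Toy example of hidden convexity / benign nonconvexity (illustrative only):
# f(u, v) = (u*v - 1)^2 is nonconvex in (u, v), but with w = u*v it becomes
# the convex function (w - 1)^2. Its only non-global critical point, (0, 0),
# is a strict saddle, so gradient descent from random starts finds a global minimum.

rng = np.random.default_rng(1)

def loss(u, v):
    return (u * v - 1.0) ** 2

def grad(u, v):
    r = u * v - 1.0
    return 2 * r * v, 2 * r * u                 # partial derivatives w.r.t. u and v

lr = 0.02
for trial in range(5):
    u, v = rng.standard_normal(2)               # random initialization
    for _ in range(5000):
        gu, gv = grad(u, v)
        u, v = u - lr * gu, v - lr * gv
    print(f"start {trial}: final loss = {loss(u, v):.2e}, u*v = {u * v:.3f}")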

Section 3: Scalable Control Design for Networked Systems: Coordination through Local Cooperation

Speaker : Jonas Hansson (Lund University, Sweden)
Abstract : In this talk, I will present a compositional framework for consensus and coordination, with applications to vehicular formations. The approach, called serial consensus, constructs high-order protocols by cascading simpler first-order dynamics, which makes stability transparent and enables scalable performance guarantees such as string stability. I will also discuss extensions to nonlinear settings, where the framework accommodates constraints such as saturation and time-varying topologies. Altogether, the results show how distributed controllers based only on local relative measurements can ensure robust and scalable coordination in large-scale networks.
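
To make the cascading idea concrete, the sketch below (an assumed path-graph platoon of double-integrator agents, not the speaker's controller or code) simulates a closed loop that factors as two first-order consensus dynamics in series, (d/dt I + L)(d/dt I + L) x = 0.

import numpy as np

# Sketch of a cascaded ("serial") consensus-style closed loop for double-integrator
# agents x_i'' = u_i on a path graph (illustrative only). Choosing u = -2 L x' - L (L x)
# makes the closed loop factor as (d/dt I + L)(d/dt I + L) x = 0, i.e. two simple
# first-order consensus dynamics in series.

N = 10                                           # number of agents (a small platoon)
L = np.zeros((N, N))                             # Laplacian of the path graph
for i in range(N - 1):                           # add edge (i, i+1)
    L[i, i] += 1.0
    L[i + 1, i + 1] += 1.0
    L[i, i + 1] -= 1.0
    L[i + 1, i] -= 1.0

rng = np.random.default_rng(0)
x = rng.standard_normal(N)                       # initial positions
v = np.zeros(N)                                  # initial velocities

dt = 0.01
for _ in range(10000):                           # forward-Euler simulation, 100 time units
    # L @ (L @ x) involves two-hop relative positions; how such cascaded terms are
    # realized with only local relative measurements is part of the framework itself.
    u = -2.0 * L @ v - L @ (L @ x)
    x, v = x + dt * v, v + dt * u

print("position spread after simulation:", np.ptp(x))   # close to 0: consensus reached

Because each factor in the cascade is an ordinary first-order consensus dynamic, stability of the overall high-order loop can be read off factor by factor, which is the transparency the abstract refers to.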