Seminar Details

2024-12-03 (14h) : MadNLP: a GPU-ready interior-point solver

At Euler building (room A.002)

Organized by Mathematical Engineering

Speaker : François Pacaud (Mines Paris - PSL)
Abstract : The interior-point method (IPM) has become a standard algorithm for solving large-scale optimization problems. Traditionally, IPM solves a sequence of symmetric indefinite linear systems, the Karush-Kuhn-Tucker (KKT) systems, which become increasingly ill-conditioned as the iterates approach the solution. Consequently, solving a KKT system with traditional sparse factorization methods requires numerical pivoting, which makes parallelization difficult. We present two novel interior-point methods that circumvent this issue. The first intervenes at the level of the linear algebra: it condenses the IPM's KKT system into a symmetric positive-definite matrix and solves it with a Cholesky factorization, which is stable without pivoting. Although condensed KKT systems are more prone to ill-conditioning than the original ones, they exhibit a structured ill-conditioning that mitigates the loss of accuracy. The second method combines IPM with an augmented Lagrangian method (Auglag-IPM). The augmented Lagrangian term adds an implicit dual regularization to the problem; as a result, the KKT systems become symmetric quasi-definite (SQD) matrices, which are also factorizable without pivoting. Moreover, Auglag-IPM can solve degenerate optimization problems, in particular nonlinear programs with complementarity constraints. Both methods have been implemented on the GPU in MadNLP.jl, an optimization solver interfaced with the NVIDIA sparse linear solver cuDSS and with the GPU-accelerated modeler ExaModels.jl. Our experiments on large-scale optimal power flow (OPF) instances show that GPUs can attain up to a tenfold speedup over CPUs. In addition, Auglag-IPM solves difficult optimization problems that a classical IPM algorithm cannot.
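The condensation idea from the abstract can be sketched on a dense toy system. This is not MadNLP's actual formulation, just a minimal illustration with made-up data: eliminating the dual variables from a regularized KKT system leaves a symmetric positive-definite matrix that a pivot-free Cholesky factorization can handle.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n, m = 6, 3                      # toy sizes: n primal variables, m constraints
B = rng.standard_normal((n, n))
H = B @ B.T + n * np.eye(n)      # stand-in for Hessian + barrier term (SPD)
A = rng.standard_normal((m, n))  # stand-in for the constraint Jacobian
delta = 1e-2                     # dual regularization parameter (invented value)
r1 = rng.standard_normal(n)
r2 = rng.standard_normal(m)

# Full symmetric indefinite KKT system; factorizing it would need pivoting.
K = np.block([[H, A.T], [A, -delta * np.eye(m)]])
reference = np.linalg.solve(K, np.concatenate([r1, r2]))

# Condensed system: eliminate dy = (A dx - r2) / delta from the second
# block row, leaving (H + A^T A / delta) dx = r1 + A^T r2 / delta, SPD.
Kc = H + (A.T @ A) / delta
chol = cho_factor(Kc)                 # Cholesky: stable without pivoting
dx = cho_solve(chol, r1 + A.T @ r2 / delta)
dy = (A @ dx - r2) / delta

# The condensed solve recovers the solution of the full indefinite system.
assert np.allclose(np.concatenate([dx, dy]), reference)
```

The trade-off the abstract mentions shows up here: `Kc` has a worse condition number than `K` as `delta` shrinks, but its ill-conditioning is structured, which is what keeps the computed step accurate in practice.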
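The second claim, that SQD matrices factor without pivoting, can also be checked on a small example. Below is a naive unpivoted LDLᵀ (a didactic sketch, not the cuDSS implementation; the block sizes and regularization values are invented): applied to a quasi-definite matrix, it succeeds with all pivots nonzero, and the pivot signs match the block structure.

```python
import numpy as np

def ldlt_no_pivot(M):
    """Unpivoted LDL^T factorization: returns L (unit lower) and d with
    M = L @ diag(d) @ L.T. Raises if a zero pivot is encountered."""
    n = M.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = M[j, j] - (L[j, :j] ** 2) @ d[:j]
        if d[j] == 0.0:
            raise ZeroDivisionError("zero pivot: matrix not strongly factorizable")
        for i in range(j + 1, n):
            L[i, j] = (M[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

rng = np.random.default_rng(1)
n, m = 5, 3
B = rng.standard_normal((n, n))
H = B @ B.T + np.eye(n)              # SPD (1,1) block
A = rng.standard_normal((m, n))
rho, delta = 1e-2, 1e-2              # primal/dual regularizations (toy values)

# Symmetric quasi-definite matrix: SPD (1,1) block, negative-definite (2,2)
# block. Such matrices admit LDL^T with no pivoting (strong factorizability).
K = np.block([[H + rho * np.eye(n), A.T],
              [A, -delta * np.eye(m)]])

L, d = ldlt_no_pivot(K)
assert np.allclose(L @ np.diag(d) @ L.T, K)
# First n pivots positive, last m negative: inertia matches the block signs.
assert (d[:n] > 0).all() and (d[n:] < 0).all()
```

This is exactly the property the augmented Lagrangian term buys in Auglag-IPM: the implicit dual regularization turns the indefinite KKT matrix into an SQD one, so a fixed, pivot-free elimination order works, which is what makes the factorization amenable to the GPU.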