Information Bounds and Convergence Rates for Side-Channel Security Evaluators
Volume: 2023 • Number: 3 • Pages: 522-569
Current side-channel evaluation methodologies exhibit a gap between inefficient tools offering strong theoretical guarantees and efficient tools only offering heuristic (sometimes case-specific) guarantees. Profiled attacks based on the empirical leakage distribution correspond to the first category. Bronchain et al. showed at Crypto 2019 that they allow bounding the worst-case security level of an implementation, but the bounds become loose as the leakage dimensionality increases. Template attacks and machine learning models are examples of the second category. In view of the increasing popularity of such parametric tools in the literature, a natural question is whether the information they can extract can be bounded. In this paper, we first show that a metric conjectured to be useful for this purpose, the hypothetical information, does not offer such a general bound. It only does so when the assumptions exploited by a parametric model match the true leakage distribution. We therefore introduce a new metric, the training information, that provides the guarantees that were conjectured for the hypothetical information for practically-relevant models. We next initiate a study of the convergence rates of profiled side-channel distinguishers which clarifies, to the best of our knowledge for the first time, the parameters that influence the complexity of profiling. On the one hand, the latter has practical consequences for evaluators, as it can guide them in choosing the appropriate modeling tool depending on the implementation (e.g., protected or not) and context (e.g., granting them access to the countermeasures' randomness or not). It also allows anticipating the number of measurements needed to guarantee a sufficient model quality. On the other hand, our results connect and exhibit differences between side-channel analysis and statistical learning theory.
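The abstract's central claim — that the hypothetical information only bounds a parametric model's extractable information when the model's assumptions match the true leakage distribution — can be illustrated numerically. The following is a minimal sketch under our own assumptions (a toy univariate two-class Gaussian leakage; this is not the paper's experimental setup): it contrasts a Monte Carlo estimate of the perceived information (expectation under the true leakage distribution) with the hypothetical information (the same formula, but with samples drawn from the model's own distribution). With a matched model the two coincide; with a mis-specified, overconfident model the hypothetical information overstates what the model actually extracts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy true leakage (our assumption): two equiprobable secrets x in {0, 1},
# with l | x ~ N(mu_x, sigma_true^2).
mus_true = np.array([0.0, 1.0])
sigma_true = 1.0

def model_post(l, mus, sigma):
    """Model posterior m(x | l) for each class, uniform prior (Gaussian templates)."""
    lik = np.exp(-0.5 * ((l[:, None] - mus[None, :]) / sigma) ** 2)
    return lik / lik.sum(axis=1, keepdims=True)

def perceived_information(sigma_model, n=200_000):
    """PI: H(X) + E[log2 m(x|l)] with l drawn from the TRUE distribution."""
    x = rng.integers(0, 2, n)
    l = rng.normal(mus_true[x], sigma_true)
    post = model_post(l, mus_true, sigma_model)
    return 1.0 + np.mean(np.log2(post[np.arange(n), x]))  # H(X) = 1 bit

def hypothetical_information(sigma_model, n=200_000):
    """HI: same formula, but l drawn from the MODEL's own leakage distribution."""
    x = rng.integers(0, 2, n)
    l = rng.normal(mus_true[x], sigma_model)
    post = model_post(l, mus_true, sigma_model)
    return 1.0 + np.mean(np.log2(post[np.arange(n), x]))

# Matched model: PI and HI agree (both estimate the model's true information).
print(perceived_information(1.0), hypothetical_information(1.0))
# Overconfident model (sigma too small): HI is large, but PI — what the
# model really extracts against the true leakage — is much lower.
print(perceived_information(0.3), hypothetical_information(0.3))
```

The sketch only illustrates the gap the abstract points to; the paper's training information metric and the convergence-rate analysis are not reproduced here.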
