Description
The hardness of the decoding problem and of its generalization, the learning with errors (LWE) problem, lies at the heart of the security of the post-quantum code-based scheme HQC and the lattice-based scheme Kyber: Kyber is now a NIST standard, and HQC has been selected by NIST for standardization. These problems have been actively studied for decades, and the complexity of the state-of-the-art algorithms for solving them is crucial to choosing secure parameters for those schemes. Dual attacks, introduced some twenty years ago, are a family of attacks against these problems. They were at first uncompetitive with other techniques, but they have been massively improved in recent years and now significantly outperform the state of the art in some regimes.
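For readers who want to see the two problems concretely, here is a minimal sketch (my own illustration, not part of the talk) that generates toy instances of syndrome decoding over F_2 and of LWE over Z_q; all parameter names and values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def decoding_instance(n=24, k=12, w=3):
    """Toy syndrome decoding instance: given a random parity-check matrix H
    and the syndrome s = H e (mod 2) of a secret weight-w error e, the
    problem is to recover e (or any weight-w solution)."""
    H = rng.integers(0, 2, size=(n - k, n))
    e = np.zeros(n, dtype=int)
    e[rng.choice(n, size=w, replace=False)] = 1
    s = H @ e % 2
    return H, s, e

def lwe_instance(n=16, m=32, q=97, sigma=1.0):
    """Toy LWE instance: given A and b = A s + e (mod q) with a small
    Gaussian-ish error e, the problem is to recover the secret s."""
    A = rng.integers(0, q, size=(m, n))
    s = rng.integers(0, q, size=n)
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
    b = (A @ s + e) % q
    return A, b, s

H, s, e = decoding_instance()
print("syndrome consistent:", np.array_equal(H @ e % 2, s))
```

The structural parallel is visible: decoding is the q = 2 case with an error of fixed Hamming weight, while LWE uses a larger modulus and an error that is small in Euclidean norm.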
In the first part of the talk, I will quickly recall these recent advances, with a focus on two of our code-based works from 2022 and 2023. These attacks are tricky to analyze, and traditionally some independence assumptions were used to make their analysis tractable. In particular, the analyses of two recent lattice-based attacks by Guo & Johansson (2021) and MATZOV (2022), which both claimed to diminish the security of Kyber, relied critically on such assumptions. Problematically, in the lattice setting, Ducas & Pulles showed in 2023 that these assumptions were flawed, casting serious doubt on the previous claims. In the code-based setting, these assumptions cannot be used either. The matter is now settled in both worlds, as tools have been developed to carry out the analysis without those assumptions. On the code side, in 2023 we introduced an analysis technique based on an experimentally verified model in which the weight enumerator of a random linear code behaves like a Poisson random variable with the appropriate expected value. On the lattice side, in 2025 we introduced a new attack, analyzed it with an experimentally verified model, and ultimately showed that it does dent the security of Kyber. In either case, these new analyses are quite involved. In the second part of this talk, we will recall the rough idea behind these techniques and introduce a novel algorithmic tweak that allows us to prove these attacks without any assumptions whatsoever. This comes at the cost of a small polynomial overhead but also greatly simplifies the analysis. I will focus on the code-based setting for simplicity but will briefly compare it with the lattice world.
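As a side note, the Poisson model mentioned above is easy to probe experimentally. The sketch below (my own illustration, not the speaker's code) samples random [n, k] binary codes, counts codewords of a fixed weight w, and compares the empirical mean and variance with the model's prediction that this count is Poisson with mean binom(n, w) · 2^(k-n); the parameters are illustrative.

```python
import numpy as np
from itertools import product
from math import comb

rng = np.random.default_rng(1)
n, k, w, trials = 20, 10, 4, 200

# Model prediction: the number N_w of weight-w codewords of a random
# [n, k] binary linear code is ~ Poisson(lam) with lam = C(n, w) * 2^(k-n).
lam = comb(n, w) * 2.0 ** (k - n)

counts = []
for _ in range(trials):
    G = rng.integers(0, 2, size=(k, n))          # random generator matrix
    msgs = np.array(list(product((0, 1), repeat=k)))
    codewords = msgs @ G % 2                     # all 2^k codewords
    counts.append(int(np.count_nonzero(codewords.sum(axis=1) == w)))

counts = np.array(counts)
print(f"Poisson mean (model):  {lam:.3f}")
print(f"empirical mean:        {counts.mean():.3f}")
print(f"empirical variance:    {counts.var():.3f}  (Poisson: variance = mean)")
```

Under the model, the empirical mean and variance should roughly agree with each other and with lam; this is the kind of experimentally verified behaviour on which the 2023 code-based analysis rests.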
This talk is based on joint works with Kévin Carrier, Thomas Debris-Alazard, Yixin Shen and Jean-Pierre Tillich.
Practical information
Next sessions
Verification of Rust Cryptographic Implementations with Aeneas
Speaker: Aymeric Fromherz - Inria
From secure communications to online banking, cryptography is the cornerstone of most modern secure applications. Unfortunately, cryptographic design and implementation are notoriously error-prone, with a long history of design flaws, implementation bugs, and high-profile attacks. To address this issue, several projects have proposed the use of formal verification techniques to statically ensure the […]
On the average hardness of SIVP for module lattices of fixed rank
Speaker: Radu Toma - Sorbonne Université
In joint work with Koen de Boer, Aurel Page, and Benjamin Wesolowski, we study the hardness of the approximate Shortest Independent Vectors Problem (SIVP) for random module lattices. Here we use a natural notion of randomness, as originally defined by Siegel through Haar measures. By proving a reduction, we show it is essentially as hard as the problem for arbitrary instances. While this was […]
Endomorphisms via Splittings
Speaker: Min-Yi Shen - No Affiliation
One of the fundamental hardness assumptions underlying isogeny-based cryptography is the problem of finding a non-trivial endomorphism of a given supersingular elliptic curve. In this talk, we show that this problem is related to that of finding a splitting of a principally polarised superspecial abelian surface. In particular, we provide formal security reductions and a proof-of-concept […]