Designing cryptosystems that can run on today's classical computers and are secure against quantum attacks.
What's the rush in developing post-quantum cryptosystems?
Big quantum computers probably won't exist at a commercial level for several years; however:
Harvesting attacks, SNDL (Store Now, Decrypt Later): attackers all over the world are already following this strategy, storing today's ciphertexts and public keys in the hope of decrypting them once quantum computers become common and commercially available.
Rewriting past timestamps: fork history and rewrite transactions that happened in the past, thereby making blockchains mutable.
Deploying new cryptography at scale takes about 10 years.
Lattices
Security from mild worst-case hardness assumptions: random instances of these cryptosystems are hard to break as long as there exist worst-case hard instances of the underlying lattice problem.
Solutions to some amazing cryptographic goals, such as Fully Homomorphic Encryption.
What is a Lattice?
A periodic grid of points in $\mathbb{R}^m$. (Formally: a full-rank discrete additive subgroup of $\mathbb{R}^m$: given a basis $B = \{b_1, \ldots, b_m\}$, the lattice is $\mathcal{L}(B) = \{\sum_i c_i b_i : c_i \in \mathbb{Z}\}$.)
Bases are not unique: a different set of vectors can form a different basis that constructs the very same lattice. For example, transforming one basis by an integer matrix with determinant $\pm 1$ surprisingly builds the same lattice!
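A quick numeric sanity check (a minimal sketch; the example matrices are arbitrary): two bases related by a unimodular transformation generate the same lattice.

```python
import numpy as np

# Two bases for the same 2-D lattice: B2 = B1 @ U with U unimodular
# (integer entries, determinant +-1), so they generate identical lattices.
B1 = np.array([[2, 1],
               [0, 3]])
U  = np.array([[1, 1],
               [0, 1]])      # unimodular: det(U) = 1
B2 = B1 @ U

assert round(abs(np.linalg.det(U))) == 1

# Each basis expresses the other with integer coefficients, so every point
# generated by one basis is generated by the other.
M = np.linalg.solve(B1, B2)          # solves B1 @ M = B2, giving M = U
assert np.allclose(M, np.round(M))   # integer matrix: B2's columns lie in L(B1)
```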
The central computational problem here is the (approximate) Shortest Vector Problem (SVP): given a basis of an $n$-dimensional lattice, find a nonzero lattice vector within a factor $\gamma$ of the shortest. For $\gamma = \mathrm{poly}(n)$, this appears to require $2^{\Omega(n)}$ time and space, even on a quantum computer.
Note that $\gamma$ here is the approximation factor; in real life one probably doesn't need to find the exact shortest vector, and can simply compute a vector within a factor $\gamma$ of it.
The approximation factor can also depend on the dimension we are dealing with: $\gamma = \mathrm{poly}(n)$ means $\gamma$ is a polynomial in the dimension $n$.
Lastly, as the approximation factor grows, the problem becomes easier, since the set of vectors that count as a solution grows with it.
Why doesn't Shor's algorithm work here?
Shor's algorithm specializes in exploiting explicit group structure: it finds hidden periodicities, which is exactly what factoring and discrete log reduce to. Given a mathematically defined lattice, one can write that lattice down explicitly, but Shor's algorithm offers no way to perform geometric computations within it, especially not in polynomial time.
SVP is precisely such a geometric problem inside the lattice, so period-finding gives no apparent handle on it.
Hence the complexity class of a lattice problem depends on the approximation factor $\gamma$. If $\gamma$ is a constant, the problem is NP-hard; for $\gamma$ larger than the square root of the dimension, it lies in $\mathsf{NP} \cap \mathsf{coNP}$.
Some Harder Problems: Foundations & Digital Signatures
A Hard Problem: Short Integer Solution [Ajtai'96]
$\mathbb{Z}_q^n$ = $n$-dimensional integer vectors modulo $q$.
GOAL: given uniformly random $a_1, \ldots, a_m \in \mathbb{Z}_q^n$, find a non-trivial $z = (z_1, \ldots, z_m) \in \{-1, 0, 1\}^m$ such that:

$$z_1 a_1 + z_2 a_2 + \cdots + z_m a_m = 0 \in \mathbb{Z}_q^n$$
Converting this into a Collision-Resistant Hash Function
Set $m > n \log_2 q$. Define the "compressing" function $f_A : \{0,1\}^m \to \mathbb{Z}_q^n$. Hence the function is simply defined as:

$$f_A(x) = Ax \bmod q, \qquad A = [a_1 \mid \cdots \mid a_m] \in \mathbb{Z}_q^{n \times m}$$

Collision: let $f_A(x) = f_A(x')$ where $x \neq x'$; then $A(x - x') = 0 \bmod q$, so $z = x - x'$ yields us a short solution.
Hence $z \in \{-1, 0, 1\}^m$, so the Euclidean norm of this $z$ is at most $\sqrt{m}$.
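Here is a minimal Python sketch with toy parameters (nothing here is a secure instantiation): it brute-forces a collision of $f_A$ and checks that it indeed yields a short SIS solution.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 2, 12, 13          # toy sizes chosen so a collision is easy to find

A = rng.integers(0, q, size=(n, m))   # uniformly random public matrix

def f_A(x):
    # The "compressing" hash: {0,1}^m -> Z_q^n, f_A(x) = A x mod q
    return tuple((A @ np.array(x)) % q)

# 2^12 = 4096 inputs map into only 13^2 = 169 outputs, so by pigeonhole
# a collision must exist; find one by brute force.
seen = {}
for x in itertools.product((0, 1), repeat=m):
    h = f_A(x)
    if h in seen:
        x1, x2 = np.array(seen[h]), np.array(x)
        break
    seen[h] = x

z = x1 - x2                                # collision => short SIS solution
assert np.all((A @ z) % q == 0)            # A z = 0 (mod q), z nonzero
assert np.linalg.norm(z) <= np.sqrt(m)     # entries in {-1, 0, 1}
print("short solution z =", z)
```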
Connecting the Short Integer Solution to Lattices
$A \in \mathbb{Z}_q^{n \times m}$ defines a "$q$-ary" lattice:

$$\mathcal{L}^{\perp}(A) = \{ z \in \mathbb{Z}^m : Az = 0 \bmod q \}$$

Here "$q$-ary" means $q\mathbb{Z}^m$ will be inside the lattice, so the lattice is periodic modulo $q$. The short solutions are exactly the lattice points lying in a small ball enclosed around the origin; the shortest of them indeed behaves as the Shortest Vector of this lattice.
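A tiny sanity check of the $q$-ary property (toy parameters, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, q = 4, 16, 17
A = rng.integers(0, q, size=(n, m))

# q * e_i is always in L_perp(A) = { z : A z = 0 mod q }, for every unit
# vector e_i -- this is why q*Z^m sits inside the lattice (periodicity mod q).
for i in range(m):
    z = np.zeros(m, dtype=int)
    z[i] = q
    assert np.all((A @ z) % q == 0)

# The set is also closed under addition: if A z1 = A z2 = 0 (mod q),
# then A (z1 + z2) = 0 (mod q), so it really is a lattice.
```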
Worst to Average Case Reduction [Ajtai'96…]
Finding "short" (𝕫) nonzero 𝕫 (for uniformly random 𝕟𝕞𝕢), then it implies that GapSVP is also solvable on any n-d lattice. –-(1)
This statement is the worst-case to average-case reduction at the heart of lattice cryptography: random instances are hard as long as the worst case is hard.
Here $\beta$ again acts as an approximation factor for the shortest vector: the more we relax $\beta$, the worse the approximation factor of the worst-case problem we end up solving. Hence, if $\beta$ is a polynomial in $n$, then $\beta\sqrt{n}$ is also a polynomial, and the problem would indeed still be considered hard.
How is the value of $\beta$ even deduced?
Honestly, it's reverse-engineered by experimentation; it's regarded as the smallest value that we can get away with.
If the domain for "short" vectors is $z \in \{-1, 0, 1\}^m$, then the value of $\|z\|$ in the worst case is $\sqrt{m}$, so $\beta = \sqrt{m}$.
Now we can simply plug this value of $\beta$ into eq. (1) and deduce the worst-case approximation factor, $\sqrt{nm}$.
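For concreteness, with illustrative numbers: take $n = 512$, $q \approx 2^{13}$, and $m = n \log_2 q = 6656$; then $\beta = \sqrt{6656} \approx 82$, and the worst-case approximation factor is $\beta\sqrt{n} \approx 82 \times 22.6 \approx 1850$, comfortably within $\mathrm{poly}(n)$.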
How is the self-reduction stated here different from the random self-reduction of discrete log?
In discrete log, random self-reduction can be defined in the following way.
If the discrete log of uniformly random instances can be computed efficiently in a group $G$, then the discrete log of any instance can also be computed efficiently in the same group $G$, because an arbitrary instance can be re-randomized into a uniform one.
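For intuition, a minimal sketch of this re-randomization in a toy group (the oracle is stubbed out with brute force; the group parameters are illustrative):

```python
import random

# Random self-reduction for discrete log in a fixed group G = <g> (toy sketch).
# dlog_oracle stands in for a solver of *random* instances (here: brute force).
p, g = 101, 2                      # toy group Z_101^* of order 100; 2 generates it

def dlog_oracle(y):
    return next(x for x in range(p - 1) if pow(g, x, p) == y)

def dlog_any(y):
    # Re-randomize: y * g^r is uniform in the group; solve that, then shift back.
    r = random.randrange(p - 1)
    x_rand = dlog_oracle((y * pow(g, r, p)) % p)
    return (x_rand - r) % (p - 1)

assert dlog_any(pow(g, 77, p)) == 77
```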
In lattice problems, however, these things work differently: in the reduction statement above, the two problems are not even over the same lattice or group; the reduction crosses from an arbitrary worst-case lattice to random $q$-ary ones.
If everything lived in one fixed, explicit group, as in discrete log, then a quantum computer would likely have an easy time exploiting that algebraic structure, as Shor's algorithm does, in polynomial time.
Applications of Lattice Theory in Digital Signatures
Application: [GentryPeikertVaikuntanathan'08]
Generate a uniform $A \in \mathbb{Z}_q^{n \times m}$ together with a secret trapdoor $T$ (a short basis of $\mathcal{L}^{\perp}(A)$).
As we discussed before, a digital signature scheme just needs a key pair: here these are basically the uniformly random matrix $A$ and the trapdoor $T$, respectively.
Hence, just to reiterate, $T$ will be my secret signing key and $A$ will be the verification key that I give out to the public.
Sign($\mu$): compute $u = H(\mu)$ and use $T$ to sample a short $z \in \mathbb{Z}^m$ s.t. $Az = u \in \mathbb{Z}_q^n$.
So to boil it down, we first use a hash function $H$ to map the message $\mu$ to a vector $u \in \mathbb{Z}_q^n$.
Then we use the secret key $T$ to sample a short solution $z$.
In simpler terms, we draw $z$ from a distribution, a discrete Gaussian over the solution set, chosen so that $z$ reveals nothing about the secret key, i.e., the trapdoor $T$.
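For a rough feel of the output distribution only (this is not the actual GPV sampler, which samples a discrete Gaussian over the coset $\{z : Az = u \bmod q\}$), a rounded continuous Gaussian looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)
m, s = 16, 3.0   # toy width; real schemes set s relative to the trapdoor quality

# Crude stand-in: a rounded continuous Gaussian over Z^m.
z = np.rint(rng.normal(0.0, s, size=m)).astype(int)
print(z)   # short vector; its distribution is independent of the trapdoor T
```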
Verify($\mu, z$): check that $Az = H(\mu) \bmod q$ and that $z$ is sufficiently short.
The verifier redoes the computation of $H(\mu)$ and checks its equality with $Az \bmod q$. In real life, $H$ is modeled as a random oracle and instantiated with a collision-resistant hash function, usually SHA-2 or SHA-3.
Security: forging a signature for a new message requires finding a short $z$ s.t. $Az = H(\mu) \bmod q$, which is exactly a SIS-type problem and is computationally hard.
There is still a chance of a security vulnerability: when Alice signs the same message multiple times.
In that case, for the same $u = H(\mu)$, multiple distinct $z$ values get leaked.
For every signing attempt, a fresh $z$ gets generated, so we can write down the equations:

$$Az_1 = u \bmod q$$
and
$$Az_2 = u \bmod q$$

therefore, rearranging, we get:

$$A(z_1 - z_2) = 0 \bmod q$$
This is a really harmful step, as it amounts to giving away short nonzero vectors of the lattice $\mathcal{L}^{\perp}(A)$, which over time can leak information about the trapdoor.
One way to mitigate this problem is to add a salt to the hash function, so that every time we sign we are essentially hashing a different value; see the sketch below.
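A minimal sketch of the salted hash-to-vector step; `hash_to_vector` and all parameters are hypothetical names for illustration, and the small modulo bias is ignored:

```python
import hashlib
import os
import numpy as np

n, q = 8, 257   # toy parameters, for illustration only

def hash_to_vector(message: bytes, salt: bytes) -> np.ndarray:
    """Hypothetical model of H: map (salt || message) to u in Z_q^n via SHA-256."""
    digest, counter = b"", 0
    while len(digest) < 4 * n:   # expand the hash output to enough bytes
        digest += hashlib.sha256(salt + message + counter.to_bytes(4, "big")).digest()
        counter += 1
    words = np.frombuffer(digest[: 4 * n], dtype=">u4")
    return words.astype(np.int64) % q   # small modulo bias, ignored in this sketch

# A fresh salt per signature means a fresh target u each time, so two signatures
# on the same message no longer share a u, and z1 - z2 leaks no lattice vector.
u1 = hash_to_vector(b"pay Bob 5 coins", os.urandom(16))
u2 = hash_to_vector(b"pay Bob 5 coins", os.urandom(16))
assert not np.array_equal(u1, u2)
```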
Learning With Errors (LWE) [Regev'05]
Fix dimensions $n, m$, a prime modulus $q$, and a Gaussian error distribution. Here $\sigma$ is the standard deviation of the error, and $q$, as we know, is the prime modulus.
The rate, also called the error rate, is denoted $\alpha = \sigma / q$.
The matrix $A \in \mathbb{Z}_q^{n \times m}$ consists of uniformly random vectors stacked sideways next to each other. Search: given $(A, b^t = s^t A + e^t)$, find the secret $s$; here $b$ is a linear combination of the rows of $A$ with $s$, with an extent of error in each of the inner products.
Decision: distinguish $(A, b)$ as above from uniformly random $(A, b)$ with no errors in their inner products.
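A minimal sketch of generating an LWE sample with toy parameters ($\sigma$ and the dimensions are illustrative; $q = 3329$ just happens to be Kyber's modulus):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q, sigma = 16, 32, 3329, 2.0   # toy sizes; sigma is illustrative

A = rng.integers(0, q, size=(n, m))                      # uniform public matrix
s = rng.integers(0, q, size=n)                           # secret vector
e = np.rint(rng.normal(0.0, sigma, size=m)).astype(int)  # small Gaussian error

b = (s @ A + e) % q      # search-LWE sample: b^t = s^t A + e^t

# Decision-LWE: distinguish (A, b) from (A, u) with u uniformly random.
u = rng.integers(0, q, size=m)
```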
Hardness of LWE
$(n/\alpha)$-approx worst-case lattice problems $\leq$ search-LWE $\leq$ decision-LWE $\leq$ crypto
If we have an algorithm that can solve search-LWE, then one can convert it into an algorithm that solves worst-case lattice problems with approximation factor $\approx n/\alpha$.
Again, $\alpha$ is the error rate here, the fraction of the modulus covered by the error; the factor $n/\alpha$ grows inversely with $\alpha$.
Hence, when the error rate narrows, the approximation factor worsens and the worst-case problem becomes more relaxed: a smaller error rate gives us a weaker guarantee from the worst-case lattice problem (e.g., $\alpha = 1/\sqrt{n}$ yields factor $n^{1.5}$).
Moreover, search-LWE and decision-LWE reduce to each other, so quantum hardness of one implies quantum hardness of the other. Thus, coming to the last piece of the puzzle: most post-quantum cryptosystems are built on the assumption that decision-LWE is hard to solve for a quantum computer.
A Learning With Errors problem can be called a dual to a Short Integer Solution problem. We use the word dual because the two problems live on dual lattices and share the same strong mathematical foundation.
Let $\mathcal{L}(A) = \{ y \in \mathbb{Z}^m : y = A^t s \bmod q \text{ for some } s \in \mathbb{Z}_q^n \}$.
Given $A$ and $b^t = s^t A + e^t$, find $s$.
Here the target $b$ is not exactly where we want it to be, on the lattice, but it's close enough in the neighbourhood of the lattice point $A^t s$, within the error bound.
In lattice terms we call this Bounded-Distance Decoding (BDD), where the given target is promised to be within some bounded distance of a lattice point. Our task is to find that point.
The difference between LWE and BDD is that in LWE the lattice is a random $q$-ary one and the offsets are generated from a Gaussian distribution, whereas in BDD the lattice, the inputs, as well as the targets could be anything and anywhere. Hence BDD is more like a worst-case scenario for LWE.
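Reusing the toy setting above, we can check that an LWE sample really is a BDD instance: the target $b$ lies within $\|e\|$ of the lattice point $A^t s$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q, sigma = 16, 32, 3329, 2.0
A = rng.integers(0, q, size=(n, m))
s = rng.integers(0, q, size=n)
e = np.rint(rng.normal(0.0, sigma, size=m)).astype(int)
b = (s @ A + e) % q

# b is a BDD target for the q-ary lattice L(A) = { A^t s mod q : s in Z_q^n }:
# its (centered) offset from the lattice point A^t s is exactly the error e.
diff = (b - s @ A) % q
centered = np.where(diff > q // 2, diff - q, diff)   # representatives in (-q/2, q/2]
assert np.array_equal(centered, e)
print("distance to the lattice point:", np.linalg.norm(centered))
```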
Regev's Theorem
Statement: if we have an algorithm that solves BDD on an arbitrary lattice $\mathcal{L}$, then there exists a quantum procedure that samples short, Gaussian-distributed points from the dual lattice $\mathcal{L}^*$. The width of that Gaussian is inversely related to the distance bound of the BDD instances on $\mathcal{L}$.
Dilithium
The design of Dilithium is based on the Fiat-Shamir with Aborts technique.
It uses signature compression techniques to send signatures that are about 50% smaller.
The latest Dilithium specification also compresses the public key, making it about 60% smaller.
The hardness of the underlying problem is based neither on plain Learning With Errors nor on its ring variant alone; it is based on a problem that interpolates between the two, hence the name Module-LWE (together with Module-SIS).
Principal design patterns of TrueID-Q, based on Dilithium
Easy to implement securely, as it does not involve any Gaussian sampling.
The total size of public key + signature is relatively small, although among the NIST PQC signature algorithms Falcon achieves smaller sizes.
Dilithium's parameters are chosen very conservatively, so that future developments can be accommodated easily.
Using Module-LWE/SIS allows us to work over the same small ring for all security levels, so the arithmetic needs to be optimized only once and for all.
Choosing such a Ring
Strategy: choose the smallest ring dimension $n$ such that it still gives the main advantages of Ring-LWE.
Dimension: $n = 256$ is large enough to get a large set of small-norm challenges.
A fully splitting prime $q$ (i.e., $q \equiv 1 \bmod 2n$) allows for NTT-based multiplications.
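A quick sketch that verifies the fully-splitting condition for Dilithium's modulus $q = 2^{23} - 2^{13} + 1$ and searches for a primitive $2n$-th root of unity (the search loop is illustrative; the spec fixes a specific root):

```python
q = 8380417                 # Dilithium's modulus: 2**23 - 2**13 + 1
n = 256

# q = 1 (mod 2n) is exactly the "fully splitting" condition for x^n + 1 mod q,
# which is what enables NTT-based multiplication in Z_q[x]/(x^n + 1).
assert q == 2**23 - 2**13 + 1 and (q - 1) % (2 * n) == 0

def primitive_2nth_root(q, n):
    # z = x^((q-1)/(2n)) has order dividing 2n; z^n == -1 forces order exactly 2n.
    e = (q - 1) // (2 * n)
    for x in range(2, q):
        z = pow(x, e, q)
        if pow(z, n, q) == q - 1:
            return z

zeta = primitive_2nth_root(q, n)
print(zeta)   # one valid primitive 512th root of unity; the spec fixes zeta = 1753
```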