<!-- <span style="color: #0; font-family: Times New Roman; font-size: 1.3em;">Threshold Signature Schemes Through Shamir Secret Sharing
</span>
===
-->
<style>
.markdown-body code,
.markdown-body tt {
color: #eee;
background-color: rgba(20, 230, 230, 0.36);
}
/* a,
.open-files-container li.selected a {
color: #5EB7E0;
} */
</style>
In this note, we take a look at a recent [survey paper](https://eprint.iacr.org/2020/1390.pdf) on *Threshold Signature Schemes* (TSSs) in the context of ECDSA (a signature scheme we discussed earlier in this [note](/@kullervo/ECDSACircuits)) and talk about a particular way of sharing a secret value $s$ among different parties, called *Shamir Secret Sharing* (we will refer to it as SSS), that relies on Lagrange polynomials.
Since we discussed polynomial representation/decomposition based on Lagrange as [part](/@kullervo/commitmentUnivariatePoly) of our series on polynomial commitments, SSS is easy to discuss as it uses much of the same foundation. For a more general discussion, we highly recommend the original survey. Much of our discussion will be simplified in comparison.
---
<span style="color: #0; font-family: Times New Roman; font-size: 1.1em;">Secret Sharing Schemes
</span>
---
A $(t, n)$-secret sharing scheme splits a (secret) value/mathematical object $s$ into $n$ different shares to be distributed to multiple parties, such that $t + 1 \leq n$ distinct shares are necessary and sufficient to reconstruct $s$.
This means that there is a certain amount of *redundancy of information* in the shares (a factor of at least $n/(t+1)$ overall), such that accumulating $t+1$ of them is enough to learn all the information needed to determine $s$. Further, this redundancy must be distributed across the shares in such a way that *any* $t+1$ of them are sufficient for reconstructing $s$.
---
<span style="color: #0; font-family: Times New Roman; font-size: 1.1em;">Shamir Secret Sharing
</span>
---
One can imagine that mathematical objects with an infinite number of (easily accessible) representations, such as polynomials, would be useful for building in such redundancy.
In Shamir Secret Sharing, the mathematical object whose information is to be shared among multiple parties is a univariate polynomial $f(x)$. If the secret to be shared $s$ is a numeric value instead of the polynomial itself (which is usually the case), we can simply encode the secret value as part of this polynomial, e.g. as the evaluation of $f$ at the origin, $f(0) = s$. There are infinitely many polynomials with $f(0) = s$, however, so the choice of $f$ will involve injecting some randomness.
The only other concern is the *maximal degree* of the polynomial to be considered. As we can imagine, for a $(t, n)$-secret sharing scheme (which requires $t+1$ shares to reveal the secret), the sensible choice for the maximal degree is $t$, as knowing the evaluations of such a polynomial at $t+1$ distinct points is enough to reconstruct the entire polynomial exactly. Since the secret is baked into the polynomial at the origin, knowing the polynomial is enough to discover the secret value. It is very intuitive.
---
Now, let us describe it more carefully. Assume that we wish to share the secret value $s$ in some way to $n$ participants such that any $t+1$ of them are enough to recover $s$.
There are various ways one can randomly choose a polynomial $f(x) = s + a_1 x^1 + a_2 x^2+ ... + a_t x^t$ of maximal degree $t$ such that it passes through $s$ at the origin. The simplest way would be to randomly sample the coefficient vector $f_a = [a_1, ..., a_t]$ from a finite field.
As we have discussed [before](/@kullervo/commitmentUnivariatePoly), this same polynomial can instead be represented through its evaluation set $f_{\bar{x}} = \{(x_1, f(x_1)), (x_2, f(x_2)), ..., (x_n, f(x_n))\}$, as long as all $x_i \neq 0$ and it contains at least $t+1$ evaluations at distinct points (any elements beyond $t+1$ are redundant). In fact, any $t+1$ element subset of $f_{\bar{x}}$ is sufficient to reconstruct $f(x)$ through Lagrange interpolation:
\begin{align}
f(x) &= \sum_{i=1}^{t+1} l_{x_i}(x)f(x_i)\\
l_{x_i}(x) &= \prod_{j=1, j\neq i}^{t+1} \frac{x-x_j}{x_i-x_j}
\end{align}
A distributor simply gives out a single element of the evaluation set $(x_i, f(x_i))$ to each party as a share. It is clear that $t+1$ shares would allow discovery of $f(x)$ and therefore of its evaluation at the origin, $f(0)=s$. We can refer to this procedure as $Reconst$,
\begin{align}
s=Reconst((x_1, f(x_1)), (x_2, f(x_2)), ..., (x_{t+1}, f(x_{t+1})))
\end{align}
This concludes the description of the secret sharing scheme.
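To make this concrete, here is a minimal Python sketch of handing out shares and running $Reconst$ via Lagrange interpolation at the origin. This is not taken from the survey; the prime $P$, the evaluation points $x_i = 1, ..., n$, and the helper names are our own illustrative choices.
```python
# Minimal sketch of (t, n)-Shamir secret sharing over a prime field (Python 3.8+).
# The prime P, the points x_i = 1..n, and the function names are illustrative choices.
import random

P = 2**61 - 1  # a Mersenne prime; any sufficiently large prime works

def make_shares(s, t, n):
    """Sample f(x) = s + a_1 x + ... + a_t x^t with random a_i and return the shares (x_i, f(x_i))."""
    coeffs = [s] + [random.randrange(P) for _ in range(t)]
    f = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Reconst(...): Lagrange-interpolate the given shares at x = 0 to recover the secret."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P           # (0 - x_j)
                den = den * (xi - xj) % P     # (x_i - x_j)
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

t, n = 2, 5
shares = make_shares(1234, t, n)
assert reconstruct(shares[:t + 1]) == 1234   # any t+1 shares recover s
assert reconstruct(shares[2:t + 3]) == 1234  # a different subset works too
```
Interpolating directly at $x = 0$ avoids ever reconstructing the full coefficient vector of $f$; only the secret is recovered.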
---
<span style="color: #0; font-family: Times New Roman; font-size: 1.1em;">Properties
</span>
---
It is important for SSS secret reconstruction $Reconst$ to be additively and multiplicatively homomorphic for it to be usable in signature schemes such as ECDSA (see [1]), as the signature algorithm of ECDSA can be thought of as a circuit with addition and multiplication gates of a certain depth. Being able to execute such circuits purely through computations on individual shares, without directly computing/sharing intermediary results of ECDSA, would make threshold signatures viable.
Additive homomorphism is not difficult to satisfy for many secret sharing schemes. Multiplicative homomorphism, however, is harder to accomplish and in the case of Shamir requires more shares to be available than $t+1$, specifically $2t+1$ in the naive approach. With further communication between parties, this can be reduced back to $t+1$, as we will see. Let us show how we can prove these properties in the case of Shamir.
---
**Additive homomorphism:**
Suppose there are two secrets $s_1$ and $s_2$ that are shared across $n$ parties as before, with $t+1$ shares needed for reconstructing each. For the $i$th party, the shares are $(x_i, f_1(x_i))$ and $(x_i, f_2(x_i))$, which means that we are assuming the evaluation set $\{x_{i=1:n}\}$ is the same for both polynomials $f_1, f_2$ that the secrets are embedded in.
Now, we wish to show that if $s_3 = \alpha s_1 + \beta s_2$, it is possible to reconstruct this new secret $s_3$ from any $t+1$ subset of the original shares. This is very straightforward for individual parties. They can simply compute a share for the new secret $s_3$ as $(x_i, \alpha f_1(x_i) + \beta f_2(x_i))$ from their existing shares for secrets $s_1$ and $s_2$. Using the linearity of Lagrange interpolation at the same set of evaluation points $x_i$,
\begin{align}
s_3=& \hspace{0.1in} Reconst((x_1, \alpha f_1(x_1) + \beta f_2(x_1)), ..., (x_{t+1}, \alpha f_1(x_{t+1}) + \beta f_2(x_{t+1}))) \\
=& \hspace{0.1in} \alpha \cdot Reconst((x_1, f_1(x_1)), ..., (x_{t+1}, f_1(x_{t+1}))) + \\
& \hspace{0.1in} \beta \cdot Reconst((x_1, f_2(x_1)), ..., (x_{t+1}, f_2(x_{t+1})))\\
=& \hspace{0.1in} \alpha s_1 + \beta s_2
\end{align}
Taking the same linear combination of two polynomials $f_1(x)$ and $f_2(x)$ that pass through $s_1$ and $s_2$ at the origin simply results in a polynomial $f_3(x) = \alpha f_1(x) + \beta f_2(x)$ (which we can reconstruct through $t+1$ shares of $s_3$) that passes through $s_3 = \alpha s_1+ \beta s_2$ at the origin.
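As a quick sanity check, here is a small sketch of the additive case, using the same illustrative field, points, and helper names as the earlier sketch (none of which come from the survey): each party combines its two shares locally, and reconstruction of any $t+1$ combined shares returns $\alpha s_1 + \beta s_2$.
```python
# Sketch of additive homomorphism: local combination of shares, then reconstruction.
# Same illustrative setup (prime field P, points x_i = 1..n) as the previous sketch.
import random

P = 2**61 - 1

def make_shares(s, t, n):
    coeffs = [s] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P) for x in range(1, n + 1)]

def reconstruct(shares):
    out = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num, den = num * -xj % P, den * (xi - xj) % P
        out = (out + yi * num * pow(den, -1, P)) % P
    return out

t, n = 2, 5
s1, s2, alpha, beta = 111, 222, 3, 7
sh1, sh2 = make_shares(s1, t, n), make_shares(s2, t, n)   # same x_i for both secrets

# each party i locally computes its share of s3 = alpha*s1 + beta*s2
sh3 = [(x, (alpha * y1 + beta * y2) % P) for (x, y1), (_, y2) in zip(sh1, sh2)]
assert reconstruct(sh3[:t + 1]) == (alpha * s1 + beta * s2) % P
```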
---
**Multiplicative homomorphism:**
Multiplicative homomorphism is a bit more challenging. We have the same setup as before, but this time the new secret is computed as $s_3 = s_1 \cdot s_2$. Consider a polynomial $f_3(x) = f_1(x)\cdot f_2(x)$ where $f_1$ and $f_2$ are defined as before. It is clear that $f_3(0) = f_1(0)\cdot f_2(0) = s_1\cdot s_2 = s_3$, which means that this polynomial passes through $s_3$ at the origin.
<!-- If we multiply two polynomials $f_1(x)$ and $f_2(x)$ of maximal degree $t$ that pass through $s_1$ and $s_2$ at the origin,
\begin{align}
f_1(x) &= s_1 + a_1x^1 + a_2x^2 + ... a_tx^t = s_1 + f_a(x)\\
f_2(x) &= s_2 + b_1x^1 + b_2x^2 + ... b_tx^t = s_2 + f_b(x)
\end{align} we get $f_3(x) = f_1(x) \cdot f_2(x) = s_1\cdot s_2 + s_1f_2(x) + s_2f_1(x) + f_1(x)f_2(x)$ where the only term without a power of $x$ is $s_1\cdot s_2$. -->
However, the maximal degree of the polynomial $f_3(x)$ is $2t$, as it is a multiplication of two polynomials with maximal degree $t$. This would mean that it would require $2t+1$ distinct evaluations of this polynomial for it to be reconstructed through Lagrange interpolation.
As the evaluation of $f_3(x)$ at a particular point $x_i$ is simply $f_3(x_i)=f_1(x_i)\cdot f_2(x_i)$, and any $2t+1$ distinct evaluations of $f_3(x)$ uniquely identify it among all polynomials with maximal degree $2t$, knowledge of $(x_i, f_1(x_i)\cdot f_2(x_i))$ at $2t+1$ distinct evaluation points $x_i$ is enough to recover $f_3(x) = f_1(x) f_2(x)$ through Lagrange interpolation. Therefore, each individual party can create $(x_i, f_1(x_i)\cdot f_2(x_i))$ from the $(x_i, f_1(x_i))$ and $(x_i, f_2(x_i))$ that they already have, and use it to collaborate with $2t$ other parties to reconstruct $s_3$ directly.
If $n$ parties have shares for secrets $s_1$ and $s_2$, any $2t+1\leq n$ parties can come together to obtain secret $s_3$ using only their shares for $s_1$ and $s_2$. This of course requires that $t < n/2$ to begin with. Further, it is not ideal that the required number of participants is now almost double the original $t+1$ needed for reconstructing secrets.
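The sketch below illustrates both sides of this, again under the same illustrative setup (prime field, points $x_i = 1, ..., n$, helper names of our own choosing): $2t+1$ product shares reconstruct $s_1 \cdot s_2$, while $t+1$ of them do not (except with negligible probability).
```python
# Sketch of the naive multiplicative case: product shares lie on a degree-2t polynomial,
# so 2t+1 of them are needed. Same illustrative setup as the previous sketches.
import random

P = 2**61 - 1

def make_shares(s, t, n):
    coeffs = [s] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P) for x in range(1, n + 1)]

def reconstruct(shares):
    out = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num, den = num * -xj % P, den * (xi - xj) % P
        out = (out + yi * num * pow(den, -1, P)) % P
    return out

t, n = 2, 5                       # note n >= 2t + 1
s1, s2 = 111, 222
sh1, sh2 = make_shares(s1, t, n), make_shares(s2, t, n)

# each party i locally multiplies its two shares: a point on f3 = f1 * f2, of degree 2t
sh3 = [(x, y1 * y2 % P) for (x, y1), (_, y2) in zip(sh1, sh2)]
assert reconstruct(sh3[:2 * t + 1]) == s1 * s2 % P   # 2t+1 shares suffice
assert reconstruct(sh3[:t + 1]) != s1 * s2 % P       # t+1 are (almost surely) not enough
```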
---
<span style="color: #0; font-family: Times New Roman; font-size: 1.1em;">A secure degree reduction for multiplicative homomorphism
</span>
---
Clearly, using $f_3(x) = f_1(x) \cdot f_2(x)$ directly is not viable if we need to keep the required number of participants at $t+1$. In this section, we present a method where a new polynomial of maximal degree $t$ is communicated and shared between $n$ parties s.t. it passes through the multiplicative secret $s_3 = s_1 \cdot s_2$ at the origin.
This is accomplished by each party making computations on the shares they already have for $s_1$ and $s_2$ and communicating some of the results of these computations to other parties without revealing their original shares.
It starts with each party $i$ generating a new random polynomial $f'_i(x)$ of maximal degree $t$ that passes through $f_1(x_i) \cdot f_2(x_i)$ at the origin. Then the evaluations of each $f'_i(x)$ at $\{x_1, x_2, ..., x_n\}$ are computed by party $i$, which we will refer to as $\{z_{i1}, z_{i2}, ..., z_{in}\}$. All of this can be represented as a matrix of values:
\begin{matrix}
& 0 & x_1 & x_2 & ... & x_n\\
\hline
f'_1(x) |& f_1(x_1)\cdot f_2(x_1) & z_{11} & z_{12} & ... & z_{1n}\\
f'_2(x) |& f_1(x_2)\cdot f_2(x_2) & z_{21} & z_{22} & ...& z_{2n}\\
... & ... & ... & ... & ...& ...\\
f'_n(x) |& f_1(x_n)\cdot f_2(x_n) & z_{n1} & z_{n2} & ...& z_{nn}
\end{matrix}
<!-- f' corresponds to $\sigma_{3,4,5}$
z_11, z_12,... z_1n corresponds to (3,6,9) subshares
z_21, z_22,... z_2n corresponds to (8,3,9) subshares
-->
As it stands, the information in the $i$th row is known only to the $i$th party. A fair (non-adversarial) distributor/dealer sends information across parties such that the $i$th party now also knows all the information from the $i$th column. In particular, $[z_{1i}, z_{2i}, ..., z_{ni}]$ becomes known to the $i$th party, alongside $[z_{i1}, z_{i2}, ..., z_{in}]$ which it already knew. This does not reveal the original shares of the parties to each other.
<!-- "Thus, after distributing the shares, party 1 holds (3,8,7); party 2 holds (6,3,9); and party 3 holds (9,9,0)." This corresponds to column values, i.e. $z_{1i}, z_{2i}, ..., z_{ni}$ -->
Now, it is possible to create a new set of polynomials $f^*_i(x)$ that pass through $[z_{1i}, z_{2i}, ..., z_{ni}]$ at the points $[x_1, x_2, ..., x_n]$. Note that each $f^*_i(x)$ encodes some information from all $n$ of the random polynomials $f'$ constructed at the previous stage. This can be represented as a kind of transpose of the original matrix s.t. the columns are evaluations of the polynomials $f^*$.
\begin{matrix}
& f^*_1(x) & f^*_2(x) & ... & f^*_n(x)\\
\hline
0 \hspace{0.1in} |& v^*_1? & v^*_2? & ... & v^*_n?\\
x_1 |& z_{11} & z_{12} & ... & z_{1n}\\
x_2 |& z_{21} & z_{22} & ...& z_{2n}\\
... & ... & ... & ... & ...\\
x_n |& z_{n1} & z_{n2} & ...& z_{nn}
\end{matrix}
<!--
"After interpolation" generates $f^*$, "the shares held by the parties are 3, 7, 0" which corresponds to evaluations of $f^*$ at the origin. -->
It is important to realize that unlike the $f'_i(x)$, which are randomly chosen to be of maximal degree $t$ with only a single constraint on their evaluation at the origin, the $f^*_i$ are determined by their $n$ constraints through Lagrange interpolation and have a maximal degree of $n-1$ as a result.
Now, the polynomial $f^*_i$, as well as its value at the origin $v^*_i=f^*_i(0)$, are known only to the $i$th party. It turns out that any collection of $t+1$ distinct pairs $(x_i, v^*_i)$ identifies (through Lagrange interpolation) the same polynomial $f^+(x)$ of maximal degree $t$, and further, this polynomial passes through $s_3=s_1\cdot s_2 = f_1(0)\cdot f_2(0)$ at the origin.
\begin{matrix}
& 0 & x_1 & x_2 & ... & x_n\\
\hline
f^+(x) |& f_1(0)\cdot f_2(0)=s_3 & v^*_1 & v^*_2 & ... & v^*_n
\end{matrix}
If this claim, which we will call the *origin transfer lemma*, is indeed true, all the parties now have updated shares $(x_i, v^*_i)$ for secret $s_3$ that they computed from their original shares for $s_1$ and $s_2$ (along with some information from others) without revealing those shares to others. Further, any $t+1$ of these new shares can be used for reconstructing secret $s_3$, as we wanted.
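Before proving the lemma, here is a sketch of the whole degree-reduction round in code, under the same illustrative setup as the earlier sketches (prime field, points $x_i = 1, ..., n$, our own helper names), and requiring $n \geq 2t+1$: each party re-shares its product share with a fresh degree-$t$ polynomial $f'_i$, the sub-shares $z_{ij}$ are distributed column-wise, each party interpolates its column at the origin to get $v^*_i$, and any $t+1$ of the new shares $(x_i, v^*_i)$ reconstruct $s_3 = s_1\cdot s_2$.
```python
# Sketch of the degree-reduction round for multiplication (illustrative parameters;
# requires n >= 2t + 1). Rows of Z are the fresh polynomials f'_i evaluated at the x_j;
# columns of Z are what party j ends up holding, as in the matrices in the text.
import random

P = 2**61 - 1

def make_shares(s, t, n):
    coeffs = [s] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P) for x in range(1, n + 1)]

def interp_at_zero(points):
    out = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num, den = num * -xj % P, den * (xi - xj) % P
        out = (out + yi * num * pow(den, -1, P)) % P
    return out

t, n = 2, 5
s1, s2 = 111, 222
sh1, sh2 = make_shares(s1, t, n), make_shares(s2, t, n)
xs = [x for x, _ in sh1]

# Step 1: party i re-shares its product share f1(x_i)*f2(x_i) with a fresh degree-t polynomial f'_i.
#         Z[i][j] = z_{ij} = f'_i(x_j); the z_{ij} are then routed so that party j learns column j.
Z = [[y for _, y in make_shares(y1 * y2 % P, t, n)]
     for (_, y1), (_, y2) in zip(sh1, sh2)]

# Step 2: party j interpolates its column [z_{1j}, ..., z_{nj}] at the origin to obtain v*_j.
v_star = [interp_at_zero(list(zip(xs, [Z[i][j] for i in range(n)]))) for j in range(n)]

# Step 3: the new shares (x_j, v*_j) lie on a degree-t polynomial passing through s3 = s1*s2,
#         so any t+1 of them reconstruct the product.
new_shares = list(zip(xs, v_star))
assert interp_at_zero(new_shares[:t + 1]) == s1 * s2 % P
assert interp_at_zero(new_shares[-(t + 1):]) == s1 * s2 % P
```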
The only thing left now is to prove the *origin transfer lemma*.
---
**Proof of origin transfer lemma**: First, let us try to visually demonstrate what we are trying to prove. Consider an $(n+1) \times (n+1)$ matrix whose elements represent evaluations of a series of polynomials at the points $[0, x_1, x_2, ..., x_n]$.
\begin{matrix}
& & 0 & x_1 & x_2 & ... & x_n\\
& & f_3(x) & f^*_1(x) & f^*_2(x) & ... & f^*_n(x)\\
\hline
0 & f^+(x) |& f_1(0)\cdot f_2(0)? & v^*_1 & v^*_2 & ... & v^*_n\\
x_1 & f'_1(x) \hspace{0.05in}|& f_1(x_1)\cdot f_2(x_1) & z_{11} & z_{12} & ... & z_{1n}\\
x_2 & f'_2(x) \hspace{0.05in}|& f_1(x_2)\cdot f_2(x_2) & z_{21} & z_{22} & ...& z_{2n}\\
... & ... & ... & ... & ... & ...& ...\\
x_n & f'_n(x) \hspace{0.05in}|& f_1(x_n)\cdot f_2(x_n) & z_{n1} & z_{n2} & ...& z_{nn}
\end{matrix}
Complicating matters is the fact that both individual rows and individual columns are associated with particular polynomials. For example, the elements of the first row represent the evaluations of the polynomial $f^+(x)$ at $[0, x_1, x_2, ..., x_n]$, while the first column represents the evaluations of the polynomial $f_3(x)$ at $[0, x_1, x_2, ..., x_n]$, etc. Note that we can write this a bit more concisely as follows:
\begin{matrix}
& & 0 & x_1 & x_2 & ... & x_n\\
& & f_3(x) & f^*_1(x) & f^*_2(x) & ... & f^*_n(x)\\
\hline
0 & f^+(x) |& f_3(0)? & v^*_1 & v^*_2 & ... & v^*_n\\
x_1 & f'_1(x) \hspace{0.05in}|& f_3(x_1) & z_{11} & z_{12} & ... & z_{1n}\\
x_2 & f'_2(x) \hspace{0.05in}|& f_3(x_2) & z_{21} & z_{22} & ...& z_{2n}\\
... & ... & ... & ... & ... & ... & ...\\
x_n & f'_n(x) \hspace{0.05in}|& f_3(x_n) & z_{n1} & z_{n2} & ...& z_{nn}
\end{matrix}
We obtained this matrix simply by combining all the information from the previous section. The first column corresponds to evaluations of $f_3(x)$. The $z_{ij}$ are evaluations of the random polynomials $f'_i(x)$ of maximal degree $t$ picked by each party, so each row corresponds to the evaluations of one such polynomial. Finally, the $v^*_i$ of the first row are obtained deterministically from the Lagrange interpolations of the columns of $z_{ij}$, which create the $f^*_i(x)$ described before, whose evaluations at $0$ correspond to the $v^*_i$.
What about $f^+(x)$ in the first row? It is defined as the polynomial obtained from any $t+1$ subset of the $v^*_i$, and therefore has maximal degree $t$. But if it is defined through a particular subset, the rest of the evaluations outside that subset must be shown to be consistent with it. Further, we do not know its evaluation at $0$, which needs to equal $f_3(0)=f_1(0)\cdot f_2(0)=s_3=s_1\cdot s_2$ for the origin transfer lemma to hold.
All row polynomials, $f'_i(x)$ and $f^+(x)$, are defined through Lagrange interpolation of $t+1$ evaluations. Specifically, we will define them through their evaluations at $[x_1, ..., x_{t+1}]$:
\begin{align}
f'_r(x) &= \sum_{c=1}^{t+1} \left(\prod_{j=1, j\neq c}^{t+1} \frac{x-x_j}{x_c-x_j}\right)f'_r(x_c)= \sum_{c=1}^{t+1} \left(\prod_{j=1, j\neq c}^{t+1} \frac{x-x_j}{x_c-x_j}\right)z_{rc}\\
f^+(x) &= \sum_{c=1}^{t+1} \left(\prod_{j=1, j\neq c}^{t+1} \frac{x-x_j}{x_c-x_j}\right)f^+(x_c)= \sum_{c=1}^{t+1} \left(\prod_{j=1, j\neq c}^{t+1} \frac{x-x_j}{x_c-x_j}\right)v^*_c\\
\end{align}
The column polynomials $f^*_c(x)$, on the other hand, use all $n$ evaluations available, for a maximal degree of $n-1$:
\begin{align}
f^*_c(x) &= \sum_{r=1}^{n} \left(\prod_{j=1, j\neq r}^{n} \frac{x-x_j}{x_r-x_j}\right)f^*_c(x_r)= \sum_{r=1}^{n} \left(\prod_{j=1, j\neq r}^{n} \frac{x-x_j}{x_r-x_j}\right)z_{rc}\\
\end{align}
The origin transfer lemma, as we called it (for lack of a better term), amounts to proving two facts: that $f^+(0) = f_3(0)$, and that $f^+(x_c) = v^*_c \hspace{0.2in} \forall c \in \{t+2, ..., n\}$. If these can be proved from the definitions above, we are done.
---
<!-- the validity of this fact: $f^+(x)$ is a maximal degree $t$ polynomial that evaluates at $[0, x_1, x_2, ..., x_n]$ to the first row of the matrix (which passes through the relevant $f^+(0)= s_3$ as we give in the matrix).
-->
<!-- In order to accomplish this, we will simply show that the polynomial $f^+(x)$ is a linear combination of $f'_i(x)$, which themselves are maximal degree $t$ polynomials. This in turn ensures that $f^+(x)$ has the same maximal degree $t$.
-->
We begin by writing out the definition of $v^*_c$:
\begin{align}
v^*_c=f^*_c(0) &= \sum_{r=1}^{n} \left(\prod_{j=1, j\neq r}^{n} \frac{-x_j}{x_r-x_j}\right)f^*_c(x_r)\\
&= \sum_{r=1}^{n} \left(\prod_{j=1, j\neq r}^{n} \frac{-x_j}{x_r-x_j}\right)z_{rc}\\
&= \sum_{r=1}^{n} \gamma_r z_{rc}.
\end{align} As we can see from this result, $v^*_c$ is just a linear combination of the column of values $z_{:c}$, with weights $\gamma_r = \prod_{j=1, j\neq r}^{n} \frac{-x_j}{x_r-x_j}$ that are the same for every column $c$.
<!-- This is crucially important, as we now know that at $n$ points $[x_1, ..., x_n]$, $f^+(x)$ evaluates to the same linear combination of the evaluations of $f'_r(x)$ at the same points. -->
<!-- Since the maximal degree of $f^+(x)$ cannot possibly exceed $n-1$ as it is determined -->
Mirroring this in the horizontal direction, we can also relate $f'_r(0)$ to the row values $z_{r:}$. Since each $f'_r$ has maximal degree $t$, its value at the origin is determined by any $t+1$ of its evaluations; using the points $[x_1, ..., x_{t+1}]$ and writing $\lambda_c = \prod_{j=1, j\neq c}^{t+1} \frac{-x_j}{x_c-x_j}$ for the corresponding weights (note that these are *not* the $\gamma$ weights above, which use all $n$ points),
\begin{align}
f'_r(0) = f_3(x_r) = \sum_{c=1}^{t+1} \lambda_c z_{rc}.
\end{align}
Further, since $f^+(x)$ is produced by Lagrange interpolation of the first $t+1$ values $v^*_c$, we can compute $f^+(0)$ directly,
\begin{align}
f^+(0) &= \sum_{c=1}^{t+1} \left(\prod_{j=1, j\neq c}^{t+1} \frac{-x_j}{x_c-x_j}\right)v^*_c = \sum_{c=1}^{t+1} \lambda_c v^*_c\\
&= \sum_{c=1}^{t+1} \lambda_c\left(\sum_{r=1}^{n} \gamma_r z_{rc}\right)\\
&= \sum_{c=1}^{t+1} \sum_{r=1}^{n} \lambda_c \gamma_r z_{rc}\\
&= \sum_{r=1}^{n} \gamma_r \left(\sum_{c=1}^{t+1} \lambda_c z_{rc}\right)\\
&= \sum_{r=1}^{n} \gamma_r f_3(x_r) = f_3(0).\\
\end{align}
Note that in the last line, we substitute $\sum_{c=1}^{t+1} \lambda_c z_{rc} = f'_r(0) = f_3(x_r)$ from above, and then use the fact that $f_3(x)$ has maximal degree $2t \leq n-1$, so the $n$-point weights $\gamma_r$ interpolate it exactly at the origin (the same reasoning we used for the $v_c^*$ computations). This gives $f_3(0)$, which is exactly what we wanted; the first part of the lemma is now proven.
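The key fact used in that last step can also be checked numerically. A small sketch (same illustrative field and points as before, not from the survey): for any polynomial $g$ of degree at most $n-1$, and in particular for $f_3$ of degree $2t \leq n-1$, the weights $\gamma_r$ recover $g(0)$ from $[g(x_1), ..., g(x_n)]$.
```python
# Numeric check of the last step of the proof: the n-point weights gamma_r recover g(0)
# from [g(x_1), ..., g(x_n)] for any g of degree <= n-1, in particular f3 of degree 2t.
# Illustrative prime field and points, as in the earlier sketches.
import random

P = 2**61 - 1
t, n = 2, 5
xs = list(range(1, n + 1))

# gamma_r = prod_{j != r} (0 - x_j) / (x_r - x_j), taken over all n points
gammas = []
for xr in xs:
    num = den = 1
    for xj in xs:
        if xj != xr:
            num, den = num * -xj % P, den * (xr - xj) % P
    gammas.append(num * pow(den, -1, P) % P)

# a random polynomial g of degree 2t (the degree of f3 = f1 * f2)
coeffs = [random.randrange(P) for _ in range(2 * t + 1)]
g = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

assert sum(gr * g(xr) for gr, xr in zip(gammas, xs)) % P == g(0)
```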
---
The second part, that $f^+(x_c) = v^*_c \hspace{0.2in} \forall c \in \{t+2, ..., n\}$, is a bit more difficult. It is possible to prove this directly from the definitions above but a simpler argument goes as follows.
We argue that
\begin{align}
f^+(x) = \sum_{r=1}^{n} \gamma_r f'_r(x)
\end{align}
We already know that this relationship holds at the points $x\in \{x_1, ..., x_{t+1}\}$: there, $f^+(x_c) = v^*_c = \sum_{r=1}^{n}\gamma_r z_{rc} = \sum_{r=1}^{n}\gamma_r f'_r(x_c)$, as explained earlier. Given this, and the fact that the maximal degrees of $f^+(x)$ and of $\sum_r \gamma_r f'_r(x)$ are both $t$, the above equality must hold for all $x$: there are $t+1$ constraints for $t+1$ degrees of freedom.
Given the above equation, it is trivial to show that $\forall c \in \{t+2, ..., n\}$,
\begin{align}
f^+(x_c) &= \sum_{r=1}^{n} \gamma_r f'_r(x_c) \\
&=\sum_{r=1}^{n} \gamma_r z_{rc} = v_c^*
\end{align}
This concludes our proof of the origin transfer lemma.
---
<span style="color: #0; font-family: Times New Roman; font-size: 1.1em;">References
</span>
---
1. [AHS20] Jean-Philippe Aumasson, Adrian Hamelink, and Omer Shlomovits. A Survey of ECDSA Threshold Signing. https://eprint.iacr.org/2020/1390.pdf, Cryptology ePrint Archive, 2020.
2. [Sha79] Adi Shamir. How to Share a Secret. https://web.mit.edu/6.857/OldStuff/Fall03/ref/Shamir-HowToShareASecret.pdf, 1979.
3. Some relevant stackexchange posts: https://crypto.stackexchange.com/questions/15395/can-i-use-shamirs-secret-sharing-scheme-for-multiplicative-homomorphism-for-sec
https://crypto.stackexchange.com/questions/13088/secure-degree-reduction-for-shamirs-secret-sharing