---
title: TADA Part 1, Time-Assured Deniable Authentication
tags: [cryptography, protocols]
---
# TADA 🎉 : Time-Assured Deniable Authentication (Part 1)
> *thank you to Emma, Enrico, Zoey, and various others for thoughts and conversations, and to the [Community Privacy Residency](https://community-privacy.github.io/) for creating the space for this project to come to life :)*
<!-- This project introduces the following:
- **TADA**, an **adaptable protocol for deniable authentication**, and its different flavors!
- A **Rust primitive** for **[Boneh-Naor timed commitments](https://crypto.stanford.edu/~dabo/abstracts/timedcommit.html)**.
--- -->
## Why is deniable authentication important?
<!-- In conventional authentication protocols, like OAuth tokens or TLS client certificates, successful authentication produces a verifiable paper trail of who authenticated, when, and to what. Verifiable authentication has become the standard in modern information ecosystems and even essential in areas as financial auditing. However, many sensitive and important contexts have struggled to adapt precisely because verifiable authentication offers the wrong guarantees.
-->
In a [recent article on the state of digital privacy](https://vitalik.eth.limo/general/2025/04/14/privacy.html?curius=2011), Vitalik argues that for social structures to remain aligned and stable, strong limits must be imposed on the amount of coordination between their constituents, and consequently, **strong promises must be granted about the deniability of their actions.** Democracy relies on voting systems that use *blind ballots*, and lives are saved by calls to emergency services protected under [Good Samaritan laws](https://www.ericratinoff.com/understanding-californias-good-samaritan-law/). Conversely, individuals should not be forced to expose their health data to the risk of being leaked (to future employers, or to data brokers for profit, for example) as the only way to gain access to medical care.
In general, these types of interactions should be *unlinkable.* No two service providers should be able to collude and learn more information about someone by comparing records of their independent interactions. In fact, such a record should not even *imply the service provider learned anything about whether or not you were able to successfully authenticate.* If you decide to go to a 21+ queer bar, no one should be able to tell whether or not you even got in.
We often refer to this concept as ***plausible deniability***, and a protocol that upholds plausibly deniable interactions as **deniable authentication**. In deniable authentication, a verifier should only authenticate valid provers, and no one but the verifier should know which provers are actually valid.
Formally, a deniable authentication protocol should have the following characteristics:
- **Deniability:** A verifier $V$ cannot convince any third party that a prover $P$ has provided a valid credential.
- **Completeness:** For any prover $P$ with a valid credential, $V$ should always accept after following this protocol.
- **Soundness:** Through repeated rounds of the protocol with a prover $P$, a malicious verifier $A$ does not learn anything that would allow it to falsely authenticate to another verifier $V'$ by pretending to be $P$.
<!-- Interestingly, DA almost involves this notion of "*less* than zero knowledge". -->
## The Designated Verifier Protocol
A vanilla deniable authentication protocol relies on the idea of a [designated verifier (DV)](https://github.com/enricobottazzi/designated-verifier-proof), which receives and verifies zero-knowledge proofs $\pi_{DV}$ of the following circuit:
- $P$ knows some witness $w$ for a statement $x$ such that $x \in \mathcal{L}$, described by the circuit $C(x, w)$, OR
- $P$ knows $\mathsf{sk}_{DV}$, the secret key of the designated verifier.

The intuition here is that if $DV$ attempts to convince another party $A$ that $P$ has successfully validated, it will be unclear to $A$ whether $\pi_{DV}$ was a proof made by $P$ with a legitimate valid witness, or by a fraudulent $DV$ using only their private key.
However, anyone who learns the designated verifier's secret key can create false proofs without consequence, and the designated verifier can always authenticate themselves. These weaknesses undermine the scheme as an ironclad deniable authentication protocol in practice: its security hinges on the assurance that $\mathsf{sk}_{DV}$ remains uncompromised, which usually means introducing a root of trust in some external key management system.
More subtly, its deniability rests entirely on the ability of *one party*, the designated verifier, to generate fake proofs. Plausible deniability is therefore enforced solely by the designated verifier, and the true validity of proofs becomes easier to infer, since one party can only generate so many fake proofs at a time.
But plausible deniability should not require any roots of trust to be enforceable. **The ability to deniably authenticate should be incorruptible.**

Furthermore, it would be nice for the (designated) verifier to be able to deny authentication to provers suspected of having produced an accepting proof through the bottom branch.
## An Introduction to Timed Commitments
A [**timed commitment** (Boneh, Naor)][1] is a commitment scheme where after some time, the receiver can forcibly retrieve the committed value on their own. In other words, the hiding property is *ephemeral*.
More formally, a timed commitment $\zeta(T, t, \epsilon)$ is a cryptographic scheme involving a setup algorithm $\mathsf{Setup}(1^{\lambda}) \rightarrow pp$, the following three stages, and a security guarantee:
- $\mathbf{\mathsf{Commit}}(pp, T, m) \rightarrow \left<c, \pi_{0}\right>:$ The sender commits to a message $m$ that takes time $T$ to forcibly recover, and sends a proof $\pi_0$ that the commitment $c$ was computed correctly.
- $\mathbf{\mathsf{Open}}(c) \rightarrow \left<m, \pi\right>:$ The sender reveals $m$ to the receiver, along with a proof that $m$ is the correct committed value.
- $\mathbf{\mathsf{ForceOpen}}(c) \rightarrow \left<m, \pi\right> :$ The receiver executes an algorithm on $\zeta$ which runs in at most time $T$ to open the committed value $m$ and produce a proof that it was calculated correctly.
- **Security:** Any algorithm with runtime at most $t$, even with access to polynomially many parallel processors, has at most a negligible probability $\epsilon$ of extracting $m$.
Timed commitments work because repeated squaring modulo a composite $N$ is inherently sequential and hard to parallelize, yet always possible to compute eventually. We'll colloquially say that $\zeta (T, t, \epsilon)$ **expires** once time $t$ has elapsed since it was sent.
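To see this asymmetry concretely, here's a tiny stdlib-only demo (the primes, generator, and delay parameter are toy values chosen purely for illustration): a party who knows the factorization of $N$ can shortcut the chain of squarings via $\varphi(N)$, while anyone else has to perform the squarings one after another.
```python
# Toy demo: sequential squaring vs. the phi(N) shortcut (illustrative parameters only).
p1, p2 = 1019, 1123                      # toy primes, both ≡ 3 (mod 4)
N, phi = p1 * p2, (p1 - 1) * (p2 - 1)
g, k = 2, 16                             # goal: compute g^(2^(2^k)) mod N

fast = pow(g, pow(2, 2**k, phi), N)      # with the factorization: two cheap modexps
slow = g
for _ in range(2**k):                    # without it: 2^k inherently sequential squarings
    slow = slow * slow % N
assert fast == slow
```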
:::spoiler **How does forced opening work?**
The security of timed commitments relies on the **generalized BBS assumption**: for $g \in \mathbb{Z}_N^*$, $N = p_1p_2$ for primes $p_1 \equiv p_2 \equiv 3 \pmod 4$, and $k<n$, where $n$ is the number of bits of $N$, given
$$W = \left< g^2, g^{4}, ..., g^{2^{2^{k}}}\right> \mod N,$$
the element $g^{2^{2^{k+1}}}$ is indistinguishable from a random quadratic residue in $\mathbb{Z}_N$ for any probabilistic algorithm with runtime $t \ll 2^k$, even one with polynomially many parallel processors.
To commit to a bit string $S$, the committer generates the two secret primes $p_1, p_2$ and broadcasts $N$, as well as the XOR of each bit $S_i$ with the LSB of $g^{2^{(2^{k}-i)}} \bmod N$. $\mathsf{ForceOpen}$ requires the receiver to compute each $g^{2^{(2^{k}-i)}}$ herself, by sequential squaring.
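Here's a minimal sketch of this commit / force-open flow with toy parameters; the well-formedness proof $\pi_0$ is elided, and `commit` / `force_open` are illustrative helpers rather than the interface of any existing library.
```python
def commit(bits, g, N, k):
    # The committer blinds bit i with the LSB of g^(2^(2^k - i)) mod N. (A real
    # committer would use the factorization of N to shortcut these exponentiations;
    # with toy sizes we just compute them directly.)
    return [b ^ (pow(g, 2 ** (2**k - i), N) & 1) for i, b in enumerate(bits)]

def force_open(blinded, g, N, k):
    # The receiver has no shortcut: she recovers each pad by sequential squaring,
    # recording g^(2^j) mod N whenever j = 2^k - i for some committed index i.
    needed = {2**k - i for i in range(len(blinded))}
    pads, cur = {}, g % N
    for j in range(1, 2**k + 1):
        cur = cur * cur % N                  # one inherently sequential squaring
        if j in needed:
            pads[j] = cur & 1
    return [b ^ pads[2**k - i] for i, b in enumerate(blinded)]

p1, p2 = 1019, 1123                          # toy primes ≡ 3 (mod 4)
N, g, k = p1 * p2, 2, 16
secret_bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert force_open(commit(secret_bits, g, N, k), g, N, k) == secret_bits
```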
:::
## Timed Commitments Make Proofs Ephemeral
Imagine you're a verifier and you send some randomness $r$ in a timed commitment $\zeta(T, t, \epsilon)$ to a prover. After time $t$ you must assume they can guess $r$ with nontrivial advantage, and after $T$ you can expect they know $r$ entirely. It follows, then, that a proof for the following circuit should be **automatically invalid** if received after time $t$:
- $P$ knows some witness $w$ for a statement $x$ such that $x \in \mathcal{L}$, described by the circuit $C(x, w)$, OR
- *$P$ knows the committed value of $\zeta$.*
Sharing the transcript of this interaction would fail to convince a third party that the proof was not generated simply by extracting the committed value, since the third party can even extract this value herself within time $T$ by running $\mathsf{ForceOpen}(c)$. What about network loggers? As long as the addresses of the message senders and receivers are encrypted, no one watching the verifier's network can tell how much time elapsed between the timed challenge being sent to the prover and the proof being received.
This construction is at the heart of **time-assured deniable authentication (TADA).** Adding a timed commitment to a circuit results in an *ephemeral proof*.

The core shortcoming of the designated verifier protocol is that deniability is fully conditional on an uncompromised designated verifier and their sole ability to present false-looking proofs. By replacing this arm of the designated verifier circuit with **knowledge of the opening of a timed commitment $\zeta$**, proofs are only accepted if they arrive *while it is computationally infeasible* to create an accepting proof without a valid witness. Moreover, once the commitment expires, anyone can generate fake proofs using the lower branch of the circuit, enhancing deniability by *vastly expanding the space of plausible fake proofs*.
Is it reasonable to expect a prover to provide a valid proof before some "expiration time"? Well firstly, the verifier can just alter the public parameters of the timed commitment so that $t$ reflects what a realistic client-side proof generation time may be for a given circuit. Furthermore, I think this expectation is not only reasonable, but one that ***we already have*** in existing interactions with real-world authentication services. When you want to check your bank account balance through an ATM and go idle, for example, the machine will ask if you're still there and make you tap on the screen to confirm this. This makes sense, since by starting an interaction with the ATM you're signaling that you're prepared to show your personal credentials and therefore able to provide them in a timely fashion. From the user side, a lot of sensitive information is changing hands, so it is reassuring that this interaction is done in an *ephemeral communication channel*.
## TADA!
**TADA** lifts this idea of ephemeral communication channels into a programmable piece for preserving plausible deniability in a variety of applications. Before diving into examples of what those may look like, we formally present the vanilla TADA protocol.
Assume that the prover $P$ wants to deniably authenticate to a verifier $V$ with a proof that they know some witness value $w$ such that the statement $x$ satisfies the constraints $C(x, w) = 1$. These constraints are equivalent to membership in the language, $x\in \mathcal{L}$, and can be treated as preprogrammed, app-specific authentication logic.
### Vanilla-TADA

1. $V$ generates some random value $r\leftarrow \mathbb{Z}_N$ for the modulus $N$ of the timed commitment scheme, and runs $\mathsf{Commit}(pp, T, r)$ to generate the tuple $\left<c', \pi_0\right>$ of the timed commitment $c' := \zeta(r)$ and its corresponding proof.
2. $V$ sends $\left<c', \pi_0\right>$ to $P$.
3. $P$ checks $\pi_0$ and provides a zero-knowledge proof $\pi$ of the following relation:
- $P$ knows some witness and statement $(w, x)$ satisfying constraints $C(x, w)$ **OR**
- $P$ knows the opening value of the commitment $\zeta$.
4. Before time $t$ has elapsed for the verifier since the completion of step 2, $P$ sends $\pi$ to $V$.
5. $V$ verifies the proof and accepts if received before time $t$.
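As a sanity check on these steps, here is a schematic of the verifier's side. The names `channel`, `tc_commit`, and `verify_or_proof` are hypothetical placeholders for the transport, the timed-commitment primitive, and the OR-relation verifier; they are not the API of any particular library.
```python
import secrets
import time

def vanilla_tada_verify(channel, tc_commit, verify_or_proof, t_seconds):
    # Step 1: commit to a fresh random challenge r with a timed commitment.
    r = secrets.randbits(128)
    c, pi_0 = tc_commit(r)
    # Step 2: send <c, pi_0> to the prover and start the clock.
    channel.send((c, pi_0))
    started = time.monotonic()
    # Steps 3-4: receive the prover's proof of "C(x, w) = 1 OR I know the opening of c".
    pi = channel.recv()
    elapsed = time.monotonic() - started
    # Step 5: accept only if the proof verifies AND it arrived before the expiry
    # window t, i.e. while force-opening the commitment was still infeasible.
    return elapsed <= t_seconds and verify_or_proof(pi, c)
```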
Completeness follows immediately, and soundness derives from the zero-knowledge property of $\pi$. We've touched on deniability above already, but more formally, any polynomial-time simulator can extract the opening of $c'$, create a valid proof $\pi$, and attach it to the transcript.
## Look Ma, No Client-side Proofs!
Client-side proof generation is expensive, and perhaps inappropriate in situations where only simple conditions need to be checked, like being able to produce signatures that verify under a particular public key. Applications such as message authentication require deniable authentication but naturally involve many user interactions. Fortunately, timed commitments let us get around this as well.
The magic of timed commitments is still in their ability to create these ephemeral communication channels. We've used them so far to mask proof generation, but a more generous interpretation is that they can obscure any message sent to the verifier, so long as it could plausibly have been derived by anyone who knows the opening of the timed commitment. If we imagine this scaling to a large enough network of individuals generating forced openings for each verifier interaction, ***any communication mediated through timed commitments is deniable.***
Essentially, the verifier, after generating some timed commitment to $r$, can hide $r$ behind some trapdoor function with a trapdoor value the prover wishes to demonstrate knowledge of. The verifier then sends both the trapdoor function output and the timed commitment (to the same value) to the prover, and only accepts if the prover sends back the correct $r$ (retrieved through opening the trapdoor) quickly. Of course, the verifier also needs to generate a NIZK proof that the committed value and preimage of the trapdoor function are the same. And there's deniable authentication without client-side proofs!
### Encryption-TADA
**Encryption-TADA** uses public-key encryption for the trapdoor function. Thus, provers carrying out this protocol are attesting to knowledge of a specific private key.

1. $V$ generates some random value $r\leftarrow \mathbb{Z}_N$ for the modulus $N$ of the timed commitment scheme, and creates the timed commitment $c' := \zeta (r)$, which can be force opened in time $T$.
2. $V$ additionally generates an encryption $\mathbf{e} := \mathsf{Enc}_P(r)$ of $r$ to some public key $\mathsf{pk_P}$.
3. $V$ generates a zero-knowledge proof $\pi$ that the opening of $c'$ is equal to the value encrypted by $\mathbf{e}$.
4. $V$ sends to the prover $P$ the tuple $\left<c', \mathbf{e}, \pi\right>$.
5. $P$ checks that $\pi$ is valid and decrypts $\mathbf{e}$ to $\mathsf{Dec}(\mathbf{e}) = r'$.
6. Before time $t$ has passed for $V$, $P$ sends $r'$ in the clear.
7. $V$ verifies that $r' == r$.
Completeness follows directly from the correctness of $\mathsf{Enc}$ and $\mathsf{Dec}$, and soundness holds since the only value $P$ reveals is $r$, which $V$ already knows. Deniability follows from a similar argument to that for Vanilla-TADA.
Note that in this construction, the only proof generation is being done from the ***verifier.*** The client only needs to verify $\pi$, decrypt a ciphertext, and send randomness in plaintext, which can all be done on a personal device!
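To make the round trip concrete, here is a toy end-to-end run of these steps, with textbook RSA standing in as a placeholder for $\mathsf{Enc}_P / \mathsf{Dec}$ and with the timed commitment and the equality NIZK left as comments; it sketches the message flow only and is in no way a secure instantiation.
```python
import secrets
import time

# Prover's toy RSA keypair (textbook RSA with tiny primes, for illustration only).
p, q = 61, 53
n_P, e_P = p * q, 17
d_P = pow(e_P, -1, (p - 1) * (q - 1))       # sk_P

# --- Verifier, steps 1-4 ---
r = secrets.randbelow(n_P)
# c_prime = Commit(pp, T, r)                # timed commitment zeta(r), elided
# pi      = prove_equal(c_prime, enc)       # NIZK that both hide the same r, elided
enc = pow(r, e_P, n_P)                      # e := Enc_P(r)
sent_at = time.monotonic()                  # V starts its clock when it sends <c', e, pi>

# --- Prover, steps 5-6: check pi (elided), decrypt, reply; no proof generation ---
r_prime = pow(enc, d_P, n_P)                # Dec with sk_P

# --- Verifier, step 7: accept iff the reply is correct and arrived before expiry ---
t_expiry = 5.0                              # seconds, tuned to the commitment's t
accept = (time.monotonic() - sent_at) <= t_expiry and r_prime == r
print("accept:", accept)
```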
### On-chain Deniable Authentication
If we're in a setting with some online database linking public keys to the credentials they possess (e.g., verified ECDSA signatures stored on-chain, or an issuer's record of the anonymous credentials it has generated), we can use Encryption-TADA instead of Vanilla-TADA to authenticate!
1. Run **Encryption-TADA** to the end of step 5.
2. $P$ produces a signature $\sigma_P(r')$ of the recovered $r'$.
3. Since everything is on-chain, $P$ hides their address by using a relayer to send $\mathbf{e_P} := \mathsf{Enc}_V(\sigma_P(r'))$ to $V$.
4. $V$ decrypts $\mathbf{e_P}$ to $\sigma' := \mathsf{Dec}_V(\mathbf{e_P})$ and verifies that the signature is valid and that the signed message satisfies $r' == r$.
5. Additionally, $V$ checks that $P$'s public key is linked to an existing valid credential and rejects otherwise.
6. If the checks pass, $V$ accepts.
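As a sketch of steps 2-5, here is how the signature leg might look, assuming the third-party `cryptography` package for Ed25519; the relayer hop, the $\mathsf{Enc}_V / \mathsf{Dec}_V$ wrapping, and the on-chain credential lookup are elided.
```python
import secrets

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Prover's long-lived keypair; on-chain, pk_P would already be linked to a credential.
sk_P = Ed25519PrivateKey.generate()
pk_P = sk_P.public_key()

# Encryption-TADA steps 1-5 leave the prover holding r_prime and the verifier holding r.
r = secrets.randbits(128)
r_prime = r                                     # an honest prover recovered it correctly

# Step 2: P signs the recovered challenge.
sigma = sk_P.sign(r_prime.to_bytes(16, "big"))

# Steps 4-5: V checks the signature verifies under pk_P on the message r
# (and, elided here, that pk_P is linked to a valid credential and that the
# reply arrived inside the timing window).
try:
    pk_P.verify(sigma, r.to_bytes(16, "big"))
    accept = True
except InvalidSignature:
    accept = False
print("accept:", accept)
```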
The relayer even pays the gas fee, so the prover's authentication should be untraceable!
Despite its simplicity, the encryption flavor of TADA can already be melded in creative ways to make secure whistleblowing platforms and traceless chatrooms. It's also worth noting that this public-key encryption algorithm can be swapped out for a **witness encryption scheme** to give more expressivity in the attributes that a prover can demonstrate possession of. As witness encryption matures, even more types of deniable authentication protocols will be unlocked.
Let's go explore what some of these constructions could look like in [part 2](https://hackmd.io/@topo/real-world-tada)!
[1]: https://crypto.stanford.edu/~dabo/abstracts/timedcommit.html