# Offchain and Scriptless Mixer

An offchain mixer for account-based blockchains, similar to [CoinJoin](https://en.bitcoin.it/wiki/CoinJoin) for Bitcoin, would provide benefits such as greatly increased privacy over on-chain solutions like Tornado Cash, as well as gas savings. It would also be chain-agnostic, as the protocol can be implemented for various curves and signature schemes.

We propose a design that utilizes a threshold signature scheme (TSS) to achieve coin mixing without using a smart contract. Instead, off-chain coordination is used. The downside of such a design is that the comforting on-chain security is no longer within reach. Hence, for an independent protocol targeting a wild realm of permission-less P2P, we should consider worst-case scenarios when evaluating its safety.

The purpose of this document is to provide a comprehensive feasibility analysis and evaluate the practicality of the proposed design, considering privacy implications and comparing it with existing on-chain and contract-based solutions.

## Scenario

Assume we have $n$ parties who wish to participate in a coin mix. Assume all parties want to mix `1 coin` - all parties must use the same amount, otherwise confidentiality of coin amounts is broken. Assume we have some threshold signature scheme (TSS) which can generate a shared secret between parties as long as $t$ out of $n$ parties are honest and remain online. For this protocol, we assume $t$ out of $n$ parties are honest and will remain online; otherwise, the protocol breaks.

#### **Phase 1 (off-chain)**

- All parties meet and agree on participating in a mix, as well as the amounts to mix and the total size of the mix.
- They also generate withdraw accounts `[withdrawAccount_i]`, which will be used for the withdrawal transactions. Each party must use a different nonce for the withdrawal.
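The distinct-nonce requirement can be sketched as follows. This is an illustrative Python sketch, not part of the protocol specification: the placeholder addresses and the lexicographic ordering rule are assumptions (ordering options are discussed later in this document).

```python
# Illustrative sketch: assign each withdraw account a distinct nonce.
# Sorting addresses lexicographically is one of the ordering options
# discussed later; the addresses below are placeholders.

def assign_nonces(withdraw_addresses, start_nonce=0):
    """Map each withdraw address to a unique, gapless nonce."""
    ordered = sorted(withdraw_addresses)  # deterministic, agreed-upon order
    return {addr: start_nonce + i for i, addr in enumerate(ordered)}

nonces = assign_nonces(["0xC0FFEE", "0xBEEF", "0xDEAD"])
# Every party can recompute the same mapping locally, so no extra
# coordination round is needed to agree on nonces.
```

Because the mapping is a pure function of data all parties already share, no additional agreement messages are needed in Phase 1.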
#### **Phase 2 (off-chain)**

- Parties run the TSS Keygen ceremony to produce:
    - a secret key distributed across all parties (each of them ends up with its own share),
    - a public key and the corresponding `mixAddress`, in which all parties will lock `1 coin`.

#### **Phase 3 (on-chain)**

- Each party then locks `1 coin` in the `mixAddress`. This step needs to be fair and should ideally happen in an "atomic" fashion: either everyone locks or no one does.

#### **Phase 4 (off-chain)**

- Parties jointly sign $n$ transactions that transfer from `mixAddress` to `withdrawAccount_i` with nonce `nonce_i`.
- We note that if some parties refuse to sign or go offline, the protocol can still continue as long as $t + 1$ of them reveal partial signatures.
- Note also that parties can begin the `KeySign` algorithm from Phase 2 onwards, as they already know the withdraw addresses. They will pause execution right before revealing partial signatures and continue after locking their coins (Phase 3).
- Depending on the fairness guarantees Phase 3 provides, there are two possible scenarios for further actions:
    - If it happens atomically and everyone locks, then each node can simply release all partial signatures at once. This is the best-case scenario.
    - If this can't be achieved, then each node waits for a lock transaction on-chain and only then releases the partial signature corresponding to the locker's withdraw address.
        - This of course relies on the parties knowing which deposit corresponds to which withdrawal, which is obviously not ideal for privacy.
- Potentially there could be another step between Phases 3 and 4 where, if by some time $\Delta$ not all users have locked, everyone who is still online reveals their deposit address, so the other participants can then check whether they deposited and, if so, sign their withdrawal tx.
- Each party then combines the partial signatures received to create a valid transaction transferring `1 coin` out of `mixAddress`, as it is signed by `mixAddress`'s private key via the TSS.
- Finally, they submit the transactions in order of nonce to withdraw their funds to a fresh account.
- Interestingly, since each party receives all partial signatures, any one of them can compute all transactions for the group and potentially submit them on the group's behalf.
    - In an implementation of this, it would be best for privacy if each user submits all txs as soon as they get them, as it doesn't cost them anything and further obfuscates, on the network level, who each tx is from.

# Feasibility Analysis

## Assumptions

- Parties that locked their coins have an incentive to stay online.
    - **Note:** this doesn't necessarily guarantee their honesty, as they might be colluding (see the ["Rogue sets"](#Rogue-Sets-Sybil-Attack) attack).
- Parties are rational: if incentivized to stay online and not to collude, they will comply; i.e. we assume parties are not willing to burn their funds.
- The network is synchronous: all parties are expected to broadcast messages in a timely manner before proceeding to subsequent rounds.

## Adversarial Model

### Fault Tolerance

Generally, MPC algorithms provide security guarantees based on thresholds on the number of corrupted parties. Oftentimes, this fraction is at least half of the participants, i.e. the _honest majority assumption_. However, in real threshold signing or secret sharing deployments, an honest majority is often too strong an assumption and may go against operational requirements. Hence, realistic TSS protocols assume a _dishonest majority_, whose security holds even when an adversary can corrupt up to all but one participant.
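To make the threshold property concrete, here is a minimal Shamir secret-sharing sketch in Python. It illustrates only the $t$-of-$n$ reconstruction guarantee that threshold schemes are built on; real TSS protocols such as GG20 never reconstruct the secret key in one place, and the field prime here is an arbitrary choice for illustration.

```python
import random

# Toy Shamir secret sharing over a small prime field, illustrating that
# any t+1 of n shares reconstruct the secret, while t or fewer reveal nothing.
P = 2**127 - 1  # illustrative field prime; real schemes use the curve's group order

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(secret=123456789, t=2, n=5)
assert reconstruct(shares[:3]) == 123456789   # any t+1 = 3 shares suffice
assert reconstruct(shares[2:5]) == 123456789  # regardless of which ones
```

The same reconstruction property is what lets the mixer tolerate parties going offline in Phase 4: any $t+1$ revealed partial signatures are enough to proceed.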
We analyzed commonly used protocols such as [GG20](https://eprint.iacr.org/2020/540), [CMP](https://eprint.iacr.org/2020/492), and [GKSS20](https://eprint.iacr.org/2020/498) and found that their distributed key generation is safe as long as at least one party is honest. As for distributed signing, the protocols are able to guarantee unforgeability with $n - t$ honest parties, where $t$ is the threshold parameter set during Keygen.

The first trade-off that needs to be addressed is the one between risk minimization ($t$) and practical liveness. On the one hand, the higher we set $t$, the lesser the chance for parties to forge a malicious signature. At the same time, a higher $t$ leaves less room for unforeseen failures (a peer going offline). This is important since, after parties lock their coins inside `mixAccount`, the only way to withdraw is for $t+1$ parties to jointly compute valid signatures.

Another trade-off separates group size ($n$) and performance efficiency: a bigger $n$ can sufficiently decentralize trust and minimize the risk of collusion, but at the expense of increased performance overhead. We discuss the performance implications of this in a later section, but generally, we want to minimize trust requirements to avoid dealing with Sybil attacks and similar threats.

> **How to choose the threshold value?** We will leave this question hanging for now, but will return to it at the end of our feasibility analysis once all other considerations are covered.

### Adaptive Corruption

"Adaptive" is a realistic corruption model (in contrast to the "static" one), where adversaries are able to corrupt parties during the computation, potentially over the course of many sessions. We don't need to care about this as much, since for the current use case sets are rather ephemeral and gathered for one-time use. We can, however, use a common proactive security measure, key refresh, to minimize trust and privacy requirements.
More on this in the [Cross-mix accounts](#Cross-mix-accounts-with-Key-Resharing) section.

## Threat Models

### Unfair Withdraw (Front-running)

Consider a threshold signing protocol where, in the final online round, each party $P_i$ computes an additive signature share $s_i$ and broadcasts it to the others. When summed together, these produce a valid signature $\sigma = \sum_{j=1}^{t+1} s_j$. This signature can then be submitted on-chain as a means of withdrawing funds out of `mixAccount`, as part of the current mixer protocol.

Consider a case where $P_2$ deposits `1 coin` into `mixAccount` prior to the others. If $P_1$ (who hasn't yet made their deposit) has already received all additive shares $[s_{1j};\ j = 1..t+1]$ needed to construct a valid signature $\sigma_1$, it would be able to steal the `1 coin` deposited by $P_2$ and run away.

**Naive approach:** Enforce the order of deposits (same as the order of nonces), which will obviously break any anonymity properties of the mixer.

**Approach taken:** Require all parties to lock (Phase 3) their coins prior to the final round of signing (Phase 4), so that when valid signatures $\sigma_i$ are constructed, no single party will be able to withdraw unfairly.

> **Note:** the approach above is prone to griefing (e.g. by not revealing partial sigs, the griefer prevents submitting txns in the order of nonces). The way to avert this is by running the last round of signing in parallel for all withdrawals. Once all parties have locked, they will broadcast vectors of partial sigs $[s_{i1}, s_{i2}, \dots]$.

### Rogue Sets (Sybil Attack)

Consider a case where $n-1$ parties collude in creating a "rogue set". They then wait for one honest party that wishes to mix `1 coin` with them. The parties honestly run the Keygen ceremony and signing, and all deposit funds into `mixAccount`.
Then (or in parallel with the "honest" signing) they use an alternative implementation of the TSS to create a valid signature with nonce=0 for a transaction that withdraws all funds from `mixAccount` into their malicious account. They then front-run honest protocol execution and submit the malicious signature before the "honest" one (also with nonce=0).

> **Note:** the obvious mitigation is to never write key shares to disk, or to keep the file locked for the entirety of the mix. But since the goal of the current protocol is to remain permission-less, we cannot assume anything about the hardware (e.g. oblivious RAM). Hence, we should consider the possibility of side-channel attacks that make key exfiltration entirely possible (see [this](https://hal.archives-ouvertes.fr/hal-03176249/document) survey for details).

Malicious parties will be able to pull off this attack in any group where they hold a signing majority, i.e. $t+1$ parties. And if there aren't enough honest users on the network, this could happen quite easily. An adversary could even spawn a large number of dishonest "shadow" nodes to mount a Sybil attack. At the same time, it is impossible to tell whether some group is honest before starting the mix. The main challenge is that a protocol meeting any reasonable degree of privacy cannot be guaranteed collusion-free.

Despite the severity of this threat and the absence of a single, foolproof way to protect against it (common for distributed systems), we can employ a layered defense approach to tackle this problem from many angles. Following are some measures that could be put in place (as a rule, the more defense layers, the higher the overall security, though some layers may have a bigger impact than others):

#### **Defense layer №1: Random mix set allocation**

- Selecting parties (from a large pool of players) into mix sets randomly should make it harder for an adversary to coordinate malicious nodes during the computation.
- Note: this approach is only "truly" efficient when parties have ephemeral identities that are created at the start of the mix and abandoned after.
    - Currently, `libp2p` doesn't explicitly support [ephemeral peer ids](https://github.com/libp2p/libp2p/issues/37); however, the peer ID is derived from the node's private key, so any node can simply generate a new private key and get a new ID.
- Some considerations should be taken to prevent collusion through side-channel communication, e.g. an adversary forking the implementation and adding custom channels for identifying corrupted nodes.
    - Since an attacker would have to establish malicious connections through trial and error, any attempt to negotiate an unknown protocol should be seen as a red flag and lead to remote peer blocking.
    - Network-layer privacy may also play an important security role in preventing peers from establishing connections through arbitrary networking solutions.
    - Note: there would still be a possibility for nodes to communicate in a subliminal (aka steganographic) fashion. This could be remedied by introducing a communication mediator that detects unusual messages [\[KL08\]](https://cyber.biu.ac.il/wp-content/uploads/2016/06/533.pdf) or adding PoW-like challenges [\[SLZ20\]](http://eprint.iacr.org/2020/497).

#### **Defense layer №2: Proof of Minimal Capital**

- To make Sybil attacks economically less viable to pull off, a requirement can be enforced for each computing party to prove they own a unique address with a sufficient amount of coins (at least the mix amount, possibly more, akin to over-collateralization).
- One method this could be done with is zk-SNARKs: creating a Merkle proof of a certain balance at a certain account at a certain block, without revealing the balance or the account. This can be done since state roots are publicly available.
However, since state roots are not directly accessible via the EVM (see the [docs](https://docs.soliditylang.org/en/v0.8.13/units-and-global-variables.html#block-and-transaction-properties)), there would need to be some oracle contract that relays them.
- Another method that could be used is ring signatures. For example, the prover wants to prove they have at least 1 ETH. They would simply find `n` other public keys with >=1 ETH and then sign a ring signature with those public keys and their own account's private key. The verifier would then verify the ring signature and check that all members of the ring have a >=1 ETH balance.

#### **Defense layer №3: Contract-facilitated fraud prevention**

- Inspired by Optimistic Rollups, we could use a smart contract to incentivize honest behaviour in the network.
- To join the network, you would need to lock some collateral in a smart contract. If some node performs dishonest or malicious behaviour, such as going offline after keygen but before locking, or signing a malicious withdraw message, then a "fraud proof" can be submitted to the contract in the form of the messages that were signed by the bad node.
- The contract will then verify these messages and, if the node did in fact act badly (going offline, signing malicious messages), their collateral will be slashed and redistributed to any parties that were victimized.
- With this method, privacy can be preserved on-chain, as the account that locks with the collateral contract doesn't need to be the same one that participates in the mix. The node would simply need to provide a signed message from the account that locked, showing that they do have collateral locked up. This way, users still have plausible deniability; even by collateralizing, there is no way to prove they actually participated in a mix.
- Additionally, fraud proofs could be submitted by any user, not necessarily the one affected.
- The user would be able to withdraw from the contract at any time, but they would no longer be able to participate in mixes, and would need to re-collateralize to do so.
- The downside is that there is now an on-chain component, potentially reducing privacy and re-deployability to chains that use different VMs. It also requires users to have some additional amount of funds they can lock for the duration of the mix.

#### **Defense layer №4: Introduce semi-trusted (incentivized) nodes**

- A special role in the network could be given to "guard" nodes, which "stake" a certain amount of coins (more than the join collateral) and can be slashed if misbehavior is proven.
- The roles that these nodes could take are as follows:
    - act as "dishonesty diffusion", lowering the chance of a colluding majority appearing in the mix set;
    - act as a communication mediator in a star topology, preventing subliminal collusion;
    - act as a "ballot box", e.g. for submitting withdraw addresses without revealing their connection to the participating user.
- Trust can be further minimized if special hardware (Intel SGX) is enforced. However, this would inevitably hurt decentralization.
- Note: although such an approach is a sort of "low-hanging fruit" that could significantly simplify implementation, the underlying incentivization game is so easy to get wrong that we should try avoiding it at all cost.

### Participants backing out (Griefing)

There is a chance that, after Phase 2, users simply back out of the mix before locking any funds, while still having taken part in the key generation ceremony. This is because at that point there is no incentivization to follow through. If a significant number of users back out, the threshold for signing may never be reached, so all users should exit at that point.
However, there is no way from a single node's point of view to know whether others have backed out before they lock, so there is a chance they might lock their funds while too many others have left, and their funds become locked forever. If only a smaller number of participants leave, then the mix can still continue, but with a degraded anonymity set, i.e. lower than initially agreed upon in Phase 1.

This problem is somewhat related to the "rogue sets" problem: instead of funds being stolen, they are simply griefed.

One way to analyze this issue is through the "simulation proof" technique. Consider an "ideal world" where parties simply send signed deposit transactions to a "trusted" party, who promises to submit them on-chain only if enough of them are received. A protocol is secure if any real-world adversary cannot do more harm (prevent parties from proceeding with the mix with all needed funds locked, or leak their private inputs) than is possible in the ideal world. Since it is the parties who choose whether to send transactions in both the real and ideal worlds, we only need to make sure that they appear on-chain in an "all-or-nothing" fashion.

#### **Option 1: Contract-facilitated optimistic liveness**

- Assuming that "contract-facilitated fraud prevention" is adopted as one of the defense layers against Sybil attacks, we can also rely on its slashing mechanism to incentivize proper liveness.
- The way someone would accuse a party of going offline before depositing is by submitting the misbehaving party's signed proof of collateral (recall how parties authorize themselves to access mix sets) along with a proof that the accuser themselves deposited to the mix.
- Before slashing, there must be a time window allowing appeals against a false accusation. A falsely accused party would simply reveal the transaction showing that they did in fact deposit into the mix.
- To disincentivize false accusations, the contract may require locking a small amount that, if the accusation is proven, will be refunded along with compensation.

#### **Option 2: Asymmetric threshold decryption**

- A two-round subprotocol that builds upon the previously performed Keygen ceremony and relies only on the security of the Paillier threshold cryptosystem.
- *Round 1:* Each party encrypts their deposit transaction $m$ with the public key $pk$ generated during Phase 2. It then broadcasts this ciphertext $ct$ to all other parties. If some party refuses to broadcast or fails to do so, the protocol aborts.
- *Round 2:* Parties use their key shares $sk_i$ to compute partial decryptions $\alpha_i$, which they then broadcast to each other. The fairness of this step relies on the cryptographic guarantee that at least $t+1$ parties are required to act honestly for everyone to obtain enough $\alpha_i$ to decrypt the full message $m$ through homomorphic addition $m = \sum_i^{t+1}\alpha_i$.
- *Final:* Each party is now able to submit all deposit transactions on-chain.
    - Whether it is a coordinated effort (a leader submits for all) or individual (each party submits its own tx) isn't important for this method.
- This seemingly redundant interaction is only there so parties can be sure that, even if a portion of them goes offline, those still online will be able to lock funds on their behalf and proceed with the mix.
- The downside is that there is no obvious way to preserve the privacy of deposit addresses, as each ciphertext is linked to its sender.
    - Network-level privacy could come in handy here. For example, when Tor or a gossip transport is used, recipients won't know whether the sender is the originator of the message or simply relaying it.
- But even in the worst case, when no sender obfuscation mechanism is available, this still isn't too bad, since there will be no public record of these connections, and participating users are interested in keeping the anonymity set untraceable.
- Potentially some other cryptographic tricks may help here, e.g. secret shuffles \[[AW07](https://www.freehaven.net/anonbib/cache/adida07.pdf), [Groth'05](https://eprint.iacr.org/2005/246.pdf)\], private set operations [\[KS05\]](https://www.iacr.org/archive/crypto2005/36210235/36210235.pdf), [oblivious transfer](https://en.wikipedia.org/wiki/Oblivious_transfer), etc.
- More details about the Paillier threshold cryptosystem can be found [here](https://eprint.iacr.org/2019/1136.pdf).

#### **Option 3: Fair exchange**

- Each party broadcasts its deposit transaction over a "fair exchange" channel that guarantees that either all parties receive the transactions or none of them obtains anything useful.
- Note: achieving complete and efficient fairness without a trusted third party (TTP) is shown to be impossible, but it is possible to have an optimistic protocol, where the TTP is involved only when a violation of fairness occurs [\[KK15\]](http://eprint.iacr.org/2015/064).
- Similarly to threshold decryption above, fair exchange cannot guarantee privacy for user inputs out of the box.

#### **Option 4: Incentivized nodes**

- Have incentivized nodes act as a "trusted" party implementing a "ballot box", to which parties can send deposit transactions without revealing the connection to their identities.

## Performance

The inevitable trade-off for MPC is between computation scale and communication complexity. As the number of nodes $n$ increases, so does the communication complexity: with all-to-all broadcast rounds, each round requires $O(n^2)$ messages in total, and the time taken to complete the protocol grows with it. While usually more parties lead to higher security guarantees, in the current case they also mean stronger anonymity.
That is, the bigger the anonymity set, the less likely anonymity will be compromised for each individual participant.

The communication overhead is jointly defined by the number of messages peers are required to send in each round of interaction, the number of rounds, and the number of connections each peer needs to keep open. Operations in MPC usually take the form of broadcast (all-to-all) or point-to-point (1-to-1) messages.

### Gennaro & Goldfeder '20

For the most commonly used protocol - ["GG20"](https://eprint.iacr.org/2020/540) - some rough performance estimations were conducted by the protocol designers and independent researchers ([report](https://is.muni.cz/th/fuwu0/thesis_Archive.pdf)). Their observations state that for the Keygen ceremony the increase in time is linear (about 1109 ms for every four parties), while distributed signing follows a low-degree polynomial trajectory.

A larger-scale and seemingly more realistic analysis was published by the [THORChain](https://twitter.com/THORChain/status/1230031671015010304?s=20&t=39IcjsjM5OvGOFCeR8EDVw) team. Their nodes use a fork of [tss-lib](https://github.com/bnb-chain/tss-lib), and their numbers are as follows:

![](https://i.imgur.com/bJ1ybF3.png)

### Gągol et al. '20

The protocol above is more of an industry standard, but it is not the only one. One recent contender - ["GKSS"](https://eprint.iacr.org/2020/498) - shows good promise in terms of bandwidth and running time.

![](https://i.imgur.com/5TWG0iY.png)

## Other Challenges

### How to choose the threshold value?

A powerful lever made possible by threshold cryptosystems is the ability to perform reactive risk minimization by tweaking the threshold requirement, i.e. the value of $t$. Yet, it is not entirely obvious how to determine this value and what other parameters should contribute to it.
Given the trade-offs described [here](#Fault-Tolerance) and the identified threat models, in particular the ["rogue sets"](#Rogue-Sets-Sybil-Attack) attack, there is a clear need to balance the risks of node collusion and of users losing funds due to network faults.

As proposed above, the backbone of our defense model relies on making collusion and Sybil attacks economically (or computationally) impractical to carry out. The main parameters derived from such an approach are the collateral amount and the proof of minimal capital requirement $PoMC$. The former is a static value imposed once to join the network, whereas the latter can vary from mix to mix depending on the mix denomination (e.g. a `1 coin` mix set requires $PoMC$ to be `1.5 coin`). For simplicity, we will only consider the collateral amount in further explanations.

When these parameters are known in advance, one can simply calculate the threshold value such that $(t + 1) \cdot collateralAmount > n \cdot mixAmount$, where $mixAmount$ is the current mix denomination. For example, with $n = 10$, $mixAmount = 1$, and $collateralAmount = 2$, any $t \geq 5$ satisfies the inequality. With such a requirement, any adversary hoping to control the majority of parties in the set risks losing much more collateral than it can possibly gain from any malicious activity.

### The Oracle problem with "fraud proofs"

Consider a deployed smart contract that handles fraud prevention via slashing. Consider it having a Merkle tree structure where hashed user identities are written once they make a collateral deposit and removed after withdrawal. In case some computing party misbehaves, any victimized (or other) users will be able to submit a "fraud proof".

**Challenge:** Such a proof has to be binding with regard to both the on-chain identity (the one that locked collateral) and the off-chain one (the one that participates in the MPC).
In other words, the smart contract expects whistleblowers to verifiably link misbehavior committed off-chain to the identity stored inside the Merkle tree; otherwise, anyone could abuse the fraud prevention itself and steal honest parties' collateral. This poses a serious challenge: not only solving a rather classical "Oracle problem", but also preserving privacy while doing so.

> **Note:** this wouldn't be a problem if users deposited to the mix from the same address they locked collateral with. Such use, however, leaves unwanted traces, so we shouldn't assume it and should possibly even prohibit it.

**Solution:** One such link can be established using `libp2p` peer IDs and the private keys backing them. The protocol can enforce that each node uses its `libp2p` key to sign both the proof that they have collateralized and the proof that they deposited into the mix. Note that this does not require users to lock collateral and deposit to the mix from the same address, only to sign proofs of those operations with the `libp2p` key. That way, proofs are easily linked to the same peer, but this connection is recorded on-chain only in the case of disputes.

_Extension:_ A possible improvement in terms of privacy could be made by doing "fraud proofs" with ZK. For instance, to prove locked collateral, a user would construct a proof of Merkle tree inclusion with a corresponding circuit and path elements to the valid root. That way, the proof reveals nothing off-chain (between users in the mix) or on-chain (in case of disputes). Existing ZK signaling solutions like [Semaphore](https://medium.com/coinmonks/to-mixers-and-beyond-presenting-semaphore-a-privacy-gadget-built-on-ethereum-4c8b00857c9b) can be leveraged for elegant authorization, but we won't get into the details here.

### Nonce and ordering selection

Since account-based blockchains have a transaction nonce associated with each account, the mix account must have some ordering of the withdrawal transactions by nonce.
This poses a few problems: firstly, how to select the ordering of withdrawals; secondly, ensuring that each withdrawal happens in an agreed-upon time frame, as otherwise all withdrawals can be delayed (potentially indefinitely).

For the selection of withdrawal ordering, a few methods can be considered:

- Lexicographical selection. In the first phase of the protocol, once each party sees all the other participating parties, the nonce can be selected by lexicographical ordering of peer IDs or withdrawal addresses.
- Time-based selection. In the first phase of the protocol, parties also agree on when they will make their withdrawal transactions. The party that wants to withdraw first has the lowest nonce, and the one that wants to withdraw last has the highest.
    - This method may also help with timing analysis, since otherwise a mix will have an identifiable fingerprint where many accounts send to one address within a certain amount of time, and then that account sends many transfers out to fresh accounts within a certain amount of time.
- Random selection. This may be harder to implement, as each party needs to ensure they do not have the same nonce as any other party.

To ensure that each withdrawal happens in a timely manner, the protocol requires that each node broadcasts their final partial signatures for all the withdrawals in one message. This way, each node is able to compute every party's withdrawal transaction independently. Thus, if one party does not withdraw in a timely manner, another node can simply submit the transaction for them.

# Privacy Analysis & Considerations

On-chain, all that is seen is `n` accounts transferring funds to some account, then this account transferring funds out to other (new) accounts. This is scriptless, so it's less traceable than a solution like Tornado Cash, which requires a smart contract: a CEX can easily choose to ban all accounts that have ever been traced back to using the Tornado Cash contract.
If some parties in the mix happen to link their withdrawal accounts back to their deposit accounts, then the anonymity set decreases by 1. Tornado Cash also suffers from this problem, and it isn't really preventable except by improving documentation and common knowledge around privacy.

However, since there's no contract on-chain, it's harder to determine whether the mix transactions were actually part of a mix or just "normal" value transfers. Additionally, an implementation of this should consider network-level privacy, such as Tor, I2P, or a mixnet.

## Anonymity set considerations

"Anonymity loves company", so we should aim to have as large an anonymity set as possible without significantly degrading performance. We can look to existing anonymity solutions for an acceptable set size.

- Monero has a current ring-signature size of 11 and will soon upgrade to ~30.
- Tornado Cash has anonymity sets on the order of 10^2 - 10^3. For example, the [Tornado Cash 10 (10 ETH) contract](https://etherscan.io/address/0x910Cbd523D972eb0a6f4cAe4618aD62622b39DbF) currently holds 16,810 ETH at the time of writing, meaning it has an anonymity set of 1681. The [Tornado Cash 0.1 contract](https://etherscan.io/address/0x12d66f87a04a9e220743712ce6d9bb1b5616b8fc) currently holds 350.5 ETH, so it has an anonymity set of 3505.

## Proving source of funds

One concern with mixers is the occasional need to prove the source of funds, for example when moving them through a CEX or similar. Tornado Cash provides this capability by allowing deposits and withdrawals to be linked. For this proposal, since there is no on-chain contract component, users have plausible deniability of using a mixer. However, in case they do need to prove linkability for some reason, they can simply sign a message with both their deposit and withdrawal addresses to link them.
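The linkability proof above can be sketched as follows. This is a toy Python sketch using a textbook Schnorr signature over a tiny group, for illustration only; a real deployment would sign with the chain's actual signature scheme, and the group parameters and message format here are all assumptions.

```python
import hashlib
import secrets

# Toy Schnorr signatures over a small prime-order subgroup, used only to
# illustrate the "sign one statement with both account keys" linkage proof.
# P, Q, G are illustrative toy parameters (P = 2Q + 1; G generates the
# order-Q subgroup). Real deployments use the chain's signature scheme.
P, Q, G = 2039, 1019, 4

def h(*parts):
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    sk = secrets.randbelow(Q - 1) + 1
    return sk, pow(G, sk, P)

def sign(sk, msg):
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)                      # commitment
    return r, (k + h(r, msg) * sk) % Q    # response s = k + e*sk

def verify(pk, msg, sig):
    r, s = sig
    return pow(G, s, P) == (r * pow(pk, h(r, msg), P)) % P

# The user signs one linkage statement with BOTH account keys.
dep_sk, dep_pk = keygen()  # deposit account
wdr_sk, wdr_pk = keygen()  # withdrawal account
statement = f"deposit {dep_pk} and withdrawal {wdr_pk} belong to the same user"
proof = (sign(dep_sk, statement), sign(wdr_sk, statement))

assert verify(dep_pk, statement, proof[0])
assert verify(wdr_pk, statement, proof[1])
```

Because the statement is signed by both keys, a verifier (e.g. a CEX compliance desk) learns the link only when the user voluntarily hands over the proof; nothing on-chain reveals it.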
## Fingerprintability of mix accounts

Although there is no smart contract deployed, mixes can still be fingerprinted, as they will always look somewhat unique. What someone analyzing the chain will see is many accounts sending the same amount of coins to one account, then this account transferring the same amount to many fresh accounts shortly afterwards.

There is not a lot of room for timing-based improvements here, as on a blockchain with account nonces, each withdrawal needs to happen in a timely fashion, otherwise all other users' withdrawals will be held up. A potential mitigation could be having some set of users agree to withdraw at a later time, so one batch could withdraw immediately and another batch one day later, for example. However, this would require more agreement between parties.

Still, even if mixes can be fingerprinted to some extent, there is still plausible deniability of mixing, which is not possible at all with explicitly on-chain mixers.

## Fingerprinting through mix values

Since every mix set can determine its own value to mix, this might result in the overall network's anonymity set becoming divided between many different sets of various values. For example, Tornado Cash has 0.1, 1, 10, and 100 ETH pools. If there were also 3, 6, 9, 33, etc. ETH-sized pools, then the users would be divided amongst more pools and the anonymity set of each would decrease.

## Privacy with an active adversary

An active adversary is a node or set of nodes that actively takes part in the mix, as opposed to a passive adversary, which only monitors the network. The protocol needs to preserve privacy in the presence of an active adversary, as we can assume there will be adversarial mix nodes. There are two main sets of information an active adversary could obtain:

- the deposit address corresponding to a withdrawal
- the IP addresses of nodes participating in the mix

The first could be obtained depending on how the protocol is implemented.
If we require users to reveal their deposit address to prove they deposited, then an adversary can trivially link accounts. However, if we implement a method for proving a deposit into the mix account in zero knowledge, then an adversary will not be able to link the accounts.

> One method for proving a deposit into the mix account is to use linkable ring signatures. A user can create a ring signature using their deposit account's private key together with the public keys of the other users who have deposited. They thereby prove that they own at least one of the accounts that deposited, without revealing which one. The linkability property makes it possible to check whether two ring signatures were constructed with the same private key, without revealing the signer. This prevents double-signing and hence double-spends.

To obfuscate the IP addresses of nodes participating in the mix, various network privacy methods can be considered:

- VPN: users could be encouraged to use a VPN when possible. However, their VPN provider would still see their network traffic.
- Tor: users could be encouraged to route their connection to the network through Tor.

Other, more heavyweight solutions such as a mixnet could also be used, but those are out of the scope of the mixing protocol.

## Benefits and drawbacks over existing solutions

Benefits:

- Generic: the protocol can be deployed on any blockchain, as long as software supporting that chain's specific curve and signing algorithm is implemented.
- No on-chain contract, so the mixer is harder to censor and analyze, and users gain plausible deniability of having used it.
- No need for withdrawal relayers (as in Tornado Cash), since withdrawals come directly from the pool's account.

Drawbacks:

- Withdrawals must happen in a timely fashion (if withdrawal txs include nonces), as otherwise other users are held up from withdrawing.
  However, this leads to mixer transactions being identifiable, as every mix account will see many deposits of the same amount coming in and many withdrawals of the same amount going out shortly after.
  - This is a drawback compared to Tornado Cash, where you can leave your funds in the pool for an indefinite period of time.
- Requires trusting `t` of `n` of the other mix participants.
- Requires that `t` of `n` mix participants remain online for the duration of the mix.
- Requires all users to mix the same amount of funds.
  - Tornado Cash currently has fixed-size pools, but Tornado Cash Nova (v2) allows for variable amounts.

## Protocol improvements

### Variable-size mix amounts

The current protocol requires every user to mix the same amount within a given mix, although each mixing set can agree on its own mixing value. Ideally, each user would be able to mix any amount they wish within any mixing set. Since most L1s provide no confidentiality of values, this poses a problem: any unique amount can trivially have its deposit and withdrawal addresses linked.

### Cross-mix accounts with key resharing

Consider a threshold cryptosystem with support for "key resharing", such that parties from the old set can rotate key shares with new participants. The mixer can leverage this by reusing the `mix account` from one mix to the next: for example, one batch of users withdraws first, while others wait for newly deposited coins. To an outside observer this behavior looks chaotic and can only be explained as one "trusted" party mixing coins for different groups of users, which is the end goal of this protocol. This could make the whole process even less traceable, but it would also drastically complicate the security design, trust model, and implementation, so it should not be considered for the first version.
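Returning to the variable-size mix amounts problem above: with transparent on-chain values, any amount that appears exactly once among deposits and exactly once among withdrawals links the two addresses immediately. A minimal sketch (all addresses and amounts below are made up for illustration):

```python
# Hypothetical deposits and withdrawals observed on-chain for one mix.
# The 1.0-coin entries are ambiguous, but the unique 2.71828 amount
# trivially links its deposit and withdrawal addresses.
deposits = {"0xA1": 1.0, "0xB2": 1.0, "0xC3": 2.71828}
withdrawals = {"0xD4": 1.0, "0xE5": 2.71828, "0xF6": 1.0}

def link_by_amount(deps: dict, wdls: dict) -> list:
    # A chain analyst's trivial heuristic: pair every deposit with the
    # withdrawals of equal value; a unique match is a confirmed link.
    links = []
    for d_addr, amount in deps.items():
        matches = [w for w, w_amt in wdls.items() if w_amt == amount]
        if len(matches) == 1:
            links.append((d_addr, matches[0]))
    return links

print(link_by_amount(deposits, withdrawals))  # [('0xC3', '0xE5')]
```

This is why equal amounts (or confidential values) are a hard requirement for unlinkability within a mix.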
### State-based MPC

Traditional MPC protocols assume a synchronous communication model and are structured in rounds, after which parties exchange messages in a timely manner. Although this model has proved practical for real-world MPC applications, it drastically limits scalability.

One promising alternative is _state-based computation_ operating under the partial synchrony assumption. The idea is to structure the MPC on top of Byzantine fault tolerant state replication such as [Tendermint BFT](https://github.com/tendermint/tendermint). This allows larger-scale computation with parties that progress at their own pace and could potentially scale to thousands of participants. Furthermore, this approach also addresses two issues: 1) Sybil attacks and 2) crash recovery for parties during the execution. One known line of work in this direction is [ZenGo-X/White-city](https://github.com/ZenGo-X/white-city).
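The state-based idea can be sketched as follows, assuming an underlying replicated, totally ordered message log (as a BFT system like Tendermint would provide). Parties append round messages whenever they are online, and the computation advances once a threshold of messages for the current round is present; no synchronous round barrier is needed. All names and the message payloads here are illustrative, not part of any real MPC library:

```python
class StateBasedMPC:
    """Toy model of advancing an MPC round-by-round over a replicated log."""

    def __init__(self, threshold: int, rounds: int):
        self.threshold = threshold      # parties needed to advance a round
        self.rounds = rounds            # total rounds in the protocol
        self.log = []                   # replicated, totally ordered log
        self.current_round = 0

    def submit(self, party: str, round_no: int, payload: str):
        # Parties submit at their own pace; ordering and durability are
        # provided by the replication layer, so crashed parties can
        # recover by replaying the log.
        self.log.append((party, round_no, payload))
        self._try_advance()

    def _try_advance(self):
        # Advance as soon as enough distinct parties contributed to the
        # current round -- no timeout or synchronous barrier.
        parties = {p for (p, r, _) in self.log if r == self.current_round}
        if len(parties) >= self.threshold and self.current_round < self.rounds:
            self.current_round += 1

mpc = StateBasedMPC(threshold=2, rounds=2)
mpc.submit("alice", 0, "commitment")
mpc.submit("bob", 0, "commitment")   # threshold reached -> round 1
mpc.submit("alice", 1, "share")
mpc.submit("carol", 1, "share")      # bob is offline, carol fills in
print(mpc.current_round)  # 2: computation complete without bob
```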