*Note that PoR (proof of reserves) is the term used in the rest of this doc to mean 'proof of solvency', see [this doc](https://hackmd.io/p0dy3R0RS5qpm3sX-_zreA) for more details on nomenclature*. Many of the attacks described here are inspired by [Chalkias, Chatzigiannis, Ji - "Broken Proofs of Solvency in Blockchain Custodial Wallets and Exchanges." (2022)](https://eprint.iacr.org/2022/043.pdf)
## Proof of Assets (PoA) - attacks by the exchange
Common attacks that exchanges can carry out on PoA protocols:
- **Collusion**: 2 or more exchanges sharing funds/addresses in their PoAs.
- **Friend attack**: exchanges using a friend's funds in their PoA. A friend is either not an exchange, or an exchange that does not do PoR (this distinguishes the attack from collusion).
Note the friend attack can happen in 2 ways:
1. the friend sends funds to the exchange
2. the friend shares a signature with the exchange
The first case can be defended against by requesting the PoA for a point in the past, because the exchange cannot change its past balances. The second case is harder to defend against.
### Public list of addresses
All exchanges that currently do some form of PoR do their PoA by making their addresses public. There are 2 ways that exchanges prove ownership of the addresses they claim are theirs:
1. Producing signatures of some arbitrary data
2. Using the same set of addresses for a long period of time
Collusion in either of the above 2 cases is basically impossible, because anyone can check that one exchange's set of addresses has an empty intersection with every other exchange's. And as long as the history of addresses provided for PoA is never erased, anyone can check these intersections at any point after the PoA.
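This intersection check is straightforward to automate. A minimal sketch (exchange names and addresses are made up for illustration):

```python
# Toy check that no two exchanges share addresses in their public PoA lists.
from itertools import combinations

def find_shared_addresses(poa_lists: dict[str, set[str]]):
    """Return every pair of exchanges whose published address sets intersect."""
    overlaps = []
    for (name_a, addrs_a), (name_b, addrs_b) in combinations(poa_lists.items(), 2):
        shared = addrs_a & addrs_b
        if shared:
            overlaps.append((name_a, name_b, shared))
    return overlaps

poa_lists = {
    "exchange_A": {"addr1", "addr2", "addr3"},
    "exchange_B": {"addr3", "addr4"},   # addr3 reused -> possible collusion
    "exchange_C": {"addr5"},
}
print(find_shared_addresses(poa_lists))  # [('exchange_A', 'exchange_B', {'addr3'})]
```

Any non-empty result is evidence of collusion (or at least of shared custody that needs explaining).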
The problem with #1 is that it's easy to share signatures, so the friend attack becomes possible: an exchange can claim ownership of a friend's account without actually having control over it (the friend just produces a signature and shares it with the exchange). The friend attack is not foolproof if PoAs are produced regularly, because if the friend decides at some point to stop being a friend then the exchange will be in trouble.
#2 actually provides better protection against the friend attack because on-chain analysis can be used to check that the same set of addresses are continuously used by the exchange (as long as the addresses are continuously active).
### Keeping addresses private
Protocols like Provisions aim to do PoA while preserving the privacy of the exchange's public keys. In this case the collusion and friend attacks are much easier for the exchange to carry out.
Provisions provides a defense against collusion, but it requires all exchanges to use the Provisions protocol for PoA. Arguably, any defense against collusion would require exchanges to have some form of similarity between their PoA protocols, because there must be some form of uniqueness associated with each exchange's addresses. Note that in Provisions the private keys of the exchange are required as input to the PoA protocol, which is a defense of sorts against collusion, because private keys are generally not something you freely share, even with friends. If the PoA takes signatures as inputs (as opposed to private keys) then collusion becomes easier, because one can freely share a signature without compromising control over the wallet. ECDSA signatures in particular make it very difficult to construct a defense against collusion: the non-deterministic nature of ECDSA makes it impossible to extract uniqueness from the signature alone (see [this doc](https://hackmd.io/iezJ3Z0dQQmgHO3RicNxzg) for more details).
The friend attack is much harder to defend against in general. In the Provisions PoA protocol the only defense is the unlikelihood of someone sharing their private key with an exchange. In the case of signature-based PoAs it is very easy to cheat using the friend attack.
## Proof of Assets (PoA) - attacks against the exchange
For public PoA protocols (#1 above) there are no clear attacks that would be specific to PoA. Example of a non-specific attack: breaking ECDSA would obviously be an issue but this is general and not related solely to PoA.
For private PoA protocols any method that could extract the public key, or the number of public keys, or the total asset sum, can be counted as an attack.
### Multiple PoAs could leak public addresses
Suppose the anonymity set for the first PoA is $\{A, B, C\}$ where $A$ and $B$ are addresses controlled by an exchange. Consider the following 2 scenarios for the second PoA:
1. Funds in $A$ & $B$ don't move, but $C$ moves to $Z$. For the 2nd PoA the anon set must include $A$ & $B$. If $C$ is not included (because it has 0 balance or something) and some other address is used (to keep size at 3) then one has some evidence that $A$ & $B$ belong to the exchange.
2. Funds in $A$ and $B$ are combined and go into $X$. We have to include $X$ in the 2nd anon set. If $A$ & $B$ are removed from the set and some other, unconnected address $D$ is added (to keep size 3) then it's fairly evident that $A$, $B$ & $X$ belong to the exchange.
#1 can be generalized by saying that the intersection of all PoA anon sets could leak information about an exchange's addresses. One way to defend against this is to have every $N^{th}$ anon set be a superset of the $(N-1)^{th}$ set.
#2 can be generalized by saying that an exchange will always have to include a new address in the 2nd anon set if that address received funds from addresses in the 1st anon set. One way to defend against this is to apply the same drop/add logic to all addresses in the anon set.
One can defend against the above 2 by doing the following:
1. For the 1st PoA choose any random padding addresses for the anon set, but only ones that have balance higher than $x$.
2. For the 2nd PoA use all addresses from the 1st anon set that have balance above $x$, as well as all addresses that were recipients of transactions (of amount $>x$) made by addresses in the 1st anon set.
Ideally one does not want the anon set size to grow uncontrollably, which is why the bound $x$ is introduced.
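The update rule for the 2nd anon set can be sketched as follows (the threshold $x$, balances, and transactions are toy data; address names are hypothetical):

```python
# Sketch of the anon-set update rule: carry forward previous-set addresses
# with balance above x, plus recipients of transactions of amount > x sent
# from previous-set addresses.
def next_anon_set(prev_set, balances, outgoing_txs, x):
    kept = {addr for addr in prev_set if balances.get(addr, 0) > x}
    recipients = {to for (frm, to, amt) in outgoing_txs
                  if frm in prev_set and amt > x}
    return kept | recipients

# Scenario #2 above: funds in A and B are combined into X; C is unrelated.
prev = {"A", "B", "C"}
balances = {"A": 0, "B": 0, "C": 5, "X": 10}
txs = [("A", "X", 4), ("B", "X", 6)]
print(next_anon_set(prev, balances, txs, 1))  # {'C', 'X'}
```

Because the drop/add logic applies uniformly to every address in the set, an observer cannot tell which of the carried-forward or newly-added addresses belong to the exchange.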
## Proof of Liabilities (PoL)
Suppose our PoL is a summation Merkle tree.
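A minimal sketch of such a tree follows; each node carries a hash and a balance sum, and a parent commits to its children's hashes together with their combined balance. The leaf encoding and odd-node handling are assumed conventions for illustration, not a specific standard:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(user_id: str, balance: int):
    """A leaf commits to the user's ID and balance."""
    return (h(user_id.encode() + balance.to_bytes(8, "big")), balance)

def parent(left, right):
    """A parent commits to both child hashes and the summed balance."""
    (lh, ls), (rh, rs) = left, right
    total = ls + rs
    return (h(lh + rh + total.to_bytes(8, "big")), total)

def build_root(leaves):
    nodes = list(leaves)
    while len(nodes) > 1:
        nxt = [parent(nodes[i], nodes[i + 1])
               for i in range(0, len(nodes) - 1, 2)]
        if len(nodes) % 2:
            nxt.append(nodes[-1])   # promote odd node unchanged
        nodes = nxt
    return nodes[0]

users = [("alice", 10), ("bob", 25), ("carol", 5)]
root_hash, total_liabilities = build_root([leaf(u, b) for u, b in users])
print(total_liabilities)            # 40: the exchange's claimed liabilities
```

The root is what the exchange publishes; each user receives an inclusion proof (the sibling hashes and sums on their leaf-to-root path) and checks that their balance is represented and that the sums add up.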
### Dispute resolution
If the exchange holds the tree and only gives inclusion proofs to authenticated users then it's possible that the exchange simply refuses to give a certain user their inclusion proof (perhaps because they were left out of the tree). The idea is that the user would make a public statement about this, but why would anyone believe that someone claiming to be a user of an exchange actually is one? The process would need some rigor, otherwise anyone who is not a user would be able to claim that they are and cast a bad light on an honest exchange.
Another user-exchange dilemma occurs if the user's balance is misrepresented in the PoL. How can the user convince the public that their balance is correct and the one given by the PoL is incorrect?
One method to help solve these disputes would be to have the exchange sign user balance or transaction data. If the user can present a signed balance sheet to the public, then the exchange would have to answer by giving an inclusion proof for that user that matches the balance. Without the signed sheet nobody needs to believe the user (so one cannot give a bad name to an honest exchange) and without the inclusion proof nobody needs to believe the exchange (so the exchange cannot exclude a user). The problem is not solved entirely because the exchange could just refuse to give a user a signed balance, but this is an arguably better situation because signed balances can be requested asynchronously and as frequently as the user wants. This makes it harder for the exchange to refuse 1 particular signed balance request and get away with it because the user can present previous signed balances as evidence that they still have funds with the exchange. You might think that this can be taken advantage of by a malicious user who has just left the exchange and claims to still be a part of it, presenting past signed balances. But in this case the exchange can give evidence of a bank transfer or crypto transaction that removed the funds from the exchange.
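The signed-balance flow above can be sketched as follows. HMAC is used here purely as a stand-in for a real digital signature scheme (in practice the exchange would sign with an asymmetric key, e.g. ECDSA, so anyone can verify); all names and values are hypothetical:

```python
import hashlib
import hmac
import json

# Stand-in for the exchange's signing key; in reality this would be an
# asymmetric key pair with a public verification key.
EXCHANGE_KEY = b"exchange-signing-key"

def sign_balance(user_id: str, balance: int, timestamp: int) -> bytes:
    """Exchange signs a canonical (user, balance, timestamp) statement."""
    statement = json.dumps(
        {"user": user_id, "balance": balance, "ts": timestamp},
        sort_keys=True,
    ).encode()
    return hmac.new(EXCHANGE_KEY, statement, hashlib.sha256).digest()

def verify_balance(user_id: str, balance: int, timestamp: int, sig: bytes) -> bool:
    expected = sign_balance(user_id, balance, timestamp)
    return hmac.compare_digest(expected, sig)

sig = sign_balance("alice", 100, 1700000000)
print(verify_balance("alice", 100, 1700000000, sig))   # True
print(verify_balance("alice", 999, 1700000000, sig))   # False: misrepresented balance
```

A user holding such a signed statement can publicly dispute a PoL whose inclusion proof shows a different balance, and without it nobody needs to believe the user's claim.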
Note that this defense would work while even keeping the user's balance & ID hidden by using Pedersen commitments for the balance, hash for the ID, and snarks to show the user knows how to construct these cryptographic elements.
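As a toy illustration of the commitment piece, here is a Pedersen-style commitment over $\mathbb{Z}_p^*$. The modulus and generators are arbitrary, insecure choices for the sketch; a real deployment would use an elliptic-curve group with properly generated independent generators:

```python
# Toy Pedersen commitment: C = g^balance * h^blinding (mod p). The blinding
# factor hides the balance; the commitment binds the exchange/user to it.
p = 2**127 - 1        # small Mersenne prime, NOT a secure parameter choice
g, h_gen = 3, 5       # assumed independent generators for the sketch

def commit(balance: int, blinding: int) -> int:
    return (pow(g, balance, p) * pow(h_gen, blinding, p)) % p

# Opening: reveal (balance, blinding) and recompute.
print(commit(42, 7) == commit(42, 7))   # True

# Additive homomorphism, which is what lets committed balances be summed
# up a Merkle tree without revealing them:
print((commit(10, 3) * commit(20, 4)) % p == commit(30, 7))   # True
```

The homomorphic property is the reason Pedersen commitments pair naturally with summation trees: internal nodes can carry the product of child commitments, which commits to the sum of the child balances.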
An alternate defense would be to have the user use TLSNotary while interacting with the exchange's webpage to produce a proof that they were shown a particular balance at a particular time.
### Collusion with a large user
An insolvent exchange could cheat by colluding with a large user. For ease of calculation suppose this user holds half the total liabilities, and suppose the exchange only has half of the assets needed to pay back its liabilities. The exchange can promise this user that they will receive all of their funds back as long as they keep quiet. The exchange then produces a PoR that excludes this user from the PoL. Even if all of the exchange's other users verify the PoL on their side, the exchange will not be caught.
### Uniqueness of user IDs
If user IDs are not verifiably unique then 2 users that have the same ID and balance can be mapped to the same leaf node in the tree, which decreases the total liabilities. Requiring the same balance _and_ ID does make this attack rare but it's still worth noting as a possibility.
### Safely excluding users from PoL
It is up to each of the users to perform verification on their side for the PoL protocol to work. Over many PoLs the exchange may learn which users perform verification and which don't. They can then safely exclude users who do not perform verification without getting caught.
One way to solve this would be to have the tree live in a public space or with a trusted 3rd party. The exchange would then not know which users perform verification. It doesn't solve the problem entirely because the exchange could still rate the likelihood that a user is performing verification based on how active they are on the exchange; example: dormant users may be a good candidate for excluding from the PoL.
### Users with negative balances added to the tree
An exchange could lessen their total liability by adding users with negative balances to the tree. If the liability values in the tree are in plain text then this is easy to detect, as long as the right user does the verification. If the liability values in the tree are encrypted (e.g. Pedersen commitments) then a range proof will have to accompany each inclusion proof, and again it relies on a specific user doing verification in order to catch an exchange cheating in this manner.
The reliance on a specific user catching a negative balance can be done away with by having a range proof for all the leaf nodes in the tree. It does mean the tree needs to be exposed, however. In the case of plain text balances the range proof is simply the leaves themselves, and verification is building the whole tree and checking it matches the root hash committed to by the exchange. In the case of encrypted balances the range proof would have to be some cryptographic protocol (e.g. Bulletproofs for Pedersen commitments). Either way a trusted 3rd party (such as an auditor) would have to perform the verification to avoid leaking private information to the public.
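The auditor-side check for the plaintext case can be sketched as follows (the leaf encoding and odd-node promotion are assumed conventions; user names and balances are made up):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build(leaves):
    """Return (root_hash, total) for a summation Merkle tree."""
    nodes = [(h(uid.encode() + bal.to_bytes(8, "big", signed=True)), bal)
             for uid, bal in leaves]
    while len(nodes) > 1:
        nxt = []
        for i in range(0, len(nodes) - 1, 2):
            (lh, ls), (rh, rs) = nodes[i], nodes[i + 1]
            total = ls + rs
            nxt.append((h(lh + rh + total.to_bytes(8, "big", signed=True)), total))
        if len(nodes) % 2:
            nxt.append(nodes[-1])   # promote odd node unchanged
        nodes = nxt
    return nodes[0]

def audit(leaves, committed_root, committed_total):
    """Rebuild the tree from all published leaves, reject negatives,
    and check against the exchange's committed root and total."""
    if any(bal < 0 for _, bal in leaves):   # the plaintext "range proof"
        return False
    root_hash, total = build(leaves)
    return root_hash == committed_root and total == committed_total

honest = [("u1", 10), ("u2", 20), ("u3", 30)]
root, total = build(honest)
print(audit(honest, root, total))                 # True
print(audit(honest + [("u4", -5)], root, total))  # False
```

Because the auditor sees every leaf, no single user's vigilance is required, at the cost of exposing the tree to that auditor.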
### Hash Collision
If the output of the hash function is truncated to a smaller number of bits, collisions become possible: two distinct users' entries may produce the same hash value. When the exchange performing the PoR discovers such a collision, it can assign both users to the same leaf node. Liabilities that should have been counted twice are then counted only once, under-reporting total liabilities without being detected by any user.
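A quick demonstration of why truncation is dangerous: with only 16 output bits there are 65536 possible values, so a birthday-style search over synthetic user IDs finds a collision almost immediately (the IDs here are made up):

```python
import hashlib

def truncated_hash(user_id: str, bits: int = 16) -> int:
    """SHA-256 truncated to the top `bits` bits."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

# Scan synthetic user IDs until two of them share a truncated hash.
seen = {}
for i in range(100000):
    uid = f"user-{i}"
    t = truncated_hash(uid)
    if t in seen:
        print(f"collision: {seen[t]!r} and {uid!r} share value {t:#06x}")
        break
    seen[t] = uid
```

By the birthday bound a collision is expected after roughly $2^{8} = 256$ IDs, so an exchange with a realistic user base would find many such pairs; the full (untruncated) hash should be used in the leaves.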