The collusion problem of MPC-TLS means we need decentralized nodes that act correctly, and an AVS is required to circumvent it.
We lay out a high-level description of a protocol that circumvents the collusion issue of zkTLS.
Building a general purpose verifiable bridge between web2 and web3 is something the founding team of Opacity has been obsessed with for many years now.
There are generally three ways to verifiably port web2 data:
By far, zkTLS is the most powerful. It allows for extracting data from any existing HTTPS call and doing rich ZKPs on the transcript.
Example use cases:
The use cases range from net new consumer products to oracles with better trust assumptions. We are trying to build a firehose of web2 data into web3 WITHOUT extra trust assumptions. Any web or mobile app is fair game.
HTTPS - HTTP + SSL/TLS
SSL - End-to-end encryption, but on its own it is susceptible to a man-in-the-middle attack.
TLS - A centralized system that solves the man-in-the-middle problem using a certificate authority. This is how your browser knows it's actually talking to your bank; the sketch below shows that check in practice.
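To make the certificate-authority check concrete, here is a minimal sketch using Python's standard ssl module. The hostname example.com is just a placeholder; any HTTPS endpoint would behave the same way.

```python
# Minimal sketch: how a TLS client authenticates a server via certificate
# authorities (the "centralized" trust described above). The hostname is a
# placeholder.
import socket
import ssl

hostname = "example.com"  # hypothetical target server

# The default context loads the OS/browser-style bundle of trusted CA roots.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket performs the TLS handshake: key exchange + certificate check.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        # If we get here, the certificate chained to a trusted CA and the
        # hostname matched, i.e. no man-in-the-middle on this connection.
        print("negotiated:", tls.version())
        print("issued to:", dict(x[0] for x in cert["subject"]))
```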
To understand why zkTLS without trust assumptions has until now been unsolved, we must talk about the details of how SSL/TLS works.
Server has:
Client has:
Steps of SSL/TLS
There are two main reasons zkTLS is difficult:
There are only three architectures to achieve zkTLS:
❌TEE/SGX - Clique
Offload SSL/TLS to a TEE, and have the enclave sign an attestation that the request was encrypted/decrypted correctly.
❌Proxy witness - Reclaim protocol
The client routes the request through a proxy, and the proxy signs a statement of the traffic it saw pass between the client and the server. At scale this architecture will be blocked.
✅MPC - TLSNotary
An MPC node helps the client make the request so that no single party knows the shared secret until the session is finalized. This approach is undetectable by the target server, but it requires garbled-circuit and oblivious-transfer MPC schemes.
In the MPC architecture of zkTLS a fundamental issue arises: the client and the MPC node can collude to reconstruct the shared secret and forge arbitrary proofs from the target server.
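To make the collusion risk concrete, here is a toy sketch (deliberately simplified, not the actual garbled-circuit protocol): the session key exists only as two XOR shares, so neither party can use it alone, but a colluding client and notary can pool their shares and rebuild it.

```python
# Toy illustration of the collusion problem (NOT the real GC/OT MPC-TLS
# protocol): the session key exists only as two XOR shares, so neither the
# client nor the notary alone can use it, but together they can rebuild it.
import os

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two XOR shares."""
    client_share = os.urandom(len(key))
    notary_share = bytes(a ^ b for a, b in zip(key, client_share))
    return client_share, notary_share

def reconstruct(share_a: bytes, share_b: bytes) -> bytes:
    """What a colluding client + notary can do after the session."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

session_key = os.urandom(16)          # stands in for the TLS shared secret
client_share, notary_share = split_key(session_key)

assert client_share != session_key and notary_share != session_key
# Collusion: pooling the two shares recovers the full secret, which would let
# the pair re-encrypt arbitrary "server responses" and forge a transcript.
assert reconstruct(client_share, notary_share) == session_key
print("shares alone reveal nothing; together they recover the key")
```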
In a Shamir secret sharing (SSS) MPC scheme it is very efficient to scale the number of parties involved in the MPC. SSS is what is commonly used in MPC wallets like Lit Protocol. Unfortunately, these schemes are not as effective for MPC-TLS.
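For readers unfamiliar with SSS, here is a minimal sketch over a prime field with illustrative parameters: the secret is the constant term of a random polynomial, any threshold-sized subset of shares reconstructs it, and adding parties is just evaluating the polynomial at more points.

```python
# Minimal Shamir secret sharing sketch over a prime field. Adding more
# parties is just evaluating the polynomial at more points, which is why
# SSS-based MPC scales so easily in the number of participants.
import random

P = 2**61 - 1  # a Mersenne prime, fine for illustration

def make_shares(secret: int, threshold: int, n_parties: int):
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n_parties + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 123456789
shares = make_shares(secret, threshold=3, n_parties=10)
assert reconstruct(shares[:3]) == secret               # any 3 of 10 suffice
assert reconstruct(random.sample(shares, 3)) == secret
```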
SSS-MPC is unfortunately unavailable to us because a cryptographic hash is used to derive the symmetric key(s) from the shared secret. In an SSS-MPC scheme, hashing a value whose pre-image no single party can construct carries extreme overhead; the request may time out before the session is finalized.
Another way is to extend garbled circuits and oblivious transfer to work with more than two parties. Sadly, this scales very poorly and is extremely complicated to engineer securely. Many of the optimizations available for 2-party GC/OT are unavailable for MPC with more parties.
Another approach is to have the user generate multiple proofs from many nodes before a proof is considered valid. Unfortunately, even this doesn't work in the general case.
To explain why, imagine we have a committee of 10 nodes and we want to prove a bank balance. The first 6 proofs can be generated as normal. After the 6th, the user makes a debit card transaction that changes the balance, so the last 4 proofs disagree with the first 6.
This leads to a situation where nodes can get slashed even though they did nothing wrong.
Proof by committee with slashing ONLY WORKS if the user can't change the true value between proofs. Otherwise a malicious user can get honest nodes slashed.
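The sketch below simulates this failure mode under an assumed naive "slash whoever disagrees with the majority" rule; the $100/$60 balances and the slashing rule itself are illustrative, not part of any deployed protocol.

```python
# Simulation of the failure mode above: a naive "slash whoever disagrees with
# the majority" rule punishes honest nodes when the true value changes
# mid-committee. All ten nodes attest honestly here.
from collections import Counter

def run_committee(balance_timeline):
    """balance_timeline[i] is the true balance when node i attests."""
    attestations = list(balance_timeline)          # honest nodes report the truth
    majority_value, _ = Counter(attestations).most_common(1)[0]
    slashed = [i for i, v in enumerate(attestations) if v != majority_value]
    return majority_value, slashed

# First 6 attestations see $100; then the user makes a debit card transaction,
# so the last 4 honest nodes see $60.
timeline = [100] * 6 + [60] * 4
value, slashed = run_committee(timeline)
print(f"accepted value: ${value}, slashed honest nodes: {slashed}")
# -> accepted value: $100, slashed honest nodes: [6, 7, 8, 9]
```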
In the case of a price feed, we run into issues because the true value changes too quickly for proof by committee to work. This presents significant issues for zkTLS-based oracles.
Here we present the Opacity team's main innovation in the zkTLS space: an innovation that allows us to trust arbitrary proofs with only a minimal number of proofs.
Fundamentally, it is a subtle shift in perspective. We are confident it's impossible to solve the collusion problem directly within the time constraints of a good user experience. However, if you have a reliable way to detect when someone is trying to collude, then you can solve the collusion problem indirectly.
The solution is simple, and has four parts:
A key part of the solution is a mapping between a wallet address and a web2 account ID. A UUID is the most common identifier, but not universal; Twitter, for example, uses a large number as its primary ID in the database.
It's standard practice to never change an account ID in a web2 system because it's the primary identifier in the database. This means we can use a committee to prove ownership of a web2 account. As a user, I might generate 6 identical proofs from different nodes in order to claim a web2 account.
Having a mapping of web2 accounts to wallet addresses is an essential part of how we solve the collusion problem; a sketch of such a registry follows.
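As a sketch of what that mapping might look like (the names, IDs, and the 6-proof threshold are illustrative assumptions, not Opacity's actual contract): claiming a web2 account requires identical proofs from several distinct nodes, and once claimed the account is bound to a single wallet.

```python
# Sketch of the web2 identity mapping (illustrative names, not Opacity's
# actual contract): claiming a web2 account requires identical proofs from
# distinct nodes, and afterwards only the claiming wallet may act for it.
class Web2IdentityRegistry:
    def __init__(self, proofs_required: int = 6):
        self.proofs_required = proofs_required
        self.account_to_wallet: dict[str, str] = {}   # web2 account id -> wallet

    def claim(self, account_id: str, wallet: str, proofs: list[tuple[str, str]]):
        """proofs is a list of (node_id, attested_account_id) pairs."""
        if account_id in self.account_to_wallet:
            raise ValueError("account already claimed")
        distinct_nodes = {node for node, _ in proofs}
        if len(distinct_nodes) < self.proofs_required:
            raise ValueError("not enough distinct notary nodes")
        if any(attested != account_id for _, attested in proofs):
            raise ValueError("proofs disagree on the account id")
        self.account_to_wallet[account_id] = wallet

    def wallet_for(self, account_id: str):
        return self.account_to_wallet.get(account_id)

registry = Web2IdentityRegistry(proofs_required=6)
proofs = [(f"node-{i}", "twitter:1234567890") for i in range(6)]  # hypothetical IDs
registry.claim("twitter:1234567890", "0xUserWallet", proofs)
assert registry.wallet_for("twitter:1234567890") == "0xUserWallet"
```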
Assume:
The protocol to generate a proof is as follows:
Let's walk through the example of notarizing a bank balance. Say I have $100 in my account, but I am trying to forge a proof that I have $1m using a colluding MPC node.
Whenever an honest MPC node is selected, the forgery fails and a failed attempt lands in the verifiable log. Let's say we tried 5 times until we got the colluding node. Our verifiable log of attempts would then show a string of failed attempts followed by the single forged success.
Even though the user generated a forged proof, they have left behind a verifiable log of failures. As such, the protocol can reject a proof if it is preceded by a string of failures.
Naturally there is a Sybil issue here, but it is taken care of by the web2 identity contract. Since we have a mapping of web2 accounts to wallet addresses, only 1 wallet address is allowed to present proofs for any given web2 account. So a user CANNOT simply rotate wallets and retry until they succeed.
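Putting the two rules together, a minimal sketch of the acceptance logic might look like the following; the zero-tolerance failure threshold and the identifiers are illustrative assumptions, not the protocol's actual parameters.

```python
# Sketch of the acceptance rule described above (illustrative thresholds):
# a proof is accepted only if (1) it comes from the single wallet mapped to
# the web2 account and (2) the account's verifiable log shows no string of
# failed attempts immediately preceding it.
from dataclasses import dataclass

@dataclass
class Attempt:
    account_id: str
    wallet: str
    node_id: str
    succeeded: bool

def accept_proof(account_id: str,
                 wallet: str,
                 log: list,
                 account_to_wallet: dict,
                 max_recent_failures: int = 0) -> bool:
    # Sybil check: only the registered wallet may present proofs for this account.
    if account_to_wallet.get(account_id) != wallet:
        return False
    # Collusion check: count consecutive failures immediately preceding this proof.
    recent_failures = 0
    for attempt in reversed([a for a in log if a.account_id == account_id]):
        if attempt.succeeded:
            break
        recent_failures += 1
    return recent_failures <= max_recent_failures

mapping = {"bank:acct-42": "0xUserWallet"}               # hypothetical IDs
log = [Attempt("bank:acct-42", "0xUserWallet", f"node-{i}", False) for i in range(5)]
print(accept_proof("bank:acct-42", "0xUserWallet", log, mapping))  # False: trail of failures
print(accept_proof("bank:acct-42", "0xUserWallet", [], mapping))   # True: clean log
```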
QED
In many cases an oracle is trying to port public data that is not associated with any account, say a public price or weather feed.
In this case we no longer have the web2 identity contract, so we must solve the Sybil issue separately.
We can limit the number of oracles and require them to stake. We can also be much stricter with the verifiable log of attempts. An oracle cannot forge a proof at will: it will leave behind a log of failures before it ever reaches a colluding node.
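A sketch of what this stricter rule could look like, with assumed staking amounts and time-window parameters that are purely illustrative:

```python
# Sketch of the stricter oracle rule (illustrative parameters): oracles are
# whitelisted and staked, and a feed update is rejected if ANY failed attempt
# appears in that oracle's recent verifiable log.
from dataclasses import dataclass

@dataclass
class OracleAttempt:
    oracle: str
    timestamp: int
    succeeded: bool

def accept_feed_update(oracle: str,
                       now: int,
                       log: list,
                       stake: dict,
                       min_stake: int = 1_000,
                       window: int = 3_600) -> bool:
    if stake.get(oracle, 0) < min_stake:          # must be a staked, known oracle
        return False
    recent = [a for a in log if a.oracle == oracle and now - a.timestamp <= window]
    # Stricter than the per-account rule: a single recent failure is disqualifying.
    return all(a.succeeded for a in recent)

stake = {"oracle-A": 5_000}
log = [OracleAttempt("oracle-A", timestamp=100, succeeded=False)]
print(accept_feed_update("oracle-A", now=120, log=log, stake=stake))  # False
print(accept_feed_update("oracle-A", now=120, log=[], stake=stake))   # True
```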