Opacity AVS Slashing Conditions

See: https://hackmd.io/SpNwBquVReqrVJzloT_esA

For EigenLayer zkTLS we actually need two separate AVS systems:

  1. MPC-TLS Nodes (primary)
  2. zkTLS Oracles (secondary)

The zkTLS oracle AVS will make use of the MPC-TLS AVS. This separation is made for simplicity and credible neutrality: if someone doesn't like our oracle implementation, they can build their own that uses the MPC-TLS AVS directly.

To ensure security while the network grows, we will require the use of a TEE in order to operate a node. This makes it possible to launch production trust-minimized notaries as soon as possible.


Witness Protocol

Our AVS will make use of https://witness.co in order to timestamp actions taken by AVS operators. Time is measured by membership in an on-chain Merkle mountain range, and cannot be retroactively forged.

Timeliness is important for the slashing conditions. For example, we will require the use of an SGX unit, where each action must be associated with a valid SGX signature. If an operator is challenged, they can present a timestamped signature that proves they behaved correctly at the time in question.
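The sketch below illustrates the kind of check a challenger or slashing contract would run: confirm that the operator's signed action is included under an on-chain checkpoint and that the checkpoint time falls before the relevant deadline. It is a minimal sketch, using a plain Merkle inclusion proof as a stand-in for Witness's Merkle mountain range; all names are illustrative, not the Witness SDK.

```python
# Minimal sketch (not the Witness SDK): check that an SGX-signed action was
# included under an on-chain checkpoint no later than some deadline.
import hashlib

def merkle_verify(leaf_hash: bytes, proof: list, root: bytes) -> bool:
    """Walk a Merkle inclusion proof; each step is (sibling_hash, 'L' or 'R')."""
    node = leaf_hash
    for sibling, side in proof:
        pair = sibling + node if side == "L" else node + sibling
        node = hashlib.sha256(pair).digest()
    return node == root

def timestamped_before(leaf_hash, proof, checkpoint_root, checkpoint_time, deadline) -> bool:
    """The action existed no later than the checkpoint that includes it."""
    return merkle_verify(leaf_hash, proof, checkpoint_root) and checkpoint_time <= deadline
```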

We will be working with Sina Sabet of Witness for this aspect of our protocol.


MPC-TLS AVS Slashing

SGX Key Binding

The operator will bind themselves to a specific SGX device, identified by the enclave key that is signed by the Intel certificate authority. When an operator registers, they must publicly commit to the key they will use. Only a valid SGX key can be used to become an operator.

At a high level, to avoid slashing the node operator MUST store SGX signatures for the un-finalized jobs they have completed. If they cannot prove they used their SGX enclave, they risk being slashed.

Once the node starts processing jobs, each part of the job must have its result associated with a valid SGX signature (a sketch of the stored evidence follows the list):

  1. Committing the MPC key share
  2. Signing the selective-disclosure Merkle root
  3. Etc.
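The following is an illustrative sketch of the evidence an operator might keep per un-finalized job in order to answer a challenge. The field names and step labels are assumptions, not a production schema.

```python
# Illustrative record of per-step evidence an operator keeps for an un-finalized job.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StepEvidence:
    step: str               # e.g. "commit_key_share", "sign_disclosure_root"
    result_hash: bytes      # hash of the step's output
    sgx_signature: bytes    # enclave signature over result_hash
    witness_leaf: bytes     # leaf submitted to Witness for timestamping

@dataclass
class JobEvidence:
    job_id: bytes
    steps: list = field(default_factory=list)

    def evidence_for(self, step: str) -> Optional[StepEvidence]:
        """Return the stored evidence for a challenged step, if any."""
        return next((s for s in self.steps if s.step == step), None)
```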

The slashing conditions for this part can be handled optimistically in order to save on on-chain costs. After N blocks (an adjustable amount of time) a job is finalized and can no longer be challenged.

NOTE: Optimistic slashing does not introduce a settlement delay for zkTLS proofs, since the MPC-TLS node can directly provide the SGX signature at session finalization. The optimistic part only avoids verifying each SGX signature on-chain; it is simply a cheaper way to implement the slashing conditions.

Here is the successful challenge flow (a sketch covering both flows follows the unsuccessful flow below):

  1. Challenger deposits bond and challenges an operator for a particular job.
  2. The operator has N blocks (time can be adjusted) from the time of challenge to provide a timestamped SGX signature for the challenged job.
  3. If the operator FAILS to respond in time, the challenger can submit a proof of non-response to the slashing contract, and can withdraw the initial bond + slashing reward.

Here is the unsuccessful challenge flow:

  1. Challenger deposits bond to challenge an operator for a particular job.
  2. The operator provides a valid SGX signature and Witness timestamp for the job, and collects the bond.
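A minimal sketch of the optimistic challenge flow described above, written as plain Python rather than a contract. The finalization and response windows, the bond handling, and the `verify` helper are placeholders for protocol parameters and on-chain logic.

```python
# Sketch of the optimistic challenge flow: challenge within the finalization
# window, respond within the response window, otherwise slash.
FINALIZATION_BLOCKS = 7200   # assumed window in which a completed job may be challenged
RESPONSE_BLOCKS = 300        # assumed window the operator has to answer a challenge

class ChallengeManager:
    def __init__(self):
        self.challenges = {}  # job_id -> (challenger, bond, challenge_block)

    def challenge(self, job, challenger, bond, current_block):
        assert current_block <= job.completed_block + FINALIZATION_BLOCKS, "job already finalized"
        self.challenges[job.job_id] = (challenger, bond, current_block)

    def respond(self, job_id, sgx_signature, witness_proof, current_block, verify):
        challenger, bond, start = self.challenges.pop(job_id)
        assert current_block <= start + RESPONSE_BLOCKS, "response window elapsed"
        assert verify(job_id, sgx_signature, witness_proof), "invalid evidence"
        return ("bond_to_operator", bond)                            # unsuccessful challenge

    def claim_slash(self, job_id, current_block, slash_reward):
        challenger, bond, start = self.challenges.pop(job_id)
        assert current_block > start + RESPONSE_BLOCKS, "operator can still respond"
        return ("slash_operator", challenger, bond + slash_reward)   # successful challenge
```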

FAILURE MODES (very low probability)

  • Compromised SGX Enclave

Prevent reconstruction of shared-secret

The first step of an MPC node completing a job is to commit a hash of its key share. If the client can prove knowledge of the MPC node's key share before the session is finalized, then the client can reconstruct the shared secret and forge a zkTLS proof.

Luckily, there is no need for anyone except the MPC node to know its key share. We can use the generalized Fiat-Shamir principle to add a deterministic nullifier to the commitment, protecting it against rainbow-table attacks.
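A sketch of such a commitment is shown below, assuming a simple hash-based construction with hypothetical domain tags; the exact derivation in the protocol may differ. The point is that the commitment binds the node to its share while the salting nullifier prevents precomputed (rainbow-table) lookups.

```python
# Sketch of a key-share commitment with a deterministic nullifier.
import hashlib

def derive_nullifier(key_share: bytes, session_id: bytes) -> bytes:
    # Deterministic per (share, session): no fresh randomness is needed, yet the
    # nullifier salts the commitment so it cannot be found in a precomputed table.
    return hashlib.sha256(b"opacity-nullifier" + key_share + session_id).digest()

def commit_key_share(key_share: bytes, session_id: bytes) -> bytes:
    nullifier = derive_nullifier(key_share, session_id)
    return hashlib.sha256(b"opacity-commitment" + key_share + nullifier).digest()
```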

We again use Witness to be able to measure relative time.

Proof-of-knowledge-of-key-share slashing validation steps (a sketch follows the list):

  1. Verify the key share is associated with the node's commitment,
  2. Verify that knowledge of the key share was demonstrated before the session was finalized (Witness proof).
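Reusing `commit_key_share` from the sketch above, the two checks could look roughly like this; `knowledge_timestamp` would come from a Witness inclusion proof and `finalized_at` from the session record (both names are illustrative).

```python
# Sketch of the two validation steps for the proof-of-knowledge slashing condition.
def validate_key_share_slash(revealed_share: bytes, session_id: bytes, committed: bytes,
                             knowledge_timestamp: int, finalized_at: int) -> bool:
    matches_commitment = commit_key_share(revealed_share, session_id) == committed
    known_before_finalization = knowledge_timestamp < finalized_at
    return matches_commitment and known_before_finalization
```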

FAILURE MODES (very low probability)

This crypto-economic mechanism fails in the case where the client and the MPC node are the same person, since there is no incentive to claim your own money.

This is essentially a degenerate case of the collusion problem, and so it is handled by the commit-and-reveal aspect of our solution to the collusion problem.


MPC node took the user down a malicious path

The MPC architecture of zkTLS has the benefit of being privacy preserving for the user. No info except the public key of the target server is leaked to the MPC-node. This also makes it much harder for MPC-TLS nodes to blacklist users.

As a reminder, we use garbled circuits (GC) for the MPC scheme. We make use of a recent advancement in GC in order to significantly speed up our proof generation. This is why we can notarize HTTPS transcripts in less than a second.

At a high level, we can simplify our garbled circuit for speed. However, this risks letting an adversary in the MPC force their counterparty down a path that leaks sensitive information. The advancement we use lets us prove at the end of the circuit that we did not try to do this, so we can safely simplify our garbled circuit.

So one of the slashing conditions for the MPC node is failing to prove that they did not try to take their counterparty down a malicious path.

The proof is quite complicated, but we can use AlignedLayer to simplify verifying a complex proof on-chain.

Here is the successful challenge flow:

  1. Challenger deposits bond and challenges an operator to prove they didn’t act maliciously.
  2. The operator has N blocks (time can be adjusted) from the time of challenge to provide a timestamped proof.
  3. If the operator FAILS to respond in time, the challenger can submit a proof of non-response to the slashing contract, and can withdraw initial bond + slashing reward.

Here is the unsuccessful challenge flow:

  1. Challenger deposits bond and challenges an operator to prove they didn’t act maliciously.
  2. The operator submits a valid proof to AlignedLayer, and claims the bond.

Failed job execution

Tracking failed jobs incentivizes a node to stay online and help notarize zkTLS proofs. The production criteria for determining whether a node has too many failed jobs will be refined over time for the general case. Below we lay out the different ways failure slashing can be achieved.

Attributing a failure to the MPC node or the client is quite complicated in the general case. However, since we use a TEE, the operator will often be able to prove they performed each step correctly and to disclose which step failed.

  • Attributable failures for the MPC node (a rough attribution sketch follows the list):
  1. Node is consistently not committing its key share in time (node is offline),
  2. Client committed the Merkle root of the selective disclosure, but the MPC node never signed it,
  3. MPC node fails to present an SGX signature for moving forward one step in the garbled circuit.
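Assuming the per-step evidence record sketched earlier, attribution for these three cases could be checked roughly as follows; the step names, the `client_log` structure, and the timing fields are illustrative.

```python
# Rough sketch: decide whether a failed job is attributable to the MPC node
# based on which piece of step evidence is missing.
def attribute_failure(job_evidence, client_log, commit_deadline_block, current_block):
    if job_evidence.evidence_for("commit_key_share") is None and current_block > commit_deadline_block:
        return "node_at_fault: key share never committed"
    if client_log.get("disclosure_root_committed") and job_evidence.evidence_for("sign_disclosure_root") is None:
        return "node_at_fault: disclosure root never signed"
    if job_evidence.evidence_for("gc_step") is None:
        return "node_at_fault: no SGX signature for garbled-circuit progress"
    return "not attributable to the node"
```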

Non-attributable failures

NOTE: A user failing to satisfy their pre-commitment is not a failure attributable to the MPC node.

In the case where the client loses connection to the MPC node, it may not be possible to prove who is at fault. If the client is a zkTLS oracle, it becomes easier to prove who is at fault because both sides use a TEE.

The slashing criteria for these cases are trickier to define. However, our protocol is designed to be robust against these situations.

The main threat of concern is a coordinated attack on the AVS to generate a lot of failures in order to slash nodes.

This would be users trying to use Opacity to generate proofs on their own devices. Keep in mind the business model for our AVS is built around charging the VERIFIER of the proof, not the user directly.

One simple way to handle this situation is to have the application developer and/or verifier stake in exchange for cheaper rates, which has the added bonus of aligning incentives. If we see a lot of failed requests coming from a single application or intended verifier, we can determine whether there is a coordinated attack. In this way, apps/verifiers are made responsible for ensuring legitimate use of the AVS, or they risk getting slashed themselves.

Since we use a web2 identity contract for sybil resistance, each client request (unless it comes from a zkTLS oracle) is attributable to a web2 identity. So malicious clients trying to slash nodes by forcing failures can be detected based on the associated web2 identities.

For example, if we see a lot of failed requests coming from free, unverified Twitter accounts with no other web2 identities linked, then governance might choose not to slash the nodes. If instead the failures come from a set of bank accounts (which require KYC), it is more likely that the MPC node(s) are at fault.
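A purely illustrative heuristic for this governance signal: weight failed requests by the strength of the web2 identities behind them, and treat a failure cluster dominated by weak identities as a likely griefing attack. The tiers, weights, and threshold below are placeholders, not protocol constants.

```python
# Illustrative heuristic: is a burst of failures likely a coordinated griefing attack?
IDENTITY_WEIGHT = {"unverified_twitter": 0.1, "verified_twitter": 0.5, "bank_kyc": 1.0}

def likely_coordinated_attack(failed_requests, strong_share_threshold=0.3):
    """Flag the cluster if too little of the failure weight comes from KYC-grade identities."""
    total = sum(IDENTITY_WEIGHT.get(r["identity_tier"], 0.1) for r in failed_requests)
    strong = sum(IDENTITY_WEIGHT.get(r["identity_tier"], 0.1)
                 for r in failed_requests if r["identity_tier"] == "bank_kyc")
    return total > 0 and (strong / total) < strong_share_threshold
```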

In order to coordinate an attack that the governance layer would accept, you would need access to many legitimate web2 identities and be willing to pay the upfront cost of minting all of those identities in our contract. Even then, this attack seems difficult to pull off.


zkTLS Oracle AVS Slashing

A zkTLS Oracle will use the MPC-TLS AVS in order to generate proofs for public API feeds (e.g. price feeds) with minimal trust assumptions.

The slashing conditions are very similar to those for the MPC-TLS nodes. In fact, it is easier to attribute fault since both sides of the MPC use a TEE.

SGX Key Binding

Same as MPC-TLS AVS

Prevent reconstruction of shared-secret

Same as MPC-TLS AVS

Failed job execution

This is where the two AVSs differ.

Our solution to the collusion problem requires the client (in this case an oracle) to pre-commit to what they wish to prove before the MPC node is selected. The proof is assumed to have failed unless the client can successfully prove they satisfied the pre-commitment.

So one of the extra slashing conditions for the zkTLS oracle is being unable to generate proofs that satisfy its pre-commitment. In the beginning we may require two to three failures in a row before slashing, but the long-term goal is that these oracles should never fail to satisfy the pre-commitment.
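A sketch of this condition, assuming a hash-based pre-commitment and a simple consecutive-failure counter; the commitment scheme, counter threshold, and field names are assumptions rather than the protocol's final design.

```python
# Sketch: the oracle pre-commits to what it will prove; consecutive failures to
# satisfy the pre-commitment make it slashable.
import hashlib

MAX_CONSECUTIVE_FAILURES = 3  # placeholder for the "two to three in a row" policy

def precommitment(request_descriptor: bytes) -> bytes:
    return hashlib.sha256(b"opacity-oracle-precommit" + request_descriptor).digest()

def record_result(oracle_state: dict, committed: bytes, proved_descriptor) -> str:
    satisfied = proved_descriptor is not None and precommitment(proved_descriptor) == committed
    oracle_state["consecutive_failures"] = 0 if satisfied else oracle_state.get("consecutive_failures", 0) + 1
    if oracle_state["consecutive_failures"] >= MAX_CONSECUTIVE_FAILURES:
        return "slashable"
    return "ok" if satisfied else "failure_recorded"
```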
