# cve/acc
Vulnerability disclosure hyperstructure - infinite game or infinite pain? 🏴 (or who will pay the price of anarchy in vulnerability disclosure)
## < v0
Traditional dynamics of exploit marketplaces already allow for an imperfect ability to gauge whether a given disclosure is real, e.g. membership in a known threat group, vx-underground bona fides, or even more explicit reputation systems on message boards.
The goal of the protocol is to allow for even more trustless operation, and therefore a larger volume and more efficient exchange of warez.
### Proof of ability to exploit a vulnerability
The ability to demonstrate an attack on the target and accomplish specific tactical objectives, defined within a well-defined framework such as a subset of **MITRE ATT&CK** / **ATLAS**.
More precisely, we can imagine a naive and a pessimistic / ZK view of how the "labs" that allow the disclosure process to proceed can be established.
#### Naive (~=hardware): an indirect "detonation" inside a TEE
This presents an indirect bounty on the TEE itself, absent open-hardware TEEs and with slow-moving updates to secure enclaves (e.g. AMD SEV) already out in the field. For instance, Intel SGX is _to this day_ used as the enclave of choice by a major chain touting secrecy.
Pros:
- easier to understand conceptually for broader audiences
- if the experience is that of, say, GCP Confidential Computing, and the TEE is as widely accessible as AMD SEV, it is easier to find a model for detonation-lab operators to provide hardware
Cons:
- ultimately an extended bounty on TEE itself, leading to chaos around the transition to new hardware
- researchers in Germany smashing stacks
- opaque hardware supply chain issues and slow-moving upgrades
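As a loose sketch of the naive flow, where every name (`verify_attestation`, `EXPECTED_MEASUREMENT`, the report shape) is a made-up placeholder rather than any vendor's API, the marketplace-side check could look something like this:

```python
# Hypothetical sketch of the naive TEE "detonation lab" flow.
# Names and the report format are assumptions, not an existing API.
from dataclasses import dataclass

EXPECTED_MEASUREMENT = "a3f1..."  # hash of the audited detonation-harness image

@dataclass
class DetonationReport:
    measurement: str           # enclave-reported hash of the harness that ran
    target_snapshot: str       # identifier of the target image the payload ran against
    objectives_achieved: list  # e.g. ATT&CK technique IDs such as "T1068"
    quote: bytes               # attestation quote signed via the TEE vendor key chain

def verify_attestation(quote: bytes) -> bool:
    """Placeholder: check the vendor certificate chain and quote signature."""
    return bool(quote)  # real verification is TEE-specific (SGX, SEV-SNP, ...)

def accept_disclosure(report: DetonationReport, claimed: list) -> bool:
    """Marketplace-side check: trust the claim only if the attested harness says so."""
    return (
        verify_attestation(report.quote)
        and report.measurement == EXPECTED_MEASUREMENT
        and set(claimed).issubset(report.objectives_achieved)
    )
```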
#### Pessimistic / ZK (~= software):
https://blog.trailofbits.com/2020/05/21/reinventing-vulnerability-disclosure-using-zero-knowledge-proofs/
https://www.risczero.com/docs/explainers/proof-system/
Following a zero-knowledge model, perform a verifiable computation and publish the receipt (the verifier perhaps further confirmed using replicated computation?).
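A loose sketch of that shape, with all names (`Receipt`, `prove_exploit`) being stand-ins rather than the actual risc0 API: the prover runs the exploit inside a verifiable VM against a pinned target image, commits only the achieved objectives to a public journal, and the marketplace verifies the receipt without ever seeing the payload.

```python
# Hypothetical prove/verify split for the ZK "detonation". All names are stand-ins.
from dataclasses import dataclass

@dataclass
class Receipt:
    image_id: str   # hash of the guest program (exploit harness + pinned target image)
    journal: dict   # public outputs committed by the guest
    seal: bytes     # the actual ZK proof

    def verify(self, expected_image_id: str) -> bool:
        # Placeholder: a real verifier checks the seal against the image id.
        return self.image_id == expected_image_id and bool(self.seal)

def prove_exploit(guest_image_id: str, payload: bytes, target_snapshot: str) -> Receipt:
    """Stand-in for running the guest in a zkVM; `payload` never leaves the prover."""
    journal = {"target": target_snapshot, "objectives": ["TA0004", "T1068"]}
    return Receipt(image_id=guest_image_id, journal=journal, seal=b"proof")

# Marketplace side: verify the claim without ever seeing the payload itself.
receipt = prove_exploit("sha256:guest...", payload=b"<exploit>",
                        target_snapshot="cpe:/o:vendor:product:1.2")
assert receipt.verify("sha256:guest...")
```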
### Censorship resistance
A topic in its own right; one can assume the payloads associated with the protocol will be of the highest interest to everyone from threat actors up to jurisdictional authorities that will seek to shut the payloads down.
A solution will emerge within the broader censorship-resistance space, but in the meantime there are two approaches:
- 🎭 encryption and a "differential privacy"-style masquerade: present as other payloads with the same footprint and patterns of distribution but benign intent, or indeed engineer incentives for storage such that including or removing a piece of malware in any given data stream is not identifiable as such
- a maximalist approach that uses a censorship-resistant key-value store and content-addressing system https://github.com/angrymouse/elymus and then spreads any payload as widely as possible, to the extent systems are accessible within a given jurisdiction (sketched below)
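The core primitive for the maximalist route is just content addressing: once the (already encrypted) payload is named by its hash, any mirror in any jurisdiction can serve it and any client can verify what it received. A minimal sketch, assuming the ciphertext already exists:

```python
# Minimal content-addressing sketch: the encrypted payload is named by its hash,
# so any mirror can serve it and any client can verify the bytes it received.
import hashlib

def content_id(ciphertext: bytes) -> str:
    return "sha256:" + hashlib.sha256(ciphertext).hexdigest()

def fetch_and_verify(cid: str, mirrors: list) -> bytes | None:
    """Try mirrors in order; accept only bytes that hash back to the requested CID."""
    for fetch in mirrors:  # each mirror is any callable taking a CID and returning bytes or None
        blob = fetch(cid)
        if blob is not None and content_id(blob) == cid:
            return blob
    return None
```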
### Disclosure and incentives
In order to ensure that the leading malware researchers (the hard side of the network) come to cve/acc to deal, rather than to the traditional purveyors of malware, the guarantees (plausible deniability, structured communication to the public, and eventual unconditional disclosure) are some of the primitives which need to constitute the configurable, opt-in aspects of the protocol. The protocol and its uses need to have legitimacy with broad swaths of network participants (those affected, those discovering and disclosing, those applying vulnerabilities in the tactical setting).
It is extremely unlikely that the right set of incentives will be arrived at right away, but it is also not difficult to imagine a structured disclosure process operated by a mechanism.
#### Timelocks!
One of the ways to ensure that the payload is disclosed after a certain period of time (subject to protocol instantiation parameters) is to use **Timelock Encryption**, like that found in https://github.com/drand/tlock.
This ensures that one eager buyer does not accumulate a stash of vulnerabilities and sit on them indefinitely (though, of course, the tactical superiority of having done so for a period of time is what brings the buyers to marketplaces in the first place). Rather than leaving this to the opaque considerations of governments and NSO Group, the market can have a structured default, e.g. **120 days** to disclose the vulnerability no matter what.
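With a drand-style beacon, the 120-day default reduces to picking a future round number derived from the chain's genesis time and period; the chain parameters below are placeholders, not the values of any particular chain.

```python
# Compute the beacon round corresponding to "now + 120 days".
# GENESIS_TIME and PERIOD are placeholders; real values come from the chain's info endpoint.
import time

GENESIS_TIME = 1_692_803_367  # placeholder unix timestamp of round 1
PERIOD = 3                    # placeholder seconds between beacon rounds

def round_at(t: float) -> int:
    """drand-style round number at unix time t (round 1 is published at genesis)."""
    return int((t - GENESIS_TIME) // PERIOD) + 1

disclosure_round = round_at(time.time() + 120 * 24 * 3600)
# Encrypting the payload to `disclosure_round` (e.g. with drand/tlock) makes it
# decryptable by anyone once that round's beacon signature is published.
```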
Somewhere along the way, the **viewing of the payload** would be available to the marketplace participants who win the bid to buy the vulnerability, and to no one else. How large that window of unique access to the payload is can also be subject to configuration.
This removes choice from the researcher, and prevents idiosyncrasies or trigger-happy legal departments from deviating from an agreed-upon "fair, or as fair as we can make it" series of steps.
As a simplistic example, nested timelocks can provide for gradual disclosure, e.g. a timelock of 30 days containing a plaintext scope of impact (Remote Code Execution, Privilege Escalation and so on) + a timelock of 60, which in turn can contain more plaintext evidence + a tlock of 90, and so on.
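A sketch of that nesting, where `tlock_encrypt` merely tags the blob so the structure is visible; a real call (e.g. via drand/tlock) would return ciphertext that only opens once the given beacon round is published.

```python
# Nested timelocks for gradual disclosure. `tlock_encrypt` is a stand-in, not real encryption.
def tlock_encrypt(unlock_round: int, plaintext: bytes) -> bytes:
    # Placeholder: just tags the blob. A real implementation returns ciphertext
    # decryptable only once the beacon for `unlock_round` exists.
    return f"tlock[{unlock_round}]:".encode() + plaintext

def gradual_disclosure(scope: bytes, evidence: bytes, full_payload: bytes,
                       rounds: tuple[int, int, int]) -> bytes:
    """rounds = (r30, r60, r90): beacon rounds roughly 30/60/90 days out."""
    r30, r60, r90 = rounds
    inner = tlock_encrypt(r90, full_payload)       # opens last: the full payload
    middle = tlock_encrypt(r60, evidence + inner)  # day ~60: more evidence + inner blob
    return tlock_encrypt(r30, scope + middle)      # day ~30: plaintext scope + middle blob
```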
## Payout
In order simply to discover the price, in a more efficient market where both malware-seeking capital and malware itself can be deployed more readily, a variety of auction mechanisms can be applied to the "uniquely viewable by the winner of the bid" period.
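One candidate from that variety is a sealed-bid second-price (Vickrey) auction for the exclusive-viewing window; the sketch below only shows the winner/price rule, with escrow, bid sealing, and identity all elided.

```python
# Sealed-bid second-price (Vickrey) auction for the exclusive-viewing window.
def vickrey_outcome(bids: dict[str, float], reserve: float = 0.0):
    """Return (winner, price) or None if no bid clears the reserve."""
    qualified = {bidder: amount for bidder, amount in bids.items() if amount >= reserve}
    if not qualified:
        return None
    ranked = sorted(qualified.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else reserve  # pay second-highest bid (or reserve)
    return winner, price

print(vickrey_outcome({"buyer_a": 120_000, "buyer_b": 95_000, "buyer_c": 40_000}))
# -> ('buyer_a', 95000)
```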
But what happens if the target fails to fix the vulnerability in time, after being notified? Can the knowledge of impending fallout from disclosure be integrated for additionally bolstering the cve/acc protocol?
## When bounties are not enough
https://docs.hackerone.com/programs/bounties.html
In some sense, the bounties in the variety of settings operated by HackerOne are simply a side-effect of needing a proxy and deciding authority to gate-keep what a disclosure could truly be worth. Is there still room for bounties, or will market pressures alone provide enough incentive?
## PUT option?
https://primitive.xyz/whitepaper-rmm-01.pdf
If the target does not co-operate, the discloser benefits from having an instrument with deep enough liquidity that appreciates as the target's tokenized or otherwise financialized integrity goes down.
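The payoff shape being sought is simply that of a put on some financialized "integrity" index of the target; any such index, and the liquidity behind it, is hypothetical here.

```python
# Payoff of a put on a hypothetical "integrity index" of the target: the position
# appreciates as the index falls below the strike, i.e. as unremediated disclosure
# becomes more likely to hurt the target.
def put_payoff(strike: float, index_level: float) -> float:
    return max(strike - index_level, 0.0)

print(put_payoff(strike=100.0, index_level=60.0))  # -> 40.0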
## `$RISK` token
https://www.cybergrx.com/platform/cybergrx-exchange
If successful, the protocol would have the earliest risk indicators for specific systems and could participate in well-established risk exchanges. If a portion of the network's value can be translated into a tokenized instrument, its ownership and issuance subject to gradual optimization for the objectives of the marketplace - then this very token can be used to reward participants in disclosure for whom the happy pathways (the target fixes the vulnerability or the buyer pays up) did not work out.
Varying amounts of the `$RISK` token at different stages and for different attack surfaces can then be used for more dynamic experimentation with incentives.
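A minimal sketch of what that experimentation could look like as protocol configuration; the stage names, attack surfaces and amounts are all illustrative, not protocol constants.

```python
# Hypothetical $RISK reward schedule, keyed by disclosure stage and attack surface.
RISK_REWARDS = {
    ("proof_accepted",   "internet_facing"): 5_000,
    ("proof_accepted",   "internal"):        1_500,
    ("unsold_disclosed", "internet_facing"): 2_000,
    ("unsold_disclosed", "internal"):          500,
}

def risk_reward(stage: str, surface: str) -> int:
    """Amount of $RISK minted to the discloser when the happy path did not pay out."""
    return RISK_REWARDS.get((stage, surface), 0)

print(risk_reward("unsold_disclosed", "internet_facing"))  # -> 2000
```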
## Plausible deniability, meta-bounty for hacking it all
Operating a decentralized protocol using the best payment mechanisms available at the time (Penumbra, Zcash, Namada etc.) and pseudonymous, cryptographically-enabled identity will in itself form an attack surface, which should have an external authority that can deliver the payout when the protocol itself is hacked.
In this case, the most reputable / legitimate jurisdictional bounty operator, or even an adjacent protocol (Immunefi?), can serve as a fall-back mechanism, and the learnings from failure in code become the substance of v(n+1).
## Big Questions (ultimately, "should we?")
Can we discover a set of incentives and guarantees for software vulnerability disclosure to accelerate the discovery of bugs, and do so _responsibly_ in ways that ultimately make our software systems safer?
Is such a (transparent, opt-in) decentralized vulnerability marketplace preferable to the existing disclosure procedures, and by which measure (e.g. loss of life, property), if at all, can one measure the ethical dimension of the choice?
Given a configurable evolutionary protocol, how are we to make sure that the most predatory but also lucrative set of instantiation parameters is not the one that comes to dominate?
On a practical level, if such a protocol is inevitable, what set of guarantees / incentives can bring the hard side of the network (malware researchers driven by intrinsic and extrinsic incentives) to deal?