# How to verify ZK proofs on Bitcoin?
Polyhedra has been expanding its work with zero-knowledge proofs into emerging areas across the blockchain landscape. Previously, we focused on Ethereum: we published [research](https://www.polyhedra.network/research) on how zero-knowledge proofs can contribute to the Ethereum ecosystem, and we built [zkBridge](https://zkbridge.com/), which provides zero-knowledge proofs for securing the [LayerZero](https://layerzero.network/) cross-chain messaging protocol.
Today, we turn our attention to Bitcoin. In particular, we study how to verify ZK proofs on Bitcoin. We are not the first to explore this problem, so we start this article with a brief history of how people have tried to bring zero-knowledge proofs to Bitcoin.
## A brief history
The most ambitious attempt in recent years was to [verify a BLS12-381 proof](https://xiaohuiliu.medium.com/guest-post-elliptic-curve-bls12-381-support-on-bitcoin-bc7bfa605135) on Bitcoin SV, by the team at [sCrypt](https://scrypt.io/). This attempt, however, does not carry over to Bitcoin: Bitcoin SV (BSV) is a hard fork of Bitcoin and an entirely separate chain today, with many differences. In particular, BSV supports [new opcodes](https://wiki.bitcoinsv.io/index.php/Opcodes_used_in_Bitcoin_Script) that Bitcoin does not, and it has much higher script size limits. The transaction that sCrypt used to verify a BLS12-381 proof, which can be found [here](https://test.whatsonchain.com/tx/eba34263bbede27fd1e08a84459066fba7eb10510a3bb1d92d735c067b8309dd), makes lavish use of these new opcodes, such as OP_NUM2BIN, OP_SPLIT, and OP_CAT. The transaction is also fairly big (26MB), which is impossible on Bitcoin: the maximum possible block size is 4MB, the block interval is about 10 minutes, and we need to leave space for other transactions to get settled as well.
![image15](https://hackmd.io/_uploads/B1UjDr9A6.png)
Before this ambitious attempt, Bitcoin developers were in fact among the first to look into zero-knowledge proofs, even before Ethereum existed. Back in 2011, Gregory Maxwell, a former Bitcoin Core developer and former CTO of Blockstream, proposed [“Zero Knowledge Contingent Payment”](https://en.bitcoin.it/wiki/Zero_Knowledge_Contingent_Payment) (ZKCP). A few years later, once zero-knowledge proofs had become practical enough, ZKCP was implemented on the Bitcoin network for the first time (see [this announcement](https://bitcoincore.org/en/2016/02/26/zero-knowledge-contingent-payments-announcement/)). This, however, did not verify ZK proofs on the Bitcoin network; the proofs were verified off-chain. That is good enough for the ZKCP application, but it does not work for other applications where on-chain ZK proof verification is necessary.
The most recent discussion about verifying ZK proofs on Bitcoin happened on [Bitcoin StackExchange](https://bitcoin.stackexchange.com/questions/120203/verifying-a-zksnark-on-bitcoin-hope-for-a-future-with-op-mul), where Carter Feldman from [QED Protocol](https://qedprotocol.com/) observed that the lack of OP_MUL makes it difficult to verify modern ZK proofs. Pieter Wuille, an active Bitcoin core developer and former co-founder and core engineer of Blockstream, answered as follows.
> “It appears completely infeasible to me to implement a SNARK verifier in Bitcoin Script, even if an OP_MUL were to exist. Without loops, elliptic curve operations, or even modular exponentiation, you'd be forced to completely unroll the computations to individual multiplications. I can't imagine that can realistically be done in Bitcoin Script without making the transaction exceed size limits. Even if it were somehow just doable, it'd be so ludicrously inefficient that I can't imagine you'd want to use that for anything but experimentation.”
He continued to add:
> “If your goal is practical SNARK verification in Bitcoin script, perhaps you should work on a proposal to actually add SNARK verification opcodes instead?”
This is a good starting point for our discussion. To understand why verifying ZK proofs on the Bitcoin network is difficult, we need to revisit what Bitcoin script is; it is very different from Solidity and the EVM.
## Bitcoin has a limited set of opcodes
Unlike Ethereum, with its smart contracts and an ecosystem built around [Solidity](https://soliditylang.org/), [OpenZeppelin](https://www.openzeppelin.com/), and [Reth](https://github.com/paradigmxyz/reth), everything in Bitcoin is simpler. Bitcoin has its own script language running on a simple stack machine. We hesitate to call it a “VM” because it lacks control-flow primitives like loops and conditional jumps; it simply executes the opcodes one by one.
The major challenge of verifying ZK proofs on Bitcoin is that Bitcoin script supports only [a small list of opcodes](https://en.bitcoin.it/wiki/Script), which enables only a very limited form of computation. This was not always the case. Before August 2010, Bitcoin supported many more opcodes, including OP_CAT and OP_SUBSTR, which Bitcoin SV later picked up and added back. These opcodes were, however, disabled by Satoshi due to concerns about bugs that might be present in them.
These opcodes have never been re-enabled. As a result, people have been trying to “simulate” the missing opcodes with the existing ones, but with little success. We now use OP_MUL and OP_CAT as examples to show how inconvenient this has become.
Previously, one could use OP_MUL to multiply two numbers (up to 4 bytes each) on the stack and push the result back onto the stack. With OP_MUL removed, multiplication has to be simulated with OP_ADD and OP_GREATERTHANOREQUAL using the [double-and-add](https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplication#Double-and-add) algorithm, which requires hundreds of instructions.
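The following Rust sketch mirrors the logic such a script has to unroll: multiplying two numbers using only additions, subtractions, and comparisons, which is what OP_ADD, OP_SUB, and OP_GREATERTHANOREQUAL provide. Since Bitcoin script has no loops, all 32 rounds must be written out explicitly.

```rust
// Double-and-add multiplication using only additions, subtractions, and
// comparisons, mirroring what an unrolled Bitcoin script would do with
// OP_ADD, OP_SUB, and OP_GREATERTHANOREQUAL. Illustrative Rust, not script.
fn mul_via_double_and_add(a: u32, b: u32) -> u32 {
    let mut acc: u64 = 0;
    let mut remaining = b as u64;
    // Scan b from the most significant bit down. Each bit is extracted with
    // one comparison and one subtraction; a real script hard-codes the 32
    // threshold constants 2^31, 2^30, ..., 1 instead of dividing.
    let mut threshold: u64 = 1 << 31;
    for _ in 0..32 {
        acc = acc + acc; // doubling step: OP_ADD of the accumulator with itself
        if remaining >= threshold { // bit test: OP_GREATERTHANOREQUAL
            remaining -= threshold; // clear the bit: OP_SUB
            acc += a as u64;        // add step: OP_ADD
        }
        threshold /= 2;
    }
    acc as u32 // same wrap-around behavior as a u32 multiplication
}
```

Each of the 32 unrolled rounds costs several opcodes plus stack shuffling, which is how the hundreds of instructions add up.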
Another opcode that was disabled, and that has been a hot topic on Twitter, is OP_CAT. You may see some Bitcoin enthusiasts include OP_CAT in their Twitter profiles to show their support for bringing it back. The following picture from this [Blockchain Beach article](https://www.blockchainbeach.com/satoshi-thought-it-was-too-powerful-why-are-the-taproot-wizards-reviving-op_cat/) summarizes its history best.
![image10](https://hackmd.io/_uploads/HyV3uHcRa.png)
But what is OP_CAT? Internet memes seem to put a cat everywhere, but CAT here stands for “concatenation”: the opcode pops two strings from the stack and pushes their concatenation as a single string. This is a very simple piece of functionality, and the fact that OP_CAT is still disabled has caused a lot of inconvenience.
First, we have not found a way to circumvent or simulate OP_CAT. Robin Linus maintains a collection of [exotic bitcoin scripts](https://github.com/coins/bitcoin-scripts) covering many Bitcoin script tricks, but it offers no magical solution for emulating OP_CAT, even in special cases. Existing opcodes are helpless here: OP_ADD only works on short inputs and cannot deal with even a slightly longer string. As a result, in Robin’s script for [Merkle tree path verification](https://github.com/coins/bitcoin-scripts/blob/master/merkle-path-cat.md), he has to state upfront that “This implementation requires nothing but OP_CAT.” In 2021, Andrew Poelstra from Blockstream shared his [thoughts](https://medium.com/blockstream/cat-and-schnorr-tricks-ii-2f6ede3d7bb5) on realizing “covenants” using OP_CAT, which is available on Blockstream’s Liquid Network; he also did not have a convenient way to bypass the need for OP_CAT on the Bitcoin network.
Second, OP_CAT happens to be indispensable for some key and fundamental cryptographic primitives, including efficient Merkle trees. To make Merkle trees efficient, we should leverage the hash opcodes in Bitcoin script, such as OP_SHA256. The challenge appears in the compression step of a Merkle tree, which computes a parent node’s hash by hashing the concatenation of the two children’s hashes. Without OP_CAT this seems infeasible: the arithmetic opcodes cannot handle inputs as long as a SHA256 hash, and the remaining opcodes can easily be confirmed to be of no help. With OP_CAT, however, it becomes [trivial](https://github.com/coins/bitcoin-scripts/blob/master/merkle-path-cat.md).
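For concreteness, here is a minimal Rust sketch (using the `sha2` crate) of the compression step and of Merkle path verification. In script, each `parent_hash` call corresponds to one OP_CAT followed by one OP_SHA256.

```rust
use sha2::{Digest, Sha256};

// Compression step: parent = SHA256(left || right). In script, this is a
// single OP_CAT followed by a single OP_SHA256.
fn parent_hash(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(left);
    hasher.update(right);
    hasher.finalize().into()
}

// Walk a Merkle path from a leaf to the root. `path` lists each sibling hash
// together with a flag saying whether that sibling sits on the right.
fn verify_merkle_path(leaf: [u8; 32], path: &[([u8; 32], bool)], root: [u8; 32]) -> bool {
    let mut node = leaf;
    for (sibling, sibling_on_right) in path {
        node = if *sibling_on_right {
            parent_hash(&node, sibling)
        } else {
            parent_hash(sibling, &node)
        };
    }
    node == root
}
```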
Most modern hash-based zero-knowledge proof systems rely on Merkle trees, so without OP_CAT, things are very painful.
## The need for OP_CAT and its progress
Like many other Bitcoin developers, our team has been trying to find creative ways to verify ZK proofs using the existing opcodes, but we found no good solution. This leads us to believe that re-enabling OP_CAT might be essential.
There have been many opinions in favor of re-enabling OP_CAT, including from Blockstream, and many forks of Bitcoin have already enabled it. People are getting more comfortable with OP_CAT for four reasons:
- The C/C++ code for OP_CAT is fairly simple: it is basically just invoking the C++ standard library’s vector operations, as one can see below and [here](https://github.com/bitcoin/bitcoin/blob/4bd188c4383d6e614e18f79dc337fbabe8464c82/script.cpp) in the Bitcoin code repository.
![image9](https://hackmd.io/_uploads/rybHKrc0T.png)
- OP_CAT was not actually related to the OP_LSHIFT crash that prompted these opcodes to be disabled back in 2010.
- The security concern that an attacker could use OP_CAT to create a very long string (by doubling its length every time) can be effectively addressed by limiting OP_CAT’s output length. The C++ code shown above, back in 2010, already enforced such a limit of 520 bytes.
- Re-enabling OP_CAT only requires a soft fork.
One might doubt that Bitcoin will ever re-enable OP_CAT, since no disabled opcode has been resurrected in the last ten years. In reality, though, progress toward re-enabling OP_CAT looks very positive. The [bitcoin-dev](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2024-February/022327.html) mailing list is where much of the Bitcoin core development discussion happens, and Ethan Heilman started a proposal there in October 2023.
![image14](https://hackmd.io/_uploads/HkowYB5A6.png)
The responses to this proposal were positive. Andrew Poelstra, a key Bitcoin developer, said the following.
> “...When spitballing about ways to do cool stuff with Bitcoin Script, I'd say about 90% of the time it ends with "we could do this if only we had CAT". And the remaining 10% usually don't need much more.”
> “No single opcode comes close to the power of CAT (except super general-purpose opcodes like OP_ZKP_VERIFY :))”
In December 2023, a draft BIP was created (https://github.com/bitcoin/bips/pull/1525). Although a BIP number has not yet been assigned, several Bitcoin developers have reviewed it, and several rounds of requested edits have been made.
![image13](https://hackmd.io/_uploads/Syi5FB5Cp.png)
The effort to implement this in Bitcoin Core, the reference Bitcoin client implementation, has been kickstarted. The discussion is currently in the [Bitcoin Core PR Review Club](https://bitcoincore.reviews/). A pull request that re-enables OP_CAT (https://github.com/bitcoin-inquisition/bitcoin/pull/39) has been actively updated and discussed over the past few weeks. On March 6, 2024, the Bitcoin Core PR Review Club hosted a meeting to discuss OP_CAT.
If this pull request is merged, discussion will move to the Bitcoin Core main repository (https://github.com/bitcoin/bitcoin/pull/29247), shown below. If OP_CAT passes all the necessary processes, that PR would be merged and released in a new version of the Bitcoin Core client, and a soft fork would activate the feature on Bitcoin.
![image12](https://hackmd.io/_uploads/By-6FBcAa.png)
It is hard to predict how long the entire process will take. This would be the first time an opcode disabled back in 2010 is re-enabled, and only the second time in recent memory that major opcodes are added to Bitcoin (the last being OP_CHECKSIGADD in Tapscript).
We are, however, confident that the Bitcoin community will bring back OP_CAT soon.
First, Bitcoin does not have another major change on its roadmap at the moment, and OP_CAT is currently in the spotlight.
Second, OP_CAT has been discussed for many years, has been implemented in variants of the Bitcoin network, and has been well studied. This can be observed from the conversations among Bitcoin core developers, where everyone seems to be on the same page.
To summarize, we are bullish that OP_CAT will be brought back to the Bitcoin network, and we believe that this would be an important moment for the Bitcoin ecosystem.
In the rest of the article, we share our thoughts on how to verify zero-knowledge proofs on the Bitcoin network, assuming that OP_CAT is enabled.
## Define Bitcoin-friendly proof systems
The previous attempt by sCrypt, which verifies a BLS12-381 Groth16 proof on Bitcoin SV, would still not work on the Bitcoin network even though sCrypt’s implementation already assumed the existence of OP_CAT. The main reason is that the script is far too long: the corresponding transaction, shown below, cannot fit into one Bitcoin block. The script must be smaller than 4MB, otherwise miners cannot include it in any block.
![image16](https://hackmd.io/_uploads/SkKycrq06.png)
This allows us to define precisely what it means for a zero-knowledge proof system to be “Bitcoin-friendly”. Since the SegWit upgrade in 2017, the Bitcoin network measures the size of a block in [“weight units”](https://river.com/learn/terms/b/block-size/), and a block is limited to 4 million weight units. Our observation from the Bitcoin block explorer suggests that many blocks use up their 4-million-weight-unit quota, with each block between 1MB and 2MB in size.
In a pay-to-taproot (P2TR) transaction, which we will use, the number of weight units that a transaction consumes is calculated through a set of [rules](https://en.bitcoin.it/wiki/Weight_units): some bytes in a transaction cost 4 weight units each, while others cost only 1 weight unit. The rationale behind giving some bytes a lower weight cost (a “discount”) is that these bytes are only needed to determine whether a transaction is valid; once the transaction has been validated, they do not need to be stored permanently by all full nodes, so they contribute less to the storage overhead of the Bitcoin network. Fortunately for us, the Bitcoin script, which will be a witness script in P2TR transactions, enjoys this discount.
Another thing that P2TR brings to the Bitcoin script is that it lifts most of the script size limits. As long as the script fits into a block, nothing stops a miner from including it. This does not mean, however, that a regular user can broadcast a 4-million-byte script. Per [the default policy](https://github.com/bitcoin/bitcoin/blob/master/src/policy/policy.h#L27), many nodes only relay transactions that take less than 400k weight units (1/10 of the block limit); to submit anything larger, one needs to collaborate with a miner, as a few posts on Bitcoin StackExchange ([[1]](https://bitcoin.stackexchange.com/questions/85080/can-i-send-almost-1mb-transaction), [[2]](https://bitcoin.stackexchange.com/questions/35570/what-is-the-maximum-number-of-inputs-outputs-a-transaction-can-have)) point out. The quick arithmetic below shows how these limits translate into script sizes.
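As a sanity check on these budgets, here is the weight arithmetic in Rust; each non-witness byte costs 4 weight units and each witness byte costs 1. The concrete byte counts in the example are made up for illustration.

```rust
// SegWit weight arithmetic: weight = 4 * (non-witness bytes) + (witness bytes).
fn tx_weight(non_witness_bytes: usize, witness_bytes: usize) -> usize {
    4 * non_witness_bytes + witness_bytes
}

fn main() {
    // A hypothetical P2TR transaction with 300 base bytes and a 1MB witness
    // script weighs 4 * 300 + 1_000_000 = 1_001_200 weight units, roughly a
    // quarter of the 4_000_000 weight unit block limit.
    assert_eq!(tx_weight(300, 1_000_000), 1_001_200);
    // The default 400_000 weight unit relay policy leaves room for a witness
    // script of roughly 399KB alongside those base bytes.
    assert_eq!(tx_weight(300, 398_800), 400_000);
}
```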
P2TR plays an important role in making ZK proof verification in Bitcoin possible. The picture below from [@therationalroot](https://twitter.com/therationalroot) summarizes its benefits.
![image1](https://hackmd.io/_uploads/H1XS5rcRp.png)
Another limitation to be aware of is the stack size. Bitcoin script currently limits the stack to 1,000 elements, so a program cannot create more than 1,000 stack elements. Since each element is at most 520 bytes, the “working memory” that a Bitcoin script can theoretically access is roughly 0.5MB. We should make sure that the program can work within this memory limit. There is, however, no need to optimize it further, as the transaction fee does not depend on the peak stack size.
Now we have all the information we need to define Bitcoin-friendliness. Basically, we want a proof system whose verification algorithm can be expressed in as few weight units as possible in Bitcoin script and is executable within the stack limit and other rules. This translates to the desire for a proof system whose verification algorithm requires as few bytes as possible to express in Bitcoin script.
A secondary measure of Bitcoin-friendliness, in case two proof systems use similar weight units, is whether the verification algorithm can be conveniently split across several transactions, each completing a few steps of the proof verification. This helps if the standalone Bitcoin script for a single-shot proof verification still exceeds the limits. In that case, we may need some sort of [covenants](https://bitcoinops.org/en/topics/covenants/) to enforce relations between these transactions, which would benefit from OP_CAT, but this requires more research.
The secret to expressing computation succinctly in the Bitcoin script, i.e., without using too many weight units, is to leverage existing opcodes whenever possible. For example, Bitcoin script has native support for hash functions through five opcodes (with the hex representations 0xa6 to 0xaa, respectively):
- OP_RIPEMD160
- OP_SHA1
- OP_SHA256
- OP_HASH160
- OP_HASH256
A hash opcode, regardless of the input length, costs only one weight unit. If one had to simulate a hash function using the remaining opcodes, even assuming OP_CAT is available, it would require at the very least 60k weight units (to run SHA256 as a Boolean circuit), plus other costs such as accessing the stack; the current stack limit may not even be sufficient to run such a program. One can see the importance of using existing opcodes whenever possible.
We specifically look at “hash” because many modern zero-knowledge proof protocols happen to rely on hash functions.
## Find a Bitcoin-friendly proof system
Production-grade modern zero-knowledge proof systems can largely be separated into two families:
- Proofs based on pairing on elliptic curves (such as BN254)
* Examples include [Groth16](https://eprint.iacr.org/2016/260.pdf), [Plonk](https://eprint.iacr.org/2019/953.pdf), and [Marlin](https://eprint.iacr.org/2019/1047.pdf)
- Proofs based on hash functions
* Examples include [STARK](https://eprint.iacr.org/2018/046.pdf) and its many variants and implementations
Our first thought is that it is very unlikely that pairing-based proof systems will work for Bitcoin, for a few reasons.
- *Computation over a larger, less structured prime.* Pairing-based proof systems rely on an elliptic curve. For an elliptic curve to be secure, its prime field modulus must be at least around 256 bits. Oftentimes, pairing-friendly curves are constructed through explicit formulas, as in the case of a [BN](https://eprint.iacr.org/2005/133.pdf) curve. The shortcoming of this constructive approach is that we cannot choose small prime numbers, and we may need to implement techniques such as [Montgomery reduction](https://en.wikipedia.org/wiki/Montgomery_modular_multiplication) to work with these unstructured large primes, which adds complexity.
- *Difficulty of simulation.* To verify a pairing-based proof, the verifier needs to perform several scalar multiplications of points on the curve plus the so-called “pairing” step, all of which are very heavy. Recall that the [sCrypt](https://test.whatsonchain.com/tx/eba34263bbede27fd1e08a84459066fba7eb10510a3bb1d92d735c067b8309dd) example far exceeds the standard block limit even though it leveraged new opcodes, such as OP_MUL, available on Bitcoin SV.
So, our focus goes to hash-based proof systems. Robin Linus recently made a [tweet](https://twitter.com/robin_linus/status/1767941121273897397) asking whether anyone has built one for Bitcoin.
![image8](https://hackmd.io/_uploads/r1O7orqRa.png)
The core idea is that, with OP_CAT, the part of the computation that relates to Merkle trees can, as mentioned above, use the existing hash opcodes, which cost very few weight units. The rest of the computation is much easier than in pairing-based proof systems, as it can involve small primes. This flexibility is key to the performance.
We find certain primes that have been used in STARKs to be Bitcoin-friendly. For a prime to be Bitcoin-friendly, it needs to satisfy one requirement: it must be smaller than 2^31. This makes sure that adding two numbers together never exceeds 2^32.
A number of prime numbers satisfy this requirement.
- M31 (2^31 - 1), which has been used in Circle STARK.
- BabyBear (15 * 2^27 + 1), which has been used in RISC Zero.
For Bitcoin, M31 and BabyBear are not too different from each other, as their arithmetic operations will be emulated in an almost identical way.
The benefit is that this allows us to use OP_ADD and OP_SUB to perform all the additions and subtractions. We can detect an overflow with “0 OP_LESSTHAN” and quickly correct it through OP_ADD and other opcodes, as the sketch below shows.
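To make this concrete, here is a small Rust sketch of M31 modular addition and subtraction in the style a script would use: only additions, subtractions, and sign checks, arranged so that no intermediate value leaves the range of a 4-byte script number. This is one plausible arrangement for illustration, not necessarily how any particular codebase implements it.

```rust
const M31: i64 = (1 << 31) - 1; // the Mersenne prime 2^31 - 1

// a + b mod M31, computed as a - (M31 - b) so every intermediate value fits
// in a signed 4-byte script number. The "< 0" test is the script's
// "0 OP_LESSTHAN" overflow check; the correction is a single OP_ADD.
fn add_m31(a: i64, b: i64) -> i64 {
    let t = a - (M31 - b);
    if t < 0 { t + M31 } else { t }
}

// a - b mod M31, with the same negative-check-and-correct pattern.
fn sub_m31(a: i64, b: i64) -> i64 {
    let t = a - b;
    if t < 0 { t + M31 } else { t }
}
```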
To multiply two numbers, since OP_MUL is still disabled, we need to implement multiplication through double-and-add, as discussed earlier. BitVM has [implemented](https://github.com/BitVM/BitVM/pull/12) this technique.
Readers may wonder whether another prime, Goldilocks64 (2^64 - 2^32 + 1), is Bitcoin-friendly. Since this prime is 64 bits, its arithmetic cannot easily be emulated with the existing Bitcoin opcodes, so we feel that Goldilocks64 can be excluded.
A remaining question is: given that STARKs have so many variants, which one should we choose? This is really a question about which arithmetization the proof system should use.
One candidate solution is R1CS, which has been used in [Groth16](https://eprint.iacr.org/2016/260.pdf). The [Fractal](https://eprint.iacr.org/2019/1076.pdf) paper (EUROCRYPT 2019) shows how to handle R1CS in hash-based proof systems and makes the verifier as efficient as possible.
Another candidate solution is textbook Plonk, in which each gate has three wires, two for inputs and one for the output, and each gate performs only one simple combination of the two inputs. We feel that this may be simpler than R1CS in terms of the final verifier cost.
Some work is needed to experiment with the proposal above, but based on our understanding of the ZKP landscape, our discussion should be reasonably comprehensive.
One additional remark that may be helpful to practitioners: to achieve a sufficient level of security in hash-based proof systems, there are a few parameters to play with. Starkware uses a technique called [grinding](https://eprint.iacr.org/2021/582.pdf), which adds a proof-of-work component to boost security; with it, one may be able to make fewer queries or use a more verifier-friendly rate for the encoding while keeping the same level of security. Since verifying the grinding is easy for a verifier on Bitcoin, we particularly like this idea. A recent work, [STIR](https://eprint.iacr.org/2024/390), is very relevant, as it provides a technique that can further reduce the verification cost.
Last but not least, one benefit of STARKs over pairing-based proof systems is that STARKs are believed to be post-quantum secure, while pairing-based proof systems are known not to be post-quantum. In other words, STARKs may be a more suitable long-term choice.
There is a lot of work to do to figure out the concrete cost of a verifier, and we believe one may actually need to learn it by implementing it. But we think the overall roadmap is clear; it is only a matter of time. Note that we still have the concern that, in the end, even this more Bitcoin-friendly proof system may not fit into a transaction, in which case we need more optimization. This article will discuss some options later.
Now, one may wonder whether Bitcoin-friendliness, which focuses on the “verifier”, comes at a hefty cost to the “prover”. This is a potential issue of the textbook Plonk proof system mentioned above: because of its simplistic structure, the same computation may be represented with much more redundancy and be slower to prove (think of a CPU with a simpler instruction set, where one needs many more instructions to achieve the same functionality). The gap in prover efficiency between the textbook prover and a highly optimized one (such as Halo2 used in Scroll, Polygon Miden, or RISC Zero) can be significant.
This can be solved by a design philosophy called “[SHARP](https://dci.mit.edu/zksharks)”, proposed by Mariana Raykova, Eran Tromer, and Madars Virza back in 2019. The idea is that one can get the best tradeoff between prover efficiency and verifier efficiency by using two (or more) proof systems:
- A few proof systems whose prover efficiency is good but whose verifier efficiency may be suboptimal (often because the arithmetization is complicated but therefore expressive).
- One final proof system whose prover efficiency may be mediocre but whose verifier efficiency is good.
These proof systems are put together through [recursive verification](https://eprint.iacr.org/2014/595.pdf). To prove that something is true, a proof system can either verify the computation itself or, instead, verify an existing proof that asserts the computation is true. For computation that is large enough, the latter is more efficient because, generally speaking, verifying a proof is asymptotically easier than verifying the corresponding computation.
This has been used in production. The following picture shows how [Polygon zkEVM](https://docs.polygon.technology/zkEVM/architecture/zkprover/stark-recursion/proving-architecture/) uses different proof systems to produce a single proof that verifies a large amount of computation. The several recursive provers shown in the picture are not all the same; for example, the final stage uses a recursive prover that switches to a different finite field, to make the proof verifier-efficient for the final SNARK stage.
![image3](https://hackmd.io/_uploads/H1IRjr906.png)
So, what we will do is run the actual proof verification using the prover-efficient proof systems, aggregate those proofs into a single proof using recursive verification (also with a prover-efficient proof system), and finally use a verifier-efficient (more specifically, Bitcoin-friendly) proof system to convert the result into a Bitcoin-friendly proof.
This allows us to avoid the penalty on prover efficiency caused by Bitcoin-friendliness.
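The Rust skeleton below illustrates this pipeline. Every type and function here is a hypothetical placeholder for a concrete proof system, not a real API; the bodies are stand-ins.

```rust
// Hypothetical placeholder types; not a real proving library.
struct Computation(Vec<u8>);
struct Proof(Vec<u8>);

// Layer 1: a prover-efficient system proves each heavy application computation.
fn prove_fast(c: &Computation) -> Proof {
    Proof(c.0.clone()) // stand-in body
}

// Layer 2: recursively verify and aggregate many proofs into one, still
// using a prover-efficient system.
fn aggregate(proofs: Vec<Proof>) -> Proof {
    Proof(proofs.into_iter().flat_map(|p| p.0).collect()) // stand-in body
}

// Layer 3: wrap the aggregate in a Bitcoin-friendly system whose verifier
// is cheap to express in Bitcoin script.
fn wrap_bitcoin_friendly(inner: Proof) -> Proof {
    inner // stand-in body
}

fn pipeline(apps: Vec<Computation>) -> Proof {
    let proofs: Vec<Proof> = apps.iter().map(prove_fast).collect();
    wrap_bitcoin_friendly(aggregate(proofs))
}
```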
## Transaction introspection using OP_CHECKSIG
If the solution works out, one may be able to implement very complicated covenants with the help of ZK proofs, as it enables general-purpose transaction introspection.
[Covenants](https://bitcoinops.org/en/topics/covenants/) in Bitcoin refer to the ability of a script pubkey to enforce rules on a transaction that wants to spend a UTXO under that script pubkey. This relies heavily on OP_CAT. The idea is that the script pubkey can use OP_CHECKSIG to verify a signature over the current transaction, which includes information such as the list of outputs, and the script can invalidate the transaction if it does not want to allow such outputs.
A simple example, which demonstrates the power of OP_CAT and OP_CHECKSIG, is from a [tweet](https://twitter.com/rot13maxi/status/1748906834365227258) by Rijndael (@rot13maxi).
![image7](https://hackmd.io/_uploads/rylG3Hq0a.png)
@rot13maxi basically creates a UTXO that requires the spending transaction to follow a given format. This example is great because it relies only on existing opcodes plus OP_CAT, and the script is short. The limitation is that it may not support more complicated rules or transactions that need more flexibility.
There have been efforts from [sCrypt](https://medium.com/coinmonks/stateful-smart-contracts-on-bitcoin-sv-c24f83a0f783) to use covenants to pass state from one transaction to another on Bitcoin SV, thereby making scripts stateful. The idea here is similar: use OP_CAT and OP_CHECKSIG, together with the [Schnorr trick](https://medium.com/blockstream/cat-and-schnorr-tricks-i-faf1b59bd298) that Andrew Poelstra discussed.
If we can practically verify ZK proofs on Bitcoin, and such verification becomes efficient enough, we can have general-purpose transaction introspection, as follows (a sketch of the proof statement appears after the list).
- Compute a hash of the transaction.
- Use OP_CHECKSIG to verify this hash against the transaction, via the Schnorr trick.
- Produce a zero-knowledge proof that takes the hash as the public input, asserting that:
    * Let **b** be the transaction body under this signature.
    * The signature is a valid signature over the message **b**.
    * **b** satisfies certain rules.
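Here is a minimal Rust sketch of the statement the proof would assert. The names (`TxBody`, `satisfies_rules`) are hypothetical, and the hash is simplified to a single SHA256; a real implementation would follow the BIP341 sighash rules that OP_CHECKSIG actually enforces.

```rust
use sha2::{Digest, Sha256};

struct TxBody(Vec<u8>); // serialized transaction data under the signature

// The relation proven in zero knowledge. Public input: the hash checked
// on-chain by OP_CHECKSIG (simplified here; the real sighash uses BIP341
// tagged hashing). Witness: the transaction body b.
fn introspection_statement(
    sighash: [u8; 32],
    b: &TxBody,
    satisfies_rules: impl Fn(&TxBody) -> bool, // application-specific rules
) -> bool {
    let recomputed: [u8; 32] = Sha256::digest(&b.0).into();
    recomputed == sighash && satisfies_rules(b)
}
```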
The benefit that ZK brings is in the words “satisfies certain rules”. Because zero-knowledge proofs today are extremely general-purpose, they can enforce essentially any rules that a regular computer can check. For example, [RISC Zero](https://www.risczero.com/) is a general-purpose virtual machine that runs RISC-V instructions; as long as a program can be compiled into RISC-V, one can compute a proof of its execution. Nowadays, people can easily cross-compile Rust and C++ code into RISC-V (see how the game [Doom](https://www.risczero.com/blog/when-the-doom-music-kicks-in) can be verified in ZK today). Below is a table of RISC Zero’s proof generation performance: an M2 MacBook can prove RISC-V CPU execution at a speed of 100kHz, and with cluster computing this can scale up linearly. This allows us to handle many complicated programs that were not thought possible before in the Bitcoin ecosystem.
![image2](https://hackmd.io/_uploads/SJAv3HqA6.png)
General-purpose ZK can do more. For example, with transaction introspection, the ZK proof can access the txids of the inputs, and since a txid is a hash of the information in the previous transaction, this allows the ZK proof to read the previous transaction.
- Let **prev[0]** be the previous transaction that creates the 1st input.
- Verify that **prev[0]** corresponds to the txid **h[0]** that appears in the new transaction’s body.
- Use data in **prev[0]** as a reliable reference to a previous transaction.
Since each transaction can have more than one input, each coming from a different transaction, there can be **prev[1]**, **prev[2]**, **prev[3]**, … representing the transactions behind the other inputs.
It does not stop there. Now that we have access to the previous transaction, we can access the previous transaction of the previous transaction, and so on, all the way back to the genesis block. This allows us to introspect any part of the Bitcoin history that is an ancestor of the present transaction.
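The step that anchors each hop backward is the txid check; here is a minimal Rust sketch (serialization details elided; a txid is the double SHA256 of the legacy-serialized transaction, without witness data).

```rust
use sha2::{Digest, Sha256};

// Bitcoin's double SHA256.
fn sha256d(data: &[u8]) -> [u8; 32] {
    Sha256::digest(Sha256::digest(data)).into()
}

// One hop of history introspection: authenticate a claimed previous
// transaction against the txid referenced by the current transaction.
// `prev_raw` must be the legacy serialization (no witness data).
fn check_prev_tx(prev_raw: &[u8], referenced_txid: [u8; 32]) -> bool {
    sha256d(prev_raw) == referenced_txid
}
```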
There are many possible applications enabled by this design. If we think about it, the game changer is the zero-knowledge proof, which provides a few useful features.
- *General-purpose computation.* If verifying a zero-knowledge proof becomes feasible, transaction introspection will no longer be constrained by the power of Bitcoin script. Modern zero-knowledge proofs used in production can already, practically, verify programs compiled from high-level languages such as Rust and C++; these programs have a program counter, dynamic control flow, and a large random-access memory. This essentially removes the need to make Bitcoin script Turing-complete or general-purpose, because such computation can be delegated to ZK, which further reduces the network overhead.
- *Incremental computation.* Zero-knowledge proofs today can be generated in a very composable manner because we can practically verify one zero-knowledge proof inside another. If we consider a state machine, proving its state transition from A to Z can be done by verifying a proof that already covers the transition from A to Y and then continuing the computation from Y to Z. Verifying the proof from A to Y is much faster than redoing the computation from A to Y. A light client for the Bitcoin blockchain can be seen as a state machine, which means one can generate a ZK proof of the Bitcoin history and refresh it with new blocks through incremental updates.
- *Succinct verification.* A key property of zero-knowledge proofs, through recursive composition, is that the verifier overhead can be made constant regardless of how much computation the proof actually verifies. Even if a proof introspects years and years of Bitcoin history, its verification cost can be practically the same as that of a proof of something very simple. This is also why zero-knowledge proofs have been useful in the Ethereum ecosystem for building ZK-rollups.
The ability to introspect transactions can also go beyond the Bitcoin network, through the use of a [ZK bridge](https://dl.acm.org/doi/10.1145/3548606.3560652). The idea of the ZK bridge is that one can effectively run a light client of another chain, make queries about its state, and verify all of that execution in ZK. Many L1 blockchains are based on PoW or PoS, so a ZK bridge can verify the PoW or PoS to detect and reject incorrect forks, and once the correct chain is selected, it can read that blockchain in a verifiable manner, just like Bitcoin’s [Simplified Payment Verification](https://bitcoinwiki.org/wiki/simplified-payment-verification) (SPV).
Nevertheless, as mentioned before, this is only possible if zero-knowledge proofs are efficient enough to fit within all the Bitcoin script restrictions (stack size, weight units), and only if the cost of ZK proof verification on the Bitcoin network is reasonable.
We cannot yet give a conclusive answer on practicality. Robin Linus suggested that 4 million weight units are enough to do a lot of computation, but we feel that actual engineering effort is needed first. If the cost turns out to be too high, we may need to employ more optimizations, some of which we discuss now.
## Split the proof verification across multiple transactions
Suppose the ZK proof verification cannot be done within one single transaction; we then need to find creative ways to chain a number of transactions together and let each perform part of the ZK proof verification.
Several articles from sCrypt ([[1]](https://medium.com/coinmonks/turing-machine-on-bitcoin-7f0ebe0d52b1), [[2]](https://xiaohuiliu.medium.com/op-push-tx-3d3d279174c1), [[3]](https://medium.com/coinmonks/stateful-smart-contracts-on-bitcoin-sv-c24f83a0f783)), although relying on Bitcoin SV, convey the general idea of how to do so: using OP_CHECKSIG and, of course, OP_CAT, an input whose public key commits to a script can enforce what the other inputs and the outputs look like when that input is spent.
The idea is as follows. Suppose we are the prover, whose goal is to convince a verifier about a computation that has been done.
First of all, the prover can split the computation into **N** scripts. Each script performs part of the verification. Then, the prover creates its first transaction that looks as follows.
[input] => out[1], out[2], …, out[N], out[N+1]
Here, out[1] to out[N] are pay-to-taproot (P2TR) outputs that will perform the computation. out[N+1] is an output that records a public key of the current prover, so it can be a script as simple as:
OP_RETURN {prover public key}
This transaction will not be expensive: no matter how complicated the scripts behind out[1] to out[N] are, the transaction includes only a hash for each script, so it remains small.
Now, each of out[1] to out[N] performs the computation required of it, and additionally enforces the following conditions using OP_CHECKSIG and OP_CAT.
- The transaction has only one input and only one output.
- The input’s txid matches the txid recomputed from the first transaction: [input] => out[1], out[2], …, out[N], out[N+1].
- A signature over this transaction under the prover’s public key, as given in out[N+1], is required and checked by OP_CHECKSIG. This prevents other people on the Bitcoin network from spending out[1] to out[N].
- The output must be a very simple one, spendable only with a valid signature under the prover’s public key (using OP_CHECKSIG to avoid malleability). The script contains two additional data elements: a hash of the shared data, which should be the same across all of out[1], out[2], …, out[N], and an unused random stack element that facilitates the [Schnorr trick](https://medium.com/blockstream/cat-and-schnorr-tricks-i-faf1b59bd298) Andrew Poelstra discussed, since we need the hash of the preimage that OP_CHECKSIG validates to end with 0x01.
So, we create **N** transactions, each spending one of the out[i] and producing exactly one output, which we call cert[i]. Note that cert[i] contains a hash of the shared data, and this hash should be the same across all these transactions, even though they may land in different blocks. These **N** transactions look as follows.
out[1] => cert[1]
out[2] => cert[2]
…
out[N] => cert[N]
Now, assume that the prover needs to convince a verifier (a pay-to-taproot UTXO over which the prover has no control). The prover presents all the cert[i] to the verifier by creating the following transaction.
verifier-in, cert[1], cert[2], …, cert[N] => verifier-out
Some small adjustment would be made to enable the Schnorr trick to help introspection. The overall transaction flow looks as follows.
![image4](https://hackmd.io/_uploads/HyiLTHc0a.png)
verifier-in will do the following:
- Through OP_CHECKSIG and OP_CAT, obtain the txid of cert[1] to cert[N] and obtain the two outputs.
- Check that verifier-out is as expected.
- Using the txid of cert[1], cert[2], …, cert[N], recover the transactions out[1] => cert[1], out[2] => cert[2], …, out[N] => cert[N], thereby obtaining the txid of out[1], out[2], …, out[N].
- Check that out[1], out[2], …, out[N] share the same txid (i.e., they come from the same parent transaction).
- Using the txid of out[1] to out[N], recover the transaction [input] => out[1], out[2], …, out[N], out[N+1].
- Check that correct pay-to-taproot hashes that correspond to an accepted ZK proof verification procedure are used in out[1], out[2], …, out[N].
- Check that cert[1], cert[2], …, cert[N] contain the correct shared data. The shared data is necessary because, without this check, the prover could supply any proof, possibly about something entirely irrelevant to what the verifier is looking for.
It is important that we use Taproot, since OP_CHECKSIG behaves very differently in Taproot and non-Taproot transactions: it now uses Schnorr signatures, which is crucial to the Schnorr trick that enables transaction introspection, and the transaction hash is computed differently, as it no longer includes the script code.
We need to make a disclaimer that we have not actually implemented the idea above, and a full implementation is needed to see whether all the scripts fit within their limits (this, of course, would require a full implementation of the STARK prover). In particular, we want to see whether all the transactions here can be made “standard” (following the soft limits set in Bitcoin Core’s [policy.h](https://github.com/bitcoin/bitcoin/blob/master/src/policy/policy.h)), so that we do not need to coordinate with miners to submit them. Coordinating with miners is not impossible. One such service is [Slipstream](https://slipstream.mara.com/) by Marathon Digital: “Slipstream is a service for directly submitting large or non-standard Bitcoin transactions to Marathon… If your transaction meets Marathon’s minimum fee threshold and conforms to the consensus rules of Bitcoin, Marathon adds it to its mempool for mining.”
Another consideration is that we want to lower the waiting time for the proof to be settled. The design above tries to facilitate this by allowing all the proof generation transactions to be submitted to the P2P network at the same time.
Readers may wonder whether submitting all the transactions at once is safe, given that some of them depend on others: would a transaction fail if its predecessor has not been finalized, and can a miner put dependent transactions in the same block, or must a transaction only spend outputs from a previous block?
This, however, is not a major issue, because Bitcoin allows such flexibility. Indeed, Bitcoin’s [Child Pays for Parent (CPFP)](https://bitcoinops.org/en/topics/cpfp/) technique already leverages the fact that transactions with dependencies can be submitted to the mempool together. A transaction can spend the outputs of a previous transaction as long as the consensus rule is satisfied:
> Bitcoin consensus rules require that the transaction which creates an output must appear earlier in the block chain than the transaction which spends that output—including having the parent transaction appear earlier in the same block than the child transaction if both are included in the same block.
So, one can submit all these transactions at once and wait for the P2P network to resolve them. Note that the transactions have been designed to make this easy.
- The first transaction, [input] => out[1], out[2], …, out[N], out[N+1], depends only on the proof system and the public key of the prover, so the prover can create it ahead of time. If not, the prover can also submit it together with the other transactions.
- The next **N** transactions, out[i] => cert[i], depend only on the first transaction and do not interfere with each other. Miners in the Bitcoin network can accept these transactions in an arbitrary order, and we imagine that, with a reasonable fee, miners are encouraged to include them as soon as possible.
- The last transaction, verifier-in, cert[1], cert[2], …, cert[N] => verifier-out, can be submitted to the P2P network together with the other transactions, and it can be expected to execute once all the **N** transactions have been confirmed.
There is always the option of working with miners to get a transaction accepted, and we imagine there could eventually be a blockspace reservation system for more predictable latency and outcomes, which may save us from doing more hacking around.
## Fraud proofs over split verification
Another idea is to borrow from optimistic rollups and use fraud proofs, similar to what BitVM does, starting from the split-verification design we just presented. Fraud proofs avoid the need to actually run the computation on-chain; instead, they rely on third parties, the challengers in the public, to invalidate any incorrect claims on the Bitcoin network.
An important subtlety is that fraud proofs introduce another requirement: data publication. In order to convince a verifier that a claim is true because no fraud proof has been generated by the public, the verifier needs to be confident that all data necessary for the public to challenge the claim has been provided, specifically, published on the Bitcoin network.
We present the high-level idea, and we admit that a detailed implementation is needed to verify whether this idea works out. The prover, in the first step, creates a transaction as follows.
[input] => out[1], out[2], …, out[N], out[N+1], da[1], da[2], …, da[N]
where each of out[1] to out[N] is a pay-to-taproot output that can be spent in one of two ways:
- Either, for out[i], supply the witness described in da[i] together with the public input of the ZK proof, and show that this witness does not verify, or that the data in da[i] is inconsistent with the hashes specified in out[N+1]. If this check passes, the script uses OP_CHECKSIG and OP_CAT, together with the Schnorr trick, to force the output to be a simple payment to the challenger. It uses the same transaction introspection technique to read out[N+1] in the parent transaction. This spends the output and therefore prevents the same output from being spent again.
- Or, require a signature under the prover’s public key (which can be encoded in this tapleaf of out[i]). This tapleaf is the happy path, and it makes no additional requirements on the transaction that spends this output.
The special output, out[N+1], will encode information such as:
```
OP_RETURN {public input} {hash of da[1]} {hash of da[2]} … {hash of da[N]}
```
And each of da[1] to da[N] is a pay-to-taproot output that can only be spent by providing the preimage of the corresponding hash given in out[N+1]. Its script enforces the following:
- It uses the transaction introspection technique to go back to the parent transaction and read the hashes in out[N+1].
- It checks that the initial stack (after proper concatenation with OP_CAT) contains data that hashes into the corresponding hash in out[N+1]. Note that, due to the stack element size limit, OP_SHA256 can only hash up to 520 bytes of data at a time, so the computation of this hash needs to be customized: split the data into multiple chunks, hash them individually, and then hash the resulting hashes iteratively, Merkle-tree style (see the sketch below).
- It produces a very simple output, spendable only with a valid signature under the prover’s public key (using OP_CHECKSIG to avoid malleability). We call this output timelock[i]. This output carries a timelock, meaning it can only be spent after the transaction has been confirmed for a sufficiently long time. The timelock value allows some randomness to facilitate the Schnorr trick.
- The spending of da[i] therefore publishes the necessary data on the Bitcoin blockchain.
So one can perform transactions of the form:
da[i] => timelock[i]
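Here is a minimal Rust sketch of the chunked hashing just mentioned, assuming OP_CAT is available: hash each 520-byte chunk individually, then fold the 32-byte chunk hashes pairwise, Merkle-tree style. The chunking scheme is illustrative; a real script would fix the exact tree shape in advance.

```rust
use sha2::{Digest, Sha256};

// Hash arbitrary-length data under the 520-byte stack element limit.
fn chunked_hash(data: &[u8]) -> [u8; 32] {
    // Hash each 520-byte chunk individually (script: one OP_SHA256 per chunk).
    let mut hashes: Vec<[u8; 32]> = data
        .chunks(520)
        .map(|chunk| Sha256::digest(chunk).into())
        .collect();
    // Fold the 32-byte hashes pairwise (script: OP_CAT, then OP_SHA256);
    // an odd leftover hash is simply rehashed alone at each level.
    while hashes.len() > 1 {
        hashes = hashes
            .chunks(2)
            .map(|pair| {
                let mut hasher = Sha256::new();
                hasher.update(&pair[0]);
                if pair.len() == 2 {
                    hasher.update(&pair[1]);
                }
                hasher.finalize().into()
            })
            .collect();
    }
    hashes[0]
}
```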
Now, the prover is asked to submit the following transactions to the Bitcoin blockchain:
[input] => out[1], out[2], …, out[N], out[N+1], da[1], da[2], …, da[N]
da[1] => timelock[1]
da[2] => timelock[2]
......
da[N] => timelock[N]
There could be an alternative design that avoids the transaction introspection in da[i] by deferring it to the verifier, which may help lower the cost since da[1], …, da[N] all share the same txid. We leave this for future research.
This design allows a challenger to submit a fraud proof if anything does not look right. If the challenger, after obtaining the data from the Bitcoin network, finds some out[j] to be challengeable, then by the construction of out[j], the challenger can spend out[j] using the first tapleaf, which prevents the prover from later spending out[j] through the second tapleaf. The challenger does so by submitting a transaction as follows.
out[j] => $
Here $ represents the deposit that the challenger can seize as a reward for challenging. To summarize, challenging works as follows.
![image6](https://hackmd.io/_uploads/BkhHABcRT.png)
If enough time has passed and nobody has been able to challenge any of the outputs out[1] to out[N], the prover can now submit the proof to the verifier.
verifier-in, out[1], …, out[N], timelock[1], timelock[2], …, timelock[N] => verifier-out
The happy path of the workflow would now look like this.
![image5](https://hackmd.io/_uploads/HJIPRSqCT.png)
The verifier-in will perform similar checks:
- Through OP_CHECKSIG and OP_CAT, obtain the txid of timelock[1] to timelock[N] and obtain the two outputs.
- Check that verifier-out is as expected.
- Using the txid of timelock[1], timelock[2], …, timelock[N], recover the transactions da[1] => timelock[1], da[2] => timelock[2], …, da[N] => timelock[N], thereby obtaining the txid of da[1], da[2], …, da[N].
- Using the txid of out[1] to out[N], recover the transaction [input] => out[1], out[2], …, out[N], out[N+1], da[1], da[2], …, da[N].
- Check that out[1], out[2], …, out[N], da[1], da[2], …, da[N] have the same txid.
- Check that the correct pay-to-taproot hashes, corresponding to an accepted ZK proof verification procedure, are used in out[1], out[2], …, out[N], and that an accepted data publication procedure is used in da[1], da[2], …, da[N].
- Check that the public input data in out[N+1] is relevant to the verifier.
- Check that da[1], …, da[N] have a correct relative timelock in place, which ensures that out[1], …, out[N] were available for challenge and that da[1], …, da[N] have been published for long enough. Additional rules may govern how long the relative timelock needs to be, depending on the use case and taking into account potential network congestion on Bitcoin.
## Fraud proofs with BitVM
Another solution, which may be more general-purpose and programmable in a higher-level language, is to use [BitVM](https://github.com/BitVM/BitVM).
We want to emphasize, based on our conversations with Robin Linus, that the BitVM design currently in the repository is very different from the version presented in the whitepaper that many of us are familiar with. Readers are encouraged to check out the latest BitVM design in the GitHub repository. This [graph](https://raw.githubusercontent.com/BitVM/BitVM/24bf29304d0a951106d907e7bcd43f4a37a8feb7/docs/bitVM_graph_v2.svg) summarizes the new design of BitVM, which should be self-explanatory.
If you go to the BitVM repository, you will find that today’s BitVM resembles [a RISC-V VM](https://github.com/BitVM/BitVM/blob/main/src/bitvm/constants.rs#L23C34-L23C39) that tries to emulate the riscv32i standard. BitVM supports 20 instructions: ADD, SUB, MUL, AND, OR, XOR, ADDI, SUBI, ANDI, ORI, XORI, JMP, BEQ, BNE, RSHIFT1, SLTU, SLT, SYSCALL, LOAD, and STORE. This instruction set is indeed sufficient for Turing-complete computation.
- It includes logic and flow control such as JMP, BEQ, and BNE. Specifically, remember that Bitcoin script runs on a stack machine; one can use this to implement function calls.
- Basic arithmetic operations over u32 have been emulated in Bitcoin script. Note that with SLTU, it is possible to simulate u64 integer operations, as in RISC Zero’s VM.
- An addressable memory is available.
Each RISC-V CPU cycle can be rerun and challenged in Bitcoin script, as the toy step function below illustrates.
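To make the execution model concrete, here is a toy step function for a machine of this shape. The types and the instruction subset are illustrative; this is not BitVM’s actual code or instruction encoding.

```rust
// A toy riscv32i-style machine state and a few illustrative instructions.
#[derive(Clone)]
struct State {
    pc: u32,          // program counter
    regs: [u32; 32],  // general-purpose registers
    mem: Vec<u32>,    // word-addressable memory
}

enum Insn {
    Add { rd: usize, rs1: usize, rs2: usize },
    Beq { rs1: usize, rs2: usize, target: u32 },
    Load { rd: usize, addr: usize },
    Store { rs: usize, addr: usize },
}

// Execute one cycle. In the fraud-proof setting, a single disputed cycle
// like this is what ultimately gets re-run inside Bitcoin script.
fn step(mut s: State, insn: &Insn) -> State {
    match insn {
        Insn::Add { rd, rs1, rs2 } => {
            s.regs[*rd] = s.regs[*rs1].wrapping_add(s.regs[*rs2]);
            s.pc = s.pc.wrapping_add(4);
        }
        Insn::Beq { rs1, rs2, target } => {
            s.pc = if s.regs[*rs1] == s.regs[*rs2] {
                *target
            } else {
                s.pc.wrapping_add(4)
            };
        }
        Insn::Load { rd, addr } => {
            s.regs[*rd] = s.mem[*addr];
            s.pc = s.pc.wrapping_add(4);
        }
        Insn::Store { rs, addr } => {
            s.mem[*addr] = s.regs[*rs];
            s.pc = s.pc.wrapping_add(4);
        }
    }
    s
}
```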
A challenger can challenge an execution in BitVM by showing that, in a specific RISC-V CPU cycle of the execution, one of the following six faults (i.e., “fraud”) happened:
- faulty current program counter
- faulty next program counter
- faulty instruction
- faulty read of operand 1
- faulty read of operand 2
- faulty write
If no cycle can be challenged, then by the definition of fraud proofs, the execution must be correct. A challenger can search for a discrepancy by locally running the same BitVM program on the same input data and then engaging in a challenge-response game that pins down the discrepancy on the Bitcoin network, as sketched below.
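Here is a hedged sketch of that search: assuming the prover has committed to one state commitment per cycle, the challenger bisects to the first cycle where the committed trace diverges from the locally recomputed one. This is the classic bisection game in Rust form, not BitVM’s literal protocol.

```rust
// Find the first cycle whose committed state diverges from the local
// re-execution. `committed[i]` and `local[i]` are state commitments after
// cycle i; index 0 is the shared initial state, so committed[0] == local[0].
fn first_faulty_cycle(committed: &[[u8; 32]], local: &[[u8; 32]]) -> Option<usize> {
    let mut lo = 0; // invariant: committed[lo] == local[lo]
    let mut hi = committed.len() - 1; // candidate: committed[hi] != local[hi]
    if committed[hi] == local[hi] {
        return None; // final states agree: nothing to challenge
    }
    while hi - lo > 1 {
        let mid = (lo + hi) / 2; // each round maps to one on-chain query/response
        if committed[mid] == local[mid] {
            lo = mid;
        } else {
            hi = mid;
        }
    }
    Some(hi) // the single cycle lo -> hi can now be re-run in script
}
```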
This therefore provides us with a very simple strategy, as follows.
- Compile the verifier to BitVM’s dialect of RISC-V, potentially reusing an existing RISC-V customization effort such as the [Valida LLVM compiler](https://www.lita.foundation/infrastructure#compiler).
- Create the BitVM execution claim as a taproot and publish it on-chain.
- The taproot is now ready to be challenged.
Note that with BitVM, the definition of Bitcoin-friendliness changes completely. Previously, we wanted the verifier to be small and efficient enough to run directly in Bitcoin script, fitting within all the stack limits and weight unit limits.
None of this matters for BitVM. In fact, since proof generation for the trace can be fairly efficient and parallelized, the proof generation cost is less of a concern, and the cost to submit a challenge is almost constant. BitVM does not have to use OP_CAT, but without OP_CAT the overhead will be very high, as the cost related to hash functions goes up significantly.
BitVM is, however, not without overhead. Like an optimistic rollup, the proof needs a withdrawal period to allow challengers to come in. Notice that a fully on-chain challenge-response can require tens of round trips between the prover (called Paul in BitVM) and the challenger (called Vicky), and since Bitcoin has a block time of 10 minutes, this can take quite a long time. It is also somewhat unclear what would happen if many challengers want to challenge at the same time, and whether that would affect latency and finality.
![image17](https://hackmd.io/_uploads/Bk5ek8qA6.png)
However, many optimistic rollups on Ethereum have a withdrawal period of 7 days, so there is still enough time for the challenge-response to happen; it just does not work for use cases where the withdrawal period has to be short. We also need to make sure that multiple challengers can work independently and in parallel. This prevents a stalling challenger from blocking other serious challengers, and it avoids requiring the prover to commit additional security deposits when handling multiple challengers at the same time.
A recent approach to lowering the impact of the withdrawal period is to use restaking protocols like [EigenLayer](https://www.eigenlayer.xyz/) and [Babylon](https://babylonchain.io/) to recruit validators who pre-verify the proof and announce the result. [AltLayer](https://altlayer.io/) has been working to provide a bundled solution. This works in practice, but it does bring in additional assumptions.
We are watching the development of BitVM closely. As a general-purpose RISC-V VM, BitVM may facilitate the development of programmability on the Bitcoin network. BitVM is still under development, but we think the overall idea makes sense: its non-interactive close counterparts, [RISC Zero](https://www.risczero.com/) and [Valida](https://github.com/valida-xyz/valida), have been deployed in practice.
## Proof aggregation layer
A concern that people have, especially among core Bitcoin users, is that the increasing number of Bitcoin applications will oversubscribe Bitcoin’s resources, particularly after the Taproot upgrade, which significantly relaxed the script size limits.
There have been efforts and campaigns in the Bitcoin community to restrict undesirable uses of Bitcoin network resources. A website called [“WTF happened in February 2023?”](https://wtfhappenedinfeb2023.com/) represents a community proposal for nodes to voluntarily filter out some transactions, particularly those due to inscriptions and BRC-20, as spam.
Although verifying ZK proofs on Bitcoin is less likely to be classified as spam, since it is genuinely useful, a dozen applications submitting ZK proofs for verification frequently could still contribute to network congestion. Last year, it was [reported](https://cryptoslate.com/bitcoin-network-congestion-eases-as-mempool-clears-in-february/) that the Bitcoin mempool, at its peak, contained 194,374 transactions. This is undesirable: while a transaction sits in the mempool, it has not settled into the Bitcoin blockchain, so it either has to pay a much higher fee or just keep waiting.
So, when designing ZK proof verification on Bitcoin, controlling the overhead it brings to the Bitcoin network is an important design consideration.
We propose adding a proof aggregation layer. With this aggregation layer, different applications can “combine” their proofs and submit a single proof to the Bitcoin network. This has a number of benefits.
- It lowers the overall cost for the Bitcoin network to verify ZK proofs, which can be as low as if only one application were submitting.
- It lowers the cost for different applications to use the Bitcoin network and lets them better trade off cost against publication frequency, by splitting the cost among applications.
- If fraud proofs are used, it helps the challengers, because they only need to challenge one proof for all the applications.
In the Ethereum ecosystem, a few projects have been working on proof aggregation, since the cost of settling ZK proofs on Ethereum is also high; these include [Nebra](https://www.nebra.one/) and [Aligned Layer](https://alignedlayer.com/).
When ZK is ready, it may also help relieve network pressure on Bitcoin by offering a verifiable and decentralized storage layer, just like what [Celestia](https://celestia.org/) and [Nubit](https://www.nubit.org/) want to do. Nubit uses the following architecture, which leverages Bitcoin for staking to make the storage layer decentralized without dumping the data onto the Bitcoin network. ZK proofs can be helpful here because they can enforce that storage nodes have honestly stored the data, and they can enforce the staking and slashing mechanisms that are crucial to making the storage layer decentralized and reliable.
![image11](https://hackmd.io/_uploads/H1o5yL9Ap.png)
Such a layer can provide data availability for fraud proofs and optimistic rollups, where the data only needs to be available temporarily rather than permanently, and thus should not be stored on Bitcoin. It can also help BRC-20 and other inscription protocols avoid posting their data on the Bitcoin network, which resolves a major concern of the Bitcoin community.
A data availability layer secured by Bitcoin consensus through ZK can lower the cost of creating BRC-20 assets, make BRC-20 more decentralized, and avoid unnecessary overhead on Bitcoin.
## Tools and resources
The future of Bitcoin relies on developers to build on it, and we sincerely believe that there are a lot of things to do, a lot of topics to explore, and a lot of applications to build.
People who are interested in building programmability on Bitcoin can refer to the BitVM codebase.
- https://github.com/BitVM/BitVM
The BitVM team is also working on a Rust programming environment and toolkit for working with Bitcoin scripts.
Techniques to use OP_CAT to realize complicated functionalities in Bitcoin have been discussed by Blockstream and others.
- https://medium.com/blockstream/cat-and-schnorr-tricks-i-faf1b59bd298
- https://rusty.ozlabs.org/2023/10/20/examining-scriptpubkey-in-script.html
sCrypt, although relying on features available in Bitcoin SV but not Bitcoin, can provide insights on how to use OP_CAT and other potential opcodes to realize very complicated features.
- https://scrypt.io/
A number of projects have been working with Polyhedra to extend modular infrastructure for Bitcoin.
- Babylon: https://babylonchain.io/
- Syscoin: https://syscoin.org/
- Merlin Chain: https://merlinchain.io/
- Bitlayer: https://www.bitlayer.org/
- B² Network: https://www.bsquared.network/
- BEVM: https://www.bevm.io/
- LumiBit: https://lumibit.xyz/
- bitSmiley: https://www.bitsmiley.io/
- Particle Network: https://particle.network/
## Acknowledgments
Special thanks to Weikeng Chen from L2 Iterative (https://www.l2iterative.com/), an investor in and technical partner to Polyhedra Network, and [Yupeng Zhang](https://zhangyp.web.illinois.edu/) from the University of Illinois Urbana-Champaign for research collaboration on the Bitcoin ecosystem and zero-knowledge proofs.