# Hello, OlaVM!
![](https://hackmd.io/_uploads/ry7FuqkLo.png)
## TL;DR
1. We are building the first ZKVM based on a parallel execution architecture, aiming for higher TPS through ZK-friendly design and improvements to the ZK algorithms. Its technical features are as follows:
* Fast proof generation
* ZK-friendly: smaller circuit scale and simplified underlying constraint units
* Fast ZK: further optimization on Plonky2
* Fast execution: Utilizing parallel execution to significantly shorten the proof generation time
2. Current progress:
* In July 2022, we released the OlaVM Whitepaper.
* In November 2022, we completed the instruction-set design and development and implemented the OlaVM execution module. The code is available at https://github.com/Sin7Y/olavm and is continuously updated.
* In pursuit of the most efficient ZK algorithm, we have completed the circuit design and algorithm research for Plonky2. See https://github.com/Sin7Y/plonky2/tree/main/plonky2/designs to learn more about Plonky2's design; we will optimize and improve on it in our next step. Please stay tuned.
## What are we up to?
OlaVM is the first ZKVM to introduce parallel VM execution. It combines the technical strengths of two approaches, ZK(E)VMs and parallel-execution VMs, to obtain faster execution and faster proof generation, and thereby the highest overall system TPS.
![](https://hackmd.io/_uploads/r1NXaa9qj.jpg)
There are two main reasons why Ethereum has a low transactional throughput:
1. Consensus process: each node executes all the transactions repeatedly to verify the validity of the transactions.
2. Transaction execution: transaction execution is single-threaded.
To solve the first problem while retaining programmability, many projects have pursued ZK(E)VM research: transactions are executed off-chain, and only state verification is left on-chain (there are other scaling schemes as well, but we won't go into them in this post). To improve the system's throughput, **proofs must be generated as fast as possible**. To solve the second problem, Aptos, Solana, Sui, and other new public chains introduced virtual machines with parallel execution (PE-VMs), along with faster consensus mechanisms, to improve overall TPS.
At this stage, the bottleneck that limits the TPS of a ZK(E)VM system is proof generation. When parallel proving is used to increase throughput, the faster a block is produced, the earlier its proof generation can start; and as ZK algorithms evolve and hardware acceleration improves, proof generation time shrinks and the benefit of this parallelism grows.
## How do you improve the system's throughput?
Increasing the speed of proof generation is the single most important lever for increasing the overall throughput of the system, and there are two ways to accelerate it: keeping the circuit scale to a minimum, and using the most efficient ZK algorithm. The latter can be broken down further: first, tuning the algorithm itself, such as selecting a smaller field; second, improving the external execution environment, such as using specialized hardware to accelerate and scale the computation.
1. **Keeping your circuit scale to a minimum**
As described above, the cost of proof generation is strongly related to the overall constraint size n; if you can greatly reduce the constraint size, generation time drops accordingly. This is achievable by cleverly combining different design schemes to keep the circuit as small as possible.
* **We're introducing a module we'll be referring to as "Prophet"**
There are many definitions of a prophet; ours focuses on "predict, then verify". Given some complex calculation, we do not want to compute it with the VM's own instruction set, because that may consume many instructions, inflating the VM's execution trace and the final constraint scale. Instead, the Prophet module, a built-in component, performs the calculation for us and sends the result to the VM, which then performs a cheap legitimacy check to verify it. The Prophet is a set of built-in functions for specific computations, such as division, square root, cube root, etc. We will gradually enrich the Prophet library based on real-world scenarios to maximize the constraint reduction for the most common complex computing scenarios.
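The "predict, then verify" idea can be sketched with integer square root. This is an illustrative Rust sketch, not OlaVM's actual Prophet API: `prophet_sqrt` stands in for the off-circuit built-in, and `verify_sqrt` stands in for the cheap in-VM legitimacy check.

```rust
// Hypothetical sketch of the Prophet's "predict, then verify" pattern.
// `prophet_sqrt` and `verify_sqrt` are illustrative names, not OlaVM APIs.

/// Off-circuit "prophet": computes the integer square root natively.
/// It may use as many CPU instructions as it likes -- none of them
/// appear in the VM's execution trace.
fn prophet_sqrt(x: u64) -> u64 {
    if x == 0 {
        return 0;
    }
    let mut s = (x as f64).sqrt() as u64;
    // Correct possible floating-point rounding at the boundaries.
    while s.checked_mul(s).map_or(true, |sq| sq > x) {
        s -= 1;
    }
    while (s + 1).checked_mul(s + 1).map_or(false, |sq| sq <= x) {
        s += 1;
    }
    s
}

/// In-circuit check: the VM only verifies s*s <= x < (s+1)^2, i.e. a
/// couple of multiplications and comparisons, instead of running a
/// whole square-root loop in VM instructions.
fn verify_sqrt(x: u64, s: u64) -> bool {
    let (x, s) = (x as u128, s as u128);
    s * s <= x && x < (s + 1) * (s + 1)
}

fn main() {
    let x = 10_000_000_002u64;
    let s = prophet_sqrt(x); // predicted outside the VM
    assert!(verify_sqrt(x, s)); // verified inside the VM
    println!("sqrt({x}) = {s}");
}
```

The asymmetry is the point: computing a square root takes a loop, but checking one takes two multiplications, so only the cheap check contributes to the constraint scale.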
* **ZK-friendly**
For complex calculations, the Prophet module helps reduce the overall size of the virtual machine's execution trace; beyond that, it is preferable for the calculations themselves to be ZK-friendly. Therefore, our architecture is designed around ZK-friendly operations (choice of hash algorithms and so on); some of these optimizations are present in other ZK(E)VMs as well. Besides the computational logic the VM itself performs, other operations also need to be proven, such as RAM operations. In a stack-based VM, POP and PUSH operations have to be executed on every access; at the verification level, the validity of these operations must still be proven, so they form independent tables whose constraints verify the stack operations. A register-based VM executing the exact same logic produces a smaller execution trace and therefore a smaller constraint scale.
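The stack-versus-register difference can be made concrete with a toy cost model. This is a deliberately simplified sketch (not OlaVM's real ISA or trace layout) that counts the trace rows needed to evaluate `x = (a + b) * c` on each machine style.

```rust
// Toy cost model, not OlaVM's real ISA: count execution-trace rows
// needed to evaluate x = (a + b) * c on a stack VM vs. a register VM.

/// Stack machine: every operand is PUSHed, every result POPped out,
/// and each of those memory movements occupies a trace row.
fn stack_rows(binary_ops: usize, operands: usize) -> usize {
    operands        // one PUSH per operand
        + binary_ops // one row per ADD/MUL
        + 1          // final POP to store the result
}

/// Register machine: operands already live in registers, so only the
/// arithmetic instructions themselves are traced.
fn register_rows(binary_ops: usize) -> usize {
    binary_ops
}

fn main() {
    // x = (a + b) * c: two binary ops (ADD, MUL), three operands.
    let (s, r) = (stack_rows(2, 3), register_rows(2));
    println!("stack rows = {s}, register rows = {r}");
    assert!(r < s); // same logic, smaller trace, smaller constraint scale
}
```

Under this model the stack machine spends six rows where the register machine spends two, and the gap widens with every extra operand movement the stack discipline forces.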
2. **ZK Algorithms & efficiency**
So far, ZK algorithms have made amazing progress in engineering feasibility. Arithmetizations are becoming more general and efficient, from R1CS to [Plonkish](https://zcash.github.io/halo2/concepts/arithmetization.html); fields are shrinking, from the larger prime of [Cairo VM](https://starknet.io/docs/how_cairo_works/cairo_intro.html), $P = 2^{251} + 17 \cdot 2^{192} + 1$, to the smaller prime of [Plonky2](https://github.com/Sin7Y/plonky2/blob/main/field/src/goldilocks_field.rs), $P = 2^{64} - 2^{32} + 1$; and acceleration is moving from CPU to GPU/FPGA/ASIC implementations, such as [Ingonyama](https://github.com/ingonyama-zk/cloud-ZK)'s FPGA accelerated design and [Semisand](https://semisand.com/)'s ASIC design.
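The shape of the Plonky2 (Goldilocks) prime is what makes it fast on ordinary hardware, and both properties are easy to check. The sketch below is a reference illustration; `mul_mod` is a naive 128-bit implementation, not Plonky2's optimized field arithmetic.

```rust
// Two ZK-friendly properties of the Goldilocks prime used by Plonky2.
// `mul_mod` is a naive reference implementation for illustration only.

const P: u64 = 0xFFFF_FFFF_0000_0001; // 2^64 - 2^32 + 1

/// Reference modular multiplication via a 128-bit intermediate.
fn mul_mod(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % (P as u128)) as u64
}

fn main() {
    // Property 1: a field element fits in a single 64-bit machine word.
    assert_eq!(P as u128, (1u128 << 64) - (1u128 << 32) + 1);
    // Property 2: 2^64 ≡ 2^32 - 1 (mod P), so reducing the 128-bit
    // result of a 64x64 multiply needs only shifts, adds and subtracts,
    // which is exactly what 64-bit CPUs do fastest.
    assert_eq!((1u128 << 64) % (P as u128), (1u128 << 32) - 1);
    // Sanity check: (-1)^2 == 1 in the field.
    assert_eq!(mul_mod(P - 1, P - 1), 1);
    println!("goldilocks p = {:#x}", P);
}
```

By contrast, elements of the 252-bit Cairo field need four machine words each, so every field operation costs several native instructions instead of roughly one.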
Due to the amazing performance of Plonky2, we temporarily use Plonky2 as the ZK backend of OlaVM. We've conducted an in-depth analysis of Plonky2's Gate design, Gadget design and core protocol principles, and identified areas of design where we can contribute and further improve efficiency. Check out our Github Repo: [Plonky2 designs](https://github.com/Sin7Y/plonky2/tree/main/plonky2/designs) for more information.
**Faster transaction execution (currently not a bottleneck)**
In OlaVM's design, the Prover is permissionless: anyone can run one. With many Provers, proofs for different blocks can be generated in parallel, then aggregated and submitted on-chain for verification. Since proving runs in parallel, the faster blocks are produced (that is, the faster the transactions in each block are executed), the earlier each proof can be generated, significantly reducing the final on-chain verification time.
![](https://hackmd.io/_uploads/HJxRaK18j.png)
When proof generation is very slow, e.g., several hours, the efficiency gain from this parallel design is not obvious. Two factors strengthen the effect of parallelism: aggregating a larger number of blocks, so that quantitative change becomes qualitative change, and greatly reducing the proof time itself. Combined, they can greatly increase efficiency.
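The fan-out-then-aggregate flow described above can be sketched with ordinary threads. All names here (`BlockProof`, `prove_block`, `aggregate`) are stand-ins for illustration; OlaVM's actual prover and aggregation interfaces differ.

```rust
// Hypothetical sketch of parallel block proving followed by aggregation.
// `BlockProof`, `prove_block` and `aggregate` are illustrative stand-ins,
// not OlaVM's real prover API.
use std::thread;

#[derive(Debug, Clone)]
struct BlockProof {
    block_height: u64,
}

/// Pretend to prove one block: in reality this is the expensive step
/// that each permissionless Prover performs independently.
fn prove_block(height: u64) -> BlockProof {
    BlockProof { block_height: height }
}

/// Fold many block proofs into the single artifact submitted on-chain.
/// A real aggregator would recursively verify each input proof.
fn aggregate(proofs: &[BlockProof]) -> usize {
    proofs.len()
}

fn main() {
    // Each Prover works on its own block concurrently.
    let handles: Vec<_> = (0..4)
        .map(|h| thread::spawn(move || prove_block(h)))
        .collect();
    let proofs: Vec<BlockProof> =
        handles.into_iter().map(|h| h.join().unwrap()).collect();

    let aggregated = aggregate(&proofs);
    println!("aggregated {aggregated} block proofs into one submission");
    assert_eq!(aggregated, 4);
}
```

With proving fanned out this way, total latency is governed by the slowest single block proof plus aggregation, rather than by the sum of all proof times, which is why shrinking per-proof time multiplies the benefit of parallelism.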
## What about compatibility?
In the context of ZKVMs, compatibility means connecting to the development efforts already invested in existing public blockchains. Many applications have already been built on today's ecosystems, e.g., the Ethereum ecosystem. If we can tap these abundant resources by achieving compatibility with established ecosystems, letting projects migrate seamlessly, it will greatly accelerate the adoption of ZKVMs and help scale those ecosystems.
OlaVM's main objective at present is to build the most efficient ZKVM with the highest transactional throughput. If our initial development goes well, our next goal is compatibility with additional blockchain ecosystems beyond Ethereum; supporting Solidity at the compiler level is already on our roadmap.
**Altogether, with all the above modules integrated, the dataflow diagram of the whole system is shown in the figure below.**
![](https://hackmd.io/_uploads/Bk_-RtyIj.png)
## What's next?
![](https://hackmd.io/_uploads/rJFY0qyUi.jpg)

Published on **HackMD**