# Motivation
Blockchains moved beyond being solely an open, decentralized financial ledger, as in the original Bitcoin proposal, and evolved into finite state machines with the benefits of decentralization.
At the beginning of the Nanochain Proposal, Varun coined an expression that fits its function remarkably well: **The Era of Social Computing**.
Public blockchains are often compared to decentralized global computers, but they are far from being personal computers or application servers. The system-wide computation is in reality weak compared to the needs of even a relatively simple DApp. Developers operate with a restricted set of tools to create their DApps and cannot perform even simple operations like automated tasks, … [TODO], and toolchains differ completely from one protocol to another, making adoption tedious and not worthwhile from a developer's perspective.
These handicaps lead to bad UX and applications with an archaic feel, and ultimately form a big barrier to end-user adoption.
Until the canyon between centralized and decentralized software is finally bridged, the decentralized Web will remain an elusive dream.
> Nanochain creates a bridge between Web Apps and Blockchains. It enables the growth of a new generation of applications, Decentralized Web Applications, with developer-friendly programmability and end-user accommodation.
With HyperSDK, developers can continue to use the mature web development infrastructure they are already proficient with, building on decades of software development and combining existing components into their Decentralized Web Applications.
Leveraging what WebAssembly has to offer, the Hyperspace Protocol allows portability between DWA logic and on-chain computation thanks to its interoperable runtime.
Complex and intensive computations run off-chain while retaining security guarantees thanks to what we like to call $Hydra$: the aggregation of our protocol layers.
# Nanochain: Off-chain Computation
Hyperspace proposes composability between off-chain and on-chain components, and that is only possible because the stack is built end-to-end by the Hyperspace Team.
Off-chain computation usually presents developers with horrible DX, which leads to equally bad UX, regardless of all the innovations and bright ideas put into such protocols. This is because off-chain protocols usually have no control over the on-chain protocol, so hacks and compromises are required to make them work.
> (ChainLink and how expensive it could get to bring their oracles in play since it usually requires 3 rounds for an off-chain-on-chain transaction, …)
We solved both off-chain computation and UX by opening a new era for Decentralized Applications through Distributed Web Apps.
We believe that the key to unlocking new collaboration is to make browsers full peers in the off-chain overlay, so that users can benefit from Web3 before they even know they are using it.
## Networking
Having Distributed Web Apps as our off-chain computation layer turned out to be quite the challenge from a networking perspective.
This was never a problem for most blockchains, since the eligibility criteria for a block producer implicitly imply:
- A Static IP Address
- Static mDNS hostname entry
- Connections established through the same port
- Constant ICE username and password
- Well defined Certificate and thus Fingerprint
- Devices not hidden behind NAT
- Little to no churn
The feasibility of a browser-based computation layer is first defined by its transport layer; with WebRTC as the only option for browsers, we had to innovate and come up with workarounds.
### Solving for the browser
First and foremost, the Nanochain has been designed such that a wider set of devices can participate in the Web3 of tomorrow.
Taking inspiration from challenges met by many before us, some of them wonderful success stories like WebTorrent, we were able to achieve the following:
1. Establishing a WebRTC connection and renegotiating a WebRTC Data Channel while exchanging only the minimum amount of information needed to build a viable SDP.
2. Avoiding signaling glare and minimizing round-trips for connection establishment:
    1. Naive solutions don’t work, since the DTLS mechanism still needs the concept of a client and a server.
    2. Double-offer connections rely on ICE rollback conflict resolution, which is not yet implemented by any browser.
    3. It requires manually defining which peer will be the DTLS server and which will be the client when performing the DTLS handshake.
    4. **Solution 1:** Thanks to **[1]**, we use Data Channel renegotiation with a Minimum Viable Session Description Protocol and double offers to reduce the number of signaling messages from up to 3 to always 2.
    5. **Solution 2:** Relying on a TURN server of our own when NAT settings are unfavorable. The load on the TURN server is negligible since, immediately after connecting, the peers perform an ICE restart which shares both candidates, and the TURN relay is replaced with a direct connection.
3. Using **SDP Munging** and relying on $Peer Reflexive$ candidates, we can reduce the number of signaling messages from 2 to 1, making connection establishment twice as reliable since half as many messages need to be delivered. It also means that if we aren’t using an overlay network for routing the ConnectMessages, only one of the two peers needs a mechanism for receiving ConnectMessages, which makes services like Mozilla’s AutoPush viable.
4. Establishing $Browser \rightarrow Server$ or $Server \rightarrow Server$ connections with no signaling.
    1. This allows the construction of server-only overlays and a reliable DHT maintained by those servers.
    2. *Eventually* supporting $Browser \leftrightarrow Browser$ connections using STUN to punch and maintain a hole, so that even common devices like phones and laptops can act as servers, operating with no signaling messages for WebRTC connections.
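The minimum-viable-SDP idea above can be sketched as follows: given the handful of fields a ConnectMessage would carry, the receiving side reassembles a complete data-channel-only session description locally. The field set, function name, and layout here are illustrative assumptions, not the actual wire format.

```python
# Sketch: rebuild a viable data-channel-only SDP from the few fields a
# ConnectMessage would carry (ICE credentials, DTLS fingerprint, role,
# and candidates). Illustrative only; not the actual protocol format.

def build_minimal_sdp(ufrag, pwd, fingerprint, setup, candidates=()):
    lines = [
        "v=0",
        "o=- 0 0 IN IP4 0.0.0.0",
        "s=-",
        "t=0 0",
        "m=application 9 UDP/DTLS/SCTP webrtc-datachannel",
        "c=IN IP4 0.0.0.0",
        f"a=ice-ufrag:{ufrag}",
        f"a=ice-pwd:{pwd}",
        f"a=fingerprint:sha-256 {fingerprint}",
        f"a=setup:{setup}",  # "active"/"passive" fixes the DTLS client/server roles
        "a=mid:0",
        "a=sctp-port:5000",
    ]
    lines += [f"a=candidate:{c}" for c in candidates]
    return "\r\n".join(lines) + "\r\n"
```

Because everything except the credentials, fingerprint, role, and candidates is boilerplate, only those fields ever need to cross the signaling channel.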
### SIMPL-CODING
There are factors specific to a device’s network card or its overall available bandwidth that play a role in who can or cannot be a node in a distributed protocol.
As a continuation of the work of our Lead Network Engineer, and highly inspired by their previous work, we introduce **SIMPL-CODING**.
The fundamental building block for **SIMPL-CODING** is a big-endian, 3-byte-max, variable-length integer. The design is patterned after Git’s $varint$, which removes value duplication. These $varints$ can encode numbers in the range $[0, 4198527]$.
The Unicode codepoint space is 21 bits, giving a range of $[0, 2097151]$. UTF-8 uses up to 4 bytes to encode a single codepoint.
Instead of UTF-8, we can encode codepoints in our 3-byte $varints$.
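As a sketch of the underlying scheme, here is the Git-style offset varint this design is patterned after. The "value deduplication" trick is that each longer encoding starts where the previous range ends, so no number has two encodings. SIMPL-CODING's exact 3-byte bit allocation and range may differ from this sketch.

```python
# Git-style offset varint: 7 payload bits per byte, high bit = continuation,
# and each extra byte's range starts just past the previous one (no value
# has two encodings). Illustrative sketch of the pattern SIMPL-CODING follows.

def varint_encode(value):
    out = [value & 0x7F]          # last byte: no continuation bit
    value >>= 7
    while value:
        value -= 1                # the offset that removes duplication
        out.append(0x80 | (value & 0x7F))
        value >>= 7
    return bytes(reversed(out))   # big-endian: most significant byte first

def varint_decode(data):
    value = data[0] & 0x7F
    i = 0
    while data[i] & 0x80:         # continuation bit set
        i += 1
        value = ((value + 1) << 7) | (data[i] & 0x7F)
    return value, i + 1           # decoded value and bytes consumed
```

With this layout a single byte covers $[0, 127]$, two bytes continue at 128, and three bytes continue where two leave off.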
Additionally, we remap the bottom 32 codepoints to past the end of the Unicode space; these bottom 32 codepoints (the ASCII control characters) are then used to indicate special modes.
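A minimal sketch of that remapping follows. The destination offset, one past the last Unicode scalar value, is an assumption for illustration; the remapped values still fit the 21-bit codepoint range.

```python
# Sketch of the control-character remap: move 0x00-0x1F past the end of
# Unicode so those 32 low values are free to act as mode tags.
# The exact destination offset is an illustrative assumption.

UNICODE_END = 0x110000  # one past the last Unicode scalar value (U+10FFFF)

def remap_codepoint(cp):
    return UNICODE_END + cp if cp < 0x20 else cp

def unmap_codepoint(cp):
    return cp - UNICODE_END if cp >= UNICODE_END else cp
```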
**SIMPL-CODING** has 4 special modes which re-encode certain common patterns in the strings used in our protocol into more compact binary representations.
1. **Contiguous BASE10 digits** $\rightarrow$ $tag(0x01)$ + one varint representing the number
2. **IPv4 Addresses** $\rightarrow$ $tag(0x02)$ + 4 bytes
3. **UUIDs** $\rightarrow$ $tag(0x03)$ + 16 bytes
4. **URLBase64** $\rightarrow$ $tag(0x04)$ + one varint giving the length of the URLBase64 sequence + $len$ bytes
These encoding modes are used because:
- UUIDs show up in the mDNS addresses used by Chrome and Safari
- WebPush subscription endpoints are URLs mostly composed of URLBase64 characters; these modes reduce the size of those URLs without any actual compression like $ZLIB$ or $GZIP$, which might add unnecessary overhead.
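The fixed-size modes can be sketched as tag-prefixed encoders. The tag values follow the list above; the function names are illustrative, and the digit and URLBase64 modes (which additionally need the varint) are omitted for brevity.

```python
# Sketch of the fixed-size SIMPL-CODING modes: a one-byte tag followed by a
# fixed-width binary payload. Function names are illustrative.
import ipaddress
import uuid

TAG_IPV4 = 0x02
TAG_UUID = 0x03

def encode_ipv4(addr):
    # textual "203.0.113.7" (7-15 chars) becomes a fixed 5 bytes
    return bytes([TAG_IPV4]) + ipaddress.IPv4Address(addr).packed

def decode_ipv4(data):
    assert data[0] == TAG_IPV4
    return str(ipaddress.IPv4Address(data[1:5]))

def encode_uuid(u):
    # a 36-char textual UUID becomes a fixed 17 bytes
    return bytes([TAG_UUID]) + uuid.UUID(u).bytes

def decode_uuid(data):
    assert data[0] == TAG_UUID
    return str(uuid.UUID(bytes=data[1:17]))
```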
### DHT Discovery
WIP
### Extensions
Extensions can add data to the peer entry, and they provide functionality such as sending ConnectMessages or filling empty fields when constructing a ConnectMessage from a $Peer Entry$.
With a vision of community-based improvement, extensions can be built by anybody; some require that all parties have the same extensions loaded, while others can operate on a single client.
**Built-in extensions:**
- The ICE-Lite extension comes with the SDK and is always enabled; it offers support for unsignaled connections directly to a server.
- The WebPush extension uses WebPush services to send ConnectMessages.
    - Currently, as a nightly feature, it uses the Mozilla Autopush service to acquire a WebPush subscription endpoint.
    - We are considering providing our own WebPush service, both for our protocol’s usage and for any web application wanting to leverage WebPush subscriptions.
## Proof of Execution Trace
The Nanochain $\Leftrightarrow$ Hydra relationship is built on a zero-knowledge model such that only proofs are sent from the Nanochain to the Hydra Stack.
The matter of off-chain truthfulness is scoped to the off-chain context, making the on-chain consensus layer indifferent to those transactions. This is, as many argue, a security issue when it comes to off-chain computation.
Previous work has reached conclusive results, and we have seen the birth of Zero-Knowledge Virtual Machines: Turing-complete and using Domain-Specific Languages, these VMs enable private and secure off-chain computation. The generated proofs, if valid, can efficiently ensure the correct execution of a computation.
In this chapter we look at solving this with off-chain execution in the browser using a WASM VM, requiring only validation on-chain.
### Architecture
The architecture of this whole system is based in the browser. If we are executing off-chain it makes sense to do so in the most accessible environment available to the user, and today that is the browser. With the introduction of WebAssembly (WASM) into browser runtimes, code written in systems languages like Rust can be run performantly in the browser, and it is only through this that we are able to consider such an architecture feasible.
WASM also serves well as the compile target in this case, since it gives smart contract developers the freedom to use any language they prefer (as long as it can compile to WASM). In short, the architecture allows a compiled smart contract (written in any source language) to run within the browser and generate a quickly verifiable proof of execution.

To generate a Proof of Execution from arbitrary source code we first need to generate an execution trace, allowing us to reuse a single STARK scheme that takes an arbitrary trace and generates a proof for it.
To generate a trace of execution we use a thin abstraction layer over the browser’s native Wasm runtime (serving, in this context, as a custom runtime); this trace is then fed into our circuit, built using Winterfell, to generate the proof and public inputs (both of which are serializable and can be sent over the network directly).
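The trace-recording abstraction can be sketched as a wrapper around the runtime's host functions, so every builtin call lands in a trace table that is later handed to the prover. The class, row layout, and names below are illustrative assumptions, not the actual runtime API.

```python
# Sketch: a thin instrumentation layer that records an execution trace as
# (step, operation, operands) rows, later fed to the proving circuit.
# Names and row layout are illustrative, not the actual runtime API.

class TraceRecorder:
    def __init__(self):
        self.rows = []
        self.step = 0

    def record(self, op, *operands):
        self.rows.append((self.step, op, operands))
        self.step += 1

def traced(recorder, name, fn):
    """Wrap a host/builtin function so every invocation is appended to the trace."""
    def wrapper(*args):
        result = fn(*args)
        recorder.record(name, *args, result)
        return result
    return wrapper
```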
A couple of technical decisions are to be justified here:
- **Why STARKs over SNARKs:** The most used forms of ZK proofs are SNARKs and STARKs. STARKs are built around generating proofs from traces, and are especially performant (in terms of size and proving time) at small trace sizes, which will be the majority of smart contract executions on the network. From our testing, SNARKs would take longer to generate (their cost scales better with trace size but has a high constant start) and would not be performant enough for in-browser proving.
- **Why a nested runtime:** The nested runtime (i.e. embedding a Wasm runtime in the browser’s native one) is required to generate the trace of execution, since browser runtimes don’t expose this natively. The performance penalty of doing this is definitely noticeable, but from our testing (see metrics later) execution still happens in negligible time.
### Cross-Program Invocation
A smart contract can invoke instructions from another smart contract. This mechanism is called Cross-Program Invocation.
Internally the implementation for the execution of such instructions lives in the `call` function.
The call function takes the smart contract’s `hash-based program address` to be called and the arguments to call it with.
It then fetches the WASM binary through the SDK from the specified address, creating an invocable function instance.
Post-invocation, the trace table for each external call is stored in a list, and the invoking program is halted until the invoked program finishes processing the instruction; the call trace list is then merged into the top-level trace table before proof generation.
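The `call` flow above can be sketched as follows, with a plain dictionary standing in for the SDK's binary fetching and Python callables standing in for instantiated WASM programs; names are illustrative.

```python
# Sketch of the cross-program `call` flow. A dict stands in for fetching a
# WASM binary via the SDK; callables stand in for instantiated programs.

PROGRAMS = {}  # hash-based program address -> invocable program

def call(address, arg, parent_trace):
    program = PROGRAMS[address]         # fetch + instantiate via the SDK
    child_trace = []
    result = program(arg, child_trace)  # invoking program is halted meanwhile
    parent_trace.extend(child_trace)    # merge into the top-level trace table
    return result
```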

### Recursive Proofs
Depositing these proofs directly on-chain would be possible; however, exploiting the power of ZK, we can generate proofs of proofs, giving us further compression (at the cost of some computational load). We looked into many different methods of combining proofs, but for our case simple batching was the most performant.

Considering the above diagrams, the metrics for each shape are given below (all for compressing 10 proofs; larger counts were only feasible to test with the batched method):
| Shape | Proving time | Verification time | Proof size | Compressed |
| --- | --- | --- | --- | --- |
| Batched | 74 ms | 4 ms | 62,680 bytes | 36,324 bytes |
| Left hanging tree | 17,232 ms | 12 ms | 127,184 bytes | 119,104 bytes |
| Two level tree | 3,969 ms | 11 ms | 132,816 bytes | 123,215 bytes |
| Tall tree | 13,755 ms | 24 ms | 132,816 bytes | 121,103 bytes |
The library used for all of these was `plonky2`, which allows for really efficient recursive proof generation. `plonky2` was created by Mir Protocol, who use it for recursive proofs in a variable-height binary tree structure ("tall tree" in our examples); this works for them since they are compressing a group of SNARKs, and with SNARKs recursion is more efficient than batching.
However, since we are generating SNARKs of STARKs, batching is much more efficient for us (these SNARKs are tiny and very efficient to verify). This SNARK-of-STARK model gives us the best of both worlds: it lets us generate proofs efficiently client-side and combine them efficiently on the server into a fixed-size SNARK. The design works because we compromise on the right things in each environment: in the browser we give up fixed-size proofs and accept a bit of scaling with computation size; on the server we give up extremely fast proof generation to get fixed-size proofs and optimal compression.
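The gap in the proving times above follows from a simple counting argument: because the input proofs are cheap to verify, batching folds all $N$ of them inside a single recursive proving step, while any binary-tree shape pays one recursive proving step per pairwise merge. A toy sketch of that count (not plonky2 code):

```python
# Toy counting argument for batching vs. tree aggregation: recursive proving
# steps dominate cost, so fewer steps means faster aggregation.

def proving_steps_batched(n):
    # one recursive proof verifies all n input proofs at once
    return 1 if n > 1 else 0

def proving_steps_tree(n):
    # every pairwise merge in a binary tree is its own recursive proof
    return n - 1 if n > 1 else 0
```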
### Examples
All examples are written directly in the WebAssembly Text format (**WAT**).
#### NFT Transfer
```wasm
(module
  (import "builtin" "get_tx_origin" (func $get_tx_origin (result i64)))
  (import "builtin" "state_get" (func $state_get (param i64) (result i64)))
  (import "builtin" "state_set" (func $state_set (param i64) (param i64)))
  (func $transfer (export "transfer")
    (param $nft_id i64)
    (param $current_owner i64)
    (param $new_owner i64)
    ;; push the actual current owner from state
    local.get $nft_id
    call $state_get
    local.get $current_owner
    i64.eq
    (if
      (then
        ;; set the nft's new owner
        local.get $nft_id
        local.get $new_owner
        call $state_set
      )
      (else
        ;; claimed owner does not match state (nop)
        return
      )
    )
  )
  (func (export "_start")
    ;; try making a transfer here; in our play sdk the
    ;; transaction is always made by account id 0
    i64.const 0 ;; nft id
    i64.const 0 ;; current owner
    i64.const 1 ;; new owner
    call $transfer
    return
  )
)
```
#### Cross Invocation
Program 1:
```wasm
(module
  (import "builtin" "call" (func $call (param i64) (param i64) (result i64)))
  (func (export "_start")
    i64.const 42 ;; address of the callee program
    i64.const 4  ;; argument
    call $call
    drop         ;; discard the returned value (_start has no result)
    return
  )
)
```
```
Program 2:
```wasm
(module
  ;; this contract is available in state at key 42
  (func (export "public") (param $num i64) (result i64)
    i64.const 2
    local.get $num
    i64.mul
    return
  )
)
```
```
## Hydra Commits
# Horbit Sharding
# Hynet
# Hyracle
# Hydentity