# Project Echo
The web3 ecosystem has seen tremendous development over the last few years, yet it is still not fully decentralized due to the industry's reliance on centralized services such as Infura, Alchemy, and AWS for node access and for frontend and backend deployments. It is nearly impossible for a DAO to deploy a frontend properly or to host servers, data jobs, and Ethereum node services (Infura/Alchemy) in a distributed manner.
Echo is a new p2p protocol that focuses on the browser layer, allowing any visitor to a web3 application to peer and host the infrastructure needed to power it. This includes hosting the frontend, application-specific data from different Ethereum nodes, and scheduled jobs (a Lambda function/ECS analogue) which calculate and upload data to various L1s, L2s, or distributed data providers such as IPFS. Furthermore, Echo protects users by verifying the computation on both frontend and backend services. If a new deployment includes hidden changes, such as user tracking or malicious code, the user must explicitly opt in to the update.
Future iterations introduce greater security by allowing existing L1 and L2 node networks to take part in the peering. Restaking mechanisms, recently exemplified by [Eigenlayer](https://www.eigenlayer.com/), are a clear way to use the existing network security held across Ethereum nodes and other L1s and L2s to provide additional security and a fallback mechanism in case browser peering runs into issues.
## Problems in Current Centralized Web3 Ecosystems
In summary, Echo solves the following problems:
- An over-reliance on Eth node services such as Infura and Alchemy, causing a centralization bottleneck
- Difficulty for DAOs to host web services such as frontends, backends or scheduled jobs. Currently, these services require the DAO to register an entity, handing centralized control over to a few admins who may edit, remove and change the site as they see fit. As a result, most DAOs contract service providers
- Lack of decentralized web services, particularly around running scheduled data jobs (i.e., Lambda, ECS)
- A lack of verified computation or opt-in versioning. How does a user know if an app suddenly updates and stops encrypting their information, starts sharing their private data, or sends malicious transactions?
- High costs for node providers such as Infura/Alchemy ($25k-100k a year is typical) and for other web infrastructure such as AWS
- Vulnerability to DNS takeovers or attacks
## Solutions
Echo solves these problems differently from other burgeoning solutions. Instead of focusing on the data storage aspect, such as optimizing IPFS, we choose to focus on application-specific browser p2p sharing and restaking mechanisms, such as those pioneered by Eigenlayer, reducing reliance on centralized node networks. The p2p network is app-specific and builds on what some may call the Cosmos philosophy, where some degree of consensus is tailored specifically to the application in use.
In short, Echo accomplishes the following:
- Reduces reliance and centralization on node providers such as Infura/Alchemy
- Saves on node provider costs (paying $25k-100k a year to these providers is typical) as well as other infrastructure costs (AWS)
- Decentralizes webapp hosting services, both frontends and backends, so DAOs are able to deploy these services in a distributed manner
- Runs cron jobs or scheduled jobs in a distributed manner to upload data to L1/L2s and to distributed data solutions such as IPFS (think stateless Lambda functions or stateful ECS jobs)
- Provides something akin to servers/databases that can be distributed across the p2p network and verified
- Verifies computation on frontend and server code. Users must opt in to code updates that also undergo audits verifying changes in the web application. If a server or frontend suddenly updates and begins storing sensitive user information, the user needs to specifically opt in to the update
- Protects against DNS takeovers or attacks
Let's dive into each of these features.
### Decentralized Frontends (Hosting Services)
#### Problem
DAOs face many hurdles when hosting frontends directly. Doing so requires registering with a cloud service and leaves a centralized entity in control of the domain and deployment process. In many cases, this process is too difficult to pursue and a centralized entity, such as the protocol originator or a third-party DAO service provider, takes on the responsibility for hosting. From a practical stance, the DAO should have some sovereignty over this process. Liquity did create a decentralized frontend solution; however, it was more of a workaround, as it did not allow the DAO to run its own frontend and infrastructure but simply allowed others to create a frontend to interact with the protocol.
#### Solution
The p2p network shares the IPFS hash associated with the frontend or can share the HTML/payload directly. No traditional cloud hosting through a centralized provider, such as a CDN, is needed. Initially, we assume altruism from the peers. In the long term, we'll include a consensus model to verify the payload, check that the hash is correct, and confirm the user is ultimately being served the right frontend. Expanding the p2p network and hosting service to include Ethereum and other L1/L2 nodes via a restaking mechanism can also serve as a security backup or a means to serve the frontend as a fallback in case the p2p network encounters difficulties.
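To make this concrete, here is a minimal sketch of how a browser peer might verify a frontend payload received from another peer before rendering it. It assumes peers gossip an expected content hash alongside the payload and uses a plain SHA-256 digest via the Web Crypto API for simplicity; a real implementation would verify the full IPFS CID/multihash. The function names are illustrative and not part of an existing Echo API.

```ts
// Hypothetical sketch: verify a peer-served frontend payload before rendering it.
// Assumes the expected hash was obtained out of band (e.g. from the version the
// user opted in to); plain SHA-256 stands in for a full IPFS CID check.

async function sha256Hex(payload: Uint8Array): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", payload);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Only hand the payload to the renderer if it matches the expected hash.
async function renderIfVerified(payload: Uint8Array, expectedHashHex: string) {
  if ((await sha256Hex(payload)) === expectedHashHex) {
    document.documentElement.innerHTML = new TextDecoder().decode(payload);
  } else {
    console.warn("Hash mismatch: refusing to render peer-served frontend");
  }
}
```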
### Reduce Reliance on Node Providers
#### Problem
Currently, web3 apps bombard node providers such as Infura and Alchemy with multiple calls for a single user session. In extreme cases, some apps can query an Ethereum node 100+ times in one session.
These providers are used across most web3 applications and have been a decentralization bottleneck. Additionally, redundancy on calls for different users leads to expensive bills. It is not uncommon to pay $25k-$100k a year to these node providers.
#### Solution
To give a simplified explanation, if there are 50 active peers operating on a specific web3 application, one peer is chosen as the leader to distribute node/blockchain data. These calls can now be aggregated/batched and made once, reducing redundancy. They can also be cross-checked against different providers or other independent nodes to ensure maximum security and decentralization.
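A minimal sketch of this leader-side aggregation, under the assumption that the leader collects JSON-RPC requests from its peers and sends them as a single batch, is shown below. The provider URLs, request collection, and the gossip step back to the peers are placeholders rather than parts of any existing Echo API.

```ts
// Hypothetical sketch: the elected leader batches peers' JSON-RPC requests into one
// round trip and cross-checks a second provider before gossiping results back.

type RpcRequest = { jsonrpc: "2.0"; id: number; method: string; params: unknown[] };
type RpcResponse = { id: number; result?: unknown; error?: unknown };

async function sendBatch(url: string, batch: RpcRequest[]): Promise<RpcResponse[]> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch), // standard JSON-RPC batch request
  });
  return res.json();
}

async function leaderFetch(
  batch: RpcRequest[],
  primaryUrl: string,   // placeholder, e.g. an Infura endpoint
  secondaryUrl: string  // placeholder, e.g. an Alchemy or self-hosted node
): Promise<RpcResponse[]> {
  const [primary, secondary] = await Promise.all([
    sendBatch(primaryUrl, batch),
    sendBatch(secondaryUrl, batch),
  ]);

  // Naive cross-check: flag any response the two providers disagree on.
  for (const resp of primary) {
    const other = secondary.find((r) => r.id === resp.id);
    if (JSON.stringify(resp.result) !== JSON.stringify(other?.result)) {
      console.warn(`Providers disagree on request ${resp.id}`);
    }
  }
  return primary; // gossiped back to peers over the p2p network (not shown)
}
```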
### Decentralized Backends (Servers & Databases)
#### Problem
With traditional web apps, a user interacts with a backend server that performs any number of computations to retrieve or store the resulting data in a closed database. The database and server are typically closed and inaccessible to the public. This forces users to trust that the centralized entity running these services is actually retrieving or uploading the correct data in a responsible way. It also assumes the centralized entity maintains basic security practices such as encrypting user data, restricting access or corruption by its internal employees, and replicating the data to avoid loss. Additionally, there is no way to verify or guarantee that user data can ever be removed: it could pass through any number of ETL pipelines and be stored in logs across multiple layers of infrastructure.
#### Solution
IPFS provides a distributed network of nodes for serving and pinning data that is retrievable through the hash of the uploaded content. L1s, L2s, and future data availability networks like Celestia or EigenDA can similarly serve as mechanisms for users to upload and retrieve data. These services are enough to operate as a simple database.
Backend servers and their associated computation can run on-chain, but they can also be computed and uploaded directly by a selected leader in the p2p Echo network. In this model, the user is in charge of their own encryption; the code that runs on-chain or by the peer is open, and the resulting data stored on IPFS or other data networks is also open and can be verified. This reduces the need for one centralized party to maintain control of a traditional backend service and storage solution.
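As an illustration, the sketch below shows what user-held encryption could look like in the browser using the Web Crypto API: the record is encrypted with a key only the user controls before it is handed to the leader or a pinning service, so only ciphertext ever reaches IPFS or other data networks. The pinning call is a placeholder, not an existing Echo or IPFS API.

```ts
// Hypothetical sketch: encrypt a record client-side before it leaves the browser.
// The key stays with the user (e.g. derived from a wallet signature) and is never uploaded.

async function encryptForUpload(plaintext: string, key: CryptoKey) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // AES-GCM nonce
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  return { iv, ciphertext: new Uint8Array(ciphertext) };
}

async function example() {
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    true,
    ["encrypt", "decrypt"]
  );
  const record = await encryptForUpload(JSON.stringify({ dm: "hello" }), key);
  // pinToIpfs(record); // placeholder: hand ciphertext to the current leader or a pinning service
  console.log(`ciphertext bytes: ${record.ciphertext.length}`);
}
```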
### Distributed Cron Jobs, Lambda Functions/ECS Analogue
#### Problem
This discussion is a corollary to the point above. Some applications require regular job calculations, such as uploading a root hash or header, a compressed ZK proof, or other structured data. One interesting example is the [optimistic rewards](https://medium.com/element-finance/the-future-looks-optimistic-a-new-primitive-for-grants-and-rewards-2fc32f2d09a6) design by [Element](https://www.element.fi/), which allows for a new, optimistic way to pay out grant recipients or create incentive programs efficiently. It requires regular uploads of a root hash to consolidate data from ranges of blocks; otherwise, computations for retrieving data relevant to the user could take too long, since the user may have to query large ranges of historical blocks.
A centralized entity running this infrastructure for the DAO may experience downtime or may even maliciously upload the wrong data or censor according to their will.
#### Solution
A dockerized container can be put in place for the p2p network to use. At the scheduled job times, the current leader in the p2p network is responsible for running the necessary computations and uploading the results to the appropriate platform (IPFS, an L1, an L2, etc.). In a restaking model, exemplified by [Eigenlayer](https://www.eigenlayer.com/), nodes on Ethereum and other L1s/L2s can run the server jobs while holding ETH at stake in case they fail to perform their duty honestly.
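A minimal sketch of such a scheduled job is shown below. It assumes a leader-selection hook, a data-collection hook, and a publish hook are supplied by the surrounding runtime (all placeholders here), and commits each batch as a simple Merkle root in the spirit of the optimistic rewards example above.

```ts
// Hypothetical sketch: at each scheduled tick, only the current leader computes the
// batch's Merkle root and publishes it. amLeader, collectEntries and publishRoot are
// placeholders for Echo's leader-selection, data, and upload layers.

async function sha256(data: Uint8Array): Promise<Uint8Array> {
  return new Uint8Array(await crypto.subtle.digest("SHA-256", data));
}

// Minimal Merkle root over a list of leaves (duplicates the last node on odd levels).
async function merkleRoot(leaves: Uint8Array[]): Promise<Uint8Array> {
  let level = await Promise.all(leaves.map((leaf) => sha256(leaf)));
  while (level.length > 1) {
    const next: Uint8Array[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i];
      next.push(await sha256(new Uint8Array([...level[i], ...right])));
    }
    level = next;
  }
  return level[0];
}

async function scheduledJob(
  amLeader: () => boolean,
  collectEntries: () => Promise<Uint8Array[]>,
  publishRoot: (root: Uint8Array) => Promise<void>
) {
  if (!amLeader()) return; // non-leaders skip this tick
  const entries = await collectEntries(); // e.g. reward data for a block range
  if (entries.length === 0) return;
  await publishRoot(await merkleRoot(entries)); // e.g. send to an L1 contract or pin to IPFS
}
```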
### Verified Computation on Frontends and Servers - User Security and Opt-in on New Deployments
#### Problem
In existing web applications, users have very little control over or insight into the changes that applications make, and those changes can be detrimental to application security. It is standard practice to run a fast release and deployment cycle on backend and frontend services without notifying users. A change could suddenly store data in an unencrypted format, share that data with a third party, or simply introduce serious vulnerabilities into the codebase. Additionally, when finances are involved, as in web3, we have been far too trusting and lenient with current web3 applications and their frontends. How does the user properly verify that the message they are signing or the transaction they are sending is correct without going through their own complex checks? How can they verify the site has not undergone a DNS takeover?
Let's provide a few examples:
*Example 1*
- A web3 application malforms a field in the transaction data, causing the user to lose money after a recent frontend deployment.
- A new library is installed in the codebase which introduces a security vulnerability and tracks user behavior.
*Example 2*
- A secure messaging application tells users it encrypts DMs and stores them only in a closed database until the user deletes a message. On deletion, it removes the message from its database and keeps no replicas. It has committed to not storing users' encryption keys, so even while the data is stored, it is not accessible.
- The application team, in secrecy, suddenly releases a new version of the application which stores messages unencrypted. Furthermore, they add significant ETL pipelines on the data and no longer delete messages when the user deletes them.
*Example 3*
- In the current regulatory environment, biometric uploads and private KYC information on identity are more likely to be requested as part of certain applications.
- With closed databases and backend servers, there is no way to guarantee that your data is encrypted when stored, that it is compressed properly to belong to a ZK-verified group, or that it is not shared with third parties.
#### Solution
In current web3 applications, it is common practice for the underlying smart contracts to undergo significant security testing and audits. If they do not, it is usually publicized as unsafe and the people who use those contracts understand their risk. Contracts are typically versioned, and new versions usually route to an entirely separate UI.
One may pose the question: if smart contracts require audits and a security review, why should the frontends or servers wrapping those contracts not also require an audit or versioning format? Perhaps the DAO deploying a new version of the frontend should complete an audit, after which a user must explicitly opt in to using the new version. This ensures the user's data is safe, their transactions are sent correctly, and no new malicious corruptions have been included. This also protects against a DNS attack.
In this case, versioning can be associated with the hash of the frontend payload. Once the user has opted in, they know they are loading the version of the site they are comfortable with by cross-checking the hash of the received payload against the hash associated with that version. Version opt-ins would likely link to further information on how the updates affect the user via a DAO or third-party audit report. In the future, computation can undergo even further verification in real time by creating ZK circuits around server computations and calculations.
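A minimal sketch of this opt-in flow is below. It assumes the browser keeps a local allowlist of accepted version hashes and that each version announcement carries a link to its audit report; the storage key, prompt UI, and type names are illustrative only, not an existing Echo interface.

```ts
// Hypothetical sketch: a local allowlist of frontend versions the user has opted in to.
// Unknown version hashes trigger a prompt linking to the audit report before rendering.

interface FrontendVersion {
  payloadHash: string;    // hash of the built frontend payload
  auditReportUrl: string; // DAO or third-party audit describing the changes
}

const optedIn = new Set<string>(
  JSON.parse(localStorage.getItem("echo.optedInVersions") ?? "[]")
);

function handleIncomingVersion(version: FrontendVersion): "load" | "blocked" {
  if (optedIn.has(version.payloadHash)) return "load";

  // Ask the user to review the audit before accepting the new version.
  const accepted = window.confirm(
    `New frontend version ${version.payloadHash.slice(0, 10)}\n` +
      `Review the audit: ${version.auditReportUrl}\nOpt in to this version?`
  );
  if (!accepted) return "blocked";

  optedIn.add(version.payloadHash);
  localStorage.setItem("echo.optedInVersions", JSON.stringify([...optedIn]));
  return "load";
}
```

The accepted hash from this allowlist is what the payload verification sketch in the hosting section above would check against before rendering.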
**Caveats**:
One major pushback on this model is that frontends typically have a fast deployment cycle. Trivial changes and small bug fixes are common, and this method could significantly delay bug fixes or other trivial updates the frontend needs. However, if we assume frontends operate in a pseudo-immutable versioning structure similar to smart contracts, this pushes frontend development for financial applications in web3 to be more thoughtful, thoroughly tested, and planned. In the case of an emergency bugfix, the DAO can produce a temporary internal audit of its own, as fixes tend to be small and styling updates tend to touch non-sensitive parts of the codebase; it can be assumed that this audit or review process could move quickly.
### Protection Against DNS Takeovers or Attacks
#### Problem
A number of applications in the space have suffered DNS takeovers through their DNS hosting service. A fake version of the web application is displayed, which then sends transactions that cause users to lose funds.
#### Solution
This solution aligns with what is presented in the section above on verified computation. If a DNS takeover occurs, the impostor frontend code will not hash to an audited version and the user will be warned and not opt in.
## Launch Plan & Technical Details
The launch is planned in three phases. A technical spec can be found [here](https://hackmd.io/@delve-labs-research/Hy2B0jUPj) (WIP), which also goes into how the p2p network operates along with its leader selection algorithm.
### Phase 1:
- Host frontends through a p2p network of clients through the browser. Any user loading the web application acts as a peer in the network
- This relies on altruism and non-malicious behavior, and will not be applied to highly sensitive data
- Other peers can be supported initially outside of the browser
- Reduce calls to Alchemy/Infura by sharing this data across peers
- Technical requirements can be found [here](https://hackmd.io/@delve-labs-research/Hy2B0jUPj) (WIP)
### Phase 2:
- Restaking mechanism so existing Ethereum nodes or other nodes can use their current security guarantees to host frontends or run data services as a backup
- Have peers or restaking mechanism run regular jobs to upload hashes, compressed data, or headers that are needed by the application (aka backend servers and basic jobs)
- Introduce security into the system through staking an ERC-20 token for the p2p network
- Introduce basic consensus mechanism to secure the network
### Phase 3:
- Introduce capability for stateful and stateless data jobs: Lambda-function-style jobs and Docker containers in the ECS format