# Fangorn
**{ proof > permission }**
> Programmable, trustless data unlocks that autonomous agents can buy and use.
---
**Team**: [Coleman Irby](https://github.com/colemanirby), [Tony Riemer](https://github.com/driemworks)
**Description**: A Programmable Data Commerce Layer for the Agentic Web.
**Stack**:
- Arbitrum Sepolia + Stylus
- Lit Protocol
- x402
- IPFS/Pinata
- ERC-8004/Agent0 SDK
- Subgraph
- LangChain
- Ollama
**Repositories**:
- [fangorn](https://github.com/fangorn-network/fangorn)
- [x402f](https://github.com/fangorn-network/x402f)
- [fangorn-agent](https://github.com/fangorn-network/fangorn-agent)
## Overview
Agentic AI has gone from speculative research to "every enterprise has an AI mandate" in under 18 months. Agents need data, but today they cannot acquire it safely. Every payment solution for agents, including x402, shares the same flaw: a server still decides whether to release what you paid for. Standard APIs require exposing secrets to agents, and even x402 requires trusting a server to deliver content. Sellers can misprice, withhold, or go offline. Buyers can invalidate settlement after verification. There is no cryptographic guarantee that you get what you pay for. If agentic commerce wants to eliminate humans-in-the-loop, it requires trustlessness.
**Fangorn** fixes this by encrypting data against **public, on-chain, verifiable conditions**, like payment, token ownership, or time. We call this 'intent-bound data'. Buyers obtain ciphertexts *before* paying, verify conditions onchain (Arbitrum Sepolia), then decrypt locally. Once conditions are met, decryption is cryptographically inevitable.
x402f extends x402 into a trustless agent payment rail powered by Fangorn. Agents discover data sources via ERC-8004 agent cards, purchase autonomously, and decrypt without key management. Buyers cannot invalidate settlement. Prices are dynamic and verifiable. Content delivery is unstoppable.
### Fangorn
The core idea behind **Fangorn** is simple: instead of encrypting to a private key, we **encrypt against a public condition**. Payment, token ownership, DAO membership, time, any predicate expressible on-chain. This leads to what we call **intent-bound data**: ciphertexts that unlock only when conditions are satisfied, with no trusted third parties.
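To make "intent-bound data" concrete, a condition can be pictured as a small descriptor object. The shapes below are purely illustrative; the actual Fangorn SDK schema may differ:

```typescript
// Hypothetical gadget descriptor shapes; the real Fangorn SDK schema may differ.
type GadgetDescriptor =
  | { kind: "payment"; chainId: number; token: string; amount: string; payTo: string }
  | { kind: "tokenOwnership"; chainId: number; token: string; minBalance: string }
  | { kind: "timelock"; notBefore: number }; // unix seconds

// Example: "decryptable once 1 USDC has been paid to the seller on Arbitrum Sepolia"
const condition: GadgetDescriptor = {
  kind: "payment",
  chainId: 421614,    // Arbitrum Sepolia
  token: "0x...",     // USDC address (elided)
  amount: "1000000",  // 1 USDC, 6 decimals
  payTo: "0x...",     // seller address (elided)
};
```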
This is accomplished with threshold encryption and TEE-based execution through Lit Protocol.
**Fangorn** also makes data sources discoverable using **a datasource registry contract** (Arbitrum Stylus) and the ERC-8004 identity and reputation registries. Data providers register ERC-8004 compliant agent cards, enabling agents to evaluate and integrate new data sources without API integrations. Agents can query these cards to find data sources and interact via standardized A2A protocols.
Github: https://github.com/fangorn-network/fangorn
NPM Package: https://www.npmjs.com/package/fangorn-sdk
### x402f
x402f inverts the trust model of x402 by applying 'intent bound data' to the protocol. The server role is reduced to payment facilitation, never controlling content release. Unlike x402, x402f only requires a single resource server, though anyone can run their own.
For **buyers**: you fetch the ciphertext before paying for it. Decryption happens locally after settlement. Data cannot be withheld, and pricing can be verified against on-chain state. x402f is currently optimized for static data rather than live APIs.
For **sellers**: no infrastructure is required. Prices can be dynamically configured on-chain without touching server configs or impacting existing clients.
Github: https://github.com/fangorn-network/x402f
|Property| x402 | Token Gating | Fangorn |
|--------|------|--------------|---------|
| Trustless delivery | ✗ | ✗ | ✓ |
| Proof-based access | ✗ | partial | ✓ |
|Agent-native discovery| ✗ | ✗ | ✓|
|Serverless for seller| ✗ | ✗ | ✓ |
|Verifiable provenance| ✗ | ✗ | ✓ |
### What it is not
Fangorn is a programmable data access protocol. It is **not**:
- encrypted storage
- token gated data apis
- web3 data marketplaces
- file sharing
Though it could be used as a basis to build the access control layer for all of them. The distinction is important: Fangorn dictates when and to whom data becomes available. What gets built on top is a different concern.
### Use Cases
Fangorn is a *primitive*, so the surface area is massive. Here, we present a limited set of potential use cases and applications:
- **Agentic data markets**
Agents can discover, purchase, publish, and consume data from other agents without human intermediaries or API integrations. A financial agent could buy real-time market sentiment from a data agent, infer a result, and then sell that output. A logistics agent could purchase route optimization data on demand. Since provenance is tracked on-chain, poisoned data sources become traceable, introducing meaningful accountability for prompt injection attacks.
- **Agent-to-agent access control**
By extending x402f even further, we can repurpose the same payment rails to operate on proof instead. Agents can make data conditionally available to other agents based on verifiable conditions like identity or membership, enabling alternative economic integrations beyond 'pay per use'. For example, an authorized agent can consume from a weather datasource by virtue of its onchain agentId, while unauthorized agents are cryptographically locked out.
- **Regulated-data access control for agents acting on behalf of humans**
There is a lot of data on the web that requires strict compliance to access: age-gated content, OFAC-compliant content, and so on. Agentic commerce removes the human from the loop, but there are still humans at either end. When an agent needs access to regulated content, it sits in a legal gray area: the agent has no age and cannot itself be OFAC-compliant; it is just software. If it gains access, who is at fault when the content reaches the wrong hands? The end user could hand their entire identity to the agent, but this is dangerously irresponsible: the agent could use it for unexpected purposes or expose sensitive information. By applying intent-bound data, however, we can build compliant access control for agents, wherein the end user supplies a zero-knowledge proof of their age or nationality (e.g. using https://zkpassport.id/).
- **Private computation authorization**: You can encrypt sensitive data with Fangorn and then authorize an agent to execute a function over it via a Lit action. The agent receives results computed on the plaintext within a TEE. For example, agents could determine blood type compatibility between individuals without exposing either blood type. Similar services have been presented by [Phala](https://phala.com/) and other TEE-based networks.
- **Streaming Services**
Modern streaming services increasingly resemble legacy television models. Applied to video or music, Fangorn and x402f could serve as the foundation for the next evolution of content discovery online: agentic curation instead of a black-box algorithm. Today's services rely on scrutiny and surveillance to power recommendation engines; tomorrow, a personal agent could search on your behalf without storing your entire watch history in a database. It simply takes your *intent* ("I want a movie starring a bald man") and maps it to *content* (The Transporter, starring Jason Statham).
- **Sovereign File Sharing & Storage**
A Dropbox or Google Drive analog where access conditions are on-chain predicates rather than platform policies or permissions.
- **Streamlined datasource integrations for new features in enterprise clients**
Today, large enterprises operate with 100+ databases, 50+ APIs, 10+ clients, and so on. As that complexity grows, so does the cost of integrating new features that cut across the stack. x402f enables a paradigm shift: clients discover datasources and consume from them without any software changes. Instead of integrating a new API into a client, the new API just publishes a data card. New client features become cheap and easy to ship; they require no new API integrations or coordination across teams, orgs, or enterprises. They *just work*.
## What We Delivered (Progress During Hackathon)
Various parts of the solution existed prior to the start of the buildathon, including a version of the Fangorn SDK that used Noir proofs. Everything below was built or substantially refactored during the buildathon.
**Fangorn SDK:**
- Multichain support (extending from Base to include Arbitrum)
- Developed the gadgets framework, a reusable and extensible conditional access control mechanism that lets developers encrypt data against custom predicates, and integrated it into the codebase
- Developed a CLI for datasource registration and basic data management
- Delivered an ERC-8004 compliant agent-card builder and registration flow
- Rewrote and enhanced the existing Solidity contracts with Arbitrum Stylus and deployed them to Arbitrum Sepolia, including the Datasource Registry and Settlement Tracker contracts
- Enhanced e2e testing & code quality
**x402f:**
- Included multichain support (Arbitrum + Base)
- Refactored to align with the updated Fangorn SDK architecture
**Fangorn Agent:**
- A functional and extendable agent that can run on consumer grade hardware
- Deliberately built with respect to the emerging A2A and MCP standards rather than OpenClaw
- Three LangChain tool implementations:
- Agent discovery
- Agent card retrieval
- Predicate fulfillment with x402f
**For Arbitrum:**
- Subgraph deployments for [Arbitrum Sepolia](https://thegraph.com/explorer/subgraphs/6WuFQqo3FR5F76fCR4Bkfnymu64S5iu2tgX7JZsxQxg9?view=Query&chain=arbitrum-one) and [Arbitrum One](https://thegraph.com/explorer/subgraphs/HZ6yKjjbYpkLTXLJBxfe4HWN3jxkLfLNJXh4zeVj1t9L?view=Query&chain=arbitrum-one) which indexes
- The ERC-8004 Identity contract
- The ERC-8004 Reputation contract
- [PR](https://github.com/agent0lab/subgraph/pull/10) opened in the Agent 0 subgraph repository for Arbitrum inclusion
- [PR](https://github.com/agent0lab/agent0-ts/pull/41) for Arbitrum support in an [officially recommended](https://www.8004.org/build) SDK by [Agent 0](https://sdk.ag0.xyz/)
## What We Learned
- How to use Stylus to write contracts in Rust and deploy to Arbitrum
- The size limits make contracts "feel" even more like mini programs than smart contracts.
- The nitro testnode was very useful.
- How to create custom agents via LangChain
- How to optimize agent behavior via prompts
- How to treat sensitive material with agents
- The A2A protocol
- The Model Context Protocol
## Challenges and Limitations
- The Stylus docs feel somewhat outdated, so it took more time than we wanted to comb through examples and learn how it all works. Stylus reminds us somewhat of Polkadot's ink!, but better. Storage was the steepest learning curve, as the docs felt sparse.
- Steep learning curve: we had to learn to develop with LLMs/agents/tools with no prior experience
## How It Works
### Overview
The 100k-meter view

### Fangorn
Fangorn is the core protocol that powers the entire stack. Specifically, it acts as a mechanism for 'practical' witness encryption, where data is encrypted against public conditions that must be satisfied.
Our implementation relies on [Lit Protocol](https://www.litprotocol.com/), a threshold encryption network that executes small javascript programs, called 'Lit actions', within a TEE.
#### Seller Side
**1. Datasource Registration**
A **datasource** is simply an onchain commitment to a *manifest* stored in IPFS. A **manifest** has a collection of **entries** that define the location of each ciphertext and the *conditions* a buyer needs to satisfy. Datasources are registered both as ERC-8004 agents with a bespoke agent card and in the Datasource Registry contract, which stores a commitment to the storage root of the datasource (i.e. a CID) and the ERC-8004 agentId.

Each datasource's storage root maps to a 'manifest', which is essentially a flat collection of the entries that have been uploaded to the datasource. Each entry contains its ciphertext, gadget descriptor, and metadata.
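To make the structure concrete, the manifest and its entries can be sketched as TypeScript types. The field names here are illustrative only, not the SDK's exact schema:

```typescript
// Illustrative types only; the actual Fangorn manifest schema may differ.
interface ManifestEntry {
  tag: string;              // human-readable identifier, e.g. a filename
  ciphertextCid: string;    // IPFS CID of the encrypted payload
  gadgetDescriptor: string; // serialized condition the buyer must satisfy
  metadata?: Record<string, string>;
}

interface Manifest {
  agentId: bigint;          // ERC-8004 agentId of the datasource
  entries: ManifestEntry[]; // flat collection of uploaded entries
}

const manifest: Manifest = {
  agentId: 42n,
  entries: [
    {
      tag: "secret_picture.png",
      ciphertextCid: "bafy...", // elided
      gadgetDescriptor: "payment:1USDC",
    },
  ],
};
```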
---
**2. Encryption/Uploading Data**
Encryption is a multi-step process where a user encrypts a message under a 'gadget'. A 'gadget' is a reusable conditional access control mechanism that allows you to encrypt messages under custom conditions. Currently, this execution operates in a TEE using the Lit protocol.

First, we generate an ephemeral secret key for AES-GCM, which we use to encrypt the original message, producing a ciphertext.
Next, the gadget is mapped to a Lit action or access control condition, which we use to encrypt the ephemeral key via the Lit Protocol. That is, we get a second ciphertext.
Finally, the user uploads the tuple `{ct, ct', gadget-descriptor}` to IPFS and commits to the new storage root of their datasource onchain.
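The first step can be sketched with Node's built-in crypto. The Lit Protocol step is left as a commented placeholder (`litEncrypt` is a hypothetical name, not a real API):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Step 1: an ephemeral AES-256-GCM key encrypts the original message -> ct
const ephemeralKey = randomBytes(32);
const iv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm", ephemeralKey, iv);
const ct = Buffer.concat([cipher.update("my secret data", "utf8"), cipher.final()]);
const authTag = cipher.getAuthTag();

// Step 2 (placeholder): encrypt the ephemeral key against the gadget's
// Lit action / access control condition, producing ct':
//   const ctPrime = await litEncrypt(ephemeralKey, gadgetDescriptor);

// Step 3 would upload { ct, ct', gadget-descriptor } to IPFS and commit
// the new storage root onchain.

// Sanity check: anyone holding the ephemeral key can decrypt ct locally.
const decipher = createDecipheriv("aes-256-gcm", ephemeralKey, iv);
decipher.setAuthTag(authTag);
const plaintext = Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
```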
#### Buyer Side
1. Discover agent cards
A buyer can discover agent cards through various means. As the card is registered with the ERC-8004 identity registry, it can be discovered using the same techniques as any other agent discovery mechanism.
2. Conditional access and local decryption
Decryption first requires meeting the condition stipulated by the ciphertext, as extracted from the gadget descriptor. Once done, the decrypting party decrypts locally by calling a Lit action.
### x402f

### Fangorn Agent
The Fangorn Agent comprises two pieces: an LLM and a Model Context Protocol (MCP) client. The agent, and its tool usage, are facilitated through [LangChain](https://www.langchain.com/), an open source framework with a pre-built agent architecture and integrations for many models and tools, which makes it well suited to custom agent creation. To make an agent with LangChain, we use the `createAgent` function:
```typescript
const model = new ChatOllama({
  model: "glm-4.7-flash",
  verbose: false,
});
const agent = createAgent({
  model,
  tools: [...this.localTools],
  systemPrompt,
});
```
Note: The final repo presented for the hackathon does not include MCP server tool usage, i.e. the ability for an agent to connect to an external MCP server in order to use the tools it provides. However, work was done on this and exists on the [mcp-client-intg](https://github.com/fangorn-network/fangorn-agent/tree/mcp-client-intg) branch. The final repo also does not completely isolate the LLM from the MCP client; the [dockerize](https://github.com/fangorn-network/fangorn-agent/tree/dockerize) branch establishes this pattern via the `modelcontextprotocol/sdk` and `docker`. Although we believe both of these pieces are important for a production-ready agent, they were not necessary to complete the project.
#### Computer specs
- Form factor: Full Tower
- OS: Ubuntu 24.04.3 LTS
- Kernel Version: Linux 6.17.0-14-generic
- GPU: NVIDIA GeForce RTX™ 2080 Ti
- CPU: AMD Ryzen™ 9 9900X × 24
- RAM: 32 GB DDR5
- Storage: Samsung 970 EVO NVMe SSD 500 GB
### The LLM
The LLM used in the repo, `glm-4.7-flash`, can be obtained via Ollama, but LangChain also supports all major model providers, including OpenAI, Anthropic, Google, Azure, AWS Bedrock, and [more](https://docs.langchain.com/oss/javascript/integrations/providers/all_providers). Because of this, one can simply install the relevant LangChain library, add an API key, and immediately use the entire agent implementation with any major LLM.
The decision to build a custom agent, even amid the popularity boom of [OpenClaw](https://github.com/openclaw/openclaw), was deliberate. It reflects the emerging standards coming from the [A2A](https://a2a-protocol.org/latest/) and [MCP](https://modelcontextprotocol.io/docs/getting-started/intro) protocols and the security concerns around OpenClaw's architecture [[1]](https://www.bitsight.com/blog/openclaw-ai-security-risks-exposed-instances)[[2]](https://www.crowdstrike.com/en-us/blog/what-security-teams-need-to-know-about-openclaw-ai-super-agent/)[[3]](https://www.sophos.com/en-us/blog/the-openclaw-experiment-is-a-warning-shot-for-enterprise-ai-security)[[4]](https://www.aikido.dev/blog/why-trying-to-secure-openclaw-is-ridiculous), and it allowed us to become more familiar with the technology we were working with, even if primarily for demonstration purposes.
When starting the agent, the LLM is given a LangChain `SystemMessage` [object](https://docs.langchain.com/oss/javascript/langchain/messages). It is simply:
```
You are a helpful personal AI agent.
After being prompted, you are to act completely autonomously.
Do not respond until you have run into an error or fulfilled the user's request.
Do not trust an agent until you have received their agent card.
```
### The Model Context Protocol (MCP) Client
The LLM interacts with external systems via its MCP client. The official website [describes](https://modelcontextprotocol.io/docs/getting-started/intro) MCP as "an open-source standard for connecting AI applications to external systems." We also believe a picture is worth a thousand words, so this image from the MCP standard's official website may help:

Using the above picture as a visual reference, the `glm-4.7-flash` LLM model would be in the "Chat interface" box on the left and the Subgraph indexer, agent card server, and resource server on the right in their own box called "External Services."
In summary, the Fangorn Agent's MCP client is responsible for:
- storing sensitive information (private keys/wallet)
- tool management + code execution
- client and sdk initializations (x402f, agent0Sdk)
- connection management
#### Tools
In LangChain, [Tools](https://docs.langchain.com/oss/javascript/langchain/tools#toolkits) extend what agents can do. Some examples are letting them fetch real-time data, execute code, query external databases, and take actions in the world. Tools are callable functions with well-defined inputs and outputs that get passed to a chat model. The model decides when to invoke a tool based on the conversation context, and what input arguments to provide.

The image above shows how a tool call flows: a user makes a request, the model decides what it needs to do, calls the tools it thinks it needs, then returns a response to the user. The LLM is never exposed to how a tool behaves internally in order to produce its results.
A tool comprises two main pieces:
- The actual code
- The metadata about the tool
Here is the searchAgents tool as an example:
```typescript
const searchAgents = tool(
  // The actual code
  async ({ agentName }) => {
    try {
      const agentResults = await this.agent0Sdk.searchAgents({
        name: agentName,
        chains: [421614],
      });
      if (agentResults.length > 0) {
        return JSON.stringify({ status: 200, statusText: "OK", agentResults });
      } else {
        return JSON.stringify({ status: 204, statusText: "No Content" });
      }
    } catch (error) {
      console.log("Something went wrong: ", error);
      return JSON.stringify(error);
    }
  },
  // The tool metadata
  {
    name: "search_agents",
    description: "Look for agents that can complete user requests",
    schema: z.object({
      agentName: z.string().describe("The name of the agent to find"),
    }),
  },
);
```
Three tools were developed for this hackathon in order to have a complete end-to-end flow. These tools are intentionally tailored so the user can "hand-hold" the agent, minimizing agent confusion and ensuring the end-to-end experience executes reliably. The tools could easily be enhanced to fulfill more "nebulous" user requests, but that would likely require a more sophisticated LLM (Claude/ChatGPT) to properly infer user intent, determine which tools to use, and reliably fulfill the request.
Our tools use standard HTTP status codes for agent interaction, which will allow easy refactoring to solve the co-location problem of the LLM and its MCP client.
##### searchAgents(agentName)
The `searchAgents` tool is used by the agent to find other agents by their human-readable name. The personal agent determines the name of the target agent from the user's request. The tool uses the Agent 0 SDK to query the subgraph that indexes Arbitrum Sepolia's ERC-8004 identity registry. Once the query returns from the SDK, the results are passed back to the personal agent to determine which tool to use next. If no agent is found, we provide a "204" status code with the status "No Content."
Descriptions given to the agent:
- `searchAgents`: Look for agents that can complete user requests
- `agentName`: The name of the agent to find
Note: The tool currently assumes that the human-readable name is unique, but because we store both the human-readable name and the agentId in our datasource contract, multiple agents sharing a name should present no issues once the tool is enhanced down the road.
##### getAgentCard(a2aEndpoint)
Once the agent has received its "list" of potential agents from the searchAgents tool, it then determines if the retrieved agent(s) expose an agent card endpoint via the url in the `a2a` field of the ERC-8004 entry. If they do, the agent will call the `getAgentCard` tool and pass in that url via the `a2aEndpoint` argument. This tool assumes that the agent card is hosted on the standard `.well-known/agent-card.json` endpoint of the base url that is passed in. It does a simple fetch then returns the result to the agent to determine what to do next. If the fetch was successful, this returns the entire retrieved agent card to the personal agent with the 200 status code.
Descriptions given to the agent:
- `getAgentCard`: Finds an agent's agent card for more information about them
- `a2aEndpoint`: The url advertised in the a2a field by the agent.
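A minimal sketch of this tool's fetch logic, assuming the standard well-known path; the function names and response shape here are illustrative, not the actual implementation:

```typescript
// Hypothetical sketch of the getAgentCard fetch; names are illustrative.
function agentCardUrl(a2aEndpoint: string): string {
  // The agent card is assumed to live at the standard well-known path
  // of the base URL advertised in the ERC-8004 entry's `a2a` field.
  const base = a2aEndpoint.replace(/\/+$/, "");
  return `${base}/.well-known/agent-card.json`;
}

async function getAgentCard(a2aEndpoint: string): Promise<string> {
  try {
    const res = await fetch(agentCardUrl(a2aEndpoint));
    if (!res.ok) {
      return JSON.stringify({ status: res.status, statusText: res.statusText });
    }
    const card = await res.json();
    // On success, the entire retrieved card goes back to the personal agent.
    return JSON.stringify({ status: 200, statusText: "OK", card });
  } catch {
    return JSON.stringify({ status: 500, statusText: "fetch failed" });
  }
}
```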
##### callx402fAgent(agentName, tag, agentCardUrl, owner)
Finally, if both of the previous tool calls have been successful, the agent will have all of the information it needs in order to complete the final request. Our agents advertise that they are `x402f` compliant via their tags which allows the agent to infer that it needs to use the `callx402fAgent` tool.
The `agentName` field is the human-readable name of the registered agent (the same one used for the `searchAgents` tool). The `tag` field, for demonstration purposes, is the filename the user has requested the agent retrieve. The `agentCardUrl` is the `url` field advertised in the agent card retrieved with the `getAgentCard` tool. The `owner` field is the address of the datasource agent's owner, obtained from the ERC-8004 entry when the agent used the `searchAgents` tool.
This tool uses the x402f middleware to fulfill the x402f requirements. For the demonstration, we use only one predicate: payment. If all conditions are met, the result is returned to the tool, which downloads the data the user requested. It then notifies the agent, in plain English, that the file has been obtained, downloaded, and is available at a specific location. If something goes wrong, the agent is told to notify the user that something went wrong while fetching the file.
Descriptions given to the agent:
- `callx402fAgent`: Call an x402f enabled agent
- `agentName`: Name of the agent that provides the data
- `tag`: Name of the file the user is looking for
- `agentCardUrl`: URL that is advertised in an agent's agent card
- `owner`: The address advertised in the owner field by the agent
It's worth pointing out that the agent is never told anything about payment. There is no mention of conditions, private keys, or even wallets. This is where tools really shine with LangChain: the agent needs to know nothing beyond the fact that it has these tools and that they can complete the tasks they describe. This is very different from OpenClaw's usage of [skills](https://playbooks.com/skills/openclaw/skills). Skills are just markdown files that tell agents to execute specific, and possibly dangerous, commands as defined by the creator of the skill. An example taken from a random OpenClaw skill is:
```
## Quick Start
export WALLET_API_TOKEN="your_token_here"
./scripts/wallet-api.sh me
...
**Recent transactions:**
./wallet-api.sh records "recordDate=gte.2025-02-01&limit=50"
```
Even if the agent's owner vets the `wallet-api.sh` script and determines it is safe, the agent still has the ability to execute arbitrary code, meaning it is vulnerable to prompt injection attacks. This risk is heavily mitigated by the tool usage pattern, due to the deterministic nature of tools and their ability to evaluate all responses from untrusted sources before returning them to the LLM. Moreover, markdown files are quite large and therefore depend on the LLM having a large context window to properly understand instructions.
### The Demo: Putting it all together
For this hackathon, we chose a human-initiated, then out-of-the-loop example. The x402f protocol as implemented is for autonomous agents: an agent does not require human intervention to confirm or deny a purchase, or any other conditional fulfillment, on behalf of the user. A human-in-the-loop workflow via x402f is a trivial extension of what we have already implemented.
#### Seller
A seller registers a datasource called "Awesome-Datasource" using the fangorn-cli. They go through the process of creating an agent card, uploading it to a server of their choice (this does not occur in the CLI, which only produces the compliant agent-card JSON), creating an entry in Arbitrum's ERC-8004 registry, and registering with the datasource registry. They then upload a file priced at 1 USDC. Through some external channel, at least for demonstration purposes, they notify customers that the file "secret_picture.png" is for sale via their agent "Awesome-Datasource."
#### Buyer
A buyer sees this seller has uploaded a new file. They boot up their Fangorn Agent and ask it to "obtain secret_picture.png from the Awesome-Datasource agent." The buyer then leaves to grab a cup of coffee, knowing the agent will retrieve the file automatically. The agent uses its tools to find the Awesome-Datasource agent, retrieve its agent card, and then use the x402f tool to obtain and download the file before the buyer even returns to their computer. When the user returns, they see the message:
"I have successfully obtained the secret_picture.png file from the Awesome-Datasource agent. The file has been downloaded and is ready for you at Downloads/secret_picture.png."
## Future Work
- Continue work on our own threshold encryption network built in conjunction with Dr. Sanjam Garg's team at UC-Berkeley
- This would eliminate the reliance on a TEE that we currently have. We are not huge fans of TEEs and aim to rely on zkps in the future.
- Generalize the CLI to work over a set of predicates
- True multichain support for the x402f facilitator and resource server
- Build an official agent toolkit that lets agents easily integrate fangorn/x402f, likely adapted from the buildathon work we did
- Complete isolation of the LLM from the MCP client via Docker
- Re-introduce MCP enabled tool usage for the Fangorn Agent
- Enhance existing tools to allow for user requests to be less specific such as "I want this kind of data, find an agent who has it"
- Implement identity verification to ensure the agent only uses agents over a certain trust score
- Operating over APIs instead of static IPFS data
- Introduce a proper crypto-economic system:
  - Sellers should have to pay to be included in a resource server
  - e.g. each resource server has its own registry
- Fully trustless + decentralized resource server & facilitator implementations
- Investigate using erc-7710 for permissions https://x.com/DanFinlay3/status/2023863125179854887?s=20