---
title: 'Idea: GPU Compute Stablecoin'
---

# Idea: GPU Compute Stablecoin

> Very nascent, unpolished idea

A token backed by real GPU hardware. 1 token = a defined unit of AI inference compute, redeemable on demand.

## Core Idea

Jan has users with GPUs sitting idle. Jan agents need compute. A token connects supply and demand without a centralized cloud provider in the middle.

## Two Variations

**Variation 1: Menlo/Polkadot Reserve**
Menlo or Polkadot buys GPUs and mints tokens against the owned capacity. Simple trust model, guaranteed SLA. A centralized custodian by design, at least initially.

**Variation 2: Community Commit**
Users commit their own machines. Hardware is attested on-chain, and tokens are minted against verified contributed compute. More decentralized, but quality and availability are harder to guarantee. Think Vast.ai or SaladCloud, but on-chain.

**Likely path:** launch with Variation 1, open to community contributors once attestation and slashing mechanics are proven.

## The Stability Problem

Insight: GPU compute is not price-stable. It's an appreciating asset.

Most GPU clouds see a payback period of roughly 9 months, which implies a 130%+ annualized return at current prices. Newer models and more efficient hardware will only increase that return.
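
The arithmetic behind the 130%+ figure can be sketched as follows. This is a simplification that assumes flat revenue and ignores depreciation and idle time; `annualized_return_pct` is an illustrative helper, not anything from an existing codebase.

```rust
// Convert a hardware payback period into a simple annualized return.
// A 9-month payback means the hardware cost is recovered in 9 months,
// so one year of revenue is 12/9 ≈ 133% of the purchase price.
fn annualized_return_pct(payback_months: f64) -> f64 {
    100.0 * 12.0 / payback_months
}

fn main() {
    println!("{:.0}%", annualized_return_pct(9.0)); // ≈ 133%
}
```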

Three honest options:
1. **Compute-pegged:** 1 token = fixed FLOP-hours. Let it appreciate. Don't call it a stablecoin.
2. **US Dollar-pegged:** 1 token = $1 of compute at current market rates. Requires a pricing oracle.
3. **Inference-pegged:** 1 token = 1M tokens of inference on a benchmark model, repriced periodically. Most practical for buyers.
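
Option 3 can be made concrete with a minimal sketch. The point is that the token tracks a quantity of inference, so its dollar value floats as the benchmark is repriced. All type and field names here are illustrative assumptions, not a spec.

```rust
// Inference-pegged model: 1 compute token always redeems for 1M
// inference tokens on a benchmark model; only its dollar value moves.
struct InferencePeg {
    // Current market price, in USD, of 1M tokens on the benchmark model.
    usd_per_million_tokens: f64,
}

impl InferencePeg {
    // Dollar value of `tokens` compute tokens at the current pricing.
    fn redemption_value_usd(&self, tokens: f64) -> f64 {
        tokens * self.usd_per_million_tokens
    }

    // Periodic repricing: the peg tracks the benchmark, not the dollar.
    fn reprice(&mut self, new_usd_per_million: f64) {
        self.usd_per_million_tokens = new_usd_per_million;
    }
}

fn main() {
    let mut peg = InferencePeg { usd_per_million_tokens: 0.50 };
    // 100 tokens = 100M inference tokens = $50 at today's benchmark price.
    println!("${}", peg.redemption_value_usd(100.0));
    // Inference gets cheaper: the same 100 tokens still buy 100M
    // inference tokens, but are now worth less in dollar terms.
    peg.reprice(0.30);
    println!("${}", peg.redemption_value_usd(100.0));
}
```

This is why option 3 is buyer-friendly: a holder's purchasing power in inference is constant even while the dollar price of compute drifts.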

Maybe we call it a "compute credit", not a stablecoin.

## Why Polkadot

The same Substrate runtime handles:
- Hardware attestation via off-chain workers (OCW)
- Token mint/burn on verified compute delivery
- XCM for liquidity on DeFi parachains
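
The mint/burn rule above can be modeled in plain Rust as a sketch. A real implementation would be a Substrate pallet with attestation fed in by off-chain workers; every name below (`ComputeLedger`, `mint_for_delivery`, and so on) is a hypothetical stand-in for that logic.

```rust
use std::collections::HashMap;

// Toy model of the on-chain invariant: tokens are minted only against
// compute delivered by attested hardware, and burned on redemption so
// supply stays matched to outstanding capacity.
#[derive(Default)]
struct ComputeLedger {
    attested: HashMap<String, bool>, // provider id -> passed attestation
    balances: HashMap<String, u64>,  // account -> compute tokens
    total_supply: u64,
}

impl ComputeLedger {
    fn attest(&mut self, provider: &str) {
        self.attested.insert(provider.to_string(), true);
    }

    // Mint only against delivery from an attested provider.
    fn mint_for_delivery(&mut self, provider: &str, units: u64) -> Result<(), &'static str> {
        if !self.attested.get(provider).copied().unwrap_or(false) {
            return Err("provider not attested");
        }
        *self.balances.entry(provider.to_string()).or_insert(0) += units;
        self.total_supply += units;
        Ok(())
    }

    // Burn on redemption: the holder trades tokens back for compute.
    fn burn_for_redemption(&mut self, account: &str, units: u64) -> Result<(), &'static str> {
        let bal = self.balances.entry(account.to_string()).or_insert(0);
        if *bal < units {
            return Err("insufficient balance");
        }
        *bal -= units;
        self.total_supply -= units;
        Ok(())
    }
}

fn main() {
    let mut ledger = ComputeLedger::default();
    ledger.attest("gpu-provider-1");
    ledger.mint_for_delivery("gpu-provider-1", 100).unwrap();
    ledger.burn_for_redemption("gpu-provider-1", 40).unwrap();
    println!("supply: {}", ledger.total_supply); // supply: 60
}
```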

## Connection to Agent Wallet

Jan agents with on-chain DIDs can hold compute tokens and purchase inference autonomously. The user never touches a wallet — the agent manages compute as a resource, like context window or tool selection.
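
The "compute as a managed resource" framing can be sketched minimally. `AgentWallet` and `buy_inference` are hypothetical names, assuming a prepaid token balance the agent draws down per call, the same way it budgets context.

```rust
// An agent spending compute tokens autonomously: each inference call
// debits the balance; the agent degrades gracefully when it runs out.
struct AgentWallet {
    balance: u64, // compute tokens held by the agent's on-chain identity
}

impl AgentWallet {
    // Returns true and debits the balance if the agent can afford `cost`.
    fn buy_inference(&mut self, cost: u64) -> bool {
        if self.balance >= cost {
            self.balance -= cost;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut wallet = AgentWallet { balance: 5 };
    println!("{}", wallet.buy_inference(3)); // succeeds, 2 tokens left
    println!("{}", wallet.buy_inference(3)); // fails, insufficient balance
}
```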

## Open Questions

- What is the benchmark unit at launch?
- Variation 1 or hybrid launch?
- Who runs the pricing oracle if going dollar-pegged?
- What are the slashing conditions for community contributors?
- How do we incentivise contributors to feed data back toward an **Open Superintelligence model** trained on volunteered data?