# The Half-Life of Trust
### Why verifiable AI needs post-quantum foundations, and why the answer is math
Cryptographic systems have a property we rarely discuss: how their trust ages.
Some forms of trust age well. They rest on mathematical assumptions that, when revisited by the next generation of adversaries, degrade gracefully. Other forms are brittle. They rest on secrets, hardware lifetimes, and assumptions about what future computers cannot do.
For most of the history of applied cryptography, this distinction was an academic curiosity. Today, two forces make it load-bearing: credible quantum threats on the horizon, and AI systems whose outputs we increasingly need to verify. At their intersection, the aging of trust stops being a philosophical question and becomes an infrastructure question.
When we build systems for verifiable AI inference, systems whose job is to convince us that a particular model, on particular inputs, produced a particular output, we are choosing a substrate of trust. That substrate has a half-life. We should choose it carefully.
## Where we are today
The dominant substrate for verifiable AI computation in 2026 is the trusted execution environment.
Intel SGX, Intel TDX, AMD SEV-SNP, and NVIDIA's Confidential Computing on H100 and successor architectures all share a common design. A hardware root of trust. An attestation protocol. A cryptographic chain that lets a remote party verify that specific code ran on specific hardware in a specific configuration.
This is genuine engineering progress, and dismissing it would be foolish. TEEs let us run inference on untrusted infrastructure with reasonable assurance about what executed. They have made entire categories of confidential AI workloads tractable, from model evaluation behind enterprise firewalls to multi-tenant inference with cryptographic isolation.
If you need verifiable inference at production scale today, TEEs are the right answer. The performance overhead is on the order of single-digit percentages for most ML workloads. The developer experience continues to improve. The deployment story is mature.
This is the honest pragmatic baseline. Anyone selling you a different solution today, for production at scale, is either ahead of the curve or telling you a story.
The interesting question is not whether TEEs work today. They do. The question is what assumptions we are encoding into the foundations of an AI infrastructure we will need to trust for years, perhaps decades.
## The cryptographic dependencies hidden in attestation
Every major TEE attestation protocol depends on classical public-key cryptography.
Intel SGX's DCAP attestation uses ECDSA over NIST P-256. AMD SEV-SNP signs its attestation reports with the VCEK, an ECDSA P-384 key, rooted through a certificate chain (VCEK, ASK, ARK) at AMD's certificate authority. NVIDIA's H100 confidential computing attestation follows the same pattern: a certificate chain signed with elliptic curve cryptography, rooted at the manufacturer.
Every one of these schemes is, under current understanding, broken by a sufficiently large fault-tolerant quantum computer running Shor's algorithm.
The reasonable response is that we will migrate to post-quantum signature schemes well before such machines exist. NIST finalized ML-DSA (FIPS 204) and SLH-DSA (FIPS 205) in 2024 precisely so that protocols can rotate. TLS, code signing, software updates, identity infrastructure: each has a clear path to post-quantum migration that does not require throwing away physical infrastructure.
TEE attestation does not have this property.
## The silicon problem
The reason is architectural. The trust anchors for hardware attestation are not configuration values living in software. They are derived from physical secrets fused into the silicon at manufacturing time.
Intel's Provisioning Certification Key, the cryptographic identity at the base of the SGX attestation chain, derives from values written into the chip during fabrication. AMD's chip endorsement keys are anchored in fuses set at wafer test. The same is true of NVIDIA's confidential computing root of trust.
This was a deliberate and reasonable design choice. Rooting trust in physical secrets, sealed during fabrication and never exposed in software, is precisely what makes hardware attestation hardware. The price is that the root key cannot be rotated. To migrate the root of trust, you build new silicon, ship it, deploy it, and decommission the old generation.
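The non-rotatability is easy to see in a toy model. The sketch below is illustrative only (real TEEs use vendor-specific derivations, and every name here is hypothetical): the root key is a deterministic function of a secret burned into fuses at fabrication, so there is no software operation that can change it.

```python
import hashlib
import hmac

def derive_root_key(fuse_value: bytes, context: bytes) -> bytes:
    """Toy model of a hardware root key: a deterministic KDF over a
    secret burned into fuses at manufacturing time. Real TEEs use
    vendor-specific derivations; this HMAC construction is illustrative."""
    return hmac.new(fuse_value, context, hashlib.sha256).digest()

# The fuse value is fixed at fabrication and never leaves the die.
fuse = hashlib.sha256(b"burned-at-wafer-test").digest()

# Every derivation from the same fuse yields the same root key:
k1 = derive_root_key(fuse, b"attestation-root-v1")
k2 = derive_root_key(fuse, b"attestation-root-v1")
assert k1 == k2

# "Rotating" the root means changing the fuse value, i.e. new silicon.
# A new context string gives a different derived key, but the trust
# chain published for this chip is bound to the v1 derivation.
k_new = derive_root_key(fuse, b"attestation-root-v2")
assert k_new != k1
```

The point of the sketch: the only free variable is the fuse value, and the fuse value is set once, in a fab.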
This is not a software upgrade. It is a hardware refresh cycle measured in years, gated by fab capacity, supply chains, and customer adoption velocities.
So the question is not whether TEE vendors can migrate to post-quantum attestation. They can, and serious vendors are already designing the next generation. The question is what happens to the attestation reports the current generation of hardware has already produced, and to the ones it will keep producing until the next generation takes over, and to all of them the day after a sufficiently large quantum computer exists.
## Retroactive forgery
Most discussions of the quantum threat to cryptography focus on confidentiality. The phrase "harvest now, decrypt later" captures the immediate concern: encrypted traffic captured today can be archived and decrypted in the future once quantum capability arrives.
For attestation, the structurally analogous threat is more subtle and arguably worse. Call it "forge later."
An attestation report is a signed statement: this enclave, in this configuration, produced this output. It is meaningful only because the verifier trusts that the signature could only have been produced by the legitimate hardware key.
The day a sufficiently large quantum computer exists, the manufacturer's signing keys for any unrotated hardware generation become recoverable from any certificate they have ever issued. From that point forward, an attacker holding the recovered key can produce attestation reports that verify as legitimate under the original public key, including reports backdated to any moment in the past.
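A toy simulation makes the structural problem concrete. This is not real attestation (the MAC below stands in for the ECDSA signature, and every field name is invented for illustration); the point it demonstrates is that nothing inside the signature binds it to the moment it was produced, so whoever holds the recovered key can sign backdated reports indistinguishable from genuine ones.

```python
import hashlib
import hmac
import json

def sign(key: bytes, report: dict) -> bytes:
    """Toy signature: a MAC over the serialized report. Real attestation
    uses ECDSA; the argument here is structural, not cryptographic."""
    msg = json.dumps(report, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, report: dict, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, report), sig)

vendor_key = b"hardware-signing-key"  # recoverable via Shor in the real scheme

# A legitimate report produced in 2026:
genuine = {"enclave": "model-server", "output": "approve", "timestamp": "2026-03-01T12:00:00Z"}
genuine_sig = sign(vendor_key, genuine)

# An attacker who recovers the key in 2035 can backdate at will:
forged = {"enclave": "model-server", "output": "deny", "timestamp": "2026-03-01T12:00:00Z"}
forged_sig = sign(vendor_key, forged)

# Both verify identically; the signature encodes nothing about *when* it was made.
assert verify(vendor_key, genuine, genuine_sig)
assert verify(vendor_key, forged, forged_sig)
```

The verifier in 2035 has no way to distinguish the two reports, which is exactly the failure described above.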
If a regulator, a court, an auditor, or a counterparty in 2035 needs to verify that a particular AI system produced a particular output in 2026, and the verification protocol depends on a TEE attestation signed with classical elliptic curve cryptography, the answer in 2035 is: this signature could have been forged by anyone with quantum capability.
This is not a defect of TEEs that vendors have failed to address. It is a structural property of any attestation system that ties evidence to a non-rotatable cryptographic root and uses signature schemes that quantum computers can break. The evidence has an expiration date governed by the slowest-aging assumption in the chain.
For ephemeral use cases, this matters very little. Real-time fraud detection, live API serving, short-window trading: the attestation needs to be valid at the moment of verification, and that moment is now.
For use cases where verifiable execution evidence needs to outlive the hardware that produced it, the structure is fragile. Regulatory compliance. Model provenance. Evidence in disputes. Scientific reproducibility. Legal accountability for AI decisions.
The set of use cases in the second category is precisely the set we are now constructing AI governance regimes around.
## Why hash-based cryptography ages differently
There is a different kind of cryptographic primitive, one that ages very differently. It is built almost entirely on collision-resistant hash functions and information-theoretic arguments, and it sits at the foundation of the STARK family of proof systems.
Hash-based cryptography is not new. Lamport described one-time signatures in 1979. Merkle turned them into practical tree constructions. For nearly five decades, the assumption has been interrogated by adversaries and conceded nothing structural. It is one of the oldest live primitives we have.
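Lamport's 1979 construction fits in a few dozen lines and relies on nothing but the hash function. The sketch below is a minimal, stdlib-only version for a 256-bit message digest; it is a teaching sketch, not a production scheme (each key pair must sign exactly one message).

```python
import hashlib
import secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets, one pair per bit of the message digest.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]  # public key: hashes of the secrets
    return sk, pk

def bits(digest: bytes):
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # Reveal one secret per bit of H(message). Each key signs ONE message.
    return [sk[i][bit] for i, bit in enumerate(bits(H(message)))]

def verify(pk, message: bytes, sig) -> bool:
    # Forgery requires inverting or colliding the hash; nothing else is assumed.
    return all(H(s) == pk[i][bit]
               for i, (s, bit) in enumerate(zip(sig, bits(H(message)))))

sk, pk = keygen()
sig = sign(sk, b"attested inference output")
assert verify(pk, b"attested inference output", sig)
assert not verify(pk, b"a different output", sig)
```

The entire security argument is visible in `verify`: the only way to forge is to find a preimage or collision for `H`. That is the assumption that has held for five decades.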
A STARK proof of an inference computation does not say "trust this hardware." It says: here is a mathematical certificate that the computation was performed correctly, and you can verify the certificate in milliseconds without trusting any party other than the soundness of the underlying hash function.
The cryptographic assumption STARKs rely on is, essentially, that finding collisions in the hash function used by the proof system is hard. There is no elliptic curve discrete logarithm to break. No factoring problem. No manufacturer key.
Quantum computers do not leave hash functions untouched, but they do not break them either. Grover's algorithm gives a quadratic speedup for preimage search. Brassard-Høyer-Tapp gives a cube-root speedup for collision finding. These are polynomial degradations, not exponential breaks. For a proof system targeting 128 bits of post-quantum security, you enlarge the hash output and you are done. The cost is small. The mitigation is well understood. There is no Shor-equivalent hanging over the assumption.
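The arithmetic behind "enlarge the hash output and you are done" is simple enough to state directly. The sketch below computes generic security levels for an n-bit hash under the birthday bound, Grover, and BHT; it ignores the (substantial) constant factors and memory costs that make BHT less practical than the exponent suggests, so these are conservative figures.

```python
def collision_security_bits(hash_output_bits: int) -> dict:
    """Generic security levels for an n-bit hash, in bits of work.
    Constant factors and quantum-memory costs are ignored (conservative)."""
    return {
        "classical_collision": hash_output_bits // 2,    # birthday bound
        "quantum_collision_bht": hash_output_bits // 3,  # Brassard-Hoyer-Tapp
        "quantum_preimage_grover": hash_output_bits // 2,
    }

# A 256-bit hash drops to ~85 bits of collision resistance under BHT:
assert collision_security_bits(256)["quantum_collision_bht"] == 85

# Enlarging the output to 384 bits restores 128-bit post-quantum
# collision resistance. A polynomial attack, a constant-size fix.
assert collision_security_bits(384)["quantum_collision_bht"] == 128
```

Contrast this with Shor against P-256 or P-384, where no output enlargement helps: the attack is exponential-to-polynomial, not polynomial-to-slightly-worse-polynomial.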
This is why STARKs are described as plausibly post-quantum. The qualifier "plausibly" matters; cryptography is humble by tradition. But the structural argument is sound. A STARK proof produced today, with a collision-resistant hash function of adequate output length, remains verifiable, and remains sound under the best-understood quantum attacks, indefinitely.
A STARK proof has no expiration date that depends on hardware generations. It has no retroactive forgery surface. The trust does not age.
> A TEE attestation says trust this hardware. A STARK proof says verify this math. One ages with silicon. The other ages with our understanding of hash functions.
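What "verify this math" looks like mechanically: the core operation a hash-based verifier performs, over and over, is checking Merkle authentication paths against a committed root. The sketch below is a simplified stand-in for that check (real STARK verifiers do much more, and the leaf encoding here is invented for illustration), but it shows the shape of the trust: a root hash, a leaf, a path of sibling hashes, and nothing else.

```python
import hashlib

def H(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def merkle_root(leaves):
    level = [hashlib.sha256(x).digest() for x in leaves]
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes from leaf to root for the leaf at `index`."""
    level = [hashlib.sha256(x).digest() for x in leaves]
    path = []
    while len(level) > 1:
        path.append(level[index ^ 1])
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_path(root, leaf, index, path) -> bool:
    # The verifier trusts only the hash function: no vendor, no hardware.
    node = hashlib.sha256(leaf).digest()
    for sibling in path:
        node = H(node, sibling) if index % 2 == 0 else H(sibling, node)
        index //= 2
    return node == root

leaves = [b"row-%d" % i for i in range(8)]  # e.g. rows of an execution trace
root = merkle_root(leaves)                  # the commitment a proof carries
assert verify_path(root, b"row-5", 5, merkle_path(leaves, 5))
assert not verify_path(root, b"row-0", 5, merkle_path(leaves, 5))
```

Verification touches `log2(n)` hashes per query, which is why a verifier can run in milliseconds regardless of how expensive the original computation was.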
## The honest comparison today
The argument above is a long-term one. The short-term picture is more nuanced.
For ML inference at production scale today, STARKs are not yet competitive with TEEs on raw cost or latency. Proving a forward pass over a modern transformer is computationally expensive. The state of the art has moved fast in the last two years (Circle STARKs over Mersenne-31, better arithmetizations of common ML operations, GPU-accelerated provers, richer lookup arguments), but we are not yet in a regime where you drop a prover in front of every Llama-class call and pay a tolerable overhead.
This is true. It is also the trajectory GPU-accelerated cryptography has followed before, and the trajectory hardware-friendly proof systems are following now. Better arithmetizations, better commitment schemes, better lookup arguments, better field choices, better silicon: these compound. The resulting curve is not asymptotic to a wall.
The honest framing is therefore not STARKs versus TEEs. It is: TEEs solve the verifiable AI problem today; STARK-based systems will solve it post-quantum and trustlessly tomorrow; the open question is how aggressively we invest in closing the gap.
The consequence asymmetry is what should settle the question. Getting it wrong on TEEs produces a structural credibility crisis for AI evidence in the late quantum era. Getting it wrong on accelerating STARKs produces some duplicated R&D. Current funding patterns invert that asymmetry: we pour capital into the substrate whose trust decays and starve the one whose trust does not.
## Beyond the quantum question
The quantum threat is the sharpest argument for migrating verifiable AI infrastructure to STARK-based foundations, but it is not the only argument.
There is also a question of trust topology. A TEE attestation is a statement signed by a hardware vendor. Verifying it means verifying a chain that terminates at Intel, AMD, or NVIDIA. This is acceptable for many purposes. It is not acceptable for all of them.
For systems whose entire point is to remove trusted intermediaries (sovereign AI deployments, cross-jurisdictional model audits, evidence in adversarial proceedings, anything that touches geopolitical asymmetry), rooting trust in any single vendor is a structural compromise. This is true regardless of whether the vendor has any present intention of misusing the trust. Architectures should be evaluated on their assumptions, not on the goodwill of their custodians. State actors have a long history of converting vendor goodwill into supply-chain access. Future ones will too.
A STARK-based verification protocol moves the trust assumption from "this vendor's silicon is honest and uncompromised" to "this hash function is collision resistant." The second is publicly verifiable, vendor-neutral, and falsifiable by anyone in the world with the requisite mathematical training. It is the kind of assumption that scales across borders, jurisdictions, and decades.
There is also the question of side channels. The last decade of TEE history is, in part, a history of an arms race over speculative execution leaks, power analysis, voltage glitching, and architectural state attacks (Foreshadow, Plundervolt, ÆPIC, Downfall, the ongoing catalog). Each generation has been responsibly disclosed and patched. Each generation has reminded us that "this code ran inside an enclave and was not observed" is a more fragile assumption than the abstraction implies. Pure cryptographic verification, by construction, sidesteps this entire class of concerns.
None of this is an indictment of TEE vendors. It is a recognition that hardware-rooted trust and math-rooted trust are different categories of guarantee, with different failure modes and different aging behavior. A mature verifiable AI infrastructure will use both, deliberately, with explicit reasoning about which guarantee each component provides.
## What the path forward looks like
The practical path is not a flag day where we abandon TEEs and switch to STARKs. It has three layers.
The first is hybrid architectures. TEE-attested inference today, with STARK proofs produced asynchronously for the subset of computations where long-lived evidence matters. This is feasible now. It captures the latency and cost advantages of TEEs for live serving while building durable mathematical evidence for audit, compliance, and dispute resolution.
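The hybrid pattern is simple to express in code. The sketch below is hypothetical throughout: both function bodies are stand-ins (no real TEE or prover is invoked), and the names are invented for illustration. What it shows is the control flow: the caller gets a TEE-attested answer immediately, and the expensive proof runs asynchronously only for requests flagged as needing durable evidence.

```python
# Hypothetical sketch of the hybrid pattern. All names are illustrative;
# the stand-in functions below simulate a TEE call and a prover run.
from concurrent.futures import ThreadPoolExecutor
import hashlib

def tee_attested_infer(request: bytes):
    """Stand-in for a TEE-hosted model call returning (output, attestation)."""
    output = b"model-output-for:" + request
    attestation = hashlib.sha256(b"enclave-report" + output).hexdigest()
    return output, attestation

def generate_stark_proof(request: bytes, output: bytes) -> dict:
    """Stand-in for an (expensive) prover run over the execution trace."""
    return {"claim": hashlib.sha256(request + output).hexdigest(), "proof": "..."}

prover_pool = ThreadPoolExecutor(max_workers=1)

def handle(request: bytes, durable_evidence: bool):
    # Fast path: answer now, with a hardware attestation valid today.
    output, attestation = tee_attested_infer(request)
    proof_future = None
    if durable_evidence:
        # Slow path: build the long-lived mathematical evidence off the
        # serving critical path, to be archived for audit and disputes.
        proof_future = prover_pool.submit(generate_stark_proof, request, output)
    return output, attestation, proof_future

out, att, fut = handle(b"loan-decision-123", durable_evidence=True)
proof = fut.result()  # archived later; the caller never waited on it
```

The design choice worth noting: the flag pushes the proving cost onto exactly the subset of traffic whose evidence must outlive the hardware, which is what makes the economics workable today.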
The second is accelerated investment in STARK-friendly machine learning. Arithmetizations of common ML operations that minimize the cost gap with native execution. Hash function choices optimized for proof generation. Tighter integration between proof systems and ML compilers. Most of this work is happening in academic labs and a small number of crypto-native companies. It deserves an order of magnitude more attention from the AI community than it currently receives.
The third is a serious public conversation about cryptographic agility in AI evidence regimes. The EU AI Act, the NIST AI Risk Management Framework, and emerging national compliance standards are being drafted with no explicit treatment of the temporal validity of cryptographic evidence. Any framework that treats a TEE attestation and a mathematical proof as equivalent forms of evidence is encoding a fragility into the foundations of AI governance that will surface long after the authors of the framework have moved on.
## Why this matters beyond cryptography
The AI models we are deploying now mediate access to information, healthcare, credit, justice, and opportunity. They produce evidence future generations will need to interrogate. Decisions made today by automated systems will be litigated for decades. Models trained today will be subject to provenance claims long after the hardware they trained on has been recycled.
Whether that evidence remains meaningful is a question about the kind of accountability we are encoding into the substrate of the digital world.
If we build verifiable AI on foundations whose trust expires when a hardware generation ages out, or when a particular cryptanalytic capability matures, we are quietly handing future actors the ability to deny, forge, or rewrite the past. That is a power no institutional design (regulatory, judicial, or journalistic) has ever managed to constrain once it exists.
If we build verifiable AI on foundations whose trust is rooted in mathematical assumptions that age gracefully and are publicly verifiable across borders and generations, we preserve a different possibility. The record of what AI systems did, on what inputs, with what outputs, remains examinable by anyone, anywhere, for as long as the mathematics holds.
This is a choice about the time horizon of accountability. About whether the evidentiary substrate of the AI era is built to outlive the corporations and governments that operate it. About whether the truth of what machines did in our time will remain available to the people who come after us.
The pragmatic answer for today is TEEs. The right answer for the durability of the record is post-quantum, math-rooted verification. Both should be said plainly. And we should be moving on the second faster.
Math is patient. Hardware is not. We get to choose which one we trust to carry the evidence forward.