# The Complete Guide to Choosing a Web3 Audit Partner
**A Stack-Specific Handbook for Technical Founders**
This guide is structured to help you look past the marketing fluff and verify **technical competence** specific to your tech stack.
---
## Phase 1: The Universal Filter (Pass/Fail)
*Before getting into stack specifics, filter firms using these non-negotiables. If they fail these, do not proceed.*
### 1. The "Lead Auditor" Rule
Ask **who** specifically will audit your code. Large firms often sell you on their brand but assign junior auditors to your project.
* **Action:** Demand to know the lead auditor’s handle, GitHub profile, or portfolio.
### 2. Public Report Transparency
If they don't publish their past reports (or hide them behind NDAs), assume the quality is low. You need to see their work to verify their depth.
### 3. Price vs. Time Logic
A thorough manual audit typically covers **200-400 lines of complex code per auditor-day**.
* **Red Flag:** If a firm promises to audit a 5,000 LOC protocol in 3 days, they are just running automated tools (see the back-of-envelope estimate below).
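To make that red flag concrete, here is the back-of-envelope math as a tiny sketch (the 300 LOC/auditor-day figure is simply the midpoint of the range above, not a number from any particular firm):

```rust
// Back-of-envelope estimate of manual review effort, assuming a midpoint
// throughput of roughly 300 lines of complex code per auditor-day.
fn estimated_auditor_days(total_loc: u32, loc_per_auditor_day: u32) -> u32 {
    // Round up: a partial day is still a day of booked effort.
    (total_loc + loc_per_auditor_day - 1) / loc_per_auditor_day
}

fn main() {
    // A 5,000 LOC protocol at ~300 LOC/day is roughly 17 auditor-days of
    // manual work -- a far cry from a 3-day quote.
    println!("{} auditor-days", estimated_auditor_days(5_000, 300));
}
```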
---
## Phase 2: Stack-Specific Deep Dives
*This is where you separate generalists from specialists. Use these specific criteria to vet their expertise in your ecosystem.*
### 🔷 1. EVM (Ethereum, L2s, Avalanche, BSC)
**The Litmus Test:** Do they rely solely on standard tools (Slither, Mythril)? A top-tier firm will have their own internal fuzzing infrastructure or formal verification harness.
* **Key Technical Focus Areas:**
* **Reentrancy Variations:** Beyond simple reentrancy, do they check for *read-only reentrancy* (a common DeFi oracle exploit)?
* **DeFi Composability:** Do they understand how your protocol interacts with Curve, Aave, or Uniswap V3? (e.g., flash loan attack vectors).
* **Gas Optimization vs. Security:** Can they distinguish when gas savings compromise security (e.g., unchecked return values)?
> **✅ Green Flag:** They mention "invariant testing" or "property-based testing" (using Foundry/Echidna) in their proposal.
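To see what that phrase means in practice: EVM firms typically write invariant tests in Solidity with Foundry or Echidna, but the idea itself is language-agnostic. The sketch below illustrates it in Rust with the `proptest` crate, using a hypothetical toy swap pool as the system under test (all names and numbers are made up for illustration):

```rust
use proptest::prelude::*;

// Hypothetical constant-sum pool: a swap pays out token B one-for-one
// against token A, capped by what the pool actually holds.
struct Pool {
    reserve_a: u64,
    reserve_b: u64,
}

impl Pool {
    fn swap_a_for_b(&mut self, amount_in: u64) -> u64 {
        let amount_out = amount_in.min(self.reserve_b); // never pay out more than the pool holds
        self.reserve_a += amount_in;
        self.reserve_b -= amount_out;
        amount_out
    }
}

proptest! {
    // Invariant: across any random sequence of swaps, the pool never pays
    // out more value than it takes in (total reserves never shrink).
    #[test]
    fn pool_never_leaks_value(amounts in proptest::collection::vec(0u64..1_000, 0..50)) {
        let mut pool = Pool { reserve_a: 1_000, reserve_b: 1_000 };
        let total_before = pool.reserve_a + pool.reserve_b;
        for amount_in in amounts {
            pool.swap_a_for_b(amount_in);
        }
        prop_assert!(pool.reserve_a + pool.reserve_b >= total_before);
    }
}
```

A strong proposal will name invariants like this for *your* protocol (solvency, conservation of value, access control) rather than only listing known bug patterns.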
### 🦀 2. Solana (Rust)
**The Litmus Test:** Solidity intuition does not translate to Solana. If they talk about "reentrancy" as a primary risk without mentioning "account data validation," they are tourists.
* **Key Technical Focus Areas:**
* **Account Substitution Attacks:** Do they rigorously check that every account passed into an instruction is the *correct* account? (See the sketch below.)
* **Owner Checks:** Are they verifying `account_info.owner` matches the program ID?
* **PDA (Program Derived Address) Seeds:** Are they verifying seed derivation to prevent "fake" PDAs from authorizing actions?
* **Arbitrary CPI (Cross-Program Invocation):** Do they verify that the target program ID of every CPI is validated, so an instruction can't be pointed at a malicious program?
> **✅ Green Flag:** They are active contributors to the **Anchor** framework or have published Solana-specific security research and tooling (e.g., Soteria).
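To make the account-validation bullets concrete, here is a minimal sketch in raw (non-Anchor) Solana program style. The "vault" account and the chosen error codes are hypothetical; the point is the owner and address checks an auditor should be tracing for every account an instruction touches:

```rust
use solana_program::{
    account_info::AccountInfo, entrypoint::ProgramResult, program_error::ProgramError,
    pubkey::Pubkey,
};

// Validation a handler should perform on a hypothetical "vault" account
// before trusting its data. Without these checks an attacker can pass in
// an account they control (account substitution) and the program will
// happily deserialize and act on it.
fn check_vault_account(
    program_id: &Pubkey,
    vault: &AccountInfo,
    expected_vault_key: &Pubkey,
) -> ProgramResult {
    // Owner check: the account must be owned by this program; otherwise its
    // data could have been written by anyone.
    if vault.owner != program_id {
        return Err(ProgramError::IncorrectProgramId);
    }
    // Address check: it must be the specific vault we expect, not merely an
    // account with the right owner and a plausible-looking layout.
    if vault.key != expected_vault_key {
        return Err(ProgramError::InvalidArgument);
    }
    // PDAs get the same treatment: re-derive the address from its seeds
    // (Pubkey::find_program_address) and compare before trusting it.
    Ok(())
}
```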
### 📦 3. Move (Aptos, Sui)
**The Litmus Test:** Move's resource model rules out many generic bug classes by design, so the remaining risk lives in your logic. The auditor must understand **Resource Capabilities** and **Object Ownership**.
* **Key Technical Focus Areas:**
* **Capability Leakage:** Can a user accidentally obtain a capability (permission object) they shouldn't hold, e.g., a `MintCapability`? (See the sketch below.)
* **Sui Specifics:** Do they understand the difference between *Shared Objects* and *Owned Objects*? Misclassifying these can lead to consensus bottlenecks or security holes.
* **Formal Verification:** The Move ecosystem is heavy on the **Move Prover**. A top firm should offer to write formal specs for your critical invariants.
> **✅ Green Flag:** They don't just audit the code; they audit the *module upgrade policies* and governance controls.
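Capability leakage is easiest to see in code. A Move auditor would look for it in Move itself (ideally backed by Move Prover specs), but the shape of the bug is language-agnostic. The Rust sketch below uses hypothetical names (`MintCapability`, `Treasury`) to show a public path that hands a permission object to anyone who asks:

```rust
// Hypothetical capability type: holding a value of this type *is* the
// permission to mint. (In Move this would be a resource such as a
// MintCapability stored under the admin's account.)
pub struct MintCapability;

pub struct Treasury {
    pub total_supply: u64,
}

impl Treasury {
    // Correct: minting requires the caller to prove possession of the capability.
    pub fn mint(&mut self, _cap: &MintCapability, amount: u64) {
        self.total_supply += amount;
    }

    // Capability leak: a "convenience" function that creates and returns the
    // capability to any caller. An auditor should trace every public path
    // that creates, copies, or returns a capability.
    pub fn register_minter(&self) -> MintCapability {
        MintCapability
    }
}

fn main() {
    let mut treasury = Treasury { total_supply: 0 };
    let cap = treasury.register_minter(); // any caller can reach this...
    treasury.mint(&cap, 1_000_000); // ...and now mint arbitrarily
    println!("total supply: {}", treasury.total_supply);
}
```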
### ⚛️ 4. Cosmos (Interchain, Cosmos SDK, CosmWasm)
**The Litmus Test:** Cosmos is about *interoperability*. If they don't understand IBC (Inter-Blockchain Communication), they are useless to you.
* **Key Technical Focus Areas:**
* **Non-Determinism:** Go map iteration order is randomized. If that order leaks into state, nodes diverge and the chain halts. Do they check for this? (See the sketch below.)
* **IBC Packet Handling:** Do they check for packet timeouts and acknowledgement logic? What happens if a packet gets stuck?
* **Module Interaction:** Cosmos SDK modules (Bank, Staking, Gov) have complex interactions. Do they check if your custom module breaks the `Bank` module's invariants?
> **✅ Green Flag:** They have experience auditing **relayer** infrastructure or have found bugs in the Cosmos SDK core itself.
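The non-determinism bullet deserves an illustration because the bug is so easy to write. In Cosmos SDK modules the classic form is ranging over a Go map inside state-machine code; the same class of bug shows up in CosmWasm contracts whenever unordered iteration reaches consensus-critical state. The Rust sketch below (hypothetical reward-payout logic) shows the unordered version and a deterministic fix:

```rust
use std::collections::{BTreeMap, HashMap};

// Hypothetical payout: each validator draws its stake from a fixed pot,
// paid out in iteration order until the pot runs dry.
fn pay_rewards_nondeterministic(stakes: &HashMap<String, u64>, mut pot: u64) -> Vec<(String, u64)> {
    let mut payouts = Vec::new();
    // BUG: HashMap iteration order is unspecified and can differ between
    // nodes and runs, so nodes processing the same block can disagree on
    // who got paid once the pot is exhausted -- a consensus failure.
    for (validator, stake) in stakes {
        let amount = (*stake).min(pot);
        pot -= amount;
        payouts.push((validator.clone(), amount));
    }
    payouts
}

fn pay_rewards_deterministic(stakes: &HashMap<String, u64>, mut pot: u64) -> Vec<(String, u64)> {
    let mut payouts = Vec::new();
    // Fix: iterate in a stable order (sorted keys here; better yet, keep the
    // state in an ordered structure to begin with).
    let ordered: BTreeMap<_, _> = stakes.iter().collect();
    for (validator, stake) in ordered {
        let amount = (*stake).min(pot);
        pot -= amount;
        payouts.push((validator.clone(), amount));
    }
    payouts
}

fn main() {
    let stakes: HashMap<String, u64> =
        [("val-a".to_string(), 60), ("val-b".to_string(), 60)].into_iter().collect();
    // With a pot of 100, whoever is iterated first gets 60 and the other 40;
    // the nondeterministic version lets hash order decide who loses out.
    println!("{:?}", pay_rewards_deterministic(&stakes, 100));
}
```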
---
## Phase 3: The "Sample Report" Audit
*Ask for a recent audit report for a project in your stack. Do not read the summary; go straight to the "Issues" section.*
| Area | 🚩 Red Flag (Low Quality) | ✅ Green Flag (High Quality) |
| :--- | :--- | :--- |
| **Issue Titles** | Generic titles like "Floating Pragma" or "Missing Zero Address Check." | Specific titles like "Incorrect Reward Calculation due to Precision Loss in `calcReward`." |
| **Severity** | Inflating "Gas" or "Style" issues to "Medium" severity to make the report look longer. | A "Medium" or "High" finding that includes a coded **Proof of Concept (PoC)** exploit. |
| **Description** | "The code should be updated to follow best practices." | "Line 45 allows an attacker to bypass the check by passing `amount = 0`, leading to..." |
| **Recommendations** | Copy-pasted generic advice. | Actual code snippets showing exactly how to fix the specific logic error. |
---
## Phase 4: The 3 "Killer" Questions to Ask
*During your call, ask these specific questions to gauge their depth.*
**1. "Can you walk me through a 'High' severity finding you found recently in a project that automated tools missed?"**
* *Why:* Tests if they do manual logic review or just run scanners.
**2. "How do you handle 'Acknowledged' issues? Will you re-review our fixes included in the base price?"**
* *Why:* You want a partner who verifies the fix, not one who charges extra for a "re-audit" of 10 lines of code.
**3. "Do you offer post-deployment monitoring or incident response support?"**
* *Why:* The best firms (e.g., OpenZeppelin, Spearbit, OtterSec) view security as a lifecycle, not a one-time transaction.
---
## Summary Checklist for Your Decision
- [ ] **Expertise:** Verified lead auditor + stack-specific research/tooling.
- [ ] **Methodology:** Manual review is the primary focus; automation is secondary.
- [ ] **Transparency:** Public reports show complex, logic-based findings.
- [ ] **Alignment:** They offer fix reviews and help with bug bounty setup (e.g., via Immunefi).
> **💡 Final Advice:** If you have to choose between a "Big Brand" firm with a 6-month waitlist and a smaller, hungry boutique firm composed of top **CTF (Capture The Flag) security researchers**—choose the researchers. They are often more thorough and cheaper.