Nick Gheorghita
# Portal Summit

## Day 1 - May 28, 2024

### Hive recap

- Missing Hive tests for wire spec level
  - incrementing sequence numbers / ack numbers
  - Sending right ack/fin packets & responding correctly
  - But there are limits to what we can test with hive; every client needs to be treated as a black box (so hive isn't the best place for these tests)
  - (this isn't possible currently; we would need to develop a special testing framework, and everyone would need to add a standardized interface to their uTP library. As stated above this wouldn't be a "hive" test: uTP is too low level for Portal hive tests, and uTP interop isn't the same thing as Portal client interop - Kolby)
- Missing negative test vectors
  - [ ] TODO: invalid payloads / clients are rejecting invalid payloads (prioritize!)
- [ ] TODO: Missing beacon network tests
- [ ] TODO: Missing state recursive gossip
- kurtosis?
  - better for testing mini-testnets
  - enables network latency simulations
  - (Pari told me about `assertoor` and `shadow`; I think they are likely more useful or could replace the usefulness kurtosis has for us. Kurtosis is currently a framework for testing coordination, like what had to happen for the merge - Kolby)
- Hive currently is a mix of stable tests and unstable tests. It also combines stable clients and in-development clients. This means that every suite reports failure, and leads to a broken-window problem. (All tests are stable; if a test is failing for 1 client but working for the rest, I believe it is an issue with the client. If a test isn't ready we wouldn't merge it - Kolby)
- [ ] TODO: display which clients are "fully" implemented (eg you expect 100% passing) and another view for "partially" implemented clients (where < 100% passing is ok)
- [ ] TODO (within 2-3 months): run 2 different instances of hive (production & testing). Get production to 100% and maintain it there
- [ ] TODO: update discord bot to only notify on failures / regressions for production-grade tests
- [ ] TODO: document how to contribute to hive
- [ ] TODO: deprecate portal-hive repo - move util library to main hive

### How to handle being offered the same content simultaneously

If a client is offered the same content multiple times, it would accept and start transferring both content values simultaneously (it doesn't have the content in its database at the time of any of the offers). This is a waste of resources that scales with how many concurrent transfers there are.

- pertinent to state network where FIND/OFFER messages are different (ie. with / without proofs)
- (problematic) solution #1
  - limit accepts to 1 per content key
  - cache backup offer enrs
  - works fine for history network / not for state
  - cannot go look for failed txs without any backup enrs
  - also susceptible to a trickled uTP stream attack (eg 1 byte / packet)
- (problematic) solution #2
  - allow more than 1 concurrent accept per content key
  - still not great, doesn't solve the problem
- (not problematic?) solution #3
  - `delay(n1, n2) -> time value`, n1 = offering node, n2 = accepting node
  - delay offering content to nodes close to you, immediately offer content to nodes far away from you
  - bridges bypass this mechanism
  - how do we measure this?
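A minimal sketch of what the distance-based `delay(n1, n2)` from solution #3 might look like. Everything here is an assumption for illustration, not spec: the function names, the `MAX_DELAY_MS` constant, and the log-distance scaling are all hypothetical.

```python
# Hypothetical sketch of "solution #3": delay offers to nearby peers so that
# content radiates outward first, reducing simultaneous duplicate transfers.
# Names and constants are illustrative assumptions, not Portal spec.

MAX_DELAY_MS = 500  # assumed upper bound on the delay for the closest peers

def xor_distance(node_a: int, node_b: int) -> int:
    """XOR metric over 256-bit node IDs, as used in Kademlia-style DHTs."""
    return node_a ^ node_b

def offer_delay(my_id: int, peer_id: int) -> float:
    """Delay (ms) before offering content to `peer_id`.

    Far peers (large XOR distance) get the offer immediately; close peers
    get a delay proportional to how close they are, so they will usually
    have received the content from elsewhere and can reject the offer.
    """
    d = xor_distance(my_id, peer_id)
    log_distance = d.bit_length()           # 0..256 for 256-bit IDs
    closeness = (256 - log_distance) / 256  # 0.0 = far, 1.0 = self
    return MAX_DELAY_MS * closeness
```

Bridges would simply skip this delay, matching the "bridges bypass this mechanism" point above.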
- measurement ideas:
  - HEAD latency - histogram for how quickly the latest head is available
  - glados via monitoring incoming offers
  - how fast does the network itself get saturated
  - how often do our clients have a redundant / wasted transfer

### Portal Endgame: Roadmap

#### Ideals

- Foundation of most EL clients
- Can do everything an EL can do today, perhaps with some exceptions (like block building)
- Doesn't require a full node, still lightweight
- Adoption for core infrastructure and userland
- Sustainable network from active use (we could turn all our nodes off, and it keeps going)
- Usable for userland activities from phones/pis/consumer devices/etc
- EL adoption fully handles bridging
- Layer 2 adoption / non-mainnet adoption
- CL data on Portal, bridged from CL clients

#### Intermediate Goals

**MVP**

- State (Merkle & Verkle)
  - **Archive**: Trie node hash-based access
  - Head: Path access to account & storage
  - *Optional/Experimental* Deep Archive: Erigon reverse-diff approach (10x storage improvement)
- Large SSZ object storage (can help with re-seeding cold data)
- Bulk History Sync
  - After MVP
  - Depends on Large-SSZ storage
- Beacon Snap Sync
  - After MVP
- Transaction Gossip & Reverse Bridge to dump them back into the devp2p network
  - Depends on state network HEAD state
- **NAT traversal**
  - Needed for bringing on users
- Native uTP over discv5
  - https://github.com/ethereum/devp2p/issues/229
  - Optional but ideal for MVP
- Advanced State Handling (Specific client work, not general protocol work)
  - Discard Cold State
  - Caching
- Viable testnet/non-mainnet solution
  - Depends on large radius nodes
- **Large radius nodes** (want to spin up ~15 nodes to store all of testnet data)
  - Needed for MVP
- Canonical Indices Network for tx-by-hash
  - After MVP
- Re-seeding cold data that will be lost due to nodes eventually going offline
  - Unclear what priority is here. Incentivization? Nearly intractable:
    - Who will pay?
    - Hard to reliably prove data serving
    - How to transfer funds, when you don't have access to the network yet

### EIP-4444 and EL Adoption

- Geth + Nethermind + Besu
  - [97% of network](https://clientdiversity.org/)
  - Adoption from these 3 clients means we get actual users
- DevCon November 2024
  - In-person event attended by EL clients
    - Who ALREADY know things about Portal
    - Have spent 3-4 months implementing
    - Started by July 1 2024
  - Find out WHO is implementing 4444s on each team
    - IF they are interested at all
  - Event should be run by EL teams with Portal devs as a resource
- Create EIP
  - Maybe as simple as linking to the history network spec
  - Copy/paste spec into EIP if necessary

## Day 2 - May 29, 2024

### Large Radius Nodes

Why are they problematic? They make finding nodes addressed close to the target content more difficult during the RFC (RecursiveFindContent) lookup.

- Approx Geth archive node size: 20TB (assumes future growth)
- Target data redundancy: 10x
- Network storage size requirements: 200TB -> 500TB (assumes future growth)

It's easier to maintain a fleet of fewer, larger nodes than more, smaller nodes while we are (mostly) responsible for supporting the network in the initial phases.

- [ ] TODO: deploy a variety of differently-sized nodes to the network.

Solution: a single node, with large storage, that is represented via X different identities on the network, derived BIP39-style (HD wallet, hierarchical deterministic). This scheme doesn't protect against sybil attacks (how can we use Glados to prevent that?) but it does improve network topology for honest nodes.

The ENR will contain:

- master pub key (to derive path identities)
- path
- id

Goal: represent a master node (aka a single node with multiple ids) as a single node in a peer's routing table.

Is there a problem using ring geometry vs xor for distance calculation with this scheme? "Sub" keys need to equally divide / bifurcate the keyspace of the max effective radius of the master node. Done by brute forcing the path.
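The "brute force the path" step can be sketched as follows. This is a toy illustration, not a specified scheme: the stand-in derivation function, the helper names, and the equal-slice bucketing are all assumptions.

```python
# Illustrative sketch of the "one machine, many node IDs" idea: derive
# sub-identities from a master key plus a path (HD-wallet style), and
# brute-force path indices until each derived ID lands in the keyspace
# slice it is meant to cover. All names are hypothetical.
import hashlib

def derive_sub_id(master_pubkey: bytes, path: int) -> int:
    """Toy stand-in for HD derivation: hash(master || path) -> 256-bit ID."""
    h = hashlib.sha256(master_pubkey + path.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big")

def find_paths(master_pubkey: bytes, n_subnodes: int) -> dict[int, int]:
    """Brute-force one path per equal slice of the 2**256 keyspace."""
    slice_size = 2**256 // n_subnodes
    found: dict[int, int] = {}
    path = 0
    while len(found) < n_subnodes:
        sub_id = derive_sub_id(master_pubkey, path)
        bucket = sub_id // slice_size
        if bucket not in found:
            found[bucket] = path  # this path's derived ID covers this slice
        path += 1
    return found

paths = find_paths(b"\x02" * 33, 4)
# By construction, each of the 4 keyspace quarters gets exactly one path.
```

A peer can verify a sub-ID from (master pub key, path, id) alone, which matches the three ENR fields listed above.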
- [ ] TODO: Figure out a viable mechanism for nodes to increase their radius.

Open Questions:

- how do sub-nodes manage routing tables? share a single one?
- how should the sub-nodes of a master node be represented in a peer's routing table? multiple times? a single time?
- implementation? is there a simpler, MVP implementation?
- [ ] TODO: graphs: a histogram based on all findcontent requests that I get - how far away from me was it served? and what distance was the content I served from my id? measure these before and after deploying variable-sized nodes to the network

### Glados

Portal Prague, 29 May 2024

#### 1. Glados introduction by Mike

- coverage graph
- blue and orange color
- 99.9% of 4444s data into the network
- census explorer overview

#### 2. What do we want in Glados

- make Glados more friendly to outside people
- need to do 5000 audits per minute
- able to see sybil attacks on census explorer
- percentile response times
  - Ping latency
  - FindContent latency
  - RecursiveFindContent latency
  - Content first seen in network delay
- ability to see glados metrics on the ethportal website
- total derived network capacity before Devcon
- reason for failure: uTP, or nobody should have it
- glados node view of the head
- check beacon sync
- latency measurements
- block-time-based success rate chart
- revive bi-weekly or monthly Glados meetings
- client count pie chart as a stack-wide graph over time
- testnet view in Glados
- ready for next Pectra fork
- support multi-client Glados
- download failing keys button: endpoint to expose failed audits

### State Network

Goals:

- MVP for devcon

Roadmap:

- Glados monitoring
- Infrastructure

Kolby's recap:

- Bridge currently pushes state onto the network; some forks are not available yet
- There's a problem with the trie library where it doesn't tell us which intermediate nodes changed
  - we need to account for extension nodes
- Need a portal hive bridge test for regeneration
- It will take 3-5 years to gossip all state onto the network
- 2 strategies for how to make this faster:
  - In the bridge we should do a census of the entire network and gossip to all interested
  - bottleneck in trin, our main event loop gets blocked (?? this doesn't appear true in source code)
- Milos Q: let's say we have a good way of seeding the data, which data do we seed before devcon?
- Piper/Jason A: start from block 15 mil and gossip forward
- Head snapshot: roughly 300-500GB of trie nodes
- Need to find out how many bridges we need to push a single diff in 12 seconds (depends on where in the chain we are) (We want to push a single diff in 4 seconds or less ideally - Kolby)
- How do we reliably export state near head in a format that makes it easy for us to load into the network, and how do we move on from that point with our clients generating the next state and the next state? That's what we need for devcon: keeping up with the head of the chain and doing getBalance (just do the account trie, smoke & mirrors everything else).
- MVP for devcon:
  - snapshot of head state
  - bridge able to keep up at some predictable lag behind head (hopefully less than a minute or 2)
  - validation of state roots
  - ok to be slightly incorrect for the sake of demo
- Milos Q: why not do the beginning blocks instead?
- Piper A: Por que no los dos?
- Glados currently doing a random walk of the account trie; need a pre-image solution for the contract trie

#### Failure to Propagate if State Flip-flops

Background: In state network gossip, the specification says that the bridge will gossip the leaves of the state trie. Then, those receiving nodes strip off the leaf they store, and gossip the parent of that leaf. If that parent is already stored (say because a sibling leaf gossiped it already), then the process terminates. This termination helps prevent having too many nodes hammer the area of the network that needs to store the state root.

Milos identified a problem when an account's state is modified to be the same as in an older block, so that its leaf node hash has already been gossiped in a previous block. In this case, intermediate nodes will not be correctly gossiped: the bridge tries to offer the "new" (identical) leaf trie nodes, all Portal nodes reject them as already present, and the bridge terminates.

In other words: if you flip-flop a value in state between two values, A->B->A, you get a sub-trie that is the same as in a previous block, while the things above it are different. When you try to gossip it, you gossip identical data, so nodes will say "no thanks" even though the proof changed. Normally you keep stepping up the trie until someone says yes, so that the state root eventually gets gossiped by everybody.

Current solution: the bridge gossips everything. This has the unfortunate effect of hammering the state root nodes.

### Verkle

Milos:

- Unlike Merkle, Verkle has a 256 branching factor. There are no extension nodes, so every node has 256 branches, except the very last one, the leaf.
  The hash of a node is not a keccak hash like in Merkle; it uses Bandersnatch elliptic curve hashing. The Verkle trie has 32-byte keys and values, and the leaf nodes are chunked into 4 parts.
- Ethereum is going to store all account state and contract state in one Verkle trie. Data that is close together is colocated in leaf nodes as much as possible.
- Problem: if you go past the first 2 layers, most likely only one path will be modified per block. So there's a big duplication factor.
- Working on generating data now
- [ ] TODO: actually pipe data in to the network
- On the approach of splitting the 256-value node into 2 layers of 16-value nodes:
  - Generate an extra proof at each level, but this doubles the storage
  - Can also use a STARK ZK proof to reduce the proof size down to `(2*log2(256)+1)*32 = 17*32` bytes, the "Inner Product Argument"
  - That proof size is needed once for each 16 new children
  - Using a multiproof, you can squash them all down to a single proof of 17*32 bytes
  - Milos will post a link to the Inner Product Argument
- Verkle is targeted to launch around late 2025 or early 2026
- Why can't we skip Merkle and go straight to Verkle?
  - We need to launch right away, by devcon, much earlier than the Verkle launch. This launch needs to include real data, not just testnet data
  - Work on the Merkle archive won't be wasted anyway, because we want to access old state no matter what
- Verkle research is working on gas pricing, which changes in many ways, including:
  - accessing bytecode
  - accessing storage slots
  - even basic balance transfers

### Beacon Chain: update

- Need a system for proofs post-merge for canonical header proofs
- `historical_roots` got replaced by `historical_summaries`
- The proving path for `historical_summaries` crosses multiple parts of the consensus data structure in order to anchor data.
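A quick arithmetic check on the ephemeral-proof window: assuming mainnet's 12-second slot time (a consensus parameter, not stated in these notes), one 8192-slot `historical_summaries` period spans about 27 hours.

```python
# Sanity check: with 12-second slots, 8192 slots (the block_roots window
# that feeds one historical_summaries entry) is roughly 27 hours, which is
# how long header proofs stay ephemeral before they can be anchored.
SECONDS_PER_SLOT = 12             # mainnet consensus parameter
SLOTS_PER_HISTORICAL_ROOT = 8192  # size of the state's block_roots vector

window_hours = SLOTS_PER_HISTORICAL_ROOT * SECONDS_PER_SLOT / 3600
print(f"{window_hours:.1f} hours")  # about 27.3 hours
```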
- Proof sizes are roughly 836 bytes
- Open question of how we deal with blocks that are within the most recent ~27 hours (8192 blocks worth of data needed before proofs stop being ephemeral)
- Access to the `historical_summaries` roots is needed in order to verify proofs.
- PRs for test vectors are in PR #287 and #291 in the specs repo for specifying these proofs.
- The starting point would be to do pre-Capella, since those proofs are all frozen.
- Clients will need to do some amount of database purging to remove old header-without-proof types that are in client databases.
- We still need a solution for how to purge unproven headers that pass the epoch boundary. Also maybe need to purge body/receipts. Maybe we are re-injecting.
- `historical_summaries` are stored and retrievable from the beacon network and are the anchor for finalized header/body anchoring proofs.
- `historical_summaries` are anchored through the finalized header or latest header, which is gossiped every 6 minutes.
- Bootstrap objects are covered in [PR #306](https://github.com/ethereum/portal-network-specs/pull/306)
- Update objects during a 27 hour period are subject to the `is_better_update` condition.

## Day 3 - May 30, 2024

### Deep Archive with Massive SSZ Objects

Background: The older the data, the less important it is to get an answer for a single item quickly. It's also more helpful to request ranges of data, because some users will want to use Portal as a source for ingressing large swaths of data. Additionally, when storing data back to genesis, the network is most sensitive to data size. All of that leads us to asking the question: what are some designs that allow us to reduce total storage usage, and support the use cases we need?

One option under consideration is to freeze a huge series of data into a giant SSZ object (say 1 or 10 million block headers), spread that data around the network in chunks, and bake the commitment hash into the client.
The following discussion explored this idea:

- Encode a giant binary object, and then encode it as an SSZ list of U256s
  - the reason you re-encode it this way is so that you have a flat structure that's easy to distribute evenly
  - large binary tree with 32-byte leaves, very normally shaped with all data front-loaded
  - trivial to map onto the address space of the DHT
- For example, it can be used to store all pre-merge headers in a single giant distributed object
- There's some size below which it doesn't make sense to do this model. As it gets larger though, the wait before being able to create the next epoch increases.
- Kim won a bet that the variable-size offsets are interlaced with the fixed elements
- `List[ERA, ERA, ERA, ...]` (not uniform) -> encode -> `List[u8, ...]` (uniform)
- Could seed million-block chunks in this format
- Partial responses would be valid
- Sketch of the nested structure:
  - List
    - Objects
      - Address
      - List
        - Objects
          - Block Number
          - Account Changes
          - List
            - Slot Changes
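The chunk-and-merkleize idea can be sketched like this. It is a toy illustration under stated assumptions: sha256 over 32-byte chunks and zero-padding, similar in spirit to SSZ merkleization (the real thing also mixes in the list length); all helper names are hypothetical.

```python
# Illustrative sketch: re-encode a large blob as a flat list of 32-byte
# chunks and Merkleize it as a balanced binary tree. A flat, front-loaded
# tree makes it trivial to map chunk indices onto the DHT address space
# and to serve partial ranges with proofs against a baked-in root.
import hashlib

def chunkify(blob: bytes) -> list[bytes]:
    """Split into 32-byte leaves, zero-padding the tail chunk."""
    padded = blob + b"\x00" * (-len(blob) % 32)
    return [padded[i:i + 32] for i in range(0, len(padded), 32)] or [b"\x00" * 32]

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root of a balanced binary tree over the leaves (power-of-two padded)."""
    layer = list(leaves)
    while len(layer) & (len(layer) - 1):  # pad leaf count to a power of two
        layer.append(b"\x00" * 32)
    while len(layer) > 1:
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

blob = b"pre-merge headers would go here" * 100
leaves = chunkify(blob)
root = merkle_root(leaves)  # the commitment a client could bake in
```

Because every chunk is uniform and the tree shape is fixed by the total length, any node can hold an arbitrary slice of `leaves` and still prove its slice against `root`.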
