# DAGSync: Graph-aware Zcash wallets

tl;dr Zcash shielded wallets are slow. We want to fix this! With graphs! To get optimal syncing!

## What _is_ a wallet?

Cryptocurrency users <sup>[citation needed]</sup> have "spending keys" that authorize their funds to be spent, from which addresses can be derived for receiving funds. A wallet manages these spending keys, as well as tracking all the information needed to enable spending the funds controlled by those keys.

More specifically, what a wallet needs to do (in order to be useful) is:

1. Manage the user's spending keys, address generation, etc.
2. Maintain a correct balance of funds.
3. Enable funds to be spent.

## The status quo

There are four operations that a Zcash wallet needs to perform in order to maintain a correct balance:

- Trial-decrypting all on-chain notes to detect new funds.
- Creating new witnesses for newly-detected notes.
- Updating existing witnesses for our notes, so we can spend them.
- Looking for nullifiers to detect spends of our notes.

These are generally implemented (for example in `zcashd` and the mobile SDKs) in a simple and very linear way: as each block is fetched (from other `zcashd` peers, or as a compact block from `lightwalletd`), all these operations happen together and in sequence.

It's been a long-term goal to split these operations up, for example by trial-decrypting in parallel, or updating existing witnesses separately from the rest in order to spend existing funds more quickly. Since mid-2021 we have started to see wallets implement these performance improvements (e.g. the [BlazeSync algorithm](https://github.com/zecwalletco/zips/blob/blazesync/zip-1015.rst), and `zcashd 5.2.0` implementing batched and parallelised trial decryption).

Another area that affects performance is note management. The number of parallel transactions that can be in-flight (waiting to be mined) is limited by the number of spendable notes, as notes can't be spent until they are mined.
If the user makes one large deposit to a wallet and then makes transactions within that balance, they will only ever have one spendable note at a time. Wallets can improve this by splitting up large notes into smaller ones, either actively or as part of existing user transactions (by creating multiple change outputs). Similarly, wallets can merge smaller notes together optimistically to reduce future transaction sizes. No wallet implements these techniques yet, but this is a previously-known optimisation that can be deployed when necessary.

## Wallet UX speedrunning

To go beyond the status quo, we need to more closely examine how wallets are used in practice:

- Imagine that I'm in line at a coffee shop, I see they accept ZEC, and I pull out my wallet that I haven't used in a month. I don't actually care about being able to spend my entire balance. What matters to me, in that moment, is that I can spend _enough_ funds to pay for a coffee.
- Imagine my phone dies, so I buy a new one, install a new wallet app, and start recovering from my backed-up seed phrase. While I would like my past transaction history to be recovered (which could be years of data), my primary goal is to regain access to my funds so that I can continue to use them.

We can leverage these UX observations for a speed hack: if the wallet can ensure that _some_ funds are guaranteed to be spendable, the user can potentially get on with whatever transaction they want to make, without having to wait for the wallet to be fully synced.

Let's refine our earlier definition of a wallet:

1. Manage the user's spending keys, address generation, etc.
2. Maintain a correct *lower bound on the* balance of funds.
3. Enable funds *up to the lower bound* to be spent.
4. *Eventually determine the correct balance of funds.*

Learning an exact lower bound on a wallet's balance obviously can't be slower than learning the correct balance, for any scanning protocol.
The question is, how can we make it faster, beyond running more operations in parallel? What we need is some extra information to guide our wallet, so that it prioritises work that enables notes to be made spendable quickly. And we alone have that information: our personal transaction graph.

## An aside, the usefulness of which will become apparent shortly

A Directed Acyclic Graph (DAG) is a data structure with three key features:

- **Graph-ness**: it is a set of nodes (also known as vertices) connected by edges.
- **Directed-ness**: each edge has an arrow indicating a "direction of travel" through the graph.
- **Acyclic-ness**: starting at any node, and moving between nodes along edges in their direction of travel, it is impossible to end up where you started.

![Example of a directed acyclic graph.](https://upload.wikimedia.org/wikipedia/commons/f/fe/Tred-G.svg)

Each node in a DAG has an "in-degree" (the number of edges arriving at it) and an "out-degree" (the number of edges leaving it). In the example DAG above, node `d` has an in-degree of 3, and an out-degree of 1.

We can constrain the structure of a graph by assigning labels (known as "colours") to the nodes, such that no two adjacent nodes have the same colour. The example graph above cannot be coloured with fewer than four colours, because nodes `a`, `c`, `d`, and `e` are all pairwise adjacent, so each of them needs its own colour. Meanwhile, the Petersen graph has 10 nodes and 15 edges, but can be coloured with three colours:

![A 3-colouring of the Petersen graph.](https://upload.wikimedia.org/wikipedia/commons/9/90/Petersen_graph_3-coloring.svg)
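The degree bookkeeping above is easy to express in code. Here is a minimal sketch (a hypothetical `Dag` helper, not from any real crate) that computes in- and out-degrees over an edge list matching the example DAG pictured above:

```rust
/// A tiny directed graph over string labels, stored as an edge list.
/// (Hypothetical illustration only.)
struct Dag {
    edges: Vec<(&'static str, &'static str)>,
}

impl Dag {
    /// Number of edges arriving at `node`.
    fn in_degree(&self, node: &str) -> usize {
        self.edges.iter().filter(|(_, to)| *to == node).count()
    }

    /// Number of edges leaving `node`.
    fn out_degree(&self, node: &str) -> usize {
        self.edges.iter().filter(|(from, _)| *from == node).count()
    }
}
```

With the example DAG's edges (`a→b`, `a→c`, `a→d`, `a→e`, `b→d`, `c→d`, `c→e`, `d→e`), node `d` comes out with in-degree 3 and out-degree 1, matching the description above.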
## The personal DAG

Imagine that you have an empty wallet, and you receive some funds in a transaction. Let's represent that transaction as a pair of nodes in a two-colour graph: a box for the transaction, and a circle for the note, with a directed edge showing that the note is created by the transaction:

```
┌──┐
│S0├─►O U0
└──┘
```

Since a note only exists between a pair of transactions (the one that creates it, and the one that consumes it), each note has an in-degree of 1 and an out-degree of at most 1.

> We could also just use the edges themselves as the notes, and represent unspent notes as "dangling edges", but whenever I suggested this to people who understood graphs, they would scream.

Next, you use those funds to pay for coffee. This consumes the note that you received in the previous transaction, and creates a new note containing your change (as well as the note for the coffee shop, but we'll ignore that one for now). We can represent this as:

```
┌──┐     ┌──┐
│S0├─►O─►│I0├─►O U1
└──┘     └──┘
```

Now let's assume your wallet also performs note management. In addition to paying for coffee, it split the change into three separate notes:

```
           ┌──►O U2
           │
┌──┐     ┌─┴┐
│S0├─►O─►│I0├─►O U1
└──┘     └─┬┘
           │
           └──►O U3
```

Following this with a few more spends, and more received funds, we end up with a more complex diagram:

```
           ┌──►O───┐ ┌──►O────────┐
           │       │ │            │
┌──┐     ┌─┴┐     ┌▼─┴┐           │    ┌──┐
│S0├─►O─►│I0├─►O─►│I1 ├─►O        └────►I4├────►O U9
└──┘     └─┬┘     └───┘  │             └▲─┘
           │            ┌▼─┐     ┌──┐   │
           └──►O───────►│I2├─►O─►│I3├──►O   ┌──┐
                        └▲─┘     └─┬┘       │S2├─►O U10
                ┌──┐     │         │        └──┘
                │S1├─►O──┘         └───────────►O U11
                └──┘
```

This is, by definition, the transaction graph. More precisely, it's the subset of the global transaction graph that involves your wallet.

<!-- But it's useful to think of it specifically as a DAG, because we can apply DAG traversal logic! -->

We can break down the wallet's transaction graph into three kinds of nodes:

- Source nodes with in-degree 0.
  - These are always transactions that introduce new funds to the wallet (labeled `S#` above).
- Sink nodes with out-degree 0.
  - These are either our unspent notes (labeled `U#` above), or transactions that consume all their inputs and have no change outputs (such as when moving funds out of the shielded pool).
- Internal nodes (everything else).
  - These are previously-spent notes (with out-degree 1), and historical transactions.

From this, we can draw several key insights:

- Our current balance is exactly the sum of all notes with out-degree 0. Therefore, finding those as quickly as possible is the fastest way to enable users to spend their existing funds.
- Given that the most expensive operation is trial decryption, the optimal path selection is one where paths are weighted by the number of outputs on each transaction.
  - Weighting by out-degree is insufficient; we need to trial-decrypt all of a transaction's outputs in order to discover its out-degree (how many outputs are for us).
- In the simplest case, we do not need to store witnesses for any notes with out-degree 1. If we can identify them more quickly than updating their witnesses, we can save on computation.
  - In practice, we need witness data up to 100 blocks back (the maximum reorg distance in `zcashd` and `zebrad`). Note that the protocol doesn't require this, although it documents and allows it (see [section 3.3 of the protocol spec](https://zips.z.cash/protocol/protocol.pdf#blockchain)).
- We _only_ need to do "scan-everything" trial-decryption to discover source nodes in the graph. All other nodes exist downstream of these, and can be discovered with _targeted_ decryption.

We now have everything we need to describe a wallet sync algorithm that exploits our personal transaction graph.

## The DAGSync algorithm

The wallet starts with several sets:

- `note-set`: A set of potentially-unspent notes.
- `tx-set`: A set of transactions that need to be trial-decrypted.
- `calc-set`: A set of notes that we know are unspent but can't spend yet.
- `spendable-set`: The set of spendable notes.
  - The sum of these notes' values is the wallet's lower bound on balance.

The wallet initialises `note-set` with the previously-unspent notes from the last time we synced the wallet. If we are recovering a seed phrase, all sets start empty.

The wallet then runs several tasks in parallel:

1. **Nullifier Tracking:** For each note in `note-set`, look for its nullifier in an on-chain transaction.
   - If we don't find the nullifier, the note is unspent! Add it to `calc-set`.
   - If we find the nullifier, and the wallet does not know about the transaction, add it to `tx-set` (and also to the wallet).
2. **Targeted Decryption:** For each transaction in `tx-set`, trial-decrypt all of its outputs. If any outputs can be decrypted, add the nullifiers for those notes to `note-set`.
3. **Witness Calculation:** For each note in `calc-set`, calculate its witness (either by updating an existing witness or calculating a new one), and then add the note to `spendable-set`.
4. **Source Discovery:** Starting from the latest block and going backwards, for each on-chain transaction that the wallet does not know about, trial-decrypt all of its outputs. If any outputs can be decrypted, add the nullifiers for those notes to `note-set`.
   - This task stops once we see a "marker" in a decrypted change output (if supported, see below), or it catches up to the history-discovery task.
5. **History Discovery:** Starting from the point we were previously synced to and going forwards, for each on-chain transaction that the wallet does not know about, trial-decrypt all of its outputs. If any outputs can be decrypted, add the nullifiers for those notes to `note-set`.
   - This task stops once it catches up to the source-discovery task.

Syncing completes once all sets except `spendable-set` are empty, and all tasks have completed. However, at any point the wallet can take notes from `spendable-set` and use them to create transactions!
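To make the interplay of these sets concrete, here is a deliberately simplified sketch in Rust. It models the chain as two plain maps (a nullifier-to-transaction "spends" map, and a transaction-to-outputs map, with "decryption" reduced to a lookup), and runs nullifier tracking and targeted decryption to a fixpoint. Witness calculation and the two discovery tasks are elided, and all names and types here are hypothetical:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

type Note = u32;
type TxId = u32;

/// A toy view of the chain: which transaction spends each note's nullifier,
/// and which of a transaction's outputs "decrypt" to notes for our wallet.
struct Chain {
    spends: HashMap<Note, TxId>,
    outputs: HashMap<TxId, Vec<Note>>,
}

/// Run nullifier tracking and targeted decryption to a fixpoint, starting
/// from the notes that were unspent at the last sync. Returns the notes
/// found to be unspent (witness calculation is elided in this sketch).
fn dagsync(chain: &Chain, last_unspent: &[Note]) -> HashSet<Note> {
    // note-set: potentially-unspent notes still to be checked.
    let mut note_set: VecDeque<Note> = last_unspent.iter().copied().collect();
    let mut unspent = HashSet::new();
    let mut seen_txs = HashSet::new();

    while let Some(note) = note_set.pop_front() {
        match chain.spends.get(&note) {
            // Nullifier not on chain: the note is unspent.
            None => {
                unspent.insert(note);
            }
            // Nullifier found: targeted-decrypt the spending transaction's
            // outputs, feeding any newly-discovered notes back into note-set.
            Some(tx) => {
                if seen_txs.insert(*tx) {
                    for out in chain.outputs.get(tx).into_iter().flatten() {
                        note_set.push_back(*out);
                    }
                }
            }
        }
    }
    unspent
}
```

For example, if note 1 was spent in a transaction creating notes 2 and 3, and note 2 was then spent in a transaction creating note 4, running `dagsync` from `[1]` surfaces notes 3 and 4 as unspent without touching any unrelated transaction.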
```mermaid
flowchart TD
    tx_set[(tx-set\nblock height,\nCompactTx\noutputs to skip)]
    txid_set[(txid-set\nblock height,\ntx index,\noutput index)]
    note_set[(note-set\nblock height,\nDecryptedNote)]
    calc_set[(calc-set)]
    spendable_set[(spendable-set)]

    ChainData{{ChainData}}
    WalletData{{WalletData}}

    ChainData <-.-> SourceFinder
    ChainData <-.-> HistoryFinder
    ChainData <-.-> TransactionSource
    ChainData <-.-> NullifierTracker
    WalletData <-.-> SourceFinder
    WalletData <-.-> HistoryFinder

    SourceFinder --> tx_set
    HistoryFinder --> tx_set
    TransactionSource --> tx_set
    tx_set --> OutputScanner
    OutputScanner --> note_set
    note_set --> NullifierTracker
    NullifierTracker --> txid_set
    txid_set --> TransactionSource
    NullifierTracker --> calc_set
    calc_set --> WitnessCalculator
    WitnessCalculator --> spendable_set
```

## Efficiency

This syncing process traverses the graph from the known state to the leading edge as quickly as possible, via sequences of nullifier-tracking and targeted-decryption tasks that traverse the shortest path of sequential spends. This algorithm is efficient for several reasons:

- Nullifier tracking (a database lookup) is much cheaper than output decryption (variable-base scalar multiplication). This means that we are enabling the most effective parallel work, by discovering useful transactions to decrypt at multiple points in chain history simultaneously (as nullifier tracking for longer historic paths of spends can be occurring alongside shorter newer paths).
- We know that many earlier paths are likely to rejoin (or could be artificially encouraged to do so via the wallet's note selection algorithm), so with a small amount of synchronisation between the sets, most nullifiers can be eliminated as spent fairly quickly after the shortest path has been explored.
- The trial-decryption operations can be prioritised based on how the outputs were discovered, with targeted-decryption having the highest priority, and history-discovery having the lowest priority.
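That prioritisation can be sketched with a standard min-heap. The priority levels below mirror the ordering just described; the task payloads are made up for illustration:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Lower value = higher priority, mirroring the ordering described above.
/// (Hypothetical sketch; real task types would carry more data.)
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Priority {
    TargetedDecryption,
    SourceDiscovery,
    HistoryDiscovery,
}

/// Pop trial-decryption tasks in priority order.
/// `Reverse` turns Rust's max-heap into a min-heap on `Priority`.
fn next_task(
    queue: &mut BinaryHeap<Reverse<(Priority, &'static str)>>,
) -> Option<(Priority, &'static str)> {
    queue.pop().map(|Reverse(task)| task)
}
```

Pushing a history-discovery task, a targeted-decryption task, and a source-discovery task in any order, `next_task` hands back the targeted-decryption work first.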
The upshot of this is that a wallet with random access to chain data, where accessing that data is much cheaper than trial-decryption, can very quickly establish a lower bound on its balance, _and make it spendable_, while having an eventually-consistent balance that is guaranteed to discover the same set of spendable notes as the existing linear scans.

You also get very fast recovery from seed in this model: as soon as you find the very first source, you can "surf" the DAG to find your most recently-unspent notes. This is much more efficient than backwards-linear scanning (as the notes might have been created tens of thousands of blocks earlier than the current chain tip) and _way_ more efficient than forwards-linear scanning (as the wallet might be hundreds of thousands of blocks old). Wallets generally already make finding this first source easier by asking users to back up the "wallet birthday" alongside the seed phrase.

Note the restriction "random access to chain data". Unlike a linear scan, this sync process requires multiple passes over the chain, in non-sequential order. However, for a wallet that already performs a linear scan, this shouldn't be much of a constraint: it can fetch blocks linearly as it currently does, but cache them instead of scanning and dropping them. The wallet has a "window of access" that its scanning requires, stretching from the oldest output-decryption task to the most recent nullifier-tracking task. The wallet could potentially tune this cache by tracking the progress of the output-decryption tasks triggered by the new-blocks queue, and only fetching more blocks as older ones are completed. Some tasks (primarily nullifier tracking) would need to block until the blocks they require are available; there would therefore be a trade-off between task queue sizes and the block cache size.
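One way to realise that "window of access" is a height-indexed cache that is pruned as tasks drain. This is a hypothetical sketch (not how any existing SDK structures its cache), with `Vec<u8>` standing in for a serialized compact block:

```rust
use std::collections::BTreeMap;

/// A height-indexed block cache covering the current sync window.
/// (Hypothetical sketch for illustration.)
struct BlockCache {
    blocks: BTreeMap<u64, Vec<u8>>,
}

impl BlockCache {
    fn new() -> Self {
        BlockCache { blocks: BTreeMap::new() }
    }

    fn insert(&mut self, height: u64, block: Vec<u8>) {
        self.blocks.insert(height, block);
    }

    /// Drop every cached block below the oldest height that any pending
    /// task (e.g. the oldest output-decryption task) still needs.
    fn prune_below(&mut self, oldest_needed: u64) {
        self.blocks = self.blocks.split_off(&oldest_needed);
    }

    /// The (oldest, newest) heights currently cached, if any.
    fn window(&self) -> Option<(u64, u64)> {
        Some((
            *self.blocks.keys().next()?,
            *self.blocks.keys().next_back()?,
        ))
    }
}
```

As output-decryption tasks complete, the wallet calls `prune_below` with the oldest height still referenced by a task, shrinking the window from the back while new blocks extend it at the front.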
## Even more efficiency: wallet markers and knitting

Once we have a wallet that is cognizant of its own transaction graph, we can start actively influencing the shape of that DAG to get even more speed!

While performing note management, a wallet that is DAG-aware can _additionally_ explicitly select newly-received non-change notes to be part of a transaction. This has an interesting effect on the structure of the DAG. Consider two transaction graphs, where the source `S1` is either left unselected (because it was unnecessary for the values of other transactions), or actively selected for `I2`:

```
           ┌──►O───┐ ┌──►O────────┐
           │       │ │            │
┌──┐     ┌─┴┐     ┌▼─┴┐           │    ┌──┐
│S0├─►O─►│I0├─►O─►│I1 ├─►O        └────►I4├────►O U9
└──┘     └─┬┘     └───┘  │             └▲─┘
           │            ┌▼─┐     ┌──┐   │
           └──►O───────►│I2├─►O─►│I3├──►O   ┌──┐
                        └──┘     └──┘       │S2├─►O U10
┌──┐                                        └──┘
│S1├───────────────────────────►O U11
└──┘
```

```
           ┌──►O───┐ ┌──►O────────┐
           │       │ │            │
┌──┐     ┌─┴┐     ┌▼─┴┐           │    ┌──┐
│S0├─►O─►│I0├─►O─►│I1 ├─►O        └────►I4├────►O U9
└──┘     └─┬┘     └───┘  │             └▲─┘
           │            ┌▼─┐     ┌──┐   │
           └──►O───────►│I2├─►O─►│I3├──►O   ┌──┐
                        └▲─┘     └─┬┘       │S2├─►O U10
                ┌──┐     │         │        └──┘
                │S1├─►O──┘         └───────────►O U11
                └──┘
```

Imagine that we are currently synced up to transaction `I0` (and know its outputs):

- In the first case, the only way we can discover `U11` is with source-discovery or history-discovery scanning (whichever happens to reach `S1` first). This could potentially require trial-decrypting tens of thousands of blocks.
- In the second case, we can reach `U11` directly from the outputs of `I0` (as there is a path through the DAG from those outputs to `U11`). We never actually need to discover `S1` except for historical purposes, so we *want* it to be discovered by the history-discovery task (which has the lowest priority).

The key insight here is that given a set of unspent notes at some point in time, there is a high probability that they are connected to almost your entire DAG at a later point in time.
A wallet can therefore improve sync performance by actively "knitting in" new sources as they are discovered, by preferentially selecting them as transaction inputs. Once a wallet is doing this regularly, it can bound how far back from the chain tip its source-discovery scan needs to go.

## Stay tuned!

The ECC core team is working on implementing DAGSync in the Zcash Rust crates. Once that is complete, anyone using the Android and iOS mobile SDKs will benefit from this.

---

# Not part of the blog post, maybe part of a ZIP

## A graph-aware wallet architecture

The primary goal of this algorithm is to find spendable notes with the minimum number of expensive operations. There are three non-trivial operations we need to perform:

- Trial-decrypting a transaction output.
- Locating a spent nullifier in the block chain, if it exists.
- Constructing or updating a witness for a note.

We start by caching the block chain data between the height at which our wallet was previously synced, and the current height (this already occurs for existing wallet sync algorithms; we can similarly drop the cache once sync is complete). With this cache, locating a spent nullifier becomes a database lookup. Constructing or updating witnesses for notes requires reading data from the cache, and computing hash functions. Thus we consider the only expensive operation to be trial-decryption of transaction outputs.

A wallet that is cognizant of its transaction graph can break its syncing process into several task queues:

- An ordered queue of received blocks, which will be scanned for transactions.
- A queue of nullifier-tracing tasks: given a nullifier and a block height, find the transaction in a later block that reveals this nullifier.
- A queue of output-decryption tasks: given a transaction and a set of viewing keys, determine all notes that belong to this wallet.
- A queue of witness-creation tasks: given a note in a transaction and a block height, construct an incremental witness that is accurate to that block height.
- A witness-update task: given a set of incremental witnesses accurate at some past block height, and a later block height, update the witnesses to be accurate at that later height.

We then connect the tasks like so:

- When a new block is added to the queue:
  - Start a witness-update task, passing in all witnesses in the wallet.
  - For each transaction in the block, check if the transaction is in the wallet. If it is not, start an output-decryption task.
- When an output-decryption task completes, take every discovered note and:
  - Compute its nullifier, and start a new nullifier-tracing task.
- When a nullifier-tracing task completes:
  - If it found a transaction, check if the transaction is in the wallet. If it is not:
    - Add the transaction to the wallet.
    - Start a _prioritised_ output-decryption task.
  - If no transaction was found for the nullifier, start a witness-creation task for the corresponding note, bringing it up-to-date with the current height.
- When a witness-creation task completes, add the witness to the wallet.

Finally, the initial conditions:

- For a brand new wallet (or a wallet recovered from a seed), just start adding blocks to the new-block queue.
- For an existing wallet (syncing after some period offline), also create a nullifier-tracing task for each unspent nullifier in the wallet.

INSERT GRAPH OF THE PROCESSES

<!-- This syncing process traverses the graph from the known state to the leading edge as quickly as possible, via sequences of nullifier-tracking and output-decryption tasks that traverse the shortest path of sequential spends.

Nullifier tracking is much cheaper than output decryption; this means that we are enabling the most effective parallel work, by discovering useful transactions to decrypt at multiple points in chain history simultaneously (as nullifier tracking for longer historic paths of spends can be occurring alongside shorter newer paths).
Additionally, we know that many earlier paths are likely to rejoin (or could be artificially encouraged to do so via the wallet's note selection algorithm), so with a small amount of synchronisation between the queues, most nullifiers can be eliminated as spent fairly quickly after the shortest path has been explored.

Finally, the slow trial-decryption of the chain brings up the rear, but it only needs to focus on identifying new sources (ignoring any transactions we already know about).

The upshot of this is that a wallet with random access to chain data, where accessing that data is much cheaper than trial-decryption, can very quickly establish a lower bound on its balance, _and make it spendable_, while having an eventually-consistent balance that is guaranteed to discover the same set of spendable notes as the existing linear scans.

Note the restriction "random access to chain data". Unlike a linear scan, this sync process requires multiple passes over the chain, in non-sequential order. However, for a wallet that already performs a linear scan, this shouldn't be much of a constraint: they can fetch blocks linearly as they currently do, but cache them instead of scanning and dropping. The wallet has a "window of access" that its scanning requires, stretching from the oldest output-decryption task to the most recent nullifier-tracking task. The wallet could potentially tune this cache by tracking the progress of the output-decryption tasks triggered by the new-blocks queue, and only fetching more blocks as older ones are completed. Some tasks (primarily nullifier tracking) would need to block until the blocks they require are available; there would therefore be a trade-off between task queue sizes and the block cache size. -->

## Notes being brought in

And if wallets take the scanning behaviour into account when doing note selection for transactions, they can actually reduce scan time by influencing the shape of the DAG.
Most importantly, given a set of unspent notes at some point in time, there is a high probability that they are connected to almost your entire DAG at a later point in time. A wallet can in fact influence that probability by actively "knitting in" new sources as they are discovered, by preferentially selecting them as transaction inputs.

The DAG needs to operate in the forwards direction because we can leverage the nullifier map, whereas we can't go backwards along the DAG. Meanwhile, we do backwards-scanning to find the most recently-received sources that aren't bound into your DAG. And if your wallet does regular knitting, you can bound how far back from the chain tip your backwards scan needs to go! (At which point, any further bulk trial-decryption only gives you historical information, not any new unspent notes.)

The sources can be really easy to find in some cases (e.g. a merchant terminal, where the customer's wallet sends the transaction out-of-band to the terminal via NFC or QR).

## Notes to bring in

<!-- The premise I was using when conceptualizing the DAG was b). The idea is that you can learn an exact lower bound on your balance much faster than you can learn the total balance. Learning the total balance is obviously going to take strictly more time than learning a lower bound, for any scanning protocol. Whether it is faster to learn the total balance, or make the lower bound spendable, is something I'm not certain on, but I'm at least 85% sure making the lower bound spendable would be faster.
-->

And if wallets take the scanning behaviour into account when doing note selection for transactions, they can actually reduce scan time by influencing the shape of the DAG.

You can sync backwards instead of forwards, but if you were last synced up to a week ago, there's no guarantee you'll actually find any transactions for you in the most recent block, and doing so is still linear in the number of on-chain transactions. Whereas the DAG approach discovers your spendable notes in the minimal number of trial-decryptions, which is linear in the size of the shortest path through your unsynced DAG (which is entirely controlled by your wallet).

You do still need to scan the chain to find new "sources" for your DAG (newly-received notes that were previously unknown to your wallet), and that could be faster backwards than forwards, but it would still be linear in on-chain transactions, so having the DAG scanner enable your existing notes to become spendable makes your wallet more rapidly usable. And for the new sources, the wallet behaviour "looks" the same whether a note was sent to you today or two days ago (they just "appear" as if they were sent to you during the scan).

If you scan backwards, the first note you find is guaranteed to be unspent. However, finding that first note requires work that grows linearly with the traffic on the chain. Whereas if you have a map from nullifiers to the transactions they are spent in, and you already know an existing note that was previously unspent, you can turn that into a currently-unspent note with work linear in the depth of your personal transaction graph, which is independent of any on-chain traffic growth.
In my cost model, finding the transaction that spends a specific nullifier is an "easy" task (the mapping can be precomputed, cached for everyone in lightwalletd, and fetched by the light client at the start of the sync process for the bounded sync range you need), while trial-decryption of outputs is a "hard" task (requires individualised CPU work on each client). Ergo, in my cost model, "time to find first spendable note" equals "shortest path through my wallet's transaction DAG, where paths are weighted by number of outputs on each transaction".

New notes from others are "sources" in the DAG, and your unspent notes are the "sinks".

But in this model, you get very fast recovery from seed: as soon as you find the very first source in your DAG (and you could include that info in your seed backup), you can surf the DAG to find your most recently-unspent notes (which might be tens of thousands of blocks earlier than the current chain tip, but hundreds of thousands of blocks after your wallet was created).

The sources can be really easy to find in some cases (e.g.
a merchant terminal, where the customer's wallet sends the transaction out-of-band to the terminal via NFC or QR).

# Data sources

For nullifier detection, we need random access from nullifiers to the transactions they are spent in.

- Build a `HashMap<Nullifier, (block_height, tx_index, output_index)>`, so that if the nullifier has been spent, we can immediately decrypt.

After trial decryption, we need to be able to convert the local position `(block_height, tx_index, output_index)` into a global position in the note commitment tree.

- Build a `BTreeMap<(block_height, tx_index, output_index), position>` sorted by `position` for notes at every $2^{16}$ positions, starting from the History Discovery Frontier, plus every note in that interval that has already been witnessed (as we know their local position from our wallet data, and their tree position from our witness data).
- Look up the decrypted output in this map, and find the positions on either side of where it would go; this then tells us the range of blocks we need to scan to compute the witness.

For the witness calculation:

- Create a `BridgeTree` that tracks bridges every $2^{16}$ notes.
  - A single block can hold at most around 2100 Sapling outputs, so this is 31 blocks, or around 60 MiB.
  - Additionally the tree witnesses our own notes, as usual. So it stores the positions for both the boundaries and the witnessed notes.
- When we find a new note, we look at our `BridgeTree` to find the bridge it intersects, and we split that bridge in two by manually appending from the start of that bridge to the new note, and then appending from that new note to the end of the bridge.
  - This may be between two powers-of-two boundaries, or between one boundary and an existing witnessed note, or between two witnessed notes.
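The two lookups above can be sketched with the standard-library maps named in the bullets. The types and field names here are hypothetical stand-ins; real wallet code would use the actual note, nullifier, and tree types:

```rust
use std::collections::{BTreeMap, HashMap};

type Nullifier = [u8; 32];
/// A note's local position: (block_height, tx_index, output_index).
type LocalPos = (u32, u16, u16);

/// Hypothetical index backing the lookups described above.
struct ScanIndex {
    /// Nullifier → where it was spent.
    spends: HashMap<Nullifier, LocalPos>,
    /// Local position → global note-commitment-tree position, sampled at
    /// checkpoints (e.g. every 2^16 notes) plus every witnessed note.
    positions: BTreeMap<LocalPos, u64>,
}

impl ScanIndex {
    /// If this nullifier has been spent, return the spending location.
    fn spend_of(&self, nf: &Nullifier) -> Option<LocalPos> {
        self.spends.get(nf).copied()
    }

    /// The known tree positions on either side of a newly-decrypted output;
    /// these bound the range of notes (and hence blocks) that must be
    /// scanned to compute its witness.
    fn witness_bounds(&self, pos: LocalPos) -> (Option<u64>, Option<u64>) {
        let below = self.positions.range(..pos).next_back().map(|(_, p)| *p);
        let above = self.positions.range(pos..).next().map(|(_, p)| *p);
        (below, above)
    }
}
```

Because `BTreeMap` keeps keys ordered, `witness_bounds` is two `O(log n)` range probes: the checkpoint (or witnessed note) just below the new output and the one just above it.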