
NOMT Design Decisions

Hash-Table: Triangular Probing

We have chosen triangular probing as our hash-table algorithm instead of 3-ary cuckoo hashing or Robin Hood hashing.

We wanted the following properties in our algorithm:

  1. Minimal probes required for present and absent values.
  2. Minimal writes on insertions and deletions.
  3. Graceful performance degradation up to at least a load factor of 0.8.

With meta-data bits in memory, triangular probing fulfills all three. It degrades gracefully beyond a 0.9 load factor: with 4 or more meta-bits, queries average fewer than 2 probes per existing key and fewer than 1 probe per absent key, even at that load factor.

Note that triangular probing requires that the number of buckets is a power of two. A proof that it visits all the buckets is provided here: https://fgiesen.wordpress.com/2015/02/22/triangular-numbers-mod-2n/

An insert or delete operation only ever requires a single page write, which is optimal.
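As a minimal sketch of the scheme (the `probe` helper and the table size are illustrative, not NOMT's actual code): the i-th probe for hash h lands at h + i(i+1)/2 modulo the table size, and because the table size is a power of two, the sequence visits every bucket exactly once before repeating.

```rust
// Illustrative sketch of triangular probing over a power-of-two table.
const BUCKETS: usize = 1 << 4; // must be a power of two

// i-th probe for hash h: (h + i*(i+1)/2) mod 2^k.
// The stride grows by 1 each step; triangular numbers mod 2^n
// enumerate every residue, so the sequence covers the whole table.
fn probe(h: usize, i: usize) -> usize {
    (h + i * (i + 1) / 2) & (BUCKETS - 1)
}

fn main() {
    // Verify the full-coverage property for an arbitrary hash.
    let mut seen = [false; BUCKETS];
    for i in 0..BUCKETS {
        seen[probe(0xdead_beef, i)] = true;
    }
    assert!(seen.iter().all(|&b| b));
    println!("all {} buckets visited", BUCKETS);
}
```

A lookup walks this sequence until it finds the key or an empty bucket; the meta-bits let most non-matching buckets be rejected without touching the page itself.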

Manual DB Sizing

We have chosen to have the database operator manually choose a database size at node initialization. The database can only be resized either by creating a new node or with a special command while the node is not running. We do not perform in-flight resizing.

The reasoning for this choice:

  1. Resizing is a long process which eats into IOPS if run in the background, degrading performance, possibly for all nodes at the same time.
  2. Nodes for large-scale chains will likely allocate (almost) the entire drive to the database at initialization, making automatic resizing impossible.
  3. Manual configuration has a better path towards direct block device access.
  4. Resizing would increase read and space amplification while ongoing, leading to unpredictable performance.
  5. Reduced implementation complexity.

Because our hash-table degrades in performance slowly, there is a long period of time during which a node operator can re-size their database. It will likely take months or years for a properly sized database to grow to 80% or 90% occupancy, providing the operator with a long window to upgrade.

Crash Consistency: Use SQLite's Assumptions

We have chosen to adopt the assumptions which SQLite makes regarding crash consistency. These assumptions strike a healthy balance between crash consistency and performance, while not being theoretically perfect. We expect our database to run in a replicated environment, where rare corruptions due to power loss are acceptable.

Quotes from the document on atomic commit (https://www.sqlite.org/atomiccommit.html):

SQLite assumes that the operating system will buffer writes and that a write request will return before data has actually been stored in the mass storage device. SQLite further assumes that write operations will be reordered by the operating system. For this reason, SQLite does a "flush" or "fsync" operation at key points. SQLite assumes that the flush or fsync will not return until all pending write operations for the file that is being flushed have completed.

SQLite assumes that when a file grows in length that the new file space originally contains garbage and then later is filled in with the data actually written. In other words, SQLite assumes that the file size is updated before the file content.

SQLite assumes that a file deletion is atomic from the point of view of a user process. By this we mean that if SQLite requests that a file be deleted and the power is lost during the delete operation, once power is restored either the file will exist completely with all of its original content unaltered, or else the file will not be seen in the filesystem at all.

SQLite assumes that the detection and/or correction of bit errors caused by cosmic rays, thermal noise, quantum fluctuations, device driver bugs, or other mechanisms, is the responsibility of the underlying hardware and operating system. SQLite does not add any redundancy to the database file.

Quotes from the document on Powersafe Overwrite (https://www.sqlite.org/psow.html):

(several of these properties are mostly corroborated by https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-pillai.pdf)

When an application writes a range of bytes in a file, no bytes outside of that range will change, even if the write occurs just before a crash or power failure.

In other words, powersafe overwrite means that there is no "collateral damage" when a power loss occurs while writing. Only those bytes actually being written might be damaged.

Override RocksDB WAL with our own

We have chosen to override the RocksDB WAL for the purposes of consistency with our hash-table. We will create a separate WAL which contains updates to the flat key-value pairs as well as the diffs to pages in the hash-table. RocksDB provides a hook for listening to the operation sequence numbers committed to SST files, which we can use to prune the WALs as data is successfully flushed to disk by RocksDB.
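The pruning mechanic can be sketched as follows. This is a hypothetical shape, not NOMT's actual implementation: each WAL record is tagged with the RocksDB sequence number it corresponds to, and the listener hook reports the highest sequence number durably flushed to SST files, at which point older records can be dropped.

```rust
// Hypothetical WAL record: tagged with the RocksDB operation sequence
// number so it can be pruned once that sequence is durable in SSTs.
struct WalRecord {
    seqno: u64,       // RocksDB operation sequence number
    payload: Vec<u8>, // flat key-value update or hash-table page diff
}

struct Wal {
    records: Vec<WalRecord>,
}

impl Wal {
    // Called from the (hypothetical) RocksDB flush-listener hook:
    // everything at or below `flushed_seqno` is already durable,
    // so the corresponding WAL records are no longer needed.
    fn prune(&mut self, flushed_seqno: u64) {
        self.records.retain(|r| r.seqno > flushed_seqno);
    }
}

fn main() {
    let mut wal = Wal {
        records: (1..=5).map(|s| WalRecord { seqno: s, payload: vec![] }).collect(),
    };
    wal.prune(3); // flush listener reported seqno 3 as durable
    let remaining: Vec<u64> = wal.records.iter().map(|r| r.seqno).collect();
    assert_eq!(remaining, vec![4, 5]);
}
```

On recovery, any records still in the WAL are replayed on top of the last RocksDB-durable state, keeping the hash-table and the key-value store consistent.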

IO system

After considering several options we have decided to go with io_uring as the primary IO interface.

It's worth talking about the alternatives that we considered and why we turned them down.

The traditional way to interact with IO is via the read/write syscalls and their modern counterparts. NOMT is built to achieve great speedups by taking advantage of the parallelism of SSDs. To leverage that, an application using synchronous read/write syscalls would need to use many threads, which turns out to come with a lot of overhead: context switching, syscall costs, locking. While this may not be a bottleneck for NOMT, the resources wasted on that overhead may be better spent elsewhere in the node.

A similar approach is to use mmap. At first it may seem very attractive, because it avoids some of the overheads of synchronous syscalls and provides a simple interface. However, this simplicity is superficial: while it's easy to build a prototype with mmap, it is hard to make the implementation robust, because there is no control over when pages get evicted or flushed, and error handling is not trivial.

Moreover, mmap's performance is not as great as it is perceived to be. There are several reasons: TLB shootdowns, eviction happening in a single thread, and page table contention.[1]

Another way is linux-aio (also known as io_submit). This one provides an asynchronous interface to disk IO. However, it suffers from some downsides[2]:

  1. Only O_DIRECT is supported. While it's extremely likely we will use O_DIRECT, it would be good to retain the option of falling back to buffered IO without rewriting the whole storage subsystem.
  2. AIO, despite the name, is not always asynchronous and can block during submission.
  3. The API suffers from some limitations affecting performance. Anecdotally, so does the implementation under the hood: fio with io_uring shows better performance numbers than fio with libaio.
  4. AIO seems mostly confined to niche use-cases and has never received wide adoption.

In addition to that, io_uring offers a bunch of advanced features that could offer performance benefits.

SPDK is the king when it comes to performance. However, it's more limiting and harder to use: for example, it can only be used directly against NVMe devices, without any filesystem. At the same time, io_uring can get close to SPDK[3], so the risks do not seem to outweigh the potential benefits. It would still be interesting to measure, as future work, what improvements SPDK could provide.

io_uring also has some drawbacks. Most of them stem from the fact that it's relatively new (5 years in the mainline kernel compared to 22 years for aio).

Yet it already seems to be getting tangible adoption. To name a few examples:

  1. bun relies on io_uring exclusively.
  2. tigerbeetle relies on io_uring exclusively.

Mostly, though, adoption comes from newer projects; the more established projects[4] have had limited success adopting io_uring, even though the performance numbers show significant improvements.

Compatibility also doesn't seem to be a big problem. Taking Debian as a conservative choice of Linux distribution, the current LTS release ships with the 6.1 kernel, which covers most of the available io_uring APIs.

The security of io_uring leaves much to be desired.

  • Google disables io_uring in ChromeOS and Android. At the same time they admit that:

    For these reasons, we currently consider it safe only for use by trusted components.

However, NOMT is a trusted component, so this is not a significant concern for us.

  • Anecdotally, io_uring is disabled under Docker's default seccomp profile. While this may be a problem for running a node in a managed container service, it should not pose a problem for operator-controlled Docker deployments.

To cater to the use-cases where io_uring is not supported and/or utmost performance is not required (e.g. developers working on macOS), we will provide a fallback on a best-effort basis (e.g. posix_aio, or more likely read/write syscalls coupled with a worker pool).
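The worker-pool fallback can be sketched like this. All names here are illustrative (not NOMT's actual API), and `serve_read` stands in for a real `pread` at a page offset: a fixed pool of blocking worker threads drains a shared request queue, emulating asynchronous submission and completion.

```rust
// Sketch of the read/write-syscall fallback: a fixed pool of worker
// threads services read requests from a shared queue, standing in for
// io_uring on platforms that lack it.
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Placeholder for `pread(fd, buf, PAGE_SIZE, offset)` in a real fallback.
fn serve_read(offset: u64) -> Vec<u8> {
    offset.to_le_bytes().to_vec()
}

fn main() {
    type Request = (u64, mpsc::Sender<Vec<u8>>);
    let (tx, rx) = mpsc::channel::<Request>();
    let rx = Arc::new(Mutex::new(rx));

    // A small pool of blocking workers emulates asynchronous IO.
    let workers: Vec<_> = (0..4)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                let req = rx.lock().unwrap().recv();
                match req {
                    Ok((offset, reply)) => {
                        let _ = reply.send(serve_read(offset));
                    }
                    Err(_) => break, // queue closed: shut down
                }
            })
        })
        .collect();

    // Submit a read and wait for its completion, io_uring-style.
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send((4096, reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), serve_read(4096));

    drop(tx); // closing the queue lets the workers exit
    for w in workers {
        w.join().unwrap();
    }
}
```

The caller submits a request and later blocks on (or polls) the reply channel, which preserves the submit/complete shape of the io_uring code path.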

Split Update / Commit Functions

We have chosen to split the update and commit functions in order to enable users to support short-term forks. We will design this API to be flexible for differing users' needs.

While NOMT is a single-state database, users will likely need to handle short-term forks in some capacity. Even with "instant" finality, there may be blocks which are proposed, discarded, or reordered.

The general approach is that we will have the update function return a set of page-diffs and accept as a parameter a set of page-diffs to build upon.

The commit function will actually commit the changes to the disk, and will accept a set of page-diffs and key-value changes to commit. It will be the responsibility of the user to make sure these are consistent.
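The shape of this API can be sketched as follows. All type and function names are hypothetical: `update` is pure and returns page-diffs without touching disk, so callers can build speculative fork state by feeding earlier diffs back in, and `commit` persists only the diffs of the fork that wins.

```rust
// Illustrative sketch of the split update/commit API.
use std::collections::HashMap;

type PageId = u64;
type PageDiff = Vec<u8>;

// Compute new page-diffs on top of an (optional) set of uncommitted
// parent diffs. Nothing is written to disk here.
fn update(
    parent: &HashMap<PageId, PageDiff>,
    changes: &[(PageId, u8)],
) -> HashMap<PageId, PageDiff> {
    let mut out = parent.clone();
    for &(page, byte) in changes {
        out.entry(page).or_default().push(byte);
    }
    out
}

// Persist the winning fork's diffs; "disk" is just a map in this sketch.
fn commit(disk: &mut HashMap<PageId, PageDiff>, diffs: HashMap<PageId, PageDiff>) {
    disk.extend(diffs);
}

fn main() {
    let block1 = update(&HashMap::new(), &[(1, 0xaa)]);
    // Two competing children built on block1 without committing it:
    let fork_a = update(&block1, &[(2, 0xbb)]);
    let _fork_b = update(&block1, &[(3, 0xcc)]);

    // Only the canonical fork is ever committed.
    let mut disk = HashMap::new();
    commit(&mut disk, fork_a);
    assert!(disk.contains_key(&1) && disk.contains_key(&2));
    assert!(!disk.contains_key(&3));
}
```

Discarded forks simply drop their diff sets; nothing about them ever reaches disk, which is what makes the split safe in the presence of short-term reorgs.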


  1. Are You Sure You Want to Use MMAP in Your Database Management System? ↩︎

  2. Jens Axboe: Efficient IO with io_uring ↩︎

  3. https://research.vu.nl/ws/portalfiles/portal/217956662/Understanding_Modern_Storage_APIs_A_systematic_study_of_libaio_SPDK_and_io_uring.pdf ↩︎

  4. PostgreSQL, RocksDB. ↩︎
