# Reading notes: Mental experiments with `io_uring`
Link to document: https://vorner.github.io/2019/11/03/io-uring-mental-experiments.html
###### tags: `reading-club`
Leave questions, observations, discussion topics below.
---
## Topic
name: prompt
---
## Notes
nrc: there seems to be a much bigger push for using io_uring for network (as well as disk) nowadays (vs when it was originally proposed)
In fact, it's worth noting the blog post was from 2019 and the io_uring world is moving FAST
re buffer ownership, there seem to be two patterns that implementers have settled on: an owned buffer abstraction (either `Vec<u8>` or something custom) or having the library manage a pool of buffers (which I've not actually seen implemented, only requested).
Of the solutions in the blog post, copying was widely done but also acknowledged as a bad design and moved away from. Implementations have chosen either to use `Vec` or their own type; I don't think the choice matters much beyond ergonomics.
Tokio did in fact do an io_uring implementation: https://github.com/tokio-rs/tokio-uring
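For reference, the owned-buffer pattern looks roughly like the tokio-uring README example (a sketch; details may differ between versions): the buffer is moved into the operation and handed back alongside the result.
```rust
use tokio_uring::fs::File;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    tokio_uring::start(async {
        let file = File::open("hello.txt").await?;

        // The buffer is passed by value: ownership moves into the in-flight
        // operation and comes back alongside the result, so it can't be
        // dropped or reused while the kernel still holds it.
        let buf = vec![0u8; 4096];
        let (res, buf) = file.read_at(buf, 0).await;
        let n = res?;

        println!("read {} bytes: {:?}", n, &buf[..n]);
        Ok(())
    })
}
```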
tmandry: Poll model is better in some situations?
nrc: Good when you have lots and lots of sockets; if you have to preallocate a buffer for every socket, you can't afford it. The io_uring solution is not to preallocate a buffer per socket/IO: you preallocate a bunch in a buffer pool and tell the kernel to use them as needed. But this is a heuristic, since you have to tune the number of buffers ahead of time.
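A rough back-of-the-envelope illustration of that tradeoff (the numbers here are invented, purely for scale):
```rust
// Illustrative arithmetic only: memory cost of one pre-allocated buffer per
// socket vs. a shared pool sized for the reads actually in flight.
const SOCKETS: usize = 1_000_000;
const BUF_SIZE: usize = 16 * 1024; // 16 KiB per read buffer
const POOL_BUFS: usize = 4_096;    // tuned ahead of time: the heuristic part

fn main() {
    let per_socket = SOCKETS * BUF_SIZE; // ~15 GiB, mostly sitting idle
    let pooled = POOL_BUFS * BUF_SIZE;   // 64 MiB, handed out by the kernel
    println!("per-socket buffers: {} GiB", per_socket >> 30);
    println!("pooled buffers:     {} MiB", pooled >> 20);
}
```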
eholk: 10/25 Gbps Ethernet is becoming more ubiquitous, coming close to the performance of SSDs. Could also imagine this becoming relevant in high-performance computing environments where you're very sensitive to latency.
nrc: With multiplexing, intra-datacenter, could imagine it coming close to local disk speed. But fetching from another server still means nontrivial work on that server. Example: a sharded, distributed database.
tmandry: Can only shove so many SSDs into one machine
eholk: Graph processing, wonder if that's relevant. Light computation but very distributed.
nrc: Anything that's both storage bound and with a protocol using a small-ish number of sockets
---
tmandry: The framing of a runtime as layers like this:
1. Executor
2. Timer
3. Reactor
is interesting. The `Park` trait sounds a lot like the doc we read about abstracting reactors... [Reading notes: Context reactor hook](/8NCF0R-NTXmvRzx8selRVg)
eholk: Could we add arbitrary layers? Layering reactors could solve some of the tradeoffs around needing multiple threads for different things that can block.
Park trait: https://docs.rs/tokio-executor/0.2.0-alpha.5/tokio_executor/park/trait.Park.html
tmandry: Looks like this isn't in Tokio 1.x.
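For reference, the linked trait is roughly this shape (paraphrased from the tokio-executor 0.2 alpha docs; exact bounds may differ):
```rust
use std::time::Duration;

// The "wake me up" half: handed out by a Park implementation so that other
// threads (a reactor, a timer) can wake the parked thread.
pub trait Unpark: Send + Sync + 'static {
    fn unpark(&self);
}

// The "block until something happens" half: a reactor or timer implements
// this, and the executor calls park() when it has no ready tasks.
pub trait Park {
    type Unpark: Unpark;
    type Error;

    /// Handle other threads use to wake this one.
    fn unpark(&self) -> Self::Unpark;

    /// Block the current thread until `unpark` is called...
    fn park(&mut self) -> Result<(), Self::Error>;

    /// ...or until the timeout elapses.
    fn park_timeout(&mut self, duration: Duration) -> Result<(), Self::Error>;
}
```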
nrc: The idea of breaking down an executor in this way is really interesting. Most people assume the executor and reactor are the same thing.
tmandry: Executor is what implements `wake`, calls `poll`, and loops. That could be built-in, but tokio probably wants to own it to do certain optimizations.
nrc: In practice they're almost always together
eholk: To me it feels like you have two event loops. In the executor you call poll and wait for the reactor to do something; the reactor waits for OS events and returns to the executor. They need to be intertwined on the same thread so you can sleep while waiting on hardware. That makes it hard to compose them.
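A minimal, std-only sketch of that intertwining: the "executor" is just the poll loop, and the "reactor" is reduced to parking the thread (a real runtime would park on epoll or io_uring instead, which is exactly where the Park layering matters).
```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// The Unpark half: waking means unparking the executor's thread.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// The executor reduced to its core: poll, and if nothing is ready,
// hand the thread over to the "reactor" (here, thread::park).
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    assert_eq!(block_on(async { 1 + 1 }), 2);
}
```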
---
## Vision
tmandry: Sneak peek of https://github.com/orgs/rust-lang/projects/28/views/2
---
## Meeting times
DST changes in the US this weekend, and in NZ on Apr 2.
Will keep the current (alternating) schedule until end of the month, then a poll.
eholk: Used to rotate by 8 hours
nrc: Have to do that if you have Asia/Australia.
nrc: How well does that work? There are some meetings I don't want to miss at all, but regular meetings I'm OK with missing.
Can we find a time that's inconvenient but acceptable for everyone?