(The original document is here.)
I (@kpreid) have not previously engaged with the Rust project in matters of async language/library design, so let me quickly tell you a bit about where I'm coming from.
My very first programs were graphical, often interactive ones — that is, existing in an event-loop environment — and on machines that had a single core, no memory protection, and often no threads. (Even in later times when threads were available, they were clearly a complication that had to be used carefully due to all the unmarked hazards of lack of synchronization.) When the Internet became available, I started writing networked programs — one of the notable ones being a `select(2)`-based multiuser interactive game server (MUD).
Later, I got involved with the object-capability community, which among other things took the idea of event loops and ran with it as an entire programming paradigm, seeing it as a way to build programs that avoided hazards from simultaneous mutation — in the same way that the "actor pattern" we know as Rust developers does, but serially concurrent rather than parallel. In either case, we break up the life of a stateful entity into a sequence of “events” or “messages”, each of which is processed mostly in isolation — limiting causality to (mostly) known incoming and outgoing channels. Threads could be made safe by having each thread run separate actors or separate event loops which communicated only by messages rather than shared memory.
(Even with Rust's safety features, there are advantages to these paradigms; it's easier to avoid deadlock when you have fewer locks, and subtler application-specific state management problems can sometimes be avoided. And `RefCell` and `Cell` have stronger guarantees than `RwLock` and `Mutex`.)
As a consequence of these experiences, I tend to see message/event-based architecture, and even cooperative multitasking, not as a specialty tool for high-performance network services (as some Rust developers would tell you) but as one of the fundamental paradigms in which code can be written. Rust's `Future`s and `async` are a great language/library feature to have available, and I want to see enhancements that make it easier to use them when they fit.
The particular feature we are discussing adding is trivial in API:

```rust
// in mod std::thread

impl<T> IntoFuture for JoinHandle<T> {
    type Output = Result<T>;
    type IntoFuture = JoinFuture<T>;
    fn into_future(self) -> Self::IntoFuture { ... }
}

/// Future type supporting the `IntoFuture` implementation.
pub struct JoinFuture<T> { ... }

impl<T> Future for JoinFuture<T> { ... }
```
There are very few plausible alternatives to this; the only one I have thought of is `impl Future` instead of `impl IntoFuture`.
The question at hand, then, is not what the design should be but whether we should do this at all.
I believe we should.
The opposing arguments that I have seen made, or that I can think of, will be addressed in passing while making the arguments in favor.
Suppose that an application in fact needs to do this. It is possible to implement already, with the aid of a oneshot channel implementation. Any program that intends to run blocking tasks from async, with a return value, likely does something similar to this (though it may be in a loop run by a thread pool).
```rust
use std::future::Future;
use std::{thread, io};
use futures_channel::oneshot;

pub fn spawn_thread_with_future_output<T: Send + 'static>(
    thread_builder: thread::Builder,
    body: impl FnOnce() -> T + Send + 'static,
) -> io::Result<impl Future<Output = Result<T, oneshot::Canceled>>> {
    let (tx, rx) = oneshot::channel();
    thread_builder.spawn(move || {
        // Ignore the error from the receiver having been dropped.
        let _ = tx.send(body());
    })?;
    Ok(rx)
}
```
This implementation has the following disadvantages compared to what `std` could offer:

In the most general form, as presented above, it's pretty clunky; note the `io::Result` and `thread::Builder`, which most applications won't care about, but which some will. On the other hand, `impl IntoFuture for JoinHandle` makes "take the result as a future" an orthogonal choice from the thread's builder (or `std::thread::spawn()`), its scoped-ness, and the code in its closure. This orthogonality is not critical, since it usually doesn't make sense that a library API would produce or accept a `JoinHandle`, but it's a nice API choice.
It requires an additional library for the channel implementation. That's not necessarily a problem (we do not expect `std` to contain everything it could), but it feels like overkill compared to the problem, and there are no policy decisions to make here; there is no reason to want to swap in a different implementation, except perhaps for fine points of scheduling behavior, where one would want to write something custom and explicit anyway. (Counterargument: perhaps `std` should provide a oneshot channel.)
It does not catch panics from the thread in the same way both `std::thread::JoinHandle` and `tokio::task::JoinHandle` do; in order to do so, you need to make it yet more complex with a `catch_unwind`.
It makes an additional `Arc` memory allocation for the channel, notably in addition to the thread/`JoinHandle` shared data (the private struct `std::thread::Packet`), which already serves essentially the same job of communicating success or failure. (This is probably not significant, and `impl IntoFuture for JoinHandle` would mean adding an `Option<Waker>`-ish field to `Packet`.)
Rust suffers from a perceived and actual division between "sync" and "async" worlds ("function coloring", etc.). Some of this is intrinsic and would require language support to address (e.g. generic functions with possibly-async callbacks), but some of it can be addressed by adding simple interop features. In particular, having `impl IntoFuture for JoinHandle` would lower the cost of calling a blocking function from an async function, and of moving that async/blocking boundary when new requirements or refactoring demand it, because there would be fewer entities and lines of code that need to be repositioned.
Rust programmers who are using Tokio (or a similar async runtime) have the option of `tokio::task::spawn_blocking()`, which for some purposes is a superior choice since it uses a thread pool. However, one of the obstacles to async adoption is the perception (accurate or not) by library authors that "I have to make a choice of executor and my code won't be as general any more"; by offering the convenient `impl IntoFuture for JoinHandle`, we can remove this obstacle to a gradual introduction of async usage to a crate.
Rust programmers who are writing a program they want to keep simple, and who don't want to bring in anything that feels heavyweight-with-configuration-knobs like a thread pool, might yet find themselves in a situation where they want to, say, handle the results of several threads as they come in, or even do something more heterogeneous than that. In that case, `impl IntoFuture for JoinHandle` (plus having some executor) lets them do that:
```rust
let something_local = ...;
let t1 = thread::spawn(|| { ... });
let t2 = thread::spawn(|| { ... });
block_on(join!(
    async { handle_result_1(&something_local, t1.await.unwrap()) },
    async { handle_result_2(&something_local, t2.await.unwrap()) },
))
```
In this example, I'm assuming the result processing requires local data that might be `!'static` and `!Send`, so it can't just be sent to each thread. It does include `block_on` and `join!`, which are both not yet features of `std`, but they are also things that can be provided by small libraries, and might (in some possible paths for Rust) become part of `std`.
The same effect can be obtained with no async, using an MPSC channel, but it's not necessarily as straightforward: you must either define an `enum` of the different output types from the multiple threads, or create keys to identify each of an unbounded set of different items. Thus, even if you intend to write a largely thread-based program, futures made from threads may be able to help you write your structured concurrency.
(Also, instead of a vanilla `block_on()`, the program might be using an odd executor like `async-winit`, which isn't even in the business of providing general-purpose executor features like task management, but has good reason to be `async` anyway.)
The above isn't just hypothetical; in the original Zulip discussion thread, @The 8472 brought up some cases where something async-ish was already being done. In particular, https://github.com/rust-lang/rust/blob/5bd5d214effd494f4bafb29b3a7a2f6c2070ca5c/src/tools/tidy/src/main.rs#L51-L93 is a limited-concurrency thread spawner which could be replaced by a simple use of `futures::stream::StreamExt::buffer_unordered()`, if only the spawned threads could be turned into futures.
We should offer good-enough tools to make it easy to do these things cleanly, even if a dedicated, elaborate async framework like Tokio could do them more efficiently. Not every part of every program has to be high-performance, and offering correctly-written, straightforward interop features serves Rust's "fearless concurrency".
## `std`'s job is interop, and it should do lots of that

`std` provides common types for Rust libraries to use and share with each other; therefore, it provides various conversion functions to go from one type to another, so that when a caller needs a slightly different type than they have, they can obtain it easily. There are many `From` implementations, and many conversion methods, like `Vec::into_boxed_slice()`. Many of these conversions could easily be written in other ways; convenience, "do the obvious thing", is valuable.
Therefore, we should provide interop features between `async` concurrent code and thread-based concurrent code. This does not necessarily mean the specific `IntoFuture` discussed here; rather, there just should be something available, which there isn't currently.
Some of the conversions `std` offers are purposefully lossy — for example, `Result::ok()` discards the `Err` value if any — because what is lost may reasonably be irrelevant to the task at hand. We choose to offer these convenient conversions even though they could be used to make a mistake. Somewhat similarly, offering async access to threads' results could be used to make a poor implementation of spawning which creates threads in excess (rather than using a thread pool), and defeats the value of async. However, I believe we should provide the interop anyway; the mistake in this situation would be in the unbounded `thread::spawn()`ing, not in feeding the results thereof into async-land.
Similarly, I believe `std` should offer a trivial executor like `pollster::block_on()`, because there is more-or-less only one way to do it and it enables lots of interop, even though it could be used to make the mistake of trying to run an IO/timer future on the wrong executor. But that's not the main topic today, and I do not think that we should block one-directional interop on bi-directional interop, because one is still useful without the other.
## `block_on`?

tmandry: I'd love to see this API in std, but it feels a little incomplete without `block_on`. At the same time, I worry about the footguns you mentioned. Are there mitigations?
kpreid: As a reminder, I'm not proposing `block_on`, and we shouldn't block this on that. The issues you mention don't come up here. Also, `Future`s are useful for many things. The idea of "what is blocking" is relative to what you're doing. You could use futures to do some kind of coroutine that is unrelated to the scheduler.
kpreid: The standard library should not constrain you to operating within the Tokio-style paradigm.
Justin: We can support this without the full context reactor hook. The interface of `block_on` is simple, and we could allow for extensions to it later.
Justin: Tokio is a bit odd because it won't put the IO reactor in a different thread. This does work with `smol`.
kpreid: The connection here is the other direction of portability. In the direction we're talking about, you can't create a deadlock with `block_on`. If you spawn a thread and `block_on` in that thread, using `impl IntoFuture for JoinHandle` can't deadlock.
tmandry: I really don't have any issues with this `JoinHandle` proposal. It's just that I see how people would then want `block_on`, and I worry about the problems that might come from that.
yosh: This may be the same question as Tyler's, but how does this deal with structure? E.g., if the calling future cancels, the thread will continue running. There is no way to cancel the thread, so how do we manage that? I think for this to be correct we should have an async `Drop` impl which will allow us to wait for the thread to complete before returning?
yosh: Here is an example of the issue:

```rust
let my_thread = thread::spawn(|| {
    thread::sleep(Duration::from_secs(2)); // expected to run for 2 seconds
    println!("1. hello from inside the thread")
});
let _ = my_thread.into_future().timeout(Duration::from_secs(1)).await; // expected to time out after 1 second
println!("2. program done");
```

This program will print the following:

```
2. program done
1. hello from inside the thread
```
eholk: I think that's just a problem with the existing thread spawning API. This doesn't add any new issues.
kpreid: Agreed with eholk. This is a property of threads. Unless you're using a VM, you can't cancel a thread soundly. So all you're doing is waiting for an event here.
kpreid: There are plenty of applications here where this would not be an issue.
TC: Yosh, what would you expect this to do in a world with async drop?
yosh: The async drop future would return `Pending` until the thread has completed.
eholk: This does bring an interesting twist to the timeout example above. The timeout essentially wouldn't need to take effect because the async drop future would need to wait for the thread to complete.
kpreid: It would be equally a mistake to design an API that would be hard to use in an unstructured way. In the projects I've worked on, with games, the paradigm is pipeline based, and that's not tree structured.
kpreid: Also, we can support a weak form of cancellation here. The `JoinHandle` could set a cancellation flag that the thread could choose to read. For some use cases, that would be perfect. This would not take care of waiting for the cleanup to finish; that may be better served by a channel. The point of the solution proposed here is that it's simple.
tmandry: I like structured concurrency. But I also agree we need to support some kind of unstructured usage.
tmandry: We have this API called `ScopedJoinHandle`. It blocks. I'd like to experiment with implementing `IntoFuture` for that.
yosh: Regarding supporting unstructured concurrency, we already do of course, and it's not clear how we would forbid it. But even though we can't forbid unstructured concurrency, I don't think we should encourage or suggest it to people. I want to push back on the idea that we should support unstructured concurrency. I don't think we should actually. There are probably other ways that we can support the patterns that people want. That is, I suppose, a strong position of "I disagree."
yosh: I have a post about "Tree structured concurrency" that goes through some of this. If there are examples of things that can't be handled by structured concurrency, we should go through those.
yosh: I'd go so far as to say that we should deprecate `thread::spawn`. We should contain this damage and not propagate it to async Rust.
Justin: I'm not sure we could ever get rid of this behavior while preserving the idea that Rust can do everything you can do in C. But I see broadly what you're saying.
kpreid: We could have a way to express that we don't want an async drop to complete. That may have some applicability here.
tmandry: We could have a more narrow API like `detach`. I could see us doing something like that.
yosh: I would get rid of the `detach` method on `task` as well. It allows you to have an unstructured system.
tmandry: Part of me wants to do that. But, I don't know. I've written code where I just want to spawn a thread and forget about it. I understand the architecture of my program well enough that I don't need the guardrails.
tmandry: Also, I want to make incremental progress. I'm hesitant to block on features we haven't even designed yet.
yosh: I'm worried about adding more features that would make adding cancellation even more difficult.
yosh: The average Rust user doesn't understand how cancellation works and how it propagates. I'd like us to think about how these things connect together. If an API doesn't go in that direction, we shouldn't do it. Every step should be toward that direction.
kpreid: Another option is that we could provide a method instead of `impl IntoFuture ...`. It could be unstable. Doing it this way prevents it from being insta-stable, which would help us to experiment with this.
kpreid: The thing you can't do outside of the standard library is invoke the waker when the thread exits. That's why we need something here.
tmandry: To do this outside of the standard library, could I write my own wrapped `thread::spawn`?
kpreid: Yes. But it's not great because you have to wrap everything that may spawn.
jkarneges: Maybe we could guide users to not do the wrong thing by panicking when the future is dropped?
tmandry: It would be a breaking change to go from panicking to not compiling.
Yosh: Do we have prior art in the ecosystem? What is usage like there? How does this compare to alternatives such as `spawn_blocking` / thread pools?
kpreid: Implementing it as a library is "Argument 0" in the document. This would have interactions with thread builder, thread spawn, scoped threads, named threads - that's a lot of interactions.
yosh: What's an existing example of that in the ecosystem?
TC: This is similar to `spawn_blocking` in Tokio, but with different performance tradeoffs.
kpreid: There are use cases that don't need the full complexity of a thread pool.
kpreid: Right now there's a problem in the ecosystem that you can't have "a little bit of async". Forcing users to have the highest-performance most complex thing or nothing at all continues in this direction. It's also concerning to hear that we'd try to block this on async drop.
jkarneges: With respect to experimenting in crates, I worry about that in terms of incremental adoption and education. So we could do this in a third party crate, but I don't feel like we'd want it to live there for a long time.
kpreid: This goes back to adding a method for this.
eholk: There do seem to be use cases for this. I don't like blocking on things when we don't know how long those other things will take. Structured concurrency seems useful in a lot of cases. But we already have unstructured primitives in the standard library.
eholk: I'm positive toward this idea. But at the same time, we should hear the concerns that have been raised.
Daria: I agree with Kevin here. It's not that bad to insert blocking code into an async task. I agree that structural concurrency should be the default. But there should be some way to opt-out of this.
Daria: By the way, `mem::forget` may not detach the thread. You should specifically call `detach`.
Yosh:

```rust
let handle = thread::spawn(|| {});
mem::forget(handle); // <- no more way to join the handle
```
tmandry: We could make the destructor here block the thread if the future is dropped.
eholk: There was an idea to add this as an unstable API, and that does seem to be a good way to make progress on it. There don't seem to be any downsides to this.
TC: Is this something you are interested in working on?
kpreid: I do have a prototype of this. Not sure how long it will take me to finish it. Given the discussion here, I'll probably work on adding an unstable method.
tmandry: I'm a bit unsatisfied with this outcome. While I'm interested in structured concurrency, I'm also interested in doing the nicer thing here and adding the impl.
TC: Personally, I feel the tension on both sides. On the one hand, we shouldn't do things that make it difficult or impossible to do better things later (cf. `Infallible`). But on the other hand, we don't want to look back on this in 15 years and still see it unresolved. That would be 15 years where we hadn't solved real problems for users that we could have. It's our responsibility to run these analyses to ground now so that neither of these things happens.
jkarneges: +1 on that tension.
kpreid: We do have to focus on making what we have good.
tmandry: Definitely interested in seeing more work here.
(The meeting ended here.)