# Async Shiny Future Design Doc Sketches and Notes
## Use async fn anywhere

High-level goal: One should be able to write `async fn` anywhere that you can write `fn`.

### Type alias impl trait (TAIT)

- `type Foo = impl Trait` at module and impl level
- Within the scope of `Foo` (module or impl, respectively), the TAIT `Foo` must be used only in certain positions.

### Generic associated types

- `type Foo<...>` in traits and impls

### Async fn in traits
You should be able to write `async fn` in traits and impls. This desugars into a GAT + impl Trait; both are sketched below.
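For illustration, a minimal sketch of the surface syntax (the trait, method, and type names are made up, and `async fn` in traits is the proposal being described here, not today's stable Rust):

```rust
trait Foo {
    async fn bar(&self) -> u32;
}

struct MyFoo;

impl Foo for MyFoo {
    async fn bar(&self) -> u32 {
        22
    }
}
```

And a sketch of the GAT + type-alias-impl-trait desugaring it could correspond to (again purely illustrative; this relies on unstable features such as `impl_trait_in_assoc_type`):

```rust
use std::future::Future;

trait Foo {
    // One generic associated type per async method, named after the method.
    type Bar<'me>: Future<Output = u32> + 'me
    where
        Self: 'me;

    fn bar(&self) -> Self::Bar<'_>;
}

struct MyFoo;

impl Foo for MyFoo {
    // TAIT: the concrete future type is inferred from the body of `bar`.
    type Bar<'me> = impl Future<Output = u32> + 'me
    where
        Self: 'me;

    fn bar(&self) -> Self::Bar<'_> {
        async move { 22 }
    }
}
```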
Unresolved design questions:

- We use `Bar` here, but that's somewhat arbitrary; perhaps we want to have some generic syntax for naming the method? This concern applies to other "`-> impl Trait` in trait" scenarios.
- What about `Send`, maybe even in the trait definition? This concern applies to other "`-> impl Trait` in trait" scenarios.

### "Inline" async fn in traits
Short version: make it possible to have async fn where the state is stored in the `Self` type (detailed writeup). This is equivalent to writing a poll function. Like a poll function, it makes the trait dyn safe; it also has the advantage that `Self: Send` implies that the returned future is also `Send`.
Remaining challenges:

### "Dyn" async fn in traits

The most basic desugaring of async fn in traits will make the trait not dyn-safe. "Inline" async fn in traits is one way to circumvent that, but it's not suitable for all traits that must be dyn-safe. There are other efficient options:

- Return a `Box<dyn Async<...>>` – but then we must decide if it will be `Send`, right? And we'd like to only do that when using the trait as a `dyn Trait`. Plus it is not compatible with no-std.
  - This concern applies equally to other "`-> impl Trait` in trait" scenarios.

We have looked at revising how "dyn traits" are handled more generally in the lang team on a number of occasions, but this meeting seems particularly relevant. In that meeting we were discussing some soundness challenges with the existing dyn trait setup and discussing how some of the directions we might go would enable folks to write their own `impl Trait for dyn Trait` impls, thus defining for themselves how the mapping from `Trait` to `dyn Trait` works. This seems like a key piece of the solution.

One viable route might be:

- Traits containing `async fn` are not, by default, dyn safe (unless they use the "inline" strategy, e.g. via something like `#[repr(inline)]`).
- You can opt in to dyn safety with `#[derive(dyn_async_boxed)]` or some such – an `#[async_trait]`-style approach, except that the `Box` used to allocate can be reused in between calls for increased efficiency.
- There would also need to be a story for `no_std` land.
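For context, a sketch of the `#[async_trait]`-style pattern as it exists today via the `async-trait` crate (the attributes in the bullets above are hypothetical; the trait and types below are made up for illustration):

```rust
use async_trait::async_trait;

// The macro rewrites each `async fn` to return a boxed future
// (`Pin<Box<dyn Future + Send + '_>>`), which keeps the trait dyn-safe at the
// cost of one allocation per call.
#[async_trait]
trait Database {
    async fn fetch(&self, key: u32) -> Option<String>;
}

struct InMemoryDb;

#[async_trait]
impl Database for InMemoryDb {
    async fn fetch(&self, _key: u32) -> Option<String> {
        None
    }
}

// Because the returned future is boxed, the trait can be used as `dyn Database`.
async fn lookup(db: &dyn Database, key: u32) -> Option<String> {
    db.fetch(key).await
}
```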
## Recursive async functions
Recursive async functions are not currently possible. This is an artifact of how async fns work today: they allocate all the stack space they will ever need in one shot, which cannot be known for recursive functions.

The compiler could manage this by inserting a "box" automatically (perhaps with an allow-by-default lint to let you know it's happening); another option would be to have a convenient way to make a "boxed" async fn, such as `box async fn`, and the compiler could suggest inserting this keyword at the appropriate point so that such a function can be recursive.

(Note that the boxed function would not have to be a `Box<dyn Async>`; it could be a nominal type instead, and thus the only runtime overhead comes from memory allocation.)

Alternatives: If we were very concerned about this, we could conceivably switch to an optional "arena" for growing the stack, but this scenario seems to come up relatively rarely.
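For reference, a sketch of today's manual workaround: box the recursive call so that the future's size no longer depends on the recursion depth (the function itself is just an illustration):

```rust
use std::future::Future;
use std::pin::Pin;

// Boxing breaks the cycle: `fib` returns a fixed-size (boxed) future, and each
// recursive call allocates a fresh future on the heap.
fn fib(n: u64) -> Pin<Box<dyn Future<Output = u64> + Send>> {
    Box::pin(async move {
        match n {
            0 | 1 => n,
            _ => fib(n - 1).await + fib(n - 2).await,
        }
    })
}
```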
## Easily compose, control schedule

Provide a new "building block" for scheduling based on hierarchical futures. This building block should:
### New async trait

Today's `Future` trait lacks one fundamental capability compared to synchronous code: there is no way to "block" your caller and be sure that the caller will not continue executing until you agree. In synchronous code, you can use a closure and a destructor to achieve this, which is the technique used for things like `rayon::scope` and crossbeam's scoped threads.

Async functions are commonly written with borrowed references as arguments:
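For example (a made-up function, just to show the borrow):

```rust
// The argument is borrowed, so the returned future captures that borrow and
// is only valid for the lifetime of `data`; it is not a `'static` task.
async fn sum(data: &[u64]) -> u64 {
    data.iter().sum()
}
```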
but important utilities like `spawn` and `spawn_blocking` require `'static` tasks. Without "non-cancelable" traits, the only way to circumvent this is with mechanisms like `FuturesUnordered`. Fundamentally, the challenge is that there is no way to guarantee that a spawned task has finished (and released its borrows) before the borrowed data goes out of scope.

Introduce a trait along these lines:
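A minimal sketch of what such a trait could look like, assuming the defining change is an unsafe polling contract that rules out silent cancellation (the exact shape is an open design question, not something settled here):

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

/// Like `Future`, but "non-cancelable": once polled, the value must be driven
/// to completion rather than silently dropped at an await point.
trait Async {
    type Output;

    /// # Safety
    ///
    /// Having called `poll` once, the caller must keep calling it until it
    /// returns `Poll::Ready`; the value may not be destroyed before then.
    unsafe fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
```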
### Specialization to bridge old and new traits

Introduce "bridge impls" like the following, so that every existing future can be used where the new trait is expected. However, we also need the ability for common combinators to implement both `Future` and `Async`; both pieces are sketched together below:
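An illustrative sketch of the setup, with a simplified stand-in for the hypothetical `Async` trait and a made-up `Join` combinator (this is expected not to compile; the overlap is exactly what specialization would have to resolve):

```rust
use std::future::Future;

// Simplified stand-in for the hypothetical new trait (details elided).
trait Async {
    type Output;
}

// Bridge impl: every existing `Future` is also an `Async`.
impl<F: Future> Async for F {
    type Output = F::Output;
}

// A combinator that also wants to implement `Async` directly:
struct Join<A, B>(A, B);

impl<A: Async, B: Async> Async for Join<A, B> {
    type Output = (A::Output, B::Output);
}
// Under today's coherence rules these impls are considered overlapping: a
// `Join` of futures can reach `Async` either through the direct impl or
// through `Future` plus the bridge impl.
```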
This creates a problem, because you have multiple routes to implement `Async` for `Join<A, B>` where `A` and `B` are futures.

Specialization can be used to resolve this, and it would be a great feature for Rust overall. However, specialization has a number of challenges to overcome. Some related articles:
### Scope-based API

Async functions are commonly written with borrowed references as arguments (as in the example above), but important utilities like `spawn` and `spawn_blocking` require `'static` tasks. Building on non-cancelable traits, we can implement a "scope" API that allows one to introduce an async scope. This scope API should permit one to spawn tasks into a scope, but have various kinds of scopes (e.g., synchronous execution, parallel execution, and so forth). It should ultimately reside in the standard library and hook into different runtimes for scheduling. This will take some experimentation!
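A usage sketch under those assumptions (every name here is hypothetical; the shape is loosely modeled on `std::thread::scope`):

```rust
async fn process_all(items: &[String]) {
    // Hypothetical: `scope` opens an async scope; tasks spawned into it may
    // borrow from the surrounding stack frame because the scope guarantees
    // they are complete before it is exited.
    scope(|s| async move {
        for item in items {
            s.spawn(async move {
                println!("processing {item}");
            });
        }
    })
    .await;
}
```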
### Side-stepping the nested await problem

One goal of scopes is to avoid the "nested await" problem, as described in [Barbara battles buffered streams (BBBS)][BBBS]. The idea is like this: any combinator which merges multiple async pieces of work – i.e., initiates concurrency – needs to take a scope parameter. The way to get concurrency, in other words, is to spawn into a scope, and not to construct small "subschedulers". This includes `FuturesUnordered` and `Stream::buffered`, but also more familiar APIs like `join`. By doing this, we ensure that the scheduler is able to poll those concurrent tasks even while the main task is busy doing something else. A good rule of thumb here is that only the scheduler ever invokes poll. All other code just "awaits" things.[^1]

In the case of [BBBS], the problem arises because of `buffered`, which spawns off concurrent work to process multiple connections. This means that the [`buffered`] combinator would take a `scope` parameter to use for spawning:
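A sketch of what that call site might look like (hypothetical signature and types; today's `buffered` takes only a buffer size):

```rust
// Entirely hypothetical API: `buffered(5, s)` keeps up to five queries in
// flight and spawns them into the scope `s`, so the runtime's scheduler polls
// them even while this task is busy elsewhere.
async fn handle_queries(s: &Scope<'_>, database: &Database, queries: Vec<Query>) {
    stream::iter(queries)
        .map(|query| run_query(database, query))
        .buffered(5, s) // today: .buffered(5), polled only from within this task
        .for_each(|result| process(result))
        .await;
}
```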
The `join` combinator would likely be replaced with a method on `scope`,[^2] which would be defined like so:
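A sketch, continuing with the hypothetical `Scope` and `Async` types from the earlier sections:

```rust
impl<'scope> Scope<'scope> {
    // Spawn both pieces of work into the scope and await both handles, so the
    // scheduler polls them concurrently even if the caller is busy elsewhere.
    pub async fn join<A, B>(&self, a: A, b: B) -> (A::Output, B::Output)
    where
        A: Async + Send + 'scope,
        B: Async + Send + 'scope,
    {
        let a = self.spawn(a);
        let b = self.spawn(b);
        (a.await, b.await)
    }
}

// Usage: `let (x, y) = s.join(task_a(), task_b()).await;`
```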
### Could there be a convenient way to access the current scope?

If we wanted to integrate the idea of scopes more deeply, we could have some way to get access to the current scope and reference its lifetime. Let's imagine we added a keyword `scope`, and we said that its lifetime is `'scope`. One would then be able to do something like the sketch at the end of this section.

Lots of unknowns to work out here, though. For example, suppose you have a function that creates a scope and invokes a closure within. Do we have a way to indicate to the closure that `'scope` in that closure may be different?

It starts to feel like simply passing "scope" values may be simpler, and perhaps we need a way to automate the threading of state instead.
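A sketch of what the hypothetical `scope` keyword and `'scope` lifetime might allow (purely illustrative syntax):

```rust
async fn process(items: &[String]) {
    for item in items {
        // Hypothetical: `scope` names the enclosing async scope, so spawned
        // tasks may borrow `items` for the lifetime `'scope`.
        scope.spawn(async move {
            println!("{item}");
        });
    }
}
```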
### Voluntary cancellation
In today's Rust, any async function can be synchronously cancelled at any await point: the code simply stops executing, and destructors are run for any extant variables. This leads to a lot of bugs. (TODO: link to stories)
Under systems like Swift's proposed structured concurrency model, or with APIs like .NET's CancellationToken, cancellation is "voluntary". What this means is that when a task is cancelled, a flag is set; the task can query this flag but is not otherwise affected. Under structured concurrency systems, this flag is propagated to all children (and transitively to their children).
Voluntary cancellation is a requirement for scopes. If there are parallel tasks executing within a scope, and the scope itself is canceled, those parallel tasks must be joined and halted before the memory for the scope can be freed.
One downside of such a system is that cancellation may not take effect. We can make it more likely to work by integrating the cancellation flag into the standard library methods, similar to how tokio encourages "voluntary preemption". This means that file reads and things will start to report errors (`Err(TaskCanceled)`) once the task has been canceled. This has the advantage that it exercises existing error paths and permits recovery.
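For illustration, a sketch of how that could look from the application's point of view (the `TaskCanceled` error is the hypothetical part; the file functions are tokio's existing APIs):

```rust
use std::path::Path;

// After the surrounding task has been cancelled, further I/O calls would fail
// with `Err(TaskCanceled)`, so the ordinary `?`-based error path performs the
// cleanup instead of the task being silently dropped mid-way.
async fn copy_log(src: &Path, dst: &Path) -> std::io::Result<()> {
    let data = tokio::fs::read_to_string(src).await?;
    tokio::fs::write(dst, &data).await?;
    Ok(())
}
```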
### Cancellation and `select`
The `select` macro chooses from N futures and returns the first one that matches. Today, the others are immediately canceled. This behavior doesn't play especially well with voluntary cancellation. There are a few options here:

- Have `select` signal cancellation for each of the things it is selecting over and then wait for them to finish.
- Have `select` continue to take `Future` (not `Async`) values, which effectively makes `Future` a "cancel-safe" trait (or perhaps we introduce a `CancelSafe` marker trait that extends `Async`).
  - In that case, an `async fn` could not be given to select, though we might allow people to mark an `async fn` as "cancel-safe", in which case they would implement `Future`. They would also not have access to ordinary async fns, though.

### Async drop
Create an `AsyncDrop` trait. When an async function is compiled and a value is dropped, we will use the "async drop glue". This is analogous to drop glue except that it invokes the `foo.async_drop().await` method where appropriate.

There is also a lint for when something that implements `AsyncDrop` is dropped in a synchronous context.
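A sketch of what such a trait might look like (hypothetical; the exact receiver and its interaction with pinning are open questions):

```rust
trait AsyncDrop {
    async fn async_drop(&mut self);
}
```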
#### What happens when you drop an async-drop value in a sync context?
We can't stop you from doing that right now, and I don't want to encumber this work with trying to crack that nut. (The basic idea would be to allow things to `impl !Drop for X`, and to make `T: Drop` a default rather like `Sized`, but there's lots of details to work out.) Instead, we offer a lint, and we would encourage people implementing `AsyncDrop` to abort or warn or otherwise try to recover as gracefully as they can. The hope is that this is "sufficiently reliable" that the scenario doesn't happen a lot in practice. Not especially satisfying.

## Generic code that is portable across runtimes
### Read and write

We need to have `AsyncRead` and `AsyncWrite` traits. These have several design goals:

- play well with `async fn`
- be usable as a `dyn` trait

One possibility is the design that CarlLerche proposed, which separates "readiness" from the actual (non-async) methods to acquire the data:
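A sketch of that readiness-based shape (hypothetical trait and method names, loosely modeled on tokio's `ready`/`try_read` pattern; it also leans on `async fn` in traits, discussed above):

```rust
use std::io;

/// Which kind of readiness we are waiting for (illustrative only).
pub enum Interest {
    Read,
    Write,
}

// The only async operation is waiting for readiness; actually moving bytes is
// a plain, non-async call that never blocks.
trait AsyncRead {
    async fn ready(&mut self, interest: Interest) -> io::Result<()>;
    fn try_read(&mut self, buf: &mut [u8]) -> io::Result<usize>;
}

trait AsyncWrite {
    async fn ready(&mut self, interest: Interest) -> io::Result<()>;
    fn try_write(&mut self, buf: &[u8]) -> io::Result<usize>;
}
```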
This allows users to write code that is generic over `T: AsyncRead`, `T: AsyncWrite`, or `T: AsyncRead + AsyncWrite`.

Note that it is always possible to ask whether writes are "ready", even for a read-only source; the answer will just be "no" (or perhaps an error).
### Iterator

The async iterator trait, like async read and write, can leverage "dyn in traits":
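A sketch of the shape (hypothetical; essentially today's `Stream`, written with `async fn` in a trait):

```rust
trait AsyncIterator {
    type Item;

    // Relies on `async fn` in traits; a poll-based or "inline" form would be
    // the dyn-friendly alternative discussed earlier.
    async fn next(&mut self) -> Option<Self::Item>;
}
```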
### Use current runtime without generics

- std library spawn
- std library async_io
- std async abstractions
## Open questions and far out ideas
### async as a monomorphization mode

Niko has been toying with the far out idea of making it so that any `fn` can be compiled as an "async fn". Put another way, what if there was an implicit "mode" for your function? It could be compiled in synchronous mode or asynchronous mode. There would be some functions that are only compatible with one or the other, but a lot of things (e.g., the I/O abstractions in the standard library) could be made compatible with both.

What could be amazing here is that it might offer a way for us to do all kinds of things:

- Support `?` inside of iterator combinators (by making the intermediate functions async – perhaps cancel-safe async).

Some challenges:
### await syntax

Should we require you to use `.await`? After the epic syntax debates we had, wouldn't it be ironic if we got rid of it altogether, as carllerche has proposed?

Basic idea:

- `async foo()` to create an "async expression" – i.e., to get an `impl Async`.
- `async || foo()`, i.e., to create an async closure.
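A sketch of how call sites might read under this idea (hypothetical syntax, not valid Rust today):

```rust
// Hypothetical "await-less" syntax:
let value = fetch(url);          // calling an async fn awaits it implicitly
let fut = async fetch(url);      // the `async` prefix defers it: yields an `impl Async`
let thunk = async || fetch(url); // an async closure
```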
Appealing characteristics:
- No need to write `.await`. It'd be nice if you just didn't have to remember.

But there are some downsides:
- Integration with the `Fn` traits and so forth. In this "await-less" Rust, an async function is called differently from other functions, because it induces an await. This means that we need to consider `async` as a kind of "effect" (like `unsafe`) in a way that it is not today.

### ?Drop
One problem with async drop is that nothing stops you from dropping async values in a sync context. What do we do in that scenario? There isn't a great answer. The problem is that, right now, Rust assumes that all values are "droppable". But what if we removed that assumption? You could declare some values as non-droppable (e.g., `impl !Drop for Foo`). This would then mean that those values must be consumed in some way, presumably via some kind of blessed function that takes ownership of the value. Generic code would then not be able to use those functions unless it was declared as `T: ?Drop`. That's right, another `?` trait – something we have traditionally shied away from (and with good reason).

Lots of sticky questions to answer. What about `dyn Trait`, for example?
[BBBS]: https://rust-lang.github.io/wg-async-foundations/vision/status_quo/barbara_battles_buffered_streams.html
[`buffered`]: https://docs.rs/futures/0.3.15/futures/prelude/stream/trait.StreamExt.html#method.buffered

[^1]: This is not a hard rule. But invoking poll manually is best regarded as a risky thing to be managed with care – not only because of the formal safety guarantees, but because of the possibility for "nested await"-style failures.

[^2]: Naturally we would want a variadic variation, or perhaps a method macro.