---
title: "Meetup 2024: Strategic goals"
tags: ["T-lang", "design-meeting", "minutes"]
date: 2024-09-10
url: https://hackmd.io/qIlGycAQQEabV08R6JHeiQ
---

# Meetup 2024: Strategic Goals

Abstract:

* What are the big problems we hope to tackle over the next 3-6 years in Rust?
* What do we think the project should be focused on in 2025?

Preparation: Brainstorm a list of goals you think might be important and bring it to the discussion.

Artifact (Big Goals): A list of 7 goals we think are worthy of the title "big goal".

Artifact (Medium Goals): A set of 3-5 likely flagship goals and 5-10 likely team goals.

## Brainstorm area

You may optionally use this section for brainstorming.

:::spoiler
pin ergonomics / immovable
linear types
place lifetimes
c++ interop
contexts
mod generics
effects?
trait transformers?
- generic-over-mut
:::

## Use case / personas

* GC developer (Alan) building networked services
* New kernel developer exploring more easily using RfL abstractions
* RfL abstraction author
* Kernel C programmers that don't want Rust to get in their way but will slowly but surely be drawn in by its niceness
* People building developer tools in other communities

## Goal candidates

### Niko

* Async Rust becomes the [Apex Predator](https://www.youtube.com/watch?v=Amv8T27SqCw)
    * Impact: Rust becomes the dominant language for implementing network services, FAAS, etc
    * Technical agenda: Async drop, Scoped threads, Default Runtime, Complete library
* Where Clause the Ultimate
    * Extend the power of the trait system to encompass a number of hard problems (portability etc)
    * Technical agenda: module generics, with clauses, `where Platform: 64-bit`, I forget what else I include here
* [The Borrow Checker Within](https://smallcultfollowing.com/babysteps/blog/2024/06/02/the-borrow-checker-within/)
    * Impact: Simplify Rust learnability and also grow Rust's expressive power
    * Up to and including self-referential types
* Cloud APIs for Rust
    * Easily build cloud applications from Rust source programs using
* Exceptionally extensible, aka, proc macros ftw
    * Technical agenda: Customizable diagnostics, error messages, proc macros that supply type definitions and can interact with incremental compilation
* No matter what language you use, you're using Rust (tooling)
    * Make Rust the best language ever to build developer tooling in, get it used by everybody everywhere
    * (not sure how much this is a lang-team goal, it is kind of the meta goal of salsa)
* No matter what language you use, you're using Rust (libraries)
    * Easy to write a library in Rust and then distribute it for use from other languages
* Well-combining Combinators
    * Let you write combinator (e.g., closure-heavy) APIs that work seamlessly with `?`, `await`, etc
    * Technical agenda:
* Easy errors
    * Finally get a real error story (`?` works, `try`, don't have to pick ecosystem crates to get the "universal error", etc)
* Dyn made Easy
    * Make dyn feel natural
    * Technical agenda: dyn* ?? needs more thought

### pnkfelix

* Support dynamicness (linking, method dispatch) rather than staticness (linking, monomorphization, etc) as a first class citizen.
    * Why this matters: Rust has over-indexed on its static optimization story. There is an important community in the dynamic language space that we could/should be serving better.
    * For linking: Is our current story around dylibs "good" enough? Are there language-y things that could/should be done to improve that story?
    * For dispatch: see e.g.
      yesterday's discussion of [dyn trait usability](https://hackmd.io/jD1Pmf6wQZGVWd6VOxooEw#Felixs-pick)
* Partner with the verification tool community.
    * Why this matters: Rust is already a language targeted at people who are willing to jump hurdles to get static verification of properties in the type system. We should double down on that attraction.
    * from the language itself:
        * provide support for contracts (as a mechanism for method specification and type invariant encoding)
        * enrich the type system (e.g. pattern types? refinement types?)
    * but also: help model checkers and proof assistants become partners with (and supported by) the Rust project, not just research prototypes.

### Josh Triplett

- Seamless C support - add a C compiler to the Rust compiler, build in the ability to include C header files, handle inline functions, call Rust from C, all with no translation layers required.
- Relaxing the orphan rule - make it possible for glue between crates X and Y to live in a separate crate XY, rather than an optional feature of X or Y. Help the ecosystem scale.
- Standalone `derive`: `derive Trait for Type;`
- Fully on-demand compilation across the entire crate graph - only compile what you actually need.
    - Sample test cases: `aws-sdk-ec2`, `windows` with all feature flags enabled. Massive API surface, users use a tiny subset.
- Stable crABI ABI - safe interoperability with other safe languages for high-level types like Option, Result, String, Vec, HashMap
    - Later: stable Rust ABI, inspired by Swift
- `dyn ConcreteType` - handle a "real" type via a vtable as if its interface were a trait.
- First-class sandboxed proc macros
    - Sandboxing lets us know what proc macros depend on, so we know when to re-run them and when not to.
- Ship AFIT-based common async traits - this is *primarily* a libs matter but may need lang support
- Move-only fields that can be moved but not referenced in-place (because they don't exist in memory in their normal layout). Stuffing types into niches and synthesizing them when needed.

### ScottMcM

#### Trustworthy `unsafe`

It's easy to follow the patterns that avoid mistakes, lead to the right behaviours, are well tested, and have plausible paths to correctness guarantees.

#### Optimizations everywhere

Unsafe code can still get the optimization goodness of safe code -- for example, alignment metadata for pointers, which is lost today in slice iterators -- and more things can be done in safe code instead of hand-rolling them in unsafe -- like offering move-only fields to unleash more layout optimizations that people often expect us to provide but which aren't actually legal today.

#### What can we stop doing in check?

Which parts of the check build can we avoid? What do we do, what takes so long, what provides value, what can we skip while still giving people a good experience?

### TC

#### WebAssembly

Rust has unique advantages to take advantage of WASM support in browsers and to be the best language for this.

#### Scientific computing and machine learning

ML is eating the world but, due to its research origins, the models and tooling are often written in Python, and that presents problems for "working in the large", scaling, and reliability. Now that these systems are so mission-critical, people are looking for alternatives. Rust has unique advantages in this area if we can offer compelling tooling.
#### Others

- Pin ergonomics
- Linear types
- An effect system, if we can
- Guaranteed TCE
    - scottmcm: yes please
- Reflection
- Better ergonomics for writing proc macros
    - E.g., by lowering the amount of ceremony.
- Type providers, as mentioned by Niko.
- Leveraging LLVM for better C interop
- Making RfL a success
- Solve the Linux distro problems for Rust
- impl Trait everywhere!

### tmandry

* Pin ergonomics or immovable types
    * Use case: Kernel developers and Async
    * Impact: Make working with immovable values feel non-assaulting
    * Impact: Intrusive collections are much easier to work with (great for ring 0)
    * Impact:
* Place lifetimes and safe self-referential types
    * Use case: Everyone
    * Impact: Make self-referential values buildable and returnable without unsafe
* Linear types (unforgettable and/or undroppable)
    * Use case: Async, io-uring, Kernel devs, ~Everyone
    * Impact: Sound scoped async tasks
    * Impact: Better io-uring APIs
    * Impact: Enable async drop
* C++ interop
    * Use case: Companies with "legacy" codebases
    * Collecting monomorphizations
* Contexts and capabilities
    * Use case: Kernel developers, Operating systems, Many others

## Discussion

### Felix's pick

Proc macros.

* Exceptionally extensible, aka, proc macros ftw
    * Technical agenda: Customizable diagnostics, error messages, proc macros that supply type definitions and can interact with incremental compilation
* First-class sandboxed proc macros
    - Sandboxing lets us know what proc macros depend on, so we know when to re-run them and when not to.
* Better ergonomics for writing proc macros
    * scottmcm: convenient patterns for things that require proc macros now (e.g., building a derive)
        - Why do we need a proc macro to write a derive
        - "polytypic programming" -- provide the operations for sums, products, base-cases, etc
            - pnkfelix: was powerful but I found it hard to read/understand
* More powerful hygiene
* Standalone `derive`: `derive Trait for Type;`

We make Rust code more expressive, community uses that to make Rust much better at any particular domain than it ever would be on its own.

We don't yet have a stable way to get the expansion of macros or proc macros.

Examples: Database mapping, Salsa, Bevy, PyO3, Serde

Reflection story? e.g. for Serde remote type. Standalone derive.

Type providers in F# - a provider is a macro that can provide a type definition to the compiler, not provided as tokens

Can we avoid tokens for things like this that are dealing in Rust things? (As opposed to things like `json!` that want custom syntax that isn't Rust the way that derives on structs are.)

How would we frame this on the *blog*? "Exceptional extensibility" -- proc macros? or focus on the *applications*?

Could have a shorter goal of "can we derive without writing a proc-macro"?

Maybe list out 3 headline applications (C++ interop, incremental, etc) and then say "oh btw it's all extensibility". Like how iPhone was introduced as "3 products"
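For reference, a minimal sketch of the "ceremony" being discussed, assuming the usual `syn`/`quote` setup (the `HelloWorld` trait and crate layout are hypothetical): even a trivial derive today lives in a dedicated proc-macro crate that re-parses the item from tokens and re-emits tokens.

```rust
// In a separate crate with `proc-macro = true` in Cargo.toml,
// depending on `syn` and `quote`.
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, DeriveInput};

#[proc_macro_derive(HelloWorld)]
pub fn derive_hello_world(input: TokenStream) -> TokenStream {
    // Re-parse the tokens of the item the derive is attached to.
    let input = parse_macro_input!(input as DeriveInput);
    let name = input.ident;
    // Re-emit an impl as tokens.
    quote! {
        impl HelloWorld for #name {
            fn hello() {
                println!("Hello from {}!", stringify!(#name));
            }
        }
    }
    .into()
}
```

Compile-time reflection or a standalone `derive Trait for Type;` would let this kind of impl be expressed without the token round-trip.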
### Josh's pick

Tyler/TC's mentions of linear types.

* Linear types (unforgettable and/or undroppable)
    * Use case: Async, io-uring, Kernel devs, ~Everyone
    * Impact: Sound scoped async tasks
    * Impact: Better io-uring APIs
    * Impact: Enable async drop

JT: Common pattern of `fn close(self) -> Result` ('this will automatically happen on Drop but if you want to handle errors...')

* use cases for state machines etc etc etc

NM: cites [must move types](https://smallcultfollowing.com/babysteps/blog/2023/03/16/must-move-types/)

NM: many applications from async, definitely helps with correctness by construction

JT: Want this for compile-time correctness, things like transactions: you must commit this or roll it back, "fall off the end" (Drop) is not one of your options.

SM: TC had guaranteed TCE, does that fit in? "The way people want to be able to write stuff", versus the things they have to do to avoid... e.g., they want CbC but they have to do workarounds. So like "hey I'm trying to write a state machine and by golly if I had `become` I could avoid this goofy".

NM: ties to CoP.

JT: How does that tie into linear types?

SM: Maybe there's no technical connection, but the idea of directly expressing the thing you want.

TC: I get it in an intuitive sense; if you think about the kind of code you write with TCE, you're continuously driving down this "infinite stack that never unwinds", which has implications for how you think about drop, etc.

TM: What's the headline goal?

FK: Better static reasoning.

What's our headline?

- "Guaranteed cleanup"
- "Required explicit cleanup operations"

Outline:

- Drop helps you clean up automatically, but there's only one kind of dropping, and it happens implicitly
- Some types have multiple ways to clean up, and you should explicitly decide between them. For instance, database transactions must be committed or rolled back. Today, types handle this by rolling back on drop unless you commit, or committing on drop unless you roll back. In either case, that's easy to forget, because the Drop is invisible and it's hard to notice the *absence* of a commit or rollback operation.
- We want to introduce types that require you to explicitly do something with them, and that cannot be implicitly dropped.
- This will also enable things like async Drop, and scoped async tasks.
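A minimal sketch of the transaction pattern described in the outline, as it has to be written today (type and method names are hypothetical); the implicit-Drop fallback is exactly the case a linear, "undroppable" type would turn into a compile error.

```rust
// Hypothetical sketch: a transaction that rolls back on drop unless committed.
struct Transaction {
    committed: bool,
}

impl Transaction {
    fn commit(mut self) -> Result<(), String> {
        // ... send COMMIT to the database ...
        self.committed = true;
        Ok(())
    }
}

impl Drop for Transaction {
    fn drop(&mut self) {
        if !self.committed {
            // Implicit rollback. Nothing forces the caller to have chosen
            // commit() or an explicit rollback; "falling off the end" is
            // silently accepted, which is what a linear type would forbid.
        }
    }
}

fn main() {
    let txn = Transaction { committed: false };
    // Oops: no call to commit() -- today this quietly rolls back on drop.
    drop(txn);
}
```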
### Niko's pick

#### Scientific computing and machine learning

ML is eating the world but, due to its research origins, the models and tooling are often written in Python, and that presents problems for "working in the large", scaling, and reliability. Now that these systems are so mission-critical, people are looking for alternatives. Rust has unique advantages in this area if we can offer compelling tooling.

Lots of CPU-based ML inference happening, actually (in addition to GPU).

Can Rust be the language of choice for "productized" (optimized, operationalized, etc) machine learning or something?

What we as a Rust community need to do -- some of it is going to be direct support, but a lot is probably going to be finding quality partners and figuring out what they need. I suspect proc macros are going to be a big part of it.

SM: see also autodiff

NM: yep

JT: Useful general desire for intermixing Rust in both directions. Python you can imagine doing the same thing. Write normal Python, also write Rust, what is the full generalization? File-level proc macros?

NM: *cough* webassembly *cough*

TC: I think the full generalization is a "hookable" compiler. Think about F#, hooking in at the type system level. Often your limitation is that you don't have type information.

What autodiff stuff needs is hooking in at a lower layer. Needs interaction on the codegen side. More efficient to work at the LLVM level for that. Start looking at these use cases and start seeing, "ok, proc macros hook in at the syntax layer, but...".

FK: big project to provide at full generality, but yeah.

SM: makes me think about "dyn thing" (DLLs?) and "ABI stuff", we need some way to provide hooks that doesn't break on every release.

JT: Why is Python the leading choice here? Because https://xkcd.com/1838/ and Python makes it easier to "stir". You can iterate, poke, experiment, repl, this is part of the reason why people use it. Ultimately a holistic poking until something comes out. I still pull up the repl because we don't have one. REPL, scripty dialect, more handwaving inference ...

FK: tough sell because many parts of Rust really need to know types (e.g., collect) to even know what the intent was

NM: Yes, though in tension with correctness-by-construction, but lots of room to do more like "you can always run, even with compilation errors"

TM: I like targeting prod models, scripty Rust could come later

JT: static duck typing!

SM: C#'s dynamic is a cool version of that, have to give a static type of Dynamic and link in the stuff like name resolution etc, kind of a nice interface to reflection

TC: Mojo provides a sort of market validation for this big goal as being important. If they achieve their goals, why use Rust? (They may struggle to achieve them...) Conversely, if we are successful with proc macros etc., Rust will be compelling to much of the Mojo audience.

### TC's pick

* [The Borrow Checker Within](https://smallcultfollowing.com/babysteps/blog/2024/06/02/the-borrow-checker-within/)
    * Impact: Simplify Rust learnability and also grow Rust's expressive power
    * Up to and including self-referential types

:tada: :tada: :tada:

```rust
fn get_name(s: &Foo) -> &{s} String {
    &s.name
}
```

```rust
fn get_name(&self) -> &{self.name} String {
    &self.name
}
```

```rust
struct Foo {
    s: String,
    t: &'self.s str,
}
```

Pin/async/generator ergonomics - async and generators are giving access to an underlying compiler mechanism of self-referential types that we don't give direct access to. We should give direct access to self-referential types.

NM: I suspect pin complements but I've not given it a super duper amount of thought.

FK: would this include substructural (or perhaps fractional is a better word) capabilities

NM: yes it would (see ["partial views on structs"](https://smallcultfollowing.com/babysteps/blog/2024/06/02/the-borrow-checker-within/#step-3-view-types-and-interprocedural-borrows))

SM: I liked the point that we could give a suggestion to add "this is the field you use"

NM: this was also on the dioxus list (private helper methods)

JT: This was the thing that *almost* made me give up on Rust when I first tried it. I tried to build something using git2+FUSE, mounting a git tree as a filesystem. Worked fine, but then I tried to encapsulate it in a structure, and got borrow checker errors. Got really frustrated with Rust, felt like it was getting in my way and not understanding something "obvious".

NM: target audience...new Rust users?

TM & JT: literally everyone

FK: depends on which part of it

NM: experienced folks have learned the workarounds, doesn't mean you like using them

SM: Pass all the individual fields as parameters

SM: Supports compositionality -- you hit it by taking two things, gluing them together, now it doesn't work anymore.
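A minimal sketch of the pattern being described, with hypothetical names: the helper only touches `counter`, but today the call is rejected because it borrows all of `*self`.

```rust
struct Editor {
    name: String,
    counter: u32,
}

impl Editor {
    // A private helper that only touches `counter`.
    fn bump(&mut self) {
        self.counter += 1;
    }

    fn demo(&mut self) {
        let name = &self.name; // borrow of one field
        self.bump();           // ERROR today: cannot borrow `*self` as mutable
        println!("{name}");    //        ...while `name` is still in use
    }
}
```

The workaround is to pass the individual fields as parameters (e.g. a free function taking `&mut u32`); view types / interprocedural borrow information aim to make the original code compile as written.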
NM: CoP, can't refactor your functions the way you want.

TC: from a learnability standpoint, having a syntax for concrete lifetimes I think would be huge in helping people understand one of the hardest parts of Rust (the borrow checker). Having notation would help people conceptualize this a lot better. Another way would be to allow using loop labels (NM: :eep:), not super useful in real code, but as scaffolding...it is. This approach gets to that in a more useful way.

### Scott's pick

Felix's note on

> proof checkers, kani, and the like

expanded to "if it compiles, it works" but we know that the meaning of unsafe is that this doesn't hold. The more we can find a way to raise the bar here, now there's miri, if that could be "miri is integrated, kani is integrated, you've annotated a bunch of properties on here, you have a weekly CI that churns an SMT solver to look for weird things".

FK: the verification community seems to be in 2 camps, one focused on the safe code side, assuming the unsafe code works, not attempting to make statements about that. Other tools are targeting unsafe code, and the lack of a memory model holds us back. I was very general to encompass both those camps, but if the goal is proving correctness of unsafe code, I'm all for it, very much all for it, but I want to point out a large community of folks who aren't using unsafe code but still want stronger correctness guarantees.

NM: My experience with miri suggests to me that we need a static story and a dynamic story. Glad I can use miri with salsa. I don't think I would be ready to think about proving stuff with it yet. Seems like a lot of effort. We need to consider the dynamic story. Really tied into proc macros. We should focus more on UB-freedom and panic freedom, not arbitrary invariants.

FK: Part of the problem is that you need the hardcore theorem proving for proving panic freedom.

NM: yes but for a subset of your code

TM: if we accept panic freedom as a goal, we can add arbitrary assertions, and get the prover to prove they'll never be hit, right?

NM: yes

SM: if we add invariant annotations, then hopefully we could use them both statically and dynamically -- whether they're hit in SMT solvers or fuzzers or whatever, it's fine.

TC: it seems to me that the normal work we do in expanding the scope of safe Rust is in some ways a way to add proofs to what otherwise would be unsafe code. Which is to say that the more we can pull in from the unsafe world to the safe world, we are adding proofs. When I think of things like panic freedom, I think of that as a "safe" assertion we are making, the sort of thing that's useful in safe code, and so I wonder, what are the patterns occurring in unsafe code that are actually safe? How can we pull those into safe code? Get all the apparatus working for us? Maybe part of this work is trying to expand the scope of safe Rust?

FK: I mentioned proposals regarding refinement, pattern types, is that the kind of thing you are talking about?

TC: yes. You're proposing a way to add a bunch of different annotations and a different language, essentially, to state why some unsafe code is actually safe. My question is to what degree could we solve that same motivation by trying to make that code *safe*? Rather than it being unsafe and then adding something to prove its correctness, can we make more of those patterns and figure out how to make them safe code? By expanding the scope of the type system, etc?

TM: I think that seems like a more promising direction than adding an additional language for "things you want to prove".
NM: what do we mean by "additional language"?

TM: pre/post conditions

NM: how is that an add'l language?

FK: if you want a full-fledged specification, you need to be able to cover things like "forall X" and so forth. Not something you would evaluate dynamically. (I know SPARK/Ada does, but I wouldn't duplicate their exact design choices myself.)

TM: I feel a bit more skeptical, maybe we should go down the refinement types route.

TM: I wanted to say that if we do get more dynamic checking, we need a UB sanitizer for Rust that can embed in real programs that interact with the system.

NM: *cough* [krabcake](https://github.com/pnkfelix/krabcake) *cough* (a project that aimed to do that, roughly)

SM: compositionality of wrapper types is very bad, if you want to do the ascii string... non-empty... at most 30 characters... maybe that's a sign refinement types are useful. I think for "cross-field things" we need something new. Pattern types work well for individual fields, but when it's "this field is always less than that field", or "these numbers are apart by a multiple of the size of this generic"...

NM: I think we're too far down and I'd prefer us to ask what value we're trying to deliver *exactly*, is it getting rid of unsafe code, is it helping safe code get even safer, etc, and not as much the precise *path*

TC: As we talked about yesterday, one thing that the RfL team is doing is taking a pile of C code with implicit safety obligations and, in wrapping it in Rust, encoding those requirements in the type system. That is proof-making, because Rust is a proof checking system. There's some overlap here we should consider.

### Tyler's pick

* Support dynamicness (linking, method dispatch) rather than staticness (linking, monomorphization, etc) as a first class citizen.
    * Why this matters: Rust has over-indexed on its static optimization story. There is an important community in the dynamic language space that we could/should be serving better.
    * For linking: Is our current story around dylibs "good" enough? Are there language-y things that could/should be done to improve that story?
    * For dispatch: see e.g. yesterday's discussion of [dyn trait usability](https://hackmd.io/jD1Pmf6wQZGVWd6VOxooEw#Felixs-pick)

Runner-up: C++ and C interop

TM: FELIIIIIX TALK

FK: I put two things under one umbrella but I think they make sense. I think Rust has an awesome static deliverable story, but there's a reason things like Python, Java, Swift have chosen designs that enable people to deliver components and link them together as independent entities.

FK: We sort of had that story but we gave up on investing in it in favor of static linking.

FK: Because we've not invested as much, it's caused us to lose on the dynamic library side, even as a project not properly documenting how they work, and on the language side, handling dynamic dispatch in an ergonomic way, having it handle more cases, e.g. expanding dyn compatibility/capability. Should not be something where I have to think so hard up front.

JT: You said Swift with a question mark, but I think it deserves an exclamation point. Java and Python were dynamic to begin with, but Swift is a compiled language that then added more dynamic features, including building out a stable ABI around the concept of "what if you used dyn for all the things". Their stable ABI could be summarized as "use dyn for all the things" and avoid having static types, since then you have to know how they are laid out and agree on it, and dyn lets you delegate it. It would be worth learning from that. Using it to build not just a stable ABI but the tools that go into making dynamic linking between Rust, loadable plugins, etc, easy. C has this because it has almost no types to speak of. We have a harder time because we have richer types. C++ manages to pull it off in a pretty reasonable way if you stick to one compiler.
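A minimal sketch, with a hypothetical `Map` trait, of what the "use dyn for all the things" approach (and Josh's earlier `dyn ConcreteType` goal) has to look like today: hand-write a trait mirroring the part of the concrete API you need, so both sides of a dynamic boundary only agree on a vtable rather than on `HashMap`'s layout.

```rust
use std::collections::HashMap;

// Hypothetical hand-written stand-in for `dyn HashMap<String, String>`.
trait Map {
    fn get(&self, key: &str) -> Option<&str>;
    fn insert(&mut self, key: String, value: String);
}

impl Map for HashMap<String, String> {
    fn get(&self, key: &str) -> Option<&str> {
        HashMap::get(self, key).map(String::as_str)
    }
    fn insert(&mut self, key: String, value: String) {
        HashMap::insert(self, key, value);
    }
}

// A plugin boundary only ever sees the trait object, never the concrete type.
fn plugin_entry(config: &mut dyn Map) {
    config.insert("verbose".to_string(), "true".to_string());
}

fn main() {
    let mut map: HashMap<String, String> = HashMap::new();
    plugin_entry(&mut map);
    assert_eq!(map.get("verbose").map(String::as_str), Some("true"));
}
```

Today that trait (or a proc macro generating it) must be written by hand for every concrete type; the idea under discussion would derive the vtable from the type's own API.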
TM: we have some limited support, dylib, etc, but has to be the same compiler version

JT: we don't have the ability to pass a hashmap around, must be same copy of std, etc. Can't do separate linking easily.

FK: big part of the problem is features, too, things like `#[cfg(test)]`

TM: so much here. I thought we were going to talk about dyn trait! I also care about dynamic linking and binary size, esp. for embedded, but people have a lot of constraints. Some are fine matching compiler versions. Others want the ability to mix-and-match, requires a stable ABI, definitely different sol'ns for different people, have to figure out which we will tackle first.

FK: I suspect if we make better dylib experience but don't solve `dyn Trait` we won't get anywhere.

TM: I think that's right, it has to be less crappy. You can't monomorphize, so, we're going to have to use `dyn Trait` to make dynamic linking work well, especially if you want independent versioning.

JT: Related item I've wanted for a while, "dyn concrete-type", a way to take a concrete type like hashmap, pretend its surface API was a trait, and pass around a vtable for that. e.g. `dyn HashMap`. Convenient for dynamically loadable plugins, don't have to agree on the standard library, just have to have an agreed upon vtable.

TM: can simulate this with a proc-macro and a trait...

FK: should we have used a trait everywhere?

JT: was right design for static, but I think we should have a way to make traits more automatically, need some way to expand what is "dyn compatible". Already have to solve that...

### TC attempt at goal list

- Enabling low-level applications (Making RfL a success)
- Making Rust's borrow check easier and more powerful
- Helping people extend Rust toward their own domain
- ...

### Niko's whiteboard

- Cloud, lambdas, etc.
- Async/network services
- High scale
- Linear types
- Machine learning
- Proc macros
- Kernel/system dev
- Embedded
- C/legacy code bases, interop
- Smaller binary sizes
- New Rust users
- Graphical user interfaces
- Hot reloading
- SM: "The core infrastructure of the internet is written in Rust"

Things people want to do with Rust (and what works well, what doesn't)

- Writing a Kernel module
- Writing a high-scale, high throughput network service
- Writing a small function to run in FAAS settings
- Writing a common library for use from other languages
- Writing a common library for use across mobile platforms
- Taking a ML model and putting it into production (??)
- Writing IDE language server + other build tooling
- Writing ... some kind of CLI apps? (what kind)
- Writing game engines in Rust
- Writing games in Rust

### Scott's attempts at vision statements

- The core building blocks of the internet are in Rust (not C anymore)
- If you're writing a library you want to be consumed everywhere, you write it in Rust, and seamlessly expose it (to C, Python, Java, etc)
- Rust's type system and validation tooling makes it deservedly part of the trusted computing base

### PLAUSIBLE GOALS

Let's call them "slogans" that tell you what we are doing

- "Seamless interoperability with X, for all X"
    - Why?
      We believe people want to adopt Rust from all kinds of domains with big existing codebases; we need to make it possible for them to do it bit-by-bit
    - If you're writing a library that you want to be consumed everywhere, you write it in Rust (and expose it to C, C++, Python, Java, ...)
    - Technical agenda:
        - ...
- "The borrow checker within"
    - Why? Making the core Rust value prop easier, to ease learning and enable extended use cases
    - Borrow checker that meets the programmer where they already are
    - Might tie in with proof checking somehow
- "More correct by construction" or "Model your domain"
    - Linear types
    - Where clause the ultimate
- Less upfront design required
    - How: Better dyn-capable traits.

Where Clause the Ultimate / integrating cfg into the type/trait system

- "Rust is great for writing portable libraries and applications. However, historically we haven't made it quite as easy to be deliberately non-portable. 'I only run on 64-bit systems', 'I only run on UNIX-like systems'. We want to give you a way to build specialized programs, while still getting the static correctness story of Rust. You'll be able to declare your requirements about the target platform, so that you gain all the power of that platform, and so that consumers of your code know your requirements statically rather than dealing with bespoke compile-time or runtime errors."

Use cases for Where Clause the Ultimate (see the sketch at the end of this section):

* "From `#[cfg]` to trait system"
* Portability and specialization to particular environments
* Being able to pick the "provider" for a generic service (encryption, async, etc)
* Not being limited by the capabilities of non-targets: converting infallibly between usize and u64 because you know you're 64-bit.

How will Rust be different in 3-5 years? (and how will that matter to you)

* Async Rust will feel complete...
* ...and the big systems of the internet will run on Rust
* ...and standing up a small CRUD service is easy
* How?
* You won't use `#[cfg]` to pick between platforms, providers, etc, you'll use the trait system
    * (`where Platform: Windows`)
    * And code for targets other than the current one will still be checked
* You'll be able to seamlessly interoperate with various languages, like C, C++, and Python, without needing specialized tooling
* At the source level, Rust will drop seamlessly into existing codebases, without requiring a massive up-front rework of your build system
* Proc macros will be richer, integrated with compiler and IDE
* When writing unsafe code, you'll be able to run a tool to check if you're following the rules, and you'll be able to statically prove correctness
* I deal with fewer token-level things, using things that feel better integrated
    * `cfg`s are rare
    * most derives use a compile-time reflection system instead of re-parsing the code again

### Breakdown into vision, opportunity, technology

TC: What I'm seeing here is we can break these down into three layers:

- Vision
- Opportunity
- Technology

The vision is the big statement, e.g. SM's "we want to own the infrastructure of the internet". The opportunity is, e.g., "we can make RfL a success". Linux is certainly a big part of the infrastructure of the internet, so it serves that vision. The technology is the set of pieces in support of that, like arbitrary self types, or `asm_goto`, or ways we can expand stable Rust to cover more of the kinds of proofs that RfL wants to make.

OR (maybe a better fit for RfL)

Vision: Kernels should be written in memory-safe languages

Opportunities: Rust-for-Linux has started, we need to help

Technology: arbitrary self types, or `asm_goto`, or ways we can expand stable Rust to cover more of the kinds of proofs that RfL wants to make.
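Returning to the Where Clause the Ultimate use cases above, a minimal sketch of today's shape of the problem (function names are hypothetical): the 64-bit assumption lives in a `#[cfg]`, invisible to callers, and the code for other targets isn't even checked, which is exactly what a `where Platform: ...`-style bound would replace.

```rust
// Today: the platform assumption is a cfg, not part of the signature.
#[cfg(target_pointer_width = "64")]
fn offset_to_index(offset: u64) -> usize {
    offset as usize // lossless only because of the cfg above
}

#[cfg(not(target_pointer_width = "64"))]
fn offset_to_index(offset: u64) -> usize {
    // On other targets the same conversion is fallible and needs a policy.
    usize::try_from(offset).expect("offset does not fit in usize on this target")
}
```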
## Strategic goals rough draft

* "Borrow checker within"
    * Self-referential types
    * Borrowing subsets of fields
    * Using parameter/field/etc identifiers as "lifetimes"
* "Linear types" (and from them, async drop + scoped async tasks)
    * Bringing async closer to sync Rust
* "Doing more with (and without) proc macros" (need to elaborate the technological goals)
* Seamless X Interop for many X (C, C++, Python, ...)
    * Stable-ish ABIs, dynamic loading, proc-macro-like improvements to allow interspersing code
* Scaling the ecosystem (relaxing the orphan rule, support for compile-time or runtime dependency injection, ...)
* Ergonomics initiative redux - `is`/`let`-chains, move-only fields, parameter lists, ergonomic ref count, better lifetime elision, impl Trait everywhere
* Small features that enable a large variety of things - tail-call elimination
* Generators
* Const Rust should feel more like Rust
* Async Rust should feel more like Rust
* dyn Trait usability "less design up front"
    * Dyn Rust should feel more like Rust
* Language changes to enable much faster compilation
    * polymorphization and "dyn all the things"
    * On-demand compilation across crate graphs, don't compile anything you don't actually need, stop needing features just to prune out chunks of a crate
* Pattern and refinement types

{Const, Async, Dyn} Rust should feel like Rust

"Less design up front" ~ felix's nice phrasing

- linear types, TCO, async drop, pattern types
- distributed slice, pluggable implementations (runtime, allocator, float parsing, crypto, ...)
- compile-time reflection (and fewer proc macros)
- better unsafe: better checkers, unsafe fields, better pointer types
- less unsafe: move-only fields, more layout optimizations, alignment niches, ...