
Design meeting meta notes

Goals from this meeting

Our goals for this meeting, in order of priority:

  • Settle the path forward to making an "official" set of lang-team principles.
    • Do we have consensus to do something like this? (Separate from the exact content.)
    • Are there major topics missing from this document that should be included in a hypothetical RFC?
    • Is an RFC the right path forward? (We think an FCP is necessary to evaluate consensus on this; is an RFC the best way to do that?)
    • Is lang-team the right scope, or should it be broader?
  • Brainstorm how/where we can integrate these principles into our processes
    • We have one proposal, "design principle showdown"; are there ideas for others?
  • Feedback on the principles themselves
    • Given time after the above, we can discuss detailed questions about the principles themselves, but as that could easily expand to fill all the allotted time, we suspect that is better left to later meetings, or asynchronous conversation on #t-lang.

Our proposal

We propose to create an RFC outlining the principles and giving details and examples of how they are to be used. We also propose to incorporate the design principles into our lang team design processes, and give one example of how that could be done.

Some key changes from prior efforts:

  • We have focused on the "design principles" that were specific to how Rust itself works and feels to use; we've excluded the community oriented principles, which were half-baked. There may still be merit there, but if we're going to do it, the community deserves a better effort. (And they don't necessarily belong in the same document.)
  • We are proposing to scope this effort as an official language design team effort (i.e., specifying principles for the lang team) as opposed to framing them as (unofficial) "overall Rust principles". We include some discussion points on whether the scope makes sense.

RFC sketch

This section contains a sketch of the RFC content we would expect to include, though it's not formulated in RFC form.

Goal/motivation

A set of high-level principles and priorities and tradeoffs that will

  • help the language team evaluate difficult decisions;
  • provide insight for the broader community into how the language team thinks about Rust design and evolution.

These principles are focused on how the lang team approaches the design of the language itself. Although we expect significant overlap with the guiding principles from other teams, we would rather begin with the language and then consider whether to go for broader consensus.

Rust's goal

Rust's overall goal is to empower everyone to build reliable and efficient software.

The principles we use to achieve it

The following list of principles describes how Rust should feel; they are meant to complete the phrase "Rust empowers by being ...".

How to use the principles:

  • Each aspect of Rust should strongly embody some principle (or multiple principles) and be neutral with respect to the others.
  • If that is truly not possible, we prefer principles earlier in the list, but we strive to make the impact on the other principles as minimal as possible.
  • If a proposed change works strongly against any principle, we should not do it, even if it works strongly for another.

Reliable: "if it compiles, it works"

One of the great things about Rust is the feeling of "if it compiles, it works". This is what allows you to do a giant refactoring and find that the code runs on the first try. It is what lets you deploy code that uses parallelism or other fancy features without exhaustive fuzz testing and worrying about every possible corner case.

Examples that embody reliability:

  • Memory safety
  • Using ? and #[must_use] to highlight errors and make sure they are handled
  • Returning Option or Result instead of panicking
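
To make the last two bullets concrete, here is a minimal sketch (the parse_port helper and its signature are made up for illustration) of how Result, ?, and #[must_use] keep errors visible:

```rust
use std::num::ParseIntError;

// Returning Result instead of panicking forces callers to acknowledge failure;
// #[must_use] warns if the returned value is silently discarded.
#[must_use = "the port may fail to parse; handle the error"]
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn endpoint(host: &str, port: &str) -> Result<String, ParseIntError> {
    // `?` propagates the error upward; dropping it by accident is a type error,
    // not a silent runtime surprise.
    let port = parse_port(port)?;
    Ok(format!("{host}:{port}"))
}
```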

Performant: "idiomatic code runs efficiently"

In Rust, the fastest code is often the most high-level: convenient features like closures, iterators, or async-await map down to code that is at once efficient and which uses minimal memory.

Examples that embody performant:

  • Iterators, futures, and other zero-cost abstractions
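
As a small illustration (hypothetical function, nothing project-specific), this high-level pipeline is the kind of code we expect to compile down to a single tight loop with no intermediate allocations:

```rust
// Closures and iterator adapters are zero-cost abstractions: after inlining,
// this typically optimizes to the same code as a hand-written index loop.
fn sum_of_even_squares(values: &[u64]) -> u64 {
    values
        .iter()
        .filter(|&&x| x % 2 == 0)
        .map(|&x| x * x)
        .sum()
}

fn main() {
    assert_eq!(sum_of_even_squares(&[1, 2, 3, 4]), 20);
}
```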

Transparent: "you can predict and control low-level details"

The translation from Rust to underlying machine code is straightforward and predictable. If needed, you have options to control the low-level details of how your application works.

Examples that embody transparent:

  • No required allocator or stdlib to use Rust
  • Able to choose instruction set or opt into the use of features like SIMD at a fine-grained level
  • Inline assembly
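
For instance, the now-stable asm! macro lets you pin down the exact instructions when the compiler's output isn't what you need; a minimal, x86_64-only sketch:

```rust
#[cfg(target_arch = "x86_64")]
fn add_with_lea(a: u64, b: u64) -> u64 {
    use std::arch::asm;
    let out: u64;
    // SAFETY: the asm block only reads `a` and `b` and writes `out`;
    // it has no other observable effects.
    unsafe {
        asm!("lea {out}, [{a} + {b}]", a = in(reg) a, b = in(reg) b, out = out(reg) out);
    }
    out
}

#[cfg(target_arch = "x86_64")]
fn main() {
    assert_eq!(add_with_lea(2, 3), 5);
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```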

Question: We also use "transparent" to mean "versatile" in some places; basically, "whatever you want to do, you can do it with Rust". For example, requiring all values to be movable is a blow for versatility. Are these distinct principles? Should we have both? Can we say Transparent and Versatile?

Supportive: "the language and tools are here to help"

We strive to make our tools polished and friendly, and we look for every opportunity to guide people towards success. Just as the community eagerly shares its knowledge in a welcoming and inclusive way, we want the language and tools to help people be bold and fearless, counting on the language and tools to support them.

Examples that embody supportive:

  • Requiring type annotations at function level to ensure error messages are predictable
  • Limiting the scope of Type Alias Impl Trait definitions to work well with IDEs

Compositional: "great things that work great together"

Note: We are not sure the best word here! Here are some alternatives we considered, but other suggestions are welcome: "Orthogonal", "Productive", "Interoperable"

The most powerful concepts are ones that make sense on their own, but which can be combined to form new things. Rust language concepts are designed to facilitate code reuse and combination. They are also designed to combine smoothly with one another, applying everywhere and not having arbitrary exceptions or special cases.

Examples that embody compositional:

  • Choice of .await so that it combines with the ? operator and method chaining
  • Coherence rules that guarantee crates can be used together
  • Traits for combinator patterns like Iterator
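
A small sketch of the first bullet (hypothetical fetch function; any error type would do): postfix .await slots directly into the same chain as ? and ordinary method calls:

```rust
async fn fetch(url: &str) -> Result<String, std::io::Error> {
    // Stand-in for a real network call.
    Ok(format!("response from {url}"))
}

async fn first_line(url: &str) -> Result<String, std::io::Error> {
    // await, error propagation, and method chaining compose in one expression.
    Ok(fetch(url).await?.lines().next().unwrap_or("").to_owned())
}
```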

Opinionated: "easy things are easy"

We make it easy to write high-level code that gets the job done. When doing high-level tasks like writing some glue code or a simple script, Rust may not be the best tool for the job, but it's also not the worst.

Examples that embody opinionated:

  • T: Sized by default
  • No copy constructors; Copy is just memcpy
  • Special language types/constructs oriented around error handling, such as Result, ?, #[must_use], etc
  • Permitting main to return Result
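
The last two bullets combine into a pattern we deliberately made easy; a minimal sketch (the config.toml filename is arbitrary):

```rust
use std::error::Error;
use std::fs;

// `main` may return a Result, so even a throwaway script gets `?`-based
// error handling without any ceremony.
fn main() -> Result<(), Box<dyn Error>> {
    let config = fs::read_to_string("config.toml")?;
    println!("read {} bytes of config", config.len());
    Ok(())
}
```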

How we propose to use the principles


Design principle showdown

When we encounter a sticky design question, we can go through the principles one by one and write down the ways that they are impacted by each of the options at play. To make this easier, we could create a template document that can be filled in, which could look something like the following. This document is designed so that all participants can agree to its contents, even if they don't agree on the outcome.

# Design principles analysis

## Option A: Foo bar baz

Describe the option briefly.

### Principles strongly embodied

Which principles are strongly embodied by this choice?

### Neutral principles

Which principles are essentially unaffected by this choice?

### Principles traded off

Which principles are being "traded off"
against other principles by this option?
What can be done to mitigate this impact?

### Principles strongly compromised

Are any principles *strongly compromised* by this option? 
If so, we should find ways to mitigate that impact
so that it is merely a tradeoff.
Ideally, this section should be written by people
who do NOT prefer this option.

## Option B (C, D, ...)

(as above)

## Summary

Uplift the best reason to pick each option,
in the form of "If you believe that X is
more important than Y, you should pick option Z."

## Appendix: the principles

For reference, here is a list of the design principles:

* Reliable: "if it compiles, it works"
* Performant: "idiomatic code runs efficiently"
* Transparent: "you can predict and control low-level details"
* Supportive: "the language and tools are here to help"
* Compositional: "great things that work great together"
* Opinionated: "easy things are easy"

It may also be better to organize the analysis, for each option, by which principles are strongly upheld, which are neutral, and which are strongly worked against.

RFC template extension

Extend the RFC template with a design principle showdown section.

Case studies

This section presents several case studies: some where the principles are well balanced, and some where they are not.

Well balanced: Memory safety, out of bounds memory accesses

The classic Rust tradeoff is memory safety itself. It makes programs reliable, for obvious reasons, but it also helps to ensure they are performant by enabling the compiler to do more aggressive optimization (or it will, once we settle on the right rules).

However, there are limits to what Rust is able to prove statically. A pure focus on reliability would lead to a system that incorporates a full theorem prover, thus eliminating all unsafe code. However, this approach would violate supportive and opinionated: the full generality of such a language would make it very hard for people to get things done.

We mitigate this in two ways:

  • Vector accesses like vec[i] are checked at runtime. This favors reliability over performance. For most applications, the cost is minimal.
  • Despite the minimal cost of the above, for some applications it is significant. We offer the option of unsafe code, which enables raw pointers and unchecked vector accesses. This preserves transparency while still preserving reliability in the large, albeit by making it the responsibility of the user.
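
A sketch of both bullets together (hypothetical functions): the indexed form is checked by default, and unsafe code opts out only where profiling shows the check matters:

```rust
fn sum_checked(v: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..v.len() {
        total += v[i]; // out-of-bounds indices panic instead of reading bad memory
    }
    total
}

fn sum_unchecked(v: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..v.len() {
        // SAFETY: `i` ranges over 0..v.len(), so it is always in bounds.
        total += unsafe { *v.get_unchecked(i) };
    }
    total
}

fn main() {
    let data: [u64; 3] = [1, 2, 3];
    assert_eq!(sum_checked(&data), sum_unchecked(&data));
}
```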

Well balanced: Undefined layouts and repr annotations

We choose to make struct layouts undefined unless users explicitly opt in. Giving the compiler room to optimize layouts helps make programs more performant, but #[repr] annotations preserve transparency. The guideline that #[repr] annotations should never change the semantics of a struct (though they can subset its behavior) preserves reliability.
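
A minimal sketch of this tradeoff: the default layout leaves the compiler free to reorder fields, while #[repr(C)] opts into a fixed, C-compatible layout (the sizes in the comments are typical for 64-bit targets, not guarantees of the default layout):

```rust
// Default repr: the compiler may reorder fields, so this can pack into 8 bytes.
#[allow(dead_code)]
struct Flexible {
    a: u8,
    b: u32,
    c: u8,
}

// repr(C): declaration-order layout, matching the equivalent C struct (12 bytes).
#[repr(C)]
#[allow(dead_code)]
struct CCompatible {
    a: u8,
    b: u32,
    c: u8,
}

fn main() {
    use std::mem::size_of;
    println!("Flexible: {} bytes, CCompatible: {} bytes",
             size_of::<Flexible>(), size_of::<CCompatible>());
}
```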

Well balanced: Error-handling design and "making things look like exceptions"

The ? operator is a classic example of balancing transparent with supportive and opinionated. We want error handling to be easy and for there to be clear patterns (arguably we've got a ways to go here), but we also want people to avoid the failure mode of exceptions, where it's hard to predict what happens in the case of an error. The ? operator is explicit and traceable, preserving transparency, but still short and unintrusive, minimizing the impact on opinionated. It is coupled with strong patterns (e.g., using Result for errors) that permit helpful compiler error messages, preserving supportiveness (and indeed the RFC spent some time talking about the best errors to give when users misuse ?).
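
The "explicit and traceable" property comes from ? being (roughly) a local transformation; a sketch, eliding the Try trait machinery and using a made-up parse helper:

```rust
use std::num::ParseIntError;

fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse()
}

// `let port = parse_port(s)?;` behaves roughly like the match below,
// which is why its control flow is easy to predict by reading the code locally.
fn connect(s: &str) -> Result<u16, ParseIntError> {
    let port = match parse_port(s) {
        Ok(value) => value,
        Err(e) => return Err(e.into()),
    };
    Ok(port)
}
```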

On their own, however, ? and Result violate compositionality. This is why we are (slowly) pursuing a generalization in the form of the Try trait, so that the concept of unpacking a result into a "normal" and "abrupt" form can be generalized.

(An argument for try fn, which we will not go into here, is that we are insufficiently opinionated with ?, and too transparent, requiring too much generic boilerplate such as Ok-wrapping.)

Poorly balanced: Allocation and copy vs clone (also an anti-pattern)

Current Rust rules draw a sharp distinction between Copy values, which can be implicitly copied just by copying bytes from one place to another, and Clone values, which must run custom logic. This rule was meant to ensure that Rust programs are transparent: given a function call like foo(a), there is no implicit execution of arbitrary code to "clone" a. This ensures (among other things) that allocations are marked with some kind of explicit syntax.

However, this rule arguably makes Rust programs less reliable and performant:

  • Less reliable because cloning a value can mean very different things depending on that value's type:
    • If I clone a Vec<u32>, I get a fresh vector that is independent from the original. Vectors (and all deeply owned types) thus operate as values, same as a u32 but with a more expensive clone.
    • If I clone an Arc<Mutex<Vec<u32>>>, I get a second handle to the same underlying resource, and mutations on one handle are visible to the other.
    • As a result, if I do let y = x.clone(), I cannot know the extent to which x and y are "interlinked" without knowing the details of their type and implementation.
  • Less performant because our design makes copying large amounts of data more ergonomic than reference counting or having small allocations:
    • Cloning an Rc<[u32; 10*1024]> requires writing x.clone(), but implicitly copying the same array not wrapped in Rc requires no syntax at all.
    • Introducing a Box into an enum can make it radically smaller and make your code a lot faster. We don't do the "optimal thing" by default in cases like this, favoring transparency instead (of course, the "optimal thing" is very difficult or even impossible to determine in general).
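
A sketch of the reliability bullet above: the call site looks identical, but the meaning of .clone() differs completely between these two types:

```rust
use std::sync::{Arc, Mutex};

fn main() {
    // Cloning a Vec yields an independent value.
    let v: Vec<u32> = vec![1, 2, 3];
    let mut v2 = v.clone();
    v2.push(4);
    assert_eq!(v.len(), 3); // the original is untouched

    // Cloning an Arc<Mutex<...>> yields a second handle to the same data.
    let shared = Arc::new(Mutex::new(vec![1, 2, 3]));
    let handle = Arc::clone(&shared);
    handle.lock().unwrap().push(4);
    assert_eq!(shared.lock().unwrap().len(), 4); // mutation is visible through both
}
```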

Poorly balanced: All types require move and all values are affine

At least with its builtin rules, Rust requires that all values (1) can be moved from place to place; (2) can be dropped; and (3) can be forgotten. C++, in contrast, does not work this way: a value never moves; instead, one value can "take its value" from another. This is an opinionated choice for Rust, and it can be very helpful when writing code; however, it violates transparency, because there are low-level details that Rust doesn't let you control. This has led us to the idea of pinning, which remains an awkward and poorly integrated element of Rust.
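
A tiny sketch of assumptions (1) and (3): any value can be moved byte-for-byte, and any value can be leaked without its destructor running, with no way for a type to opt out:

```rust
fn main() {
    let s = String::from("hello");
    let moved = s;           // (1) every value can be moved to a new location
    std::mem::forget(moved); // (3) ...and forgotten, so its destructor never runs
}                            // (2) anything not forgotten is dropped here
```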

Poorly balanced: Sync and async functions do not have the same capabilities

We've violated compositionality because sync functions have the ability to invoke a closure and then guarantee that they can take action after that code terminates, even in the event of panics or forgets, but async closures lack that capability (we don't guarantee poll-to-completion). This in turn hinders our ability to integrate rayon-style parallelism into async.
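
A sketch of the sync-side capability (the with_cleanup helper is made up): a drop guard lets a synchronous function run code after a closure finishes, even if the closure panics; an async fn cannot offer the analogous guarantee because its future may be dropped or forgotten before completing:

```rust
fn with_cleanup<R>(f: impl FnOnce() -> R) -> R {
    struct Guard;
    impl Drop for Guard {
        fn drop(&mut self) {
            // Runs when `with_cleanup` returns normally *or* unwinds from a panic.
            println!("cleanup");
        }
    }
    let _guard = Guard;
    f()
}

fn main() {
    let answer = with_cleanup(|| 41 + 1);
    println!("{answer}");
}
```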

Unclear: auto traits

We decided early on to make Send and Sync into auto traits. This upholds performant, opinionated, and arguably compositional by encouraging Rust users to adopt strict ownership patterns that, in turn, support parallelization. It trades off reliability by making semver decisions more complex; in principle this can be mitigated through supportive tooling, but the Rust org has never invested in such tooling (there are third-party options out there). It also trades off transparency, such as in the rules for how Send flows through function types without always being mentioned.
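
A sketch of the semver hazard: Send/Sync are inferred from a type's (possibly private) fields, so an internal change can silently remove them from a public type:

```rust
use std::rc::Rc;

pub struct Widget {
    pub name: String, // String is Send + Sync, so Widget is too -- automatically
}

pub struct Widget2 {
    pub name: Rc<String>, // swapping in Rc quietly makes the type !Send and !Sync
}

fn assert_send<T: Send>() {}

fn main() {
    assert_send::<Widget>();
    // assert_send::<Widget2>(); // would not compile: downstream users break
}
```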

Other examples

Here are some other interesting examples of specific tradeoffs we have encountered.

  • Poisoning and mutex guards: the current design favors reliability and transparency, but Niko would argue that it is insufficiently opinionated (lock on a Mutex<T> should just return a guard and panic in the case of poisoning). Others might argue that the cost to performance is too high and the benefits to reliability too low. This is an example that is in accordance with the ordering of the principles, but still feels "suboptimal" because its overall impact is too high.
  • Floating point: How much do we unify across platforms vs let them diverge?
  • Temporary lifetimes: Challenge of ease-of-use vs reliability
  • Async boxing: Explicit Boxing::new (transparent) vs opinionated (see these blog posts)

Rules of thumb

The following are examples of "rules of thumb" that we follow to help ensure that we are upholding the principles (and sometimes to help us maintain other constraints that may not be directly captured by the principles). They are not as general, but they're quite useful. We're writing them down to capture them as documentation and reference.

  • Innovation tokens. Rust doesn't necessarily stick to precedent, but we have to be careful how far we diverge. We can do things that are unfamiliar to people, but that's a high cost and needs to have a correspondingly high benefit. (People also sometimes call this "weirdness budget".)
  • Local reasoning. You should generally be able to understand code based on what you see close to where you're editing or reading; the more context you need, the harder it will be for code to be reliable and transparent, and the harder it is for the language or tools to be supportive. (Tools can make up for this but we don't want to force people to use those tools, and we don't want to make tools harder to build.) This is related to what people mean when they say they want something to be "explicit". See also Aaron's classic formulation of balancing "applicability", "power", and "context dependence".
  • Don't require tools to understand code. Code appears in many places without the support of cross-references, syntax highlighting, and other advanced features. For instance, people read code in git diff, in editors without IDE features configured, in emails/documents, and on web pages. We don't tend to incorporate language features that rely heavily on such features to write, read, or understand.
  • Don't create "dialects" of Rust. We want Rust to be understandable across the ecosystem and for Rust code to be copy-and-pastable between Rust projects to the extent possible without changing what it means. This emphasizes orthogonality / compositionality.

Rough sketch of a rule of thumb, which Niko doesn't fully agree with yet:

  • Don't make something look like a familiar concept if that concept may lead people to make incorrect assumptions that will lead to important mistakes.
    • Imperfect models can be useful, and this isn't a hard-and-fast rule; it matters whether the incorrect assumptions will lead to important mistakes or just unimportant misconceptions.

Frequently asked questions

Decisions we made and the rationale behind them.

What about community-oriented principles?

This list focuses on Rust's language design; community principles or broader Rust project principles would be a much broader document, and we don't think that should be combined with these aspects of language design. Among other things, the language design principles are designed to explain tradeoffs we make, and community principles might have similar tradeoffs, but we don't want to approach potential tradeoffs between the two kinds of principles in quite the same way.

What about stability without stagnation?

There are other kinds of principles that Rust has evinced that have more to do with the way we operate. These are good to talk about, and they interact with the design principles, but they are not the same thing:

  • Open, participatory discussions
  • Bold and ambitious
  • Stability without stagnation?
  • Work iteratively?
  • Find the MVP we can all agree on?
  • Don't settle for satisficing solutions, push for ways to satisfy.
  • Begin with the building blocks or the end abstraction?
  • Learn from other languages, do things similar if we don't have a good reason to do otherwise, do things different if the benefits are worth the innovation tokens

Why is ergonomic not on the list?

Begs the question: ergonomic for what? Opinionated ("easy things are easy") covers the case of making simple programming tasks straightforward, but for complex software meant to last a long time, ergonomics involves reliability, performance, etc.

Why is "familiar" (e.g. from other languages) not on the list?

Rust doesn't seem to place existing precedent that high on the list. We don't intentionally reinvent the wheel, but we are willing to do something different when it's worth it (e.g., Rust's syntax is broadly C-inspired, yet we write .await rather than await x).

Questions for discussion in this meeting

Should we start with this as a lang document?

It's useful to focus on lang to keep things moving, but a lot of the principles (e.g., supportive, orthogonal) are more cross-cutting, and indicate areas where the language design works in concert with the stdlib and tooling to achieve the desired effect. Maybe it would be better to frame this as a whole project effort from the start?

An interesting question to consider is how we would go about modifying the principles: suppose that we adopted them and then found that something didn't seem right, or that there was a missing principle. What would we do, and how would it play out differently if this were a lang-team vs. whole-project document?

What kind of language would we get if we reordered the principles?

Worth checking our assumptions, and exploring what kinds of language design we would get by having different priorities.

What other exercises could we do to validate the principles?

How can we validate the existing set of principles and check for exhaustiveness? Possible examples:

  • Do a survey asking people to highlight cases they think work well or work against the orderings, or which are not covered by the principles.
  • Do an exhaustive comparison of each principle against one another, demonstrating that the total ordering is largely valid.
  • We can't, let's just use them and see how it goes.

Other ideas? Do those things seem worth doing?

Do we have the right model? e.g., Ordering vs "Goldilocks"?

The principles are based on the idea of ordered tenets, as practiced at Amazon and other companies. It's possible that a "total ordering" isn't the right model. Another interesting one is the "goldilocks" model, where for each principle we list what it means to have "too little", "just right", and "too much". This seems like a useful exercise regardless. To avoid wasting meeting time, let's avoid discussing this unless anyone wants to actively argue AGAINST ordered tenets. In other words, adding goldilocks-style analysis seems like a "pure win", but it also doesn't contradict ordering, which also adds value and is easy to understand.

Here are some examples of going "too far". In these cases, what makes each of them "too much" is that they go beyond "trading off" another principle to "completely disregarding it":

  • "too reliable": Making an exceptionally complex type-system feature to make it impossible for something to fail at runtime, vs handling errors at runtime.
  • "too versatile": There are platforms that can't guarantee basic things we want to guarantee everywhere

This is an interesting example where it's not clear if the problem is that it violates some other principle. This may indicate a missing principle (long-term maintenance):

  • "too transparent": We don't guarantee exactly what syscalls we use, though people writing seccomp filters might want that.

Is the set of principles correct? Anything missing?

Some interesting points to discuss:

  • The transparent principle kind of does "double duty" as versatile, where versatile means "whatever you want to do, you can do it with Rust". There doesn't seem to be much tension between the two, as having the ability to control low-level details leads to versatility (unless of course what you want to do is not mess around with low-level details, but that's opinionated). And yet the names sound very different; how do we reconcile this? And is transparency the only factor required for versatility?
  • Where does maintenance over time fit in, or semver considerations? Are they reliability? They seem distinct, as there is a kind of tension between detecting errors and long term maintenance and stability, and we tend to favor the former (as evidenced by e.g. exhaustive matching on enums by default and the #[non_exhaustive] attribute).
  • What about "simple" or "easy"? We're packaging that as opinionated, and that is part of it, but it's arguably too narrow. (Simplicity, of course, is very much in the eye of the beholder.) Are there other aspects of "easy to use" that don't fall under "opinionated"? ("hard to misuse" falls under "supportive" or "reliable")

How else can we integrate the principles into our lang process?

We make two suggestions, tradeoff documents and modifying the RFC template. Are there other ideas?

Meeting discussion

Add topics here, following the template below! We've left some placeholders for you. Before using the last one, add some more so people can easily edit in parallel.

Meeting etiquette: Please avoid typing answers to people's questions, unless you are simply clarifying a point of confusion, or asking them to clarify their question. Keep substantive discussion for the meeting so that everyone can participate. We add notes during the meeting with the major points and comments raised.

How do I add a question?

Ferris: Use this format! Make a new ## section, summarize your question, and put your name at the front.

Consensus?

Niko: Do we have consensus to do something like this?

scottmcm: Would love to have something like this.

One way I've phrased this in the past is

When I'm next asked to ✅ an FCP or leave a concern, what's the rule that you want me to be using to decide whether a sugar is ok?
~ https://internals.rust-lang.org/t/idea-implied-enum-types/18349/41?u=scottmcm

Similarly, having these to help people make arguments more productively on RFCs would be a big help. I liked the note below that it would be possible to write a doc that everyone could agree reflected how the feature impacts the principles, even if those people potentially disagree on whether those impacts are acceptable or not.

"straightforward"?

scottmcm: the translation to machine code is described as "straightforward", but we have a bunch of things where the optimized machine code translation is anything but, like in optimized iterator loops. Is that ok? Could we find another word that fits better?

scottmcm: (this is slightly weeds, so could be moved down)

"weirdness budget" / "innovation tokens"

Yosh: I've never really loved this framing because it feels like it is only ever employed as a negative to state why something should not happen - but never as a reason for why something ought to happen. E.g. I've never heard: "Hey we have budgeted for weirdness on this feature, we get to be weirder about it if we want to.". The "budget" or "tokens" are always something we are short of, running out of, and that's stated as a reason to not do things. We don't really know how many tokens we have, just that we don't have enough of them.

Yosh: In a way I feel like the concept of "innovation tokens" or "weirdness budget" describe an inherent conservative force which does not directly translate to actionable properties. The least weird way of doing things will always be to preserve the status quo. The smallest change is always to keep things as they are. We know this is the case, and it will always be brought up as a concern. Because of that I don't think this should be elevated to the status of "value". Instead it seems more useful to discuss proposals based on whether we have a good story to teach them, whether they're internally consistent with the language, whether they're inherent / orthogonal, etc.

Josh: Does the emphasis on "innovation tokens" as the primary phrasing help there?

Yosh: Not really, it's inherent to the premise of it I think. Though it does give it a less negative spin.

Yosh: That said, I do think there is something there that we need to do justice to exploring the obvious, conventional solution. This feels maybe like: "make sure to test the null hypothesis" or "if you do something different, it needs to be justified". Not quite about budget and stuff we run out of - but more about due diligence, teaching, etc? Are there different values we could be swapping in here to capture the same essence of what this tries to get to?

scottmcm: Office used to have a saying of "all change is bad unless it's great" (or something like that), which I think is aiming at the same sort of thing.

scottmcm: Personally I'm not a fan of it on its own, as "weirdness" is very history-dependent. Braces are either entirely normal or very weird, depending on background.

Defaults are the reasonable ones, not necessarily the fast ones

scottmcm: One thing we're usually good at is making the "default" version the one that behaves reasonably, and having foo_somesuffix be the version that trades off understandability for (potential) speed. Should I think of that as "reliable" or "opinionated"? Those choices have different relative orderings to "performant", which is why I'm trying to figure out how I should think about it.

Case Study: Type Inference

scottmcm: I think type inference is mostly considered a good choice nowadays, though https://graydon2.dreamwidth.org/307291.html recently suggested Graydon would have preferred something else. But if I look through these principles, I have a hard time justifying it: it's clearly good for opinionated, but that's the last principle on the list, and it's not necessarily good for anything else; many would consider it worse on those counts. Is there something missing that would cover it? At least outside of some of the worst stability gotchas we have with it, like people using x.as_ref() when they should use &x.

(Question inspired by the "inferred enums" threads that keep popping up on IRLO, for something like send_request(.Get, url) as opposed to needing use crate::foo::HttpMethod; send_request(HttpMethod::Get, url).)

nikomatsakis: lol

Transparent examples

pnkfelix: The transparency examples in the doc strike me as 1. potentially niche (not used by a massive percentage of the Rust audience) and 2. not illustrative of how translation in Rust strives to be transparent (in the sense that the translations are straightforward and predictable). However, reading further into the doc, I thought the example of ? is a good example of transparency, in the sense that it is easy to understand the effect of <expr>? on the code, via a local transformation. (Locality is not a necessary precondition for transparent translation, but it certainly helps.)

Great principles say what they're giving up

TC: Principles that sound "too positive" don't always provide great guidance. Think of a company principle like "we're innovative". Who doesn't want to be that? But it doesn't guide behavior as much as a principle like "move fast and break things" (setting aside whatever else you may think about that). Move fast and break things guides decisions. That principle will be used in the organization to make decisions in a way that an unbalanced principle will not.

Niko: This is expressed in the tension between the principles, rather than directly in the principles themselves.

"Transparent" and "controlling low-level details" are separate

TC: There's a meaningful difference between a language being transparent and giving control of low-level details. For example, async has a lot of magic, and we've recently discussed other features that may have low-level compiler magic, but they're in the service of giving fine-grained control.

"Expressiveness" may be a good principle

TC: Expressiveness may better capture the idea of control of low-level details. A maximally expressive language is one where the programmer can get the exact runtime behavior and memory layout wanted. In Rust, that tends to mean that we want as much of that to be possible in safe code as possible. Expressiveness also means that you can build higher-level abstractions that still achieve your low-level vision of the behavior.

The trap of truisms

One thing to watch out for is that we don't end up saying: "we want good things because they're good" or similar. I think this may be the same point TC mentions above about "great principles are about what they're giving up" - but from the opposite angle?

Rules of thumb should, if possible, be children of one or more principles

pnkfelix: Reading the "local reasoning" rule of thumb, I said to myself "this sounds like a special case of why transparency is important." (But perhaps that claim itself is not well founded.) If my claim is right, that leads me to wonder: do all of the rules of thumb actually have grounding in principles? (Perhaps we have not yet established some missing principle that would be the appropriate parent for a given rule of thumb.)

Missing principle: Stable

TC: We probably think of this as going without saying, but we should comment on how we think about stability as a principle. It's a cultural virtue that everyone in this room shares, and it captures a philosophical notion that is broader than a particular set of rules about what we can or cannot break. In the Linux kernel, for example, it's famously one of Linus' guiding principles: "don't break userspace!"

The relationship of lang values and the "drawbacks" section in the RFC template

Yosh: Writing it here because we closed the topic in the discussion. But the RFC template currently has a drawbacks section which is left suspiciously open-ended. Could that potentially be a place where we instead ask people not to list drawbacks, but tradeoffs? What is gained? What is lost? Which values does this design value over others?
