# Criteria For Inclusion in `std`

**NOTE**: this is not yet the official position of the libs team; it is a draft document being created based on discussions in https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Criteria.20for.20inclusion.20in.20std, with the hope of becoming an official document in the future.

## Rough Notes From the Discussion

### Reasons against inclusion

- We don't typically put things in std if there's any reason to want a different implementation rather than just making one implementation better. (We do make choices in std, but if there's a set of huge use cases and our choices would make us serve only one, or only part, of them, we'd hesitate to add something.)
- We don't add things whose stability we're not confident in, since we can't take interfaces back.
  - Corollary: we don't tend to add cryptography, cryptographically secure random numbers, or other things where algorithms may need to change and be deprecated in the future.
- We do like to add interfaces for which there's a strong benefit to only having one, such as common traits.
  - The `Error` trait vs. the `Fail` trait showed the cost of having competing error traits: it causes ecosystem fractures.
- We prefer interfaces we can make portable. We try not to make portable interfaces that fit one target better than the others.
- Excessive complexity.
  - The Needle API was retracted because we did not have the expertise to finish it or maintain it.
- Interfaces for operations that are otherwise error-prone to implement.

### Reasons for inclusion

- Functionality that is irreplaceable in terms of alternatives, has no potential for breaking changes in the future, and applies to a wide range of uses is generally a candidate for std inclusion.
- Whether or not it's an API we'd particularly like to use in learning materials and examples.
  - This was used to convince the last few people that scoped threads should be in std and not a crate, since they significantly simplify examples showing off the power of Rust's thread + memory safety and remove the need for wrapping things in `Arc`s (see the sketch at the end of these notes).
- I feel like there should be something about unsafe patterns that people often implement incorrectly by hand. This was part of my argument for stabilizing `Vec::spare_capacity_mut` (sketched at the end of these notes), but I'm not sure whether it should apply to providing `bytemuck`-like APIs in `core`, or `memoffset`. The latter is one of the leading causes of UB reachable by test suites (inarguably the leading one if you ignore aliasing).

### Precedent

- rand in std is unlikely, due to the API complexity concern, the stability-confidence concern, and the cryptographic-deprecation risk.
- We have precedent for deprecating specific algorithms, e.g. for sorting and hashing. The question is how best to provide algorithm-agnostic interfaces, and how to enforce that a specific algorithm doesn't ossify (see the `BuildHasher` sketch at the end of these notes).

### Non-Precedent

- mpsc was added but would likely not make the cut under the current criteria, primarily due to the diverse-API-needs concern.

### All of Scott's Commentary...

I think `core` is actually easier, because it has more foundational restrictions and doesn't have (imho bad) precedent like `mpsc`. Though I don't think there will ever be a hard "fill out this worksheet" style rule.

- Methods are generally fine, so long as they're providing in some sense a new capability. That does include exposing to safe code a subset of something that otherwise needs unsafe code.
  But importantly, it usually doesn't mean arbitrary combinations of other methods -- one common case that comes up is that `.blah_with(Default::default)` is considered fine, and not a reason to add `.blah_default()` (see the sketch at the end of these notes).
- Traits are generally discouraged, and tend to need RFCs, because anything that's an extension point needs a much larger discussion. Most of the traits in `core` are ones with integration in the language -- that covers `ops::*`, `Copy`, `Sync` (which `static`s care about), `(Into)Iterator`, and more. Then there's the set of common extension points where everything using the same one is important -- multiple `Clone` and `Hash` traits would be a royal pain, for example. There are also a few unusual ones, like `iter::Product`, which I'm not sure would be accepted today if they didn't already exist.
- Provided trait methods are somewhere in the middle, since they affect more things and so need more oversight than inherent methods do. They're generally tolerable on rarer traits like `Hasher`, but can cause trouble on pervasive things like `Ord`.
- Types seem to be far more nuanced, so I don't have a great set of guidelines for them. They [seem](https://github.com/rust-lang/rust/pull/95485#issuecomment-1118134823) preferred over traits, where possible. And they're a good way to avoid primitive obsession in various places, especially in trait impls -- `impl Termination for u8` is less clear than `impl Termination for ExitCode`, for example (see the sketch at the end of these notes). (That also plays more nicely with the current limitations of stability checking for `impl`s.) But if a method needs a new type to encode its return properly, that seems somewhat discouraged. I like [`zip_longest`](https://docs.rs/itertools/latest/itertools/trait.Itertools.html#method.zip_longest), but it needs a new enum for the item type of the iterator (sketched at the end of these notes), and that makes me suspect it wouldn't be accepted in `core`. Notably, any time there's a new type in `core`, the full set of traits it should have, the invariants it offers, the safety consequences, etc. need to be considered. (A method, by contrast, doesn't raise most of those questions.) A type can also be worth it if there's going to be a bunch of related methods, rather than needing longer names and a much longer rustdoc page on another type.
- And of course `core` is also the place for things that need backend integration to work best. `count_ones` is much better when it can use the specialized instructions rather than forcing the backend to detect that that's what your loop is doing (see the sketch at the end of these notes).
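### Illustrative sketches

On the scoped-threads point under "Reasons for inclusion": `std::thread::scope` lets worker threads borrow local data directly, where the pre-scoped-threads version of the same example needs `Arc` (and often `Mutex`) wrappers just to satisfy the `'static` bounds on `thread::spawn`. A minimal sketch:

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3, 4, 5, 6];
    let (left, right) = data.split_at(data.len() / 2);

    // The scope guarantees both threads finish before `data` goes away,
    // so the closures can borrow `left` and `right` directly -- no `Arc`.
    let (a, b) = thread::scope(|s| {
        let t1 = s.spawn(|| left.iter().sum::<i32>());
        let t2 = s.spawn(|| right.iter().sum::<i32>());
        (t1.join().unwrap(), t2.join().unwrap())
    });

    assert_eq!(a + b, 21);
}
```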
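On the "unsafe patterns people implement incorrectly by hand" point: `Vec::spare_capacity_mut` hands out the uninitialized tail of a `Vec` as `&mut [MaybeUninit<T>]`, so callers no longer have to conjure that slice from raw pointers themselves; only the final `set_len` commit stays unsafe. A minimal sketch:

```rust
use std::mem::MaybeUninit;

/// Appends `n` zero bytes without an intermediate buffer and without
/// hand-rolled pointer arithmetic over uninitialized memory.
fn push_zeros(v: &mut Vec<u8>, n: usize) {
    v.reserve(n);
    let spare: &mut [MaybeUninit<u8>] = v.spare_capacity_mut();
    for slot in &mut spare[..n] {
        slot.write(0);
    }
    // SAFETY: `reserve` guaranteed room for `n` more elements, and the
    // first `n` spare slots were just initialized above.
    unsafe { v.set_len(v.len() + n) };
}

fn main() {
    let mut v = vec![1u8, 2, 3];
    push_zeros(&mut v, 4);
    assert_eq!(v, [1, 2, 3, 0, 0, 0, 0]);
}
```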
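On the "algorithm-agnostic interfaces" question under "Precedent": hashing already works this way in std, since `HashMap` is parameterized over a `BuildHasher` instead of hard-coding the default SipHash. A sketch with a toy FNV-1a hasher (chosen only to keep the code short, not as a recommendation):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// A deliberately simple FNV-1a hasher, just to show that the algorithm is
// swappable behind the `Hasher`/`BuildHasher` seam.
struct Fnv1a(u64);

impl Default for Fnv1a {
    fn default() -> Self {
        Fnv1a(0xcbf2_9ce4_8422_2325) // FNV-1a offset basis
    }
}

impl Hasher for Fnv1a {
    fn finish(&self) -> u64 {
        self.0
    }

    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 ^= u64::from(b);
            self.0 = self.0.wrapping_mul(0x0000_0100_0000_01b3); // FNV prime
        }
    }
}

fn main() {
    // Same `HashMap` API, different hash algorithm.
    let mut map: HashMap<&str, i32, BuildHasherDefault<Fnv1a>> = HashMap::default();
    map.insert("answer", 42);
    assert_eq!(map["answer"], 42);
}
```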
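On `.blah_with(Default::default)` versus `.blah_default()`: the composed form already reads fine, which is why the general `_with` method is usually considered enough. A small example using `Option::get_or_insert_with`:

```rust
fn main() {
    let mut cached: Option<Vec<u8>> = None;

    // The general `_with` method composed with `Default::default` already works;
    // a dedicated `.get_or_insert_default()`-style shortcut would mostly be a
    // second name for the same thing.
    let buf = cached.get_or_insert_with(Default::default);
    buf.push(42);

    assert_eq!(cached, Some(vec![42]));
}
```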
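On avoiding primitive obsession in trait impls: at the use site, `impl Termination for ExitCode` makes the meaning of `main`'s return value explicit where a bare `u8` would not. For example:

```rust
use std::process::ExitCode;

fn main() -> ExitCode {
    // Stand-in for real work; imagine a config check that failed.
    let config_ok = false;

    if config_ok {
        ExitCode::SUCCESS
    } else {
        // Clearly an exit code, not just a number that happens to be returned.
        ExitCode::from(2)
    }
}
```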
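On `zip_longest` needing a new type: its items can take three shapes, so the adapter needs roughly the following enum (itertools calls it `EitherOrBoth`), and that whole type -- with all of its trait, invariant, and naming questions -- is what would have to come along into `core`:

```rust
// Roughly the item type a `zip_longest` adapter needs.
#[derive(Debug)]
enum EitherOrBoth<A, B> {
    Both(A, B), // both iterators still had an element
    Left(A),    // only the left iterator did
    Right(B),   // only the right iterator did
}

fn main() {
    // The three shapes an item can take:
    let examples: [EitherOrBoth<i32, &str>; 3] = [
        EitherOrBoth::Both(1, "a"),
        EitherOrBoth::Left(2),
        EitherOrBoth::Right("b"),
    ];
    println!("{examples:?}");
}
```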
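On backend integration: `u32::count_ones` can lower to a single population-count instruction, while the equivalent hand-written loop is only that fast if the backend happens to recognize the pattern:

```rust
// The loop the backend would have to pattern-match to reach the codegen
// that `count_ones` gets directly from its intrinsic.
fn popcount_by_hand(mut x: u32) -> u32 {
    let mut ones = 0;
    while x != 0 {
        ones += x & 1;
        x >>= 1;
    }
    ones
}

fn main() {
    let x: u32 = 0b1011_0110;
    assert_eq!(popcount_by_hand(x), x.count_ones());
    assert_eq!(x.count_ones(), 5);
}
```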