---
date: 2025-08-19
url: https://hackmd.io/Se_GOZXdTba42UUwUomERg
---
# Libs-API Meeting 2025-08-19
###### tags: `Libs Meetings` `Minutes`
**Meeting Link**: https://meet.jit.si/rust-libs-meeting-crxoz2at8hiccp7b3ixf89qgxfymlbwr
**Attendees**: Amanieu, The 8472, David, Josh, Scott, Chris Denton, TC
## Agenda
- Triage
- https://github.com/rust-lang/rfcs/pull/3844
- Anything else?
## Triage
### FCPs
15 rust-lang/rust T-libs-api FCPs
- merge rust.tf/80437 *Tracking Issue for \`box\_into\_inner\`* - (1 checkboxes left)
- merge rust.tf/140808 *Implement Default for &Option* - (3 checkboxes left)
- merge rust.tf/143191 *Stabilize \`rwlock\_downgrade\` library feature* - (3 checkboxes left)
- merge rust.tf/106418 *Implement \`PartialOrd\` and \`Ord\` for \`Discriminant\`* - (2 checkboxes left)
- merge rust.tf/65816 *Tracking issue for \`vec\_into\_raw\_parts\`* - (3 checkboxes left)
- merge rust.tf/132968 *Tracking Issue for \`NonZero\<u\*\>::div\_ceil\`* - (3 checkboxes left)
- merge rust.tf/116258 *Tracking Issue for explicit\-endian String::from\_utf16* - (2 checkboxes left)
- merge rust.tf/139087 *Fallback \`{float}\` to \`f32\` when \`f32: From\<{float}\>\` and add \`impl From\<f16\> for f32\`* - (5 checkboxes left)
- merge rust.tf/129333 *Tracking Issue for \`lazy\_get\`* - (3 checkboxes left)
- merge rust.tf/127213 *Tracking Issue for AVX512\_FP16 intrinsics* - (3 checkboxes left)
- merge rust.tf/136306 *Tracking Issue for NEON fp16 intrinsics* - (3 checkboxes left)
- merge rust.tf/141994 *add Iterator::contains* - (3 checkboxes left)
- merge rust.tf/144494 *Partial\-stabilize the basics from \`bigint\_helper\_methods\`* - (3 checkboxes left)
- merge rust.tf/144091 *Stabilize \`new\_zeroed\_alloc\`* - (3 checkboxes left)
- merge rust.tf/63569 *Tracking issue for \`#!\[feature(maybe\_uninit\_slice)\]\`* - (3 checkboxes left)
[BurntSushi (13)](https://rfcbot.rs/fcp/BurntSushi), [nikomatsakis (2)](https://rfcbot.rs/fcp/nikomatsakis), [dtolnay (1)](https://rfcbot.rs/fcp/dtolnay), [m-ou-se (13)](https://rfcbot.rs/fcp/m-ou-se), [scottmcm (1)](https://rfcbot.rs/fcp/scottmcm), [joshtriplett (10)](https://rfcbot.rs/fcp/joshtriplett), [jackh726 (1)](https://rfcbot.rs/fcp/jackh726), [Amanieu (2)](https://rfcbot.rs/fcp/Amanieu)
### (nominated) rust.tf/libs587 *ACP: \`try\_exact\_div\` method on \`NonZero\<{integer}\>\`*
We've asked for feedback; we'll unnominate until we receive it.
### (nominated) rust.tf/141727 *Tracking Issue for NUL\-terminated file names with \`#\[track\_caller\]\`*
We'll propose libs-api FCP on the tracking issue. We'll ask for a lang nomination on the stabilization PR as there seems to be a new language guarantee here and this is the first time that it would be exposed.
### (nominated) rust.tf/145328 `P-critical` *\`pin!()\` has incorrect/unexpected drop order inside if\-let.*
We'll defer to lang on this and unnominate it.
### (nominated) rust.tf/145342 `P-lang-drag-0` *fix drop scope for \`super let\` bindings within \`if let\`*
We'll defer to lang on this and unnominate it.
### (vibe check) rust.tf/rfc3844 *Next-Gen `transmute`*
Rendered: <https://github.com/scottmcm/rfcs/blob/newt_level_transfiguration/text/3844-next-gen-transmute.md>
scott: `[[T; 2]; N]` <-> `[[T; N]; 2]`
The8472: `const fn fun_transmute::<T, U>() -> Option<fn(T, U)>`
alternative for users, handrolled const-if: https://github.com/rust-lang/rust/issues/122301
TC: It's maybe a bit prettier with RPIT:
```rust!
const fn divide<const D: u64>(n: u64) -> u64 {
    const { assert!(D > 0) }; // <--- Post-mono assertion.
    n / D
}

// Note that we can't make this a const function yet due to the fact
// that we can't return a const closure.
fn divide_checked<const D: u64>(n: u64) -> Option<u64> {
    const fn const_if<const D: u64>() -> impl (Fn(u64) -> Option<u64>) + 'static {
        match D {
            0 => |_| None,
            1.. => |n| Some(divide::<D>(n)),
        }
    }
    (const { const_if::<D>() })(n)
}
```
The outcome of the discussion is that scottmcm will proceed on a lang experiment to add `union_transmute` (under some name) in nightly, along with a separately named `transmute` with the post-mono check, to work out the compiler bits and the lints and so forth.
### (nominated) rust.tf/145608 *Prevent downstream impl DerefMut for Pin*
### (checkbox reminder) rust.tf/144494 *Partial-stabilize the basics from bigint_helper_methods*
(Not asking for discussion, but one more [checkbox](https://github.com/rust-lang/rust/pull/144494#issuecomment-3133172248) would get this to FCP)
### (waiting on team) rust.tf/139087 *Fallback \`{float}\` to \`f32\` when \`f32: From\<{float}\>\` and add \`impl From\<f16\> for f32\`*
### (new change proposal) rust.tf/libs637 *ACP: Add \`Bound::copied\`*
Amanieu: We have a `cloned` method on `Bound`. This adds the `copied` method. I'm inclined to accept this, we've done this for other cloned methods.
Josh: Seems reasonable, it's just a 1-1 clone matching copy.
(broad agreement)
### (new change proposal) rust.tf/libs635 *ACP: \`Thread::os\_id\`*
Josh: Yes please.
Chris: Have we talked about this before?
Amanieu: Yes. We want this, we don't know what API to give this.
Josh: It has vaguely the shape proposed. Is there any platform where this is not sufficient?
The 8472: There could be a future OS that could use e.g. UUIDs.
Josh: If an entirely new OS ever arises that uses a strategy for thread IDs not based on u64, it feels like we could add a target-specific API for that platform at that point.
Amanieu: Does Fuchsia have thread IDs?
Josh: It has process ID, I'm not sure about threads.
Amanieu: I don't think it even has process IDs.
Chris: Aren't they kernel object IDs? Same as any other objects in the kernel?
Amanieu: API-wise, should we make this an OS-specific extension trait?
Josh: I think we have that extension trait already. I think there's an extension trait that gets you the pid_t.
Amanieu: There's the `JoinHandle` which gives you pthread_t
Josh: Having a means to get the OS thread ID would be really nice. For logging you don't really care if it's 32 or 64 bit because you can print it as a string. And when calling into the OS, worst case you can do try_into.
Amanieu: There is a large risk of use-after-free thread_ids
Josh: These depend on the OS
Amanieu: The thread ID is a PID that ceases to be valid once the process exits. This doesn't happen for thread ids
Josh: I agree it's immediately available to reuse but I don't think it's immediately reused.
Amanieu: How would people feel about making it only available for the ?? kernel?
Amanieu: I had to deal with someone having to set ?? max to 32k
Josh: You're right, it can wrap around and be reused quickly.
The 8472: We have pidfd for Linux.
Amanieu: I think people just want the nice ID they can dump into the logs.
Josh: Systemd is currently advancing the idea of using the file ID of the ??. But that will take a while to bubble through, and it's Linux-specific.
Josh: +1 to the fact that current_os_id is a simpler ask. I think we should still provide the ID for separate threads, and we should document that using this in any OS API call is potentially prone to races if you're not certain a thread still exists. So it's mainly useful for logging.
Amanieu: Do we want to expose PID FD?
Josh: For linux, yes.
Amanieu: Should we bring in all process IDs?
Josh: The specific usecase here is to be able to write a common logging type across OSes.
The 8472: We could use an opaque type that's printable but not convertible to a number. I'd even make the current OS ID be OS-specific. Do the type sizes vary?
Should we make this a platform-specific type or just big enough?
Josh: I think we should just make it big enough.
Amanieu: If it doesn't have a 64-bit ID, just return None and say to use an OS-specific API (if it's an OS that has bigger numbers).
Josh: Yeah.
Josh: If we had that opaque type, would we want to add an unsafe function to get the ID?
The 8472: It's not unsafe.
Amanieu: I don't think we should be using unsafe here.
Josh: I was thinking in terms of FD safety.
Amanieu: It's safe to get a raw ID of FD safely.
Josh: Fair enough. Should we have an opaque type that we return and then have a way to give you a u64?
Amanieu: The opaque type is Thread. That's already an opaque type.
Josh: I'm talking about a ThreadID -- that's a wrapper around the thread ID number.
Amanieu: We have ThreadID.
The 8472: That's internal and doesn't map to the operating system ID.
Josh: Yes, we'd have something like OSThreadID here.
Amanieu: I don't see a need for a wrapper type. I'd just return u64.
The 8472: Why don't we have a wrapper type for our internal type?
Josh: Amanieu, I'd happily check a box for either one.
Amanieu: I think thread_id as an opaque type makes sense. You can use it in a hashmap to identify a thread. For process_id you'd be doing raw stuff, and there you'd need the integer anyway.
The 8472: The main idea was printing for logs.
Amanieu: There's printing for logs, but there's also "I'd like to change scheduling priorities for the thread".
Chris: But at that point you'd talk to the OS APIs
Amanieu: Yes and that's why you'd need the integer. I just see very little value in an extra wrapper type
The 8472: The only value is that if some exotic OS has a different mechanism, we can wrap it.
Josh: The other thing is the thread id reuse.
Amanieu: I don't expect forward compatibility issues. If we want to make this OS-specific, make it an OS-specific function on the thread. Otherwise return u64.
Josh: Hypothetically, if a platform didn't have unique IDs and did have pointers, is anyone going to be silly enough to cast a pointer to a u64 and return it as the thread ID?
The 8472: Doesn't windows do that all the time?
Chris: This is a handle that's an index into an array rather than a direct pointer.
Amanieu: This proposes putting the pthread_t inside Thread. I don't think we can do that, as it would allow use-after-free. The Thread object can outlive the thread itself because it's wrapped in an Arc. I don't think we can store the pthread_t; we have to store the raw number.
Amanieu: Are we happy to accept this mostly as-is? And with a lot of warnings.
The 8472: Maybe we could survey the more exotic targets? But I guess that happens during implementation.
Amanieu: Sure. We only have an option to return None when it's not supported.
TC: Do we know if WASM will be using u64 when they get around to doing threads?
Chris: It says here they do 32-bit integers.
TC: I'm just checking there's not someone out there that's using UUIDs for thread ids.
Chris: The 8472 brought that up. It might happen in the future.
Josh: As far as I can tell there aren't any current OSes that do that.
The 8472: Either UUIDs or pointers would be interesting types that might need special handling.
### (new change proposal) rust.tf/libs634 *ACP: Exact file read at offset on Windows*
Amanieu: read_exact_at is on the unix extension trait.
Josh: We have read_exact, we have read_at, we should have read_exact_at
Chris: Seems like a natural extension of what we already do.
The 8472: On Windows, it updates the cursor afterwards.
Chris: But if you're only using read_at, it doesn't matter.
The 8472: Yeah, you're going to have issues if you're using exact and position-based functions anyway so this shouldn't matter.
Josh: On Windows we don't have read_exact, we have seek_read. This would add seek_exact.
Josh: Has anything ever added the equivalent of p_read and p_write that doesn't touch the cursor? Cursor-independent way of saying "read at this offset".
Amanieu: The ReadFile/WriteFile API doesn't seek and accepts an offset but it does update the cursor afterwards.
Chris: I don't think so?
Josh: Ok, seemed worth asking. But even if it did exist it wouldn't be something we could count on anytime soon.
Amanieu: This seems like an easy accept?
The 8472: Sounds fine
Chris: I'm happy with it.
### (new change proposal) rust.tf/libs618 *ACP: Implement fmt::Binary for \[u8\] and friends*
Amanieu: We tentatively accepted it last time. I asked Mara to comment and Mara is firmly against it.
Josh: The fact that the hash for alternate formatting affects both is an argument that this is a little weird.
Josh: Josh Stone linked to the itertools format method and I find myself immediately thinking "let's ship that". Both `format` and `format_with`.
https://docs.rs/itertools/latest/itertools/trait.Itertools.html#method.format
Josh: That would certainly be a better way to solve this problem
TC: What's the problem we're trying to solve?
Josh: You have a list of bytes etc. and you want to print each of them as binary.
Josh: I'd happily sign off on an FCP for adding `format` and `format_with` from itertools.
The 8472: When I do a bunch of formatting for binary, I'd rarely want output like the `0b1` that's in the problem statement. The variable length is not helpful when looking at large amounts of data. I'd love for everything to be padded to the same length.
Josh: I'm agreeing with Mara's objection to this proposal.
The 8472: Then what I'm saying is that you probably want finer control over the elements more often anyway, so +1.
Josh: Given that, I think the correct answer is to close this and invite an ACP for bringing in the itertools functions.
Amanieu: Sounds good.
TC: Would we not want binary to do exactly what hexadecimal does?
This is a valid program:
```rust!
fn main() {
    let x = [0, 1, 2, 3];
    println!("{x:x?}");
}
```
This is not a valid program:
```rust!
fn main() {
    let x = [0, 1, 2, 3];
    println!("{x:b?}"); //~ ERROR
}
```
Wouldn't we expect the second to do exactly what the former would do but for binary?
See Mara's comment: https://github.com/rust-lang/libs-team/issues/618#issuecomment-3180369556
Amanieu: The `x?` is not a formatting thing, it's a special flag that's being passed directly to the formatter.
Josh: There is no way to say I want the individual hex values to be formatted with `#` (to add the `0x`) without also having that `#` affect the Debug to switch to the alternate mode (with lots of additional newlines).
TC: What the author asked for in the ACP was more magic than this, and Mara explained why that doesn't work. What I'm distinguishing is whether we might want it to just do exactly what the hex version does, for all of its flaws.
Josh: Exactly what we do for hexadecimal isn't necessarily obviously what people want.
TC: I agree, but if we're not planning to change that, it seems we should do that thing for binary for consistency.
The 8472: Not necessarily. If the existing thing is a mistake, we shouldn't add more of it.
TC: In that case we should document that as we're essentially treating this as a soft deprecation. That deprecation then becomes our rationale for not extending the surface area.
Chris: We're also not expanding it to octal.
Josh: There are very few people who will need that, as there's approximately one legitimate use for octal left (printing unix file permissions).
### (new change proposal) rust.tf/libs603 *A \`Static\` receiver type for dyn\-trait methods that do not take values*
Josh: We talked about it last time and the conclusion was to have a broader discussion and possibly conversation with lang.
### (new change proposal) rust.tf/libs568 *Implement \`Read\` and \`Write\` traits for \`BorrowedFd\` and \`OwnedFd\`*
The 8472: I think the counter proposal was an unsafe `FdView` trait. And since then we haven't heard back from the author. Should we ping the author or consider the alternative on our own?
Josh: Seems like a good idea. The 8472 would you be interested in writing an FCP for `FdView`?
The 8472: I can do that.
Josh: That FCP should state which types in the standard library we'd implement `FdView` for. Basically everything we have that can be converted from a file descriptor.
The 8472: We would constrain the layout so the file descriptor is at offset zero, so we can convert it to a pointer.
Josh: The type must consist of nothing but the file descriptor.
The 8472: I'll list that and I'll list a wrapper type as an alternative. I'll write that up.
### Sidebar discussion: Tracking Issue for RFC: Supertrait Item Shadowing #89151
https://github.com/rust-lang/rust/issues/89151
This would unblock a lot of Standard Library things (all of itertools).
It's waiting for a stabilization. Which is waiting for the lint levels.
TC: Didn't we settle this on the lang side, and are we just now waiting on a stabilization?
TC: I think we settled this when we did the (second) RFC. We decided against warn-by-default at the use site and left an unresolved question about warn-by-default at the def site.
https://github.com/rust-lang/rfcs/pull/3624#issuecomment-2448085108
TC: The comment from Tyler prompting the unresolved question at the def site:
https://github.com/rust-lang/rfcs/pull/3624#issuecomment-2435731835
Josh: Lint at call site allowed by default, lint at definition site being ?? and capped
The 8472: We're not always going to mirror the signature correctly. And then we get an error.
Josh: I'd expect that to be less of an issue. To get shadowing, the subtrait needs to be something you're importing. If you're importing a subtrait, you expect to call the method of the subtrait. So I wouldn't expect that to be an issue.
The 8472: I guess it's unlikely itertools would add a method after std added it. But if itertools adds something, std adopts it, someone on an old itertools without it starts using the std method, and then they update itertools, you have a conflict.
Josh: Or we add an exciting new method that everyone wants, itertools adds it, and then we add it to the standard library.
The 8472: Will the error let you know there's a shadowed method here?
Josh: That seems like a reasonable ask.
The 8472: I'll leave a comment on the tracking issue.
TC: There's an open question about the def-site lint based on Tyler's question. What needs to be resolved is answering Tyler's question of whether the def-site warn-by-default lint makes sense.
The 8472: But the comment was on the RFC, and the RFC has been accepted. So the unresolved questions in the tracking issue need to be updated to add that question from Tyler.
The 8472: I can do that too.
### (new change proposal) rust.tf/libs553 *Add a "shrink\-if\-excessive" API to \`Vec\`*
The 8472: Last time we asked if giving the caller a way to do the calculation themselves and call shrink_to would be sufficient. @hanna-kruppe replied to the exact proposal, but not to the general idea of just providing a function that lets the user do the calculation that reserve currently does.
Josh: This does seem hard.
TC: Last time we talked about this I liked having some way to do this. You do end up writing this yourself. It's a bit hairy and it'd be great to do the hairy thing once and provide it as a service to people.
The 8472: I recall there were still arguments that different usecases may need different shrinking strategies and it may be difficult to provide a common strategy. Especially in a loop.
TC: That's true for growing too. We provide a default strategy and if you want to do something else you can do that yourself.
The 8472: So your argument is that if you want finer control you use your own shrinking strategy?
TC: Exactly.
The 8472: Any heuristic you provide, if you use it in a loop, will oscillate. There's no safe strategy. That would make your loop quadratic.
TC: Is the idea that we'd be adding enough elements to trip the loop and cause shrinking? You shouldn't do that. As long as we document the quadratic case in the API docs.
The 8472: Apparently you yourself said you'd be calling this in a loop. Who voiced opposition last time?
TC: I don't recall anyone voicing strong opposition. Just recall it being complicated and maybe running out of time?
Josh: I recall there being a lot of back-and-forth about having a magic shrink method vs. a more specific function.
The 8472: If I look back, then my main concern is basically calling it in a loop. If we get it wrong, you end up with this terrible behaviour. And people are explicitly saying they want to call it in a loop.
TC: Normally that's okay, as long as the behavior is that it reduces the capacity to two times the length of the array, and as long as you're not then doubling the number of elements in there.
The 8472: Yeah, I was thinking of a scratch buffer to which you add a mix of small and large amounts of data.
TC: I think that'd look pretty obvious in the code if you had one of those usecases. If you ran into a performance problem, you'd notice that obviously. It has an intuitiveness to it.
The 8472: But that basically means people need to use profilers to find it. Otherwise you'll just see that your program is slow.
TC: That's true in the other direction too. You can have a usecase where you're using an absurd amount of memory and holding on to an absurd amount of memory.
The 8472: I think providing the method would work with our current strategy. And Hanna's counter-argument is that if your strategy grows more complicated, exposing the function would not be enough? I'm not sure what she's saying.
TC: This is an ACP, it's Scott's proposal. If we want this in any form, we can let Scott go and do something really smart and then have it come back for stabilization? That sounds like an easy call for me.
The 8472: My idea is people know their allocation patterns better than Vec does. The question is: can we easily give them that information? Hanna is arguing that this would be difficult, but I'm not following the argument. Does anyone understand the point?
TC: What's the intended semantics of your `Vec::calc_capacity_growth`?
The 8472: I haven't specified exact semantics. It could do what Vec::reserve does. Or expose our growth heuristic, so you could decide, based on your current vec, whether it's in excess of the std library's growth strategy and by how much. And then you can decide. You can choose how much excess you consider acceptable.
TC: If we had capacity 256 and length 64, what would this return?
The 8472: It depends on the signature. We might even calculate it differently for a new allocation vs. growing. It doubles the current capacity. So the actual growth takes the current capacity and the question of whether it needs to grow at all; the current length is only indirectly an input. But in the future we could change this in principle. A fully generic method could take all three inputs: the current length, the current capacity, and how much the length would increase. I don't think we'd ever use all three, but if you want to future-proof it, you could. It would calculate the current capacity from that, use the growth behaviour, and decide whether the current size is in excess. But then the output could change depending on whether you change the current length or the reserved amount. That makes the calculation complicated. I guess that's probably Hanna's point.
TC: Is the idea here that `xs.shrink() === xs.shrink_to(xs.calc_growth_capacity(..))` ?
The 8472: Not necessarily. You could do that if you wanted. But that's basically a tight envelope, because then you'd be shrinking to what you would have reserved. If you first called shrink_to_fit then ??. What you would do is:
The 8472:
```rust
// `additional` is what you would pass to v.reserve(additional)
let target = Vec::calc_growth(v.len(), v.capacity(), additional);
// adjustable margin
if v.capacity() > target * margin + margin2 { v.shrink_to(target); }
```
The 8472: You could decide on an additional margin, you can add a constant factor. That's what the user could do when you expose the growth strategy and let them shrink.
TC: Isn't the `target` already going to contain some extra margin?
The 8472: Yes, but if you want to shrink, you probably don't want to shrink to the exact amount that reserve does. That might end up calling shrink in every iteration. You could be oscillating around the threshold where it would shrink or grow the vector. If your `additional` says `1`, then shrink could cut the vector down to half.
TC: I understand the problem. But what is the intended behavior of `calc_growth`? If length is `64` and `calc_growth` returns 128 then it already has a margin in there where you won't experience this kind of thrashing if e.g. adding one element.
The 8472: calc_growth would be the function that's used by reserve. Which primarily goes by the capacity and not length. It only looks at length to see if there's enough room in the capacity.
TC: But if you're only looking at the capacity, `calc_growth` would never return anything lower than the capacity, right? You'd have to look at length if you want to return something lower.
The 8472: I see, yes. I guess that's why you need to poke the growth function to see what the reasonable threshold is. The problem is that length and capacity are loosely coupled. You cannot give a universal formula for what the capacity should be, because it depends on the previous capacity. If you always use the default strategy it will be a power of two. But if you start with an odd size, we'll double the odd size and you'll have an odd factor in your multiplication.
TC: When you're growing the distinction between length and capacity doesn't seem to matter much. Because you grow when length == capacity.
The 8472: Sometimes you pass an estimate to the `additional`. For example, for allocating a `String` you might count the number of characters times 4 for the UTF-8 encoding.
TC: But when you have length 1 and capacity of 1000 and you reserve with an additional 10 elements it shouldn't allocate anything new; it has to be looking at the length, not just the capacity.
The 8472: What I mean by it not being tightly coupled to length is that you can have an average ??. But I see that just providing a function on its own doesn't help with coming up with your own shrinking behaviour.
The 8472: We probably could calculate some sort of upper bound: based on the current length and the assumption that the capacity grew to that length naturally, how much excess is there, and then return the excess. There's some wiggle room: if you have a certain length and you kept growing the Vec to that length, you don't know exactly what capacity you'd end up with. But there's an upper bound, and we could return a number beyond that. Or you could return the upper bound and see that the current capacity is larger than the upper bound.
TC: What I've always wanted is exactly the capacity-affecting behavior of this, if fused, `xs.shrink_to_fit(); xs.push(/* some value */); xs.pop();`.
(Is that the same as a fused `xs.shrink_to_fit(); x.reserve(0)`? I'd expect so.)
It'd be OK for `shrink` to take an `additional` argument as well, so that `xs.shrink(n) == fused { xs.shrink_to_fit(); xs.reserve(n) }`.
The 8472: Maybe this would be a bit better
```rust
// Calculates the highest possible capacity that the vec could have
// arrived at by growing naturally to its current `length`.
let bound = vec.growth_bound(/* additional? */);
// Constant excess; could also be multiplicative or anything of the user's choice.
if vec.capacity().saturating_sub(bound) > 1000 {
    vec.shrink_to(bound);
}
```
The 8472: Shrink-to-fit followed by reserve is not the same as calculating whether you're excessively above some threshold and whether you're shrinking to the lowest power of two or a divisor of the capacity.
TC: It could do that though, right? We could change the behavior of reserve.
The 8472: Our default growth, if you start with an empty Vec, is that you end up with a power of two. I remember there being a few pull requests that try to specify in which cases we try to be faithful and in which cases we do amortized growth. `with_capacity` is faithful; we don't apply the heuristics there, so you don't necessarily end up with a power of two. The growth strategy mainly applies to `push` and `extend`.
TC: Actually, what I really want is a version of `reserve(n)` that would either grow or shrink the capacity to whatever target `reserve` calculates. Right now `reserve` only grows. Rather than having one that only shrinks, it seems better to have one that either grows or shrinks to the target.
TC: I'm interested in unblocking Scott and seeing what he comes up with.
The 8472: I guess shrink with an additional is an improvement to the proposal. I'm not sure it addresses the fundamental issue. If you use this in a loop, ideally you'd have a look-back larger than 1. In the extreme case you can oscillate between large and small on each iteration. If you could look at the history, you could take an average and not shrink if your average is larger than the capacity.
TC: The usecase here is that you're making small additions and once in a while you get a large one. You grow the capacity for that, process it, and shrink back down to the smaller size.
The 8472: But then someone will go and send you ten big files.
TC: That's the tradeoff you're opting into here.
The 8472: But there's a middle ground between never deallocating and super-tight shrinking. E.g. the JVM has a default option to shrink in steps: `-XX:+ShrinkHeapInSteps`.
(the meeting ended here)
### (new change proposal) rust.tf/libs552 *Provider\-style API for \`Context\`*
### (new change proposal) rust.tf/libs549 *Add \`uNN::checked\_sub\_nonzero\`*
### (new change proposal) rust.tf/libs548 *ACP: Add \`NonZero::\<T\>::from\_str\_radix\`*
### (stalled change proposal) rust.tf/libs194 *Fix:Introduce an interface to expose the current \`Command\` captured env var logic*
### (stalled change proposal) rust.tf/libs360 *make FromResidual #\[fundamental\]*
### (stalled change proposal) rust.tf/libs133 *Add fmt::Write to io::Write adapter*
### (stalled change proposal) rust.tf/libs484 *Support \`vec!\[const { ... }; n\]\` syntax for creating a \`Vec\` of non\-\`Clone\` values*
### (stalled change proposal) rust.tf/libs438 *ACP: \`ForwardInit\<'a, T\>\` to complement \`MaybeUninit\<T\>\`*
### (stalled change proposal) rust.tf/libs131 *Add \`std::fs::rename\_noreplace\`*
### (stalled change proposal) rust.tf/libs347 *Context reactor hook*
### (stalled change proposal) rust.tf/libs371 *ACP: primitive numeric traits*
### (stalled change proposal) rust.tf/libs296 *ACP: Designing an alternative \`FromStr\`*
### (stalled change proposal) rust.tf/libs304 *ACP: Avoid the common mistake of \`for x in 0u8..\`*
_Generated by [fully-automatic-rust-libs-team-triage-meeting-agenda-generator](https://github.com/rust-lang/libs-team/tree/main/tools/agenda-generator)_