# rustc jemalloc futzing
*Warning: I ramble a decent amount in this. You've been warned.*
So, I did some experimenting based on the comment <https://github.com/rust-lang/rust/pull/81782#issuecomment-784438001>, in a very hacky way -- essentially trying to answer the question of "First probably measure the impact to see whether it's at all worth it.". In particular, macOS doesn't have a good way of overriding the system allocator (quirks of the malloc zone API cause `free` to be extremely expensive), so I wanted to see if using a `global_allocator` would help there.
The TLDR is that on macOS (which aside from perhaps Windows is probably the OS that will benefit from this the most) jumping through those hoops in a hacky (possibly unrealistically hacky? Hopefully not) way leads to around a 3%-5% speedup for `cargo check` and `cargo build`.
The branch I did this in is <https://github.com/thomcc/rust/tree/very-hacky-alloc-experiments>, and I mostly did it to see how full of crap I was in the postconf when I went on about the allocator situation in rustc on macOS (answer: at least not *entirely* full of crap, but 5% is less than I expected, although it's not bad for just changing build configuration). Note that very many parts of this are total hacks -- the way I "decoupled `librustc_driver.so` from `libstd.so`" in particular was just to get rid of libstd.so. There's almost no way this is what we actually want (it kills rustdoc, and probably all other tools/anything else using rustc-private APIs).
That said, I do think there is a way to combine libstd.so into librustc_driver.so, which is likely the right way to do this. Or maybe not, in which case I misunderstood what the initial comment meant.
## Process
Basically, I did the following things:
1. Moved jemalloc and such into `rustc_driver`.
2. Added a `#[global_allocator]` using jemallocator.
3. Tried to "decouple `librustc_driver.so` from `libstd.so`" in the hackiest way possible, which is to say: removed `"dylib"` from std's crate_type.
Doing this breaks rustdoc, and presumably several other things, but rustc seems to function otherwise.
I don't think this is how this should be done (actually, I'm 100% sure it is *not* how this should be done, but doing this correctly probably requires changes to bootstrap and seems like a real pain, and this works for now for getting rough measurements).
(Note that I tried several combinations of steps 1, 2, and 3, and it seems like all three are needed for there to be any benefit, at least on macOS.)
4. Did some builds. Basically all the numbers were done on an aarch64-apple-darwin machine using a stage1 build with the following config.toml:
```toml
profile = "library"
changelog-seen = 2
[rust]
codegen-units-std = 1
incremental = false
jemalloc = true
```
which is hopefully semi-realistic for my platform? It's at least roughly the same speed as my normal nightly toolchain.
5. Got very rough perf numbers using hyperfine in the rustc-perf repo.
I mainly use macOS, so getting concrete numbers isn't easy for me. Also, this will almost certainly benefit macOS more than other platforms, since it lets Rust allocations bypass the zone deallocator.
Also worth noting: there was a lot of flailing with missing symbols, which motivated both switching to the `tikv-*` versions of the crates (which are probably the same, so this is likely unnecessary) and some hacky changes in rustc_driver to make `#[used]` actually work. I don't think any of this was actually necessary, but if you read the patch you'll see it.
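For reference, the mechanism in step 2 is just the standard `#[global_allocator]` attribute. Here's a minimal sketch of what the declaration looks like -- using std's `System` allocator as a stand-in so it compiles without external crates; in the actual branch the static would be the jemalloc allocator type from `tikv-jemallocator` instead:

```rust
use std::alloc::System;

// Stand-in for the tikv-jemallocator allocator type, so this sketch
// compiles with no external crates; the mechanism is identical either way.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // Every heap allocation in this process (Box, Vec, String, ...) now
    // routes through GLOBAL instead of the default allocator.
    let v: Vec<u8> = vec![42; 16];
    assert_eq!(v.iter().map(|&b| b as usize).sum::<usize>(), 16 * 42);
}
```

Whatever crate ends up in `rustc_driver`, the key point is that the attribute picks the allocator for the whole process, which is exactly why step 3 (decoupling from `libstd.so`) matters.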
## Numbers
Anyway, here are some numbers. They're rough, as mentioned:
In all of these `+stage1src` is the baseline (rust-lang/rust @ [`8064a495086c2e63c0ef77e8e82fe3b9b5dc535f`](https://github.com/rust-lang/rust/commit/8064a495086c2e63c0ef77e8e82fe3b9b5dc535f)) and `+stage1` is with my changes (<https://github.com/thomcc/rust/tree/very-hacky-alloc-experiments>). Yeah, the names are weird, it's just the names of the rustup linked toolchains I already had set up.
#### Ripgrep
```
$ cd collector/benchmarks/ripgrep-13.0.0
$ hyperfine --warmup 1 --runs 5 --prepare "cargo clean" --cleanup "cargo clean" -L rustc stage1,stage1src -L subcmd check,build "cargo +{rustc} {subcmd}"
Benchmark 1: cargo +stage1 check
Time (mean ± σ): 5.810 s ± 0.177 s [User: 18.002 s, System: 2.316 s]
Range (min … max): 5.670 s … 6.118 s 5 runs
Benchmark 2: cargo +stage1src check
Time (mean ± σ): 6.147 s ± 0.133 s [User: 18.809 s, System: 2.329 s]
Range (min … max): 6.019 s … 6.374 s 5 runs
Benchmark 3: cargo +stage1 build
Time (mean ± σ): 8.025 s ± 0.239 s [User: 29.187 s, System: 3.245 s]
Range (min … max): 7.804 s … 8.364 s 5 runs
Benchmark 4: cargo +stage1src build
Time (mean ± σ): 8.486 s ± 0.079 s [User: 30.444 s, System: 3.299 s]
Range (min … max): 8.404 s … 8.594 s 5 runs
```
This shows around a 5% improvement for both `cargo check` and `cargo build`. This gets drowned out more in `--release` builds, FWIW, but I don't have the numbers.
#### Cargo
```
$ cd collector/benchmarks/cargo-0.60.0
$ hyperfine --warmup 1 --runs 5 --prepare "cargo clean" --cleanup "cargo clean" -L rustc stage1,stage1src -L subcmd check,build "cargo +{rustc} {subcmd}"
Benchmark 1: cargo +stage1 check
Time (mean ± σ): 13.230 s ± 0.307 s [User: 47.052 s, System: 9.785 s]
Range (min … max): 12.993 s … 13.766 s 5 runs
Benchmark 2: cargo +stage1src check
Time (mean ± σ): 13.796 s ± 0.185 s [User: 48.728 s, System: 10.000 s]
Range (min … max): 13.611 s … 14.092 s 5 runs
Benchmark 3: cargo +stage1 build
Time (mean ± σ): 26.540 s ± 0.688 s [User: 91.693 s, System: 13.300 s]
Range (min … max): 25.386 s … 27.081 s 5 runs
Benchmark 4: cargo +stage1src build
Time (mean ± σ): 27.569 s ± 0.479 s [User: 93.950 s, System: 13.341 s]
Range (min … max): 26.942 s … 28.136 s 5 runs
```
Less impressive: about 4% for `check` and 3% for `build`.
#### Externs
I'm not sure what this one is (it's whatever [this](https://github.com/rust-lang/rustc-perf/tree/master/collector/benchmarks/externs) is), but we apparently use it as part of our PGO stuff, so I figured I'd run it. It went much faster so I did far more iterations, and even tried with `--release`. Ideally I'd have done this for all of them, probably.
That said, it's much faster, so it seems more sensitive to noise, and I wasn't able to get a run that didn't warn about statistical outliers. Hopefully the larger iteration count helps compensate for this some.
```
$ cd collector/benchmarks/externs
$ hyperfine --warmup 5 --runs 20 --prepare "cargo clean" --cleanup "cargo clean" -L rustc stage1,stage1src -L subcmd check,build,"build --release" "cargo +{rustc} {subcmd}"
Benchmark 1: cargo +stage1 check
Time (mean ± σ): 244.9 ms ± 51.9 ms [User: 147.8 ms, System: 51.9 ms]
Range (min … max): 220.0 ms … 455.8 ms 20 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 2: cargo +stage1src check
Time (mean ± σ): 256.6 ms ± 59.9 ms [User: 153.6 ms, System: 57.1 ms]
Range (min … max): 233.8 ms … 508.2 ms 20 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 3: cargo +stage1 build
Time (mean ± σ): 273.9 ms ± 9.8 ms [User: 188.6 ms, System: 52.7 ms]
Range (min … max): 258.2 ms … 288.3 ms 20 runs
Benchmark 4: cargo +stage1src build
Time (mean ± σ): 290.1 ms ± 12.3 ms [User: 197.6 ms, System: 60.5 ms]
Range (min … max): 276.2 ms … 316.3 ms 20 runs
Benchmark 5: cargo +stage1 build --release
Time (mean ± σ): 234.3 ms ± 67.2 ms [User: 145.0 ms, System: 42.6 ms]
Range (min … max): 208.5 ms … 517.8 ms 20 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 6: cargo +stage1src build --release
Time (mean ± σ): 238.4 ms ± 27.2 ms [User: 152.7 ms, System: 49.7 ms]
Range (min … max): 222.3 ms … 350.3 ms 20 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
```
So, 4%-5% for `build` and `check`, and really no difference for `build --release` (but we wouldn't expect one).
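As a sanity check on the quoted percentages, the relative speedups can be computed directly from the mean wall-clock times above (patched `+stage1` vs. baseline `+stage1src`); they land roughly in the 3%-5% range claimed:

```rust
// Relative speedup of the patched toolchain vs. the baseline, computed
// from the hyperfine mean times quoted in the sections above.
fn speedup_pct(patched: f64, baseline: f64) -> f64 {
    (baseline - patched) / baseline * 100.0
}

fn main() {
    // (name, patched mean, baseline mean), all in seconds.
    let runs = [
        ("ripgrep check", 5.810, 6.147),
        ("ripgrep build", 8.025, 8.486),
        ("cargo check", 13.230, 13.796),
        ("cargo build", 26.540, 27.569),
        ("externs check", 0.2449, 0.2566),
        ("externs build", 0.2739, 0.2901),
    ];
    for (name, patched, baseline) in runs {
        println!("{name}: {:.1}%", speedup_pct(patched, baseline));
    }
}
```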
## Conclusion
So, this is really unscientific. These aren't nearly enough iterations, I did it on a machine that was running other things (although I did quit most programs while running), and generally it's not a rigorous way to measure. Still, I think it's pretty likely that this *does* represent a possible speedup of 3%-5%, since that was fairly consistent in the end.
Additionally, my branch is really really hacky. I think almost none of it is the right way to go about this. So, in the list of steps given here <https://github.com/rust-lang/rust/pull/81782#issuecomment-784438001>, it's probably just the first one: "First probably measure the impact to see whether it's at all worth it.".
So, is it worth it? Well, 3%-5% isn't exactly incredible, but it's also probably not worth giving up unless this is a total dead end. This also *might* unblock using `#[global_allocator]` on Windows, which would be nice because we currently use the system allocator for everything there, AFAICT.
One caveat: this reintroduces the problem that motivated the original change. Basically, it forces everybody who depends on `rustc_driver` to use the same global allocator. This was apparently a problem for RLS back in <https://github.com/rust-lang/rust/pull/56986>, but with RLS on its way out, I'm unsure how much we care. (Also worth noting: this is a much newer version of jemalloc, which may not even have the issue that caused problems for RLS in the first place...) Who knows if we still care about this.