# 2021 Sprint Goal Suggestions
## lcnr (Bastian Kauschke)
Focusing on the developer experience, especially self-compile times, during the first sprint(s) seems like the best approach to me.
Looking at what we can actually do during the sprint week, I think that moving the compilation of the query system out of `rustc_middle` is one of the bigger, hopefully still somewhat achievable, goals we can strive for.
Another good topic would be to look into extending perf to collect more comprehensive data about compiling rustc itself, for example by also measuring the time needed for small incremental changes.
I personally do not have much experience with either of these topics, so I probably won't be too helpful there. Looking at the areas I am familiar with, there are two possible fairly large wins:
The first is polymorphization, which should be able to noticeably improve compile times, especially if we're able to get the design recently discussed in https://rust-lang.zulipchat.com/#narrow/stream/216091-t-compiler.2Fwg-polymorphization/topic/type_id.20analysis/near/222183845 to work.
The other is `feature(const_evaluatable_checked)`, which will allow us to manually polymorphize more complex functions. It is currently not yet working well enough for me to be comfortable using it in the standard library or in rustc itself.
I expect the two biggest improvements here to be from `rustc_arena::TypedArena::grow` and `alloc::raw_vec::RawVec::grow_amortized`.
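For readers unfamiliar with the term: "manual polymorphization" here means pushing all logic that depends only on an element's layout into a single non-generic function, so the per-`T` wrapper that gets instantiated for every type stays tiny. The sketch below is hypothetical (the function names and growth policy are invented for illustration, not the real `RawVec`/`TypedArena` code); as lcnr notes, `const_evaluatable_checked` is what would let us apply the same trick to more complex, const-generic functions.

```rust
use std::alloc::Layout;

/// Generic entry point: a thin wrapper that is trivially cheap to
/// instantiate for each `T`.
fn grow_for<T>(current_cap: usize, additional: usize) -> usize {
    // Everything that depends only on the element's layout, not on `T`
    // itself, is delegated to a single shared non-generic function.
    grow_amortized(Layout::new::<T>(), current_cap, additional)
}

/// Monomorphic implementation shared by every instantiation of `grow_for`:
/// the actual growth logic is compiled exactly once instead of once per
/// element type.
fn grow_amortized(elem: Layout, current_cap: usize, additional: usize) -> usize {
    let required = current_cap + additional;
    let doubled = current_cap.saturating_mul(2);
    let new_cap = required.max(doubled).max(4);
    // A real implementation would build the new `Layout` and reallocate;
    // this sketch only checks that the byte count doesn't overflow.
    let _bytes = elem.size().checked_mul(new_cap).expect("capacity overflow");
    new_cap
}

fn main() {
    // One compiled copy of `grow_amortized` serves both instantiations.
    assert_eq!(grow_for::<u64>(8, 1), 16);
    assert_eq!(grow_for::<(u8, u32)>(0, 3), 4);
}
```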
## simulacrum (Mark Rousskov)
These goals are a stretch, but other than haggling over the numbers, I think aiming for:

- sub-minute clean builds for rustc + std (i.e., `x.py build library/std --stage 1`) on excellent hardware (e.g., a 3950X)

would be good.
Notably, I say "clean" and not "incremental" here; I personally think incremental is great, but I would rather prioritize efforts to make from-scratch builds fast. I think that is likely to benefit incremental compilation too, and ultimately you need a from-scratch build quite a bit of the time anyway. I'd be happy with an incremental focus for the first sprint, though.
## lqd
Q4) Reducing build times seems like a fine first task to test out the sprint model on t-compiler: it seems wide enough that multiple people can work on different tasks and parts of rustc without stepping too much on each other's toes, and deep enough that some progress and good work can actually be achieved during the sprint.
## cjgillot
Q4) Definitely OK with reducing the compiler build cycle!
This was my initial motivation for getting involved in rustc development,
and it led to https://github.com/rust-lang/rust/pull/70951.
I would like to suggest focusing on compiler memory usage too.
This is one of my largest pain points: I can't compile rustc with more than `-j2`.
IIRC, the AST/HIR/rustdoc tree structures are a large contributor.
However, I am not convinced that much can be gained by focusing on
incremental modifications to the query system.
## mw
Some ideas for promising topics:
- Try to get PGO working for the LLVM part of the compiler. Having a
PGOed bootstrap compiler can reduce build times quite a bit. Mark has
already gotten things to work for the Rust part on x64 Linux, but PGOing
the LLVM part will need a small feature extension of sccache:
https://github.com/rust-lang/rust/issues/79562#issuecomment-744530938
- Add self-profiling support to `x.py`. That should allow for getting
better insights into where optimizations make sense. This would be
mostly a UI improvement. It should already be possible to use
self-profiling during bootstrapping, but having `x.py` also invoke the
post-processing and printing a report or providing clickable links would
make a big difference.
- Look into providing an intermediate optimization level in `x.py`. It
might, for example, make a big difference for build times if we turned
off either ThinLTO or explicit cross-module inlining while not making
much of a difference in the output compiler's performance. Also, sharing
generic implementations between crates might reduce build times while
not affecting runtime performance that much.
- Look into better distributing code between codegen units for extreme
cases like librustc_middle. I seem to remember that the compiler spends
quite a bit of time on a single core, which points to some CGUs being
excessively large.
- Use self-profiling to find out which parts of the query system make
rustc_middle so large. I imagine that there's quite a bit of unnecessary
code duplication at the binary level, generated by macros and generic
instantiation, that could be reduced by making things monomorphic (e.g.
by using trait objects where possible; a sketch of that pattern follows
this list).
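As an illustration of the duplication mw describes (a toy sketch with invented function names, not code from `rustc_middle`): the generic function below gets a separate copy compiled for every concrete type it is used with, while the trait-object variant is compiled only once, at the cost of dynamic dispatch.

```rust
use std::fmt::Display;

// Generic version: the body is duplicated in the binary for every
// concrete `T` it is called with.
fn report_generic<T: Display>(value: T) {
    println!("value = {value}");
}

// Trait-object version: a single copy of the body is compiled, and all
// callers share it through dynamic dispatch.
fn report_dyn(value: &dyn Display) {
    println!("value = {value}");
}

fn main() {
    report_generic(42);      // instantiates report_generic::<i32>
    report_generic("hello"); // instantiates report_generic::<&str>
    report_dyn(&42);         // both calls reuse the one compiled body
    report_dyn(&"hello");
}
```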
## nmatsakis
I think we should focus on things that improve bootstrapping time, but I'm not sure what within that.
I admit I would also like to push the chalk-ty and type library work forward, but I don’t think that would be a good topic for the sprint overall.
## xanewok
I'm more than happy to help with reducing compiler build cycle times or "developer experience" in general but would also love to be a part of the specialization/GAT task force if there will be one.