---
title: lints, bugs vs quality of impl (2023-05-12 meeting)
tags: steering, rustc
---
# const-prop vs lints in unreachable code; bugs vs Quality of Impl/Life
https://github.com/rust-lang/compiler-team/issues/627
## Main questions to resolve
Q1: When, if ever, is it an outright *regression* to make a specific change to a **lint**, namely a change to either:
1. a lint's behavior, or
2. a lint's [default level](https://doc.rust-lang.org/rustc/lints/levels.html) (i.e. {allow, warn, deny}-by-default).
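As a reminder of what those default levels mean in practice, here is a minimal sketch of overriding them in source (lint names chosen purely for illustration):

```rust
// Crate-level attribute: lower `dead_code` from warn-by-default to allow
// for the whole crate.
#![allow(dead_code)]

// Item-level attribute: raise `unused_variables` from warn-by-default to
// deny, making any violation a hard error within this function.
#[deny(unused_variables)]
fn demo() -> i32 {
    let x = 1;
    x
}

fn main() {
    assert_eq!(demo(), 1);
}
```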
Q2: Do we want to do something *specifically* about the arithmetic-overflow lint due to the current state of things as described in [#109731]?
## Background References
"const-propagated arithmetic_overflow in unreachable code" [#109731].
[#109731]: https://github.com/rust-lang/rust/issues/109731
[Triage Meeting Discussion #1](https://rust-lang.zulipchat.com/#narrow/stream/238009-t-compiler.2Fmeetings/topic/.5Bweekly.5D.202023-04-20/near/351345957)
[Triage Meeting Discussion #2](https://rust-lang.zulipchat.com/#narrow/stream/238009-t-compiler.2Fmeetings/topic/.5Bweekly.5D.202023-04-27/near/353335702)
## Thoughts
Lints identify coding patterns that are correlated with some kind of problem, be it social (e.g. "this does not conform to the expected style for readability") or operational (e.g. "this operator is subtle and you are probably better off using one of its variants that more explicitly expresses intent").
Rust tries to reserve the use of hard errors for things that imply unsoundness (e.g. taking multiple `&mut`-borrows of the same place).
* (Pnkfelix thinks there are some cases where we also issue hard-errors for problems that correspond to logical mistakes rather than unsoundness, but cannot think of an example off the top of his head right now.)
Note that the problems identified by lints may be *strongly- or weakly-correlated* with misbehavior in the end program.
* Consider e.g. readability issues: any resulting misbehavior is going to be dependent on other external factors in development, so it's hard to make a firm case for how strong the correlation is.
Furthermore, separate from the severity of the problem being identified is the fact that lints are static analyses of the source code. When the problem in question is about the operational behavior of the end program, such lints are inherently *imprecise*. Any such lint will have false negatives (failing to detect some instances of the problem of interest) or false positives (flagging a coding pattern as problematic when in fact it will never actually encounter the problem in question).
So, pnkfelix proposes that these are the factors that feed into the choice of what default level to assign to a lint:
* the (im)precision of the analysis: a more precise analysis can afford to be tilted towards warn- or deny-by-default, because we assume the number of false readings will be low.
* the severity of the associated problem: even if the analysis is imprecise, if the *problem* associated with it has massive cost (in the sense of how broad of an impact the resulting misbehavior could have), then we reserve the right to tilt the default level towards warn- or deny-by-default.
* the impact of the lint on the existing ecosystem: a lint that is imprecise in theory, but in practice has a low rate of producing false results, might still tilt towards warn- or deny-by-default
* the pain of addressing the lint: if a lint is easy to fix (perhaps even *automatically*, by a tool like `rustfix` that applies rustc's suggestions), then that *might* be an argument for turning its level from allow- to warn-by-default. (But ... does this ever generalize to full deny-by-default?)
* pnkfelix is not actually really convinced by this specific claim, despite writing it down. The team should debate this matter.
## On overflow specifically
Overflow itself is a somewhat tricky subject.
Arithmetic overflow **is** correlated with real world problems.
Often these are problems that are not caught by testing unless people are being really thorough.
:::info
Aside: pnkfelix was recently reading the Joshua Bloch post "Nearly All Binary Searches and Mergesorts are Broken" [bloch].
[bloch]: https://ai.googleblog.com/2006/06/extra-extra-read-all-about-it-nearly.html
If you have time, it's worth a skim. It's also interesting to note that (if pnkfelix understands the situation correctly) the scenario described there causes one kind of failure for Java (because Java has only *signed* int types, so `(a+b)/2` can yield a negative value when it overflows) and another kind of failure for C (because `(a+b)/2` when it overflows is undefined behavior); neither of those cases applies to Rust, assuming you're using a `usize` as one would expect. The only thing Rust has to worry about is that the overflow there may cause a panic if you have the runtime overflow checks turned on...
:::
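For reference, the standard fix from the Bloch post carries over to Rust directly; a sketch (function name ours):

```rust
/// Overflow-free midpoint: `lo + (hi - lo) / 2` never exceeds `hi`,
/// whereas the naive `(lo + hi) / 2` can overflow (and would panic under
/// runtime overflow checks) when `lo` and `hi` are both large.
fn midpoint(lo: usize, hi: usize) -> usize {
    lo + (hi - lo) / 2
}

fn main() {
    assert_eq!(midpoint(0, 10), 5);
    // Near the top of the range, where the naive formula would overflow:
    assert_eq!(midpoint(usize::MAX - 2, usize::MAX), usize::MAX - 1);
}
```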
So, anyway: one can/should turn on the runtime overflow checking here, depending on one's application domain. How should that affect the default level for `arithmetic_overflow`, if at all?
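For concreteness, the runtime checking in question is controlled per build profile; e.g. in `Cargo.toml` (release builds default to `overflow-checks = false`):

```toml
[profile.release]
overflow-checks = true   # panic on integer overflow even in release builds
```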
## Discussion Topics
### How do I add a discussion topic?
pnkfelix: Like this!
### Clippy's stance
Clippy has a more lenient stance towards false alarms: If the found case is in your opinion (or objectively) a false alarm, then you should add an `#[allow]` on that expression to silence the lint (ideally with documentation why you are doing so). The idea is that weird edge cases deserve documentation.
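A sketch of that convention (the lint choice and the justification comment are illustrative, not from a real codebase):

```rust
fn scaled(len: usize) -> u32 {
    // Justified `allow`: in this (hypothetical) program, `len` comes from a
    // fixed-size buffer, so the cast can never truncate.
    #[allow(clippy::cast_possible_truncation)]
    let small = len as u32;
    small * 2
}

fn main() {
    assert_eq!(scaled(21), 42);
}
```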
### overflow in release mode
Note that the `arithmetic_overflow` lint triggers in release mode, too, where overflow still happens, but does not cause a panic: https://play.rust-lang.org/?version=stable&mode=release&edition=2021&gist=adfcd687635226458630d79faade69e3
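To spell out the release-mode semantics the lint is warning about (two's-complement wrapping when overflow checks are off), a sketch using the explicit `wrapping_*` methods:

```rust
/// With overflow checks off, `a + b` on `u8` produces the same value as
/// the explicitly-wrapping version below (but without tripping the lint).
fn wrapped_add(a: u8, b: u8) -> u8 {
    a.wrapping_add(b)
}

fn main() {
    assert_eq!(wrapped_add(255, 1), 0);    // 255 + 1 wraps to 0
    assert_eq!(wrapped_add(200, 100), 44); // 300 mod 256 = 44
}
```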
wesleywiser: I don't think we should over-index too much on this being `arithmetic_overflow`, as const prop will trigger other kinds of similar lints as well. E.g. this compiles on stable and lints on beta:
```rust
use std::cmp::Ordering::*;

fn main() {
    let a = [0, 1, 2];
    let b = 3;
    match (usize::MAX as u64).cmp(&u64::MAX) {
        Greater => {
            dbg!(a[b]);
        }
        Equal | Less => {}
    }
}
```
### impossible to be precise
you can always have an
```rust
if solve_hard_problem(x) {
    1 / x // warns if `x` is statically `0`
}
```
where we cannot possibly figure out that for `x == 0` `solve_hard_problem` returns `false`.
On the other hand
```rust
if x != 0 {
    1 / x
}
```
is very obvious to users, and we should potentially figure this out. I believe that may be feasible with dataflow const prop (cc @cjgillot)
### precise in theory, high false rate in practice
pnkfelix: I put this as an inline comment, but figured the team should debate it: Should a high in-practice false positive rate force a lint to stay allow-by-default? (pnkfelix thinks "yes"; practice trumps theory here.)
### const-prop effects
cuviper: the arithmetic lint may be precise on its own, but const prop introduces new scenarios for overflow that may or may not have happened in practice. Another aspect is that this introduces an error at build time that does not show up in `cargo check`.
```rust
use std::cmp::Ordering::*;

fn main() {
    let max = u64::MAX as usize;
    match (usize::MAX as u64).cmp(&u64::MAX) {
        Greater => {
            dbg!(max + 1);
        }
        Equal | Less => {}
    }
}
```
^ This example passes `cargo check`, but not `cargo build`.
Would be solved by https://github.com/rust-lang/rust/pull/108730
### Should automation be part of equation?
pnkfelix: this topic was raised in the doc: should an auto-fixable lint be something that we allow to tilt towards warn-/deny-by-default?
### doc never talks about *regressions* per se
pnkfelix: one of the primary points of the doc was to ask the question "when, if ever, should a change here be treated as a *regression*" (in the formal sense of the issue being tagged and tracked in triage meetings and us collectively working to address the regression). But the doc never actually addressed that topic. :cry:. We should still talk about it in the meeting.