# Meeting proposal: when can we export ~~UB~~ safety preconditions to the environment?
Rust programs run in an environment, and any interaction with that environment may impact the semantics and safety of the program. This often manifests as "global" safety preconditions that constrain the entire program, or even alter its semantics.
With that said, [there](https://rust-lang.github.io/rfcs/3128-io-safety.html) [have](https://github.com/rust-lang/rust/issues/32670) [been](https://www.reddit.com/r/rust/comments/ltnpr1/totallysafetransmute/) [many](https://docs.rs/memmap/latest/memmap/struct.Mmap.html) attempts to write safe abstractions to paper over (or misuse) the environment, and occasionally these attempts come into conflict with each other, or with other goals. We should establish an underlying policy for exporting safety preconditions to the environment.
## Do we need this?
It is tempting to say that Rust should strive to never export safety preconditions to the environment, but that is incompatible with current practice. Some examples where we have chosen to export safety preconditions are given below.
### The underlying hardware
Rust floating point semantics have famously been a mess, and although [there has been recent work](https://rust-lang.zulipchat.com/#narrow/stream/213817-t-lang/topic/Pre-RFC.3A.20floating.20point.20guarantees/near/376475212) on fixing them, on some platforms they remain unfixable due to platform limitations. There is not much to do about this other than gnash our teeth and route around the damage, so to speak.
Other issues, such as CPU bugs and errata, fall into the same camp, as does memory corrupted by cosmic radiation. In such situations, Rust programs may exhibit arbitrary behavior, with no real recourse to speak of. The rest of the document will not focus on these, as they are not helpful for establishing an underlying policy.
### System Libraries
Rust sometimes requires system libraries to have semantics stronger than what the C standard promises - see [this PR on the mem* functions](https://github.com/rust-lang/rust/pull/114412) alongside this [IRLO post on `exit()`](https://internals.rust-lang.org/t/rusts-exit-forwards-to-cs-exit-but-doesnt-warn-about-ub/19458/9). Such actions are probably justified for the standard library, but may not compose well with arbitrary libraries.
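For instance, a minimal sketch of the `exit()` case: safe Rust permits the following race, which is only sound if the platform's `exit` tolerates concurrent callers - a guarantee the C standard does not make.

```rust
use std::thread;

fn main() {
    // Safe code can race two calls to exit(). std::process::exit forwards
    // to the platform's exit(), so Rust effectively requires that exit()
    // tolerate concurrent callers - stronger than what C promises.
    let _racer = thread::spawn(|| std::process::exit(1));
    std::process::exit(0);
}
```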
### Privileged Manipulation Functions
Operating systems generally provide functions that can muck about with arbitrary processes - after all, the user is the ultimate authority on the system. Functions like [`CreateRemoteThreadEx`](https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-createremotethreadex?redirectedfrom=MSDN) or [`process_vm_readv`](https://man7.org/linux/man-pages/man2/process_vm_readv.2.html) allow other processes to violate Rust's guarantees, and we must assume that these functions are either not used, or only used in intended ways by supervisors and other kinds of daemons.
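As a sketch of the kind of power involved (Linux-specific, via the `libc` crate; the target PID and remote address are hypothetical stand-ins, and the caller needs ptrace-level permission over the target):

```rust
use libc::{iovec, process_vm_writev};

/// Overwrite one byte of another process's memory. If the target is a
/// Rust program, this can break its safety invariants without any unsafe
/// code running *in the target*.
unsafe fn scribble(target_pid: libc::pid_t, remote_addr: *mut libc::c_void) {
    let payload: u8 = 0x42;
    let local = iovec {
        iov_base: &payload as *const u8 as *mut libc::c_void,
        iov_len: 1,
    };
    let remote = iovec { iov_base: remote_addr, iov_len: 1 };
    // SAFETY: our iovec points at valid local memory; the kernel validates
    // the remote side and our permission to touch it.
    unsafe { process_vm_writev(target_pid, &local, 1, &remote, 1, 0) };
}
```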
## When has this come up?
Or, more importantly, for what cases do we get to make decisions?
### Regular libraries
Rust currently requires all imported `extern` functions to be `unsafe`, although there [is an RFC to change that](https://hackmd.io/tQW0eMJkSrma8eN6yTBhJA). In the status quo, the proof burden falls on the code calling these functions; under the proposed RFC, it falls on the library declaring them. Neither of these parties, however, necessarily has all of the information necessary to properly discharge the proof burden. System libraries are inherently provided by the system, and may have (possibly subtly) different semantics from system to system, while user libraries can face anything from incompatible reimplementations to plain-old namespace collisions. Ideally, the end user would discharge the proof obligations for all FFI imports [similar to Java](https://openjdk.org/jeps/8307341#:~:text=The%20JNI%20Invocation%20API%20allows,line%20without%20enabling%20native%20access.), but this may not be a good fit for a systems language like Rust.
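To make the burden concrete, consider a hypothetical imported symbol `frob` (it will only link if something actually provides it): the call site must write the SAFETY comment today, yet which implementation of `frob` the linker resolves is precisely the information the call site lacks.

```rust
// Hypothetical imported symbol; which library actually provides it is
// decided at link (or even load) time, not here.
extern "C" {
    fn frob(x: i32) -> i32;
}

fn frob_twice(x: i32) -> i32 {
    // SAFETY: we assume the linked `frob` is the one documented by our
    // vendor, which accepts any i32. If a different library wins symbol
    // resolution, this comment proves nothing.
    unsafe { frob(frob(x)) }
}
```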
Other issues with libraries occur when attempting to build a safe abstraction over a system library such as Vulkan. A safe wrapper can be written for a particular library, but composing multiple safe wrappers can lead to unsoundness - a problem that the wrappers themselves cannot solve today, as sketched below.
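Distilled down (all names hypothetical), the hazard looks like this: each wrapper's safety argument assumes exclusive use of the library's process-global state, an assumption no single crate can enforce.

```rust
// Stand-in for a C library with process-global state (hypothetical).
mod sys {
    pub static mut CONTEXT: u32 = 0;
}

// Imagine each wrapper as its own crate, unaware of the other.
mod wrapper_a {
    pub fn configure() {
        // SAFETY (as argued by crate A): we are the only user of CONTEXT.
        unsafe { super::sys::CONTEXT = 1 };
    }
}

mod wrapper_b {
    pub fn configure() {
        // SAFETY (as argued by crate B): we are the only user of CONTEXT.
        unsafe { super::sys::CONTEXT = 2 };
    }
}

fn main() {
    // Each proof is locally plausible; composed, they contradict each
    // other, and the combined program is unsound.
    wrapper_a::configure();
    wrapper_b::configure();
}
```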
### I/O Safety
[I/O Safety](https://github.com/rust-lang/rfcs/pull/3128) is an attempt to bring control of underlying FDs under the same discipline Rust applies to memory, using Rust terms such as ownership, moving, and borrowing. This means that only code that has properly acquired an FD may read, write, or close it[^lesser-io-safety].
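In today's std (on Unix), this discipline is captured by `OwnedFd` and `BorrowedFd<'_>`, which mirror `T` and `&T`. A minimal sketch:

```rust
use std::fs::File;
use std::os::fd::{AsFd, BorrowedFd, OwnedFd};

// Borrowing an FD: this function can use it but cannot close it,
// just like taking `&T` instead of `T`.
fn inspect(fd: BorrowedFd<'_>) {
    println!("inspecting {fd:?}");
}

fn main() -> std::io::Result<()> {
    let file = File::create("/tmp/io-safety-demo")?;
    inspect(file.as_fd()); // lend the FD out
    let owned: OwnedFd = file.into(); // move ownership out of the File
    drop(owned); // the FD is closed exactly once, here
    Ok(())
}
```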
By placing a previously global resource under unique control, I/O safety allows for safe abstractions that were previously impossible or unsound, such as those for mmap[^1], but imposes a litany of new restrictions on Rust programs.
Some of these restrictions interfere with programs that systems programmers want to write, such as interfaces with daemons or other services that pass FDs across `fork` + `exec`. This will be discussed a bit later.
### `/proc/self/mem` and `/dev/fd`
Currently, the FCP'ed position is that `/proc/self/mem` is treated as outside of the safety guarantees of Rust, similar to the privileged functions described above, and outside of Rust's soundness model. Despite this, `/proc/self/mem` differs from privileged functions in that a program can manipulate its own memory space through it, without requiring external action[^2].
For example, is the following function sound, unsound, or simply not compatible with Rust's soundness model?
```rust
fn perform_copy() {
    // Overwrites this process's own memory with the bytes of ./foo.
    let _ = std::fs::copy("./foo", "/proc/self/mem");
}
```
How about the following, where `rust-cp` is a program that exposes `std::fs::copy` directly?
```shell
rust-cp ./foo /proc/self/mem
```
In addition to `/proc/self/mem`, I/O safety means that opening `/dev/fd` is now unsound, but it is unclear whether the same "outside the soundness model" rationale also applies to `/dev/fd`.
### Applications that require external setup to run
Some applications are designed not to be run independently, but via a daemon or other parent process. These include the [jobserver protocol](https://github.com/rust-lang/rust/issues/116059), as well as [systemd's `sd_listen_fds`](https://www.freedesktop.org/software/systemd/man/sd_listen_fds.html) and older protocols for Unix mail or web servers.
These problems can also occur on Windows with HANDLE inheritance, though that is more niche and less widespread than on Linux.
One minor issue with these protocols is that the FDs are often announced via environment variables, which are safely[^3] editable by other code, rather than over a socket.
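A sketch of what consuming such a protocol looks like, following the systemd convention that inherited FDs start at 3 (a real consumer should also check `LISTEN_PID`; error handling is elided). Note that the `unsafe` can only be discharged by trusting the launching environment:

```rust
use std::os::fd::{FromRawFd, OwnedFd};

const SD_LISTEN_FDS_START: i32 = 3; // first passed FD, per the protocol

fn take_first_listen_fd() -> Option<OwnedFd> {
    let count: i32 = std::env::var("LISTEN_FDS").ok()?.parse().ok()?;
    if count < 1 {
        return None;
    }
    // SAFETY: we assume a manager implementing sd_listen_fds spawned us,
    // so FD 3 is open and nothing else in this process owns it. No code
    // here can verify this; the proof obligation is discharged by the
    // environment that launched us.
    Some(unsafe { OwnedFd::from_raw_fd(SD_LISTEN_FDS_START) })
}
```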
This is perhaps the clearest example of the question in this meeting's title - can such an application export its safety preconditions to the environment that launches it?
## What can we do about this?
Much of this proposal depends on the expectation that each unsafe operation has a `// SAFETY:` comment that properly discharges the proof obligation incurred by the unsafety. Some of the examples above can be resolved by saying that libraries cannot discharge such proof obligations on their own: they must be bubbled up to the application level and discharged there. Such a policy may cause major ergonomic pain for programmers, who now have to bubble up a proof obligation that is trivial at the application level but impossible for libraries to answer[^4].
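Concretely, "bubbling up" might look like the following (all names hypothetical): the library states the precondition it cannot prove, and the application, which controls linking and deployment, discharges it once.

```rust
// Library side (hypothetical `gpu` crate): it cannot see what else the
// final program links, so it states the precondition instead of proving it.
pub struct Gpu(());

impl Gpu {
    /// # Safety
    /// The caller must guarantee that this is the only Vulkan
    /// implementation loaded into the process.
    pub unsafe fn new() -> Gpu {
        Gpu(())
    }
}

// Application side: the one place where the proof is trivial.
fn main() {
    // SAFETY: this binary links exactly one Vulkan implementation and
    // never dlopens another. The library could not know this; we do.
    let _gpu = unsafe { Gpu::new() };
}
```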
## Conclusion
TODO
[^1]: Without I/O safety, exposing `mmap` as references is impossible, as safe code could spray bytes into arbitrary FDs, breaking the safety preconditions of references.
[^2]: Here we do get into (admittedly contrived) cases such as an OS providing an interface to an actuator which can type a command into the keyboard that runs something to break Rust's safety invariants...
[^3]: Although `set_var` is intended to become unsafe, that is solely for concurrency reasons, not because it can violate I/O safety.
[^4]: For example, a safe Vulkan implementation that has to contort its API in order to thread through the proof that a user is only using one Vulkan implementation, or a game engine using a Vulkan implementation that could be entirely safe but for that proof obligation.
[^lesser-io-safety]: We could also consider a lesser form of I/O safety that only prevents closing FDs. While that would change the extent of its impact on code, it doesn't really solve the problem that I/O safety alone incurs: some protocols want you to close a pipe once you're done writing to it, so the other side can detect that.
    Another concern is that I/O safety is a relatively novel use of Rust's standard library to restrict arbitrary Rust code - but this does generalize to arbitrary libraries, and allowing it was a conscious decision on Rust's part. Given an arbitrary safe `fn(i32)`, for instance, one should *not* assume it can be called with arbitrary values - we want to be able to bind to APIs that return ints or indexes into a table, rather than pointers, without users being able to simply forge values.