# Salsa round-up
## People
* Carl Meyer (astral.sh)
* David Barsky (meta)
* David Richey (meta)
* Lukas Wirth (rust-analyzer)
* Micha (working on Ruff, a Python linter/formatter/CLI)
## Possible projects and goals
rust-analyzer uses a forked version of salsa 0.x
* parallel execution
* serialization to disk
* memory usage/garbage collection/interning?
## Goals
* A plan of action for what we can be doing.
## High-level design
### basic db sketch
```rust
#[salsa::tracked]
struct Function<'db> {
....
}
```
```rust
#[salsa::tracked]
fn parse<'db>(
    db: &'db dyn crate::Db,
    input: &str,
) {
    // pseudocode: for each item in the input, create a tracked struct
    for each item in input {
        let x = Function::new(db, ..);
    }
}
```
* salsa can free all of this data (tracked structs that are not re-created in a later revision can be collected)
### parallel execution sketch
```rust
#[salsa::tracked]
fn parse<'db>(
    db: &'db dyn crate::Db,
    functions: &[Function<'db>],
) {
    functions
        .par_iter() // assuming something like rayon's par_iter here
        .map(|f| type_check(db, *f))
        .collect::<Vec<_>>();
}

fn type_check<'db>(
    db: &'db dyn crate::Db,
    function: Function<'db>,
) { }
```
```rust
fn autocomplete<'db>(
    db: &'db dyn crate::Db,
    possibilities: &[Function<'db>],
) {
    possibilities
        .par_iter()
        .map(|f| is_plausible(db, *f))
        .collect::<Vec<_>>();
}

#[salsa::tracked]
fn is_plausible<'db>(
    db: &'db dyn crate::Db,
    function: Function<'db>,
) { }
```
would interact with cancellation:
* when the main thread makes a change to the inputs, in-flight queries would be cancelled and the cancellation would just propagate out (see the sketch after this list)
* in LSP:
* tasks are run on threads with different priorities
* what is the desired level of concurrency?
* probably needs to be configurable
* at Meta, at least, builds run on a dev server with 80 cores, where each individual core is not especially powerful
* parallelism is a big win
* rust-analyzer launches its own threads
* has a main threadpool, configurable by user, defaults to physical cores
* every incoming request dispatches to an idle thread
* if all threads are working, queue it up
* don't do parallelism within a request
* apart from a few special cases
* https://github.com/rust-lang/rust-analyzer/pull/16555
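To make the cancellation flow concrete, here is a minimal std-only sketch, not the real salsa or rust-analyzer API: the main thread flags in-flight work as cancelled after an edit, the request unwinds with a cancellation payload, and the worker thread catches it and drops the request. `Cancelled` and `handle_request` are hypothetical stand-ins.
```rust
use std::panic::{catch_unwind, AssertUnwindSafe};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Hypothetical stand-in for the payload salsa unwinds with on cancellation.
struct Cancelled;

fn handle_request(cancel: &AtomicBool) -> String {
    // A long-running query would check this flag periodically; here we check once.
    // (The default panic hook will still log this unwind; ignore that for the sketch.)
    if cancel.load(Ordering::Relaxed) {
        std::panic::panic_any(Cancelled);
    }
    "completion results".to_string()
}

fn main() {
    let cancel = Arc::new(AtomicBool::new(false));

    // Main thread: an edit arrives, so flag all in-flight requests as cancelled.
    cancel.store(true, Ordering::Relaxed);

    let flag = Arc::clone(&cancel);
    let worker = std::thread::spawn(move || {
        // Worker thread: run the request, catch the cancellation unwind, drop the request.
        match catch_unwind(AssertUnwindSafe(|| handle_request(&flag))) {
            Ok(answer) => println!("reply: {answer}"),
            Err(payload) if payload.is::<Cancelled>() => println!("request cancelled"),
            Err(payload) => std::panic::resume_unwind(payload),
        }
    });
    worker.join().unwrap();
}
```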
### managing memory in r-a today
* LRU is used for a few key queries (see the sketch after this list)
* but otherwise not so much
* this is a tradeoff against speed
* would ideally have something tracking older revisions
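A sketch of what an LRU-capped query can look like, assuming a new-salsa-style `lru` option on tracked functions (the exact API may differ):
```rust
// Keep only roughly the 32 most recently used memos for this query and
// evict older ones, trading recomputation for lower memory usage.
#[salsa::tracked(lru = 32)]
fn type_check<'db>(db: &'db dyn crate::Db, function: Function<'db>) {
    // expensive result that we are willing to recompute if it gets evicted
}
```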
### serialization to disk
### representative examples
* elaborate on the calculator one?
* use some of the subdatabases in r-a, e.g., macro expansion?
* 2948 "database" results. doable.
### interning
There are 3 basic options:
* today:
* just track "that you interned something" but not what
* low memory usage and can be collected at a very *rough* level
* fine-grained tracking and some kind of "collect old things" strategy
* more work and memory
* can be collected nicely
* "stop interning" (only intern within the context of a specific function or thing)
* duplication across functions potentially
* but can be collected naturally
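To make the third option concrete, here is a rough sketch following the conventions of the code above; `infer_types` and its local table are hypothetical. The table lives only as long as this query's execution/memo, so it is collected naturally, but other queries may duplicate the same entries.
```rust
#[salsa::tracked]
fn infer_types<'db>(db: &'db dyn crate::Db, function: Function<'db>) {
    // Local "interner": indices are only meaningful within this one query.
    let mut names: Vec<String> = Vec::new();
    let _int_ty = match names.iter().position(|n| n == "int") {
        Some(i) => i,
        None => {
            names.push("int".to_string());
            names.len() - 1
        }
    };
}
```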
In Ruff, interning is being (ab)used for cases like the following (see the sketch after these bullets):
* sometimes there is data that needs to act as an ingredient but cannot be one directly, e.g., the name of a Python module
* it is not guaranteed that the code will run as part of a query
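A rough sketch of what this looks like, assuming new-salsa-style interned structs (`ModuleName` and `resolve_module` are hypothetical names, and details such as the `'db` lifetime may differ): wrapping the module name in an interned struct gives it an id that can be passed to tracked functions like any other ingredient.
```rust
#[salsa::interned]
struct ModuleName<'db> {
    name: String,
}

#[salsa::tracked]
fn resolve_module<'db>(db: &'db dyn crate::Db, name: ModuleName<'db>) {
    // e.g. look up `name.name(db)` on the module search path
}
```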
problems with interning and the new trait solver in r-a: https://rust-lang.zulipchat.com/#narrow/stream/185405-t-compiler.2Frust-analyzer/topic/interning.20and.20requiring.20.60Copy.60
### astral / python
It would be nicer if we could be very lazy.
The problem we run into with salsa: there is a certain amount of overhead.
```rust
#[salsa::tracked]
struct Foo {
    ...
}

#[salsa::tracked] // <-- this will create an ingredient for `something_lazy`
impl Foo {
    fn something_lazy(self, db: &dyn crate::Db) -> ComputedResult {
        self.fields(db)
    }
}

let foo = Foo::new(db, ..);
foo.something_lazy(db); // memoized after 1st execution
```
### next discussion and what to do by then
* every 2 weeks?
* lettucemeet
* meet on a recurring basis?
* PS Niko to be on PTO for 3 weeks starting July 29
* what are some tasks to do
* what Niko would want to focus on is the recursive structure
* sync up with Carl + Micha when he has some thoughts