
Associated Effects paper:

https://dl.acm.org/doi/pdf/10.1145/3656393

Read:

  • Section 2
  • Section 5.1

Flix effect system documentation:

https://doc.flix.dev/effect-system.html

Plan for timing:

  • Read quietly: 20 minutes
  • Presentation: 10 minutes
  • Live programming: 15 minutes
  • Discussion

Discussion

Attendance

  • People: TC, nikomatsakis, scottmcm, tmandry, Josh, Magnus Madsen, Jonathan Lindegaard, Matthew Lutze, Nadri, Urgau, Jack Huey, Yosh, CE, Santiago, eholk, RalfJ, Trevor Gross

Meeting roles

  • Minutes, driver: TC

Paper questions about specific effect types: Heap

Josh:

the fold functions on a mutable data structure have some heap effect

Why is this an issue? What is the significance of tracking this as an effect? Is this about the possibility that heap operations might fail, or is there some other reason to treat it as an effect even if it can't fail?

As far as I can tell from the rest of the paper, it looks like this ties in with how Flix does region tracking to handle memory reclamation and make sure memory can't be used after the region ends. If we're relying on Rust lifetimes to handle all of that, is there any reason we'd still need to treat Heap as an effect?

TC: Our borrow checker is actually a limited kind of effect system, so we don't need to use effects to track regions. The parallel is interesting though, and Flix uses their effect system to track lifetimes.

TC: (In fact, Section 2.2 of the paper can be read as a kind of proof of how a borrow checker is a limited kind of effect system.)

TC: The key point here is that while they need to be generic over things for their reasons, and we need to be generic over things for our reasons, the mechanism by which we can be generic might be the same.

Nadri: I think a good comparison is an arena: their Array[r, t] is like a bumpalo::Vec<'a, T>, and the heap effect tracks that we can't access the vec outside the arena's lifetime.
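
A minimal Rust sketch of that analogy (assuming the bumpalo crate with its "collections" feature enabled; not discussed in the meeting):

use bumpalo::{collections::Vec as BumpVec, Bump};

fn main() {
    // The arena plays the role of Flix's region `r`.
    let arena = Bump::new();

    // `v` borrows the arena, much like Array[r, t] is tied to its region `r`:
    // the borrow checker won't let `v` outlive `arena`.
    let mut v = BumpVec::new_in(&arena);
    v.push(1);
    v.push(2);
    assert_eq!(v.len(), 2);

    // Dropping the arena while `v` is still live is rejected, which is
    // roughly the job the Heap effect does in Flix:
    // drop(arena); // error: cannot move out of `arena` because it is borrowed
}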

Paper questions about specific effect types: IO

But, we cannot readily implement readers or writers using algebraic effects because the signatures of the read and write functions are fixed to have the global, unhandleable IO effect.

Josh: What is the significance of the IO effect? Is this an issue for a language like Rust that does not have any concept of "purity" the way that e.g. Haskell does? Is there a reason we would need to track this as an effect even though we don't care about the pure-vs-IO distinction?

TC: Purity raises some of the same issues that come up for us with const. Which is to say that const isn't exactly pure, but there are mechanical similarities. (Yosh and Nadri concur)

Nadri: It's not an issue for Rust in the sense that we have the choice of whether to reify IO as an effect. If we do, that would let us talk about pure functions, with all the benefits that brings (e.g. their "parallelize when pure" optimization) plus the compatibility and complexity drawbacks. If we don't, all functions would implicitly be allowed to do IO, as they are today.

Josh: Fair enough. I wanted to make sure there was no fundamental reason that we would need to do so. It's certainly an interesting one to explore, and seems related to the concept of what you can do in const and our discussion about inverting const to runtime. Though, it doesn't seem like a given that const is exactly !IO.
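
As a small illustration of the const analogy (a sketch, not from the meeting): const fn is a context where IO-like operations are mechanically excluded, but the same function can also run at runtime, interleaved with IO, so const and !IO don't line up exactly.

const fn add(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    // Evaluated at compile time; no IO can happen here.
    const SUM: u32 = add(2, 3);

    // The same `const fn` can also be called at runtime, possibly interleaved
    // with IO, which is one way `const` differs from a `!IO` effect.
    println!("{}", add(SUM, 4));

    // By contrast, something like `const fn hello() { println!("hi"); }` is
    // rejected today: `println!` cannot be called in a const context.
}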

"Purity reflection"

Josh: Is the concept of "purity reflection" mentioned at https://doc.flix.dev/effect-system.html a kind of specialization? "If the provided function is limited to these effects, do this optimization"?

Is this done via a conditional within a single implementation, or dispatch to different implementations?

Nadri: There's an example in the linked docs; it's some sort of conditional on types.

Josh: Found the link to that, thank you. Makes sense.
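
For shape only, a loose analogy in Rust (hypothetical; Rust cannot inspect a closure's effects, and this is not how Flix implements it): Flix branches on the inferred effect set of the provided function inside one generic implementation, so the closest sketch here is a type-level conditional on a made-up Purity marker that the caller supplies.

use std::thread;

trait Purity {
    const IS_PURE: bool;
}
struct Pure;
struct Impure;
impl Purity for Pure {
    const IS_PURE: bool = true;
}
impl Purity for Impure {
    const IS_PURE: bool = false;
}

// One implementation, with a type-level conditional selecting the strategy.
fn map_all<P: Purity, T: Sync, U: Send>(f: impl Fn(&T) -> U + Sync, xs: &[T]) -> Vec<U> {
    if P::IS_PURE {
        // "Pure": free to evaluate in parallel (one scoped thread per element,
        // purely for illustration).
        let f = &f;
        thread::scope(|s| {
            let handles: Vec<_> = xs.iter().map(|x| s.spawn(move || f(x))).collect();
            handles.into_iter().map(|h| h.join().unwrap()).collect()
        })
    } else {
        // Effectful: evaluate sequentially, in order.
        xs.iter().map(|x| f(x)).collect()
    }
}

fn main() {
    assert_eq!(map_all::<Pure, _, _>(|x: &i32| x * 2, &[1, 2, 3]), vec![2, 4, 6]);
    assert_eq!(map_all::<Impure, _, _>(|x: &i32| x * 3, &[1, 2, 3]), vec![3, 6, 9]);
}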

Multiple resumption and async

Nadri: The docs say "multiple resumptions enable things like async". What does that look like?

Nadri: I ask because, for linearity reasons, we probably can't do multiple resumptions in Rust, and it doesn't seem necessary for async given that we implement async using our coroutine transform, which doesn't support multiple resumption.

Jonathan: The comment in the doc is about implementing a kind of scheduler, but if you already have an implementation of one, you don't need it, no. The handlers use the effect tracking system, but the tracking makes sense even without handlers.

Nadri: Still curious to see an example of what the docs were alluding to.

Jonathan: The start of the section with the list of features just refers to algebraic effects in general; multiple resumptions wouldn't be required for a scheduler.

Nadri: I'm quoting:

Flix supports algebraic effects with multiple resumptions. We can use such effects to implement async/await, backtracking search, cooperative multi-tasking, and more.

Nadri: I completely see how to do backtracking search with multiple resumption, but what would the other two look like? Like, what's the effect?

Nadri: I'd personally encode rust's async into something Flix-like as:

eff Async[Out] {
    def wake_me_up_later(): () 
    def return(_: Out): Void // Jonathan: return(f: () -> Out): Void
}

Maybe there's a Waker somewhere but that's beyond my knowledge.

Jonathan: Yes, that's also what I'd do, just thunking the return value.

Nadri: OK, fair. Maybe the docs writer had something else in mind.

Jonathan: Oh, although the Async effect should also have a method to run a new function. And I think instead of return, we would return a value through side effects; otherwise you have to link the process back to its "spawner".

Nadri: I guess we don't even need return since functions return values already

Nadri: Also, we're missing one of async's main superpowers: intra-task concurrency.

eff Async {
    def wake_me_up_later(): Unit
}

// Alternate running `f` and `g` whenever the other yields, until both complete.
fn join(f: () -> A \ ef + Async, g: () -> B \ ef + Async) -> (A, B) \ ef + Async {
    let f_state = Unstarted(f);
    let g_state = Unstarted(g);
    while true {
        match f_state {
            Unstarted(f) => {
                run {
                    f_state = Done(f());
                } with Async(resume_f) {
                    // can I extract the suspended computation out of the handler?
                    // Jonathan: yes, you can do anything in here you'd like, e.g. storing `resume_f`.
                    f_state = Pending(resume_f);
                    // what's the return type of a handler block?
                    // Jonathan: the handler branch is in some sense a conditional branch of
                    // the main body, so the same type as inside the run block 
                }
            }
            Pending(resume_f) => {
                run {
                    // does the suspended computation still do the effect? what's its return type?
                    // Jonathan: depends on the handler flavor (deep vs shallow). we have deep handlers
                    // so resume_f is recursive under this handler. This means that resume_f does not have Async
                    f_state = Done(resume_f());
                } with Async(resume_f) {
                    f_state = Pending(resume_f);
                }
            }
            Done(_) => {}
        }
        // same for g
        if both_done { break }
        // yield so that progress can be made
        wake_me_up_later()
    }
    (f_state.unwrap_done(), g_state.unwrap_done())
}

fn select(f: () -> A \ ef + Async, g: () -> B \ ef + Async) -> Either<A, B> \ ef + Async

Nadri: Does that work?

Jonathan: I'll look it over. I also wanted to code it up, but my laptop can only handle the call.

Jonathan: I made in-code comments
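
For comparison, a rough Rust-side counterpart of wake_me_up_later() (an illustrative sketch, not from the meeting; the name yield_now is ours): a future that suspends once and asks to be polled again, which is the role the captured resume_f plays in the handler above.

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A future that yields to the executor exactly once, then completes.
// This is essentially what `tokio::task::yield_now` does.
struct YieldNow {
    yielded: bool,
}

impl Future for YieldNow {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            // Schedule another poll, then suspend. In the effect encoding,
            // this is the point where the handler gets hold of the suspended
            // computation.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

fn yield_now() -> impl Future<Output = ()> {
    YieldNow { yielded: false }
}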

"Interesting properties" of effect system as compared to Rust

tmandry:

  • Set operations
  • Handling an effect without caring about the other effects
  • Polymorphism

Interesting properties of Rust's "monadic" system

  • You can express, e.g., a function that can fail to return an iterator; effect orderings are encoded by the types (see the sketch below).
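
A small Rust illustration of that point (a sketch, not from the meeting): where Result sits relative to the iterator fixes whether failure happens once, up front, or per item.

use std::io;
use std::num::ParseIntError;

// Fallibility "outside" the iterator: the whole function can fail before any
// iteration happens.
fn lines_or_fail(path: &str) -> io::Result<impl Iterator<Item = String>> {
    let contents = std::fs::read_to_string(path)?;
    Ok(contents.lines().map(String::from).collect::<Vec<_>>().into_iter())
}

// Fallibility "inside" the iterator: iteration always starts, but each item
// can fail independently.
fn parsed(input: &str) -> impl Iterator<Item = Result<i64, ParseIntError>> + '_ {
    input.split_whitespace().map(|word| word.parse::<i64>())
}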