# Conductor/NeuronBuilder design meeting 2022-05-24

## Updates on NeuronBuilder.jl

The O'Leary group has fleshed out additional features in NeuronBuilder.jl, including exploring some new ideas about the backend (e.g. defining Flows to help logically construct equations for `ODESystem`s). According to Dhruva, the current plan is to cap off the work as it relates to ongoing projects with Andrea and Thiago. We can then see about merging the "lessons learned" between the two code bases.

## Updates on Conductor.jl

The `main` branch is once again stable and passing tests. All components now implement ModelingToolkit's `AbstractSystem` interface. That means we can call (for example) `equations(x::Compartment)` and it will lazily construct a system of equations based on the membrane currents contained in the compartment.

The next sprint goal is to get small multi-compartment neurons working according to this [pull request](https://github.com/wsphillips/Conductor.jl/pull/26). After that I will work on small-to-medium populations of neurons connected via either explicitly or implicitly defined topologies.

## Other discussed topics

### Specification of networks

Conductor.jl and NeuronBuilder.jl can both work with a list of edges to construct networks of neurons. The objects that define these, and their usage, are almost identical in both packages.

In NeuronBuilder.jl:

```julia=
neurons = [AB, PY, LP]

ABLP_chol = directed_synapse(AB, LP, Chol(30.0 * synaptic_conv))
ABPY_chol = directed_synapse(AB, PY, Chol(3.0 * synaptic_conv))
# ... etc

connections = [LPAB_glut, ABPY_chol #= ... etc =#]

# returns ODESystem
stg_2 = add_all_connections(neurons, connections)
```

In Conductor.jl this is specified as:

```julia=
topology = [Synapse(ABPD => LP, Glut(30nS), EGlut),
            Synapse(ABPD => LP, Chol(30nS), EChol),
            #= ... etc =#];

network = NeuronalNetwork(topology)
```

NeuronBuilder.jl also supports specifying networks via predefined methods that work from a simple adjacency matrix. Specifically, [adjacency matrices are hand-written by the user](https://github.com/Dhruva2/NeuronBuilder.jl/blob/06ab50874d84c33ae25e120ca4b29d2b1ac293a9/scripts/connected_STG.jl#L85-L102) and the user provides their own [lookup table of named neurons](https://github.com/Dhruva2/NeuronBuilder.jl/blob/06ab50874d84c33ae25e120ca4b29d2b1ac293a9/scripts/connected_STG.jl#L79-L83), which are collectively used by `build_network` to construct an `ODESystem` that represents the network.

I [previously hinted](https://github.com/wsphillips/Conductor.jl/issues/15#issue-1002942395) that having network constructors accept a user-supplied topology backed by a Graphs.jl type might work here. I will get back to you all with a demo.
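As a rough sketch of what that could look like (not current Conductor.jl API; the `NeuronalNetwork` method at the end is hypothetical, and the neuron/synapse names are simply reused from the snippets above for illustration), the topology could live in a plain `SimpleDiGraph`, with synapse models keyed by edge:

```julia=
# Sketch only: assumes the neurons and synapse models from the snippets above.
using Graphs

# One vertex per neuron; the vector doubles as the vertex -> neuron lookup table
neurons = [ABPD, LP, PY]

g = SimpleDiGraph(length(neurons))
add_edge!(g, 1, 2)   # ABPD -> LP
add_edge!(g, 1, 3)   # ABPD -> PY

# Synapse models (conductance model + reversal) keyed by graph edge
synapses = Dict(Edge(1, 2) => (Glut(30nS), EGlut),
                Edge(1, 3) => (Chol(3nS), EChol))

# A network constructor could then accept the graph directly, e.g. (hypothetical):
# network = NeuronalNetwork(g, neurons, synapses)
```

One appeal of this is that Graphs.jl generators (random graphs, grids, etc.) could then serve directly as the "implicitly defined topologies" mentioned above.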
### Backend logic/model validation

Both Conductor.jl and NeuronBuilder.jl have to reason about which channels, currents, reversal potentials, etc. belong together, and also about how to resolve connections among component systems. At minimum, this involves matching objects by the type of the associated ion species and checking whether the object is marked as an input or an output. Both packages use custom structs to help organize this kind of data.

If I understand correctly, NeuronBuilder.jl uses `FlowChannel` plus "actuators" and "sensors", which have a family of associated functions. I can also see that the type signature is being used to generate symbolics via a lookup table of internal names..?

Conductor.jl solves the same problem by providing a handful of optional primitives that are really just symbolics. These are things like `IonConcentration()`, `IonCurrent()`, `MembranePotential()`. Calling these constructors returns a [symbol that contains extra embedded metadata](https://github.com/wsphillips/Conductor.jl/blob/4a2674beaa764adf52c8ddc9b7d835df95cee503/src/ions.jl#L23-L54). All [symbols used by ModelingToolkit.jl carry metadata](https://mtk.sciml.ai/stable/basics/Variable_metadata/); it's normally used to store things like default values, but it can be extended by the user. The user may then write their equations using these "special" symbols (or just stick to plain symbolics) and everything gets traced as normal ModelingToolkit.jl variables.

While building components, Conductor.jl parses equations to build a list of inputs and outputs. When a system in Conductor.jl gets compiled to an `ODESystem`, it then checks whether all detected inputs have a corresponding mathematically generated state. If not, it tries to resolve it. The metadata embedded in the symbols serves as an aid for object matching and resolving undefined states.

In the end the difference is mainly "where is the information stored": in NeuronBuilder.jl it is embedded in type parameters; in Conductor.jl it's embedded in the metadata dictionary of symbols. I have two comments here:

1. Be careful about hijacking parametric types: see the [Julia docs](https://docs.julialang.org/en/v1/manual/performance-tips/#The-dangers-of-abusing-multiple-dispatch-(aka,-more-on-types-with-values-as-parameters)) and Tim Holy's [talk on compiler latency](https://youtu.be/rVBgrWYKLHY?t=1128). Here it isn't too bad since this isn't "hot" code, but it's good to keep in mind before you get fully committed to a heavily parameterized type system and then try to scale up.
2. ModelingToolkit.jl has both "flows" and the concept of input/output metadata built in. I'm not sure whether that would help in your case, but just an FYI (see the MTK metadata docs linked above and also [here](https://mtk.sciml.ai/stable/basics/ContextualVariables/)). Two rough sketches follow below.
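On the input/output half of point 2, here is a minimal example of the built-in variable metadata (nothing Conductor- or NeuronBuilder-specific; the variable names are made up):

```julia=
using ModelingToolkit

@variables t
@variables Isyn(t) [input = true]    # mark as an input of the component
@variables Vm(t) [output = true]     # mark as an output of the component

ModelingToolkit.isinput(Isyn)   # true
ModelingToolkit.isoutput(Vm)    # true
```

This is roughly the same information Conductor.jl currently recovers by parsing equations and reading its own symbol metadata, so it might be possible to lean on the built-in flags instead of (or alongside) custom metadata.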
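For the "flows" half, MTK's acausal connectors let you tag a variable with `[connect = Flow]`, and `connect(...)` then generates the balance equations when the enclosing system is expanded. A minimal sketch, lifted almost verbatim from the MTK acausal-modeling tutorial (a generic electrical pin; whether this maps cleanly onto NeuronBuilder.jl's Flow idea is an open question):

```julia=
using ModelingToolkit

@variables t

@connector function Pin(; name)
    # `i` is a Flow variable (sums to zero across a connection); `v` is an "across" variable
    sts = @variables v(t)=1.0 i(t)=1.0 [connect = Flow]
    ODESystem(Equation[], t, sts, []; name = name)
end

@named p1 = Pin()
@named p2 = Pin()

# A symbolic connection equation; when the enclosing system is expanded/simplified
# it becomes `p1.v ~ p2.v` and `p1.i + p2.i ~ 0`.
conn = connect(p1, p2)
```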