# Julia/SciML + Comp Neurosci meeting notes 2021/07/23
During the meeting, we talked about some of the motivations behind building tools for simulating neuronal models in Julia. The broad points that I touched on were:
- Inflexibility of existing simulator engines
- Productivity
- Scaling (including models of varying computational size, as well as "multiscale" and/or layered modeling)
Others also raised excellent feedback and points during the call. Below, I've outlined specific ways we might further address those topics as we move forward and, finally, provided a list of next action items.
## 1. User-Friendliness/Approachability
Any tool or solution for building models/simulations should be friendly enough to newcomers that it isn't overwhelming or off-putting to get started. Obviously we want to strive for a clearly-defined, intuitive, and robust API for building models and running simulations. But there are also more concrete steps we can take to support newcomers.
### Use of Julia macros to simplify syntax further
The current demos for Conductor.jl use direct API calls, with a programmatic model building approach that vaguely resembles the Brian2 API. But the specification of models could be even simpler. For example, in Symbolics.jl/ModelingToolkit.jl, variables are declared using convenient macros:
```julia
@parameters t
@variables V(t) m(t)=0.0 x[1:5](t)
```
The same approach could be taken with specifying various neuroscience-specific primitives. As a *fictitious* example, we could decide to have a syntax for a symbol `ICa` representing the net $Ca^{2+}$ flux:
```julia
@ioncurrent ICa{Calcium}=0.05µM [net = true]
```
Model components can also be specified in a simpler way. We can take the specification of a chemical reaction in [Catalyst.jl](https://github.com/SciML/Catalyst.jl) as an example:
```julia
rn = @reaction_network Reversible_Reaction begin
    k1, A --> B
    k2, B --> A
end k1 k2
```
Within macro calls, we can repurpose the meaning and usage of syntax to our liking. This lets us expose any programmatic interface to users as a more concise DSL. As another mock example, one might specify an ion channel with nested macros:
```julia
NaV = @ionchannel NaV{Sodium} begin
    @gate m begin
        α = 0.1*(Vₘ + 40)/(1 - exp(-(Vₘ + 40)/10))
        β = 4*exp(-(Vₘ + 65)/18)
    end p = 3
    @gate h begin
        α = 0.07*exp(-(Vₘ + 65)/20)
        β = 1/(1 + exp(-(Vₘ + 35)/10))
    end
end
```
As of this writing, convenience macros for Conductor.jl have not yet been implemented, but they are very much on the agenda (albeit at lower priority).
### GUI and/or web frontend for non-technical users
For model interaction, [Makie.jl](https://github.com/JuliaPlots/Makie.jl) is an excellent package for creating plots with sliders and input boxes to update models in a way analogous to the GUI provided by [Xolotl](https://github.com/sg-s/xolotl).
For more advanced interaction and plotting, I intend to make use of [CImGui.jl](https://github.com/Gnimuc/CImGui.jl) and a complementary plotting interface that I personally maintain, [ImPlot.jl](https://github.com/wsphillips/ImPlot.jl) (also see the source package [implot](https://github.com/epezent/implot)). We can potentially also use [imnodes](https://github.com/Nelarius/imnodes) via [available binaries](https://github.com/JuliaBinaryWrappers/CImGuiPack_jll.jl) for composing components.
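As a rough sketch of the Xolotl-style interactivity described above, Makie's layout sliders can drive an `Observable` that re-computes a trace on the fly. Everything below (the gain parameter and the stand-in waveform) is invented purely for illustration:

```julia
using GLMakie

fig = Figure()
ax = Axis(fig[1, 1], xlabel = "t (ms)", ylabel = "V (mV)")

# A slider widget placed below the axis; dragging it updates the plot live
gain = Slider(fig[2, 1], range = 0.1:0.1:5.0, startvalue = 1.0)

t = 0:0.1:100
# `lift` re-evaluates the waveform whenever the slider's Observable changes
V = lift(gain.value) do g
    @. -65 + g * 20 * exp(-(t - 50)^2 / 50)   # stand-in "response", not a real model
end
lines!(ax, t, V)
fig
```

In a real application the body of the `lift` would re-run (or re-parameterize) a model simulation rather than evaluate a toy expression.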
### Frequently-updated, well-written documentation + Pluto.jl notebooks and/or Weave.jl tutorials
Establish an early habit of documenting each feature/type/function as they are added (both public API and internal API/dev docs). A first action item would be to get `Documenter.jl` fully set up and integrated with Conductor.jl.
### Keep a record of discussions for posterity
There are several possibilities here:
- [issues](https://github.com/wsphillips/Conductor.jl/issues) + [discussions](https://github.com/wsphillips/Conductor.jl/discussions) on GitHub
- collaborative platforms like hackmd.io or Google Docs
- Julia Discourse
We should maintain a persistent record somewhere of design decisions, troubleshooting, and background material associated with the project(s). Where such information lives on the internet can change, but let's start saving all the scraps of information now!
## 2. Modularity
We want to be able to change parts of models and how they are simulated as desired. This would include (but is not limited to):
- Borrowing or extending code from other packages (e.g. reusing parts of LightGraphs.jl for network topology)
- Freely changing code generation/code running components: solvers, discretization methods, compute backends
- Using custom and/or stock models to represent different model components with a high degree of flexibility (e.g. continuous-time Markov chains, FluxML)
Julia's JIT compilation and multiple dispatch solve the first example, allowing an enormous amount of generic code reuse while maintaining performance. Building models symbolically via ModelingToolkit.jl allows generating efficient functions for use with all the solvers offered by DiffEq, automatically restructuring code for efficiency and parallelism, etc.
Building up models in a symbolic intermediate representation also contributes toward modularity with the broader Julia ecosystem. Models can be rewritten ad hoc such that the generated Julia code is compatible with other downstream packages. For instance, a `PDESystem` can be transformed for solution by multiple PDE-solving backends, including those outside SciML. [GalacticOptim.jl](https://galacticoptim.sciml.ai/stable/) acts as a compatibility layer for other optimization packages in the broader Julia ecosystem. Its [support for ModelingToolkit.jl](https://galacticoptim.sciml.ai/stable/API/modelingtoolkit/) is an example of how this can work.
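To make the symbolic-to-numeric pipeline concrete, here is a minimal sketch: a toy leak-conductance equation (parameter values invented for illustration) is built symbolically with ModelingToolkit.jl, simplified, and compiled into an efficient problem that any DiffEq solver can consume:

```julia
using ModelingToolkit, OrdinaryDiffEq

@parameters t g E
@variables V(t)
D = Differential(t)

# A toy passive-leak membrane equation, written symbolically
@named leak = ODESystem([D(V) ~ -g * (V - E)], t)
sys = structural_simplify(leak)

# Symbolic system -> compiled numeric problem -> any DiffEq solver
prob = ODEProblem(sys, [V => -65.0], (0.0, 100.0), [g => 0.1, E => -70.0])
sol = solve(prob, Tsit5())
```

The same symbolic system could instead be transformed or handed to other backends before code generation, which is the modularity argument above.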
The last example is dependent on our own careful design choices. The core types, organization, and common functions should be generic enough such that it's easy to substitute different representations of, for example, a synapse model.
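To illustrate the kind of substitutability we're after, here is a minimal dispatch-based sketch. The synapse types and the `conductance` function are hypothetical, not Conductor.jl API:

```julia
abstract type AbstractSynapse end

# Two interchangeable synapse representations (hypothetical examples)
struct ExpSynapse <: AbstractSynapse
    τ::Float64
    gmax::Float64
end

struct AlphaSynapse <: AbstractSynapse
    τ::Float64
    gmax::Float64
end

# Each representation supplies its own conductance kernel. Network code only
# relies on this common interface, so one model can be swapped for another
# without touching the rest of the simulation.
conductance(s::ExpSynapse, t) = s.gmax * exp(-t / s.τ)
conductance(s::AlphaSynapse, t) = s.gmax * (t / s.τ) * exp(1 - t / s.τ)
```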
## 3. Appropriate levels of abstraction/flexibility
The `Gate` data type is treated as a fundamental building block, and concerns were raised that it might be too specialized. The `Gate` constructor used in the demo is subtyped from `AbstractGatingVariable` and is for illustrative purposes. It's intended that additional `Gate` constructors can be built ad hoc, and that the `AbstractGatingVariable` interface should be flexible, accepting any arbitrary equation. For instance:
```julia
kdr_kinetics = [
    Gate(AlphaBetaRates,
         αₙ = IfElse.ifelse(Vₘ == -55.0, 0.1, (0.01*(Vₘ + 55.0))/(1.0 - exp(-(Vₘ + 55.0)/10.0))),
         βₙ = 0.125 * exp(-(Vₘ + 65.0)/80.0),
         p = 4)]
```
Here, the first argument is just a stub type for selecting a specialized constructor method. Keyword arguments like `αₙ` are handled by conditional branches and eventually passed to more generic, naming-agnostic functions. This is evident in the type definition for `Gate`:
```julia
struct Gate <: AbstractGatingVariable
    sym::Num                # symbol/name (e.g. m, h)
    df::Equation            # differential equation
    ss::Union{Nothing, Num} # optional steady-state expression for initialization
    p::Float64              # optional exponent (defaults to 1)
end
```
`Gate` can therefore be directly called with its default constructor, for example:
```julia
@variables m(t)
# Provide "any" equation to a Gate
mgate = Gate(m, D(m) ~ 2*m, nothing, 1.0)
```
And in fact, one could write `MySpecialGate <: AbstractGatingVariable` with additional fields and methods, as long as it adheres to the interface we decide upon. In any case, I agree the design could be better. I propose that we extend `AbstractGatingVariable` to hold any subtype of `AbstractSystem`, rather than standalone equations. The same should apply to synapse models. What does this mean? Aside from simple equations, one could describe a channel gate with a whole system of equations, or use alternative models like a continuous-time Markov chain via [`JumpSystem`](https://mtk.sciml.ai/stable/systems/JumpSystem/).
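A rough sketch of the proposed direction might look like the following. The type, field names, and the `output` accessor are all invented here for illustration; none of this is existing Conductor.jl API:

```julia
using ModelingToolkit

abstract type AbstractGatingVariable end  # stand-in for the package's abstract type

# Hypothetical: a gate backed by an entire system of equations rather than a
# single `Equation`. The rest of the model only reads one scalar from it.
struct SystemGate <: AbstractGatingVariable
    sys::ODESystem # could be relaxed to any AbstractSystem (e.g. a JumpSystem)
    out::Num       # the gate's single scalar output variable
    p::Float64     # exponent, as in the current Gate type
end

output(g::SystemGate) = g.out  # invented accessor name

# Example: a first-order activation system standing in for a gate
@parameters t
@variables n(t)
D = Differential(t)
@named nsys = ODESystem([D(n) ~ 1 - n], t)
ngate = SystemGate(nsys, n, 4.0)
```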
## 4. Next steps
1. Improve `AbstractGatingVariable` by allowing it to be backed by an `AbstractSystem` rather than just a set of `Equation` types. For now, require only that the system held by a gate returns a single scalar output. Apply a similar design to synapses. See comments above.
2. Implement multi-compartment models ranging in size from 2–20 compartments (i.e. discrete compartments rather than the continuous cable equation). I plan on developing this by testing against previously published models.
Currently, example models might include:
- [Pinsky & Rinzel 1994](https://www.imsc.res.in/~sitabhra/temp/pinsky_rinzel_94.pdf)
- [Davison et al. 2000](https://www.dcs.warwick.ac.uk/~feng/papers/A%20reduced%20compartmental%20model.pdf)
- [Traub et al. 1991](https://pubmed.ncbi.nlm.nih.gov/1663538/)
:::info
If someone has suggestions for additional/alternative models that fall in this category, please let me know!
:::
3. Begin prototyping a system for discrete synaptic propagation (as opposed to continuously integrated synaptic thresholding/events)
4. Get documentation generation in place via `Documenter.jl` and write at least basic doc strings for each object and function currently in the source directory. Potentially transform one or more of the demo scripts into a `Weave.jl` tutorial with commentary and expected output from plots.
5. Write unit tests and set up a continuous integration pipeline on GitHub for automated testing.
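For the discrete synaptic propagation in step 3, one plausible prototype route is DiffEq's callback machinery: deliver an instantaneous conductance increment at each presynaptic spike time instead of continuously integrating a threshold function. The spike times, decay constant, and jump size below are all invented for illustration:

```julia
using OrdinaryDiffEq, DiffEqCallbacks

spike_times = [20.0, 45.0, 70.0]  # hypothetical presynaptic spike times (ms)
τ = 5.0                           # synaptic decay time constant (ms)

decay(u, p, t) = -u / τ                   # conductance decays between events
kick!(integrator) = (integrator.u += 1.0) # discrete jump applied at each spike
cb = PresetTimeCallback(spike_times, kick!)

prob = ODEProblem(decay, 0.0, (0.0, 100.0))
sol = solve(prob, Tsit5(), callback = cb)
```

This keeps the between-event dynamics as an ordinary ODE while the events themselves are handled discretely, which is exactly the separation step 3 is after.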