###### tags: `NDDL`, `code`
# Comparing LTC-SN model formulas with code
This note compares the implementation of the LTC-SN model with the model's description in the paper 'Accurate online training of dynamical spiking neural networks through Forward Propagation Through Time'.
***Model***
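For reference, the model's update equations, as collected in the comparison table below ($\sigma$ is the sigmoid, $f_s$ the approximate firing function, and, per the *Observations* below, $b_{j0} = 0.1$, $\beta = 1.8$):

$$
\begin{aligned}
\rho &= \exp(-dt/\tau_{adp}) = \sigma(\mathrm{Dense}_{adp}[x_t, b_{t-1}])\\
\alpha &= \exp(-dt/\tau_{m}) = \sigma(\mathrm{Dense}_{m}[x_t, u_{t-1}])\\
b_t &= \rho\, b_{t-1} + (1-\rho)\, S_{t-1}\\
\theta_t &= b_{j0} + \beta\, b_t\\
du &= -u_{t-1} + x_t\\
u_t &= \alpha\, u_{t-1} + (1-\alpha)\, du\\
s_t &= f_s(u_t, \theta_t)\\
u_t &\leftarrow u_t\,(1-s_t) + u_{rest}\, s_t
\end{aligned}
$$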

Note that the update formula for $u_t$ leads to a strange multiplication factor of $2\alpha - 1$ on $u_{t-1}$, as derived under *Observations* below.
***Code***
```python
def forward(self, x_t, mem_t, spk_t, b_t):
    # Compute the dense input current; for recurrent layers the previous
    # spikes are concatenated to the input
    if self.is_rec:
        dense_x = self.layer1_x(torch.cat((x_t, spk_t), dim=-1))
    else:
        dense_x = self.layer1_x(x_t)
    # Liquid time-constants: tau_m and tau_adp are themselves computed from
    # the input current and the current state (see the comparison table below)
    tauM1 = self.act1(self.layer1_tauM(torch.cat((dense_x, mem_t), dim=-1)))
    tauAdp1 = self.act1(self.layer1_tauAdp(torch.cat((dense_x, b_t), dim=-1)))
    mem_1, spk_1, _, b_1 = mem_update_adp(
        dense_x, mem=mem_t, spike=spk_t, tau_adp=tauAdp1, tau_m=tauM1, b=b_t
    )
    return mem_1, spk_1, b_1
```
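The constructor of this layer is not shown. Based on the `forward()` above and the table below (which identifies `self.act1` with $\sigma$ and `self.layer1_tauM` / `self.layer1_tauAdp` with $\mathrm{Dense}_m$ / $\mathrm{Dense}_{adp}$), a plausible sketch; the class name and layer sizes are assumptions, not the paper's code:

```python
import torch.nn as nn

class LTCSNCell(nn.Module):  # hypothetical name; the paper's class name may differ
    def __init__(self, input_dim, hidden_dim, is_rec=True):
        super().__init__()
        self.is_rec = is_rec
        in_dim = input_dim + hidden_dim if is_rec else input_dim
        self.layer1_x = nn.Linear(in_dim, hidden_dim)               # produces dense_x
        self.layer1_tauM = nn.Linear(2 * hidden_dim, hidden_dim)    # Dense_m over [dense_x, mem_t]
        self.layer1_tauAdp = nn.Linear(2 * hidden_dim, hidden_dim)  # Dense_adp over [dense_x, b_t]
        self.act1 = nn.Sigmoid()                                    # sigma in the model formulas
```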
```python
def mem_update_adp(inputs, mem, spike, tau_adp, tau_m, b, dt=1, isAdapt=1):
    """
    Parameters:
    - inputs  : dense input current
    - mem     : membrane potential
    - spike   : 0 or 1, whether there was a spike
    - tau_adp : time-constant for decay of the threshold
    - tau_m   : time-constant for decay of the membrane potential
    - b       : parameter for adaptive change of the firing threshold theta
    - dt      : [NOT USED]
    - isAdapt : 0 or 1, whether the neuron uses an adaptive threshold
    """
    # Alpha expresses the single-timestep decay of the membrane potential
    # with time-constant tau_m
    alpha = tau_m
    # Ro expresses the single-timestep decay of the threshold
    # with time-constant tau_adp
    ro = tau_adp
    # Beta is a constant that controls the size of the threshold adaptation;
    # we set beta to 1.8 for adaptive neurons by default
    if isAdapt:
        beta = 1.8
    else:
        beta = 0.0
    # B is a dynamical threshold comprised of a fixed minimal threshold b_j0
    # (a module-level constant) and an adaptive contribution beta * b
    b = ro * b + (1 - ro) * spike
    B = b_j0 + beta * b
    # Update membrane potential
    d_mem = -mem + inputs
    mem = mem + d_mem * alpha
    # Use an approximate firing function to generate a spike, or not
    inputs_ = mem - B
    spike = ActFun_adp.apply(inputs_)
    # Reset mem to zero after a spike
    mem = (1 - spike) * mem
    return mem, spike, B, b
```
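`ActFun_adp` is not defined in the snippet. In surrogate-gradient SNN code it is typically a `torch.autograd.Function` that thresholds at zero in the forward pass and substitutes a smooth surrogate for the Heaviside's derivative in the backward pass. A minimal sketch, assuming a Gaussian surrogate with made-up `gamma` and `lens` values (the paper's code may use a different surrogate):

```python
import math
import torch

gamma = 0.5  # surrogate-gradient scale (assumed value)
lens = 0.5   # surrogate-gradient width (assumed value)

class ActFun_adp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        # input = mem - B, so a spike fires iff the membrane potential
        # exceeds the dynamical threshold
        return input.gt(0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (input,) = ctx.saved_tensors
        # Gaussian surrogate standing in for the Dirac delta that is the
        # true derivative of the Heaviside step
        surrogate = gamma * torch.exp(-(input / lens) ** 2) / (lens * math.sqrt(2 * math.pi))
        return grad_output * surrogate
```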
***Comparing model and code***
| Step | Model | Code |
| --- | --- | --- |
| $\tau_{adp}$ update | $\rho = \exp(-dt/\tau_{adp})$ <br> $= \sigma(\mathrm{Dense}_{adp}[x_t, b_{t-1}])$ | `tauAdp1 = self.act1( self.layer1_tauAdp( torch.cat((dense_x, b_t), dim=-1)))` |
| $\tau_{m}$ update | $\alpha = \exp(-dt/\tau_{m})$ <br> $= \sigma(\mathrm{Dense}_{m}[x_t, u_{t-1}])$ | `tauM1 = self.act1( self.layer1_tauM( torch.cat((dense_x, mem_t), dim=-1)))` |
| | Perform update | `mem_update_adp(...)` |
| | | `alpha = tau_m` (i.e. `tauM1`) <br> `ro = tau_adp` (i.e. `tauAdp1`) |
| $\theta_t$ update | $b_t = \rho b_{t-1} + (1-\rho) S_{t-1}$ <br> $\theta_t = 0.1 + 1.8\, b_t$ | `b = ro * b + (1 - ro) * spike` <br> `B = b_j0 + beta * b` |
| $u_t$ update | $du = -u_{t-1} + x_t$ <br> $u_t = \alpha u_{t-1} + (1-\alpha)\, du$ | `d_mem = -mem + inputs` <br> `mem = mem + d_mem * alpha` |
| spike $s_t$ | $s_t = f_s(u_t, \theta_t)$ | `inputs_ = mem - B` <br> `spike = ActFun_adp.apply(inputs_)` |
| resetting | $u_t = u_t (1-s_t) + u_{rest}\, s_t$ | `mem = (1 - spike) * mem` |
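The $\theta_t$ row lines up exactly once `b_j0 = 0.1` and `beta = 1.8` are plugged in (see the info box below); a toy computation with arbitrary values:

```python
import torch

b_j0, beta = 0.1, 1.8                      # constants, see Observations below
ro = torch.tensor(0.9)                     # arbitrary example value for rho
b, spike = torch.tensor(0.2), torch.tensor(1.0)

b = ro * b + (1 - ro) * spike              # 0.9*0.2 + 0.1*1.0 = 0.28  (= b_t)
B = b_j0 + beta * b                        # 0.1 + 1.8*0.28  = 0.604   (= theta_t)
```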
***Observations***
:::info
For the code to match the model description we need `b_j0 = 0.1`, `beta = 1.8`, and $u_{rest} = 0$.
:::
:::danger
Code and model description for the $u_t$ update do not seem to be equivalent!
1. model description:
$u_t = \alpha u_{t-1} + (1-\alpha) du$
$= \alpha u_{t-1} + (1-\alpha) (-u_{t-1} + x_t)$
$= \alpha u_{t-1} -u_{t-1} + \alpha u_{t-1} + x_t -\alpha x_t$
$= (2 \alpha - 1) u_{t-1} + (1-\alpha) x_t$
2. code:
`mem = alpha * d_mem + mem`
`= alpha * (-mem + inputs) + mem`
`= -alpha * mem + alpha * inputs + mem`
`= (1 - alpha) * mem + alpha * inputs`
3. substitute:
For the input coefficients to match, we need $(1 - \alpha) =$ `alpha`, i.e. $\alpha = 1 -$ `alpha`. Substituting into the model's update:
$u_t = (2(1 -$ `alpha`$) - 1)\, u_{t-1} + (1 - (1 -$ `alpha`$))\, x_t$
$= (1 - 2\,$ `alpha`$)\, u_{t-1} +$ `alpha` $x_t$

So the update of $u_t$ as defined in the model is not equivalent to the update of `mem` in the code: the code implements the standard leaky update $(1 -$ `alpha`$)\, u_{t-1} +$ `alpha` $x_t$, whereas the model description yields the strange $(2\alpha - 1)$ factor noted under *Model*; a numeric check follows after this block.
:::
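A quick numeric illustration of the mismatch (values arbitrary; `a` stands for the code's `alpha`):

```python
import torch

a = torch.tensor(0.7)                          # the code's `alpha`
u_prev, x = torch.tensor(0.5), torch.tensor(1.0)

# code: mem = mem + d_mem * alpha  ==  (1 - a) * u_prev + a * x
u_code = u_prev + (-u_prev + x) * a            # 0.85

# model, with alpha = 1 - a so the input coefficients match:
alpha = 1 - a
u_model = alpha * u_prev + (1 - alpha) * (-u_prev + x)  # (2*alpha - 1)*u_prev + (1 - alpha)*x

print(u_code.item(), u_model.item())           # 0.85 vs 0.5 -> not equivalent
```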