# Binocular Balance Through Hebbian Learning and BCM Theory
![Generated by DALL·E](https://raw.githubusercontent.com/Jim137/bbalance/main/doc/fig/head.png)
Table of Contents:
[toc]
:::warning
<div class="center">
:golf: Github Repo of this work: [Jim137/binocular_balance](https://github.com/Jim137/binocular_balance)
</div>
:::
## Introduction
Neural networks, the intricate webs of neurons in our brains, are not just conduits for electrical impulses but the very foundation of learning and perception.
Among the various mechanisms governing their adaptability, Hebbian learning stands out as a pivotal concept.
In the following, we briefly introduce the basic concepts of Hebbian learning, BCM theory, and binocular balance.
### 1. Hebbian Learning
[Hebbian learning](https://en.wikipedia.org/wiki/Hebbian_theory), a fundamental concept in neuroscience, is named after Donald Hebb, who proposed it in his 1949 book "The Organization of Behavior."
It's often summarized by the phrase "**neurons that fire together, wire together**."
This principle suggests that synaptic connections between neurons are strengthened when they are activated simultaneously.
Hebbian learning is crucial for understanding how experiences and behaviors can lead to changes in the brain's neural networks.
It's a form of synaptic plasticity, playing a key role in learning and memory.
This concept has been instrumental in the development of theories about neural network function and is a foundational element in various fields, including computational neuroscience and psychology.
::: info
:information_source: **Note on Hebbian Learning**
The central concept in Hebbian learning is synaptic plasticity, the ability of synapses to strengthen or weaken over time.
Consider the $i$-th presynaptic neuron and the $j$-th postsynaptic neuron.
The general dynamics of the synaptic weight $w_{ji}$ can be expanded as the following equation:
$$
\tau_w \frac{dw_{ji}}{dt} = c_0 + c_1^{pre}(w_{ji}) x_i + c_1^{post}(w_{ji}) y_j + c_2^{pre}(w_{ji}) x_i^2 + c_2^{post}(w_{ji}) y_j^2 + c_{11}^{corr}(w_{ji}) y_jx_i + \mathcal{O}(3)
$$
The Hebb rule takes the simplest approach, keeping only the correlation term and fixing $c_{11}^{corr}$ to a constant. The discrete version of the Hebb rule is:
$$
w_{ji}(t+1) = w_{ji}(t) + \gamma y_j(t) x_{i}(t)
$$
Or, in the continuous limit:
$$
\tau_w \frac{dw_{ji}}{dt} = y_j x_i
$$
where $\tau_w$ is the time constant of the synaptic weight, $y_j$ is the postsynaptic activity, and $x_i$ is the presynaptic activity.
:::
::: success
:bulb: **Tip on Neuronal Activity**
On the other hand, we can write down the dynamics of the neuronal activity as follows:
$$
\tau \frac{dy_j}{dt} = -y_j + G\left(\sum_i w_{ji} x_i\right)
$$
where $\tau$ is the time constant of the neuronal activity, and $G$ is the gain function.
From experiments, we know that the dynamics of neuronal activity is much faster than that of the synaptic weights, i.e., $\tau \ll \tau_w$.
This justifies the steady-state approximation:
$$
y_j = G\left(\sum_i w_{ji} x_i\right)
$$
Substituting $y_j$ into the weight equation, we obtain:
$$
\tau_w \frac{dw_{ji}}{dt} = G\left(\sum_k w_{jk} x_k\right) x_i
$$
We can see that if $x_i$ dominates the sum and drives the gain function $G$ above its threshold, the synaptic weight $w_{ji}$ increases; a short numerical sketch follows this note.
:::
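Below is a minimal NumPy sketch of this steady-state iteration with a plain Hebb step. The rectified-linear gain, the random inputs, the weight initialization, and the step count are illustrative assumptions rather than the settings used in the notebooks.
```python
import numpy as np

rng = np.random.default_rng(0)

def G(u, threshold=0.0):
    """Rectified-linear gain: zero below the threshold, linear above it."""
    return max(u - threshold, 0.0)

n_inputs = 4
w = np.full(n_inputs, 0.1)   # synaptic weights w_ji onto one postsynaptic neuron
eta = 0.01                   # learning rate (discrete analogue of 1/tau_w)

for _ in range(200):
    x = rng.random(n_inputs)     # presynaptic activities x_i
    y = G(w @ x)                 # steady-state postsynaptic activity y_j = G(sum_i w_ji x_i)
    w += eta * y * x             # plain Hebb step: w_ji <- w_ji + eta * y_j * x_i

print(w)   # the weights only grow; this is the divergence addressed by Oja's rule below
```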
:::danger
:warning: **Important**
However, the synaptic weights will diverge if there is no inhibition.
Consequently, we introduce [Oja's rule](https://en.wikipedia.org/wiki/Oja%27s_rule) to normalize the synaptic weights.
The Oja-modified Hebbian rule is:
$$
\tau_w \frac{dw_{ji}}{dt} = y_j x_i - w_{ji}y_j^2
$$
And the discrete version is:
$$
w_{ji}(t+1) = w_{ji}(t) + \eta \left[G\left(\sum_k w_{jk}(t) x_k\right)x_i-w_{ji}(t)G\left(\sum_k w_{jk}(t) x_k\right)^2\right]
$$
where $\eta$ is the learning rate (see the sketch after this note).
:::
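For comparison, here is the same loop with the Oja term included; the parameters are again illustrative. With an (effectively) linear gain, the subtractive term $w_{ji} y_j^2$ keeps the weight norm bounded instead of letting it diverge.
```python
import numpy as np

rng = np.random.default_rng(0)
G = lambda u: np.maximum(u, 0.0)   # rectified-linear gain with zero threshold

w = np.full(4, 0.1)   # initial weights
eta = 0.01            # learning rate

for _ in range(5000):
    x = rng.random(4)
    y = G(w @ x)
    w += eta * (y * x - w * y**2)   # Hebbian growth balanced by Oja's normalization term

print(w, np.linalg.norm(w))   # the weight norm saturates (near 1 in this effectively linear case)
```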
### 2. BCM Theory
[BCM theory](https://en.wikipedia.org/wiki/BCM_theory), a pivotal concept in neuroscience, extends the principles of Hebbian learning by introducing a dynamic threshold for synaptic plasticity.
BCM theory, formulated in the early 1980s, proposes that the strength of synaptic connections is not only determined by simultaneous neuron activations but also influenced by the history of neuronal activity.
The BCM model's innovative feature is its variable threshold, which adapts based on the neuron's previous firing patterns, allowing for a more nuanced understanding of learning and memory processes in the brain.
This dynamic threshold mechanism is key to explaining both synaptic strengthening (long-term potentiation) and weakening (long-term depression), offering significant insights into neural adaptability and function.
::: info
:information_source: **Note on BCM Theory**
The BCM rule can be described by the following equation:
$$
\tau_w \frac{dw_{ji}}{dt} = y_j\left(y_j - \theta_j\right)x_i
$$
where $\theta_j$ is the dynamical threshold of the BCM theory, given by an exponentially weighted running average of the squared postsynaptic activity:
$$
\theta_j(t) = \langle y_j^2 \rangle = \frac{1}{\tau_\theta} \int_0^t y_j^2(t')\,e^{-(t-t')/\tau_\theta}\,dt'
$$
where $\tau_\theta$ is the averaging time constant of the threshold (distinct from the neuronal time constant $\tau$).
:::
:::danger
:warning: **Important**
Combining the Oja-normalized Hebbian learning and the BCM theory, we obtain:
$$
\tau_w \frac{dw_{ji}}{dt} = y_j\left(y_j - \theta_j\right)x_i - w_{ji}y_j^2
$$
And the discrete version is:
$$
w_{ji}(t+1) = w_{ji}(t) + \eta \left[y_j(t)\left(y_j(t) - \theta_j(t)\right)x_i(t) - w_{ji}(t)\,y_j^2(t)\right]
$$
and
$$
y_j(t) = G\left(\sum_i w_{ji}(t) x_i\right)
$$
These are the iteration rules used in the simulations below; a minimal numerical sketch follows this note.
:::
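A minimal NumPy sketch of this combined iteration follows, with $\theta_j$ updated as a discrete leaky average of $y_j^2$ as defined above; the gain, input statistics, and time constants are illustrative choices, not the notebook settings.
```python
import numpy as np

rng = np.random.default_rng(0)
G = lambda u: np.maximum(u, 0.0)   # rectified-linear gain

w = np.full(4, 0.1)   # weights onto one cortical neuron
theta = 0.0           # BCM sliding threshold theta_j
eta = 0.01            # learning rate
tau_theta = 1e3       # averaging time constant of the threshold

for _ in range(5000):
    x = rng.random(4)                             # presynaptic activities x_i
    y = G(w @ x)                                  # y_j = G(sum_i w_ji x_i)
    theta += (y**2 - theta) / tau_theta           # discrete leaky average: theta_j ~ <y_j^2>
    w += eta * (y * (y - theta) * x - w * y**2)   # BCM term plus Oja's normalization term

print(w, theta)
```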
### 3. Binocular Balance
Binocular balance, a critical aspect of our visual system, ensures a unified and coherent visual experience.
It integrates distinct images from each eye, harmonizing them for depth perception and spatial awareness.
This process is vital for constructing a stable, accurate representation of our three-dimensional world, illustrating the complexity of neural processing in visual perception and the balance between sensory input and neural activity.
The Hebbian learning rule, if applied simplistically to a multiple-input system like the visual cortex, might lead to a dominance of stronger inputs over weaker ones.
However, in reality, sensory inputs are not always of equal strength, and the brain must adapt to this imbalance.
In extreme cases, such as with a sensory impairment, this can disrupt neural balance.
::: info
:information_source: **Note on Binocular Balance**
In clinical practice, the treatment for amblyopia (lazy eye) often involves covering the normal eye to enhance neural connections in the amblyopic eye.
This approach is effective in children, where neural connections are still adaptable, but less so post-adolescence when these connections become more fixed.
:::
## Methodology
### 1. Binocular Balance
We consider a bi-sensory system model in which one sensory channel carries a negative bias while the other remains unbiased.
The system undergoes three distinct phases:
1. Pre-treatment, where both inputs receive equal random arrays.
2. Treatment, where the input strength to the normal sensory is deliberately reduced.
3. Post-treatment, where the normal sensory input strength is restored, and the learning rate is significantly reduced to simulate aging effects in neural plasticity.
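A minimal sketch of these three phases is shown below, using the `bisensory_nn` class listed in the appendix (assuming it is imported as `from src.neural_networks import bisensory_nn` from the repository root). The network sizes, bias value, phase lengths, scaling factors, and learning rates are illustrative assumptions; the actual setup is in bb.ipynb, linked below.
```python
import numpy as np
from src.neural_networks import bisensory_nn   # the bi-sensory network listed in the appendix

rng = np.random.default_rng(0)
net = bisensory_nn(n_sensory=2, n_cortex=2, n_motor=1)   # illustrative sizes

# Illustrative negative bias on the amblyopic (left) channel.
for neuron in net.left_sensory.neurons:
    neuron.bias = np.float64(-0.1)

def run_phase(steps, right_scale, left_scale, learning_rate):
    """Drive both eyes with the same random array each step, scaled per eye."""
    records = []
    for _ in range(steps):
        data = rng.random(4)
        net.add_input(right_scale * data, left_scale * data)
        records.extend(net.dynamic(1, learning_rate, method="cocktail", is_record=True))
    return records

# 1. Pre-treatment: both eyes receive equal random arrays.
pre = run_phase(steps=1000, right_scale=1.0, left_scale=1.0, learning_rate=0.1)
# 2. Treatment: reduce the input strength of the normal (right) eye.
treatment = run_phase(steps=1000, right_scale=0.5, left_scale=1.0, learning_rate=0.1)
# 3. Post-treatment: restore the input and lower the learning rate to mimic reduced plasticity.
post = run_phase(steps=1000, right_scale=1.0, left_scale=1.0, learning_rate=0.01)
```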
:::danger
:exclamation:The detailed methods of binocular balance are in [:arrow_right:bb.ipynb:arrow_left:](https://github.com/Jim137/binocular_balance/blob/main/bb.ipynb).
:::
### 2. Binocular Deprivation
We construct a second bi-sensory system model, focused on binocular deprivation, in which both sensory inputs are initially unbiased.
The model involves five phases:
1. Normal Rearing, where both inputs receive identical arrays.
2. Monocular Deprivation, reducing the input strength to one sensory system.
3. Binocular Deprivation, reducing input strength to both sensory systems.
4. Reverse Suture, restoring full input strength to the initially deprived sensory channel.
5. Binocular Recovery, where both systems receive full-strength, unbiased inputs.
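Under the same assumptions as the binocular-balance sketch above, the five phases can be expressed as a simple schedule of per-eye input scales; the deprivation factor, phase lengths, and learning rate here are illustrative, and the actual parameters are in bd.ipynb, linked below.
```python
import numpy as np
from src.neural_networks import bisensory_nn

rng = np.random.default_rng(0)
net = bisensory_nn(n_sensory=2, n_cortex=2, n_motor=1)   # illustrative sizes

# (phase, steps, right-eye scale, left-eye scale); the 0.2 deprivation factor is illustrative
schedule = [
    ("normal_rearing",        1000, 1.0, 1.0),
    ("monocular_deprivation", 1000, 1.0, 0.2),
    ("binocular_deprivation", 1000, 0.2, 0.2),
    ("reverse_suture",        1000, 0.2, 1.0),   # restore the initially deprived (left) input
    ("binocular_recovery",    1000, 1.0, 1.0),
]

records = {}
for phase, steps, right_scale, left_scale in schedule:
    records[phase] = []
    for _ in range(steps):
        data = rng.random(4)                      # identical stimulus array for both eyes
        net.add_input(right_scale * data, left_scale * data)
        records[phase].extend(net.dynamic(1, 0.1, method="cocktail", is_record=True))
```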
:::danger
:exclamation:The detailed methods of binocular deprivation are in [:arrow_right:bd.ipynb:arrow_left:](https://github.com/Jim137/binocular_balance/blob/main/bd.ipynb).
:::
## Results
### 1. Binocular Balance
![Weight value before/after the treatment](https://raw.githubusercontent.com/Jim137/binocular_balance/main/doc/fig/bbalance.png)
The figure above shows the (averaged) synaptic weight before, during, and after the treatment.
Initially, the synaptic weight of the normal sensory input is stronger compared to the amblyopic sensory input.
During the treatment phase, we observe a more rapid reduction in the synaptic weight of the normal sensory input.
Post-treatment, the synaptic weight of the amblyopic sensory input becomes slightly stronger than that of the normal sensory, achieving a more balanced signal transmission to the cortex.
### 2. Binocular Deprivation
![Weight value between sensory neuron and cortex neuron under different conditions](https://raw.githubusercontent.com/Jim137/binocular_balance/main/doc/fig/bdeprivation.png)
The above figure illustrates synaptic weight changes between the right/left sensory neuron and the cortex neuron under various conditions:
1. Normal Rearing: Equal synaptic weights for both sensory inputs.
2. Monocular Deprivation: Reduced synaptic weight in the deprived sensory neuron.
3. Binocular Deprivation: Slight reduction in synaptic weights for both neurons.
4. Reverse Suture: Restoration of the initially deprived neuron's synaptic weight, coupled with a reduction in the other.
5. Binocular Recovery: An increase in synaptic weights for both neurons.
We compare these results with real experimental data:
![Real experiment](https://raw.githubusercontent.com/Jim137/binocular_balance/main/doc/fig/bdeprivation_r.png)
Our model aligns closely with the experimental data except in the final phase, where the actual experiment achieves binocular balance while our model does not.
This discrepancy offers an opportunity for further investigation into the model's parameters or assumptions.
## Conclusions
Based on the results, we draw the following conclusions:
1. Effective amblyopia treatment involves reducing the input strength to the normal sensory channel, demonstrating the adaptability of neural connections.
2. Under typical conditions, binocular balance is naturally achieved through the mechanisms of Hebbian learning and BCM theory.
3. In the case of binocular deprivation, early-age synaptic weight adjustments are feasible due to higher learning rates.
4. Real-world experiments on binocular deprivation show that, with equal-strength inputs, the synaptic weights can re-balance during the recovery phase even if they were initially imbalanced.
## References
* H.-H. Lin, Handouts in the course, "Introduction to Neurophysics", National Tsing Hua University, 2023.
* [W. Gerstner, W. M. Kistler, R. Naud and L. Paninski, "Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition", Cambridge University Press, 2014.](https://neuronaldynamics.epfl.ch/index.html)
## Appendix: Code
In the following, I list the code used in the Jupyter notebooks described in the Methodology section.
All the source codes can be found in my Github repo: [Jim137/binocular_balance](https://github.com/Jim137/binocular_balance).
### 1. [src/neuron.py](https://github.com/Jim137/binocular_balance/blob/main/src/neuron.py)
For the weight update, we use a method we call "cocktail", which combines Hebbian learning (with Oja's normalization) and BCM theory.
```python=
import numpy as np
id = 0
class Neuron:
def __init__(self):
global id
self.value = np.float64(0)
self.presynaptic_neuron = []
self.weights = []
self.bias = None
self.input = np.float64(0)
self.input_fluctuation_rate = None
self.timestamp = 0
self.tag = None
self.id = id
id += 1
def __call__(self):
return self.value
def update(
self,
gain=lambda x: np.max([x, np.float64(0)]),
):
sum = [
weight.value * weight.presynaptic_neuron.value for weight in self.weights
]
if self.input_fluctuation_rate:
sum.append(
np.random.normal(
loc=self.input,
scale=self.input_fluctuation_rate,
)
)
else:
sum.append(self.input)
if self.bias:
sum.append(self.bias)
self.value = gain(np.sum(sum))
self.timestamp += 1
def add_presynaptic_neuron(self, neuron):
self.presynaptic_neuron.append(neuron)
self.weights.append(weight(neuron, self))
def weights_metadata(self):
return [weight.metadata() for weight in self.weights]
def metadata(self):
return {
"value": self.value,
"presynaptic_neuron": self.presynaptic_neuron,
"weights": self.weights_metadata(),
"bias": self.bias,
"input": self.input,
"timestamp": self.timestamp,
"tag": self.tag,
"id": self.id,
}
class weight:
def __init__(self, pre: Neuron, post: Neuron):
self.value = np.float128(1e-1)
self.presynaptic_neuron = pre
self.postsynaptic_neuron = post
self.threshold = np.float64(0)
self.timestamp = 0
def __call__(self):
return self.value
def _threshold(self, th_time_constant=1e3):
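        # Leaky-integrator (exponentially weighted running average) estimate of the BCM sliding threshold.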
self.threshold *= np.exp(-1 / th_time_constant)
self.threshold += self.postsynaptic_neuron.value / th_time_constant
def update(self, learning_rate=0.1, method="cocktail", th_time_constant=1e3):
if method == "hebbian" or method == "hebb":
tmp = np.float128(
self.postsynaptic_neuron.value * self.presynaptic_neuron.value
- self.value * np.square(self.postsynaptic_neuron.value)
)
elif method == "bcm":
self._threshold(th_time_constant)
tmp = np.float128(
self.postsynaptic_neuron.value
* (self.postsynaptic_neuron.value - self.threshold)
* self.presynaptic_neuron.value
)
elif method == "cocktail":
self._threshold(th_time_constant)
tmp = np.float128(
self.postsynaptic_neuron.value
* (self.postsynaptic_neuron.value - self.threshold)
* self.presynaptic_neuron.value
- self.value * np.square(self.postsynaptic_neuron.value)
)
else:
raise ValueError("method must be hebbian, bcm or cocktail")
self.value += learning_rate * tmp
def metadata(self):
return {
"value": self.value,
"presynaptic_neuron_id": self.presynaptic_neuron.id,
"postsynaptic_neuron_id": self.postsynaptic_neuron.id,
"threshold": self.threshold,
"timestamp": self.timestamp,
}
class sensory:
def __init__(self, number_of_neurons: int):
self.number_of_neurons = number_of_neurons
self.neurons = [Neuron() for _ in range(number_of_neurons)]
for neuron in self.neurons:
neuron.tag = "sensory"
def input(self, data):
n_data = len(data)
num_cluster = int(np.ceil(n_data / self.number_of_neurons))
for neuron in self.neurons:
neuron.input = np.float64(0)
for i in range(n_data):
j = i // num_cluster
self.neurons[j].input += data[i] / num_cluster
def update(self, learning_rate=0.1, method="cocktail"):
for neuron in self.neurons:
neuron.update()
for weight in neuron.weights:
weight.update(learning_rate, method=method)
def collect(self):
collection = [neuron.metadata() for neuron in self.neurons]
return collection
class cortex:
def __init__(self, number_of_neurons: int):
self.number_of_neurons = number_of_neurons
self.neurons = [Neuron() for _ in range(number_of_neurons)]
for neuron in self.neurons:
neuron.tag = "cortex"
def fully_connect(self):
for i in range(self.number_of_neurons):
for j in range(self.number_of_neurons):
if i == j:
continue
self.neurons[i].add_presynaptic_neuron(self.neurons[j])
def add_sensory(self, sensory: sensory):
for i in range(self.number_of_neurons):
for j in range(sensory.number_of_neurons):
self.neurons[i].add_presynaptic_neuron(sensory.neurons[j])
def update(self, learning_rate=0.1, method="cocktail"):
for neuron in self.neurons:
neuron.update()
for weight in neuron.weights:
weight.update(learning_rate, method=method)
def collect(self):
return [neuron.metadata() for neuron in self.neurons]
class motor:
def __init__(self, number_of_neurons: int):
self.number_of_neurons = number_of_neurons
self.neurons = [Neuron() for _ in range(number_of_neurons)]
for neuron in self.neurons:
neuron.tag = "motor"
def add_cortex(self, cortex: cortex):
for i in range(self.number_of_neurons):
for j in range(cortex.number_of_neurons):
self.neurons[i].add_presynaptic_neuron(cortex.neurons[j])
def update(self, learning_rate=0.1, method="cocktail"):
for neuron in self.neurons:
neuron.update()
for weight in neuron.weights:
weight.update(learning_rate, method=method)
def collect(self):
return [neuron.metadata() for neuron in self.neurons]
```
### 2. [src/neural_networks.py](https://github.com/Jim137/binocular_balance/blob/main/src/neural_networks.py)
```python=
from abc import ABCMeta, abstractmethod
from .neuron import sensory, cortex, motor
class neural_network(object, metaclass=ABCMeta):
def __init__(self, n_sensory: int, n_cortex: int, n_motor: int):
self.n_sensory = n_sensory
self.n_cortex = n_cortex
self.n_motor = n_motor
@abstractmethod
def add_input(self, data):
pass
@abstractmethod
def record(self):
pass
@abstractmethod
def _update(self, learning_rate=0.1, method="cocktail"):
pass
@abstractmethod
def dynamic(self, time, learning_rate=0.1, method="cocktail", is_record=False):
pass
@abstractmethod
def __iter__(self):
pass
class nn(neural_network):
def __init__(
self, n_sensory: int, n_cortex: int, n_motor: int, is_cortex_fully_connect=False
):
super().__init__(n_sensory, n_cortex, n_motor)
self.sensory = sensory(n_sensory)
self.cortex = cortex(n_cortex)
self.motor = motor(n_motor)
if is_cortex_fully_connect:
self.cortex.fully_connect()
self.cortex.add_sensory(self.sensory)
self.motor.add_cortex(self.cortex)
def add_input(self, data):
self.sensory.input(data)
def record(self):
collection = {}
collection["sensory"] = self.sensory.collect()
collection["cortex"] = self.cortex.collect()
collection["motor"] = self.motor.collect()
return collection
def _update(self, learning_rate=0.1, method="cocktail"):
self.sensory.update(learning_rate, method)
self.cortex.update(learning_rate, method)
self.motor.update(learning_rate, method)
def dynamic(self, time, learning_rate=0.1, method="cocktail", is_record=False):
        if is_record:
            recording = []
        for _ in range(time):
            self._update(learning_rate, method)
            if is_record:
                recording.append(self.record())
        if is_record:
            return recording
        else:
            return None
def __iter__(self):
neurons = []
neurons.extend(self.sensory.neurons)
neurons.extend(self.cortex.neurons)
neurons.extend(self.motor.neurons)
return iter(neurons)
class bisensory_nn(neural_network):
def __init__(self, n_sensory: int, n_cortex: int, n_motor: int):
super().__init__(n_sensory, n_cortex, n_motor)
self.right_sensory = sensory(n_sensory)
self.left_sensory = sensory(n_sensory)
self.cortex = cortex(n_cortex)
self.motor = motor(n_motor)
self.cortex.fully_connect()
self.cortex.add_sensory(self.right_sensory)
self.cortex.add_sensory(self.left_sensory)
self.motor.add_cortex(self.cortex)
def add_input(self, data0, data1=None):
if data1 is None:
data1 = data0
self.right_sensory.input(data0)
self.left_sensory.input(data1)
def record(self):
collection = {}
collection["right_sensory"] = self.right_sensory.collect()
collection["left_sensory"] = self.left_sensory.collect()
collection["cortex"] = self.cortex.collect()
collection["motor"] = self.motor.collect()
return collection
def _update(self, learning_rate=0.1, method="cocktail"):
self.right_sensory.update(learning_rate, method)
self.left_sensory.update(learning_rate, method)
self.cortex.update(learning_rate, method)
self.motor.update(learning_rate, method)
def dynamic(self, time, learning_rate=0.1, method="cocktail", is_record=False):
        if is_record:
            recording = []
        for _ in range(time):
            self._update(learning_rate, method)
            if is_record:
                recording.append(self.record())
        if is_record:
            return recording
        else:
            return None
def __iter__(self):
neurons = []
neurons.extend(self.right_sensory.neurons)
neurons.extend(self.left_sensory.neurons)
neurons.extend(self.cortex.neurons)
neurons.extend(self.motor.neurons)
return iter(neurons)
```
### 3. [src/utils.py](https://github.com/Jim137/binocular_balance/blob/main/src/utils.py)
```python=
import numpy as np
import matplotlib.pyplot as plt
def record_splitter(recording: list, nn_type: str):
if nn_type == "bisensory":
sensory = [[], []]
else:
sensory = []
cortex = []
motor = []
for collection in recording:
if nn_type == "bisensory":
sensory[0].append(collection["right_sensory"])
sensory[1].append(collection["left_sensory"])
else:
sensory.append(collection["sensory"])
cortex.append(collection["cortex"])
motor.append(collection["motor"])
return sensory, cortex, motor
def plot_neuron_activity(
collections: list,
ax,
neuron_index: int | list | None = None,
is_box_plot: bool = False,
is_mean_plot: bool = False,
**kwargs
):
"""
If neuron_index is None, plot the average activity of all neurons.
"""
if type(neuron_index) == int:
neuron_index = [neuron_index]
activity = []
for collection in collections:
if neuron_index is None:
activity.append(np.mean([neuron["value"] for neuron in collection]))
else:
tmp = []
for neuron in collection:
if neuron["id"] in neuron_index:
tmp.append(neuron["value"])
activity.append(np.mean(tmp))
if is_box_plot:
ax = box_plot(activity, ax, **kwargs)
elif is_mean_plot:
ax = mean_plot(activity, ax, **kwargs)
else:
ax.plot(activity, **kwargs)
return ax
def plot_weight_value(
collections: list,
ax,
presynaptic_neuron_id: int | list | None = None,
postsynaptic_neuron_id: int | list | None = None,
is_box_plot: bool = False,
is_mean_plot: bool = False,
**kwargs
):
"""
    If presynaptic_neuron_id or postsynaptic_neuron_id is None, average the weight values over the missing argument(s).
"""
if type(presynaptic_neuron_id) == int:
presynaptic_neuron_id = [presynaptic_neuron_id]
if type(postsynaptic_neuron_id) == int:
postsynaptic_neuron_id = [postsynaptic_neuron_id]
values = []
for collection in collections:
if presynaptic_neuron_id is None and postsynaptic_neuron_id is None:
values.append(
np.mean(
[
weight["value"]
for neuron in collection
for weight in neuron["weights"]
]
)
)
elif presynaptic_neuron_id is None:
tmp = []
for neuron in collection:
if neuron["id"] in postsynaptic_neuron_id:
tmp.extend([weight["value"] for weight in neuron["weights"]])
values.append(np.mean(tmp))
elif postsynaptic_neuron_id is None:
tmp = []
for neuron in collection:
for weight in neuron["weights"]:
if weight["presynaptic_neuron_id"] in presynaptic_neuron_id:
tmp.append(weight["value"])
values.append(np.mean(tmp))
else:
tmp = []
for neuron in collection:
if neuron["id"] in postsynaptic_neuron_id:
for weight in neuron["weights"]:
if weight["presynaptic_neuron_id"] in presynaptic_neuron_id:
tmp.append(weight["value"])
values.append(np.mean(tmp))
if is_box_plot:
ax = box_plot(values, ax, **kwargs)
elif is_mean_plot:
ax = mean_plot(values, ax, **kwargs)
else:
ax.plot(values, **kwargs)
return ax
def box_plot(seq, ax, **kwargs):
if "num_box" in kwargs:
num_box = kwargs["num_box"]
del kwargs["num_box"]
else:
num_box = 10
n = len(seq)
box_size = int(np.ceil(n / num_box))
boxes = []
for i in range(num_box):
boxes.append(seq[i * box_size : (i + 1) * box_size])
ax.boxplot(boxes)
ax.plot(np.arange(1, num_box + 1), [np.mean(box) for box in boxes], **kwargs)
ax.set_xticklabels(np.arange(1, num_box + 1) * box_size)
return ax
def mean_plot(seq, ax, **kwargs):
num_cluster = len(seq) // 100
boxes = []
for i in range(100):
boxes.append(seq[i * num_cluster : (i + 1) * num_cluster])
ax.plot(np.arange(100) * num_cluster, [np.mean(box) for box in boxes], **kwargs)
return ax
```
<!-- ## Appendix: Self-Evaluation
1. Assessment of the report grade: A (85)
I think I did well on numerical simulation part, but lack of further interpretation on the results.
However, I think it is excusable since this is my introduction to neuroscience.
Also, there are still many optimizations and parameters to adopt.
If I have the time to do it, I would make it better.
2. Learning and writing process and time
I spent about a month on trying and exploring different topics, such as neural circuits, decision making, logistic map, recurrent neural networks with hebbian learning.
<img src="https://hackmd.io/_uploads/HylYfrVvT.png" style="width:50%">
<img src="https://hackmd.io/_uploads/rk4ifS4w6.png" style="width:50%">
But I couldn't get good results on these topics.
Therefore, I decided to take "binocular balance" from suggested topics.
I spent a week on learning and around 2 weeks on writing the report and doing the numerical simulation.
3. Strengths and weaknesses
The neural networks I designed is designed for variety usages.
It can be adopted to simulate not only hebbian learning or BCM theory, but also a complex system such as recurrent neural networks.
Additionally, I prepared easily-accessible API to reach the data of each neuron and synaptic weight and also the plotting tool to visualize the data.
Such as the boxplot (left) and the time-domain mean plot (right) to better visualize the quickly-changing and fluctuating neuron activity.
<img src="https://hackmd.io/_uploads/SJA4OrEwp.png" style="width:50%">
<img src="https://hackmd.io/_uploads/SJUDuB4PT.png" style="width:50%">
However, I spent most of the time on coding.
The further detailed explainations are not completed very well.
It make this report only about the numerics and cannnot make a good interpretation on the phenomena.
Yet, I think this is the top 3 best reports of mine and the most challenge one in my college life. -->
<!--style-->
<style>
.center{
text-align: center;
vertical-align: middle
}
</style>