###### tags: `Prof Thomas` `Brown`

# Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex - 2016

:::success
It is common to think of a neuron as computing a single weighted sum of all its synapses. This notion, sometimes called a "point" neuron, forms the basis of almost all artificial neural networks.
:::

:::warning
The most fundamental operation of all neocortical tissue is learning and recalling sequences of patterns, what Karl Lashley famously called "the most important and also the most neglected problem of cerebral physiology".
:::

## Something very interesting

The 2016 paper has a paragraph which says

> Lacking a theory of why neurons need active dendrites, almost all artificial neural networks, such as those used in deep learning and spiking neural networks, use artificial neurons with simplified dendritic models, introducing the possibility they may be missing key functional aspects of the biological neural tissue.

After reading this, I suddenly remembered reading the same proposition in another paper. The same theory is suggested in the 2017 paper too, which says

> Lacking a theory of why the neocortex is organized in columns and layers, almost all artificial neural networks, such as those used in deep learning and spiking neural networks, do not include these features, introducing the possibility they may be missing key functional aspects of the biological neural tissue.

It was a great experience to relate the two papers, and it made me think about how well connected the literature is, even though each paper tries to explain a different problem. The 2016 paper tries to find a solution to the above proposition by understanding how biological neurons integrate input from thousands of synapses and whether active dendrites play an essential role.

## Some interesting points the paper tries to address

1. A neuron with several thousand synapses segregated on active dendrites can recognize hundreds of independent patterns of cellular activity, even in the presence of large amounts of noise and pattern variation.
2. A neuron model is proposed where patterns detected on proximal dendrites lead to action potentials, defining the classic receptive field of the neuron, while patterns detected on basal and apical dendrites act as predictions by slightly depolarizing the neuron without generating an action potential.

## Some interesting points in the paper

1. Pyramidal neurons represent the majority of excitatory neurons in the neocortex.
2. The proximal synapses, those closest to the cell body, have a relatively large effect on the likelihood of a cell generating an action potential.
3. Dendrite branches are active processing elements.
4. The activation of several distal synapses within close spatial and temporal proximity can lead to a local dendritic NMDA spike and, consequently, a significant and sustained depolarization of the soma.
5. A cycle of activation leading to prediction leading to activation, and so on, forms the basis of sequence memory.
6. A small set of neighboring synapses acts as a pattern detector.
7. By forming more synapses than necessary to generate an NMDA spike, recognition becomes robust to noise and variation.
8. There is always a probability of a false match, and increasing the number of synapses (point 7) increases the chance of error too. But when the patterns are sparse (active cell population << total cell population), doubling the synapses introduces a tolerance of 50% towards noise while adding only a 1.6 x 10^-18^ chance of error (see the first sketch after this list).
9. If synapses that recognize different patterns are mixed together on a dendritic segment, there is an additional possibility of error from co-activating synapses that belong to different patterns.
10. The paper proposes that the basal dendrites of a neuron recognize patterns of cell activity that precede the neuron firing; in this way the basal dendrites learn and store transitions between activity patterns.
11. Two predictions occur at the same time: lateral connections to basal dendrites predict the next input, and top-down connections to apical dendrites predict multiple sequence elements simultaneously.
12. Each potential synapse is assigned a scalar value called "permanence". Permanence was also referred to in the 2017 paper as the variable whose increments and decrements help the model neuron learn.
13. A permanence value close to 0 represents an axon and dendrite with the potential to form a synapse but that have not commenced growing one. A permanence value of 1 represents an axon and dendrite with a large, fully formed synapse.
14. HTM neurons and HTM networks rely on distributed patterns of cell activity, so the activation strength of any one neuron or synapse is not very important. Therefore, in HTM simulations, neuron activations and synapse weights are modeled with binary states (see the second sketch after this list).
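To make point 8 concrete, here is a minimal sketch of the false-match calculation, assuming the hypergeometric formulation that underlies this kind of robustness analysis: a segment with `s` synapses and NMDA spike threshold `theta` falsely matches a random pattern of `a` active cells (out of a population of `n`) if at least `theta` of its synapses happen to land on active cells. The parameter values below are illustrative assumptions, not necessarily the exact example behind the 1.6 x 10^-18^ figure.

```python
from math import comb

def false_match_probability(n, a, s, theta):
    """Probability that a random sparse pattern of `a` active cells
    (out of a population of `n`) activates a dendritic segment with
    `s` synapses and an NMDA spike threshold of `theta` coincident
    inputs -- the tail of a hypergeometric distribution."""
    return sum(comb(s, b) * comb(n - s, a - b)
               for b in range(theta, min(s, a) + 1)) / comb(n, a)

# Illustrative (assumed) numbers: 1% of a large population is active.
n, a, theta = 200_000, 2_000, 10
print(false_match_probability(n, a, s=10, theta=theta))  # exact subsampling
print(false_match_probability(n, a, s=20, theta=theta))  # doubled synapses:
# only 10 of the 20 synapses now need to see an active cell, so the
# segment tolerates 50% noise at the cost of a slightly higher error.
```

The qualitative takeaway matches the paper's: as long as patterns stay sparse, the extra synapses buy large noise tolerance for a vanishingly small increase in false-match probability.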
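Points 6, 7, and 12-14, together with the learning-rule changes described under "Difference from most neural models" below, suggest a simple mechanistic picture of one segment. The class below is my own minimal NumPy rendering of that picture, not Numenta's implementation; the class name, thresholds, and increment values are all assumptions.

```python
import numpy as np

class DendriticSegmentSketch:
    """A dendritic segment as a pattern detector (hypothetical sketch)."""

    def __init__(self, potential_pool, rng,
                 connected_perm=0.5, nmda_threshold=15,
                 perm_inc=0.10, perm_dec=0.05):
        self.pool = np.asarray(potential_pool)  # presynaptic cell ids in the potential pool
        self.perm = rng.uniform(0.3, 0.7, self.pool.size)  # scalar permanence per potential synapse
        self.connected_perm = connected_perm    # a synapse "exists" above this permanence
        self.nmda_threshold = nmda_threshold    # coincident inputs needed for an NMDA spike
        self.perm_inc, self.perm_dec = perm_inc, perm_dec

    def nmda_spike(self, active_cells):
        """Binary activation: enough *connected* synapses see active cells."""
        connected = self.perm >= self.connected_perm
        hits = np.isin(self.pool, active_cells) & connected
        return int(hits.sum()) >= self.nmda_threshold

    def learn(self, active_cells):
        """Hebbian update at the level of this segment (not the whole
        neuron): strengthen potential synapses onto active cells and
        weaken the rest, so synapses effectively grow and are removed
        as their permanence crosses `connected_perm`."""
        hits = np.isin(self.pool, active_cells)
        delta = np.where(hits, self.perm_inc, -self.perm_dec)
        self.perm = np.clip(self.perm + delta, 0.0, 1.0)
```

A quick usage example, showing the noise tolerance from point 7: after a few learning steps the segment still fires when half the pattern is missing, because 20 matching synapses exceed the threshold of 15.

```python
rng = np.random.default_rng(0)
pattern = rng.choice(2_000, size=40, replace=False)  # a sparse pattern
seg = DendriticSegmentSketch(potential_pool=pattern, rng=rng)
for _ in range(5):                                   # reinforce the pattern
    seg.learn(pattern)
noisy = pattern[:20]                                 # 50% of the pattern missing
print(seg.nmda_spike(noisy))                         # True: 20 hits >= threshold 15
```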
## Some understanding of the model (my views)

1. Among the required properties the paper lists for the network, one is **high-order predictions**:

    > Making correct predictions with complex sequences requires the ability to incorporate contextual information from the past. The network needs to dynamically determine how much temporal context is needed to make the best predictions. The term "high-order" refers to "high-order Markov chains", which have this property.

    From what I can think of, this is somewhat related to an LSTM (or maybe GRU) layer in an ANN, as it talks about remembering context, but at the same time keeping only the relevant context.

## Difference from most neural models

1. The neuron model suggested in the paper requires two changes to learning rules. First, learning occurs by growing and removing synapses from a pool of "potential" synapses. Second, Hebbian learning and synaptic change occur at the level of the dendritic segment, not the entire neuron.

## Some important concepts/terms to know

1. NMDA spike - NMDA spikes comprise the cellular substrate for multisite independent subunit computations that enrich the computational power and repertoire of cortical pyramidal cells, and are likely to play significant roles in cortical information processing in awake animals.
2. Sequence memory - A critical feature of episodic memory is the ability to remember the order of events as they occurred in time, a capacity shared across species including humans, non-human primates, and rodents.
3. Spiny stellate cell - The second most common excitatory neuron in the neocortex; considered similar to pyramidal cells minus the apical dendrites.
4. Hidden Markov Models (HMMs) - HMMs are widely applied, especially in speech recognition. The basic HMM is a first-order model. Variations of HMMs can model restricted high-order sequences by encoding high-order states by hand.

## Questions