This is part one of a series on neuron-based online learning systems. To learn more, check out my About page.
Neural Building Blocks
The human brain is a complex livewired system, characterized by the constant formation and reorganization of neural networks.1 Brain development is a highly dynamic process that involves not only the creation and physical movement of neurons within the 3D space of the brain, but also complex connection changes between neurons. These connections can be modified in various ways, such as changing the weights of existing connections, forming new ones, or eliminating unnecessary ones. This list is far from complete; enumerating every neuron dynamic is a subject of active study, and one that would leave us lost in the biological soup that makes us who we are.
In contrast, mainstream approaches to artificial intelligence (AI) utilize only a small fraction of this complexity, if any at all. The difficulty these methods face in solving general-purpose tasks suggests that there is value in incorporating more of the complexity seen in biological systems. However, the messy and intricate nature of wetware makes it challenging to develop AI models that fully capture its workings. This series will explore new ways of balancing biological complexity with functionality, opening new doors to intelligent systems. We will diverge significantly from conventional approaches to create dynamic systems with the ability to modify the underlying network at runtime. To accomplish this, we need a very different perspective, requiring new ideas and a reframing of the problem.
In this post, we are going to enumerate a subset of network change dynamics, create circuits that can drive network change, and work through a basic example of how these elements can be integrated into an online learning system. By starting with simple components and iteratively building upon them, we lay the foundation for a more complex and adaptive AI model.
To give us the best footing possible in describing these systems, and to keep them as realistic as is useful, we are going to limit our usage of Machine Learning literature. Instead, we will more regularly pull from neuroscience and electrical circuit design. Our aim is to construct an emulation of the brain with sufficient fidelity to facilitate understanding, hypothesis testing, and learning. In building this emulation, we will prioritize stability and usefulness as our main criteria.
The Old & The New
A model for a neuron needs to be complex enough to leverage important dynamics, but simple enough to be simulated at the scale of the brain. To strike that balance, we are going to build a type of spiking neuron, so called because the neuron pulses on and off as a result of being stimulated. This type of neuron captures the temporal dynamics of biological neurons, which integrate information over time.
These neurons have connections to one another that stimulate or suppress the neuron on the target end. If a neuron receives enough stimulation in a short period of time, the neuron activates: it spikes. When the neuron activates, it resets its own stimulation and stimulates downstream neurons according to its connections. In the typical neuron diagram, the outgoing connections are axons, and axons connect to the target neuron’s dendrites.
Unlike popular Machine Learning neural networks with specialized activation functions, we will have a state machine for each neuron that receives stimulation and compares it to a threshold. Our neuron will also track accumulated stimulation and slowly leak that stimulation over time. Leaking stimulation is important to ensure that a neuron only activates when stimulations arrive together. In this way, each neuron forms a coincidence detector, meaning that it detects temporally proximal patterns. Like pouring water into cupped hands, if you do not pour fast enough, you will never overflow your hands. In this way, individual neurons perform a low-level computation. Another important piece is that the leakage rate can be varied to enable the recognition of patterns at larger time scales.
Let us put this together. Neurons have connections. These connections target other neurons and are either stimulating or suppressive. Neurons have a threshold and a stimulation level. When a neuron receives stimulation, it is added to the stimulation level and compared against the threshold. Neurons activate when accumulated stimulation crosses the threshold. Finally, stimulation leaks over time. We will use these details as a starting point.
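To make this concrete, here is a minimal sketch of such a neuron in Python. The names (`LeakyNeuron`, `stimulate`, `step`) and the particular threshold and leak values are illustrative choices for this post, not fixed parts of the model; the leak here is a simple proportional decay toward rest.

```python
from dataclasses import dataclass


@dataclass
class LeakyNeuron:
    """A leaky spiking neuron: accumulate stimulation, leak toward rest,
    and fire when accumulated stimulation crosses a threshold."""
    threshold: float = 1.0    # stimulation required to activate
    leak: float = 0.1         # fraction of stimulation lost per time step
    stimulation: float = 0.0  # accumulated stimulation

    def stimulate(self, amount: float) -> None:
        # Positive amounts stimulate the neuron; negative amounts suppress it.
        self.stimulation += amount

    def step(self) -> bool:
        """Advance one time step. Returns True if the neuron spikes."""
        if self.stimulation >= self.threshold:
            self.stimulation = 0.0  # activation resets accumulated stimulation
            return True
        # Leak toward rest so only inputs that arrive close together can
        # sum past the threshold (the coincidence-detector behavior).
        self.stimulation *= 1.0 - self.leak
        return False


n = LeakyNeuron()
n.stimulate(0.6)
print(n.step())   # False: one input alone is below the threshold
n.stimulate(0.6)  # a second input arrives before the first leaks away
print(n.step())   # True: coincident inputs cross the threshold
```

The short demo at the end is the cupped-hands picture in code: two sub-threshold inputs spaced far apart leak away, but the same two inputs close together overflow the threshold.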
Neural Circuits
We next need to think about collections of neurons that can be easily reused. There are many similarities, and many differences, between the electronics we use every day and the brain, which runs on a cocktail of electro-chemical biology. Nearly all electronics are built from collections of logic gates. Logic blocks are an abstraction over a specific combination of logic gates, forming a functional unit that can be reused across a computing system. Implementations of these logic blocks serve any number of purposes, such as adding two numbers. The outside of the logic block features well-defined inputs and outputs for interconnecting blocks and logic, making the logic block modular. As long as you connect the inputs and outputs correctly, you do not need to worry about what is happening inside the logic block. The power here is that the same block that adds two numbers can also be used to multiply two numbers by correctly connecting copies of the block together.
The same utility of modularity can be captured with neural circuits. Let us not get stuck on digital logic, though; others have already spent time constructing circuits of neurons that model digital logic blocks, and we are not trying to recreate the modern digital computer with neurons. Instead, let us explore neural circuits that compute predictive error and perform pattern analysis. We will dig into these requirements soon, but from digital logic we will take the idea of logic blocks. We need something we'll call the NeuroBlock, which will consist of predefined circuits of neurons that can be reused across the network.
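As a rough illustration of what that modularity could look like in code, here is a hypothetical container that exposes only named input and output neurons while keeping its wiring hidden inside. It builds on the `LeakyNeuron` sketch above; the field names and the synchronous update scheme are assumptions of this sketch, not requirements of the idea.

```python
from dataclasses import dataclass


@dataclass
class NeuroBlock:
    """A reusable circuit of LeakyNeurons with a fixed external interface:
    only the named inputs and outputs are visible to the rest of the
    network; the wiring between neurons stays hidden inside the block."""
    neurons: dict   # name -> LeakyNeuron, every neuron in the block
    wiring: list    # (source_name, target_name, weight) connections
    inputs: tuple   # names that external connections may target
    outputs: tuple  # names whose spikes leave the block

    def step(self) -> set:
        """Advance every neuron one time step, deliver the resulting
        spikes along internal wiring, and return the names that spiked."""
        spiked = {name for name, neuron in self.neurons.items() if neuron.step()}
        for source, target, weight in self.wiring:
            if source in spiked:
                self.neurons[target].stimulate(weight)
        return spiked
```

As with a logic block, two NeuroBlocks are composed by connecting one block's outputs to another block's inputs, without either needing to know the other's internals.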
Connecting this to the brain, there is a regular repeating structure often referred to as cortical columns. Cortical columns can be described as a fundamental unit of the neocortex.2 By leveraging combinations of these neural blocks, we can recreate the emergent, self-organizing features we see in the neocortex.
Many explanations of AI mechanics start in a similar place. They touch on what is called Hebbian learning, often summarized as "neurons that fire together, wire together, and neurons that fire apart, wire apart". While Hebbian learning alone has been shown to be insufficient for all the capabilities seen in the brain, it is a great starting point. We will construct our first NeuroBlock to determine when neurons are firing together and when they are not.
Hebbian Learning Circuit
We have kept things high-level while establishing a simple neuron model that performs pattern recognition by activating when it receives stimulations close together in time. In the context of Hebbian learning, we also want the opposite: to weaken or remove patterns and relationships between neurons that do not fire together.
As an example, let us say that an important pattern is neurons A and B activating together. If A and B fail to activate together some number of times, perhaps it would be useful to recycle those neurons to capture new patterns. Alternatively, we may find that the failure of A and B to fire together is itself important, and the failure pattern can drive additional activity. Let’s examine a simple example of this with the following figures. We will see that by connecting neurons in a procedural way, we can detect when neurons fire together and when they do not.
Figure 1 shows a collection of our leaky neurons arranged in a predefined way. On the left are the input neurons; external incoming connections would target one of these. The activation of these neurons causes an internal state change that we will break down, and it may or may not lead to activations of output neurons. Output neurons are connected to other NeuroBlocks or to neurons with outgoing connections. Neurons can take on different states, as shown in Figure 1.
Neurons That Fire Together
The simple case is when the two input neurons, here A and B, activate together. As can be seen in Figure 2 below, neuron A and neuron B activate at the same time. This causes the Out neuron to activate, in turn stimulating downstream neurons. This NeuroBlock captures the pattern that A and B occur together.
We can also see activity in neurons L1 through L3, which form a loop component of the circuit that is activated when either A or B fires. Connections within the loop are strong enough to activate subsequent neurons in the loop, creating a chain of activations L1->L2->L3. This neural chain acts as a delay component, allowing the chain to be interrupted if the pattern is recognized. We can see this play out in Figure 2: when Out activates, it suppresses L1.3 While Error is stimulated, it never activates. In this way, the NeuroBlock has successfully recognized a pattern, signaled by Out activating rather than Error.
While Figure 2’s pattern could be captured by a single neuron, there are other uses for this loop structure. To start, the case where only A or B activates leads to Error activating.
In Figure 3, neuron A activates, starting the chain, but the Out neuron never interrupts the loop. This failed activation of Out highlights the fact that the weights of the positive connections A→Out and B→Out are balanced such that both neurons need to fire together to activate Out. The other connections shown are individually strong enough to activate their targets. We will explore connection weights more in the future. As we can see in Figure 3, if the chain is not interrupted, the loop completes an iteration and the Error neuron activates. This configuration is not limited to two input neurons; weights from inputs to Out can be balanced to accommodate larger patterns.
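To make the two scenarios concrete, here is one way the Figure 1 circuit might be wired up using the `LeakyNeuron` and `NeuroBlock` sketches from earlier. All of the weights are illustrative guesses: A→Out and B→Out are balanced at 0.6 so only coincident firing crosses Out’s threshold, each loop connection can fire its target alone, and in this version Error needs two uninterrupted passes of the loop, so a single pass (the recognized case) stimulates Error without activating it.

```python
def figure_one_block() -> NeuroBlock:
    names = ["A", "B", "L1", "L2", "L3", "Out", "Error"]
    neurons = {name: LeakyNeuron(threshold=1.0, leak=0.1) for name in names}
    wiring = [
        ("A", "Out", 0.6), ("B", "Out", 0.6),  # balanced: both inputs needed
        ("A", "L1", 1.0), ("B", "L1", 1.0),    # either input starts the loop
        ("L1", "L2", 1.0), ("L2", "L3", 1.0),  # the delay chain
        ("L3", "L1", 1.0),                     # close the loop for another pass
        ("L3", "Error", 0.6),                  # two uninterrupted passes fire Error
        ("Out", "L1", -2.0),                   # recognition suppresses the loop
    ]
    return NeuroBlock(neurons, wiring, inputs=("A", "B"), outputs=("Out", "Error"))


def run(block: NeuroBlock, stimulated: list, steps: int = 10) -> None:
    for name in stimulated:
        block.neurons[name].stimulate(1.0)
    for t in range(steps):
        spiked = block.step()
        if spiked:
            print(f"t={t}: {sorted(spiked)}")


run(figure_one_block(), ["A", "B"])  # Figure 2: Out fires, Error never does
run(figure_one_block(), ["A"])       # Figure 3: the loop repeats until Error fires
```

In the first run, Out and L1 fire together, Out’s suppression kills the loop after one pass, and Error receives a single sub-threshold nudge. In the second run, Out never fires, the loop keeps cycling through L3→L1, and Error accumulates until it activates.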
Error Pathways
The NeuroBlock in Figure 1 generates neural activity in the presence of either a failed pattern or a correct pattern. Instead of being assessed externally, error is calculated in-circuit. The Error neuron can be connected to downstream patterns, but this signal can also be leveraged to change the network itself. As stated before, if a pattern fails to be recognized many times, maybe it is not a useful pattern. There are real limitations in biology, and in our simulations of these systems, that would benefit from the repurposing of resources. This error signal can be used to drive localized or non-localized change, creating a feedback loop that drives the network toward better predictions.
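As one hypothetical example of how the Error signal might drive structural change, the sketch below counts Error activations and flags a block for recycling once a budget is exceeded. The monitor, its budget, and the recycle policy are all assumptions made for illustration, continuing the `figure_one_block` sketch above; they are not a mechanism taken from the figures.

```python
class ErrorMonitor:
    """Hypothetical feedback rule: a block whose Error neuron fires more
    than `budget` times is flagged so its neurons can be recycled to
    capture a new pattern. The budget is an illustrative choice."""

    def __init__(self, budget: int = 5):
        self.budget = budget
        self.error_count = 0

    def observe(self, spiked: set) -> bool:
        """Call once per time step with the names that spiked. Returns
        True when the block should be recycled."""
        if "Error" in spiked:
            self.error_count += 1
        return self.error_count > self.budget


block, monitor = figure_one_block(), ErrorMonitor(budget=5)
for t in range(100):
    if t % 16 == 0:
        block.neurons["A"].stimulate(1.0)  # A keeps firing without B
    if monitor.observe(block.step()):
        print(f"t={t}: recycling block after repeated errors")
        block, monitor = figure_one_block(), ErrorMonitor(budget=5)
```

Because A fires repeatedly without B, the loop keeps completing and Error keeps activating, so the monitor eventually flags the block: the pattern "A and B together" was never useful, and its resources can be repurposed.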
New Patterns
To round out our explanation, let’s think about the formation of patterns. We do not have an exact count of how many neurons are in our brain, but it is staggeringly high. The DNA that makes us who we are does not carry enough information to prespecify the full architecture of our brain. It can, however, define modular structures like the NeuroBlocks. As an example, it is possible to construct a neural network consisting of copies of the NeuroBlock in Figure 1 to capture activation patterns in the network. These patterns would be formed by a naive approach: by random chance.
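A naive sketch of this chance-driven formation might look like the following: pick random pairs of recently active neurons and allocate a fresh block to watch for them co-firing. The function and its bookkeeping are hypothetical, reusing `figure_one_block` from above; a real system would also need to route each chosen neuron’s spikes into the block’s inputs.

```python
import random


def allocate_random_blocks(active_names: list, count: int = 3) -> list:
    """Naive, chance-driven pattern formation: pair up randomly chosen
    recently active neurons and allocate a fresh Figure 1 block to watch
    for each pair co-firing. Purely illustrative."""
    blocks = []
    for _ in range(count):
        a, b = random.sample(active_names, 2)
        # Remember which external pair feeds this block's A and B inputs.
        blocks.append(((a, b), figure_one_block()))
    return blocks


# e.g. allocate_random_blocks(["n12", "n47", "n03", "n88"]) guesses three
# candidate patterns; blocks whose Error neuron keeps firing get recycled.
```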
Human development is characterized by an explosion of neural growth in the first years of life, followed by a great pruning of neurons that continues for the rest of development.4 This “bloom and prune” cycle is likely not played out with NeuroBlock-like mechanics, but NeuroBlocks can create feedback to drive this kind of pruning. Neuron-based feedback is a key building block for more complex systems.
Summary
The tools presented in this post are quite limited, but they still serve as an introduction to a new way of building neural networks that can self-modify. By generating error signals from the neurons themselves, we can drive constructive changes in the network. In the next post, we will start to describe the underlying framework that emulates the brain using NeuroBlocks, and we will introduce a new circuit that drives more useful changes in the network.
https://eagleman.com/books/livewired/
Mountcastle, Vernon B. "Modality and topographic properties of single neurons of cat's somatic sensory cortex." Journal of Neurophysiology 20.4 (1957): 408-434.
I’m not aware of single neurons having both positive and negative connections, but this makes things more readable.
Gilmore, John H et al. “Imaging structural and functional brain development in early childhood.” Nature reviews. Neuroscience vol. 19,3 (2018): 123-137. doi:10.1038/nrn.2018.1
I made a simple demo to interact with two neurons stimulating a third, which gives a rough feel for activations, refractory periods, and stimulation decay.
https://metal-mind.github.io/two-neuron-sim/