In this project we move away from power-hungry algorithms for unsupervised clustering to recognise different objects. Instead, we show that simple objects can be learned in real time and with low power consumption by neuromorphic hardware configured as a ‘soft’ Winner-Take-All (WTA) network, exploiting the variability of the silicon neurons on the chip. We use the Reconfigurable OnLine Learning System (ROLLS), a neuromorphic processor with 256 reconfigurable silicon neurons whose synapses implement a long-term plasticity mechanism (Qiao et al., 2015).
First, we connect a large group of silicon neurons into a ‘soft’ Winner-Take-All network: the neurons are excitatorily connected to themselves and to their nearest neighbours with fixed, non-plastic synapses that enhance their activity. At the same time, these neurons excite a small population of inhibitory neurons, which in turn globally inhibits the activity of all neurons. When the network is stimulated with an input that has more than one peak (Fig. 1a), only the neurons receiving the highest peak remain active, while the other neurons are suppressed by the inhibitory population until they are completely switched off (Fig. 1b).
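The competition described above can be sketched in a minimal rate-based simulation. All parameters (population size, weights, time constants, input shape) are illustrative choices, not the actual ROLLS configuration; the inhibitory population is lumped into a single unit for brevity.

```python
# Minimal rate-based sketch of the 'soft' WTA dynamics: local excitation
# (self + nearest neighbours), a global inhibitory unit, and an input
# with two bumps of different height. Parameters are illustrative.
import numpy as np

N = 64                      # excitatory neurons (hypothetical size)
w_exc = 0.5                 # gain of self + nearest-neighbour excitation
w_inh = 0.8                 # strength of global inhibition
dt, tau = 0.1, 1.0

# Input with two bumps; the one centred at neuron 16 is slightly stronger.
x = np.arange(N)
inp = 1.2 * np.exp(-(x - 16) ** 2 / 20) + 1.0 * np.exp(-(x - 48) ** 2 / 20)

r = np.zeros(N)             # firing rates of the excitatory neurons
r_inh = 0.0                 # lumped global inhibitory population
for _ in range(2000):
    # local excitation: each neuron averages itself and its two neighbours
    exc = w_exc * (r + np.roll(r, 1) + np.roll(r, -1)) / 3
    drive = inp + exc - w_inh * r_inh
    r += dt / tau * (-r + np.maximum(drive, 0.0))
    r_inh += dt / tau * (-r_inh + 4 * r.mean())

winner = int(np.argmax(r))
print(winner)               # neurons under the stronger bump dominate
```

With these gains the network is a soft WTA: the neurons driven by the stronger bump reach the highest rates while the shared inhibition suppresses the weaker bump; pushing the recurrent and inhibitory gains higher makes the competition harder.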
Secondly, we randomly potentiate some of the 256 plastic synapses of every neuron. These synapses are stimulated with high or low frequencies depending on the input pattern. The Dynamic Vision Sensor (DVS) sends events whenever local pixel-level changes are detected. We reduce its 128 x 128 pixels to 16 x 16 and vectorise the result to stimulate the 256 plastic synapses of every silicon neuron on the ROLLS chip. If an event occurred at a position in the 16 x 16 image, the plastic synapse representing that location is stimulated with a high frequency; otherwise, with a low frequency.
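The input mapping could look roughly as follows. The 8 x 8 pooling factor follows from 128/16; the pooling rule (any event in the block) and the two frequencies are assumptions for illustration, not the values used on the hardware.

```python
# Hypothetical sketch of the DVS-to-synapse mapping: pool events from
# 128x128 down to 16x16, flatten to 256 entries, and assign each plastic
# synapse a high or low stimulation frequency. Frequencies are assumed.
import numpy as np

F_HIGH, F_LOW = 250.0, 20.0   # stimulation frequencies in Hz (illustrative)

def synapse_frequencies(events):
    """events: boolean 128x128 array, True where a DVS event occurred."""
    # 8x8 pooling: a coarse pixel is active if any of its 64 fine
    # pixels produced an event
    coarse = events.reshape(16, 8, 16, 8).any(axis=(1, 3))
    # vectorise to 256 entries, one per plastic synapse of a neuron
    return np.where(coarse.ravel(), F_HIGH, F_LOW)

# toy input: events in the top-left quadrant only
ev = np.zeros((128, 128), dtype=bool)
ev[:64, :64] = True
freqs = synapse_frequencies(ev)
```

Here the 64 coarse pixels covering the top-left quadrant map to synapses driven at `F_HIGH`, and the remaining 192 synapses receive `F_LOW`.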
In previous work we have shown that patterns of this type can be learned in neuromorphic hardware. However, those patterns were much simpler, consisting of 126 synapses receiving a high frequency and 126 synapses receiving a low frequency.
In order to recognise simple objects even when an object is shifted, we use two additional neuron groups, each configured as a WTA network. These groups represent the x- and the y-axis, respectively. If the distribution of events along the x- or the y-axis shifts, the activity in these groups shifts accordingly. The groups therefore detect the displacement and shift the input used to stimulate the plastic synapses of the ‘recognising’ neurons, so that these are still stimulated at the correctly learned synapses.
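The shift-compensation idea can be sketched as follows. Here a simple argmax over the projected event distributions stands in for the two WTA populations, and the reference position and image handling are assumptions for illustration.

```python
# Illustrative sketch of shift compensation: estimate the object's
# position from the event distributions along x and y (argmax standing
# in for the x- and y-WTA populations) and re-centre the 16x16 input
# before it drives the plastic synapses. Details are assumed.
import numpy as np

REF_X, REF_Y = 8, 8            # position at which the pattern was learned

def recentre(coarse):
    """coarse: 16x16 array of event counts."""
    # event distributions projected onto each axis, as seen by the
    # x- and y-populations of the WTA network
    px, py = coarse.sum(axis=0), coarse.sum(axis=1)
    dx = REF_X - int(np.argmax(px))
    dy = REF_Y - int(np.argmax(py))
    # shift the image so the learned synapses see the object where
    # it was during learning
    return np.roll(np.roll(coarse, dy, axis=0), dx, axis=1)

img = np.zeros((16, 16))
img[2:5, 11:14] = 1.0          # object displaced from the learned position
centred = recentre(img)
```

After re-centring, the object's events land on the same 256-entry vector positions, and hence on the same plastic synapses, as during learning.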
This project focuses on the learning abilities of neuromorphic hardware. We will also discuss the advantages and drawbacks of learning mechanisms and different possibilities for realising learning. In addition, we will examine how supervised learning, i.e. externally stimulating the group of neurons that is supposed to learn a specific object, improves the performance.
Reading material and code should be downloaded before the workshop from:
https://www.dropbox.com/sh/pd8fwozjr9vb25c/AACgb7GycweKZFaw8L5nXpoqa?dl=0