Learning a Cognitive Map in Neuromorphic Hardware
Building a representation of an unknown environment is a key capability that autonomous robots need in order to plan and generate goal-directed actions in real-world tasks. Such a representation requires both recognition of objects or places and a solution to the SLAM (simultaneous localisation and mapping) problem, so that a correct map of the environment can be constructed and used for planning future actions.
Neuromorphic computing is well suited both for the recognition task (realising feed-forward networks, as well as ‘cognitive’ processes such as attention and memory formation using recurrent networks, e.g., a winner-take-all architecture) and for the map-formation task (implementing grid and place cells, a well-studied biological neural system for localisation and navigation). In this project, we aim to implement both object recognition and map formation using mixed-signal analog/digital neuromorphic hardware (ROLLS, CXQUAD) and the “Parallella” miniature computer with a parallel co-processor. As the robotic platform for developing and testing the neural architecture in real-world settings, we will use the small vehicles “Pushbot” and “Omnibot”, each equipped with a neuromorphic dynamic vision sensor (DVS) camera.
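To make the winner-take-all idea concrete, the sketch below simulates (in plain NumPy, not on the ROLLS chip) a minimal rate-based recurrent network with self-excitation and shared global inhibition: the unit receiving the strongest input suppresses all others. The network size, gains, and time constants are illustrative assumptions, not hardware parameters; the same competitive dynamics also underlie attractor models of place and grid cells.

```python
import numpy as np

# Minimal rate-based winner-take-all (WTA) network: each unit excites itself
# and is inhibited in proportion to the total activity of the population.
# The unit receiving the strongest input suppresses all others.
# All parameters are illustrative; they do not correspond to ROLLS settings.

N = 8            # number of excitatory units
tau = 20.0       # membrane time constant (ms)
dt = 1.0         # Euler integration step (ms)
alpha = 1.2      # self-excitation gain
gamma = 2.0      # global (shared) inhibition gain

rng = np.random.default_rng(0)
x = 0.2 * rng.random(N)   # weak background input
x[2] += 1.0               # weaker candidate
x[6] += 1.2               # stronger candidate -> expected winner

r = np.zeros(N)           # firing rates
for _ in range(3000):     # simulate 3 seconds of network time
    drive = x + alpha * r - gamma * r.sum()        # net input to each unit
    r += (dt / tau) * (-r + np.maximum(drive, 0))  # rectified-linear rate units

print("winning unit:", int(np.argmax(r)))
print("steady-state rates:", np.round(r, 3))
```

In hardware, the same competition would be realised with spiking neurons and an explicit inhibitory population rather than with a rate equation, but the selection principle is identical.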
Subprojects:
- Grid and place cell system in hardware
- Robot navigation: obstacle avoidance and target acquisition
- “Shallow” place-recognition in hardware
- Sequence learning
- Relational representations
Bonus project: Neuromorphic controller for a UAV