Gridbot with ROS
Gridbot: An autonomous robot controlled by a spiking neural network mimicking the brain's navigational system
It is true that the “best” neural network is not necessarily the one with the most “brain-like” behavior. Understanding biological intelligence, however, is a fundamental goal for several distinct disciplines, and translating that understanding to machines is a fundamental problem in robotics. Propelled by new advancements in neuroscience, we developed a spiking neural network (SNN) that draws on mounting experimental evidence that distinct classes of individual neurons are associated with spatial navigation. By following the brain’s structure, our model assumes no initial all-to-all connectivity, which could inhibit its translation to neuromorphic hardware, and learns an uncharted territory by mapping its identified components onto a limited number of neural representations through spike-timing-dependent plasticity (STDP). In our ongoing effort to apply a bioinspired, SNN-controlled robot to real-world spatial mapping, we demonstrate here how an SNN can robustly control an autonomous robot in mapping and exploring an unknown environment, while compensating for its own intrinsic hardware imperfections, such as partial or total loss of visual input.
In ICONS, 2018
In NICE, 2018
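As a minimal sketch of the STDP rule mentioned in the abstract above: under pairwise STDP, a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike and weakened otherwise, with an effect that decays exponentially with the spike-time interval. The parameter names and values below (`a_plus`, `a_minus`, `tau`) are illustrative, not taken from the paper.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike-time difference delta_t = t_post - t_pre (ms).

    Hypothetical pairwise STDP rule: causal pairings (pre before post)
    potentiate the synapse, anti-causal pairings depress it, and the
    magnitude decays exponentially with the interval.
    """
    if delta_t > 0:
        # Pre before post: long-term potentiation
        return a_plus * np.exp(-delta_t / tau)
    else:
        # Post before pre: long-term depression
        return -a_minus * np.exp(delta_t / tau)

# Causal pairing strengthens the synapse, anti-causal weakens it
print(stdp_dw(10.0) > 0)   # True
print(stdp_dw(-10.0) < 0)  # True
```

A rule of this shape lets co-active neurons bind an observed landmark to a limited set of neural representations without requiring all-to-all connectivity, since only synapses between neurons that actually fire together are modified.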
NeuRobotics: A Spiking Neural Network Model of the Brain’s Spatial Navigation System for Autonomous Robots
Orienting in an unknown, fast-changing environment is a crucial challenge met “effortlessly” by the brain. At ComBra Lab, we are developing the Gridbot, an autonomous neurobot controlled by a “bottom-up” Spiking Neural Network (SNN) model of brain networks associated with self-orientation and motor planning. By mimicking neurobiology, we developed an SNN that combined the neural representations of visual and self-motion cues to accurately estimate head orientation. The SNN employed spike-based Bayesian inference on the outputs of simulated head direction (HD) and border cells in a recursive way: the HD cell layer encoded in its spiking activity the HD likelihood distribution by integrating self-motion inputs; similarly, the border cell layer encoded the landmark likelihood distribution from visual observation and environmental mapping; finally, a Bayesian inference layer generated a corrective distribution for the HD layer. Here we present results from implementing our model in the Robot Operating System and demonstrate how the SNN mimics the behavioral abilities observed in mammals in localizing the HD and learning the environment.
In CCN, 2017
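The recursive correction described in the abstract above can be sketched as Bayesian cue combination: if each layer's spike counts encode a likelihood over discretized headings, the corrective distribution is the normalized pointwise product of the self-motion (HD) and visual (border) likelihoods. This is a minimal rate-based sketch of that fusion step, not the paper's spiking implementation; all variable names are illustrative.

```python
import numpy as np

def bayes_correct(hd_likelihood, border_likelihood):
    """Fuse the self-motion (HD) and visual (border) likelihoods over
    discretized headings by a pointwise product, then renormalize —
    standard Bayesian cue combination under independent noise."""
    posterior = hd_likelihood * border_likelihood
    return posterior / posterior.sum()

# 36 heading bins over [0, 2*pi); von Mises-like bumps stand in for
# the spike-count likelihoods of each cell layer
headings = np.linspace(0, 2 * np.pi, 36, endpoint=False)
hd = np.exp(np.cos(headings - 1.0))      # self-motion estimate near 1.0 rad
border = np.exp(np.cos(headings - 1.2))  # visual estimate near 1.2 rad

post = bayes_correct(hd / hd.sum(), border / border.sum())
print(headings[np.argmax(post)])  # posterior peaks between the two cues
```

Applied recursively, the fused posterior re-anchors the HD layer's drifting self-motion estimate to visual landmarks, which is what allows the model to keep localizing the head direction as the environment is learned.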