Deep neural networks uncover what the brain likes to see
Opening the eyes immediately provides a visual perception of the world – and it seems so easy. But the process that starts with photons hitting the retina and ends with 'seeing' is far from simple. The brain's fundamental task in 'seeing' is to reconstruct relevant information about the world from the light that hits the eyes. Because this process is rather complex, nerve cells in the brain – neurons – also react to images in complex ways.
Experimental approaches to characterize their responses to images have proven challenging, in part because the number of possible images is endless. In the past, seminal insights often resulted from discovering stimuli that neurons in the brain 'liked.' Finding such stimuli relied on the scientists' intuition and a good portion of luck.
Researchers at Baylor College of Medicine and the University of Tübingen in Germany have now developed a novel computational approach to accelerate finding these optimal stimuli. They built deep artificial neural networks that can accurately predict the responses a biological brain produces to arbitrary visual stimuli. These networks can be thought of as a 'virtual avatar' of a population of biological neurons, which can be used to dissect the neural mechanisms of sensation. The researchers demonstrated this by synthesizing new images that made particular neurons respond very strongly. Their study was published today in the journal Nature Neuroscience.
"We want to understand how vision works. We approached this study by developing an artificial neural network that predicts the neural activity produced when an animal looks at images. If we can build such an avatar of the visual system, we can perform essentially unlimited experiments on it. Then we can go back and test in real brains with a method we named 'inception loops,'" said senior author Dr. Andreas Tolias, professor and Brown Foundation Endowed Chair of Neuroscience at Baylor.
To teach the network how neurons respond, the researchers first recorded a large amount of brain activity using a mesoscope, a recently developed large-scale functional imaging microscope.
"First, we showed mice about 5,000 natural images and recorded the neural activity from thousands of neurons as they were seeing the images," said first author Dr. Edgar Y. Walker, former graduate student in the Tolias lab and now a postdoctoral scientist at the University of Tübingen and Baylor. "Then, we used these images and the corresponding recordings of brain activity to train a deep artificial neural network to mimic how real neurons responded to visual stimuli."
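In spirit, fitting such an 'avatar' is supervised learning on image–response pairs. The sketch below is a minimal, hypothetical illustration of that setup, not the authors' actual architecture or training code: a small convolutional network in PyTorch is fit to map stimuli to the simultaneous responses of many neurons, with a Poisson loss, a common choice for neural activity data. All layer sizes, shapes, and the stand-in random data are assumptions made for illustration.

```python
# Minimal sketch (assumed architecture, stand-in data): fit a small CNN
# "avatar" that maps images to the recorded responses of many neurons.
# In the study this would be ~5,000 natural images paired with recorded
# activity of thousands of neurons; here the data are random placeholders.
import torch
import torch.nn as nn

n_neurons = 1000                         # number of recorded neurons (assumed)
images = torch.rand(5000, 1, 36, 64)     # grayscale stimuli (stand-in data)
responses = torch.rand(5000, n_neurons)  # recorded activity (stand-in data)

model = nn.Sequential(                   # toy stand-in for the deep predictive model
    nn.Conv2d(1, 32, kernel_size=7, padding=3), nn.ELU(),
    nn.Conv2d(32, 32, kernel_size=7, padding=3), nn.ELU(),
    nn.Flatten(),
    nn.Linear(32 * 36 * 64, n_neurons),
    nn.Softplus(),                       # neural responses are non-negative
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.PoissonNLLLoss(log_input=False)  # common loss for spike-count-like data

for epoch in range(5):
    for i in range(0, len(images), 64):       # simple mini-batch loop
        x, y = images[i:i + 64], responses[i:i + 64]
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```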
"To test whether the network had indeed learned to predict neural responses to visual images like a living mouse brain would do, we showed the network images it had not seen during learning and saw that it predicted the biological neuronal responses with high accuracy," said co-first author Dr. Fabian Sinz, adjunct assistant professor of neuroscience at Baylor and group leader at the University of Tübingen.
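That validation step can be pictured as correlating, neuron by neuron, the model's predicted responses with the recorded responses on held-out images. A minimal sketch, assuming the `model` and data shapes from the snippet above; the summary statistic is illustrative, not the paper's exact metric:

```python
# Evaluate the fitted avatar on images it never saw during training.
import numpy as np
import torch

test_images = torch.rand(500, 1, 36, 64)  # held-out stimuli (stand-in data)
test_responses = torch.rand(500, 1000)    # corresponding recordings (stand-in data)

with torch.no_grad():
    predicted = model(test_images).numpy()
observed = test_responses.numpy()

# Correlate predicted and observed activity separately for each neuron,
# then summarize across the population.
per_neuron_r = np.array([
    np.corrcoef(predicted[:, i], observed[:, i])[0, 1]
    for i in range(observed.shape[1])
])
print(f"median held-out correlation: {np.median(per_neuron_r):.3f}")
```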
"Experimenting with these networks revealed some aspects of vision we didn't expect," said Tolias, founder and director of the Center for Neuroscience and Artificial Intelligence at Baylor. "For instance, we found that the optimal stimuli for some neurons in the early stages of processing in the neocortex were checkerboards, or sharp corners as opposed to simple edges, which is what we would have expected according to the current dogma in the field."
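The images that most strongly excite a given neuron can be synthesized from the trained network by gradient ascent: starting from noise, the pixels are nudged in whatever direction increases the model neuron's predicted response, and the resulting image can then be shown back to the real brain to close the 'inception loop.' A minimal sketch under the same assumptions as above; the neuron index, learning rate, step count, and clamping range are arbitrary illustrative choices:

```python
# Synthesize a candidate "most exciting input" for one model neuron by
# gradient ascent on the image, assuming the trained `model` from above.
import torch

neuron = 42                                   # index of the neuron to excite (arbitrary)
image = torch.randn(1, 1, 36, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activation = model(image)[0, neuron]      # predicted response of the chosen neuron
    (-activation).backward()                  # ascend by minimizing the negative
    optimizer.step()
    with torch.no_grad():
        image.clamp_(-2, 2)                   # keep pixel values in a plausible range

# `image` now holds a synthesized stimulus that the model predicts will
# drive this neuron strongly; showing it to a real mouse tests the prediction.
```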
"We think that this framework of fitting highly accurate artificial neural networks, performing computational experiments on them, and verifying the resulting predictions in physiological experiments can be used to investigate how neurons represent information throughout the brain. This will eventually give us a better idea of how the complex neurophysiological processes in the brain allow us to see," Sinz said.
Other contributors to this work include Erick Cobos, Taliah Muhammad, Emmanouil Froudarakis, Paul G. Fahey, Alexander S. Ecker, Jacob Reimer and Xaq Pitkow.
Financial support was provided by the Intelligence Advanced Research Projects Activity via Department of Interior/Interior Business Center contract no. D16PC00003. This research was also supported by grant no. R01 EY026927, National Eye Institute/National Institutes of Health Core Grant for Vision Research (no. T32-EY-002520-37), National Science Foundation NeuroNex grant no. 1707400 and grant no. F30EY025510, which is provided by the Institutional Strategy of the University of Tübingen (ZUK 63) and the Carl-Zeiss-Stiftung. Further support was provided by the German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center (FKZ: 01IS18039A), the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy (EXC-Number 2064/1, Project number 390727645), Amazon AWS through a Machine Learning Research Award and the Baylor College of Medicine Medical Scientist Training Program, no. F30-MH112312.