Neuroscience has entered a golden age in which experimental technologies allow us to record thousands of neurons over many trials during complex behaviors, yielding large-scale, high-dimensional datasets. However, while we can record thousands of neurons, the mammalian circuits controlling complex behaviors can contain tens of millions of behaviorally relevant neurons. Thus, despite significant experimental advances, neuroscience remains in a vastly undersampled measurement regime. Nevertheless, a wide array of statistical procedures for dimensionality reduction of multineuronal recordings uncovers remarkably insightful, low-dimensional neural state-space dynamics whose geometry reveals how behavior and cognition emerge from neural circuits. What theoretical principles explain this remarkable success? In essence, how is it that we can understand anything about the brain while recording an infinitesimal fraction of its degrees of freedom?
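The intuition behind this success can be illustrated with a toy simulation (a minimal numpy sketch; the dimensions, signals, and variable names below are illustrative assumptions, not details from the talk). A 3-dimensional latent trajectory is embedded in a large population, and "recording" only a small random subset of neurons acts like a random projection of the latent dynamics, leaving the low-dimensional subspace fully recoverable by PCA:

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent dynamics: a smooth 3-dimensional trajectory over T time points
# (purely synthetic stand-in for neural state-space dynamics).
T, K, N_total, N_rec = 500, 3, 10_000, 100
t = np.linspace(0, 2 * np.pi, T)
latents = np.stack([np.sin(t), np.cos(2 * t), np.sin(3 * t)], axis=1)  # (T, K)

# Embed the latents in a large "brain" of N_total neurons via random weights.
embedding = rng.standard_normal((K, N_total))
full_activity = latents @ embedding  # (T, N_total)

# Recording N_rec neurons keeps N_rec random coordinates of the full state,
# i.e., a random projection of the latent trajectory into N_rec dimensions.
recorded = full_activity[:, rng.choice(N_total, N_rec, replace=False)]

# PCA on the heavily undersampled recording still recovers a K-dimensional
# subspace: the top K components capture essentially all the variance.
centered = recorded - recorded.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
variance = singular_values**2 / np.sum(singular_values**2)
print(np.sum(variance[:K]))  # fraction of variance in the top K PCs
```

Because the recorded activity here is an exact linear image of a rank-3 trajectory, the top three principal components capture all of its variance; the theory in the talk concerns when and how this picture survives noise, nonlinearity, and realistic task complexity.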
We present a theory that addresses this question, and test it using neural data recorded from monkeys performing reaching movements. Overall, this theory yields a picture of the neural measurement process as a random projection of neural dynamics, conceptual insights into how we can reliably recover neural state-space dynamics in such undersampled measurement regimes, and quantitative guidelines for the design of future experiments. Moreover, it reveals the existence of phase-transition boundaries in our ability to successfully decode cognition and behavior on single trials as a function of the number of recorded neurons, the complexity of the task, and the smoothness of the neural dynamics. We will also discuss non-negative tensor analysis methods that perform multi-timescale dimensionality reduction and demixing of neural dynamics, revealing how rapid neural dynamics within single trials mediate perception, cognition, and action, and how slow changes in these dynamics mediate learning.
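The talk does not specify the tensor algorithm, but the idea of multi-timescale demixing can be sketched with a nonnegative CP (canonical polyadic) decomposition of a neurons × time × trials data tensor, fit here with Lee-Seung-style multiplicative updates. Each extracted component has a neuron factor, a within-trial temporal factor (fast dynamics), and an across-trial factor (slow changes, e.g. learning). All function names and parameters below are illustrative assumptions:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: (J*K, R) from (J, R) and (K, R)."""
    J, R = B.shape
    K = C.shape[0]
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def nonneg_cp(X, rank, n_iter=300, eps=1e-9, seed=0):
    """Nonnegative CP decomposition of a 3-way tensor X (I x J x K)
    into factors A (I x R), B (J x R), C (K x R) via multiplicative updates."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    # Mode-n unfoldings consistent with C-order reshaping.
    X1 = X.reshape(I, J * K)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        # Multiplicative updates keep factors elementwise nonnegative.
        A *= (X1 @ khatri_rao(B, C)) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
        B *= (X2 @ khatri_rao(A, C)) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
        C *= (X3 @ khatri_rao(A, B)) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
    return A, B, C

# Toy usage: a synthetic rank-2 nonnegative "neurons x time x trials" tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.random((6, 2)), rng.random((7, 2)), rng.random((8, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = nonneg_cp(X, rank=2, n_iter=500)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))  # relative error
```

In this sketch, columns of B describe fast within-trial dynamics while columns of C describe slow across-trial modulation, which is the sense in which the factorization demixes timescales.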
Prof. Surya Ganguli triple majored in physics, mathematics, and electrical engineering and computer science at MIT, completed a master's degree in mathematics and a PhD in string theory at Berkeley, and a postdoc in theoretical neuroscience at UCSF. He is now a professor of Applied Physics at Stanford, where he leads the Neural Dynamics and Computation Lab, and is also a consulting professor with the Google Brain research team. His research spans the fields of physics, machine learning, and neuroscience, focusing on understanding and improving how both biological and artificial neural networks learn striking emergent computations. He has been awarded a Swartz Fellowship in computational neuroscience, a Burroughs Wellcome Career Award at the Scientific Interface, a Terman Award, a NIPS Outstanding Paper Award, an Alfred P. Sloan Foundation fellowship, a James S. McDonnell Foundation scholar award in human cognition, a McKnight Scholar award in neuroscience, and a Simons Investigator Award in the mathematical modeling of living systems.