The trend in neural recording capabilities is clear: we can record orders of magnitude more neurons now than we could only a few years ago, and technological advances show no signs of slowing. Coupled with rich behavioral measurements, genetic sequencing, and connectomics, these datasets offer unprecedented opportunities to learn how neural circuits function. But they also pose serious modeling and algorithmic challenges: we need flexible yet interpretable probabilistic models to gain insight from these heterogeneous data, and algorithms to fit them efficiently and reliably. I will present some recent work on recurrent switching linear dynamical systems (rSLDS) — models that couple discrete and continuous latent states to capture nonlinear dynamics. I will discuss approximate Bayesian inference algorithms we've developed to fit these models and infer their latent states, and I will show how these methods can help us gain insight into complex spatiotemporal datasets like those we study in neuroscience.
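To make the model class concrete, here is a minimal generative sketch of a recurrent SLDS. This is an illustration written for this announcement, not the speaker's code or exact parameterization: each discrete state indexes a set of linear dynamics for the continuous state, and — the "recurrent" part — the discrete transition probabilities depend on the current continuous state through a softmax.

```python
import numpy as np

def softmax(u):
    u = u - u.max()  # subtract max for numerical stability
    e = np.exp(u)
    return e / e.sum()

def sample_rslds(T, K=2, D=2, N=10, seed=0):
    """Sample a toy rSLDS: K discrete states, D-dim continuous state,
    N-dim Gaussian observations. All parameters below are illustrative."""
    rng = np.random.default_rng(seed)
    # Per-state linear dynamics: x_{t+1} = A_k x_t + b_k + noise
    As = [np.eye(D) + 0.05 * rng.standard_normal((D, D)) for _ in range(K)]
    bs = [0.1 * rng.standard_normal(D) for _ in range(K)]
    # Recurrent weights: logits of z_{t+1} depend linearly on x_t
    R = rng.standard_normal((K, D))
    r = rng.standard_normal(K)
    # Linear-Gaussian emissions: y_t = C x_t + noise
    C = rng.standard_normal((N, D))

    x = rng.standard_normal(D)
    z = rng.integers(K)
    zs, xs, ys = [], [], []
    for _ in range(T):
        zs.append(z)
        xs.append(x)
        ys.append(C @ x + 0.1 * rng.standard_normal(N))
        # Recurrent switch: next discrete state depends on current x_t
        z = rng.choice(K, p=softmax(R @ x + r))
        # Continuous dynamics under the new discrete state
        x = As[z] @ x + bs[z] + 0.01 * rng.standard_normal(D)
    return np.array(zs), np.array(xs), np.array(ys)

zs, xs, ys = sample_rslds(T=100)
```

Fitting such a model is harder than sampling from it — the coupling between discrete and continuous states rules out exact inference, which is what motivates the approximate Bayesian algorithms discussed in the talk.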
The Information Systems Laboratory Colloquium (ISLC) is typically held in Packard 101 every Thursday at 4:30 pm during the academic year. Coffee and refreshments are served at 4:00 pm in the second-floor kitchen of the Packard Building.
The Colloquium is organized by graduate students Joachim Neu, Tavor Baharav, and Kabir Chandrasekher. To suggest speakers, please contact any of the organizers.
Scott Linderman is an assistant professor of Statistics and a member of the Wu Tsai Neurosciences Institute at Stanford University. He works in the fields of machine learning and computational neuroscience. His research focuses on developing probabilistic models and inference algorithms to better understand complex biological data. Previously, he was a postdoctoral fellow with Liam Paninski and David Blei at Columbia University, and he completed his PhD in Computer Science at Harvard University with Ryan Adams and Leslie Valiant.