The trend in neural recording capabilities is clear: we can record orders of magnitude more neurons now than we could only a few years ago, and technological advances do not seem to be slowing. Coupled with rich behavioral measurements, genetic sequencing, and connectomics, these datasets offer unprecedented opportunities to learn how neural circuits function. But they also pose serious modeling and algorithmic challenges. How do we develop probabilistic models for such heterogeneous data? How do we design models that are flexible enough to capture complex spatial and temporal patterns, yet interpretable enough to provide new insight? How do we construct algorithms to efficiently and reliably fit these models? I will present some of our recent work on recurrent switching linear dynamical systems and corresponding Bayesian inference algorithms that aim to overcome these challenges, and I will show how these methods can help us gain insight into complex neural and behavioral data.
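To make the model class concrete, here is a minimal generative sketch of a recurrent switching linear dynamical system. All dimensions, parameter values, and variable names below are illustrative assumptions, not details from the talk: the model assumes a discrete state whose transition probabilities depend on the previous continuous latent ("recurrent" switching), per-state linear-Gaussian dynamics, and linear-Gaussian emissions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the talk):
K, D, N, T = 3, 2, 10, 100  # discrete states, latent dim, neurons, time steps

# Per-state linear dynamics: x_t = A[z_t] x_{t-1} + b[z_t] + noise
A = np.stack([0.95 * np.eye(D) for _ in range(K)])
b = 0.1 * rng.standard_normal((K, D))

# Recurrent transition weights: discrete-state logits depend on x_{t-1}
R = rng.standard_normal((K, D))
r = np.zeros(K)

# Linear-Gaussian emissions: y_t = C x_t + d + noise
C = rng.standard_normal((N, D))
d = np.zeros(N)

def softmax(a):
    a = a - a.max()
    e = np.exp(a)
    return e / e.sum()

# Simulate the generative process
z = np.zeros(T, dtype=int)
x = np.zeros((T, D))
for t in range(1, T):
    # The "recurrent" part: transition probabilities are a function
    # of the previous continuous state, not just the previous z
    probs = softmax(R @ x[t - 1] + r)
    z[t] = rng.choice(K, p=probs)
    x[t] = A[z[t]] @ x[t - 1] + b[z[t]] + 0.01 * rng.standard_normal(D)

# Emissions for all time steps at once
y = x @ C.T + d + 0.1 * rng.standard_normal((T, N))
```

In this sketch each discrete state indexes its own linear dynamical system, and the continuous trajectory itself drives the switches, which is what lets the fitted model carve behavior or neural activity into interpretable, dynamically defined regimes.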