I consider how Bayesian inference can address the analytical and theoretical challenges presented by increasingly complex, high-dimensional neuroscience datasets. With the advent of Bayesian deep neural networks, GPU computing, and automatic differentiation, it is becoming increasingly possible to perform large-scale Bayesian analyses of data, simultaneously inferring complex biological phenomena and experimental confounds. I present a proof of principle: inferring causal connectivity from an all-optical experiment combining calcium imaging and cell-specific optogenetic stimulation. The model simultaneously infers spikes from fluorescence, models low-rank activity and the extent of off-target optogenetic stimulation, and gives explicit uncertainty estimates for the inferred connection matrix. Further, there is considerable evidence that humans and animals use Bayes' theorem to reason optimally about uncertainty. I show that one particular Bayesian inference method, sampling, emerges naturally when classical sparse-coding models are combined with a biophysically motivated energetic cost of achieving reliable responses. We can understand these results theoretically by noting that the resulting combined objective approximates the objective of a classical Bayesian method: variational inference. Given this strong theoretical underpinning, we are able to extend the model to multi-layered networks modelling MNIST digits, to recurrent networks, and to fast recurrent networks.
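To illustrate the connection between sampling and variational inference mentioned above, the sketch below uses a toy conjugate-Gaussian model (the model, parameters, and variable names are illustrative assumptions, not taken from the talk): drawing samples from an approximate posterior q and averaging log p(x, z) - log q(z) gives a Monte Carlo estimate of the variational objective (the ELBO), which can be checked against its closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed for illustration): prior z ~ N(0, 1),
# likelihood x | z ~ N(z, sigma^2), variational posterior q(z) = N(mu, s^2).
x, sigma = 1.5, 0.5
mu, s = 1.2, 0.4

def log_normal(v, mean, std):
    """Log density of N(mean, std^2) evaluated at v."""
    return -0.5 * np.log(2 * np.pi * std**2) - (v - mean) ** 2 / (2 * std**2)

# Sampling-based estimate of the ELBO: draw from q and average
# log p(x | z) + log p(z) - log q(z) over the samples.
z = rng.normal(mu, s, size=200_000)
elbo_mc = np.mean(
    log_normal(x, z, sigma) + log_normal(z, 0.0, 1.0) - log_normal(z, mu, s)
)

# Closed-form ELBO for this conjugate Gaussian setup, for comparison:
# E_q[log p(x|z)] + E_q[log p(z)] - E_q[log q(z)].
elbo_exact = (
    -0.5 * np.log(2 * np.pi * sigma**2) - ((x - mu) ** 2 + s**2) / (2 * sigma**2)
    - 0.5 * np.log(2 * np.pi) - (mu**2 + s**2) / 2
    + 0.5 * np.log(2 * np.pi * s**2) + 0.5
)

print(elbo_mc, elbo_exact)  # the two estimates agree closely
```

The point of the sketch is only that averaging over samples from q recovers the variational objective; in the models discussed in the talk, the sampling arises from neural variability rather than an explicit random-number generator.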
Laurence graduated from the University of Cambridge with degrees in Physics and Systems Biology. He went on to do a PhD at the Gatsby Computational Neuroscience Unit, UCL, with Peter Latham, where he worked on topics ranging from Zipf's law and (social) decision making to Bayesian inference in neural circuits and synapses. He has now returned to Cambridge as a postdoctoral researcher with Mate Lengyel.