

Reinforcement Learning & ISL present "Do Deep Generative Models Know What They Don't Know?"

Topic: Do Deep Generative Models Know What They Don't Know?
Date: Tuesday, May 28, 2019 - 4:00pm
Venue: Packard 101
Speaker: Balaji Lakshminarayanan (DeepMind)
Abstract / Description:

A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed as robust to such mistaken confidence, since modeling the density of the input features can be used to detect novel, out-of-distribution inputs.
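As a concrete illustration of that premise (not part of the talk abstract itself), a density-based detector simply thresholds the model's likelihood. The sketch below assumes a hypothetical model object exposing a per-example log_prob method, as many flow and VAE libraries do, and a threshold tau chosen from held-out in-distribution data.

```python
import torch

def is_out_of_distribution(model, x, tau):
    """Flag inputs whose log-likelihood under the generative model is low.

    model.log_prob is a hypothetical interface returning per-example log p(x);
    tau would typically be a low percentile of log-likelihoods computed on
    held-out training (in-distribution) data.
    """
    with torch.no_grad():
        log_px = model.log_prob(x)  # shape: (batch,)
    return log_px < tau             # True => treat the input as out-of-distribution
```

In practice tau would be tuned so that only a small fraction of in-distribution data is flagged; the talk examines whether the likelihood itself behaves as this scheme assumes.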

In this talk, we challenge this assumption. We find that the density learned by deep generative models (flow-based models, VAEs, and PixelCNNs) cannot distinguish images of common objects such as dogs, trucks, and horses (i.e., CIFAR-10) from those of house numbers (i.e., SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs. MNIST, CelebA vs. SVHN, and ImageNet vs. CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus our analysis on flow-based generative models in particular, since they are trained and evaluated via the exact marginal likelihood. We find that such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood.
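For context on why the flow-based case is analytically convenient (an illustrative restatement, not material from the talk itself): an invertible flow gives the exact likelihood through the change-of-variables formula, and restricting to constant-volume transformations makes the Jacobian term a constant, so any likelihood gap between data sets must come from where their encodings land under the latent density.

```latex
% Exact log-likelihood of an invertible flow f mapping data x to latents z = f(x):
\log p_X(x) = \log p_Z\bigl(f(x)\bigr) + \log\left|\det \frac{\partial f}{\partial x}\right|
% For a constant-volume transformation, the log-determinant is a constant c
% independent of x, so
\log p_X(x) = \log p_Z\bigl(f(x)\bigr) + c
% and differences in likelihood between, say, CIFAR-10 and SVHN are driven
% entirely by the latent-density term, which a second-order expansion relates
% to the means and variances of the data and the curvature of the model.
```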

Bio:

Balaji Lakshminarayanan is a senior research scientist at Google DeepMind. He is interested in scalable probabilistic machine learning and its applications. Most recently, his research has focused on probabilistic deep learning, specifically uncertainty estimation, out-of-distribution robustness, and deep generative models. He received his Ph.D. from the Gatsby Unit, University College London, where he worked with Yee Whye Teh.