Attacking the privacy of machine learning models
Summary
Coffee and snacks at 3:30pm, Packard Grove
Date(s)
Sep 29
Abstract
Current machine learning models are not private: they reveal particular details about the individual examples contained in the datasets used for training. This talk studies various aspects of this privacy problem. For example, we have shown how to query GPT-2 (a pretrained language model) to extract personally identifiable information from its training set. The talk discusses how and why these attacks work, and what can be done to prevent them, both in theory and in practice.
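As a rough illustration of the extraction idea described above (a minimal sketch, not the speaker's actual code), the snippet below uses the Hugging Face transformers library to sample continuations from GPT-2 and rank them by the model's own perplexity; the published attack observes that memorized training text tends to receive unusually high likelihood. The prompt and hyperparameters here are illustrative assumptions.

```python
# Sketch of a training-data extraction attack on GPT-2:
# (1) sample many continuations, (2) rank them by perplexity,
# since memorized text tends to score unusually low perplexity.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sample_candidates(prompt: str, n: int = 5, max_new_tokens: int = 64):
    """Generate n sampled continuations of the prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_k=40,
        max_new_tokens=max_new_tokens,
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2; low values flag candidates."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Illustrative prompt; real attacks sample from many prompts at scale.
candidates = sample_candidates("My email address is")
for text in sorted(candidates, key=perplexity)[:3]:
    print(f"{perplexity(text):8.2f}  {text!r}")
```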
Bio
Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, for which he has received best paper awards at ICML, USENIX Security, and IEEE S&P. He obtained his PhD from the University of California, Berkeley in 2018.