IEEE IT Society, Santa Clara Valley presents From Differential Privacy to Generative Adversarial Privacy

Wednesday, February 28, 2018 - 6:00pm
Packard 202
Peter Kairouz (Stanford)
Abstract / Description: 

6:00PM Refreshments and Conversation

6:30PM The explosive growth in connectivity and data collection is accelerating the use of machine learning to guide consumers through a myriad of choices and decisions. While this vision is expected to generate many disruptive businesses and social opportunities, it presents one of the biggest threats to privacy in recent history. In response, differential privacy (DP) has recently surfaced as a context-free, robust, and mathematically rigorous notion of privacy.
The first part of my talk will focus on understanding the fundamental tradeoff between DP and utility for a variety of unsupervised learning applications. Surprisingly, our results show the universal optimality of a family of extremal privacy mechanisms called staircase mechanisms. While the vast majority of works on DP have focused on using the Laplace mechanism, our results indicate that it is strictly suboptimal and can be replaced by a staircase mechanism to improve utility. Our results also show that the strong privacy guarantees of DP often come at a significant loss in utility.
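For readers unfamiliar with the Laplace mechanism discussed above, here is a minimal, illustrative sketch (the function name and parameters are this announcement's own, not from the talk): a query answer with L1 sensitivity Δ is released with additive Laplace(Δ/ε) noise to obtain ε-differential privacy.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value plus Laplace(sensitivity / epsilon) noise.

    For a query with the given L1 sensitivity, this yields
    epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = random.random() - 0.5          # u in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: privately release a count query, whose sensitivity is 1.
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5)
```

The staircase mechanisms referenced in the talk replace this continuous noise distribution with a piecewise-constant ("staircase") one that, per the results above, achieves strictly better utility at the same privacy level.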
The second part of my talk is motivated by the following question: can we exploit data statistics to achieve a better privacy-utility tradeoff? To address this question, I will present a novel context-aware notion of privacy called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to arrive at a unified framework for data-driven privacy that has deep game-theoretic and information-theoretic roots. I will conclude my talk by showcasing the performance of GAP on real-life datasets.
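As a rough illustration of the game-theoretic roots mentioned above (a schematic, not necessarily the speaker's exact formulation), such privatizer-adversary games are often posed as a constrained minimax problem: a privatizer $g$ releases $g(X)$ in place of the data $X$, an adversary $h$ tries to infer a sensitive attribute $S$ under inference loss $\ell$, and a distortion budget $D$ preserves utility:

```latex
\min_{g} \; \max_{h} \; -\,\mathbb{E}\!\left[\ell\bigl(h(g(X)),\, S\bigr)\right]
\quad \text{subject to} \quad
\mathbb{E}\!\left[d\bigl(g(X),\, X\bigr)\right] \le D
```

Here the adversary maximizes (i.e., drives its loss down) while the privatizer minimizes, mirroring the generator-discriminator game in GANs.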


Peter Kairouz is a postdoctoral scholar at Stanford University. He received his PhD in ECE from the University of Illinois at Urbana-Champaign (UIUC). He interned twice at Qualcomm and more recently at Google, where he designed privacy-aware machine learning algorithms. He is the recipient of the 2015 ACM SIGMETRICS Best Paper Award, the 2012 Roberto Padovani Scholarship from Qualcomm's Research Center, and the 2016 Harold L. Olesen Award for Excellence in Undergraduate Teaching from UIUC. His research interests are interdisciplinary and span the areas of data and network sciences, privacy-preserving data analysis, machine learning, and information theory.