Federated Learning enables mobile devices to collaboratively learn a shared prediction model or analytics while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud. It embodies the principles of focused collection and data minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning. In this talk, I will discuss: (1) how federated learning differs from more traditional distributed machine learning paradigms, focusing on the main defining characteristics and challenges of the federated learning setting; (2) practical algorithms for federated learning that address the unique challenges of this setting; (3) extensions to federated learning, including differential privacy, secure aggregation, and compression for model updates; and (4) a range of valuable research directions that could have significant real-world impact.
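To make the core idea concrete, here is a minimal sketch of federated-averaging-style training on a simple linear model: each client runs a few local gradient steps on its private data, and the server only ever sees model parameters, which it averages weighted by local dataset size. This is an illustrative toy (the function names, linear model, and hyperparameters are assumptions for the example), not the algorithm as presented in the talk.

```python
import numpy as np

def client_update(global_w, X, y, lr=0.1, local_epochs=5):
    """Local training on one client's private data; the raw (X, y) never
    leaves the device -- only the updated parameters are returned."""
    w = global_w.copy()
    for _ in range(local_epochs):
        grad = X.T @ (X @ w - y) / len(y)  # full-batch least-squares gradient
        w -= lr * grad
    return w, len(y)

def federated_averaging(global_w, clients, rounds=50):
    """Server loop: broadcast the model, collect client updates, and
    aggregate them weighted by each client's number of examples."""
    w = global_w.copy()
    for _ in range(rounds):
        updates = [client_update(w, X, y) for X, y in clients]
        total = sum(n for _, n in updates)
        w = sum(n * cw for cw, n in updates) / total
    return w
```

In a real deployment the aggregation step would be hardened with secure aggregation (so the server sees only the sum, not individual updates), differential privacy (clipping and noising updates), and compression, which are exactly the extensions the talk covers.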
Peter Kairouz is a research scientist at Google, where he leads research efforts on distributed, privacy-preserving, and robust machine learning. Prior to joining Google, he was a postdoctoral research fellow at Stanford University, and before that, he was a PhD student at the University of Illinois Urbana-Champaign (UIUC). He is the recipient of the 2012 Roberto Padovani Scholarship from Qualcomm's Research Center, the 2015 ACM SIGMETRICS Best Paper Award, the 2015 Qualcomm Innovation Fellowship Finalist Award, and the 2016 Harold L. Olesen Award for Excellence in Undergraduate Teaching from UIUC.