In this talk, we will review a few recent works that combine randomized perturbation techniques with optimization. Randomized perturbations have a long history in optimization, where they are used to alleviate difficulties arising from non-smoothness, non-convexity, or other restrictions on optimization procedures. We will focus specifically on the ways randomized smoothing leads to improvements in stochastic approximation and optimization, including in parallel computation and zero-order optimization (problems in which only function values are available). In particular, we will discuss optimal algorithms for a variety of problem families, where the only known techniques for achieving this optimality require randomization and smoothing, and we will provide experimental evidence showing that the insights are not simply theoretical in nature.
Based on joint work with Peter Bartlett, Michael Jordan, Martin Wainwright, and Andre Wibisono.
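To make the zero-order setting mentioned in the abstract concrete, here is a minimal sketch of gradient-free minimization via randomized (Gaussian) smoothing: the gradient is estimated from two function evaluations along a random direction. The objective, step sizes, and smoothing parameter below are illustrative assumptions, not details taken from the talk itself.

```python
import numpy as np

def smoothed_gradient(f, x, delta, rng):
    """Two-point gradient estimate: in expectation over Gaussian z,
    (f(x + delta*z) - f(x)) / delta * z approximates the gradient of
    a smoothed version of f, using only function values."""
    z = rng.standard_normal(x.shape)  # random perturbation direction
    return (f(x + delta * z) - f(x)) / delta * z

def zero_order_sgd(f, x0, steps=2000, delta=1e-4, lr=0.05, seed=0):
    """Stochastic gradient descent driven entirely by the two-point
    estimator above; lr / sqrt(t+1) is a standard decaying step size."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        g = smoothed_gradient(f, x, delta, rng)
        x = x - lr / np.sqrt(t + 1) * g
    return x

# Illustrative example: minimize a simple quadratic with minimizer (1, -2)
# using only function evaluations, never an explicit gradient.
target = np.array([1.0, -2.0])
f = lambda x: 0.5 * np.sum((x - target) ** 2)
x_star = zero_order_sgd(f, np.zeros(2))
```

On a well-conditioned quadratic like this, the iterate ends up close to the true minimizer; the talk's results concern how well such randomized-smoothing schemes can do in general, and when they are optimal.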
John Duchi is an assistant professor of Statistics and Electrical Engineering at Stanford University. He completed his PhD in computer science at Berkeley in 2014. His research interests span statistics, computation, optimization, and machine learning. At Berkeley, he worked in the Statistical Artificial Intelligence Lab (SAIL) under the joint supervision of Michael Jordan and Martin Wainwright. He obtained his MA in statistics in Fall 2012, and received a BS and MS from Stanford University in computer science under the supervision of Daphne Koller. John has won several awards and fellowships, including a best student paper award at the International Conference on Machine Learning (ICML) and the NDSEG and Facebook graduate fellowships. John has also worked as a software engineer and researcher at Google Research under the supervision of Yoram Singer and Fernando Pereira.