In many large-scale machine learning applications, data is acquired and processed at the edge nodes of the network, such as mobile devices and IoT sensors. Federated learning is a recent distributed learning paradigm in which a model is trained over a set of edge devices. While federated learning enables a variety of new applications, it faces major bottlenecks that severely limit its reliability and scalability, including a communication bottleneck as well as data and system heterogeneity. In this talk, we first focus on communication-efficient federated learning and present FedPAQ, a communication-efficient and scalable federated learning method built on Periodic Averaging and Quantization. FedPAQ is provably near-optimal in the following sense: under the setup of expected risk minimization with independent and identically distributed data points, when the loss function is strongly convex the method converges to the optimal solution at a near-optimal rate, and when the loss function is non-convex it finds a first-order stationary point. In the second part of the talk, we develop a robust federated learning algorithm that achieves strong performance against distribution shifts in users' samples. Throughout, we present numerical results that empirically support our theory.
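The mechanism named in the abstract, periodic averaging of quantized local updates, can be sketched roughly as follows. This is a minimal illustration, not the talk's actual algorithm: the uniform quantizer, learning rate, synthetic least-squares data, and all numeric parameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(v, levels=4):
    # Illustrative uniform quantizer standing in for the low-precision
    # encoding of model updates; FedPAQ's actual quantizer may differ.
    scale = np.max(np.abs(v)) + 1e-12
    return np.round(v / scale * levels) / levels * scale

def local_sgd(w, X, y, steps=10, lr=0.1):
    # A few local SGD steps on a least-squares loss (illustrative).
    for _ in range(steps):
        i = rng.integers(len(y))
        grad = (X[i] @ w - y[i]) * X[i]
        w = w - lr * grad
    return w

# Synthetic data: m clients sharing a common true model w_star.
d, m, n = 5, 8, 50
w_star = rng.normal(size=d)
clients = [(X, X @ w_star) for X in (rng.normal(size=(n, d)) for _ in range(m))]

# Periodic averaging: clients run local SGD between synchronizations
# and send only a quantized model update to the server.
w = np.zeros(d)
for _ in range(30):
    deltas = [quantize(local_sgd(w.copy(), X, y) - w) for X, y in clients]
    w = w + np.mean(deltas, axis=0)  # server averages quantized updates

print(np.linalg.norm(w - w_star))  # error should shrink over rounds
```

Running local steps between synchronizations reduces communication rounds, and quantizing each update reduces the bits sent per round; the strongly convex least-squares objective here mirrors one of the regimes the convergence guarantees cover.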
Bio: Ramtin Pedarsani is an assistant professor in the ECE department at UCSB. He obtained his Ph.D. in Electrical Engineering and Computer Sciences from UC Berkeley in 2015. He received his M.Sc. degree at EPFL in 2011 and his B.Sc. degree at the University of Tehran in 2009. His research interests include machine learning, optimization, information and coding theory, stochastic networks, and transportation systems. He is the recipient of the IEEE Communications Society and Information Theory Society Joint Paper Award in 2020, the Best Paper Award at the IEEE International Conference on Communications (ICC) in 2014, and the NSF CRII award in 2017.