
EE Colloquium Series: Training Private and Fair Models in Federated Learning

Speaker: Yahya Ezzeldin (University of Southern California)
Location: Packard 101
Date: May 18

Abstract: Federated learning (FL) has gained popularity as a privacy-preserving machine learning framework that allows decentralized users to train models without sharing their data. However, challenges remain in making FL systems reliable and effective in scenarios where fairness and trust are critical. In this presentation, we will discuss two key challenges that must be addressed to enable trustworthy and fair training in FL: (1) what theoretical guarantees we can provide in FL by using the secure model aggregation protocol, and (2) how to train demographically fair machine learning models within FL without violating FL's privacy promise. Finally, I will discuss exciting avenues for future work, including debiasing for distributed learning in imperfect scenarios and contending with competing fairness interests.
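For readers unfamiliar with the secure model aggregation mentioned in point (1), it is commonly built on pairwise additive masking (in the style of Bonawitz et al.'s protocol): clients add masks that cancel in the server's sum, so the server learns only the aggregate update and never any individual client's update. Below is a minimal sketch of that idea in Python with NumPy. The function names, the single global RNG (standing in for per-pair key agreement), and the omission of dropout handling are illustrative assumptions for this sketch, not the speaker's protocol.

```python
import numpy as np

def pairwise_masks(num_clients, dim, seed=0):
    """Generate cancelling pairwise masks with mask[i] <- +m_ij, mask[j] <- -m_ij.

    In a real protocol each pair (i, j) derives a shared seed via key
    agreement (e.g. Diffie-Hellman); a global RNG is used here purely
    for illustration.
    """
    rng = np.random.default_rng(seed)
    masks = np.zeros((num_clients, dim))
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            m = rng.normal(size=dim)
            masks[i] += m  # client i adds the pairwise mask
            masks[j] -= m  # client j subtracts it, so the pair cancels
    return masks

def secure_aggregate(updates):
    """Server-side sum of masked updates; individual updates stay hidden."""
    num_clients, dim = updates.shape
    masks = pairwise_masks(num_clients, dim)
    masked = updates + masks   # what each client actually transmits
    return masked.sum(axis=0)  # pairwise masks cancel in the sum

# Toy check: the aggregate equals the plain sum, even though no single
# masked update reveals the client's true update to the server.
updates = np.arange(12, dtype=float).reshape(4, 3)  # 4 clients, dim 3
agg = secure_aggregate(updates)
assert np.allclose(agg, updates.sum(axis=0))
print(agg)  # federated averaging would then divide by the client count
```

In this sketch, the server sees only `masked` vectors whose masks cancel in aggregate, which is the property that makes reasoning about the privacy guarantees in point (1) nontrivial: any formal guarantee must hold given that the server observes the exact sum.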

Bio: Yahya Ezzeldin is a Postdoctoral Scholar in the Department of Electrical and Computer Engineering at the University of Southern California, where he works with Prof. Salman Avestimehr. Yahya received his Ph.D. in 2020 from UCLA, and his B.S. and M.S. in Electrical Engineering from Alexandria University in 2011 and 2014, respectively. Yahya's research interests lie at the intersection of information theory, fair machine learning, and distributed learning. Before joining USC, he worked as a Postdoctoral Researcher at UCLA in 2021 and as a machine learning platform engineer with Intel Corporation in 2018. Yahya was awarded the 2020-2021 Distinguished Ph.D. Dissertation Award in Signals and Systems from the Electrical and Computer Engineering Department at UCLA and the 2019-2020 Dissertation Year Fellowship (DYF) at UCLA.