Recent progress in machine learning (ML) provides many potentially effective tools for learning from datasets of ever-increasing size and making useful predictions. How do we know that these tools can be trusted in critical, high-sensitivity systems? If a learning algorithm predicts the GPA of a prospective college applicant, what guarantees do we have about the accuracy of this prediction? How do we know that it is not biased against certain groups of applicants? I will introduce examples of diverse application domains where these questions are important, as well as statistical ideas for ensuring that learned models apply to individuals in an equitable manner. In recent work with Yaniv Romano, Rina Barber, and Emmanuel Candes, we show that to achieve some fairness objectives we do not need to "open up the black box" and try to understand its underpinnings. Rather, we discuss broad methodologies, such as conformal inference, that can be wrapped around any black box to produce results that can be trusted and that are "fair."
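To give a flavor of how a method can wrap around a black box without inspecting its internals, here is a minimal sketch of split conformal prediction, one standard instance of conformal inference. It is an illustration only, not the specific method from the work above: the toy data, the binned-mean "black box," and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: noisy observations of a smooth signal.
n = 2000
x = rng.uniform(0, 5, size=(n, 1))
y = np.sin(x[:, 0]) + 0.3 * rng.standard_normal(n)

# Split into a proper training set and a held-out calibration set.
x_train, y_train = x[:1000], y[:1000]
x_cal, y_cal = x[1000:], y[1000:]

def fit_black_box(x, y):
    """A stand-in black box: a binned-mean regressor. Any model works here;
    conformal inference never looks inside it."""
    bins = np.linspace(0, 5, 21)
    idx = np.clip(np.digitize(x[:, 0], bins) - 1, 0, 19)
    means = np.array([y[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(20)])
    def predict(x_new):
        j = np.clip(np.digitize(x_new[:, 0], bins) - 1, 0, 19)
        return means[j]
    return predict

predict = fit_black_box(x_train, y_train)

# Conformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - predict(x_cal))
alpha = 0.1  # target 90% marginal coverage
n_cal = len(scores)
# Finite-sample-adjusted quantile of the calibration scores.
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal,
                method="higher")

# Prediction interval for a new point: [yhat - q, yhat + q].
x_new = np.array([[2.5]])
yhat = predict(x_new)[0]
print(f"90% prediction interval: [{yhat - q:.2f}, {yhat + q:.2f}]")
```

The key point is that the model-fitting step is entirely opaque to the wrapper: the coverage guarantee comes from the calibration quantile, not from any property of the fitted model.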
Contact firstname.lastname@example.org for required meeting password.