
From Model Explanations to Discovery: Explainable AI in Cancer Biology

Speaker: Prof Su-In Lee (University of Washington, Seattle)
Venue: Clark S361
Date: Jan 30

Abstract: Explainable AI (XAI) has made significant strides in recent years, offering valuable theories and techniques to interpret complex machine learning models. However, these methods often struggle when applied to interpreting complex datasets for scientific discovery, particularly those involving high-dimensional omics data such as gene expression profiles. These datasets, crucial for understanding cancer biology, require novel approaches to fully unlock the potential of XAI. In this talk, I will explore the practical challenges of applying XAI to gene expression data, highlighting both its potential and its limitations. I will present innovative strategies for adapting XAI techniques to accelerate data-driven discoveries in cancer pharmacology and cancer systems biology. The discussion will illuminate how addressing these challenges can lead to profound biological insights and impactful clinical implications. By bridging the gap between advanced XAI principles and techniques and the demands of real-world biomedical datasets, this talk aims to inspire the development of more robust methodologies at the intersection of AI and biomedicine, paving the way for a new era of innovation in biomedical research.

Reading List:

  • Joseph D. Janizek, Ayse B. Dincer, Safiye Celik, Hugh Chen, William Chen, Kamila Naxerova* and Su-In Lee.* Uncovering expression signatures of synergistic drug responses via ensembles of explainable machine-learning models. Nature Biomedical Engineering 7, 811–829 (2023) https://www.nature.com/articles/s41551-023-01034-0
  • Hugh Chen,* Ian C. Covert,* Scott M. Lundberg and Su-In Lee. Algorithms to estimate Shapley value feature attributions. Nature Machine Intelligence 5, 590–601 (2023) https://www.nature.com/articles/s42256-023-00657-x