James Zou and Londa Schiebinger call for ensuring AI doesn't perpetuate biases

Image: Londa Schiebinger and James Zou
June 2021

Debiasing artificial intelligence (AI)

In the medical field, AI encompasses a suite of technologies that can help diagnose patients’ ailments, improve health care delivery and enhance basic research. The technologies involve algorithms, or instructions, run by software. These algorithms can act like an extra set of eyes perusing lab tests and radiological images, for instance by parsing CT scans for particular shapes and color densities that could indicate disease or injury.

Problems of bias can emerge, however, at various stages of these devices’ development and deployment, James explained. One major factor is that the models these algorithms rely on can be trained on nonrepresentative patient datasets.

By failing to properly account for race, sex and socioeconomic status, these models can be poor predictors for certain groups. To make matters worse, clinicians may be unaware that an AI medical device can produce skewed results.
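The kind of skew described above can be made visible by stratifying an evaluation metric by demographic group rather than reporting only an aggregate figure. The following Python sketch uses invented toy data (the groups, labels and predictions are all illustrative, not from the paper) to show how an overall accuracy can look acceptable while masking failure on an underrepresented group:

```python
# Hypothetical sketch: aggregate metrics can hide subgroup disparities when
# training data underrepresent some patients. All data below are illustrative.

def accuracy(labels, preds):
    """Fraction of predictions that match the true labels."""
    return sum(l == p for l, p in zip(labels, preds)) / len(labels)

# Toy evaluation set: group A is well represented, group B is not.
records = [
    # (group, true_label, model_prediction)
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 1),
]

# Aggregate accuracy over everyone.
overall = accuracy([r[1] for r in records], [r[2] for r in records])

# The same metric, stratified by group, surfaces the hidden disparity.
by_group = {}
for g in {r[0] for r in records}:
    subset = [r for r in records if r[0] == g]
    by_group[g] = accuracy([r[1] for r in subset], [r[2] for r in subset])

print(f"overall accuracy: {overall:.2f}")   # 0.70 -- looks tolerable
print(f"group A accuracy: {by_group['A']:.2f}")  # 0.88 -- majority group
print(f"group B accuracy: {by_group['B']:.2f}")  # 0.00 -- underrepresented group
```

Reporting such disaggregated metrics is one concrete way the diverse-data policies the authors propose could be checked in practice.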

In a new perspective paper, James Zou and Londa Schiebinger discuss sex, gender and race bias in medicine and how these biases could be perpetuated by AI devices. 
 
James and Londa suggest several short- and long-term approaches to prevent AI-related bias, such as changing policies at medical funding agencies and scientific publications to ensure the data collected for studies are diverse, and incorporating more social, cultural and ethical awareness into university curricula.

“The white body and the male body have long been the norm in medicine guiding drug discovery, treatment and standards of care, so it’s important that we do not let AI devices fall into that historical pattern,” said Londa Schiebinger, the John L. Hinds Professor in the History of Science in the School of Humanities and Sciences and senior author of the paper published in the journal EBioMedicine.

“As we’re developing AI technologies for health care, we want to make sure these technologies have broad benefits for diverse demographics and populations,” said James Zou, assistant professor of biomedical data science and, by courtesy, of computer science and of electrical engineering and co-author of the study.

The matter of bias will only become more important as personalized, precision medicine grows in the coming years, said the researchers. Personalized medicine, which is tailored to each patient based on factors such as their demographics and genetics, is vulnerable to inequity if AI medical devices cannot adequately account for individuals’ differences.

“We’re hoping to engage the AI biomedical community in preventing bias and creating equity in the initial design of research, rather than having to fix things after the fact,” said Londa Schiebinger.