
Validity in decision-making algorithms: Addressing measurement error and selectively missing outcomes

Summary
Amanda Coston (UC Berkeley)
Sloan 380Y
Date(s): Oct 15
Content

Across domains such as medicine, employment, and criminal justice, predictive models often target labels that imperfectly reflect the outcomes of interest to experts and policymakers. For example, clinical risk assessments deployed to inform physician decision-making often predict measures of healthcare utilization (e.g., costs, hospitalization) as a proxy for patient medical need. These proxies can be subject to outcome measurement error when they systematically differ from the target outcome they are intended to measure. In the first part of this talk, we situate the challenges of proxy outcomes and measurement error in a broader framework of common threats to validity in algorithmic decision-making. Informed by validity theory from the social sciences, our framework structures common types of missing data and selection bias as threats to internal, external, and construct validity. Next, we turn to the question of how to address these challenges in practice. Focusing on the challenges of measurement error and selectively missing outcomes, we develop an unbiased risk minimization method that, given knowledge of proxy measurement error properties, corrects for the combined effects of these challenges. We also develop a method for estimating selection-dependent measurement error parameters when these are unknown in advance. We demonstrate the utility of our approach theoretically and via experiments on real-world data from randomized controlled trials conducted in healthcare and employment domains.
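
As a rough illustration of the general idea of unbiased risk minimization under known outcome measurement error (a sketch of a standard class-conditional noise correction, not the speaker's specific method), the Python snippet below assumes binary proxy labels whose flip rates rho_pos and rho_neg are known; the function and variable names here are hypothetical.

```python
import numpy as np

def corrected_loss(loss_pos, loss_neg, y_proxy, rho_pos, rho_neg):
    """Unbiased surrogate loss under class-conditional proxy label noise.

    loss_pos, loss_neg: per-example losses the model would incur if the
        true outcome were 1 or 0, respectively.
    y_proxy: observed (possibly mismeasured) binary proxy labels.
    rho_pos: P(proxy = 0 | true = 1); rho_neg: P(proxy = 1 | true = 0).
    """
    denom = 1.0 - rho_pos - rho_neg
    # Standard unbiased correction: in expectation over the label noise,
    # these corrected losses equal the losses under the true outcomes.
    l_tilde_pos = ((1 - rho_neg) * loss_pos - rho_pos * loss_neg) / denom
    l_tilde_neg = ((1 - rho_pos) * loss_neg - rho_neg * loss_pos) / denom
    return np.where(y_proxy == 1, l_tilde_pos, l_tilde_neg).mean()

# Illustrative usage with logistic losses for model scores s.
s = np.array([2.0, -1.0, 0.5])
y_proxy = np.array([1, 0, 1])
loss_pos = np.log1p(np.exp(-s))   # loss if the true outcome were 1
loss_neg = np.log1p(np.exp(s))    # loss if the true outcome were 0
print(corrected_loss(loss_pos, loss_neg, y_proxy, rho_pos=0.2, rho_neg=0.1))
```

Under these assumptions, minimizing the corrected loss is, in expectation over the measurement error, equivalent to minimizing the loss on the true outcomes; handling selectively missing outcomes would additionally require, for example, reweighting observed cases by their probability of being observed.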

Relevant papers:

  1. Outcome measurement error
  2. Validity in algorithmic decision-making