“Goal Orientation for Fair Machine Learning Algorithms”, a paper co-authored by Heng Xu and Nan Zhang, was accepted for publication in Production and Operations Management.

A key challenge facing the use of Machine Learning (ML) in organizational selection settings (e.g., the processing of loan or job applications) is the potential bias against racial and gender minorities. To address this challenge, a rich literature of Fairness-Aware ML (FAML) algorithms has emerged, attempting to mitigate bias while maintaining the predictive accuracy of ML algorithms. Almost all existing FAML algorithms define their optimization goals according to a selection task, meaning that ML outputs are assumed to be the final selection outcome. In practice, though, ML outputs are rarely used as-is. Instead, ML often serves in a support role to human managers, allowing them to more easily exclude unqualified applications. This effectively assigns ML a screening task rather than a selection task. It might be tempting to treat selection and screening as two variations of the same task that differ only quantitatively in the admission rate. This paper, however, reveals a qualitative difference between the two. Specifically, we demonstrate that mis-categorizing a screening task as a selection task could not only degrade the final selection quality but also introduce fairness problems, such as selection bias within the minority group.
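To make the selection/screening distinction concrete, the following is a minimal, hypothetical sketch (not drawn from the paper): under a selection task the ML scores directly determine who is admitted, whereas under a screening task the ML only removes presumably unqualified applicants and a human manager selects from the remaining shortlist. All names, thresholds, and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ML scores for 100 applicants (illustrative only).
scores = rng.uniform(size=100)

K = 10              # number of positions to fill
SCREEN_RATE = 0.5   # fraction of applicants passed on to the human manager

# Selection task: the ML output *is* the final outcome -- admit the top-K scores.
selected_by_ml = np.argsort(scores)[::-1][:K]

# Screening task: the ML only filters out the lowest-scoring applicants;
# a human manager then chooses K candidates from the shortlist,
# possibly using information the model never saw.
shortlist = np.argsort(scores)[::-1][: int(SCREEN_RATE * len(scores))]

def human_decision(pool, k):
    """Placeholder for the manager's final choice over the screened pool."""
    return rng.choice(pool, size=k, replace=False)

final_hires = human_decision(shortlist, K)
```

The two formulations optimize different quantities: the selection rule is judged on who it admits, while the screening rule is judged on the quality and composition of the pool it hands to the human decision-maker, which is why the paper treats them as qualitatively different tasks.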