🔬 Research Summary by Diogo Leitão, a Machine Learning Researcher at Feedzai.
[Original paper by Diogo Leitão, Pedro Saleiro, Mário A. T. Figueiredo, Pedro Bizarro]
Overview: Human-AI collaboration (HAIC) in decision-making aims to create synergistic teaming between human decision-makers and AI systems. State-of-the-art methods to manage assignments in human-AI collaboration entail unfeasible requirements, such as concurrent predictions from every human, and leave key challenges unaddressed, such as human capacity constraints. We aim to identify and review these limitations, pointing to where opportunities for future research in HAIC may lie.
Introduction
With the advent of machine learning (ML), artificial intelligence (AI) is now ubiquitous in decision-making processes (e.g., credit scoring [1], financial fraud prevention [2], and criminal justice [3]). Nevertheless, ML models have relevant drawbacks inhibiting full automation in high-stakes domains: they lack transparency, are limited in their worldview, are brittle in dynamic environments, and can discriminate against protected groups. Researchers have proposed human-AI collaboration as an alternative, arguing that humans and AI have complementary strengths. In such a system, each instance is assigned to a human, to the AI, or to both, optimizing for global predictive performance and, optionally, fairness.
Nevertheless, the state-of-the-art method for managing assignments in human-AI collaboration, learning to defer (L2D), entails several requirements that are often unfeasible, such as the availability of human predictions for every instance, or of ground-truth labels that are independent of those humans. Furthermore, neither L2D nor alternative approaches tackle fundamental issues of deploying HAIC systems in real-world settings, such as capacity management or dealing with dynamic environments. We aim to identify and review these and other limitations, pointing to where opportunities for future research in HAIC may lie.
Key Insights
Learning to Defer
The simplest approach to managing assignments in HAIC is to defer to humans based on the ML model's confidence. This approach is drawn from learning with a reject option, a framework first studied by Chow (1970) [4], where the goal is to optimize the performance of the non-rejected predictions by abstaining from predicting in cases of high uncertainty. Madras et al. (2018) [5] argued that confidence-based deferral is suboptimal, as it fails to consider the predictive performance and fairness of the downstream human decision-maker. On some high-uncertainty instances, the human may be just as inaccurate as the model; it may be preferable to instead defer other, lower-uncertainty instances where the human can outperform the model. Madras et al. (2018) expanded the learning-with-a-reject-option framework to model these differences. In particular, they adapted the work of Cortes et al. (2016) [6], who incorporated the reject option into the learning algorithm, allowing the classifier to abstain from predicting at a constant, label-independent cost. L2D instead uses a variable cost of deferral that accounts for the performance of the human decision-maker.
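To make the contrast concrete, below is a minimal sketch (ours, not from any of the cited papers) of the two assignment rules, assuming per-instance error estimates for the model and the human are already available; in an actual L2D system these quantities are learned jointly rather than given. Function and parameter names are illustrative.

```python
import numpy as np

def confidence_based_deferral(model_probs, threshold=0.7):
    """Chow-style rejection: defer whenever the model's confidence is low.

    model_probs: (n, n_classes) array of predicted class probabilities.
    Returns a boolean mask where True means "defer to the human".
    """
    confidence = model_probs.max(axis=1)
    return confidence < threshold

def l2d_style_deferral(model_error_est, human_error_est, deferral_cost=0.0):
    """L2D-style rule: defer only where the human's expected cost
    (estimated error plus a fixed cost of consulting them) is lower than
    the model's expected error, regardless of how confident the model is.

    model_error_est, human_error_est: (n,) arrays of estimated error
    probabilities for the model and the human on each instance.
    """
    return human_error_est + deferral_cost < model_error_est
```

The key difference is that the second rule can keep a high-uncertainty instance with the model when the human is expected to do no better on it.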
Madras et al. (2018) showed how L2D improves upon confidence-based deferral even with unfair or inconsistent humans. Other authors have since joined the effort to expand and improve L2D (Mozannar & Sontag, 2020 [7]; Keswani et al., 2021 [8]; Verma & Nalisnick, 2022 [9]).
Limitations of Learning to Defer
Although L2D is the assignment framework for human-AI collaboration currently garnering the most attention from researchers, it nevertheless entails several limitations and leaves significant challenges unaddressed. L2D requires predictions from every considered human for every training instance. This will often be unfeasible in real-world applications: teams are staffed for regular operations, in which humans may only cover a small subset of cases and only one human is assigned to each decision. A significant consequence of this requirement is that L2D cannot update itself with new data in dynamic environments, as complete data will not be available during regular operations.
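As an illustration of this data requirement, the toy sketch below (written for this summary, not taken from the paper) contrasts the complete matrix of human predictions that L2D training assumes with the sparse, one-reviewer-per-instance data that regular operations actually produce.

```python
import numpy as np

rng = np.random.default_rng(0)
n_instances, n_humans = 6, 3

# What L2D training assumes: a prediction from every human on every instance.
full_human_preds = rng.integers(0, 2, size=(n_instances, n_humans))

# What regular operations yield: each instance is reviewed by a single human,
# so every other entry is simply missing.
sparse_human_preds = np.full((n_instances, n_humans), np.nan)
assigned = rng.integers(0, n_humans, size=n_instances)
rows = np.arange(n_instances)
sparse_human_preds[rows, assigned] = full_human_preds[rows, assigned]

print(full_human_preds)    # dense matrix required for training
print(sparse_human_preds)  # mostly-missing matrix available in practice
```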
All L2D contributions propose jointly training the primary classifier and the deferral system. The advantage, the authors argue, is that the primary classifier can specialize on the instances that will be assigned to it, to the detriment of those that will not be. Nevertheless, this entails two major unacknowledged drawbacks. First, by design, specialization renders the primary classifier useless on the instances likely to be deferred, as gradients are stopped from back-propagating into it on those instances. This makes L2D unsuitable for domains where the AI advises the human decision-makers, as the ML model will not be performant on the very instances being deferred. Second, specialization makes the system brittle: by trading off generalization for specialization, the AI is not robust to post-deployment changes in the human capacity for review. If any subset of humans becomes temporarily unavailable, the AI will not be capable of covering for them, as it was not trained on those regions of the feature space, inevitably harming performance.
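The gradient-stopping behavior can be seen in a schematic version of the joint objective below; this is a simplification written for this summary, not the exact loss of any cited paper.

```python
import numpy as np

def joint_l2d_loss(clf_loss, human_loss, defer_prob):
    """Schematic per-instance loss of a jointly trained L2D system.

    clf_loss:   (n,) classifier loss on each instance (e.g., cross-entropy).
    human_loss: (n,) loss incurred if the instance is deferred to the human.
    defer_prob: (n,) probability that the rejector defers the instance.

    The classifier term is weighted by (1 - defer_prob): on instances that
    are (almost) always deferred, the classifier receives (almost) no
    gradient, which is exactly the specialization described above.
    """
    return (1.0 - defer_prob) * clf_loss + defer_prob * human_loss
```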
Keswani et al. (2021) expanded L2D to model the expertise of each human in a team at an individual level. Nevertheless, they fail to consider the fundamental challenge of managing such teams: humans are limited in their work capacity. Capacity constraints are never considered in (multi-expert) L2D, where the goal is simply to find the best decision-maker for each instance. To extract an actionable policy from L2D under capacity constraints, one must sequentially assign each instance to the best available decision-maker, as ranked by the method. However, under such constraints, this greedy policy may be suboptimal. For example, if a decision-maker is universally better than the rest of the team, then, ideally, they would decide only on the most challenging cases, where others are most likely to err. Assigning instances sequentially would instead use up the best decision-maker on the earliest cases rather than saving them for the hardest ones.
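A toy example, constructed for this summary rather than taken from the paper, illustrates the point: a greedy sequential policy spends the universally better decision-maker on an easy early case, while a global assignment under the same capacities (solved here with scipy's linear_sum_assignment, one of several possible tools) saves them for the hard case.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Expected error of each decision-maker on each instance (toy numbers).
# Rows: instances (arriving in this order); columns: decision-makers.
# Decision-maker 0 is universally better, but can only review one case.
cost = np.array([
    [0.05, 0.10],   # easy case: either decision-maker does well
    [0.10, 0.60],   # hard case: only decision-maker 0 does well
])
capacity = [1, 1]   # each decision-maker can take exactly one instance

# Greedy sequential policy: give each arriving instance the best
# decision-maker that still has capacity left.
remaining = list(capacity)
greedy_cost = 0.0
for row in cost:
    j = min((j for j in range(len(remaining)) if remaining[j] > 0),
            key=lambda j: row[j])
    remaining[j] -= 1
    greedy_cost += row[j]

# Global assignment under the same capacities: replicate each decision-maker
# once per unit of capacity and solve the resulting assignment problem.
expanded = np.repeat(cost, capacity, axis=1)
rows, cols = linear_sum_assignment(expanded)
global_cost = expanded[rows, cols].sum()

print(round(greedy_cost, 2))  # 0.65: best decision-maker spent on the easy case
print(round(global_cost, 2))  # 0.2: best decision-maker saved for the hard case
```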
Between the lines
In our paper, we review these and other limitations of the state-of-the-art method for managing assignments in human-AI collaborative decision-making, learning to defer. Most importantly, L2D requires concurrent predictions from every human (to whom decisions may be deferred) for every training instance, and it neither considers nor offers a solution for situations where human decision capacity is limited, as it often will be in reality. By identifying these limitations, we hope to motivate research toward a holistic human-AI collaboration system that learns to optimize performance and fairness from the available data while managing existing capacity constraints. Such a system would enable human oversight over high-stakes, mission-critical decision-making processes without sacrificing performance or fairness.
References
[1] A. E. Khandani, A. J. Kim, and A. W. Lo. Consumer credit-risk models via machine-learning algorithms. Journal of Banking & Finance, 34(11):2767–2787, 2010.
[2] J. O. Awoyemi, A. O. Adetunmbi, and S. A. Oluwadare. Credit card fraud detection using machine learning techniques: A comparative analysis. In 2017 International Conference on Computing Networking and Informatics (ICCNI), pages 1–9. IEEE, 2017.
[3] T. Brennan, W. Dieterich, and B. Ehret. Evaluating the predictive validity of the COMPAS risk and needs assessment system. Criminal Justice and Behavior, 36(1):21–40, 2009.
[4] C. K. Chow. On optimum recognition error and reject tradeoff. IEEE Transactions on Information Theory, 16(1):41–46, 1970.
[5] D. Madras, T. Pitassi, and R. Zemel. Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
[6] C. Cortes, G. DeSalvo, and M. Mohri. Learning with Rejection. In R. Ortner, H. U. Simon, and S. Zilles, editors, Algorithmic Learning Theory – 27th International Conference, ALT 2016, Bari, Italy, October 19-21, 2016, Proceedings, volume 9925 of Lecture Notes in Computer Science, pages 67–82, 2016.
[7] H. Mozannar and D. A. Sontag. Consistent Estimators for Learning to Defer to an Expert. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 7076–7087. PMLR, 2020.
[8] V. Keswani, M. Lease, and K. Kenthapadi. Towards Unbiased and Accurate Deferral to Multiple Experts. In M. Fourcade, B. Kuipers, S. Lazar, and D. K. Mulligan, editors, AIES ’21: AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event, USA, May 19-21, 2021, pages 154–165. ACM, 2021.
[9] R. Verma and E. T. Nalisnick. Calibrated Learning to Defer with One-vs-All Classifiers. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, and S. Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 22184–22202. PMLR, 2022.