The Value of Information in Human-AI Decision-making
Authors: Ziyang Guo, Yifan Wu, Jason Hartline, Jessica Hullman
Published: 2025-02-10 04:50:42+00:00
AI Summary
This paper introduces a decision-theoretic framework to characterize the value of information in human-AI decision-making, focusing on identifying complementary information between agents. It defines Agent-Complementary Information Value (ACIV) and Instance-Level Agent-Complementary Information Value (ILIV), and presents a novel explanation technique, ILIV-SHAP, which adapts SHAP to highlight human-complementing information. Experiments and demonstrations show that ACIV can identify AI models leading to better human-AI team performance, and ILIV-SHAP explanations significantly reduce errors compared to traditional SHAP or no explanations.
Abstract
Multiple agents are increasingly combined to make decisions with the expectation of achieving complementary performance, where the decisions they make together outperform those made individually. However, improving the performance of collaborating agents requires knowing what information and strategies each agent employs. With a focus on human-AI pairings, we contribute a decision-theoretic framework for characterizing the value of information. By defining complementary information, our approach identifies opportunities for agents to better exploit available information in AI-assisted decision workflows; we quantify this as agent-complementary information value (ACIV). We present a novel explanation technique (ILIV-SHAP) that adapts SHAP explanations to highlight human-complementing information. We validate the effectiveness of ACIV and ILIV-SHAP through a study of human-AI decision-making, and demonstrate the framework on examples from chest X-ray diagnosis and deepfake detection. We find that presenting ILIV-SHAP alongside AI predictions leads to reliably greater reductions in error over non-AI-assisted decisions than vanilla SHAP does.
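To make the SHAP-adaptation idea concrete, the sketch below computes exact Shapley attributions for a tiny model by subset enumeration, then reweights them per feature to emphasize information assumed to complement the human. This is only an illustrative sketch: the `iliv_weight` vector and the multiplicative reweighting are assumptions for exposition, not the paper's actual ILIV-SHAP definition.

```python
# Hedged sketch: exact Shapley values for a tiny linear model, followed by a
# hypothetical instance-level reweighting in the spirit of ILIV-SHAP.
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attribution of f(x) relative to f(baseline)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley coalition weight |S|!(n-|S|-1)!/n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Tiny linear "AI model": for linear f, the Shapley value of feature j
# reduces to w[j] * (x[j] - baseline[j]).
w = [2.0, -1.0, 0.5]
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))
x, b = [1.0, 3.0, 2.0], [0.0, 0.0, 0.0]
phi = shapley_values(f, x, b)

# Hypothetical per-feature complementary-information weights (an assumption):
# up-weight features the human decision-maker is presumed to under-use.
iliv_weight = [0.1, 1.0, 0.9]
iliv_shap = [p * u for p, u in zip(phi, iliv_weight)]
print(phi, iliv_shap)
```

Shapley values satisfy efficiency, so `sum(phi)` equals `f(x) - f(baseline)`; the reweighted attributions deliberately break that property in exchange for foregrounding human-complementing features.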