The Value of Information in Human-AI Decision-making

Authors: Ziyang Guo, Yifan Wu, Jason Hartline, Jessica Hullman

Published: 2025-02-10 04:50:42+00:00

AI Summary

This paper introduces a decision-theoretic framework for characterizing the value of information in human-AI decision-making, with a focus on identifying information that is complementary across agents. It defines Agent-Complementary Information Value (ACIV) and an instance-level counterpart (ILIV), and presents a novel explanation technique, ILIV-SHAP, which adapts SHAP to highlight human-complementing information. Experiments and demonstrations show that ACIV identifies AI models that lead to better human-AI team performance, and that ILIV-SHAP explanations reduce decision errors more reliably than vanilla SHAP or no explanations.
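
To make the value-of-information idea concrete, below is a minimal Python sketch of the kind of quantity ACIV captures: the expected-utility gain from letting the decision additionally condition on an AI signal, relative to decisions based on the human's judgment alone. The simulated distribution, agent accuracies, and 0/1 utility are illustrative assumptions, not the paper's setup or its exact estimator.

```python
# Toy value-of-information computation in the spirit of ACIV:
# expected-utility gain from conditioning the decision on (human judgment, AI signal)
# rather than on the human judgment alone. All quantities are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

state = rng.binomial(1, 0.5, n)                            # ground-truth binary state
human = np.where(rng.random(n) < 0.7, state, 1 - state)    # 70%-accurate human judgment
ai    = np.where(rng.random(n) < 0.8, state, 1 - state)    # 80%-accurate AI prediction

def utility(action, state):
    # 1 for a correct decision, 0 otherwise (assumed utility function).
    return (action == state).astype(float)

def best_response_expected_utility(signals, state):
    """Expected utility when the decision maker best-responds to each signal realization."""
    total = 0.0
    for key in np.unique(signals, axis=0):
        mask = (signals == key).all(axis=1)
        best = max(utility(a, state[mask]).mean() for a in (0, 1))
        total += mask.mean() * best
    return total

eu_human_only = best_response_expected_utility(human.reshape(-1, 1), state)
eu_with_ai = best_response_expected_utility(np.column_stack([human, ai]), state)

# ACIV-style quantity: how much utility the AI signal adds over the human's decisions.
print(f"EU (human only):    {eu_human_only:.3f}")
print(f"EU (human + AI):    {eu_with_ai:.3f}")
print(f"Complementary gain: {eu_with_ai - eu_human_only:.3f}")
```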

Abstract

Multiple agents are increasingly combined to make decisions with the expectation of achieving complementary performance, where the decisions they make together outperform those made individually. However, knowing how to improve the performance of collaborating agents requires knowing what information and strategies each agent employs. With a focus on human-AI pairings, we contribute a decision-theoretic framework for characterizing the value of information. By defining complementary information, our approach identifies opportunities for agents to better exploit available information in AI-assisted decision workflows. We present a novel explanation technique (ILIV-SHAP) that adapts SHAP explanations to highlight human-complementing information. We validate the effectiveness of ACIV and ILIV-SHAP through a study of human-AI decision-making, and demonstrate the framework on examples from chest X-ray diagnosis and deepfake detection. We find that presenting ILIV-SHAP alongside AI predictions leads to reliably greater reductions in error over non-AI-assisted decisions than vanilla SHAP does.


Key findings
The framework demonstrates that AI models with higher Agent-Complementary Information Value (ACIV) lead to greater improvements in human-AI team performance. Presenting ILIV-SHAP explanations alongside AI predictions reduces errors in human-AI decisions more reliably than vanilla SHAP or no explanations, particularly when the AI provides sufficient complementary information. In the deepfake-detection demonstration, the framework also reveals that human and AI agents rely on distinct video-level features, highlighting opportunities for improved human-AI collaboration.
Approach
The authors propose a decision-theoretic framework to quantify the value of information, defining Agent-Complementary Information Value (ACIV) to measure the additional utility new signals (e.g., AI predictions) provide over existing agent decisions. For instance-level insights, they introduce ILIV and an explanation technique called ILIV-SHAP, which adapts SHAP explanations to specifically highlight features contributing to complementary information for a given instance.
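
As a rough illustration of the ILIV-SHAP pattern, the sketch below applies SHAP not to the model's raw prediction but to a per-instance score meant to stand in for the complementary information the AI adds over the human's decision. The `human_policy` and `complementary_score` functions, the synthetic data, and the model are hypothetical placeholders, not the paper's definitions; only the pattern of wrapping an instance-level value function and explaining it with SHAP follows the description above.

```python
# Sketch: explain a per-instance "complementary value" score with SHAP,
# instead of the model's raw output. The score and policy below are stand-ins.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def human_policy(X):
    # Hypothetical human decision rule that only attends to the first feature.
    return (X[:, 0] > 0).astype(float)

def complementary_score(X):
    # Stand-in instance-level score: disagreement between the AI's predicted
    # probability and the human-only rule's decision.
    ai_prob = model.predict_proba(X)[:, 1]
    return np.abs(ai_prob - human_policy(X))

# Attribute the complementary score to input features with SHAP.
background = X[:50]                                   # background data for masking
explainer = shap.Explainer(complementary_score, background)
iliv_shap_values = explainer(X[:5])
print(iliv_shap_values.values.shape)                  # (5, 8): per-feature attributions
```

Attributing this score rather than the raw prediction shifts the explanation toward features that change what the human would have decided on their own, which is the intuition behind highlighting human-complementing information.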
Datasets
Ames, Iowa Housing Dataset, MIMIC-CXR database, MIMIC-IV database, Deepfake Detection Challenge (DFDC) dataset (via Groh et al. [2022]), Human-AI Interactions dataset (for observational study in Appendix C).
Model(s)
UNKNOWN
Author countries
United States