Explainable AI and Trust in the Real World

Overview

We conducted a qualitative case study with end-users of a real-world AI application to gain a nuanced understanding of explainable AI and trust.

In Paper 1, on explainable AI, we discuss (1) what explainability needs end-users had, (2) how they intended to use explanations of AI outputs, and (3) how they perceived existing explanation approaches.

In Paper 2, on trust, we describe multiple aspects of end-users' trust in AI and how human, AI, and context-related factors influenced each.

See the videos below for an overview.

Paper 1 on Explainable AI

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
CHI 2023 Best Paper Honorable Mention 🏅

Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a mixed-methods study with 20 end-users of a real-world AI application, the Merlin bird identification app, and inquired about their XAI needs, uses, and perceptions. We found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details. Relatedly, participants intended to use XAI explanations for various purposes beyond understanding the AI's outputs: calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI, and giving constructive feedback to developers. Finally, among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanations. We discuss the implications of our findings and provide recommendations for future XAI design.

Featured in the Human-Centered AI Medium blog as a CHI 2023 Editors' Choice. Position papers based on this work also appeared at the CHI 2023 Human-Centered Explainable AI (HCXAI) Workshop and the NeurIPS 2022 Human-Centered AI (HCAI) Workshop as a panel presentation.

Paper

Supplement

30sec Preview

10min Talk

CHI HCXAI Workshop Paper

CHI HCXAI Workshop Talk

NeurIPS HCAI Workshop Paper


Paper 2 on Trust

Humans, AI, and Context: Understanding End-Users’ Trust in a Real-World Computer Vision Application
FAccT 2023

Trust is an important factor in people's interactions with AI systems. However, there is a lack of empirical studies examining how real end-users trust or distrust the AI system they interact with. Most research investigates one aspect of trust in lab settings with hypothetical end-users. In this paper, we provide a holistic and nuanced understanding of trust in AI through a qualitative case study of a real-world computer vision application. We report findings from interviews with 20 end-users of a popular, AI-based bird identification app where we inquired about their trust in the app from many angles. We find participants perceived the app as trustworthy and trusted it, but selectively accepted app outputs after engaging in verification behaviors, and decided against app adoption in certain high-stakes scenarios. We also find domain knowledge and context are important factors for trust-related assessment and decision-making. We discuss the implications of our findings and provide recommendations for future research on trust in AI.

Featured in the Montreal AI Ethics Institute's newsletter and on its website. A shorter version of this work also appeared at the CHI 2023 Trust and Reliance in AI-assisted Tasks (TRAIT) Workshop.

Paper

10min Talk

Acknowledgements

We foremost thank our participants for generously sharing their time and experiences. We also thank Tristen Godfrey, Dyanne Ahn, and Klea Tryfoni for their help in interview transcription. Finally, we thank Angelina Wang, Vikram V. Ramaswamy, Amna Liaqat, Fannie Liu, and other members of the Princeton HCI Lab and the Princeton Visual AI Lab for their helpful and thoughtful feedback.

This material is based upon work partially supported by the National Science Foundation (NSF) under Grants 1763642 and 2145198 awarded to OR. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. We also acknowledge support from the Princeton SEAS Howard B. Wentz, Jr. Junior Faculty Award (OR), Princeton SEAS Project X Fund (RF, OR), Princeton Center for Information Technology Policy (EW), Open Philanthropy (RF, OR), and NSF Graduate Research Fellowship (SK).

Contact

Sunnie S. Y. Kim (sunniesuhyoung@princeton.edu)