I primarily do AI + HCI research on improving the explainability and fairness of AI systems and helping people develop appropriate understanding of and trust in them. I have also done research with collaborators in psychology, neuroscience, environmental science, and art; I really enjoy and value interdisciplinary work.
"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust
FAccT 2024 ● PAPER ● OSF
* Featured in Axios, New Scientist, the ACM showcase, Microsoft's New Future of Work Report, and the Human-Centered AI Medium publication as Good Reads in Human-Centered AI.
Allowing Humans to Interactively Guide Machines Where to Look Does Not Always Improve Human-AI Team's Classification Accuracy
CVPR 2024 XAI4CV ● PAPER ● CODE ● DEMO
Establishing Appropriate Trust in AI through Transparency and Explainability
CHI EA 2024 ● PAPER
* Research description for the CHI 2024 Doctoral Consortium.
Human-Centered Explainable AI (HCXAI): Reloading Explainability in the Era of Large Language Models (LLMs)
CHI EA 2024 ● PAPER
* Proposal for the CHI 2024 Human-Centered Explainable AI Workshop.
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
CHI 2023 HONORABLE MENTION ● PAPER ● WEBSITE ● TALK
* Featured in the Human-Centered AI Medium publication as CHI 2023 Editors' Choice. Also presented at the NeurIPS 2022 Human-Centered AI Workshop (spotlight), the CHI 2023 Human-Centered Explainable AI Workshop, and the ECCV 2024 Explainable Computer Vision Workshop (invited talk).
Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application
FAccT 2023 ● PAPER ● WEBSITE ● TALK
* Featured in the Montreal AI Ethics Institute's blog. Also presented at the CHI 2023 Trust and Reliance in AI-assisted Tasks Workshop.
Overlooked Factors in Concept-based Explanations: Dataset Choice, Concept Learnability, and Human Capability
CVPR 2023 ● PAPER ● CODE
UFO: A Unified Method for Controlling Understandability and Faithfulness Objectives in Concept-based Explanations for CNNs
PAPER
HIVE: Evaluating the Human Interpretability of Visual Explanations
ECCV 2022 ● PAPER ● WEBSITE ● CODE ● TALK
* Also presented at the CVPR 2022 Explainable AI for Computer Vision Workshop (spotlight), the CHI 2022 Human-Centered Explainable AI Workshop (spotlight), and the CVPR 2022 Women in Computer Vision Workshop.
ELUDE: Generating Interpretable Explanations via a Decomposition into Labelled and Unlabelled Features
CVPR 2022 XAI4CV ● PAPER
Shallow Neural Networks Trained to Detect Collisions Recover Features of Visual Loom-Selective Neurons
eLife 2022 ● PAPER ● CODE
Fair Attribute Classification through Latent Space De-biasing
CVPR 2021 ● PAPER ● WEBSITE ● CODE ● DEMO ● TALK
* Featured in Coursera's GANs Specialization course and the MIT Press book Foundations of Computer Vision. Also presented at the CVPR 2021 Responsible Computer Vision Workshop (invited talk) and the CVPR 2021 Women in Computer Vision Workshop (invited talk).
Information-Theoretic Segmentation by Inpainting Error Maximization
CVPR 2021 ● PAPER ● WEBSITE
Cleaning and Structuring the Label Space of the iMet Collection 2020
CVPR 2021 FGVC ● PAPER ● CODE
[Re] Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias
ReScience 2021 ● PAPER ● CODE
* Selected for publication from the ML Reproducibility Challenge 2020.
Environmental Performance Index
WORLD ECONOMIC FORUM 2018 ● PAPER ● WEBSITE ● ARTICLE ● DISCUSSION
* Presented at the World Economic Forum and covered by international media outlets. As the data team lead, I built the full data pipeline and led the analysis.