Sunnie S. Y. Kim

I am a computer science PhD student and NSF graduate fellow at Princeton University advised by Olga Russakovsky in the Princeton Visual AI Lab. I also collaborate closely with Ruth Fong.

I work primarily in computer vision and machine learning, at the intersection of fairness, accountability, transparency, and ethics in AI, and human-computer interaction. My focus is on building more interpretable visual AI systems through human-centered approaches.

Previously, I received a BS degree in Statistics and Data Science from Yale University, where I worked with John Lafferty in the Yale Statistical Machine Learning Group. After graduation, I spent a year at the Toyota Technological Institute at Chicago (TTIC) doing computer vision and machine learning research with Greg Shakhnarovich in the Perception and Learning Systems Lab.

My first name is pronounced like "sunny," and I use she/her/hers pronouns. I like to run and play tennis in my free time!

Email  /  Github  /  Google Scholar  /  Twitter

News

07/2022: HIVE: Evaluating the Human Interpretability of Visual Explanations has been accepted to ECCV 2022!
06/2022: Attended CVPR 2022 in-person and presented at 2 workshops: WICV (poster), XAI4CV (talk & poster).
06/2022: Posted a new preprint ELUDE: Generating Interpretable Explanations via a Decomposition into Labelled and Unlabelled Features.
05/2022: Attended the CHI 2022 HCXAI workshop and presented HIVE: Evaluating the Human Interpretability of Visual Explanations.
04/2022: Was awarded the NSF Graduate Research Fellowship!
01/2022: Passed my program's general qualifying exam. Huge thanks to my committee members Olga Russakovsky, Ruth Fong, and Andrés Monroy-Hernández for helpful feedback on my research!
07/2021: Served as a research instructor for Princeton AI4ALL and mentored high school students on research projects using computer vision for biodiversity monitoring.
06/2021: Attended CVPR 2021 virtually and presented at the main conference (2 posters) and 3 workshops: WICV (talk & poster), RCV (talk), FGVC (poster).
03/2021: [Re] Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias has been published in the ReScience C journal.
02/2021: Two papers accepted to CVPR 2021: Fair Attribute Classification through Latent Space De-biasing & Information-Theoretic Segmentation by Inpainting Error Maximization.
08/2020: Started my PhD at Princeton University!
08/2020: Attended ECCV 2020 virtually and presented Deformable Style Transfer at the main conference and the WICV workshop.
07/2020: Wrapped up my time at TTIC as a visiting student. The year went by very quickly. I’ll especially miss the Perception and Learning Systems Lab, the 2019-2020 cohort friends, and the Girls Who Code team.

Research while at Princeton (2020 - Present)

I am currently working on evaluating the interpretability of visual explanations with human studies and correcting undesirable prediction rules in trained image classifiers, among other things.

I also believe open science improves transparency, accountability, and progress in the field. I try to document and open-source my code as much as I can, and I support initiatives like the ML Reproducibility Challenge, which encourages our community to do more reproducible research (I participated in the 2020 edition and reviewed for the 2021 edition).


* denotes equal contribution. Representative papers are highlighted.

ELUDE: Generating Interpretable Explanations via a Decomposition into Labelled and Unlabelled Features
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth Fong, Olga Russakovsky
CVPR 2022 Explainable AI for Computer Vision Workshop (XAI4CV)
paper / bibtex

We present ELUDE, a novel explanation framework that decomposes a model's prediction into two parts: one that is explainable through a linear combination of the semantic attributes, and another that is dependent on the set of uninterpretable features.

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky
ECCV 2022
CHI 2022 Human-Centered Explainable AI Workshop (HCXAI)
CVPR 2022 Explainable AI for Computer Vision Workshop (XAI4CV) spotlight talk
CVPR 2022 Women in Computer Vision Workshop (WICV)
project page / paper / extended abstract / code / 2min talk / bibtex

We introduce HIVE, a novel human evaluation framework for visual interpretability methods that allows for falsifiable hypothesis testing, cross-method comparison, and human-centered evaluation. Using HIVE, we evaluate four existing methods and find that explanations engender human trust, even for incorrect model predictions, yet are not distinct enough for users to distinguish between correct and incorrect predictions.

Cleaning and Structuring the Label Space of the iMet Collection 2020
Vivien Nguyen*, Sunnie S. Y. Kim*
CVPR 2021 Fine-Grained Visual Categorization Workshop (FGVC)
paper / extended abstract / code / bibtex

We clean and structure the noisy label space of the iMet Collection dataset for fine-grained art attribute recognition. This work was done as a course project for Princeton COS 529 Advanced Computer Vision.

[Re] Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias
Sunnie S. Y. Kim, Sharon Zhang, Nicole Meister, Olga Russakovsky
ReScience C 2021
paper (journal) / paper (arXiv) / code / openreview / bibtex

We reproduce Singh et al. (CVPR 2020), which mitigates contextual bias in object and attribute recognition. Ours is one of 23 reproducibility reports (out of 82 submissions) accepted for publication from the ML Reproducibility Challenge 2020.

Fair Attribute Classification through Latent Space De-biasing
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Olga Russakovsky
CVPR 2021
CVPR 2021 Responsible Computer Vision Workshop (RCV) invited talk
CVPR 2021 Women in Computer Vision Workshop (WICV) part of an invited talk
project page / paper / code / demo / 2min talk / 5min talk / 10min talk / bibtex

We develop a GAN-based data-augmentation method for fairer visual classification. Our work is featured in Coursera's GANs Specialization course.

Research while at TTIC (2019 - 2020)

I was introduced to computer vision and deep learning during my gap year at TTIC. (Late by today's standards? Well, I was a statistics major in undergrad. 😄)

Information-Theoretic Segmentation by Inpainting Error Maximization
Pedro Savarese, Sunnie S. Y. Kim, Michael Maire, Gregory Shakhnarovich, David McAllester
CVPR 2021
project page / paper

We introduce a cheap, class-agnostic, and learning-free method for unsupervised image segmentation.

Deformable Style Transfer
Sunnie S. Y. Kim, Nicholas Kolkin, Jason Salavon, Gregory Shakhnarovich
ECCV 2020
ECCV 2020 Women in Computer Vision Workshop (WICV)
project page / paper / code / demo / 1min talk / 10min talk / slides / bibtex

We propose an image style transfer method that can transfer both texture and geometry.

Research while at Yale (2016 - 2019)

In my undergraduate years, I was fortunate to gain research experience in various fields (e.g., statistics, neuroscience, environmental science, psychology) under the guidance of many great mentors.

Shallow Neural Networks Trained to Detect Collisions Recover Features of Visual Loom-Selective Neurons
Baohua Zhou, Zifan Li, Sunnie S. Y. Kim, John Lafferty, Damon A. Clark
eLife 2022
paper / code / bibtex

We find that anatomically-constrained shallow neural networks trained to detect impending collisions resemble experimentally observed LPLC2 neuron responses for many visual stimuli.

2018 Environmental Performance Index
Zachary A. Wendling, Daniel C. Esty, John W. Emerson, Marc A. Levy, Alex de Sherbinin, ..., Sunnie S. Y. Kim et al.
website / report & data / discussion at WEF18 / news 1 / news 2

We evaluate 180 countries' environmental health and ecosystem vitality. EPI is a biennial project conducted by researchers at Yale and Columbia in collaboration with the World Economic Forum. I built the full data pipeline and led the data analysis work for the 2018 edition.

Which Grades are Better, A’s and C’s, or All B’s? Effects of Variability in Grades on Mock College Admissions Decisions
Woo-kyoung Ahn, Sunnie S. Y. Kim, Kristen Kim, Peter K. McNally
Judgment and Decision Making 2019
paper

We study the effect of negativity bias in human decision making.


Academic Service

Reviewer/Program Committee:
Conferences: ICCV 2021, CVPR 2022, ECCV 2022
Workshops: CVPR 2021 Responsible Computer Vision
Challenges: ML Reproducibility Challenge 2020, 2021 (outstanding reviewer)

Conference Volunteer: NeurIPS 2019, 2020, ICLR 2020, ICML 2020, CVPR 2022

Organizing Committee: NESS NextGen Data Science Day 2018

Teaching & Outreach

I deeply care about increasing diversity and inclusion in STEM. As a woman in STEM who never considered pursuing a career in the field before college, I experienced firsthand the importance of a supportive environment for joining and staying in it. Through outreach, mentorship, and community building, I'm passionate about creating environments where women and other historically underrepresented groups in STEM feel supported and happy :)

Princeton COS 429 Computer Vision
Graduate TA, Fall 2021
Princeton AI4ALL
Research Instructor, Summer 2021
TTI-Chicago Girls Who Code
Facilitator & Instructor, 2019-2020
Yale S&DS Departmental Student Advisory Committee
Co-founding Member, 2017-2019
Yale S&DS 365/565 Data Mining and Machine Learning
Undergraduate TA, Fall 2018
Yale S&DS 230/530 Data Exploration and Analysis
Undergraduate TA, Fall 2017

Website modified from here.