I'm a 5th-year Ph.D. student in AI at UC Berkeley, co-advised by Trevor Darrell and Joseph Gonzalez as part of the BAIR and Sky labs.
Currently, I'm also a visiting researcher at FAIR within Meta, working with Kate Saenko.
Before coming to Berkeley, I obtained my B.S. in computer science at Cornell University.
I work on improving the reliability of vision-and-language models: flagging when multimodal models might be wrong, correcting their outputs, and addressing their biases.
Being in tech, I stereotypically enjoy climbing, hiking, and woodworking. In recent years, I've survived a climbing accident, a nighttime bear encounter, and Reviewer #2s. These have earned me metal screws in my ankle, a fear of Berkeley's mascot, and the papers below.
CLAIR: Evaluating Image Captions with Large Language Models
David M. Chan, Suzanne Petryk, Joseph E. Gonzalez, Trevor Darrell, John Canny
Simple Token-Level Confidence Improves Caption Correctness
Suzanne Petryk, Spencer Whitehead, Joseph E. Gonzalez, Trevor Darrell, Anna Rohrbach, Marcus Rohrbach
ICCV 2023 CLVL Workshop (Oral), WACV 2024
Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly
Spencer Whitehead*, Suzanne Petryk*, Vedaad Shakib, Joseph E. Gonzalez, Trevor Darrell, Anna Rohrbach, Marcus Rohrbach
On Guiding Visual Attention with Language Specification
Suzanne Petryk*, Lisa Dunlap*, Keyan Nasseri, Joseph E. Gonzalez, Trevor Darrell, Anna Rohrbach
Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting
Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E. Gonzalez, Marcus Rohrbach, Trevor Darrell