Akhila Yerukola

Hi! I am a Ph.D. student at Carnegie Mellon University's School of Computer Science in the Language Technologies Institute. I am fortunate to be advised by Maarten Sap. I am a recipient of the 2025-2026 K&L Gates Presidential Fellowship.

My research focuses on enhancing the reasoning and generation capabilities of AI systems across various contexts. The term "context" is often overloaded and can encompass local, global, and cultural factors. My goal is to improve how diverse groups of real-world users interact with technology.

Underspecification (Pragmatics): Human communication often relies on unstated assumptions and implicit contextual cues. I'm interested in improving how AI systems infer these underlying user intentions and handle ambiguity to support effective communication.

Cross‑Cultural Safety and Understanding: Communication patterns, non-verbal cues, and social norms vary significantly across cultures. I'm interested in developing culturally contextual safety guardrails that enhance sensitivity and awareness of these nuanced cultural factors across multiple modalities.


Previously, I spent ~3 years working on improving fine-grained natural language understanding (NLU) for Bixby as a Senior Research Engineer at the AI Center in Samsung Research America (SRA), where I was advised by Hongxia Jin. Prior to that, I earned my Master's in Computer Science from Stanford University, where I was part of the Stanford NLP Group under the guidance of Chris Manning. I completed my B.Tech in Computer Science at the National Institute of Technology Tiruchirappalli (NIT Trichy), Tamil Nadu, India.

Email: ayerukol [at] andrew.cmu.edu

News:

[Aug 2025] Honored to be a recipient of the K&L Gates Presidential Fellowship!
[May 2025] Mind the Gesture has been accepted to ACL 2025 main!
[Jan 2025] NormAd has been accepted to NAACL 2025 main!
[May 2024] New work at ACL 2024: Generative Evaluation of Non-Literal Intent Resolution (main), NormAd (C3NLP)
[Aug 2023] Two papers accepted to EMNLP 2023: Contextual Models and Evaluations (main), Counter Strategies for Stereotypes (findings)
[Aug 2022] I joined CMU's LTI to pursue my PhD!
[Aug 2019] I joined Samsung Research America as a Research Engineer!

Publications

2025

Mind the Gesture: Evaluating AI Sensitivity to Culturally Offensive Non-Verbal Gestures

Association for Computational Linguistics (ACL), 2025.

Words Like Knives: Backstory-Personalized Modeling and Detection of Violent Communication

under review.

Out of Style: RAG's Fragility to Linguistic Variation

under review.

PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages

under review.

Interactive Agents to Overcome Ambiguity in Software Engineering

under review.

NormAd: A Framework for Measuring the Cultural Adaptability of Large Language Models

Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL), 2025.

2024

Is the Pope Catholic? Yes, the Pope is Catholic. Generative Evaluation of Non-Literal Intent Resolution in LLMs

Association for Computational Linguistics (ACL), 2024.

2023

Don't Take This Out of Context! On the Need for Contextual Models and Evaluations for Stylistic Rewriting

Empirical Methods in Natural Language Processing (EMNLP), 2023.

Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language

Findings of Empirical Methods in Natural Language Processing (EMNLP), 2023.

COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements

Findings of the Association for Computational Linguistics (ACL), 2023.

2022

Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling

Findings of Empirical Methods in Natural Language Processing (EMNLP), 2022.

2021

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics

Association for Computational Linguistics (ACL) Workshop, 2021.

Data Augmentation for Voice-Assistant NLU using BERT-based Interchangeable Rephrase

European Chapter of the Association for Computational Linguistics (EACL), 2021.

2019

Do Massively Pretrained Language Models Make Better Storytellers?

Conference on Computational Natural Language Learning (CoNLL), 2019.