Akhila Yerukola

Hi! I am a third-year Ph.D. student at Carnegie Mellon University's School of Computer Science in the Language Technologies Institute. I am fortunate to be advised by Maarten Sap.

My research focuses on enhancing the reasoning and generation capabilities of language models across various contexts. The term "context" is often overloaded and can encompass local, global, and cultural factors. My goal is to improve how diverse groups of real-world users interact with language technologies.

I am currently interested in two research areas:

pragmatics: I'm interested in better interpreting underlying user intentions and requests that go beyond explicit statements.

cross-cultural safety and understanding: I'm interested in studying AI systems' sensitivity to, safety around, and awareness of cultural nuances across multiple modalities, moving beyond the commonly studied issues of coverage and diversity.

I have ~2.8 years of experience as a Senior Research Engineer at the AI Center at Samsung Research America (SRA), where I was advised by Hongxia Jin. Prior to that, I earned my Master's in Computer Science from Stanford University, where I was part of the Stanford NLP Group under the guidance of Chris Manning. I completed my B.Tech in Computer Science at the National Institute of Technology Tiruchirappalli (NIT Trichy), Tamil Nadu, India.

Email: ayerukol [at] andrew.cmu.edu


News:

[May 2024] New work at ACL 2024: Generative Evaluation of Non-Literal Intent Resolution (main), NormAd (C3NLP)
[Aug 2023] Two papers accepted to EMNLP 2023: Contextual Models and Evaluations (main), Counter Strategies for Stereotypes (findings)
[Aug 2022] I joined CMU's LTI to pursue my PhD!
[Aug 2019] I joined Samsung Research America as a Research Engineer!

Publications

2024

Is the Pope Catholic? Yes, the Pope is Catholic. Generative Evaluation of Non-Literal Intent Resolution in LLMs

Association for Computational Linguistics (ACL), 2024.

NormAd: A Framework for Measuring the Cultural Adaptability of Large Language Models

Under submission.

2023

Don't Take This Out of Context! On the Need for Contextual Models and Evaluations for Stylistic Rewriting

Empirical Methods in Natural Language Processing (EMNLP), 2023.

Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language

Findings of Empirical Methods in Natural Language Processing (EMNLP), 2023.

COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements

Findings of Association for Computational Linguistics (ACL), 2023.

2022

Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling

Findings of Empirical Methods in Natural Language Processing (EMNLP), 2022.

2021

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics

Association for Computational Linguistics (ACL) Workshop, 2021.

Data Augmentation for Voice-Assistant NLU using BERT-based Interchangeable Rephrase

European Chapter of the Association for Computational Linguistics (EACL), 2021.

2019

Do Massively Pretrained Language Models Make Better Storytellers?

Conference on Computational Natural Language Learning (CoNLL), 2019.