Akhila Yerukola

Hi! I am a second-year Ph.D. student at Carnegie Mellon University's School of Computer Science in the Language Technologies Institute. I am fortunate to be advised by Maarten Sap.

My research focuses on enhancing the reasoning and generation capabilities of language models across various contexts. The term "context" is often overloaded and can encompass local, global, cultural, and environmental factors. My goal is to improve how diverse groups of real-world users interact with technology.

I have ~2.8 years of experience as a Senior Research Engineer at the AI Center at Samsung Research America (SRA), where I was advised by Hongxia Jin. Prior to that, I earned my Master's in Computer Science from Stanford University, where I was part of the Stanford NLP Group under the guidance of Chris Manning. I completed my B.Tech in Computer Science at the National Institute of Technology Tiruchirappalli (NIT Trichy), Tamil Nadu, India.

Email: ayerukol [at] andrew.cmu.edu

News:
[Aug 2023] 2 papers accepted to EMNLP 2023!
[Aug 2022] I joined CMU's LTI to pursue my PhD!
[Aug 2019] I joined Samsung Research America as a Research Engineer!

Publications

2024

Is the Pope Catholic? Yes, the Pope is Catholic. Generative Evaluation of Intent Resolution in LLMs

Association for Computational Linguistics (ACL), 2024.

NormAd: A Benchmark for Measuring the Cultural Adaptability of Large Language Models

Under submission.

2023

Don't Take This Out of Context! On the Need for Contextual Models and Evaluations for Stylistic Rewriting

Empirical Methods in Natural Language Processing (EMNLP), 2023.

Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language

Findings of Empirical Methods in Natural Language Processing (EMNLP), 2023.

COBRA Frames: Contextual Reasoning About Effects and Harms of Offensive Statements

Findings of Association for Computational Linguistics (ACL), 2023.

2022

Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling

Findings of Empirical Methods in Natural Language Processing (EMNLP), 2022.

2021

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics

Association for Computational Linguistics (ACL) Workshop, 2021.

Data Augmentation for Voice-Assistant NLU using BERT-based Interchangeable Rephrase

European Chapter of the Association for Computational Linguistics (EACL), 2021.

2019

Do Massively Pretrained Language Models Make Better Storytellers?

Conference on Computational Natural Language Learning (CoNLL), 2019.