Akhila Yerukola

Hi! I am a second-year Ph.D. student at Carnegie Mellon University's School of Computer Science in the Language Technologies Institute. I am fortunate to be advised by Maarten Sap.

My research interests lie in natural language processing (NLP), with a focus on enhancing natural language generation (NLG) systems with social commonsense and mitigating social biases in language.

I have ~2.8 years of experience as a Senior Research Engineer at the AI Center at Samsung Research America (SRA), where I was advised by Hongxia Jin. Prior to that, I earned my Master's in Computer Science from Stanford University, where I was part of the Stanford NLP Group under the guidance of Chris Manning. I completed my B.Tech. in Computer Science at the National Institute of Technology Tiruchirappalli (NIT Trichy), Tamil Nadu, India.

Email: ayerukol [at] andrew.cmu.edu

News:
[Oct 2023] Two papers accepted to EMNLP 2023!
[Aug 2022] I joined CMU's LTI to pursue my Ph.D.!
[Aug 2019] I joined Samsung Research America as a Research Engineer!

Publications

2023

Don't Take This Out of Context! On the Need for Contextual Models and Evaluations for Stylistic Rewriting

Empirical Methods in Natural Language Processing (EMNLP), 2023.

Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language

Findings of Empirical Methods in Natural Language Processing (EMNLP), 2023.

COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements

Findings of the Association for Computational Linguistics (ACL), 2023.

2022

Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling

Findings of Empirical Methods in Natural Language Processing (EMNLP), 2022.

2021

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics

Association for Computational Linguistics (ACL) Workshop, 2021.

Data Augmentation for Voice-Assistant NLU using BERT-based Interchangeable Rephrase

European Chapter of the Association for Computational Linguistics (EACL), 2021.

2019

Do Massively Pretrained Language Models Make Better Storytellers?

Conference on Computational Natural Language Learning (CoNLL), 2019.