I am a recent Ph.D. graduate from Purdue University, where I worked on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. My research focuses on enhancing the reasoning capabilities of language models by tackling linguistic ambiguity and improving generalization, especially in cases where the structural form (locution) differs from the intended communicative function (illocution), as often seen in manipulative discourse.
During my Ph.D., I explored a range of challenges in language model reasoning: I developed methods to quantify how alignment techniques exacerbate opposing biases, demonstrated that calibrated models can lack generalizability, built methods for sense-enriching contextualized representations, and designed fuzzy models for intent classification, covering both task-based intents and covert malicious intents.
My research takes a two-pronged approach: tackling linguistic ambiguity in how models interpret language, and improving their generalization to new contexts.
For more information about my work, please check out my Google Scholar page.
Recent Updates