(pronounced: Geet-aan-juh-lee)
I work on problems at the intersection of Natural Language Processing, Trust & Safety, and Explainable AI. I want to understand how language models process semantic uncertainty in discourse and how this affects the reliability of model decisions.
I recently joined Microsoft as a Senior Applied Scientist! Prior to this, I was a Visiting Scholar in the School of Applied and Creative Computing at Purdue, working on computational modeling of exploitative discourse. I also maintain active research collaborations with the AKRaNLU Lab and the GAURD research group.
I completed my Ph.D. in Summer 2025, where I studied how language models perform on the task of detecting online child grooming. I also explored broader challenges in language model reasoning, including bias amplification from alignment, Clever Hans phenomena, and word sense disambiguation.
You can learn more about my work on my Google Scholar profile.
Recent Updates