Geetanjali Bihani

PhD Candidate

Purdue University

I am a Ph.D. candidate at Purdue University, working on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. As an AI researcher dedicated to developing trustworthy NLP approaches for online safety, I am passionate about creating solutions that combine structured knowledge representations, model calibration, and robust neural architectures. My goal is to advance scalable and reliable AI safety mechanisms for risk mitigation in high-stakes online environments, where subtle, implicit harms go unnoticed more often than explicit harmful behaviors.

Currently, I am completing my Ph.D. dissertation on methodologies for detecting covert harmful behaviors, particularly grooming attempts, in online communications using large language models. My work addresses fundamental challenges in model reliability and semantic understanding through two complementary approaches:

Model Calibration for Detecting Subtle Signals

I investigate techniques to improve LLM calibration during training and fine-tuning, specifically targeting the identification of subtle linguistic patterns characteristic of grooming behaviors. This work aims to develop models that accurately detect implicit and covert harmful content while maintaining appropriate confidence in their assessments.

Structured Knowledge for Context-Aware Detection

My second research area explores methods for incorporating structured knowledge about communication patterns, and the heuristics through which they frequently surface, into computational models to better recognize harmful language. I aim to build systems capable of improved “social reasoning,” particularly in the context of entrapment situation models.

For more information about my work, please see my Google Scholar profile.

Recent Updates

  • Mar 2025 - Received A.H. Ismail Interdisciplinary Graduate Degree Grant from Purdue University!
  • Mar 2025 - Received Graduate Student Travel Grant from Purdue Polytechnic Institute!
  • Jan 2025 - Delivered an ignite talk, “Bridging the Gap: Advancing AI for Detecting Covert Online Harms,” in the Digital and Social Media Track at HICSS-58
  • Dec 2024 - Paper on “Examining Language Model’s Behavior with Occupation Attributes” accepted to COLING 2025 - paper
  • Aug 2024 - Paper on “The Reliability Paradox: Exploring How Shortcut Learning Undermines Language Model Calibration” accepted to HICSS-58 - paper

Contact