I am a Ph.D. candidate at Purdue University, working on Natural Language Understanding with Dr. Julia Taylor Rayz at the AKRaNLU Lab. As an AI researcher dedicated to developing trustworthy NLP approaches for online safety, I’m passionate about creating solutions that combine structured knowledge representations, model calibration, and robust neural architectures. My goal is to advance scalable and reliable AI safety mechanisms for risk mitigation in high-stakes online environments, where subtle and implicit harms go unnoticed more often than explicit harmful behaviors.
Currently, I am finishing my Ph.D. dissertation on methodologies for detecting covert harmful behaviors, particularly grooming attempts, in online communications using large language models. My work addresses fundamental challenges in model reliability and semantic understanding through two complementary approaches:
Model Calibration for Detecting Subtle Signals
I investigate techniques to improve LLM calibration during training and fine-tuning, specifically targeting the identification of subtle linguistic patterns characteristic of grooming behaviors. This work aims to develop models that can accurately detect implicit and covert harmful content while maintaining appropriate confidence levels in their assessments.
Structured Knowledge for Context-Aware Detection
My second research area explores methods for incorporating structured knowledge about communication patterns, and the heuristics through which they frequently surface, into computational models so they can better recognize harmful language. I work toward systems capable of improved “social reasoning”, particularly in the context of entrapment situation models.
For more information about my work, please check out my Google Scholar profile.
Recent Updates