Washington DC-Baltimore Area
I am an incoming PhD researcher focused on solving the 'Generalization Crisis' in media forensics. My research quantifies why state-of-the-art detectors fail on unseen, diffusion-based threats, often suffering a catastrophic collapse to roughly 50% (chance-level) accuracy.

The Mission: I lead the development of the 'Forensic Triad', a framework that shifts the paradigm from simple artifact hunting to semantic and physical reasoning (Attribution, Localization, Explanation).

Current Focus: Building robust, containerized evaluation suites in PyTorch and Docker to stress-test the vulnerabilities of generative AI systems. I am especially interested in multimodal anomaly detection and the integration of signal processing theory with foundation models.

Background: MS in Data Analytics (Clark University) and a BE in Electronics & Communications. I treat deepfakes as signal anomalies, leveraging my engineering foundation to build trustworthy, explainable AI defense systems.
• Built end-to-end data and statistical pipelines (Python, SQL) for large-scale voter engagement and turnout analysis.
• Applied regression and multivariate analysis to uncover patterns across demographic segments and inform data-driven civic strategies.
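The regression work above can be sketched in miniature. The data here is synthetic and the feature names (age, income, turnout) are placeholders for illustration, not the actual voter dataset or pipeline:

```python
import numpy as np

# Synthetic stand-in for demographic features and a turnout outcome.
rng = np.random.default_rng(42)
n = 500
age = rng.uniform(18, 90, n)
income = rng.normal(55, 15, n)  # thousands, synthetic

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n), age, income])

# Simulated turnout with known effects plus noise.
turnout = 0.2 + 0.005 * age + 0.002 * income + rng.normal(0, 0.05, n)

# Ordinary least squares fit via numpy's least-squares solver.
coef, *_ = np.linalg.lstsq(X, turnout, rcond=None)
intercept, age_effect, income_effect = coef
# The recovered age_effect and income_effect land close to the
# simulated 0.005 and 0.002, illustrating how per-segment effects
# are estimated from the fitted coefficients.
```

In a multivariate setting, the same pattern extends by adding more feature columns to the design matrix and reading each fitted coefficient as that segment's marginal effect.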
Research Focus: Adversarial Robustness in Generative AI (Deepfakes)
Advisor: Dr. Faisal Quader
• Publication: Lead author of "The Generalization Crisis and the Forensic Triad," accepted for publication in Springer Lecture Notes in Networks and Systems (ICICT 2026).
• Benchmarking: Engineered a Python evaluation pipeline to stress-test SOTA detectors, finding that legacy models operate at near-random accuracy (~50%) on diffusion-generated data.
• Framework Design: Formalized the "Phase III: Semantic-Physical" detection approach, shifting focus from pixel-level artifacts to physical consistency violations.
• Project Conclusion: Concluded the research engagement with the acceptance of the survey paper at ICICT 2026 (London).
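The benchmarking finding above can be illustrated with a minimal, hypothetical harness: a detector whose learned artifact cues do not transfer to diffusion data behaves like a coin flip, landing at chance-level accuracy. All names and data here are stand-ins, not the actual evaluation pipeline:

```python
import numpy as np

def evaluate_detector(predict, images, labels):
    """Return accuracy of a binary real/fake detector on a labeled set."""
    preds = np.array([predict(x) for x in images])
    return float((preds == np.array(labels)).mean())

# Hypothetical stand-in for a legacy GAN-era detector: on out-of-
# distribution diffusion data its outputs are effectively
# uncorrelated with the true labels, i.e. random guessing.
rng = np.random.default_rng(0)
legacy_detector = lambda x: int(rng.random() < 0.5)

images = [None] * 10_000                 # placeholder inputs
labels = rng.integers(0, 2, 10_000)      # 0 = real, 1 = fake
acc = evaluate_detector(legacy_detector, images, labels)
# acc hovers near 0.5 (chance level), the "generalization crisis"
# failure mode the benchmark quantifies.
```

A real harness would swap the lambda for a trained PyTorch model and the placeholders for a held-out diffusion-generated test set; the accuracy computation itself is unchanged.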
• Designing an adversarially robust OCR framework for highly degraded, multilingual documents, with a focus on explainability and real-world robustness.
• Reviewing 45+ research papers to identify gaps in state-of-the-art OCR methods and map technical constraints in recognition and layout understanding.
• Implementing and evaluating transformer-based and attention-driven architectures for robust document analysis in Python.
• Developed machine-learning-based user segmentation and behavior models in Python to improve targeting and personalization.
• Automated large-scale data preprocessing and analytics workflows (Python, Pandas, SQL), improving reproducibility and scalability across dozens of datasets.
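As a sketch of the kind of reusable preprocessing step described above (the function and the column names are hypothetical, not the production workflow): applying one deterministic cleaning function across every dataset is what makes the results reproducible.

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """One reproducible cleaning pass: normalize column names,
    drop exact duplicate rows, and fill numeric gaps with
    column medians."""
    out = df.copy()
    out.columns = [c.strip().lower().replace(" ", "_") for c in out.columns]
    out = out.drop_duplicates()
    num_cols = out.select_dtypes("number").columns
    out[num_cols] = out[num_cols].fillna(out[num_cols].median())
    return out

# Invented toy dataset standing in for one of many raw extracts.
raw = pd.DataFrame({"User ID": [1, 1, 2], "Session Time": [3.0, 3.0, None]})
clean = preprocess(raw)
# clean has snake_case columns, the duplicate row removed,
# and the missing session time imputed with the median.
```

Because the same function runs over every dataset, any two analysts rerunning the workflow get byte-identical cleaned tables, which is the reproducibility claim above in practice.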
During my internship, I strengthened my programming and data manipulation skills, gaining hands-on experience with SQL and Tableau.
• Developed Python scripts to automate data manipulation, improving processing time by 30%.
• Optimized SQL queries for database management, enhancing data retrieval efficiency.
• Created preliminary data visualizations in Tableau to support data-driven decision-making.
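One standard query optimization of the kind referenced above is adding an index so a filtered lookup becomes an index search rather than a full table scan. A minimal illustration using SQLite's query planner (the table and column names are invented, not the actual database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
cur.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

# Without an index, filtering on customer_id scans every row.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7"
).fetchall()

# An index on the filtered column lets the planner search instead of scan.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7"
).fetchall()

# plan_before reports a SCAN of orders; plan_after reports a
# SEARCH ... USING INDEX idx_orders_customer.
```

The same principle applies to production databases: inspecting the query plan before and after adding an index is how retrieval-efficiency improvements are verified rather than assumed.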