Bowen Wei
Hello! My name is Bowen Wei, and I am a third-year Ph.D. student in Computer Science at George Mason University. I am fortunate to be advised by Professor Ziwei Zhu.
My research spans trustworthy and interpretable AI and agentic reinforcement learning (RL) for large language models. I develop prototype-based, symbolic, and explanation-driven methods that make model behavior transparent and robust, enabling users to understand and trust AI decisions in high-stakes settings. In parallel, I study RL and post-training techniques that distill multi-agent reasoning into single, verifiable agents, improving reasoning quality, evidence attribution, and causal grounding. Together, these directions aim to advance AI systems that are both interpretable and competent at reasoning.
news
| Date | News |
|---|---|
| Apr 14, 2026 | Two papers accepted to ACL 2026! "VIGNETTE: Socially Grounded Bias Evaluation for Vision-Language Models" as an oral in the Main Conference, and "Context-Aware Decoding for Faithful Vision-Language Generation" in Findings. |
| Nov 08, 2025 | Our paper "Making Sense of LLM Decisions: A Prototype-based Framework for Explainable Classification" has been accepted for an oral presentation at AAAI 2026! |
| May 15, 2025 | Our paper "ProtoLens: Advancing Prototype Learning for Fine-Grained Interpretability in Text Classification" has been accepted to the main conference of ACL 2025! |
selected publications
- [AAAI 2026, Oral] Making Sense of LLM Decisions: A Prototype-based Framework for Explainable Classification. In Proceedings of the AAAI Conference on Artificial Intelligence, 2026.
- [NeurIPS LAW 2025] CORTEX: Collaborative LLM Agents for High-Stakes Alert Triage. Jul 2025.
- [WACV 2026] Mitigating Hallucination in Large Vision-Language Models via Adaptive Attention Calibration. Jul 2025.
- [arXiv] ClawSafety: "Safe" LLMs, Unsafe Agents. Jul 2026.
- [arXiv] A Logical-Rule Autoencoder for Interpretable Recommendations. Jul 2026. (* Equal contribution.)
- [arXiv] KnowBias: Mitigating Social Bias in LLMs via Know-Bias Neuron Enhancement. Jul 2026.
- [arXiv] Neural Symbolic Logical Rule Learner for Interpretable Learning. Jul 2024.
- [ACL 2026, Oral] VIGNETTE: Socially Grounded Bias Evaluation for Vision-Language Models. In Proceedings of the 64th Annual Meeting of the Association for Computational Linguistics, Jul 2026.
- [ACL 2026, Findings] Context-Aware Decoding for Faithful Vision-Language Generation. In Findings of the Association for Computational Linguistics: ACL 2026, Jul 2026.