Bowen Wei


Hello! My name is Bowen Wei, and I am a third-year Ph.D. student in Computer Science at George Mason University. I am fortunate to be advised by Professor Ziwei Zhu.

My research spans trustworthy and interpretable AI and agentic reinforcement learning (RL) for large language models. I develop prototype-based, symbolic, and explanation-driven methods that make model behavior transparent and robust, enabling users to understand and trust AI decisions in high-stakes settings. In parallel, I study RL and post-training techniques that distill multi-agent reasoning into single, verifiable agents, improving reasoning quality, evidence attribution, and causal grounding. Together, these directions aim to advance AI systems that are both interpretable and competent at reasoning.

news

Nov 08, 2025 🎉 Our paper “Making Sense of LLM Decisions: A Prototype-based Framework for Explainable Classification” has been accepted for an oral presentation at AAAI 2026!
May 15, 2025 🎉 Our paper “ProtoLens: Advancing Prototype Learning for Fine-Grained Interpretability in Text Classification” has been accepted to the main conference at ACL 2025!

selected publications

  1. AAAI 2026 Oral
    Learning to Explain: Prototype-Based Surrogate Models for LLM Classification
    Bowen Wei, Mehrdad Fazli, and Ziwei Zhu
    2025
  2. ACL 2025 Main
    ProtoLens: Advancing Prototype Learning for Fine-Grained Interpretability in Text Classification
    Bowen Wei and Ziwei Zhu
    In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Jul 2025
  3. NeurIPS LAW 2025
    CORTEX: Collaborative LLM Agents for High-Stakes Alert Triage
    Bowen Wei, Yuan Shen Tay, Howard Liu, and 4 more authors
    Jul 2025
  4. WACV 2026
    Mitigating Hallucination in Large Vision-Language Models via Adaptive Attention Calibration
    Mehrdad Fazli, Bowen Wei, Ahmet Sari, and 1 more author
    Jul 2025
  5. arXiv
    Neural Symbolic Logical Rule Learner for Interpretable Learning
    Bowen Wei and Ziwei Zhu
    Jul 2024
  6. arXiv
    VIGNETTE: Socially Grounded Bias Evaluation for Vision-Language Models
    Chahat Raj, Bowen Wei, Aylin Caliskan, and 2 more authors
    Jul 2025