CV
Basics
| Name | Bowen Wei |
| Label | Ph.D. Student in Computer Science |
| Email | bwei2@gmu.edu |
| Phone | +1 (434) 254-9053 |
| Url | https://weibowen555.github.io/ |
| Summary | Ph.D. student in Computer Science at George Mason University advised by Prof. Ziwei Zhu. My research focuses on trustworthy and interpretable AI for large language models, combining prototype-based, symbolic, and explanation-driven methods with reinforcement learning to enable verifiable, evidence-grounded reasoning. |
Education
- 2023.08 - 2028.05, Fairfax, VA
  Ph.D. in Computer Science, George Mason University
  - Trustworthy and Interpretable AI
  - LLM Explainability
  - LLM Agents
- 2021.08 - 2023.05, Charlottesville, VA
  M.Sc., University of Virginia
- 2016.09 - 2021.06, Xi'an, China
Publications
- 2025.11.10 CAAC: Confidence-Aware Attention Calibration to Reduce Hallucinations in Large Vision-Language Models
  WACV 2026
  Presents an attention calibration method that mitigates hallucinations in large vision-language models by accounting for model confidence.
- 2025.11.08 Making Sense of LLM Decisions: A Prototype-based Framework for Explainable Classification
  AAAI 2026 (Oral)
  Develops a prototype-based framework to explain classification decisions in large language models with transparent, interpretable reasoning.
- 2025.09.22 CORTEX: Collaborative LLM Agents for High-Stakes Alert Triage
  NeurIPS 2025 LAW Workshop
  Proposes a collaborative LLM-agent framework for security operations center alert triage with interpretable decision-making.
- 2025.05.15 ProtoLens: Advancing Prototype Learning for Fine-Grained Interpretability in Text Classification
  ACL 2025 (Main)
  Introduces ProtoLens, improving interpretability in text classification through prototype learning and fine-grained analysis of linguistic features.
- 2025 VIGNETTE: Socially Grounded Bias Evaluation for Vision-Language Models
  In Submission
  Proposes a framework to evaluate and mitigate social bias in vision-language models through grounded scenarios.
- 2024 Neural Symbolic Logical Rule Learner for Interpretable Learning
  ICLR 2026 (Under Review)
  Combines neural representation learning with symbolic logical reasoning for transparent decision boundaries.
- 2023 An Empirical Study of Neural Contextual Bandit Algorithms
  M.Sc. Thesis, University of Virginia
  A comparative analysis of contextual bandit algorithms with neural approximators in recommendation and decision-making tasks.
Projects
- 2025.10 - Present: Evidence-Attribution Reinforcement Learning (EA-RL)
  Develops reinforcement learning methods that reward LLMs for using the correct evidence, not only for producing correct answers.
  - Designs multi-agent distillation with explicit evidence attribution.
  - Builds on CoA for faithful, verifiable single-agent reasoning.
- 2025.10 - Present: NeuroSymbolic Autoencoder for Interpretable Recommendation
  Integrates neural and symbolic reasoning for explainable recommender systems using rule-based latent spaces.
  - Employs a Rule Network as the encoder-decoder for a transparent recommender system.
  - Targets SIGIR 2026 for publication.
Skills
| Machine Learning & AI | Trustworthy AI, Interpretable Models, Prototype Learning, Reinforcement Learning, Neuro-Symbolic Learning, Large Language Models |
Work
- 2025.06 - 2025.08 GenAI Engineer Intern, GoEngage
  Implemented semantic retrieval and built an agentic chatbot that interfaces with APIs to generate analytical reports.
  - Implemented semantic search outperforming keyword baselines.
  - Built a reasoning-capable chatbot integrated with backend APIs.
- 2025.06 - 2025.08 AI Agents Developer Intern, Fluency Security
  Developed multi-agent LLM systems for SOC alert triage; designed the CORTEX architecture for explainable collaboration among LLM agents.
  - Proposed and built the CORTEX multi-agent LLM framework for alert triage.
  - Curated an analyst-trace dataset (10+ scenarios) with tool outputs.
  - Improved actionable F1 to 0.78 (+0.12) and reduced the false-positive rate to 14.2%.
  - Paper accepted to the NeurIPS 2025 LAW Workshop.
Languages
| English | Fluent |
| Chinese | Native |
Interests
| Research in Explainable AI | LLM Interpretability, Prototype-Based Reasoning, Neural-Symbolic Learning, Evidence-Grounded RL |