Uncertainty as Feature Gaps: Epistemic Uncertainty Quantification of LLMs in Contextual Question-Answering

Abstract

We study uncertainty quantification for contextual question answering and propose a principled epistemic uncertainty measure derived from token-level cross-entropy. The framework links uncertainty to semantic feature gaps between the deployed model and an ideal reference model. For contextual QA, we operationalize this gap with context-reliance, context-comprehension, and honesty features, extracted from a small labeled set via top-down interpretability. Across multiple benchmarks, the method outperforms strong unsupervised and supervised baselines, improving the prediction rejection ratio (PRR) while adding negligible inference overhead.
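As a rough illustration of the starting point the abstract describes (not the paper's actual method), the mean token-level cross-entropy of a generated answer can serve as a baseline uncertainty score: higher average negative log-likelihood suggests greater epistemic uncertainty. The per-token probabilities below are hypothetical placeholders for values a deployed model would assign.

```python
import math

def mean_token_cross_entropy(token_probs):
    """Average negative log-likelihood over an answer's tokens.

    Higher values indicate the model was less confident while
    generating the answer; lower values indicate confident decoding.
    """
    if not token_probs:
        raise ValueError("token_probs must be non-empty")
    return sum(-math.log(p) for p in token_probs) / len(token_probs)

# Hypothetical per-token probabilities for two generated answers.
confident_answer = [0.90, 0.95, 0.85]
uncertain_answer = [0.40, 0.30, 0.50]

print(mean_token_cross_entropy(confident_answer))  # low score
print(mean_token_cross_entropy(uncertain_answer))  # high score
```

The paper's contribution is to go beyond this raw score by attributing the cross-entropy to semantic feature gaps; this sketch only shows the token-level quantity the framework builds on.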

Publication
International Conference on Learning Representations (ICLR) 2026
Yavuz Faruk Bakman
PhD Student in Computer Science; Capital One Responsible AI Fellow

My research interests include trustworthy LLMs, continual learning, and federated learning.