We study uncertainty quantification for contextual question answering and propose a principled epistemic uncertainty measure derived from token-level cross-entropy. The framework links uncertainty to a semantic feature gap between the deployed model and an ideal reference model. For contextual QA, we operationalize this gap with context-reliance, context-comprehension, and honesty features, extracted from a small labeled set via top-down interpretability. Across multiple benchmarks, the method outperforms strong unsupervised and supervised baselines, improving Prediction Rejection Ratio (PRR) while adding negligible inference overhead.
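
As a rough illustrative sketch (not the paper's exact definition), a measure of this kind can be written as a token-level cross-entropy gap between the deployed model and an ideal reference model; the symbols $q_\theta$ (deployed model), $p^\ast$ (reference model), $U$, and $T$ below are placeholders rather than the paper's notation:

$$
U(x, y) \;=\; \frac{1}{T}\sum_{t=1}^{T}\Big[\log p^\ast(y_t \mid y_{<t}, x) \;-\; \log q_\theta(y_t \mid y_{<t}, x)\Big],
$$

i.e., the excess token-level cross-entropy the deployed model incurs relative to the reference. Under this reading, the semantic features stand in as proxies for the unobservable reference-model term.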