Do Not Design, Learn: A Trainable Scoring Function for Uncertainty Estimation in Generative LLMs

Figure: Overview of LARS

Abstract

In this work, we introduce the Learnable Response Scoring Function (LARS) for Uncertainty Estimation (UE) in generative Large Language Models (LLMs). Current scoring functions for probability-based UE, such as length-normalized scoring and semantic contribution-based weighting, are designed to solve specific aspects of the problem but exhibit limitations, including the inability to handle biased probabilities and under-performance in low-resource languages like Turkish. To address these issues, we propose LARS, a scoring function that leverages supervised data to capture complex dependencies between tokens and probabilities, thereby producing more reliable and better-calibrated response scores when computing the uncertainty of generations. Our extensive experiments across multiple datasets show that LARS substantially outperforms existing scoring functions across various probability-based UE methods.
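To make the contrast concrete, below is a minimal sketch of length-normalized scoring, one of the fixed baselines the abstract mentions. The token probabilities here are hypothetical stand-ins for an LLM's per-token generation probabilities; LARS itself replaces a fixed rule like this with a model trained on supervised data, which this sketch does not implement.

```python
import math

def length_normalized_score(token_probs):
    """Average log-probability over the generated tokens.

    Dividing by length means longer answers are not penalized
    merely for containing more tokens. Higher is more confident.
    """
    return sum(math.log(p) for p in token_probs) / len(token_probs)

# Two hypothetical generations with per-token probabilities.
short_answer = [0.9, 0.8]
long_answer = [0.9, 0.8, 0.85, 0.9, 0.8]

# Both answers can now be compared on a length-independent scale.
print(length_normalized_score(short_answer))
print(length_normalized_score(long_answer))
```

A fixed rule like this treats every token's probability identically, which is exactly the limitation (e.g., biased probabilities) that a trained scoring function aims to overcome.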

Publication
preprint
Yavuz Faruk Bakman
PhD Student in Computer Science

My research interests include Trustworthy LLMs, Continual Learning, and Federated Learning.