AIBullish · arXiv – CS AI · 6h ago · 7/10
🧠
Gradients with Respect to Semantics Preserving Embeddings Tell the Uncertainty of Large Language Models
Researchers introduce SemGrad, a gradient-based uncertainty quantification method for large language models that operates in semantic space rather than parameter space, eliminating the computational overhead of sampling-based approaches. The method measures output stability under semantically equivalent input perturbations to gauge LLM confidence, addressing the critical challenge of hallucinations in free-form text generation.
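The core intuition — that a model answering consistently across semantically equivalent rephrasings is more confident than one whose output drifts with surface form — can be sketched in a few lines. This is a simplified stability proxy, not the paper's actual gradient computation; `embed` is a toy stand-in for a real sentence embedder, and the two lambda "models" are hypothetical:

```python
import numpy as np

def embed(text, dim=16):
    # Toy stand-in for a real sentence embedder (e.g. an SBERT model);
    # hashes character bigrams into a fixed-size vector. Illustrative only.
    v = np.zeros(dim)
    for a, b in zip(text, text[1:]):
        v[(ord(a) * 31 + ord(b)) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def stability_uncertainty(model, paraphrases):
    """Score uncertainty as the dispersion of model outputs over
    semantically equivalent inputs: high spread in embedding space
    suggests low confidence, tight clustering suggests high confidence."""
    outs = np.stack([embed(model(p)) for p in paraphrases])
    centroid = outs.mean(axis=0)
    # Mean distance of each output embedding from the centroid.
    return float(np.mean(np.linalg.norm(outs - centroid, axis=1)))

# Hypothetical "model" that answers identically however the question
# is phrased (stable) ...
stable = lambda p: "The capital of France is Paris."
# ... versus one whose answer tracks the surface form of the prompt.
unstable = lambda p: "Answer: " + p[::-1]

qs = ["What is the capital of France?",
      "France's capital city is which?",
      "Name the capital of France."]

print(stability_uncertainty(stable, qs) < stability_uncertainty(unstable, qs))
# → True: the stable model scores strictly lower uncertainty
```

The paper's contribution, per the summary, is doing this without repeated sampling — via gradients with respect to semantics-preserving embeddings — whereas this sketch brute-forces the same signal by querying on explicit paraphrases.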