11 articles tagged with #uncertainty-estimation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · 4d ago · 7/10
🧠 Researchers propose Evidential Transformation Network (ETN), a lightweight post-hoc module that converts pretrained models into evidential models for uncertainty estimation without retraining. ETN operates in logit space using sample-dependent affine transformations and Dirichlet distributions, demonstrating improved uncertainty quantification across vision and language benchmarks with minimal computational overhead.
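The logit-to-Dirichlet idea can be illustrated with a minimal sketch. This is not the paper's code: the fixed scalars `a` and `b` stand in for ETN's learned, sample-dependent affine parameters, and the vacuity formula is the standard evidential-deep-learning uncertainty measure.

```python
import numpy as np

def evidential_from_logits(logits, a=1.0, b=0.0):
    # Map logits to non-negative Dirichlet evidence via an affine transform.
    evidence = np.exp(a * np.asarray(logits, float) + b)
    alpha = evidence + 1.0                   # Dirichlet concentration parameters
    prob = alpha / alpha.sum()               # expected class probabilities
    uncertainty = alpha.size / alpha.sum()   # vacuity: near 1 when evidence is scarce
    return prob, uncertainty

p, u = evidential_from_logits([2.0, 0.5, -1.0])
```

Because the transform acts only on logits, it can wrap any pretrained classifier head without touching its weights.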
AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠 Researchers propose a three-stage pipeline to train Large Language Models to efficiently provide calibrated uncertainty estimates for their responses. The method uses entropy-based scoring, Platt scaling calibration, and reinforcement learning to enable models to reason about uncertainty without computationally expensive post-hoc methods.
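The Platt-scaling step can be sketched as fitting a two-parameter logistic map from a raw score to a calibrated probability of correctness. The gradient-descent fit and the use of negative entropy as the raw score are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def platt_scale(scores, labels, lr=0.1, steps=2000):
    # Fit p(correct) = sigmoid(a * score + b) on held-out
    # (score, correctness) pairs by minimizing log-loss.
    s = np.asarray(scores, float)
    y = np.asarray(labels, float)
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * s + b)))
        g = p - y                       # gradient of log-loss w.r.t. the logit
        a -= lr * (g * s).mean()
        b -= lr * g.mean()
    return a, b

# Toy example: negative entropy as the raw confidence signal.
a, b = platt_scale([-0.1, -0.9, -0.2, -1.2], [1, 0, 1, 0])
calibrated = 1.0 / (1.0 + np.exp(-(a * -0.15 + b)))
```

The two fitted parameters are cheap to store and apply, which is why Platt scaling is a common post-hoc calibration choice.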
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10
🧠 Researchers introduce JANUS, a new AI framework that solves the 'Quadrilemma' in synthetic data generation by achieving high fidelity, logical constraint control, reliable uncertainty estimation, and computational efficiency simultaneously. The system uses Bayesian Decision Trees and a novel Reverse-Topological Back-filling algorithm to guarantee 100% constraint satisfaction while being 128x faster than existing methods.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers have developed Value Flows, a new reinforcement learning method that uses flow-based models to estimate complete return distributions rather than single scalar values. The approach achieves 1.3x improvement in success rates across 62 benchmark tasks by better identifying states with high return uncertainty for improved decision-making.
AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠 Researchers propose TokUR, a framework that enables large language models to estimate uncertainty at the token level during reasoning tasks, allowing LLMs to self-assess response quality and improve performance on mathematical problems. The approach uses low-rank random weight perturbation to generate predictive distributions, demonstrating strong correlation with answer correctness and potential for enhancing LLM reliability.
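The low-rank perturbation idea can be sketched on a toy output head: sample rank-r weight perturbations, recompute per-token softmax distributions, and read uncertainty from the spread across samples. All names, shapes, and the spread-based uncertainty measure here are illustrative assumptions, not TokUR's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def token_uncertainty(W, x, rank=2, sigma=0.1, n_samples=16):
    # Sample low-rank weight perturbations dW = U @ V.T, recompute the
    # per-token softmax, and measure disagreement across the samples.
    samples = []
    for _ in range(n_samples):
        U = sigma * rng.standard_normal((W.shape[0], rank))
        V = rng.standard_normal((W.shape[1], rank))
        z = x @ (W + U @ V.T).T                       # (tokens, vocab) logits
        e = np.exp(z - z.max(axis=-1, keepdims=True))  # stable softmax
        samples.append(e / e.sum(axis=-1, keepdims=True))
    samples = np.stack(samples)                        # (n_samples, tokens, vocab)
    return samples.mean(axis=0), samples.std(axis=0).max(axis=-1)

W = rng.standard_normal((5, 4))   # toy output head: vocab of 5, hidden size 4
x = rng.standard_normal((3, 4))   # hidden states for 3 tokens
mean_probs, per_token_u = token_uncertainty(W, x)
```

Because only a rank-r perturbation is sampled, the cost per sample stays far below perturbing the full weight matrix.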
AI · Bearish · arXiv – CS AI · Mar 26 · 6/10
🧠 Research reveals that RLHF-aligned language models suffer from an 'alignment tax': producing homogenized responses that severely impair uncertainty estimation methods. The study found 40-79% of questions on TruthfulQA generate nearly identical responses, with alignment processes like DPO being the primary cause of this response homogenization.
AI · Bullish · arXiv – CS AI · Mar 12 · 6/10
🧠 Researchers introduce CUPID, a plug-in framework that estimates both aleatoric and epistemic uncertainty in deep learning models without requiring model retraining. The modular approach can be inserted into any layer of pretrained networks and provides interpretable uncertainty analysis for high-stakes AI applications.
AI · Neutral · arXiv – CS AI · Mar 11 · 6/10
🧠 Research reveals that LLMs heavily concentrate their confidence scores on just three round numbers when using standard 0-100 scales, with over 78% of responses showing this pattern. The study demonstrates that using a 0-20 confidence scale significantly improves metacognitive efficiency compared to the conventional 0-100 format.
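The concentration effect is easy to measure: count how many confidence reports land on a handful of round values. A minimal sketch follows; the particular attractor values (0, 50, 100) and the sample data are illustrative assumptions, not the study's findings.

```python
from collections import Counter

def round_number_share(confidences, attractors=(0, 50, 100)):
    # Fraction of 0-100 confidence reports that land on a few round values.
    counts = Counter(confidences)
    return sum(counts[a] for a in attractors) / len(confidences)

share = round_number_share([50, 80, 100, 50, 90, 100, 50, 0])  # -> 0.75
```

A coarser 0-20 scale reduces the number of available round-number attractors, which is one intuition for why it improves metacognitive efficiency.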
AI · Bullish · arXiv – CS AI · Mar 16 · 5/10
🧠 Researchers developed an improved Residual Reinforcement Learning method that uses uncertainty estimation to enhance sample efficiency and work with stochastic base policies. The approach outperformed existing methods in simulation benchmarks and demonstrated successful zero-shot sim-to-real transfer in real-world deployments.
AI · Neutral · MarkTechPost · Mar 10 · 5/10
🧠 This tutorial demonstrates building an advanced AI agent system that incorporates risk-awareness through internal criticism, self-consistency reasoning, and uncertainty estimation. The system evaluates responses across multiple dimensions including accuracy, coherence, and safety while implementing risk-sensitive selection strategies for more reliable decision-making.
AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠 Researchers introduce Uncertainty Structure Estimation (USE), a new preprocessing method for semi-supervised learning that improves model reliability by filtering out low-quality unlabeled data. The approach uses entropy scores and statistical thresholds to identify and remove out-of-distribution samples before training, demonstrating consistent accuracy improvements across imaging and NLP tasks.
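Entropy-threshold filtering can be sketched in a few lines: compute predictive entropy per unlabeled sample and keep only those below a cutoff, treating high-entropy samples as likely out-of-distribution. The threshold value here is illustrative; the paper derives its thresholds statistically.

```python
import numpy as np

def filter_by_entropy(prob_batch, threshold=1.0):
    # Keep samples whose predictive entropy is below the cutoff;
    # high-entropy predictions are treated as likely out-of-distribution.
    p = np.clip(np.asarray(prob_batch, float), 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=-1)
    return entropy < threshold

# A confident prediction passes; a near-uniform one is filtered out.
keep = filter_by_entropy([[0.9, 0.05, 0.05], [0.34, 0.33, 0.33]])
```

Run before training, this acts as a cheap preprocessing pass: no change to the downstream semi-supervised learner is needed.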