y0news

#uncertainty-quantification News & Analysis

36 articles tagged with #uncertainty-quantification. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

36 articles
AI · Bullish · arXiv – CS AI · Mar 37/108

DenoiseFlow: Uncertainty-Aware Denoising for Reliable LLM Agentic Workflows

Researchers introduce DenoiseFlow, a framework that addresses reliability issues in AI agent workflows by managing uncertainty through adaptive computation allocation and error correction. The system achieves 83.3% average accuracy across benchmarks while reducing computational costs by 40-56% through intelligent branching decisions.
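
A minimal sketch of the kind of uncertainty-gated compute allocation such a workflow could use is below; `call_llm` and the agreement-based branching rule are illustrative assumptions, not the paper's exact mechanism.

```python
# Sketch: spend little compute when sampled answers agree, more when they disagree.
# `call_llm` is a hypothetical stand-in for whatever model API the workflow uses.
from collections import Counter
from typing import Callable, List

def run_step(call_llm: Callable[[str], str], prompt: str,
             cheap_samples: int = 3, extra_samples: int = 7,
             agreement_threshold: float = 0.67) -> str:
    answers: List[str] = [call_llm(prompt) for _ in range(cheap_samples)]
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)          # crude uncertainty proxy
    if agreement >= agreement_threshold:
        return top                            # confident: accept the cheap answer
    # Uncertain: branch into a larger sample and re-vote (error correction).
    answers += [call_llm(prompt) for _ in range(extra_samples)]
    return Counter(answers).most_common(1)[0][0]
```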

AI · Bullish · arXiv – CS AI · Mar 36/107

Polynomial Surrogate Training for Differentiable Ternary Logic Gate Networks

Researchers introduce Polynomial Surrogate Training (PST) to enable differentiable ternary logic gate networks, reducing parameters by 2,187x while maintaining performance. The method extends beyond binary logic gates to ternary systems with an UNKNOWN state for uncertainty handling, training 2-3x faster than binary networks.
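
As a rough illustration of what a polynomial surrogate for a ternary gate can look like, the sketch below interpolates a gate's truth table over the values -1 (FALSE), 0 (UNKNOWN), and +1 (TRUE) with Lagrange basis polynomials; this generic construction is an assumption about the approach, not PST's published formulation.

```python
import numpy as np

# Lagrange basis polynomials over the nodes {-1, 0, +1}.
def L(x):
    return np.stack([x * (x - 1) / 2,      # equals 1 at x = -1, 0 at the other nodes
                     1 - x ** 2,           # equals 1 at x =  0
                     x * (x + 1) / 2])     # equals 1 at x = +1

def ternary_gate(a, b, truth_table):
    """Differentiable surrogate: exact on hard ternary inputs,
    smooth for relaxed (continuous) inputs during training."""
    return np.einsum("i...,j...,ij->...", L(a), L(b), truth_table)

# Kleene ternary AND, rows/columns indexed by (-1, 0, +1).
AND = np.array([[-1, -1, -1],
                [-1,  0,  0],
                [-1,  0,  1]], dtype=float)

print(ternary_gate(np.array(1.0), np.array(0.0), AND))   # -> 0.0 (TRUE AND UNKNOWN = UNKNOWN)
print(ternary_gate(np.array(0.3), np.array(0.8), AND))   # smooth value for relaxed inputs
```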

AI · Bullish · arXiv – CS AI · Mar 36/106

CIRCUS: Circuit Consensus under Uncertainty via Stability Ensembles

Researchers introduce CIRCUS, a new method for discovering mechanistic circuits in AI models that addresses uncertainty and brittleness issues in current approaches. The technique creates ensemble attribution graphs and extracts consensus circuits that are 40x smaller while maintaining explanatory power, validated on Gemma-2-2B and Llama-3.2-1B models.
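
A minimal sketch of consensus extraction from an ensemble of attribution graphs: edges are kept only if they survive thresholding in most runs. The specific threshold-and-vote rule here is an assumption, not necessarily the CIRCUS procedure.

```python
import numpy as np

def consensus_circuit(attributions, edge_threshold=0.1, consensus_frac=0.8):
    """attributions: array (n_runs, n_edges) of edge-attribution scores from
    repeated runs (different seeds / prompt perturbations).
    Returns a boolean mask of edges kept in the consensus circuit."""
    selected = np.abs(attributions) >= edge_threshold   # per-run circuits
    stability = selected.mean(axis=0)                   # fraction of runs keeping each edge
    return stability >= consensus_frac

rng = np.random.default_rng(0)
attr = rng.normal(scale=0.05, size=(10, 1000))          # mostly noise edges
attr[:, :20] += 0.5                                     # 20 consistently important edges
mask = consensus_circuit(attr)
print(mask.sum(), "edges kept out of", mask.size)       # consensus circuit is far smaller
```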

AI · Bullish · arXiv – CS AI · Mar 36/108

IDER: IDempotent Experience Replay for Reliable Continual Learning

Researchers propose IDER (Idempotent Experience Replay), a new continual learning method that addresses catastrophic forgetting in neural networks while improving prediction reliability. The approach uses idempotent properties to help AI models retain previously learned knowledge when acquiring new tasks, with demonstrated improvements in accuracy and reduced computational overhead.
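
The sketch below shows only the generic experience-replay backbone such a method builds on (a bounded buffer rehearsed alongside new-task batches); the idempotence-based component that gives IDER its name is not reproduced here.

```python
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Bounded buffer filled by reservoir sampling."""
    def __init__(self, capacity=2000):
        self.capacity, self.data, self.seen = capacity, [], 0
    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        elif random.random() < self.capacity / self.seen:
            self.data[random.randrange(self.capacity)] = (x, y)
    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def replay_step(model, opt, x_new, y_new, buffer, replay_k=32):
    loss = F.cross_entropy(model(x_new), y_new)
    if buffer.data:                           # rehearse old tasks alongside the new one
        x_old, y_old = buffer.sample(replay_k)
        loss = loss + F.cross_entropy(model(x_old), y_old)
    opt.zero_grad(); loss.backward(); opt.step()
    for xi, yi in zip(x_new, y_new):
        buffer.add(xi.detach(), yi.detach())
```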

AI · Bullish · arXiv – CS AI · Mar 26/1010

Uncertainty Quantification for Multimodal Large Language Models with Incoherence-adjusted Semantic Volume

Researchers introduce UMPIRE, a new training-free framework for quantifying uncertainty in Multimodal Large Language Models (MLLMs) across various input and output modalities. The system measures incoherence-adjusted semantic volume of model responses to better detect errors and improve reliability without requiring external tools or additional computational overhead.
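
A minimal sketch of a semantic-volume style uncertainty score over sampled responses is shown below; `embed` is a hypothetical text-embedding function, and the incoherence adjustment that UMPIRE adds on top is omitted.

```python
import numpy as np

def semantic_volume(responses, embed, eps=1e-6):
    """Larger volume = the sampled answers are more spread out semantically,
    i.e. the model is less certain about this input."""
    E = np.stack([embed(r) for r in responses])        # (n, d) embeddings
    E = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize
    E = E - E.mean(axis=0, keepdims=True)              # center
    G = E @ E.T                                        # (n, n) Gram matrix
    # log-det of a regularized Gram matrix as the dispersion ("volume") measure
    _, logdet = np.linalg.slogdet(G + eps * np.eye(len(responses)))
    return logdet
```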

AI · Bullish · arXiv – CS AI · Mar 26/1011

Evidential Neural Radiance Fields

Researchers introduce Evidential Neural Radiance Fields, a new probabilistic approach that enables uncertainty quantification in 3D scene modeling while maintaining rendering quality. The method addresses critical limitations in existing NeRF technology by capturing both aleatoric and epistemic uncertainty from a single forward pass, making neural radiance fields more suitable for safety-critical applications.
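
The sketch below shows a standard deep-evidential-regression head of the kind such a model could attach to its per-point predictions, yielding aleatoric and epistemic uncertainty from a single forward pass; the NeRF volume-rendering integration itself is not shown, and the head design is an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Predicts Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta)."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.out = nn.Linear(hidden_dim, 4)

    def forward(self, h):
        gamma, nu, alpha, beta = self.out(h).chunk(4, dim=-1)
        nu    = F.softplus(nu)                   # > 0
        alpha = F.softplus(alpha) + 1.0          # > 1 so the moments below exist
        beta  = F.softplus(beta)                 # > 0
        aleatoric = beta / (alpha - 1.0)         # E[sigma^2]: noise in the data
        epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]: model ignorance
        return gamma, aleatoric, epistemic
```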

AI · Neutral · arXiv – CS AI · Mar 26/1010

RewardUQ: A Unified Framework for Uncertainty-Aware Reward Models

Researchers introduce RewardUQ, a unified framework for evaluating uncertainty quantification in reward models used to align large language models with human preferences. The study finds that model size and initialization have the most significant impact on performance, while providing an open-source Python package to advance the field.
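
As one example of a baseline a framework like this would evaluate, the sketch below uses an ensemble of reward heads whose disagreement serves as the uncertainty estimate; names and shapes are illustrative, not RewardUQ's API.

```python
import torch
import torch.nn as nn

class EnsembleRewardModel(nn.Module):
    def __init__(self, encoder, hidden_dim, n_heads=5):
        super().__init__()
        self.encoder = encoder                   # shared (prompt, response) encoder
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, 1) for _ in range(n_heads))

    def forward(self, x):
        h = self.encoder(x)                                             # (batch, hidden_dim)
        rewards = torch.cat([head(h) for head in self.heads], dim=-1)   # (batch, n_heads)
        return rewards.mean(dim=-1), rewards.std(dim=-1)                # score, uncertainty
```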

AI · Neutral · arXiv – CS AI · Apr 64/10

Equivariant Evidential Deep Learning for Interatomic Potentials

Researchers developed e²IP, a new framework for uncertainty quantification in machine learning interatomic potentials used in molecular dynamics simulations. The method uses equivariant evidential deep learning to model atomic forces and their uncertainty through symmetric covariance tensors that transform properly under rotations.
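
A minimal check of the equivariance property such a model must satisfy is sketched below: rotating the atomic configuration should rotate the predicted mean force as R·f and the covariance as R·Σ·Rᵀ. The `model` callable is a hypothetical stand-in for the trained potential.

```python
import numpy as np

def check_force_equivariance(model, positions, R, atol=1e-5):
    """positions: (n_atoms, 3); R: (3, 3) rotation matrix.
    model(positions) -> (forces (n, 3), covariances (n, 3, 3), symmetric)."""
    f, cov = model(positions)
    f_rot, cov_rot = model(positions @ R.T)        # predict on the rotated configuration
    ok_force = np.allclose(f_rot, f @ R.T, atol=atol)                       # f -> R f
    ok_cov = np.allclose(cov_rot,
                         np.einsum("ij,njk,lk->nil", R, cov, R), atol=atol) # Sigma -> R Sigma R^T
    return ok_force and ok_cov
```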

AI · Neutral · arXiv – CS AI · Mar 174/10

Informative Perturbation Selection for Uncertainty-Aware Post-hoc Explanations

Researchers introduce EAGLE, a new framework for explaining black-box machine learning models using information-theoretic active learning to select optimal data perturbations. The method produces feature importance scores with uncertainty estimates and demonstrates improved explanation reproducibility and stability compared to existing approaches like LIME.
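
A minimal sketch of information-driven perturbation selection for a LIME-style local surrogate: a Bayesian linear model over feature masks is maintained, and the next perturbation sent to the black box is the candidate with the largest posterior predictive variance. This generic active-learning construction is an assumption, not necessarily EAGLE's exact criterion.

```python
import numpy as np

def select_next_perturbation(X_seen, X_candidates, alpha=1.0, noise=0.1):
    """X_seen: (n, d) masks already evaluated; X_candidates: (m, d) unqueried masks.
    Returns the index of the candidate the surrogate is least certain about."""
    d = X_seen.shape[1]
    A = alpha * np.eye(d) + (X_seen.T @ X_seen) / noise ** 2   # posterior precision
    S = np.linalg.inv(A)                                       # posterior covariance of weights
    pred_var = np.einsum("md,de,me->m", X_candidates, S, X_candidates)
    return int(np.argmax(pred_var))                            # most informative perturbation

def feature_importance(X_seen, y_seen, alpha=1.0, noise=0.1):
    """Posterior mean and standard deviation of the surrogate weights,
    i.e. importance scores with uncertainty estimates."""
    d = X_seen.shape[1]
    A = alpha * np.eye(d) + (X_seen.T @ X_seen) / noise ** 2
    S = np.linalg.inv(A)
    mean = S @ X_seen.T @ y_seen / noise ** 2
    return mean, np.sqrt(np.diag(S))
```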

AI · Neutral · arXiv – CS AI · Mar 34/103

DAWN-FM: Data-Aware and Noise-Informed Flow Matching for Solving Inverse Problems

Researchers introduce DAWN-FM, a new AI method using Flow Matching to solve inverse problems in fields like medical imaging and signal processing. The approach incorporates data and noise embedding to provide robust solutions even with incomplete or noisy observations, outperforming pretrained diffusion models in highly ill-posed scenarios.
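
The sketch below shows a conditional flow-matching training step in which the velocity network is also conditioned on the observed measurement and its noise level, in the spirit of the data and noise embedding described; the straight-line interpolation path and the `v_net` signature are assumptions.

```python
import torch
import torch.nn.functional as F

def flow_matching_step(v_net, x_clean, y_obs, noise_level):
    """v_net(x_t, t, y_obs, noise_level) -> predicted velocity, same shape as x_t."""
    x0 = torch.randn_like(x_clean)                        # source sample (pure noise)
    t = torch.rand(x_clean.shape[0], *([1] * (x_clean.dim() - 1)),
                   device=x_clean.device)                 # per-example time in [0, 1]
    x_t = (1.0 - t) * x0 + t * x_clean                    # straight-line probability path
    target_velocity = x_clean - x0                        # d x_t / d t along that path
    pred = v_net(x_t, t.flatten(), y_obs, noise_level)    # data- and noise-conditioned
    return F.mse_loss(pred, target_velocity)
```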

AI · Neutral · arXiv – CS AI · Mar 34/105

Adaptive Uncertainty-Guided Surrogates for Efficient Phase Field Modeling of Dendritic Solidification

Researchers developed a new AI-powered surrogate model using XGBoost and CNNs to significantly reduce computational costs in phase field simulations for metal solidification processes. The adaptive uncertainty-guided approach achieves accurate predictions while requiring fewer expensive simulations and reducing CO2 emissions in additive manufacturing applications.
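
A minimal sketch of the adaptive, uncertainty-guided loop such a surrogate could use: an ensemble of XGBoost regressors trained on bootstrap resamples, with ensemble disagreement deciding where to spend expensive phase field simulations. `run_phase_field_sim` is a hypothetical stand-in for the full solver, and the CNN branch of the paper's surrogate is not shown.

```python
import numpy as np
from xgboost import XGBRegressor

def active_surrogate_loop(X_init, y_init, X_pool, run_phase_field_sim,
                          n_rounds=5, batch=4, n_members=5):
    X_train, y_train = X_init.copy(), y_init.copy()
    rng = np.random.default_rng(0)
    for _ in range(n_rounds):
        preds = []
        for m in range(n_members):                       # bootstrap ensemble of surrogates
            idx = rng.integers(0, len(X_train), len(X_train))
            model = XGBRegressor(n_estimators=200, random_state=m)
            model.fit(X_train[idx], y_train[idx])
            preds.append(model.predict(X_pool))
        uncertainty = np.std(preds, axis=0)              # ensemble disagreement
        pick = np.argsort(uncertainty)[-batch:]          # most uncertain candidates
        y_new = np.array([run_phase_field_sim(x) for x in X_pool[pick]])
        X_train = np.vstack([X_train, X_pool[pick]])     # run the expensive sim only there
        y_train = np.concatenate([y_train, y_new])
        X_pool = np.delete(X_pool, pick, axis=0)
    return X_train, y_train
```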

โ† PrevPage 2 of 2