y0news

#information-theory News & Analysis

15 articles tagged with #information-theory. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · 6d ago · 7/10
🧠

Information as Structural Alignment: A Dynamical Theory of Continual Learning

Researchers introduce the Informational Buildup Framework (IBF), a new approach to continual learning that eliminates catastrophic forgetting by treating information as structural alignment rather than stored parameters. The framework demonstrates superior performance across multiple domains including chess and image classification, achieving near-zero forgetting without requiring raw data replay.

AI · Bearish · arXiv – CS AI · Apr 7 · 7/10
🧠

Incompleteness of AI Safety Verification via Kolmogorov Complexity

Researchers prove a fundamental theoretical limit in AI safety verification using Kolmogorov complexity theory. They demonstrate that no finite formal verifier can certify all policy-compliant AI instances of arbitrarily high complexity, revealing intrinsic information-theoretic barriers beyond computational constraints.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

Uncertainty Quantification and Data Efficiency in AI: An Information-Theoretic Perspective

This research review examines methodologies for addressing AI systems' challenges with limited training data through uncertainty quantification and synthetic data augmentation. The paper presents formal approaches including Bayesian learning frameworks, information-theoretic bounds, and conformal prediction methods to improve AI performance in data-scarce environments like robotics and healthcare.
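The summary does not give the paper's formulation, but the conformal prediction idea it mentions can be illustrated generically. The sketch below is a minimal split conformal procedure (not the paper's method): calibration residuals from a held-out set turn any point forecast into an interval with roughly the requested coverage. The function name and toy data are illustrative assumptions.

```python
import numpy as np

def conformal_interval(cal_residuals, y_pred, alpha=0.1):
    """Split conformal prediction: wrap a point forecast in an
    interval with ~(1 - alpha) marginal coverage, using absolute
    residuals measured on a held-out calibration set."""
    n = len(cal_residuals)
    # Finite-sample-corrected quantile level for valid coverage.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(cal_residuals, min(q_level, 1.0), method="higher")
    return y_pred - q, y_pred + q

# Toy usage: residuals from a calibration split, one new prediction.
rng = np.random.default_rng(0)
residuals = np.abs(rng.normal(0.0, 1.0, size=200))
lo, hi = conformal_interval(residuals, y_pred=5.0, alpha=0.1)
```

The appeal in data-scarce settings is that the coverage guarantee is distribution-free: it needs only exchangeable calibration data, not a correct model.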

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10
🧠

A Geometrically-Grounded Drive for MDL-Based Optimization in Deep Learning

Researchers introduce a novel optimization framework that integrates the Minimum Description Length (MDL) principle directly into deep neural network training dynamics. The method uses geometrically-grounded cognitive manifolds with coupled Ricci flow to create autonomous model simplification while maintaining data fidelity, with theoretical guarantees for convergence and practical O(N log N) complexity.
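The MDL principle behind this work scores a hypothesis by total code length: bits to describe the model plus bits to describe the data given the model. As a hedged toy illustration (not the paper's Ricci-flow method), the sketch below uses a crude two-part code to pick a polynomial degree; the bit-cost constants are assumptions.

```python
import numpy as np

def two_part_mdl(x, y, degree, bits_per_param=32):
    """Toy two-part MDL score: model cost (bits to encode the
    coefficients) plus data cost (Gaussian code length of the
    residuals, in bits). Lower is better."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = max(resid.var(), 1e-12)
    model_bits = bits_per_param * (degree + 1)
    # Differential code length; can be negative for continuous
    # data, but only differences between models matter.
    data_bits = 0.5 * len(y) * np.log2(2 * np.pi * np.e * sigma2)
    return model_bits + data_bits

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 100)
y = 2 * x + 0.1 * rng.normal(size=100)  # genuinely linear data
scores = {d: two_part_mdl(x, y, d) for d in (1, 5, 15)}
```

On linear data the degree-1 fit wins: higher degrees barely shrink the residual cost but pay heavily in model bits, which is the "autonomous simplification" pressure MDL provides.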

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

Emergent Coordination in Multi-Agent Language Models

Researchers developed an information-theoretic framework to measure when multi-agent AI systems exhibit coordinated behavior beyond individual agents. The study found that specific prompt designs can transform collections of AI agents into coordinated collectives that mirror human group intelligence principles.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠

Modality Collapse as Mismatched Decoding: Information-Theoretic Limits of Multimodal LLMs

Researchers identified a fundamental limitation in multimodal LLMs where decoders trained on text cannot effectively utilize non-text information like speaker identity or visual textures, despite this information being preserved through all model layers. The study demonstrates this 'modality collapse' is due to decoder design rather than encoding failures, with experiments showing targeted training can improve specific modality accessibility.

AI · Neutral · arXiv – CS AI · Mar 27 · 6/10
🧠

The Information Dynamics of Generative Diffusion

Researchers present a unified theoretical framework for understanding generative diffusion models by connecting information theory, dynamics, and thermodynamics. The study reveals that diffusion generation operates as controlled noise-induced symmetry breaking, where the score function regulates information flow from noise to structured data.

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

Understanding Reasoning in LLMs through Strategic Information Allocation under Uncertainty

Researchers developed an information-theoretic framework to explain 'Aha moments' in large language models during reasoning tasks. The study reveals that strong reasoning performance stems from uncertainty externalization rather than specific tokens, decomposing LLM reasoning into procedural information and epistemic verbalization.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

InfoPO: Information-Driven Policy Optimization for User-Centric Agents

Researchers introduce InfoPO (Information-Driven Policy Optimization), a new method that improves AI agent interactions by using information-gain rewards to identify valuable conversation turns. The approach addresses credit assignment problems in multi-turn interactions and outperforms existing baselines across diverse tasks including intent clarification and collaborative coding.
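The summary does not specify InfoPO's reward, but an information-gain turn reward can be sketched generically: score a dialogue turn by how much it reduces the agent's uncertainty over candidate user intents. Everything below (function names, the belief distributions) is an illustrative assumption, not the paper's formulation.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def info_gain_reward(belief_before, belief_after):
    """Toy information-gain reward for one turn: the drop in
    entropy (bits) of the agent's belief over user intents."""
    return entropy(belief_before) - entropy(belief_after)

# A good clarifying question collapses a uniform belief over
# four intents to near-certainty about one of them.
before = [0.25, 0.25, 0.25, 0.25]
after = [0.85, 0.05, 0.05, 0.05]
reward = info_gain_reward(before, after)
```

Rewarding entropy reduction per turn gives exactly the kind of turn-level credit assignment the summary describes: uninformative turns earn nothing, clarifying ones earn bits.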

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

Beyond Reward: A Bounded Measure of Agent Environment Coupling

Researchers introduce 'bipredictability' as a new metric to monitor reinforcement learning agents in real-world deployments, measuring interaction effectiveness through shared information ratios. The Information Digital Twin (IDT) system detects 89.3% of perturbations versus 44% for traditional reward-based monitoring, with 4.4x faster detection speed.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

Information-Theoretic Framework for Self-Adapting Model Predictive Controllers

Researchers introduced Entanglement Learning (EL), an information-theoretic framework that enhances Model Predictive Control (MPC) for autonomous systems like UAVs. The framework uses an Information Digital Twin to monitor information flow and enable real-time adaptive optimization, improving MPC reliability beyond traditional error-based feedback systems.

AI · Neutral · Lil'Log (Lilian Weng) · Sep 28 · 6/10
🧠

Anatomize Deep Learning with Information Theory

Professor Naftali Tishby applied information theory to analyze deep neural network training, proposing the Information Bottleneck method as a new learning bound for DNNs. His research identified two distinct phases in DNN training: first representing input data to minimize generalization error, then compressing representations by forgetting irrelevant details.
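The Information Bottleneck trades compression against prediction: for a representation T of input X with label Y, it minimizes I(X;T) − β·I(T;Y). A minimal sketch of the two mutual-information terms from joint probability tables (assuming the tables are normalized; this is the objective only, not Tishby's iterative IB algorithm):

```python
import numpy as np

def mutual_info(joint):
    """I(A;B) in bits from a normalized joint table p(a, b)."""
    joint = np.asarray(joint, dtype=float)
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    # Sum p(a,b) * log2(p(a,b) / (p(a) p(b))) over the support.
    return float((joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])).sum())

def ib_objective(p_xt, p_ty, beta):
    """Information Bottleneck Lagrangian I(X;T) - beta * I(T;Y):
    compress the input while keeping the label-relevant bits."""
    return mutual_info(p_xt) - beta * mutual_info(p_ty)

# A deterministic copy T = X over two symbols carries exactly 1 bit.
p_copy = [[0.5, 0.0], [0.0, 0.5]]
p_indep = [[0.25, 0.25], [0.25, 0.25]]
```

The two training phases Tishby identified map onto these terms: I(T;Y) grows during fitting, then I(X;T) shrinks during compression.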

Crypto · Neutral · Ethereum Foundation Blog · Oct 23 · 5/10
⛓️

An Information-Theoretic Account of Secure Brainwallets

The article provides an information-theoretic analysis of brainwallets, which store cryptocurrency funds using private keys generated from memorized passwords. While brainwallets theoretically offer strong security for long-term storage, they remain controversial due to practical implementation challenges and potential vulnerabilities.
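The information-theoretic crux is that a memorized passphrase must carry enough entropy to resist offline brute force of the derived key. A back-of-the-envelope sketch for words drawn uniformly from a wordlist (Diceware-style, 7776 words); note this bound holds only for randomly chosen words, and human-chosen phrases carry far less entropy:

```python
import math

def passphrase_entropy_bits(num_words, wordlist_size=7776):
    """Entropy in bits of a passphrase of words drawn uniformly at
    random from a wordlist (7776 = the standard Diceware list).
    Each word contributes log2(wordlist_size) bits."""
    return num_words * math.log2(wordlist_size)

# How many random words back a 128-bit security target?
bits = {n: passphrase_entropy_bits(n) for n in (4, 6, 10)}
```

At about 12.9 bits per word, ten random words clear 128 bits while six fall well short, which is why short memorable brainwallet phrases are considered practically insecure despite the sound theory.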

AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠

Informative Perturbation Selection for Uncertainty-Aware Post-hoc Explanations

Researchers introduce EAGLE, a new framework for explaining black-box machine learning models using information-theoretic active learning to select optimal data perturbations. The method produces feature importance scores with uncertainty estimates and demonstrates improved explanation reproducibility and stability compared to existing approaches like LIME.

AI · Neutral · arXiv – CS AI · Mar 2 · 5/10
🧠

Artificial Agency Program: Curiosity, compression, and communication in agents

Researchers present the Artificial Agency Program (AAP), a framework for developing AI systems as resource-bounded agents driven by curiosity and learning progress under physical constraints. The program aims to create AI that enhances human capabilities through better sensing, understanding, and action while reducing interface friction between people, tools, and environments.