y0news

AI × Crypto News Feed

Real-time AI-curated news from 34,840+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

One for All: A Non-Linear Transformer can Enable Cross-Domain Generalization for In-Context Reinforcement Learning

Researchers propose a non-linear transformer architecture that enables reinforcement learning agents to generalize across different domains through in-context learning, establishing a theoretical connection between transformers and kernel-based temporal difference learning. By interpreting transformers as operators in Reproducing Kernel Hilbert Space, the work demonstrates that value functions from diverse domains can share a unified weight set, with MetaWorld experiments validating the approach.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

Internalizing Safety Understanding in Large Reasoning Models via Verification

Researchers propose Safety Internal (SInternal), a framework that trains large reasoning models to verify the safety of their own outputs rather than relying on external compliance mechanisms. The approach demonstrates that models can internalize safety understanding through verification tasks, significantly improving robustness against adversarial jailbreaks and out-of-domain attacks.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

Open Ontologies: Tool-Augmented Ontology Engineering with Stable Matching Alignment

Open Ontologies is an open-source Rust-based system that combines LLM-driven ontology engineering with formal OWL reasoning and stable matching alignment. The research demonstrates that stable 1-to-1 matching is the critical factor for ontology alignment quality, achieving F1 scores competitive with state-of-the-art systems, while structured tool access via Model Context Protocol significantly outperforms raw file reading for LLM interaction.
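The stable 1-to-1 matching the summary highlights can be illustrated with the classic Gale-Shapley procedure over a similarity matrix. This is a generic sketch, not the paper's Rust implementation; the similarity scores below are made up.

```python
# Sketch of stable 1-to-1 matching for ontology alignment (Gale-Shapley).
# The similarity scores are illustrative; the paper's actual matcher and
# scoring pipeline are not reproduced here.

def stable_match(sim):
    """Match row entities to column entities so no unmatched pair
    prefers each other over their assigned partners."""
    n = len(sim)
    # Each row proposes to columns in decreasing similarity order.
    prefs = [sorted(range(n), key=lambda j: -sim[i][j]) for i in range(n)]
    next_choice = [0] * n          # next column each row will propose to
    col_partner = [None] * n       # current partner of each column
    free = list(range(n))
    while free:
        i = free.pop()
        j = prefs[i][next_choice[i]]
        next_choice[i] += 1
        cur = col_partner[j]
        if cur is None:
            col_partner[j] = i
        elif sim[cur][j] < sim[i][j]:  # column prefers the new proposer
            col_partner[j] = i
            free.append(cur)
        else:
            free.append(i)
    return {col_partner[j]: j for j in range(n)}

# Toy similarity matrix between 3 concepts in each ontology.
sim = [[0.9, 0.2, 0.1],
       [0.3, 0.8, 0.4],
       [0.1, 0.5, 0.7]]
print(stable_match(sim))  # {0: 0, 1: 1, 2: 2}
```

Because the matching is stable, no concept pair outside the alignment scores higher with each other than with their assigned partners, which is the property the paper identifies as critical for alignment quality.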

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

Prediction Bottlenecks Don't Discover Causal Structure (But Here's What They Actually Do)

Researchers rigorously tested claims that Mamba state-space models can discover causal structure through prediction-only training, finding the method underperforms classical approaches like PCMCI and Granger causality. The apparent success in earlier experiments was largely attributable to sample-size confounds and non-standard intervention semantics rather than genuine architectural advantages.
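The Granger-causality baseline the paper compares against boils down to a nested-regression check: does adding lagged x reduce the error of predicting y? A minimal sketch on synthetic data (not the paper's PCMCI setup):

```python
# Minimal Granger-style check (illustrative only): does adding lagged x
# reduce the residual error of predicting y from its own past?
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):                  # y is driven by lagged x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def resid_var(target, lagged_cols):
    X = np.column_stack(lagged_cols)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

restricted = resid_var(y[1:], [y[:-1]])        # y's own lag only
full = resid_var(y[1:], [y[:-1], x[:-1]])      # plus x's lag
print(full < restricted)  # True: x "Granger-causes" y in this toy
```

A real test would add an F-statistic and significance threshold; the point here is only the restricted-vs-full comparison that the classical baseline rests on.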

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

Bridging Sequence and Graph Structure for Epigenetic Age Prediction

Researchers present a machine learning framework that combines DNA sequence analysis with graph neural networks to predict biological age from methylation patterns, achieving a 12.8% improvement over existing methods. The approach uses handcrafted sequence features rather than deep learning to encode biological context, demonstrating practical advantages in aging research applications.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

Re²Math: Benchmarking Theorem Retrieval in Research-Level Mathematics

Researchers introduce Re²Math, a new benchmark for evaluating large language models' ability to retrieve relevant mathematical theorems and lemmas from academic literature during proof construction. The benchmark reveals significant gaps in current AI systems: the best model achieves only 7.0% accuracy despite retrieving valid statements, indicating that models struggle to verify a theorem's applicability to a specific proof context.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 5/10

ChaosNetBench: Benchmarking Spatio-Temporal Graph Neural Networks on Chaotic Lattice Dynamics

Researchers introduce ChaosNetBench, a synthetic benchmark framework for evaluating spatio-temporal graph neural networks (STGNNs) on chaotic dynamical systems. The framework reveals that STGNNs outperform traditional baselines (TCN, N-BEATS, Transformers) in high-chaos regimes, while non-graph methods remain competitive in low-chaos conditions.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 5/10

Sufficient conditions for a Heuristic Rating Estimation Method application

Researchers have formalized the sufficient conditions for applying the Heuristic Rating Estimation (HRE) method, a decision-making framework that evaluates alternatives through pairwise comparisons and reference weights. The study examines both arithmetic and geometric computational approaches for complete and incomplete comparison datasets, demonstrating that arithmetic variants provide optimal inconsistency estimates.
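A standard building block of such pairwise-comparison methods is deriving a weight vector from a comparison matrix via row geometric means. This sketch shows only that ingredient, not the full HRE setup with fixed reference weights:

```python
# Geometric-mean weights from a pairwise comparison matrix — a common
# ingredient of pairwise-comparison methods. The full HRE method, with
# known reference weights, is not reproduced here.
import math

def geometric_weights(C):
    """C[i][j] ≈ w_i / w_j; return the normalized weight vector."""
    n = len(C)
    g = [math.prod(row) ** (1.0 / n) for row in C]
    s = sum(g)
    return [v / s for v in g]

# Fully consistent toy matrix built from true weights (0.5, 0.3, 0.2).
w = [0.5, 0.3, 0.2]
C = [[wi / wj for wj in w] for wi in w]
print([round(v, 2) for v in geometric_weights(C)])  # [0.5, 0.3, 0.2]
```

On a perfectly consistent matrix the recovered weights match the true ones exactly; inconsistency in real comparisons is what the arithmetic vs geometric variants in the paper handle differently.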

🧠 AI · Bullish · arXiv – CS AI · 19h ago · 6/10

Semi-Supervised Neural Super-Resolution for Mesh-Based Simulations

Researchers introduce SuperMeshNet, a semi-supervised neural network framework that dramatically reduces the amount of expensive high-resolution training data needed for mesh-based simulations. By combining small paired datasets with abundant unpaired data through complementary learning, the system achieves superior accuracy while requiring 90% less supervised training data than fully supervised approaches.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

PrimeKG-CL: A Continual Graph Learning Benchmark on Evolving Biomedical Knowledge Graphs

Researchers introduce PrimeKG-CL, a benchmark dataset for continual graph learning built from nine biomedical databases with 129K+ nodes and 8.1M+ edges across two temporal snapshots (2021-2023). The work evaluates how different machine learning strategies handle evolving biomedical knowledge graphs, revealing that decoder choice and learning strategy interact significantly and that standard metrics fail to distinguish between retaining valid facts and forgetting outdated ones.

🏢 Hugging Face
🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

A Cognitively Grounded Bayesian Framework for Misinformation Susceptibility

Researchers present Bounded Pragmatic Listener (BPL), a Bayesian framework that models how cognitive limitations affect susceptibility to misinformation. The framework incorporates three cognitively grounded constraints—working memory limits, information bottlenecks, and saliency-weighted sampling—to predict vulnerability to disinformation across benchmark datasets.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

Revisiting Mixture Policies in Entropy-Regularized Actor-Critic

Researchers propose a marginalized reparameterization (MRP) estimator to enable practical use of mixture policies in reinforcement learning, addressing a long-standing gap between theoretical potential and practical implementation. By reducing variance compared to likelihood-ratio methods, MRP mixture policies achieve performance parity with standard Gaussian policies while offering greater flexibility in continuous action spaces.

🏢 Google
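The "marginalized" view underlying such estimators is that a mixture policy's log-density is a logsumexp over components, so gradients reach every component's parameters rather than only the sampled one. A numerically stable sketch of that density (the paper's MRP estimator itself is not reproduced; numbers are illustrative):

```python
# Marginalized log-density of a 1-D Gaussian mixture policy: a stable
# logsumexp over component log-probabilities. Illustrative only; the
# paper's MRP gradient estimator is not reproduced here.
import math

def mixture_logpdf(a, weights, means, stds):
    comps = []
    for w, mu, s in zip(weights, means, stds):
        logp = (math.log(w)
                - math.log(s * math.sqrt(2 * math.pi))
                - 0.5 * ((a - mu) / s) ** 2)
        comps.append(logp)
    m = max(comps)                                # stable logsumexp
    return m + math.log(sum(math.exp(c - m) for c in comps))

# Action a=0 under an equal-weight mixture of N(-1,1) and N(1,1).
lp = mixture_logpdf(0.0, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
print(round(lp, 4))  # -1.4189
```

Differentiating this marginal log-density (e.g. with autodiff) is what avoids the high-variance likelihood-ratio term for the discrete component choice.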
🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

CATO: Charted Attention for Neural PDE Operators

Researchers introduce CATO (Charted Axial Transformer Operator), a neural operator architecture that solves partial differential equations (PDEs) on complex geometries more efficiently than existing methods. By learning geometry-adaptive coordinate transformations and incorporating derivative-aware physics supervision, CATO achieves a 26.76% performance improvement over competing approaches while reducing parameter count by 82%.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

Consistency as a Testable Property: Statistical Methods to Evaluate AI Agent Reliability

Researchers present a rigorous statistical framework for measuring AI agent reliability through U-statistics and kernel-based metrics, moving beyond traditional pass@1 evaluation methods. The study reveals that agents can possess requisite knowledge yet fail catastrophically under minor task variations, with trajectory-level consistency metrics providing significantly better diagnostic sensitivity for identifying failure modes in high-stakes deployments.
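The simplest trajectory-level U-statistic is the unbiased estimate of the probability that two independent runs produce the same trajectory, averaged over all unordered pairs of observed runs. A toy sketch (the paper's kernel-based variants are not reproduced; agent names and trajectories are made up):

```python
# Toy pairwise-agreement U-statistic for run-to-run consistency:
# the fraction of unordered run pairs that produced identical
# trajectories. Trajectories here are invented for illustration.
from itertools import combinations

def pairwise_consistency(runs):
    pairs = list(combinations(runs, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Two hypothetical agents observed over 5 runs each.
flaky  = [("search", "click"), ("click",), ("search", "type"),
          ("search", "click"), ("type",)]
stable = [("search", "click")] * 4 + [("search", "type")]

print(pairwise_consistency(flaky), pairwise_consistency(stable))  # 0.1 0.6
```

Two agents can share a pass rate yet differ sharply on this metric, which is the diagnostic sensitivity the summary refers to.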

🧠 AI · Bullish · arXiv – CS AI · 19h ago · 6/10

E-TCAV: Formalizing Penultimate Proxies for Efficient Concept Based Interpretability

Researchers introduce E-TCAV, an optimized version of TCAV that improves the efficiency and stability of neural network interpretability testing by leveraging penultimate layer representations. The method achieves linear speed-ups while maintaining accuracy, advancing practical tools for model debugging and real-time concept-guided training across vision and language tasks.
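A TCAV-style score on the penultimate layer can be sketched in a few lines: build a concept direction from activation statistics, then measure the fraction of class-logit gradients pointing along it. All activations and gradients below are synthetic stand-ins; the E-TCAV estimator itself is not reproduced.

```python
# Sketch of a TCAV-style concept score on a (fake) penultimate layer.
# Concept vector = difference of activation means (concept vs random);
# score = fraction of gradients with positive directional derivative.
import numpy as np

rng = np.random.default_rng(1)
dim = 16
concept_dir = rng.normal(size=dim)

# Synthetic activations: concept examples are shifted along the direction.
concept_acts = rng.normal(size=(50, dim)) + 2.0 * concept_dir
random_acts = rng.normal(size=(50, dim))

cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)

# Synthetic class-logit gradients, correlated with the concept direction.
grads = rng.normal(size=(200, dim)) + 0.5 * concept_dir

tcav_score = float(np.mean(grads @ cav > 0))
print(tcav_score > 0.5)  # concept is positively associated with the class
```

Working at the penultimate layer keeps this a handful of dot products per example, which is the kind of linear speed-up the summary describes.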

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

Automated Approach for Solving Infinite-state Polynomial Reachability Games

Researchers have developed an automated algorithm for solving infinite-state polynomial reachability games, a class of two-player strategic games with applications in AI and reactive synthesis. The approach introduces ranking certificates as a formal proof mechanism and demonstrates the ability to solve previously intractable problems, including computing optimal strategies for the classical Cinderella-Stepmother game.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

diffGHOST: Diffusion based Generative Hedged Oblivious Synthetic Trajectories

diffGHOST is a new conditional diffusion model that synthesizes mobility trajectories while preserving privacy through latent space segmentation. The approach addresses a critical gap in existing generative models that lack formal privacy guarantees despite handling sensitive personal movement data.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

A Reflective Storytelling Agent for Older Adults: Integrating Argumentation Schemes and Argument Mining in LLM-Based Personalised Narratives

Researchers developed a reflective storytelling agent that combines large language models with knowledge graphs and argumentation theory to generate personalized narratives for older adults. Testing with 55 participants showed the system successfully identified personally relevant purposes in two-thirds of narratives, with argument-based grounding and hallucination detection significantly improving perceived consistency and clarity.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

ASIA: an Autonomous System Identification Agent

ASIA is an autonomous AI agent framework that automates system identification tasks by delegating model selection, training algorithms, and hyperparameter tuning to a large language model. The framework eliminates manual trial-and-error processes in dynamical systems modeling, though empirical testing reveals concerns around test leakage and reproducibility.

🧠 AI · Bullish · arXiv – CS AI · 19h ago · 6/10

Learning to Explore: Scaling Agentic Reasoning via Exploration-Aware Policy Optimization

Researchers introduce EAPO, an exploration-aware reinforcement learning framework that enables LLM agents to selectively explore uncertain scenarios before acting. The method uses fine-grained reward functions and adaptive exploration mechanisms to improve decision-making across text and GUI-based agent benchmarks.

🏢 Hugging Face
🧠 AI · Bullish · arXiv – CS AI · 19h ago · 6/10

Latency Analysis and Optimization of Alpamayo 1 via Efficient Trajectory Generation

Researchers have optimized Alpamayo 1, a reasoning-based autonomous driving system, by redesigning it from a multi-reasoning to a single-reasoning architecture while accelerating diffusion-based action generation. The optimization achieves a 69.23% latency reduction while maintaining trajectory diversity and prediction quality, demonstrating that system-level efficiency improvements are critical for practical autonomous driving deployment.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 5/10

Matching Meaning at Scale: Evaluating Semantic Search for 18th-Century Intellectual History through the Case of Locke

Researchers evaluate semantic search as a tool for analyzing 18th-century intellectual history, specifically tracking how John Locke's ideas circulated through paraphrases and implicit references. While semantic search substantially outperforms traditional lexical methods at capturing meaning-level correspondences, linguistic analysis reveals that retrieval remains constrained by surface-level vocabulary overlap, suggesting both promise and limitations for historical corpus analysis.
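The core retrieval step in a study like this is ranking candidate passages by cosine similarity to a query in embedding space. A minimal sketch with hand-made 3-d "embeddings" standing in for a real sentence encoder:

```python
# Minimal semantic-retrieval sketch: rank candidate passages by cosine
# similarity to a query embedding. The 3-d vectors are hand-made
# stand-ins; a real pipeline would use a sentence encoder.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

query = np.array([0.9, 0.1, 0.2])        # e.g. a Lockean passage
passages = {
    "paraphrase of Locke": np.array([0.8, 0.2, 0.1]),
    "unrelated sermon":    np.array([0.1, 0.9, 0.3]),
    "implicit reference":  np.array([0.7, 0.3, 0.4]),
}

ranked = sorted(passages, key=lambda k: -cosine(query, passages[k]))
print(ranked[0])  # paraphrase of Locke
```

The paper's caveat is that even such embeddings still lean on vocabulary overlap, so paraphrases with very different surface wording remain hard to retrieve.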

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

PiCA: Pivot-Based Credit Assignment for Search Agentic Reinforcement Learning

Researchers introduce PiCA (Pivot-Based Credit Assignment), a novel reinforcement learning mechanism that improves how LLM-based search agents learn from long sequences of actions. By identifying key pivot steps and anchoring rewards to final task outcomes, PiCA addresses critical challenges in credit assignment, delivering 15.2% performance gains on knowledge-intensive QA tasks.

🧠 AI · Neutral · arXiv – CS AI · 19h ago · 6/10

Emergent Semantic Role Understanding in Language Models

Researchers demonstrate that language models develop semantic role understanding (who-did-what-to-whom comprehension) primarily during pre-training, though fine-tuning still improves performance. Using linear probes on frozen transformer models, they find semantic role information emerges from language modeling objectives alone, with representation structure becoming more distributed as models scale.
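A linear probe of this kind fits a closed-form linear classifier on frozen features and asks whether it can read out the label. The sketch below uses synthetic "hidden states" with a planted role direction, not real transformer activations:

```python
# Sketch of a linear probe on frozen representations: a closed-form
# ridge classifier on synthetic hidden states with a planted semantic
# role signal (agent vs patient). Features are random stand-ins, not
# real transformer activations.
import numpy as np

rng = np.random.default_rng(2)
dim, n = 32, 200
role_dir = rng.normal(size=dim)

X = rng.normal(size=(n, dim))
y = rng.integers(0, 2, size=n)              # 0 = agent, 1 = patient
X = X + np.outer(2 * y - 1, role_dir)       # shift ±role_dir by label

# Closed-form ridge probe: w = (X^T X + lam*I)^{-1} X^T targets
targets = 2.0 * y - 1.0
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(dim), X.T @ targets)

acc = float(np.mean((X @ w > 0) == (y == 1)))
print(acc > 0.9)  # the probe recovers the role from frozen features
```

High probe accuracy on frozen features is the evidence pattern the paper uses to argue the information emerges during pre-training rather than being added by fine-tuning.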
