y0news

AI × Crypto News Feed

Real-time AI-curated news from 34,831+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

DeepTumorVQA: A Hierarchical 3D CT Benchmark for Stage-Wise Evaluation of Medical VLMs and Tool-Augmented Agents

Researchers introduce DeepTumorVQA, a comprehensive benchmark for evaluating medical AI vision-language models on 3D CT tumor analysis through 476K hierarchical questions across four diagnostic stages. The study reveals that measurement accuracy is the critical bottleneck in medical AI reasoning, and that tool-augmented agents significantly outperform models working without external resources.

AI · Neutral · arXiv – CS AI · 15h ago · 5/10
🧠

ChaosNetBench: Benchmarking Spatio-Temporal Graph Neural Networks on Chaotic Lattice Dynamics

Researchers introduce ChaosNetBench, a synthetic benchmark framework for evaluating spatio-temporal graph neural networks (STGNNs) on chaotic dynamical systems. The framework reveals that STGNNs outperform traditional baselines (TCN, N-BEATS, Transformers) in high-chaos regimes, while non-graph methods remain competitive in low-chaos conditions.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Rethinking Evaluation of Multiple Sclerosis (MS) Lesion Segmentation Models

Researchers argue that Multiple Sclerosis lesion segmentation models are inadequately evaluated using only Dice scores, ignoring lesion-wise detection performance and metrics relevant to clinical practice. The paper proposes rethinking evaluation frameworks to better assess deep learning models for real-world hospital deployment in MS diagnosis and progression monitoring.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Dsat: A Native SAT Solver for Discrete Logic

Researchers introduce DSAT, a native SAT solver designed to work directly with discrete variables rather than converting them to binary Boolean variables. The solver applies traditional SAT techniques like unit resolution and clause learning to discrete logic, offering potential computational and semantic advantages over existing binarization approaches for applications in probabilistic reasoning, planning, and explainable AI.
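The core inference step the summary names, unit resolution over discrete (multi-valued) variables, can be sketched as follows. The clause encoding (disjunctions of `var == value` literals), the function name, and the simple fixpoint loop are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of unit propagation over multi-valued variables, applied
# directly to discrete logic instead of a binarized Boolean encoding.

def unit_propagate(clauses, assignment):
    """clauses: list of lists of (var, value) literals meaning 'var == value'.
    assignment: dict var -> value. Returns extended assignment, or None on conflict."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(v) == val for v, val in clause):
                continue  # clause already satisfied by the current assignment
            open_lits = [(v, val) for v, val in clause if v not in assignment]
            if not open_lits:
                return None  # every literal falsified: conflict
            if len(open_lits) == 1:  # unit clause: the remaining literal is forced
                v, val = open_lits[0]
                assignment[v] = val
                changed = True
    return assignment

# Example: x ranges over {r, g, b}; the first clause forces x=r,
# which in turn makes the second clause unit and forces y=2.
clauses = [[("x", "r")], [("x", "g"), ("y", 2)]]
print(unit_propagate(clauses, {}))  # {'x': 'r', 'y': 2}
```

The point of working natively is visible even here: a literal like `("x", "g")` is falsified the moment `x` is assigned any other value, with no auxiliary Boolean variables or at-most-one constraints.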

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Beyond ESG Scores: Learning Dynamic Constraints for Sequential Portfolio Optimization

Researchers propose MACF-X, a machine learning framework that integrates ESG constraints into portfolio optimization without modifying financial models' core logic. The approach treats ESG as dynamic portfolio preferences rather than static scoring inputs, potentially improving risk management in sustainable investing.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

A Geometric Perspective on Next-Token Prediction in Large Language Models: Three Emerging Phases

Researchers have developed a geometric framework for understanding how large language models process information across their layers, identifying three distinct phases in next-token prediction: Seeding Multiplexing, Hoisting Overriding, and Focal Convergence. The study reveals that model depth primarily increases capacity for candidate disambiguation rather than adding fundamentally new computational stages.

AI · Bullish · arXiv – CS AI · 15h ago · 6/10
🧠

Gate-and-Merge: Zero-shot Compositional Personalization of Vision Language Models

Researchers present Gate-and-Merge, a zero-shot framework enabling vision-language models to recognize and compose multiple user-defined concepts without requiring co-occurrence training data. The approach uses lightweight LoRA adapters for individual concepts and employs a gating mechanism to merge them intelligently at inference time, maintaining concept integrity while enabling compositional personalization.
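The gate-then-merge mechanism the summary describes can be sketched as a softmax-gated combination of per-concept adapter deltas. The vector representation, the dot-product gate, and all names here are assumptions for illustration; the paper's actual gating over LoRA weights is not specified in this summary.

```python
# Illustrative sketch: each user-defined concept contributes an adapter delta,
# and a gate scores each concept's relevance to the query before merging.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def gate_and_merge(query, concepts):
    """concepts: dict name -> (key_vector, delta_vector). Returns the merged delta."""
    names = list(concepts)
    # Gate: relevance of each concept to the query (dot product as a stand-in).
    scores = [sum(q * k for q, k in zip(query, concepts[n][0])) for n in names]
    gates = softmax(scores)
    dim = len(next(iter(concepts.values()))[1])
    merged = [0.0] * dim
    for g, n in zip(gates, names):
        merged = [m + g * d for m, d in zip(merged, concepts[n][1])]
    return merged

# Hypothetical concepts, each with a gate key and an adapter delta:
concepts = {
    "my_dog": ([1.0, 0.0], [0.5, 0.0, 0.0]),
    "my_mug": ([0.0, 1.0], [0.0, 0.0, 0.5]),
}
print(gate_and_merge([1.0, 0.0], concepts))  # weighted toward my_dog's delta
```

Because the merge happens at inference time, no co-occurrence training data is needed: each adapter is trained on its concept alone, and the gate decides the mixture per query.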

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Do Self-Evolving Agents Forget? Capability Degradation and Preservation in Lifelong LLM Agent Adaptation

Researchers identify capability erosion in self-evolving LLM agents, where systems adapting to new tasks progressively lose previously learned abilities across workflow, skill, model, and memory dimensions. The study proposes Capability-Preserving Evolution (CPE), a stabilization framework that maintains performance on existing tasks while enabling new adaptations, demonstrating improvements in retained capability stability across all evolution channels.

🧠 GPT-5
AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Spatial Priming Outperforms Semantic Prompting: A Grid-Based Approach to Improving LLM Accuracy on Chart Data Extraction

Researchers demonstrate that overlaying coordinate grids on chart images significantly improves multimodal LLM accuracy for data extraction tasks, reducing error rates from 25.5% to 19.5%. This spatial priming approach outperforms semantic methods like Chain-of-Thought prompting, suggesting that explicit spatial context is more effective than high-level semantic guidance for current-generation vision-language models.
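The preprocessing step behind this result is straightforward: compute evenly spaced grid-line positions in pixel space with their data-space labels, then draw them over the chart image before sending it to the model. The sketch below covers only the coordinate computation; the drawing itself (e.g. with Pillow) is omitted, and the pixel and axis ranges are made-up examples.

```python
# Minimal sketch of computing grid-line positions for spatial priming:
# pixel coordinates paired with the data values they represent.

def grid_lines(px_min, px_max, data_min, data_max, n_lines):
    """Return (pixel_position, data_label) pairs for n_lines grid lines."""
    lines = []
    for i in range(n_lines):
        t = i / (n_lines - 1)  # fraction of the way along the axis
        px = px_min + t * (px_max - px_min)
        val = data_min + t * (data_max - data_min)
        lines.append((round(px), round(val, 2)))
    return lines

# Vertical grid for an x-axis spanning pixels 40..600 and data values 0..100:
print(grid_lines(40, 600, 0, 100, 5))
# [(40, 0.0), (180, 25.0), (320, 50.0), (460, 75.0), (600, 100.0)]
```

Each grid line would be rendered with its data label, giving the model explicit pixel-to-value anchors instead of forcing it to interpolate axis positions on its own.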

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

SKG-VLA: Scene Knowledge Graph Priors for Structured Scene Semantics and Multimodal Reasoning for Decision Making

Researchers present SKG-VLA, an AI system that uses Scene Knowledge Graphs to improve decision-making in large-scale complaint handling by integrating multimodal evidence (text, images, metadata) with structured reasoning about entities, policies, and temporal events. The approach demonstrates improved accuracy and robustness across policy-grounded reasoning and long-tail scenarios.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Beyond Accuracy: Evaluating Strategy Diversity in LLM Mathematical Reasoning

Researchers introduce a strategy-level evaluation framework for large language models on mathematical reasoning tasks, revealing a significant gap between high answer accuracy and actual reasoning flexibility. While frontier models achieve 95-100% accuracy on single-solution prompts, they recover substantially fewer problem-solving strategies than human references when asked to generate multiple approaches, with only 39-71% coverage depending on the model and iteration count.

🧠 Claude · 🧠 Gemini
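The 39-71% coverage figure suggests a metric like the following: the fraction of human reference strategies that appear anywhere in the model's generated solutions. The set-based formulation and the plain-string strategy labels are assumptions; in practice the labels would come from some upstream strategy classifier.

```python
# Hedged sketch of a strategy-coverage metric: what fraction of the human
# reference strategies does the model recover across repeated generations?

def strategy_coverage(model_runs, human_strategies):
    """model_runs: one strategy label per generated solution."""
    recovered = set(model_runs) & set(human_strategies)
    return len(recovered) / len(human_strategies)

# Hypothetical example: four reference strategies, four model attempts.
human = {"induction", "telescoping", "generating_function", "bijection"}
model = ["induction", "induction", "telescoping", "induction"]
print(strategy_coverage(model, human))  # 0.5 — half the references recovered
```

A model can score 100% on answer accuracy while scoring low here, which is exactly the accuracy-versus-flexibility gap the paper highlights.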
AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

SkillLens: Adaptive Multi-Granularity Skill Reuse for Cost-Efficient LLM Agents

SkillLens introduces a hierarchical framework for organizing and reusing skills in LLM agents at multiple granularity levels, reducing computational costs while maintaining relevance. The system retrieves and adapts skills selectively rather than injecting entire skill blocks, achieving measurable performance gains on benchmark tasks.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Adaptive Data Harvesting for Efficient Neural Network Learning with Universal Constraints

Researchers propose an adaptive data harvesting approach using reinforcement learning to dynamically select training samples for neural networks constrained by universal conditions. The method improves upon fixed heuristics for training Lyapunov Neural Networks and Physics-Informed Neural Networks, demonstrating faster convergence and better solution quality across test problems.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Explainable Knowledge Tracing via Probabilistic Embeddings and Pattern-based Reasoning

Researchers introduce Probabilistic Logical Knowledge Tracing (PLKT), an interpretable AI framework that uses Beta-distributed probabilistic embeddings to model student knowledge states and predict learning performance. Unlike conventional deep learning approaches that rely on opaque deterministic embeddings, PLKT constructs transparent reasoning paths showing how past interactions influence predictions while maintaining superior accuracy compared to existing methods.
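The Beta-distributed knowledge state the summary mentions can be sketched directly: each skill's mastery is a Beta(α, β) distribution whose mean gives the predicted mastery and whose variance shrinks as evidence accumulates. The simple count-based update below is an assumption for illustration, not the paper's actual inference procedure.

```python
# Illustrative sketch of a per-skill Beta knowledge state, updated on
# observed answers; the distribution's mean and variance are interpretable,
# unlike an opaque deterministic embedding.

class BetaSkill:
    def __init__(self, alpha=1.0, beta=1.0):  # Beta(1,1) = uniform prior
        self.alpha, self.beta = alpha, beta

    def observe(self, correct):
        if correct:
            self.alpha += 1
        else:
            self.beta += 1

    def mastery(self):   # posterior mean alpha / (alpha + beta)
        return self.alpha / (self.alpha + self.beta)

    def variance(self):  # shrinks as more interactions are observed
        a, b = self.alpha, self.beta
        return a * b / ((a + b) ** 2 * (a + b + 1))

skill = BetaSkill()
for outcome in [True, True, False, True]:
    skill.observe(outcome)
print(round(skill.mastery(), 3))  # 0.667 after 3 correct, 1 incorrect
```

The variance is what makes the state interpretable: a student with mastery 0.67 after four questions is treated very differently from one with mastery 0.67 after forty.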

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Embeddings for Preferences, Not Semantics

Researchers propose a new approach to embedding text for collective decision-making that prioritizes preferential similarity over semantic similarity. The method uses synthetic training data to separate preference signals (stance and values) from semantic nuisance (style and wording), improving preference prediction across deliberation datasets.

🏢 Meta
AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

EduStory: A Unified Framework for Pedagogically-Consistent Multi-Shot STEM Instructional Video Generation

EduStory introduces a novel framework for generating pedagogically-consistent multi-shot STEM instructional videos, addressing the challenge of maintaining knowledge coherence across long-horizon video generation. The framework combines pedagogical state modeling, script-guided control, and specialized evaluation metrics, supported by a new benchmark (EduVideoBench) designed to advance reliable and trustworthy educational video synthesis.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Structure-Centric Graph Foundation Model via Geometric Bases

Researchers propose Structure-Centric Graph Foundation Models (SCGFM), a novel approach that treats graph topology as the primary source of transferable knowledge using geometric bases and Gromov-Wasserstein distances. The method addresses key limitations in existing graph foundation models by handling structural heterogeneity and incompatible node feature spaces, demonstrating improved generalization across both in-domain and cross-domain graph tasks.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

PiCA: Pivot-Based Credit Assignment for Search Agentic Reinforcement Learning

Researchers introduce PiCA (Pivot-Based Credit Assignment), a novel reinforcement learning mechanism that improves how LLM-based search agents learn from long sequences of actions. By identifying key pivot steps and anchoring rewards to final task outcomes, PiCA addresses critical challenges in credit assignment, delivering 15.2% performance gains on knowledge-intensive QA tasks.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

A Prompt-Aware Structuring Framework for Reliable Reuse of AI-Generated Content in the Agentic Web

Researchers propose a framework that automatically attaches structured metadata to AI-generated content at creation time, including prompts, model information, and confidence scores, enabling verification of reliability and license compliance. This addresses critical risks of chained hallucinations and compliance violations as AI agents increasingly dominate web content generation.
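The attach-at-creation idea can be sketched as wrapping each generation in a record whose metadata includes the prompt, model, confidence, license, and a content hash for later verification. All field names and the hash-based integrity check are illustrative assumptions, not the paper's schema.

```python
# Minimal sketch of attaching structured provenance metadata to generated
# content at creation time, so downstream agents can verify what they reuse.
import hashlib
import json
from datetime import datetime, timezone

def wrap_generation(content, prompt, model, confidence, license_id):
    record = {
        "content": content,
        "meta": {
            "prompt": prompt,
            "model": model,
            "confidence": confidence,
            "license": license_id,
            "created_at": datetime.now(timezone.utc).isoformat(),
            # The hash binds the metadata to this exact content.
            "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        },
    }
    return json.dumps(record)

def verify(record_json):
    rec = json.loads(record_json)
    digest = hashlib.sha256(rec["content"].encode()).hexdigest()
    return digest == rec["meta"]["content_sha256"]

rec = wrap_generation("The capital of France is Paris.",
                      "What is the capital of France?",
                      "example-model-v1", 0.98, "CC-BY-4.0")
print(verify(rec))  # True
```

An agent consuming this record can check both the integrity of the content and whether the declared license and confidence meet its reuse policy before feeding it into further generation.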

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

EquiMem: Calibrating Shared Memory in Multi-Agent Debate via Game-Theoretic Equilibrium

Researchers introduce EquiMem, a game-theoretic framework that addresses vulnerabilities in multi-agent debate systems by validating shared memory entries without relying on LLM judgments. The approach treats memory updating as a zero-trust game where agent equilibrium indicates optimal trust levels, outperforming existing safeguards while maintaining minimal computational overhead.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Shaping Schema via Language Representation as the Next Frontier for LLM Intelligence Expanding

A new arXiv paper argues that optimizing how language represents tasks—rather than scaling model size—is crucial for advancing LLM intelligence. The research demonstrates that deliberate language representation design can yield substantial performance improvements without modifying model parameters, supported by controlled experiments showing how different linguistic framings of identical tasks trigger different internal feature activations.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

SeePhys Pro: Diagnosing Modality Transfer and Blind-Training Effects in Multimodal RLVR for Physics Reasoning

Researchers introduce SeePhys Pro, a benchmark revealing that advanced AI models significantly degrade in physics reasoning when visual information replaces text, with visual grounding as the primary failure point. The study further demonstrates that multimodal reinforcement learning improvements can stem from non-visual textual cues rather than genuine visual understanding, challenging current evaluation methodologies.

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Belief or Circuitry? Causal Evidence for In-Context Graph Learning

Researchers present causal evidence that large language models learn in-context through dual mechanisms combining genuine structure inference with local pattern-matching, rather than relying on either approach alone. Using graph random-walk tasks and activation patching techniques, they demonstrate that LLMs simultaneously encode multiple competing graph topologies in orthogonal representational subspaces and show that late-layer circuits causally drive graph-preference predictions.

AI · Bullish · arXiv – CS AI · 15h ago · 6/10
🧠

MemQ: Integrating Q-Learning into Self-Evolving Memory Agents over Provenance DAGs

Researchers introduce MemQ, a novel framework that applies Q-learning eligibility traces to episodic memory in large language model agents, enabling credit assignment across memory dependencies recorded in provenance DAGs. The approach achieves superior performance across six diverse benchmarks, with gains up to 5.7 percentage points on multi-step tasks requiring deep memory chains.
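The eligibility-trace mechanism the summary names can be sketched as follows: each memory entry used along a chain accumulates a trace that decays as later entries are used, and the final task reward updates each entry's value in proportion to its trace. The decay constants, learning rate, and terminal-only update are assumptions for illustration.

```python
# Hedged sketch of eligibility-trace credit assignment over a memory chain:
# a final reward is propagated back through contributing memory entries,
# with exponentially decaying credit for earlier ones.

def assign_credit(chain, reward, values, gamma=0.9, lam=0.8, lr=0.5):
    """chain: memory-entry ids in order of use; values: dict id -> estimated value."""
    trace = {}
    for entry in chain:
        # Decay all existing traces, then bump the entry just used.
        for k in trace:
            trace[k] *= gamma * lam
        trace[entry] = trace.get(entry, 0.0) + 1.0
    # Terminal update: each entry moves toward the reward, scaled by its trace.
    for entry, e in trace.items():
        old = values.get(entry, 0.0)
        values[entry] = old + lr * e * (reward - old)
    return values

values = assign_credit(["m1", "m2", "m3"], reward=1.0, values={})
print(values)  # entries used later in the chain receive more credit
```

On the deep memory chains the summary mentions, this is exactly where naive last-step attribution fails: without traces, early entries that set up the final answer receive no credit at all.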

AI · Neutral · arXiv – CS AI · 15h ago · 6/10
🧠

Fitting Is Not Enough: Smoothness in Extremely Quantized LLMs

Researchers demonstrate that extreme quantization of large language models causes degradation beyond numerical precision loss, specifically through reduced smoothness in prediction spaces. They introduce smoothness-preserving techniques in post-training and quantization-aware training that improve generation quality independent of numerical accuracy gains.

Page 415 of 1394