y0news

AI × Crypto News Feed

Real-time AI-curated news from 34,646+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

Mid-Training with Self-Generated Data Improves Reinforcement Learning in Language Models

Researchers propose a mid-training technique using self-generated data to improve reinforcement learning in large language models. By exposing models to multiple problem-solving approaches before RL training, the method demonstrates consistent improvements across mathematical reasoning, code generation, and narrative tasks.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

Alignment as Jurisprudence

A new academic paper draws parallels between jurisprudence (how judges decide cases) and AI alignment (ensuring AI systems conform to human values), arguing that legal theory can inform AI safety approaches. The essay bridges Constitutional AI and case-based reasoning methods with established legal frameworks like interpretivism and analogical reasoning, suggesting mutual insights between law and AI development.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

Playing games with knowledge: AI-Induced delusions need game theoretic interventions

Researchers propose that conversational AI systems create epistemic problems not through flawed models but through game-theoretic dynamics in which sycophantic responses reinforce user biases. They introduce an "Epistemic Mediator" mechanism with belief versioning to break feedback loops that lead users toward delusional certainty, achieving a 48x reduction in belief spirals.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

LLM Advertisement based on Neuron Auctions

Researchers introduce Neuron Auctions, a novel mechanism that embeds advertisements within Large Language Models by targeting their internal neural representations rather than surface text. The approach uses mechanistic interpretability to identify brand-specific neurons that operate in near-orthogonal subspaces, enabling platforms to balance advertiser revenue, user experience, and content quality through a strategy-proof auction mechanism.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

The Reciprocity Gradient

Researchers introduce the reciprocity gradient, a novel machine learning method that addresses the influence attribution problem in multi-agent strategic interactions. The approach backpropagates reward signals through estimated opponent policies without requiring reward shaping, enabling agents to learn context-sensitive cooperation strategies that outperform sample-based baselines.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

PathISE: Learning Informative Path Supervision for Knowledge Graph Question Answering

PathISE is a novel framework that enables knowledge graph question-answering systems to learn effective supervision signals from answer-level labels alone, eliminating the need for expensive intermediate annotations. By using a transformer-based estimator to identify informative relation paths and distilling them into LLM path generators, the approach achieves performance competitive with the state of the art while reducing resource requirements for training.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

Interactive Critique-Revision Training for Reliable Structured LLM Generation

Researchers propose DPA-GRPO, a novel training method for large language models that improves structured decision-making by using a generator-verifier framework where one model produces outputs and another validates them through safety assurance cases. The method demonstrates improved accuracy on tax calculation benchmarks and addresses the challenge of ensuring LLM outputs are locally correct, globally consistent, and auditable.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

Reasoning Is Not Free: Robust Adaptive Cost-Efficient Routing for LLM-as-a-Judge

Researchers demonstrate that reasoning-capable LLMs improve judgment accuracy significantly on complex tasks like math and coding, but offer minimal or negative benefits on simpler evaluations while consuming substantially more computational resources. They introduce RACER, an adaptive routing algorithm that dynamically selects between reasoning and non-reasoning judges under budget constraints while accounting for distribution shifts.
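The budget-aware routing idea can be sketched in a few lines. This is an illustrative toy, not the paper's RACER algorithm: it assumes a hypothetical per-item difficulty estimate and fixed judge costs, and greedily spends the reasoning budget on the hardest items.

```python
# Toy sketch of budget-constrained judge routing: send an item to the
# expensive reasoning judge only when its estimated difficulty is high
# and budget remains. All names and thresholds are illustrative.

def route(items, difficulty, budget, cost_reasoning=10, cost_fast=1):
    """Greedily spend budget on the hardest items.

    items: list of task identifiers
    difficulty: dict mapping item -> estimated difficulty in [0, 1]
    budget: total compute budget
    Returns a dict mapping item -> chosen judge ("reasoning" or "fast").
    """
    assignment = {}
    # Hardest items first, so the reasoning budget goes where it helps most.
    for item in sorted(items, key=lambda i: difficulty[i], reverse=True):
        if difficulty[item] > 0.5 and budget >= cost_reasoning:
            assignment[item] = "reasoning"
            budget -= cost_reasoning
        else:
            assignment[item] = "fast"
            budget -= cost_fast
    return assignment
```

A real router would also have to estimate difficulty online and adapt to distribution shift, which is the harder part of the problem the paper targets.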

AI · Bullish · arXiv – CS AI · 9h ago · 6/10

Towards Universal Gene Regulatory Network Inference: Unlocking Generalizable Regulatory Knowledge in Single-cell Foundation Models

Researchers introduce improved methods for Gene Regulatory Network (GRN) inference using single-cell foundation models, proposing Virtual Value Perturbation and Gradient Trajectory techniques to better extract regulatory knowledge. The work establishes a new benchmark for evaluating GRN predictions across unseen genes and datasets, demonstrating significant performance improvements over existing approaches.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

PLACO: A Multi-Stage Framework for Cost-Effective Performance in Human-AI Teams

PLACO presents a multi-stage framework for optimizing human-AI team performance in classification tasks by combining human and model outputs through Bayesian probability methods. The research addresses how to effectively leverage both human judgment and AI predictions when neither alone achieves desired performance levels.
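The basic fusion step can be illustrated with a standard Bayesian combination. This is a generic product-of-experts sketch under a naive conditional-independence assumption and a uniform prior, not the paper's exact formulation:

```python
# Combine a human's and a model's class-probability vectors by multiplying
# them elementwise and renormalizing (Bayes rule with a uniform prior,
# assuming the two judgments are conditionally independent given the class).

def combine(p_human, p_model):
    """Fuse two probability vectors over the same classes."""
    fused = [h * m for h, m in zip(p_human, p_model)]
    z = sum(fused)
    return [f / z for f in fused]

# The human leans toward class 0, the model toward class 1; the fused
# posterior favors whichever expert is more confident.
print(combine([0.7, 0.3], [0.4, 0.6]))
```

The multi-stage part of the framework then decides when to query the human at all, trading accuracy against annotation cost.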

AI · Bullish · arXiv – CS AI · 9h ago · 6/10

MemQ: Integrating Q-Learning into Self-Evolving Memory Agents over Provenance DAGs

Researchers introduce MemQ, a novel framework that applies Q-learning eligibility traces to episodic memory in large language model agents, enabling credit assignment across memory dependencies recorded in provenance DAGs. The approach achieves superior performance across six diverse benchmarks, with gains up to 5.7 percentage points on multi-step tasks requiring deep memory chains.
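Eligibility traces are the classic TD(λ) mechanism the summary refers to: when a reward arrives, every memory on the provenance chain is credited, decayed by how far back it was used. The sketch below is generic reinforcement learning over a linear chain, not the paper's exact update over DAGs:

```python
# Hedged sketch of eligibility-trace credit assignment for memory entries.
# alpha is the learning rate, lam the trace-decay factor; both are
# conventional RL hyperparameters, not values from the paper.

def update_values(chain, reward, values, alpha=0.1, lam=0.9):
    """chain: memory ids in order of use; values: dict id -> estimated utility."""
    trace = 1.0
    for mem_id in reversed(chain):          # most recently used memory first
        values[mem_id] = values.get(mem_id, 0.0) + alpha * trace * reward
        trace *= lam                        # older memories receive less credit
    return values
```

Over many episodes, memories that reliably sit upstream of rewarded outcomes accumulate value and can be preferentially retained or retrieved.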

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

On Distinguishing Capability Elicitation from Capability Creation in Post-Training: A Free-Energy Perspective

Researchers propose distinguishing between capability elicitation and capability creation in large language model post-training, arguing that the SFT vs. RL debate oversimplifies how models improve. The framework suggests post-training either reweights existing behaviors or expands what models can practically achieve, with significant implications for how AI development is understood and evaluated.

AI · Bullish · arXiv – CS AI · 9h ago · 6/10

Intelligent Autonomous Orchestration for Distributed Cloud Resources using Complex-Stability Analysis

Researchers propose C-SAS, an AI-driven orchestration framework using complex stability analysis to optimize distributed cloud resource allocation. The system reduces VM flapping by 94% and achieves 96% resource efficiency, outperforming traditional PID and machine learning approaches by embedding formal stability constraints into autonomous cloud infrastructure.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

Belief or Circuitry? Causal Evidence for In-Context Graph Learning

Researchers present causal evidence that large language models learn in-context through dual mechanisms combining genuine structure inference with local pattern-matching, rather than relying on either approach alone. Using graph random-walk tasks and activation patching techniques, they demonstrate that LLMs simultaneously encode multiple competing graph topologies in orthogonal representational subspaces and show that late-layer circuits causally drive graph-preference predictions.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

Embeddings for Preferences, Not Semantics

Researchers propose a new approach to embedding text for collective decision-making that prioritizes preferential similarity over semantic similarity. The method uses synthetic training data to separate preference signals (stance and values) from semantic nuisance (style and wording), improving preference prediction across deliberation datasets.

🏢 Meta
AI · Neutral · arXiv – CS AI · 9h ago · 6/10

CDS4RAG: Cyclic Dual-Sequential Hyperparameter Optimization for RAG

Researchers introduce CDS4RAG, a novel optimization framework that improves Retrieval-Augmented Generation systems by cyclically optimizing retriever and generator hyperparameters separately rather than treating them as a monolithic unit. The method achieves up to 1.54x improvements in generation quality while demonstrating faster convergence across multiple benchmarks and language models.
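The cyclic idea reduces to alternating optimization: tune the retriever's hyperparameters with the generator fixed, then the reverse, and repeat. A minimal sketch, with a placeholder `score` function and toy parameter grids standing in for real RAG hyperparameters:

```python
# Alternating (coordinate-wise) hyperparameter search: hold one component's
# settings fixed while optimizing the other's, then swap, for a few cycles.

def cyclic_tune(score, retriever_grid, generator_grid, cycles=3):
    best_r, best_g = retriever_grid[0], generator_grid[0]
    for _ in range(cycles):
        # Stage 1: optimize retriever hyperparameters with generator fixed.
        best_r = max(retriever_grid, key=lambda r: score(r, best_g))
        # Stage 2: optimize generator hyperparameters with retriever fixed.
        best_g = max(generator_grid, key=lambda g: score(best_r, g))
    return best_r, best_g
```

Searching each component's smaller space separately is what makes convergence faster than a joint search over the product space, which is the intuition the paper builds on.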

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

Optimal FALQON for Quantum Approximate Optimization via Layer-wise Parameter Tuning

Researchers present Optimal FALQON, an enhanced quantum optimization algorithm that adaptively tunes layer-wise parameters to improve performance on noisy quantum devices. Testing on 3-regular graphs demonstrates significant improvements in convergence speed and solution quality compared to standard approaches, with implications for practical quantum computing applications.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

LLM-guided Semi-Supervised Approaches for Social Media Crisis Data Classification

Researchers evaluate LLM-guided semi-supervised learning methods for classifying crisis-related social media data, finding that LG-CoTrain significantly outperforms traditional approaches in low-resource settings while compact models can rival large zero-shot LLMs. This demonstrates practical pathways for deploying AI in disaster response applications with minimal labeled training data.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

SkillLens: Adaptive Multi-Granularity Skill Reuse for Cost-Efficient LLM Agents

SkillLens introduces a hierarchical framework for organizing and reusing skills in LLM agents at multiple granularity levels, reducing computational costs while maintaining relevance. The system retrieves and adapts skills selectively rather than injecting entire skill blocks, achieving measurable performance gains on benchmark tasks.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

Benchmarking ResNet Backbones in RT-DETR: Impact of Depth and Regularization under environmental conditions

This research benchmarks RT-DETR object detection models with different ResNet backbones for competitive robotics applications, evaluating how environmental variations like lighting and background contrast affect detection performance. The study finds that intermediate-depth models (ResNet34 and ResNet50) offer optimal balance between accuracy, confidence, and latency, with ResNet50 excelling under illumination changes and ResNet34 performing best under background variations.

AI · Neutral · arXiv – CS AI · 9h ago · 5/10

Crystal Fractional Graph Neural Network for Energy Prediction of High-Entropy Alloys

Researchers have developed a crystal fractional graph neural network that combines graph neural networks with compositional embeddings to predict the energy of high-entropy alloys, achieving accuracy comparable to first-principles calculations on a dataset of over 1,000 crystal structures. The hybrid architecture addresses a key challenge in materials science by integrating local atomic interactions and global elemental composition, though scalability limitations for larger crystal systems remain.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

REAP: Reinforcement-Learning End-to-End Autonomous Parking with Gaussian Splatting Simulator for Real2Sim2Real Transfer

Researchers introduce REAP, a reinforcement learning-based autonomous parking system that uses Gaussian Splatting to simulate real-world environments for training, then transfers the model to physical vehicles. The method addresses limitations of traditional multi-stage parking approaches by jointly optimizing perception and planning, achieving successful parking in extreme scenarios like mechanical slots.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

Attention-based graph neural networks: a survey

A comprehensive survey paper systematizes recent advances in attention-based graph neural networks (GNNs), proposing a two-level taxonomy spanning three developmental stages: graph recurrent attention networks, graph attention networks, and graph transformers. The work addresses a gap in literature by providing structured analysis of how attention mechanisms enhance GNNs' ability to learn discriminative features while filtering noise in graph-structured data.

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

Spatial Priming Outperforms Semantic Prompting: A Grid-Based Approach to Improving LLM Accuracy on Chart Data Extraction

Researchers demonstrate that overlaying coordinate grids on chart images significantly improves multimodal LLM accuracy for data extraction tasks, reducing error rates from 25.5% to 19.5%. This spatial priming approach outperforms semantic methods like Chain-of-Thought prompting, suggesting that explicit spatial context is more effective than high-level semantic guidance for current-generation vision-language models.
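The preprocessing step itself is simple to reproduce. Below is a Pillow-based sketch of drawing a labeled pixel grid over a chart image before passing it to a vision-language model; the grid spacing, color, and labeling scheme are arbitrary choices, not the paper's settings:

```python
# Overlay a labeled coordinate grid on an image so a multimodal model has
# explicit spatial anchors when reading off data values.

from PIL import Image, ImageDraw

def overlay_grid(img, step=50, color=(200, 200, 200)):
    out = img.copy()
    draw = ImageDraw.Draw(out)
    w, h = out.size
    for x in range(0, w, step):            # vertical lines with x-pixel labels
        draw.line([(x, 0), (x, h)], fill=color)
        draw.text((x + 2, 2), str(x), fill=color)
    for y in range(0, h, step):            # horizontal lines with y-pixel labels
        draw.line([(0, y), (w, y)], fill=color)
        draw.text((2, y + 2), str(y), fill=color)
    return out
```

The gridded image replaces the original in the model's prompt; mapping pixel coordinates back to data coordinates still requires reading the chart's axes.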

AI · Neutral · arXiv – CS AI · 9h ago · 6/10

The Safety-Aware Denoiser for Text Diffusion Models

Researchers propose Safety-Aware Denoiser (SAD), an inference-time safety framework that guides text diffusion models toward secure outputs during the denoising process without requiring model retraining. The method reduces unsafe text generation while maintaining output quality, offering a scalable alternative to post-hoc filtering approaches.

Page 409 of 1386