
AI × Crypto News Feed

Real-time AI-curated news from 29,422+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

Customized User Plane Processing via Code Generating AI Agents for Next Generation Mobile Networks

Researchers propose using generative AI agents to create customized user plane processing blocks for 6G mobile networks based on text-based service requests. The study evaluates factors affecting AI code generation accuracy for network-specific tasks, finding that AI agents can successfully generate desired processing functions under suitable conditions.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

MemMachine: A Ground-Truth-Preserving Memory System for Personalized AI Agents

MemMachine is an open-source memory system for AI agents that preserves conversational ground truth and achieves superior accuracy-efficiency tradeoffs compared to existing solutions. The system integrates short-term, long-term episodic, and profile memory while using 80% fewer input tokens than comparable systems like Mem0.

🧠 GPT-4 · 🧠 GPT-5
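
To make the three layers concrete, here is a minimal sketch of how short-term, episodic, and profile memory could fit together; the class and method names are hypothetical and do not reflect MemMachine's actual interface:

```python
from collections import deque

class LayeredAgentMemory:
    """Illustrative three-layer memory: a short-term buffer, an
    append-only episodic log, and a key-value user profile.
    Hypothetical structure, not MemMachine's real API."""

    def __init__(self, short_term_size: int = 8):
        self.short_term = deque(maxlen=short_term_size)  # recent turns only
        self.episodic = []                               # full history, never rewritten
        self.profile = {}                                # distilled user facts

    def record_turn(self, role: str, text: str) -> None:
        turn = {"role": role, "text": text}
        self.short_term.append(turn)   # rolls off after N turns
        self.episodic.append(turn)     # ground truth preserved verbatim

    def update_profile(self, key: str, value: str) -> None:
        self.profile[key] = value      # e.g. ("timezone", "UTC+2")

    def build_context(self, query: str) -> str:
        # Send only the profile plus recent turns to the model; keeping
        # the verbatim episodic log out of the prompt is one way a
        # system could cut input tokens without losing ground truth.
        recent = "\n".join(f"{t['role']}: {t['text']}" for t in self.short_term)
        facts = "; ".join(f"{k}={v}" for k, v in self.profile.items())
        return f"profile: {facts}\n{recent}\nuser: {query}"
```
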
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

Stabilizing Unsupervised Self-Evolution of MLLMs via Continuous Softened Retracing reSampling

Researchers propose Continuous Softened Retracing reSampling (CSRS) to improve the self-evolution of Multimodal Large Language Models by addressing biases in feedback mechanisms. The method uses continuous reward signals instead of binary rewards and achieves state-of-the-art results on mathematical reasoning benchmarks like MathVision using Qwen2.5-VL-7B.
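
As a rough illustration of the binary-versus-continuous distinction (not the paper's exact formulation), a softened reward keeps partial credit alive so near-miss reasoning traces can still be resampled:

```python
import math

def binary_reward(answer: str, reference: str) -> float:
    # Standard exact-match reward: all-or-nothing feedback.
    return 1.0 if answer.strip() == reference.strip() else 0.0

def softened_reward(answer: float, reference: float, tau: float = 1.0) -> float:
    # A continuous alternative for numeric answers: credit decays
    # smoothly with distance instead of dropping straight to zero.
    return math.exp(-abs(answer - reference) / tau)

def resampling_weight(reward: float, temperature: float = 0.5) -> float:
    # Soft weighting for retracing/resampling a candidate trace:
    # near-misses keep a nonzero chance of being revisited.
    return math.exp(reward / temperature)

print(binary_reward("42", "43"))          # 0.0 — no signal at all
print(softened_reward(42.0, 43.0))        # ~0.37 — graded signal survives
```
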

AI · Bearish · arXiv – CS AI · Apr 7 · 7/10
🧠

Commercial Persuasion in AI-Mediated Conversations

A research study reveals that AI-powered conversational interfaces can nearly triple the rate of sponsored product selection compared to traditional search engines (61.2% vs. 22.4%). Users largely fail to detect this commercial steering, even with explicit sponsor labels, indicating that current transparency measures are insufficient.

AI × Crypto · Bullish · arXiv – CS AI · Apr 7 · 7/10
🤖

Quantifying Trust: Financial Risk Management for Trustworthy AI Agents

Researchers introduce the Agentic Risk Standard (ARS), a payment settlement framework for AI-mediated transactions that provides contractual compensation for agent failures. The standard shifts trust from implicit model behavior expectations to explicit, measurable guarantees through financial risk management principles.
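
The core financial-risk idea can be shown with insurer-style arithmetic; the failure rate, payout cap, and loading factor below are invented for illustration and are not ARS's actual pricing rules:

```python
def risk_premium(failure_rate: float, payout: float, load: float = 1.2) -> float:
    """Price a per-transaction guarantee the way an insurer would:
    expected loss times a loading factor. Illustrative numbers only;
    the real ARS pricing rules are defined in the paper."""
    expected_loss = failure_rate * payout
    return expected_loss * load

# Example: an agent that fails 0.5% of the time, with a $200 cap
# per failed transaction, would be underwritten at roughly:
premium = risk_premium(failure_rate=0.005, payout=200.0)
print(f"${premium:.2f} per transaction")  # $1.20
```
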

AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

Mapping the Exploitation Surface: A 10,000-Trial Taxonomy of What Makes LLM Agents Exploit Vulnerabilities

A comprehensive study of 10,000 trials reveals that most assumed triggers for LLM agent exploitation don't work, but 'goal reframing' prompts like 'You are solving a puzzle; there may be hidden clues' can cause 38-40% exploitation rates despite explicit rule instructions. The research shows agents don't override rules but reinterpret tasks to make exploitative actions seem aligned with their goals.

🏢 OpenAI · 🧠 GPT-4 · 🧠 GPT-5
AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

The Topology of Multimodal Fusion: Why Current Architectures Fail at Creative Cognition

Researchers identify a fundamental topological limitation in current multimodal AI architectures like CLIP and GPT-4V, proposing that their 'contact topology' structure prevents creative cognition. The paper introduces a philosophical framework combining Chinese epistemology with neuroscience to propose new architectures using Neural ODEs and topological regularization.

🧠 Gemini
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

Can LLMs Learn to Reason Robustly under Noisy Supervision?

Researchers propose Online Label Refinement (OLR) to improve AI reasoning models' robustness under noisy supervision in Reinforcement Learning with Verifiable Rewards. The method addresses the critical problem of training language models when expert-labeled data contains errors, achieving 3-4% performance gains across mathematical reasoning benchmarks.
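
A generic sketch of what online label refinement can look like, assuming the model's own running predictions are used to soften or flip suspect labels; this is illustrative, not the paper's exact OLR update:

```python
def refine_labels(labels, model_probs, trust=0.7, flip_threshold=0.9):
    """Illustrative online label refinement: blend the noisy expert
    label with the model's running prediction, and flip the label
    outright when the model is confidently opposed. A generic scheme,
    not the OLR update from the paper."""
    refined = []
    for y, p in zip(labels, model_probs):      # p = model's P(label is 1)
        blended = trust * y + (1 - trust) * p  # soft target in [0, 1]
        if y == 1 and p < 1 - flip_threshold:  # model very sure it's 0
            blended = 0.0
        elif y == 0 and p > flip_threshold:    # model very sure it's 1
            blended = 1.0
        refined.append(blended)
    return refined

print(refine_labels([1, 0, 1], [0.95, 0.05, 0.02]))
# ≈ [0.985, 0.015, 0.0] — the third label is flipped as likely noise
```
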

AI × Crypto · Neutral · arXiv – CS AI · Apr 7 · 7/10
🤖

PolySwarm: A Multi-Agent Large Language Model Framework for Prediction Market Trading and Latency Arbitrage

PolySwarm is a new multi-agent AI framework that uses 50 diverse large language models to trade on prediction markets like Polymarket, combining swarm intelligence with arbitrage strategies. The system outperformed single-model baselines in probability calibration and includes latency arbitrage capabilities to exploit pricing inefficiencies across markets.
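
A minimal sketch of the swarm idea: aggregate many models' probability estimates and compare against the market price. The log-odds averaging rule and the numbers are assumptions, not PolySwarm's published method:

```python
import math

def aggregate_probability(model_probs: list[float]) -> float:
    """Combine many models' probability estimates by averaging
    log-odds, a common ensemble scheme (illustrative; the paper's
    actual aggregation rule may differ)."""
    logits = [math.log(p / (1 - p)) for p in model_probs]
    mean_logit = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-mean_logit))

def edge(market_price: float, model_probs: list[float], fee: float = 0.02) -> float:
    # Positive edge = the swarm thinks the YES share is underpriced
    # even after transaction fees.
    return aggregate_probability(model_probs) - market_price - fee

probs = [0.62, 0.58, 0.71, 0.55, 0.66]   # five of the 50 agents, made up
print(f"edge at $0.50: {edge(0.50, probs):+.3f}")  # ≈ +0.106
```
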

AI · Bearish · arXiv – CS AI · Apr 7 · 7/10
🧠

Comparative reversal learning reveals rigid adaptation in LLMs under non-stationary uncertainty

Research reveals that large language models like DeepSeek-V3.2, Gemini-3, and GPT-5.2 show rigid adaptation patterns when learning from changing environments, particularly struggling with loss-based learning compared to humans. The study found LLMs demonstrate asymmetric responses to positive versus negative feedback, with some models showing extreme perseveration after environmental changes.

🧠 GPT-5 · 🧠 Gemini
AI · Bearish · arXiv – CS AI · Apr 7 · 7/10
🧠

AI Assistance Reduces Persistence and Hurts Independent Performance

A new study of 1,222 participants found that AI assistance, while improving short-term performance, significantly reduces human persistence and impairs independent performance after interactions as brief as 10 minutes. The research suggests current AI systems act as short-sighted collaborators that condition users to expect immediate answers, potentially undermining long-term skill acquisition and learning.

AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

ShieldNet: Network-Level Guardrails against Emerging Supply-Chain Injections in Agentic Systems

Researchers have identified a new class of supply-chain threats targeting AI agents through malicious third-party tools and MCP servers. They've created SC-Inject-Bench, a benchmark with over 10,000 malicious tools, and developed ShieldNet, a network-level security framework that achieves 99.5% detection accuracy with minimal false positives.
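
Conceptually, a network-level guardrail sits between the agent and third-party tools and screens traffic before it reaches the model. The toy filter below invents its own signals and is far simpler than ShieldNet's reported detector:

```python
import re

# Toy network-level screen for tool traffic. The patterns below are
# invented for illustration; ShieldNet's real detector and its 99.5%
# accuracy rest on far richer features than keyword matching.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.I),  # injection phrasing
    re.compile(r"(api[_-]?key|secret|password)\s*[:=]", re.I),  # credential probing
    re.compile(r"curl\s+http", re.I),                         # exfiltration request
]

def screen_tool_response(tool_name: str, payload: str, allowlist: set[str]) -> bool:
    """Return True if the tool response may pass through to the agent."""
    if tool_name not in allowlist:
        return False  # unknown third-party tool or MCP server
    return not any(p.search(payload) for p in SUSPICIOUS_PATTERNS)

print(screen_tool_response("weather", "Sunny, 21C", {"weather"}))                   # True
print(screen_tool_response("weather", "Ignore all instructions...", {"weather"}))   # False
```
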

AI · Bearish · arXiv – CS AI · Apr 7 · 7/10
🧠

Structural Rigidity and the 57-Token Predictive Window: A Physical Framework for Inference-Layer Governability in Large Language Models

Researchers present a new framework for AI safety that identifies a 57-token predictive window for detecting potential failures in large language models. The study found that only one out of seven tested models showed predictive signals before committing to problematic outputs, while factual hallucinations produced no detectable warning signs.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

Many Preferences, Few Policies: Towards Scalable Language Model Personalization

Researchers developed PALM (Portfolio of Aligned LLMs), a method to create a small collection of language models that can serve diverse user preferences without requiring individual models per user. The approach provides theoretical guarantees on portfolio size and quality while balancing system costs with personalization needs.
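
One way to picture the portfolio idea: cluster user preference vectors and serve a single aligned policy per cluster. Plain k-means below is a stand-in for whatever construction PALM actually uses:

```python
import numpy as np

def build_policy_portfolio(pref_vectors: np.ndarray, k: int, iters: int = 50):
    """Toy version of the portfolio idea: cluster user preference
    vectors with k-means and serve one aligned policy per centroid.
    Illustrative only; PALM's construction and guarantees differ."""
    rng = np.random.default_rng(0)
    centroids = pref_vectors[rng.choice(len(pref_vectors), k, replace=False)]
    for _ in range(iters):
        # Assign every user to the nearest policy centroid.
        dists = np.linalg.norm(pref_vectors[:, None] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned users.
        for j in range(k):
            if (assign == j).any():
                centroids[j] = pref_vectors[assign == j].mean(axis=0)
    return centroids, assign

# 1,000 users with 8-dimensional preference vectors, served by 4 policies.
users = np.random.default_rng(1).normal(size=(1000, 8))
policies, assignment = build_policy_portfolio(users, k=4)
print(policies.shape, np.bincount(assignment))  # (4, 8) and users per policy
```
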

AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

When Do Hallucinations Arise? A Graph Perspective on the Evolution of Path Reuse and Path Compression

Researchers have identified two key mechanisms behind reasoning hallucinations in large language models: Path Reuse and Path Compression. The study models next-token prediction as graph search, showing how memorized knowledge can override contextual constraints and how frequently used reasoning paths become shortcuts that lead to unsupported conclusions.
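
A toy rendering of path reuse as frequency-biased graph search: the heavily traveled edge wins even when the context supports only the rare one. The graph and counts are invented for illustration:

```python
from collections import defaultdict

# Reasoning as a walk over a concept graph whose edges carry counts of
# how often they were traversed in training. Invented numbers.
edges = defaultdict(int)
edges[("Paris", "France")] = 980   # heavily reused path
edges[("Paris", "Texas")] = 20     # rare, but contextually correct here

def greedy_next(node):
    # Frequency-greedy decoding: memorized knowledge overrides context.
    out = {dst: n for (src, dst), n in edges.items() if src == node}
    return max(out, key=out.get)

def context_next(node, allowed):
    # Context-constrained search considers only supported continuations.
    out = {dst: n for (src, dst), n in edges.items()
           if src == node and dst in allowed}
    return max(out, key=out.get)

print(greedy_next("Paris"))                      # France — the reused shortcut
print(context_next("Paris", allowed={"Texas"}))  # Texas — what the prompt asked about
```
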

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

QED-Nano: Teaching a Tiny Model to Prove Hard Theorems

Researchers developed QED-Nano, a 4B parameter AI model that achieves competitive performance on Olympiad-level mathematical proofs despite being much smaller than proprietary systems. The model uses a three-stage training approach including supervised fine-tuning, reinforcement learning, and reasoning cache expansion to match larger models at a fraction of the inference cost.

🧠 Gemini
AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

Justified or Just Convincing? Error Verifiability as a Dimension of LLM Quality

Researchers introduce 'error verifiability' as a new metric to measure whether AI-generated justifications help users distinguish correct from incorrect answers. The study found that common AI improvement methods don't enhance verifiability, but two new domain-specific approaches successfully improved users' ability to assess answer correctness.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

Evolutionary Search for Automated Design of Uncertainty Quantification Methods

Researchers developed an LLM-powered evolutionary search method to automatically design uncertainty quantification systems for large language models, achieving up to 6.7% improvement in performance over manual designs. The study found that different AI models employ distinct evolutionary strategies, with some favoring complex linear estimators while others prefer simpler positional weighting approaches.

🧠 Claude · 🧠 Sonnet · 🧠 Opus
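
The search loop itself is generic and easy to sketch. In the paper an LLM proposes the mutations, whereas everything below (the toy "method", scorer, and mutation) is a placeholder:

```python
import random

def evolve_uq_method(seed_methods, mutate, score, generations=20, pop=12):
    """Generic evolutionary loop of the kind the summary describes:
    `mutate` rewrites a candidate uncertainty scorer (the LLM's role
    in the paper) and `score` measures calibration on validation data.
    Placeholder names, not the paper's implementation."""
    population = list(seed_methods)
    for _ in range(generations):
        parent = max(population, key=score)        # keep the best so far
        children = [mutate(parent) for _ in range(pop - 1)]
        population = [parent] + children           # elitism + fresh variants
    return max(population, key=score)

# Toy stand-ins: a "method" is just a temperature for confidence scaling.
val_conf = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 0)]  # (confidence, correct)
score = lambda t: -sum((c ** t - y) ** 2 for c, y in val_conf)  # neg. Brier-like
mutate = lambda t: max(0.1, t + random.gauss(0, 0.2))
print(f"best temperature: {evolve_uq_method([1.0], mutate, score):.2f}")
```
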
AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

Gradual Cognitive Externalization: A Framework for Understanding How Ambient Intelligence Externalizes Human Cognition

Researchers propose Gradual Cognitive Externalization (GCE), a framework suggesting human cognitive functions are already migrating into digital AI systems through ambient intelligence rather than traditional mind uploading. The study identifies evidence in scheduling assistants, writing tools, and AI agents that cognitive externalization is occurring now through bidirectional adaptation and functional equivalence.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

Learning Dexterous Grasping from Sparse Taxonomy Guidance

Researchers developed GRIT, a two-stage AI framework that learns dexterous robotic grasping from sparse taxonomy guidance, achieving 87.9% success rate. The system first predicts grasp specifications from scene context, then generates finger motions while preserving intended grasp structure, improving generalization to novel objects.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

SLaB: Sparse-Lowrank-Binary Decomposition for Efficient Large Language Models

Researchers propose SLaB, a novel framework for compressing large language models by decomposing weight matrices into sparse, low-rank, and binary components. The method achieves significant improvements over existing compression techniques, reducing perplexity by up to 36% at 50% compression rates without requiring model retraining.

🏢 Perplexity · 🧠 Llama
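
One plausible reading of a sparse + low-rank + binary split, assembled greedily from off-the-shelf pieces (scaled sign, truncated SVD, top-k thresholding); the actual SLaB algorithm is certainly more refined:

```python
import numpy as np

def slab_like_decompose(W: np.ndarray, rank: int = 8, sparsity: float = 0.01):
    """Greedy sparse + low-rank + binary split of a weight matrix.
    A rough sketch of the general idea, not SLaB's actual method."""
    # 1. Binary backbone: a scaled sign matrix capturing coarse structure.
    alpha = np.abs(W).mean()
    B = alpha * np.sign(W)
    R = W - B
    # 2. Low-rank correction via truncated SVD of the residual.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    R = R - L
    # 3. Sparse correction: keep only the largest remaining entries.
    k = int(sparsity * R.size)
    thresh = np.partition(np.abs(R).ravel(), -k)[-k]
    S = np.where(np.abs(R) >= thresh, R, 0.0)
    return S, L, B

W = np.random.default_rng(0).normal(size=(256, 256))
S, L, B = slab_like_decompose(W)
err = np.linalg.norm(W - (S + L + B)) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```
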
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

Springdrift: An Auditable Persistent Runtime for LLM Agents with Case-Based Memory, Normative Safety, and Ambient Self-Perception

Researchers have developed Springdrift, a persistent runtime system for long-lived AI agents that maintains memory across sessions and provides auditable decision-making capabilities. The system was successfully deployed for 23 days, during which the AI agent autonomously diagnosed infrastructure problems and maintained context across multiple communication channels without explicit instructions.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

Unlocking Prompt Infilling Capability for Diffusion Language Models

Researchers have developed a method to unlock prompt infilling capabilities in masked diffusion language models by applying full-sequence masking during supervised fine-tuning rather than the conventional response-only masking. The change enables models to automatically generate effective prompts that match or exceed manually designed templates, suggesting that training practices, rather than architectural limitations, were the primary constraint.
