y0news

AI × Crypto News Feed

Real-time AI-curated news from 31,652+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4

When Bias Meets Trainability: Connecting Theories of Initialization

New research connects initial guessing bias in untrained deep neural networks to established mean field theories, proving that optimal initialization for learning requires systematic bias toward specific classes rather than neutral initialization. The study demonstrates that efficient training is fundamentally linked to architectural prejudices present before data exposure.
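The effect is easy to probe empirically: an untrained network already "prefers" some classes on random inputs. A minimal pure-Python sketch, using a single randomly initialized linear layer as a toy stand-in for a deep network (all sizes and names are illustrative, not the paper's setup):

```python
import random

random.seed(0)

def init_linear(n_in, n_out):
    """Gaussian-initialized linear layer — a toy stand-in for a deep net."""
    return [[random.gauss(0, 1 / n_in ** 0.5) for _ in range(n_in)]
            for _ in range(n_out)]

def predict(W, x):
    """Argmax class under the untrained weights."""
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    return max(range(len(logits)), key=logits.__getitem__)

def guessing_bias(W, n_in, n_classes, n_samples=2000):
    """Fraction of random inputs assigned to each class before any training."""
    counts = [0] * n_classes
    for _ in range(n_samples):
        x = [random.gauss(0, 1) for _ in range(n_in)]
        counts[predict(W, x)] += 1
    return [c / n_samples for c in counts]

W = init_linear(n_in=32, n_out=4)
bias = guessing_bias(W, n_in=32, n_classes=4)
print(bias)  # typically not the uniform [0.25, 0.25, 0.25, 0.25]
```

The measured distribution is the "initial guessing bias" the paper relates to mean-field trainability conditions.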

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5

Self-Destructive Language Model

Researchers introduce SEAM, a novel defense mechanism that makes large language models 'self-destructive' when adversaries attempt harmful fine-tuning attacks. The system allows models to function normally for legitimate tasks but causes catastrophic performance degradation when fine-tuned on harmful data, creating robust protection against malicious modifications.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

FreeKV: Boosting KV Cache Retrieval for Efficient LLM Inference

Researchers introduce FreeKV, a training-free optimization framework that dramatically improves KV cache retrieval efficiency for large language models with long context windows. The system achieves up to 13x speedup compared to existing methods while maintaining near-lossless accuracy through speculative retrieval and hybrid memory layouts.
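The retrieval idea behind such KV-cache methods can be sketched in a few lines: score every cached key against the current query and attend only over the top-k entries. This toy version omits FreeKV's speculative retrieval and hybrid memory layouts; all sizes are illustrative.

```python
import math
import random

random.seed(1)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def topk_kv_retrieve(query, keys, values, k):
    """Keep only the k cached entries whose keys best match the query."""
    scores = sorted(((dot(query, key), i) for i, key in enumerate(keys)),
                    reverse=True)[:k]
    return [(keys[i], values[i]) for _, i in scores]

def sparse_attention(query, kv_pairs):
    """Softmax attention over the retrieved subset of the cache only."""
    scores = [dot(query, key) for key, _ in kv_pairs]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    dim = len(kv_pairs[0][1])
    out = [0.0] * dim
    for w, (_, v) in zip(weights, kv_pairs):
        for d in range(dim):
            out[d] += (w / z) * v[d]
    return out

d = 8
keys = [[random.gauss(0, 1) for _ in range(d)] for _ in range(1000)]
vals = [[random.gauss(0, 1) for _ in range(d)] for _ in range(1000)]
q = [random.gauss(0, 1) for _ in range(d)]
retrieved = topk_kv_retrieve(q, keys, vals, k=32)
out = sparse_attention(q, retrieved)
print(len(retrieved), len(out))  # 32 8
```

The speedup comes from attending over 32 entries instead of 1000; the engineering challenge FreeKV targets is doing the retrieval itself without stalling inference.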

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4

Not All Models Suit Expert Offloading: On Local Routing Consistency of Mixture-of-Expert Models

Researchers analyzed 20 Mixture-of-Experts (MoE) language models to study local routing consistency, finding a trade-off between routing consistency and local load balance. The study introduces new metrics to measure how well expert offloading strategies can optimize memory usage on resource-constrained devices while maintaining inference speed.
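"Local routing consistency" can be made concrete with a small metric: the average overlap between the experts chosen for neighboring tokens. High overlap means a device can keep a few experts resident and reuse them; low overlap forces constant swapping. A minimal sketch with hypothetical top-2 routing decisions (the paper's actual metrics differ):

```python
def local_routing_consistency(routes):
    """Mean Jaccard overlap between the expert sets of adjacent tokens.
    1.0 = every token reuses its neighbor's experts; near 0 = constant swaps."""
    overlaps = []
    for i in range(len(routes) - 1):
        a, b = set(routes[i]), set(routes[i + 1])
        overlaps.append(len(a & b) / len(a | b))
    return sum(overlaps) / len(overlaps)

# Hypothetical top-2 expert choices for 6 consecutive tokens.
routes = [(0, 3), (0, 3), (0, 5), (2, 5), (2, 5), (2, 7)]
print(local_routing_consistency(routes))  # 0.6
```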

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4

Characterizing Pattern Matching and Its Limits on Compositional Task Structures

New research formally defines and analyzes pattern matching in large language models, revealing predictable limits in their ability to generalize on compositional tasks. The study provides mathematical boundaries for when pattern matching succeeds or fails, with implications for AI model development and understanding.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

RefTool: Reference-Guided Tool Creation for Knowledge-Intensive Reasoning

Researchers introduce RefTool, a framework that enables Large Language Models to create and use external tools by leveraging reference materials like textbooks. The system outperforms existing methods by 12.3% on average across scientific reasoning tasks and shows promise for broader applications.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4

VeriTrail: Closed-Domain Hallucination Detection with Traceability

Researchers have developed VeriTrail, the first closed-domain hallucination detection method that can trace where AI-generated misinformation originates in multi-step processes. The system addresses a critical problem where language models generate unsubstantiated content even when instructed to stick to source material, with the risk being higher in complex multi-step generative processes.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

AReaL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning

Researchers have developed AReaL, a new asynchronous reinforcement learning system that dramatically improves the efficiency of training large language models for reasoning tasks. The system achieves up to 2.77x training speedup compared to traditional synchronous methods by decoupling generation from training processes.
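The "decoupling" is the key architectural move: rollout generation and gradient updates run concurrently instead of in lockstep, so neither side idles waiting for the other. A minimal producer/consumer sketch with Python threads and a bounded queue (a toy illustration — AReaL's actual system manages GPU workers and staleness control):

```python
import queue
import threading

rollout_q = queue.Queue(maxsize=8)  # bounded buffer between actor and learner
STOP = object()

def generator(n_rollouts):
    """Actor side: keeps producing rollouts, never waiting for a train step."""
    for i in range(n_rollouts):
        rollout_q.put({"id": i, "reward": i % 3})
    rollout_q.put(STOP)

def trainer(results):
    """Learner side: consumes whatever rollouts are ready (possibly stale)."""
    while True:
        item = rollout_q.get()
        if item is STOP:
            break
        results.append(item["reward"])  # stand-in for a gradient update

results = []
t_gen = threading.Thread(target=generator, args=(100,))
t_train = threading.Thread(target=trainer, args=(results,))
t_gen.start(); t_train.start()
t_gen.join(); t_train.join()
print(len(results))  # 100
```

In synchronous RL the two loops would alternate; here they overlap, which is where the reported speedup comes from.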

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

General search techniques without common knowledge for imperfect-information games, and application to superhuman Fog of War chess

Researchers have developed Obscuro, the first AI system to achieve superhuman performance in Fog of War chess, a complex imperfect-information variant of chess. The breakthrough introduces new search techniques for imperfect-information games and represents the largest zero-sum game where superhuman AI performance has been demonstrated under imperfect information conditions.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

RoboPARA: Dual-Arm Robot Planning with Parallel Allocation and Recomposition Across Tasks

Researchers introduce RoboPARA, a new LLM-driven framework that optimizes dual-arm robot task planning through parallel processing and dependency mapping. The system uses directed acyclic graphs to maximize efficiency in complex multitasking scenarios and includes the first dataset specifically designed for evaluating dual-arm parallelism.
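The DAG-scheduling idea can be sketched directly: topologically walk the task graph and, at each step, dispatch up to one task per arm whose dependencies are complete. This greedy level-by-level scheduler and the task graph are illustrative, not RoboPARA's actual planner:

```python
from collections import deque

def parallel_schedule(deps, n_arms=2):
    """Greedy schedule of a task DAG onto n_arms arms: at each step run up
    to n_arms tasks whose dependencies have all finished."""
    indeg = {t: len(d) for t, d in deps.items()}
    children = {t: [] for t in deps}
    for task, parents in deps.items():
        for p in parents:
            children[p].append(task)
    ready = deque(sorted(t for t, k in indeg.items() if k == 0))
    steps = []
    while ready:
        batch = [ready.popleft() for _ in range(min(n_arms, len(ready)))]
        steps.append(batch)
        for t in batch:
            for c in children[t]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    ready.append(c)
    return steps

# Hypothetical dual-arm graph: grasp two objects in parallel, then align,
# then a final joint placement.
deps = {"grasp_L": [], "grasp_R": [], "align": ["grasp_L", "grasp_R"],
        "place": ["align"]}
print(parallel_schedule(deps))  # [['grasp_L', 'grasp_R'], ['align'], ['place']]
```

The independent grasps land in the same step, which is exactly the parallelism a serial planner would miss.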

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

VITA: Zero-Shot Value Functions via Test-Time Adaptation of Vision-Language Models

Researchers introduce VITA, a zero-shot value function learning method that enhances Vision-Language Models through test-time adaptation for robotic manipulation tasks. The system updates parameters sequentially over trajectories to improve temporal reasoning and generalizes across diverse environments, outperforming existing autoregressive VLM methods.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

Language Agents for Hypothesis-driven Clinical Decision Making with Reinforcement Learning

Researchers developed LA-CDM, a language agent that uses reinforcement learning to support clinical decision-making by iteratively requesting tests and generating hypotheses for diagnosis. The system was trained using a hybrid approach combining supervised and reinforcement learning, and tested on real-world data covering four abdominal diseases.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5

Long-Context Generalization with Sparse Attention

Researchers introduce ASEntmax, a new attention mechanism for transformer models that uses sparse attention with learnable temperature parameters. This approach significantly outperforms traditional softmax attention, achieving up to 1000x length extrapolation on synthetic tasks and better long-context performance in language modeling.
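The family of sparse attention maps the paper builds on is easiest to see in its simplest member, sparsemax (the alpha=2 case of entmax), which — unlike softmax — can assign exactly zero weight to low-scoring positions. The sketch below adds a temperature parameter of the kind ASEntmax-style designs make learnable; it is a simplified stand-in, not the paper's exact mechanism:

```python
def sparsemax(z, temperature=1.0):
    """Sparsemax: Euclidean projection of scores onto the simplex.
    Low scores get exactly zero probability; temperature controls sparsity."""
    z = [zi / temperature for zi in z]
    zs = sorted(z, reverse=True)
    cumsum, tau = 0.0, 0.0
    for j, zj in enumerate(zs, start=1):
        cumsum += zj
        if 1 + j * zj > cumsum:      # zj is still in the support
            tau = (cumsum - 1) / j   # threshold from the support so far
    return [max(zi - tau, 0.0) for zi in z]

p1 = sparsemax([2.0, 1.0, 0.1])
p2 = sparsemax([2.0, 1.0, 0.1], temperature=2.0)
print(p1, p2)  # [1.0, 0.0, 0.0] [0.75, 0.25, 0.0]
```

Softmax on the same scores would spread mass over all three positions; the hard zeros are what let attention ignore irrelevant context at extreme lengths.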

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning

Researchers introduce LongWriter-Zero, a reinforcement learning approach that enables large language models to generate ultra-long, high-quality text without relying on synthetic training data. The 32B parameter model outperforms traditional supervised fine-tuning methods and even surpasses larger 100B+ models on long-form writing benchmarks.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Navigating with Annealing Guidance Scale in Diffusion Space

Researchers propose a new annealing guidance scheduler that dynamically adjusts guidance scales in diffusion models during image generation, improving both image quality and text prompt alignment. The method enhances text-to-image generation performance without requiring additional memory or computational resources.
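An annealed guidance schedule is cheap to express: the classifier-free guidance scale simply becomes a function of the denoising step rather than a constant. The cosine shape and endpoint values below are illustrative choices, not the paper's exact scheduler:

```python
import math

def annealed_guidance_scale(step, n_steps, w_start=10.0, w_end=2.0):
    """Cosine anneal of the CFG scale: strong prompt adherence early in
    denoising, lighter guidance near the end for cleaner detail."""
    t = step / max(n_steps - 1, 1)
    return w_end + 0.5 * (w_start - w_end) * (1.0 + math.cos(math.pi * t))

def guided_eps(eps_uncond, eps_cond, w):
    """Standard CFG combination of unconditional/conditional noise estimates."""
    return [u + w * (c - u) for u, c in zip(eps_uncond, eps_cond)]

scales = [annealed_guidance_scale(s, 50) for s in range(50)]
print(round(scales[0], 2), round(scales[-1], 2))  # 10.0 2.0
```

Because only the scalar `w` changes per step, the approach adds no memory or extra network evaluations — consistent with the summary's claim.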

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 2

Model Collapse Is Not a Bug but a Feature in Machine Unlearning for LLMs

Researchers propose Partial Model Collapse (PMC), a novel machine unlearning method for large language models that removes private information without directly training on sensitive data. The approach exploits model collapse, the degradation that occurs when a model is trained on its own outputs, as a feature: the collapse is steered to deliberately forget targeted information while preserving general utility.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Advancing Complex Video Object Segmentation via Progressive Concept Construction

Researchers introduce Segment Concept (SeC), a new video object segmentation framework that uses Large Vision-Language Models to build conceptual representations rather than relying on traditional feature matching. SeC achieves an 11.8-point improvement over SAM 2.1 on the new SeCVOS benchmark, establishing state-of-the-art performance in concept-aware video object segmentation.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

Model Predictive Adversarial Imitation Learning for Planning from Observation

Researchers have developed a new approach called Model Predictive Adversarial Imitation Learning that combines inverse reinforcement learning with model predictive control to enable AI agents to learn from incomplete human demonstrations. The method shows significant improvements in sample efficiency, generalization, and robustness compared to traditional imitation learning approaches.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

Learning Robust Intervention Representations with Delta Embeddings

Researchers propose Causal Delta Embeddings, a new method for learning robust AI representations from image pairs that improves out-of-distribution performance. The approach focuses on representing interventions in causal models rather than just scene variables, achieving significant improvements in synthetic and real-world benchmarks without additional supervision.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5

HierarchicalPrune: Position-Aware Compression for Large-Scale Diffusion Models

Researchers developed HierarchicalPrune, a compression framework that reduces large-scale text-to-image diffusion models' memory footprint by 77.5-80.4% and latency by 27.9-38.0% while maintaining image quality. The technique enables billion-parameter AI models to run efficiently on resource-constrained devices through hierarchical pruning and knowledge distillation.
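"Position-aware" pruning can be illustrated with a simple rule: assign heavier pruning ratios to deeper blocks, then apply magnitude pruning per block. The linear depth schedule and the ratios below are a toy stand-in for HierarchicalPrune's actual per-block importance analysis:

```python
def position_aware_ratios(n_blocks, base=0.1, peak=0.8):
    """Deeper blocks get pruned harder — a simple linear depth schedule."""
    return [base + (peak - base) * i / (n_blocks - 1) for i in range(n_blocks)]

def magnitude_prune(weights, ratio):
    """Zero out the smallest-magnitude fraction `ratio` of a block's weights."""
    k = int(len(weights) * ratio)
    drop = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

ratios = position_aware_ratios(8)
pruned = magnitude_prune([0.5, -0.1, 0.9, 0.05], 0.5)
print(round(ratios[0], 2), round(ratios[-1], 2), pruned)
# 0.1 0.8 [0.5, 0.0, 0.9, 0.0]
```

Knowledge distillation, which the paper pairs with pruning to recover quality, is not modeled here.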

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

FROGENT: An End-to-End Full-process Drug Design Multi-Agent System

Researchers have developed FROGENT, an AI multi-agent system that uses large language models to automate the entire drug discovery pipeline from target identification to synthesis planning. The system outperformed existing AI approaches across eight benchmarks and demonstrated practical applications in real-world drug design scenarios.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Learning Internal Biological Neuron Parameters and Complexity-Based Encoding for Improved Spiking Neural Networks Performance

Researchers developed a novel learning approach for spiking neural networks that optimizes both synaptic weights and intrinsic neuronal parameters, achieving up to 13.50 percentage point improvements in classification accuracy. The study introduces a biologically-inspired SNN-LZC classifier that achieves 99.50% accuracy with sub-millisecond inference latency.
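The intrinsic parameters in question are things like a neuron's membrane time constant and firing threshold. A minimal leaky integrate-and-fire (LIF) simulation shows what they control; here `tau` and `v_th` are fixed, whereas the paper makes such parameters learnable alongside the synaptic weights:

```python
def lif_step(v, i_in, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    tau (leak time constant) and v_th (firing threshold) are the kind of
    intrinsic parameters the paper optimizes jointly with weights."""
    v = v + (dt / tau) * (i_in - v)   # leaky integration toward the input
    if v >= v_th:
        return v_reset, 1             # spike, then reset the membrane
    return v, 0

# Drive one neuron with a constant input current and count spikes.
v, spikes = 0.0, 0
for _ in range(200):
    v, s = lif_step(v, i_in=1.5)
    spikes += s
print(spikes)  # 9
```

Changing `tau` or `v_th` shifts the firing rate, which is why exposing them to gradient-based learning gives the network extra expressive capacity.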

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Disentangled Multi-modal Learning of Histology and Transcriptomics for Cancer Characterization

Researchers developed a new disentangled multi-modal framework that combines histopathology and transcriptome data for improved cancer diagnosis and prognosis. The framework addresses key challenges in medical AI including multi-modal data heterogeneity and dependency on paired datasets through innovative fusion techniques and knowledge distillation strategies.

Page 306 of 1267