y0news

AI × Crypto News Feed

Real-time AI-curated news from 31,636+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Bullish · arXiv – CS AI · Mar 46/103

IoUCert: Robustness Verification for Anchor-based Object Detectors

Researchers introduce IoUCert, a formal verification framework that enables robustness verification for anchor-based object detection models such as SSD, YOLOv2, and YOLOv3. The framework uses novel coordinate transformations and Interval Bound Propagation to overcome previous limitations in verifying object detection systems against input perturbations.
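The Interval Bound Propagation step mentioned above can be illustrated with a minimal NumPy sketch: propagate an input interval through an affine layer, then through a monotone activation. This is a generic IBP demo under assumed toy weights, not IoUCert's actual code.

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W @ x + b using center/radius form."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius          # |W| maps the radius exactly for affine maps
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lo, hi):
    """ReLU is elementwise monotone, so bounds map through directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Certify a 2-unit layer against an L-infinity perturbation of radius 0.1
W = np.array([[1.0, -1.0], [0.5, 2.0]])
b = np.array([0.0, -0.5])
x = np.array([1.0, 0.5])
eps = 0.1
lo, hi = ibp_affine(x - eps, x + eps, W, b)
lo, hi = ibp_relu(lo, hi)
# Every output reachable under the perturbation now lies inside [lo, hi].
```

If the certified box for a detector's box coordinates and class scores never crosses a decision boundary, the prediction is provably robust for that input region.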

🧠 AI · Bearish · arXiv – CS AI · Mar 47/102

TrustMH-Bench: A Comprehensive Benchmark for Evaluating the Trustworthiness of Large Language Models in Mental Health

Researchers have developed TrustMH-Bench, a comprehensive framework to evaluate the trustworthiness of Large Language Models (LLMs) in mental health applications. Testing revealed that both general-purpose and specialized mental health LLMs, including advanced models like GPT-5.1, significantly underperform across critical trustworthiness dimensions in mental health scenarios.

🧠 AI · Bullish · arXiv – CS AI · Mar 46/102

TinyIceNet: Low-Power SAR Sea Ice Segmentation for On-Board FPGA Inference

Researchers developed TinyIceNet, a compact AI model for real-time sea ice mapping using satellite SAR imagery, designed specifically for on-board FPGA processing in space. The system achieves 75.216% F1 score while consuming 50% less energy than GPU baselines, demonstrating practical AI deployment for maritime navigation in polar regions.

$NEAR
🧠 AI · Bullish · arXiv – CS AI · Mar 46/103

On the Expressive Power of Transformers for Maxout Networks and Continuous Piecewise Linear Functions

Researchers establish theoretical foundations for Transformer networks' expressive power by connecting them to maxout networks and continuous piecewise linear functions. The study proves Transformers inherit universal approximation capabilities of ReLU networks while revealing that self-attention layers implement max-type operations and feedforward layers perform token-wise affine transformations.
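The claim that self-attention implements max-type operations can be seen numerically: as the score scaling grows, softmax attention concentrates on the highest-scoring token, so the pooled output approaches a hard max-selection of values. A generic illustration, not code from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def attention_pool(scores, values, beta):
    """Single-query attention: weights = softmax(beta * scores)."""
    return softmax(beta * scores) @ values

scores = np.array([0.1, 2.0, 0.5])
values = np.array([10.0, 20.0, 30.0])

# With mild scaling the output blends all values; with large beta the
# attention weights collapse onto the argmax score (index 1, value 20.0).
soft = attention_pool(scores, values, beta=1.0)
hard = attention_pool(scores, values, beta=50.0)
```

This is the same limit argument that connects attention to the max units of maxout networks: in the low-temperature regime the layer computes a (soft) maximum over token scores.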

🧠 AI · Neutral · arXiv – CS AI · Mar 47/103

MoECLIP: Patch-Specialized Experts for Zero-shot Anomaly Detection

Researchers have developed MoECLIP, a new AI architecture that improves zero-shot anomaly detection by using specialized experts to analyze different image patches. The system outperforms existing methods across 14 benchmark datasets in industrial and medical domains by dynamically routing patches to specialized LoRA experts while maintaining CLIP's generalization capabilities.
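Routing patches to specialized LoRA experts on top of a frozen backbone can be sketched in a few lines: a gating network scores each patch embedding, and the winning expert contributes a low-rank update to the shared projection. Shapes and names below are hypothetical, not MoECLIP's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts = 8, 2, 3

W_frozen = rng.normal(size=(d, d))           # shared CLIP-like projection (kept frozen)
experts = [(rng.normal(size=(d, r)), rng.normal(size=(r, d)))
           for _ in range(n_experts)]         # per-expert LoRA factors (A, B), rank r
W_router = rng.normal(size=(d, n_experts))   # patch-wise gating weights

def forward_patch(x):
    """Route one patch embedding to its top-1 LoRA expert."""
    gate = x @ W_router
    k = int(np.argmax(gate))                 # top-1 routing decision
    A, B = experts[k]
    return x @ W_frozen + x @ A @ B, k       # frozen path plus low-rank expert update

patches = rng.normal(size=(5, d))
outs, routes = zip(*(forward_patch(p) for p in patches))
```

Because only the small A/B factors and the router are trainable, the frozen projection's zero-shot generalization is preserved while each expert specializes on a subset of patch statistics.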

🧠 AI · Bullish · arXiv – CS AI · Mar 47/102

From Complex Dynamics to DynFormer: Rethinking Transformers for PDEs

Researchers have developed DynFormer, a new Transformer-based neural operator that improves partial differential equation (PDE) solving by incorporating physics-informed dynamics. The system achieves up to 95% reduction in relative error compared to existing methods while significantly reducing GPU memory consumption through specialized attention mechanisms for different physical scales.

🧠 AI · Bullish · arXiv – CS AI · Mar 46/102

APRES: An Agentic Paper Revision and Evaluation System

Researchers have developed APRES, an AI-powered system that uses Large Language Models to automatically revise scientific papers based on evaluation rubrics that predict citation counts. The system improves citation prediction accuracy by 19.6% and produces paper revisions that human experts prefer 79% of the time over original versions.

🧠 AI · Bullish · arXiv – CS AI · Mar 46/103

Geometry-Guided Reinforcement Learning for Multi-view Consistent 3D Scene Editing

Researchers propose RL3DEdit, a reinforcement learning framework that addresses multi-view consistency challenges in 3D scene editing by using 2D diffusion model priors with novel reward signals from 3D foundation models. The method achieves stable multi-view consistency and outperforms existing approaches in editing quality and efficiency.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/102

Channel-Adaptive Edge AI: Maximizing Inference Throughput by Adapting Computational Complexity to Channel States

Researchers developed a new channel-adaptive AI algorithm that maximizes inference throughput in 6G edge computing networks by dynamically adjusting computational complexity based on channel conditions. The system uses integrated communication and computation (IC²) to optimize both feature compression and model complexity for mobile edge inference.

🧠 AI · Bullish · arXiv – CS AI · Mar 46/104

Conditioned Activation Transport for T2I Safety Steering

Researchers introduce Conditioned Activation Transport (CAT), a new framework to prevent text-to-image AI models from generating unsafe content while preserving image quality for legitimate prompts. The method uses a geometry-based conditioning mechanism and nonlinear transport maps, validated on Z-Image and Infinity architectures with significantly reduced attack success rates.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/103

Type-Aware Retrieval-Augmented Generation with Dependency Closure for Solver-Executable Industrial Optimization Modeling

Researchers developed a type-aware retrieval-augmented generation (RAG) method that translates natural language requirements into solver-executable optimization code for industrial applications. The method uses a typed knowledge base and dependency closure to ensure code executability, successfully validated on battery production optimization and job scheduling tasks where conventional RAG approaches failed.
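The dependency-closure idea can be sketched as a transitive closure over a typed snippet store: when the retriever selects a snippet, every snippet it depends on is pulled in first, so the emitted code is executable end to end. A toy example with an invented knowledge base, not the paper's implementation:

```python
# Hypothetical typed knowledge base: each snippet declares what it defines
# and which other snippets it depends on.
KB = {
    "Machine":  {"deps": [],                 "code": "machines = range(3)"},
    "Job":      {"deps": [],                 "code": "jobs = range(5)"},
    "Assign":   {"deps": ["Machine", "Job"], "code": "x = {(j, m): 0 for j in jobs for m in machines}"},
    "Makespan": {"deps": ["Assign"],         "code": "obj = 'minimize makespan'"},
}

def dependency_closure(names):
    """Depth-first transitive closure over snippet dependencies, in emit order."""
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for d in KB[n]["deps"]:
            visit(d)          # prerequisites are emitted before the snippet itself
        order.append(n)
    for n in names:
        visit(n)
    return order

# Retrieving only "Makespan" still yields a complete, runnable plan.
plan = dependency_closure(["Makespan"])
```

A conventional RAG pipeline that retrieves "Makespan" alone would emit code referencing undefined variables; the closure guarantees the solver script type-checks and runs.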

🧠 AI · Bullish · arXiv – CS AI · Mar 46/102

Chain of World: World Model Thinking in Latent Motion

Researchers introduce CoWVLA (Chain-of-World VLA), a new Vision-Language-Action model paradigm that combines world-model temporal reasoning with latent motion representation for embodied AI. The approach outperforms existing methods in robotic simulation benchmarks while maintaining computational efficiency through a unified autoregressive decoder that models both keyframes and action sequences.

🧠 AI · Neutral · arXiv – CS AI · Mar 46/103

Understanding and Mitigating Dataset Corruption in LLM Steering

Research reveals that contrastive steering, a method for adjusting LLM behavior during inference, is moderately robust to data corruption but vulnerable to malicious attacks when significant portions of training data are compromised. The study identifies geometric patterns across corruption types and proposes robust mean estimators as a safeguard against such corruption.
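Contrastive steering typically computes a steering vector as the difference of mean activations on positive versus negative prompts, and a robust mean estimator bounds the damage a few corrupted samples can do. A toy NumPy sketch using a coordinate-wise trimmed mean; all data here is synthetic and the setup is illustrative only:

```python
import numpy as np

def trimmed_mean(X, trim=0.2):
    """Coordinate-wise trimmed mean: drop the extreme `trim` fraction per side."""
    X = np.sort(X, axis=0)
    k = int(len(X) * trim)
    return X[k:len(X) - k].mean(axis=0)

rng = np.random.default_rng(1)
pos = rng.normal(loc=1.0, size=(100, 4))    # activations on "desired" prompts
neg = rng.normal(loc=-1.0, size=(100, 4))   # activations on "undesired" prompts
pos_poisoned = pos.copy()
pos_poisoned[:5] = 100.0                    # a handful of maliciously corrupted samples

# Naive difference-of-means is dragged far from the clean direction (~[2, 2, 2, 2]);
# the trimmed estimator discards the outliers and stays close to it.
v_naive = pos_poisoned.mean(axis=0) - neg.mean(axis=0)
v_robust = trimmed_mean(pos_poisoned) - trimmed_mean(neg)
```

The trim fraction trades statistical efficiency for breakdown point: trimming 20% per side tolerates up to 20% arbitrarily corrupted samples per class before the estimate can be moved arbitrarily.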

🧠 AI · Neutral · arXiv – CS AI · Mar 46/102

UniG2U-Bench: Do Unified Models Advance Multimodal Understanding?

Researchers introduce UniG2U-Bench, a comprehensive benchmark testing whether unified multimodal AI models that can generate content actually understand better than traditional vision-language models. The study of over 30 models reveals that unified models generally underperform their base counterparts, though they show improvements in spatial intelligence and visual reasoning tasks.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/102

Tether: Autonomous Functional Play with Correspondence-Driven Trajectory Warping

Researchers introduce Tether, a method that enables robots to perform autonomous functional play from minimal human demonstrations (≤10). The system generates over 1,000 expert-level trajectories through continuous cycles of task execution and improvement, a significant advance in autonomous robot learning.

🧠 AI · Bullish · arXiv – CS AI · Mar 46/102

How to Peel with a Knife: Aligning Fine-Grained Manipulation with Human Preference

Researchers developed a two-stage learning framework enabling robots to perform complex manipulation tasks like food peeling with over 90% success rates. The system combines force-aware imitation learning with human preference-based refinement, achieving strong generalization across different produce types using only 50-200 training examples.

🧠 AI · Neutral · arXiv – CS AI · Mar 46/103

ViPlan: A Benchmark for Visual Planning with Symbolic Predicates and Vision-Language Models

Researchers introduce ViPlan, the first benchmark for comparing Vision-Language Model planning approaches, finding that VLM-as-grounder methods excel in visual tasks like Blocksworld while VLM-as-planner methods perform better in household robotics scenarios. The study reveals fundamental limitations in current VLMs' visual reasoning abilities, with Chain-of-Thought prompting showing no consistent benefits.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/102

Efficient Agent Training for Computer Use

Researchers introduce PC Agent-E, an efficient training framework for AI agents that achieves human-like computer use with minimal human demonstration data. Starting from just 312 human-annotated trajectories augmented with Claude 3.7 Sonnet synthesis, the model achieved a 141% relative improvement and outperformed Claude 3.7 Sonnet by 10% on the WindowsAgentArena-V2 benchmark.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/103

OptMerge: Unifying Multimodal LLM Capabilities and Modalities via Model Merging

Researchers introduce OptMerge, a new benchmark and method for combining multiple expert Multimodal Large Language Models (MLLMs) into single, more capable models without requiring additional training data. The approach achieves 2.48% average performance gains while reducing storage and serving costs by merging models across different modalities like vision, audio, and video.
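Training-free merging of expert models in this family is often a variant of task arithmetic: compute each expert's weight delta relative to a shared base, average the deltas, and add the result back onto the base. A toy sketch with hypothetical parameter names, not OptMerge's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
base = {"proj": rng.normal(size=(4, 4))}                       # shared base model weights
vision = {"proj": base["proj"] + 0.1 * rng.normal(size=(4, 4))}  # vision-tuned expert
audio  = {"proj": base["proj"] + 0.1 * rng.normal(size=(4, 4))}  # audio-tuned expert

def merge_task_arithmetic(base, experts, alpha=1.0):
    """Add the averaged task vectors (expert - base) back onto the base weights."""
    merged = {}
    for name, w in base.items():
        deltas = [e[name] - w for e in experts]
        merged[name] = w + alpha * np.mean(deltas, axis=0)
    return merged

merged = merge_task_arithmetic(base, [vision, audio], alpha=0.8)
```

The scaling coefficient alpha controls how strongly the expert deltas are applied; one merged checkpoint then serves all modalities, which is where the storage and serving savings come from.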

🧠 AI · Neutral · arXiv – CS AI · Mar 47/103

Benefits and Pitfalls of Reinforcement Learning for Language Model Planning: A Theoretical Perspective

New research provides theoretical analysis of reinforcement learning's impact on Large Language Model planning capabilities, revealing that RL improves generalization through exploration while supervised fine-tuning may create spurious solutions. The study shows Q-learning maintains output diversity better than policy gradient methods, with findings validated on real-world planning benchmarks.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/103

D2E: Scaling Vision-Action Pretraining on Desktop Data for Transfer to Embodied AI

Researchers developed D2E (Desktop to Embodied AI), a framework that uses desktop gaming data to pretrain AI models for robotics tasks. Their 1B-parameter model achieved 96.6% success on manipulation tasks and 83.3% on navigation, matching performance of models up to 7 times larger while using scalable desktop data instead of expensive physical robot training data.

🧠 AI · Bearish · arXiv – CS AI · Mar 46/103

Off-Trajectory Reasoning: Can LLMs Collaborate on Reasoning Trajectory?

New research reveals that current large language models struggle with collaborative reasoning, showing that 'stronger' models are often more fragile when distracted by misleading information. The study of 15 LLMs found they fail to effectively leverage guidance from other models, with success rates below 9.2% on challenging problems.

🧠 AI · Bullish · arXiv – CS AI · Mar 46/103

Reducing Belief Deviation in Reinforcement Learning for Active Reasoning

Researchers introduce T³, a method to improve large language model (LLM) agents' reasoning by tracking and correcting 'belief deviation', which occurs when agents lose an accurate understanding of problem states. The technique achieved performance gains of up to 30 points and a 34% reduction in token cost across challenging tasks.

$COMP