y0news

AI × Crypto News Feed

Real-time AI-curated news from 31,554+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

PlaneCycle: Training-Free 2D-to-3D Lifting of Foundation Models Without Adapters

PlaneCycle introduces a training-free method to convert 2D AI foundation models to 3D without requiring retraining or architectural changes. The technique enables pretrained 2D models like DINOv3 to process 3D volumetric data by cyclically distributing spatial aggregation across orthogonal planes, achieving competitive performance on 3D classification and segmentation tasks.
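
The general flavor of plane-wise lifting can be illustrated with a minimal NumPy sketch (a toy stand-in, not PlaneCycle's actual cyclic scheme: the 2D "model" here is a hypothetical feature extractor, and fusion is a plain mean over the three orthogonal slice stacks):

```python
import numpy as np

def slicewise_2d(volume, model_2d, axis):
    """Apply a 2D feature extractor to every slice along one axis."""
    slices = np.moveaxis(volume, axis, 0)            # (D, H, W)
    feats = np.stack([model_2d(s) for s in slices])  # (D, F)
    return feats.mean(axis=0)                        # aggregate over slices

def orthoplane_features(volume, model_2d):
    """Fuse features from axial, coronal, and sagittal slice stacks.
    The pretrained 2D model is reused unchanged on each plane
    orientation -- no retraining, no adapters."""
    per_plane = [slicewise_2d(volume, model_2d, ax) for ax in range(3)]
    return np.mean(per_plane, axis=0)

# Toy "pretrained 2D model": global mean + std as a 2-dim feature.
toy_model = lambda img: np.array([img.mean(), img.std()])

vol = np.random.default_rng(0).normal(size=(8, 8, 8))
f = orthoplane_features(vol, toy_model)
print(f.shape)  # (2,)
```

In the paper's setting the slice-wise extractor would be a frozen model such as DINOv3 rather than this toy statistic.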

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Architectural Proprioception in State Space Models: Thermodynamic Training Induces Anticipatory Halt Detection

Researchers introduce the Probability Navigation Architecture (PNA), a framework that trains State Space Models under thermodynamic principles, and find that SSMs develop 'architectural proprioception': the ability to predict when to stop computation based on internal state entropy. The paper argues that SSMs can achieve this form of computational self-awareness while Transformers cannot, with significant implications for efficient AI inference systems.
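
The anticipatory-halt idea (stop once internal uncertainty is low) can be sketched generically; the distribution-sharpening "model" and Shannon-entropy threshold below are illustrative assumptions, not the PNA framework itself:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (nats)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def run_until_confident(step, state, threshold=0.2, max_steps=50):
    """Iterate a model step, halting once the entropy of the
    internal state distribution drops below a threshold."""
    for t in range(1, max_steps + 1):
        state = step(state)
        if entropy(state) < threshold:
            return t, state
    return max_steps, state

# Toy "model": each step sharpens the distribution toward its mode.
def sharpen(p, temp=0.5):
    q = p ** (1.0 / temp)
    return q / q.sum()

p0 = np.full(8, 1 / 8)
p0[0] = 0.3
p0 = p0 / p0.sum()
steps, final = run_until_confident(sharpen, p0)
print(steps, entropy(final))
```

The interesting claim in the paper is that a trained SSM learns such a halting signal from its own state, rather than having the threshold rule bolted on as here.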

🧠 AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

When AI Fails, What Works? A Data-Driven Taxonomy of Real-World AI Risk Mitigation Strategies

Researchers analyzed 9,705 AI incident reports to create an expanded taxonomy of real-world AI risk mitigation strategies, identifying four new categories of responses including corrective actions, legal enforcement, financial controls, and avoidance tactics. The study expands existing mitigation frameworks by 67% and provides structured guidance for preventing cascading AI system failures in high-stakes deployments.

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

CubeComposer: Spatio-Temporal Autoregressive 4K 360° Video Generation from Perspective Video

CubeComposer is a new AI model that generates high-quality 4K 360-degree panoramic videos from regular perspective videos using a novel spatio-temporal autoregressive diffusion approach. The technology addresses computational limitations of existing methods by decomposing videos into cubemap representations, enabling native 4K resolution output for VR applications.

🧠 AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

World Properties without World Models: Recovering Spatial and Temporal Structure from Co-occurrence Statistics in Static Word Embeddings

Research shows that static word embeddings like GloVe and Word2Vec can recover substantial geographic and temporal information from text co-occurrence patterns alone, challenging assumptions that such capabilities require sophisticated world models in large language models. The study found these simple embeddings could predict city coordinates and historical birth years with high accuracy, suggesting that linear probe recoverability doesn't necessarily indicate advanced internal representations.
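
A linear probe of the kind described can be sketched on synthetic data (the "embeddings" below are a random linear image of coordinates plus noise, standing in for GloVe/word2vec vectors; the ridge-regression probe is a standard choice, not necessarily the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for word embeddings: each "city" vector is a
# random linear image of its (lat, lon) plus noise, mimicking the
# claim that coordinates are linearly recoverable from embedding
# geometry. A real experiment would load pretrained GloVe vectors.
n_cities, dim = 200, 50
coords = rng.uniform(-90, 90, size=(n_cities, 2))   # (lat, lon)
W = rng.normal(size=(2, dim))
emb = coords @ W + 0.1 * rng.normal(size=(n_cities, dim))

# Ridge-regression linear probe: fit on half, evaluate on the rest.
train, test = slice(0, 100), slice(100, None)
lam = 1e-3
X, Y = emb[train], coords[train]
beta = np.linalg.solve(X.T @ X + lam * np.eye(dim), X.T @ Y)
pred = emb[test] @ beta
ss_res = ((pred - coords[test]) ** 2).sum()
ss_tot = ((coords[test] - coords[test].mean(0)) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(f"probe R^2 on held-out cities: {r2:.3f}")
```

The paper's point cuts the other way too: because even a co-occurrence-only embedding passes this probe, high probe accuracy alone is weak evidence of a world model.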

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

SPRINT: Semi-supervised Prototypical Representation for Few-Shot Class-Incremental Tabular Learning

Researchers introduce SPRINT, the first Few-Shot Class-Incremental Learning (FSCIL) framework designed specifically for tabular data domains like cybersecurity and healthcare. The system achieves 77.37% accuracy in 5-shot learning scenarios, outperforming existing methods by 4.45% through novel semi-supervised techniques that leverage unlabeled data and confidence-based pseudo-labeling.
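
The prototypical core of few-shot classification (not SPRINT's semi-supervised machinery) reduces to nearest-mean classification, sketched here on a toy 5-shot episode:

```python
import numpy as np

def prototypes(X, y):
    """Class prototype = mean embedding of that class's support set."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, protos):
    """Assign each query to its nearest prototype (Euclidean)."""
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(0)
# Toy 5-shot episode: 3 classes as well-separated Gaussian clusters.
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
support_X = np.vstack([c + 0.3 * rng.normal(size=(5, 2)) for c in centers])
support_y = np.repeat([0, 1, 2], 5)
query = centers + 0.3 * rng.normal(size=(3, 2))

cls, protos = prototypes(support_X, support_y)
print(predict(query, cls, protos))  # expect [0 1 2]
```

Class-incremental extension simply appends new prototypes as new classes arrive; SPRINT's contribution is making this work on tabular features with unlabeled data and pseudo-labels.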

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

What Does Flow Matching Bring To TD Learning?

Researchers demonstrate that flow matching improves reinforcement learning through enhanced TD learning mechanisms rather than distributional modeling. The approach achieves 2x better final performance and 5x improved sample efficiency compared to standard critics by enabling test-time error recovery and more plastic feature learning.

🧠 AI · Bearish · arXiv – CS AI · Mar 5 · 7/10

Efficient Refusal Ablation in LLM through Optimal Transport

Researchers developed a new AI safety attack method using optimal transport theory that achieves 11% higher success rates in bypassing language model safety mechanisms compared to existing approaches. The study reveals that AI safety refusal mechanisms are localized to specific network layers rather than distributed throughout the model, suggesting current alignment methods may be more vulnerable than previously understood.

🏢 Perplexity · 🧠 Llama
🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

RoboCasa365: A Large-Scale Simulation Framework for Training and Benchmarking Generalist Robots

Researchers have released RoboCasa365, a large-scale simulation benchmark featuring 365 household tasks across 2,500 kitchen environments with over 600 hours of human demonstration data. The platform is designed to train and evaluate generalist robots for everyday tasks, providing insights into factors affecting robot performance and generalization capabilities.

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Dissecting Quantization Error: A Concentration-Alignment Perspective

Researchers introduce Concentration-Alignment Transforms (CAT), a new method to reduce quantization error in large language and vision models by improving both weight/activation concentration and alignment. The technique consistently matches or outperforms existing quantization methods at 4-bit precision across several LLMs.
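
The concentration side of the argument is easy to demonstrate with a generic symmetric quantizer (an assumed textbook scheme, not CAT itself): a few outliers stretch the quantization scale and inflate error for the bulk of the weights.

```python
import numpy as np

def quantize_symmetric(w, bits=4):
    """Uniform symmetric quantization: scale by max |w|, round
    to the signed integer grid, then dequantize."""
    qmax = 2 ** (bits - 1) - 1                  # 7 for 4-bit
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

rng = np.random.default_rng(1)
w_concentrated = rng.normal(0, 1, 10_000)
w_outliers = w_concentrated.copy()
w_outliers[:10] *= 50                           # a few large outliers

for name, w in [("concentrated", w_concentrated),
                ("with outliers", w_outliers)]:
    err = np.mean((w - quantize_symmetric(w)) ** 2)
    print(f"{name:14s} 4-bit MSE: {err:.4f}")
```

A concentration-improving transform would shrink the dynamic range before quantizing, which is one way to read the "concentration" half of CAT's concentration-alignment framing.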

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Dual-Modality Multi-Stage Adversarial Safety Training: Robustifying Multimodal Web Agents Against Cross-Modal Attacks

Researchers developed DMAST, a new training framework that protects multimodal web agents from cross-modal attacks where adversaries inject malicious content into webpages to deceive both visual and text processing channels. The method uses adversarial training through a three-stage pipeline and significantly outperforms existing defenses while doubling task completion efficiency.

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Low-Resource Guidance for Controllable Latent Audio Diffusion

Researchers have developed a new method called Latent-Control Heads (LatCHs) that enables efficient control of audio generation in diffusion models with significantly reduced computational costs. The approach operates directly in latent space, avoiding expensive decoder steps and requiring only 7M parameters and 4 hours of training while maintaining audio quality.

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Robustness of Agentic AI Systems via Adversarially-Aligned Jacobian Regularization

Researchers introduce Adversarially-Aligned Jacobian Regularization (AAJR), a new method to improve the robustness of autonomous AI agent systems by controlling sensitivity along adversarial directions rather than globally. This approach maintains better performance while ensuring stability in multi-agent AI ecosystems compared to existing methods.
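
The directional idea (penalize sensitivity along chosen directions rather than the full Jacobian norm) can be sketched with a finite-difference estimate of the directional derivative norm; the toy linear model and direction choices below are illustrative assumptions, not AAJR itself:

```python
import numpy as np

def directional_sensitivity(f, x, v, eps=1e-4):
    """Finite-difference estimate of ||J(x) v||: how much f's output
    moves when x is perturbed along unit direction v."""
    v = v / np.linalg.norm(v)
    return np.linalg.norm(f(x + eps * v) - f(x - eps * v)) / (2 * eps)

# Toy model: a linear map that is far more sensitive along one axis.
A = np.diag([10.0, 0.1])
f = lambda x: A @ x
x = np.array([1.0, 1.0])

sens_adv = directional_sensitivity(f, x, np.array([1.0, 0.0]))
sens_benign = directional_sensitivity(f, x, np.array([0.0, 1.0]))
print(sens_adv, sens_benign)
```

A training loop would add a term like lambda * sens_adv**2 to the loss for the adversarial direction only, leaving benign directions unconstrained, which is how selective regularization can preserve clean-task performance.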

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

ZipMap: Linear-Time Stateful 3D Reconstruction with Test-Time Training

Researchers introduce ZipMap, a new AI model for 3D reconstruction that achieves linear-time processing while maintaining accuracy comparable to slower quadratic-time methods. The system can reconstruct over 700 frames in under 10 seconds on a single H100 GPU, making it more than 20x faster than current state-of-the-art approaches like VGGT.

🧠 AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Synthetic emotions and consciousness: exploring architectural boundaries

Researchers propose an architectural framework for implementing emotion-like AI systems while deliberately avoiding features associated with consciousness. The study introduces risk-reduction constraints and engineering principles to create sophisticated emotional AI without triggering consciousness-related safety concerns.

🧠 AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Emotion-Gradient Metacognitive RSI (Part I): Theoretical Foundations and Single-Agent Architecture

Researchers introduce the Emotion-Gradient Metacognitive Recursive Self-Improvement (EG-MRSI) framework, a theoretical architecture for AI systems that can safely modify their own learning algorithms. The framework integrates metacognition, emotion-based motivation, and self-modification with formal safety constraints, representing foundational research toward safe artificial general intelligence.

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

R1-Code-Interpreter: LLMs Reason with Code via Supervised and Multi-stage Reinforcement Learning

Researchers developed R1-Code-Interpreter, a large language model that uses multi-stage reinforcement learning to autonomously generate code for step-by-step reasoning across diverse tasks. The 14B parameter model achieves 72.4% accuracy on test tasks, outperforming GPT-4o variants and demonstrating emergent self-checking capabilities through code generation.

🏢 Hugging Face · 🧠 GPT-4
🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

LeanTutor: Towards a Verified AI Mathematical Proof Tutor

Researchers have developed LeanTutor, a proof-of-concept AI system that combines Large Language Models with theorem provers to create a mathematically verified proof tutor. The system features three modules for autoformalization, proof-checking, and natural language feedback, evaluated using PeanoBench, a new dataset of 371 Peano Arithmetic proofs.

🧠 AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

From Privacy to Trust in the Agentic Era: A Taxonomy of Challenges in Trustworthy Federated Learning Through the Lens of Trust Report 2.0

Researchers propose a Trustworthy Federated Learning (TFL) framework that treats trust as a continuously maintained system condition rather than a static property, addressing challenges in AI systems with autonomous decision-making. The framework introduces Trust Report 2.0 as a privacy-preserving coordination blueprint for multi-stakeholder governance in federated learning deployments.

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

ToolVQA: A Dataset for Multi-step Reasoning VQA with External Tools

Researchers introduce ToolVQA, a large-scale multimodal dataset with 23K instances designed to improve AI models' ability to use external tools for visual question answering. The dataset features real-world contexts and multi-step reasoning tasks, with fine-tuned 7B models outperforming GPT-3.5-turbo on various benchmarks.

🧠 AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Benchmarking MLLM-based Web Understanding: Reasoning, Robustness and Safety

Researchers introduced WebRRSBench, a comprehensive benchmark evaluating multimodal large language models' reasoning, robustness, and safety capabilities for web understanding tasks. Testing 11 MLLMs on 3,799 QA pairs from 729 websites revealed significant gaps in compositional reasoning, UI robustness, and safety-critical action recognition.
