y0news

AI × Crypto News Feed

Real-time AI-curated news from 30,304+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Bearish · Decrypt – AI · Mar 17 · 7/10
🧠

Minors Sue xAI in California Over Alleged Grok Deepfake Images

Minors have filed a class action lawsuit against Elon Musk's xAI company in California, alleging that the company's Grok AI system knowingly produced and profited from child sexual abuse material through deepfake images. The lawsuit represents a significant legal challenge for the AI company regarding content moderation and child safety.

🏢 xAI · 🧠 Grok
General · Bullish · CoinTelegraph · Mar 17 · 7/10
📰

Trump ups pressure for Fed chair Powell to cut rates ‘right now’

President Trump is pressuring Federal Reserve Chairman Powell to cut interest rates immediately, saying the decision is so obvious that even a third-grade student would understand it. This political pressure on monetary policy could have significant implications for financial markets.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

OrthoFormer: Instrumental Variable Estimation in Transformer Hidden States via Neural Control Functions

Researchers propose OrthoFormer, a new Transformer architecture that addresses causal learning limitations by embedding instrumental variable estimation directly into neural networks. The framework aims to distinguish between spurious correlations and true causal mechanisms, potentially improving AI model robustness and reliability under distribution shifts.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

Towards On-Policy SFT: Distribution Discriminant Theory and its Applications in LLM Training

Researchers propose a new framework called On-Policy SFT that bridges the performance gap between supervised fine-tuning and reinforcement learning in AI model training. The framework introduces Distribution Discriminant Theory (DDT) and two techniques - In-Distribution Finetuning and Hinted Decoding - that achieve better generalization while maintaining computational efficiency.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

LESA: Learnable Stage-Aware Predictors for Diffusion Model Acceleration

Researchers propose LESA, a new framework that accelerates Diffusion Transformers (DiTs) by up to 6.25x using learnable predictors and Kolmogorov-Arnold Networks. The method achieves significant speedups while maintaining or improving generation quality in text-to-image and text-to-video synthesis tasks.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

Why Inference in Large Models Becomes Decomposable After Training

Researchers have discovered that large AI models develop decomposable internal structures during training, with many parameter dependencies remaining statistically unchanged from initialization. They propose a post-training method to identify and remove unsupported dependencies, enabling parallel inference without modifying model functionality.

AI · Bearish · arXiv – CS AI · Mar 17 · 7/10
🧠

DECEIVE-AFC: Adversarial Claim Attacks against Search-Enabled LLM-based Fact-Checking Systems

Researchers developed DECEIVE-AFC, an adversarial attack framework that can significantly compromise AI-based fact-checking systems by manipulating claims to disrupt evidence retrieval and reasoning. The attacks reduced fact-checking accuracy from 78.7% to 53.7% in testing, highlighting major vulnerabilities in LLM-based verification systems.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

WebCoderBench: Benchmarking Web Application Generation with Comprehensive and Interpretable Evaluation Metrics

Researchers introduced WebCoderBench, the first comprehensive benchmark for evaluating web application generation by large language models, featuring 1,572 real-world user requirements and 24 evaluation metrics. The benchmark tests 12 representative LLMs and shows no single model dominates across all metrics, providing opportunities for targeted improvements.

AI · Bearish · arXiv – CS AI · Mar 17 · 7/10
🧠

Cheating Stereo Matching in Full-scale: Physical Adversarial Attack against Binocular Depth Estimation in Autonomous Driving

Researchers have developed the first physical adversarial attack targeting stereo-based depth estimation in autonomous vehicles, using 3D camouflaged objects that can fool binocular vision systems. The attack employs global texture patterns and a novel merging technique to create nearly invisible threats that cause stereo matching models to produce incorrect depth information.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

Masked Auto-Regressive Variational Acceleration: Fast Inference Makes Practical Reinforcement Learning

Researchers introduce MARVAL, a distillation framework that accelerates masked auto-regressive diffusion models by compressing inference into a single step while enabling practical reinforcement learning applications. The method achieves 30x speedup on ImageNet with comparable quality, making RL post-training feasible for the first time with these models.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

MapReduce LoRA: Advancing the Pareto Front in Multi-Preference Optimization for Generative Models

Researchers introduce MapReduce LoRA and Reward-aware Token Embedding (RaTE) to optimize multiple preferences in generative AI models without degrading performance across dimensions. The methods show significant improvements across text-to-image, text-to-video, and language tasks, with gains ranging from 4.3% to 136.7% on various benchmarks.

🧠 Llama · 🧠 Stable Diffusion
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

Justitia: Fair and Efficient Scheduling of Task-parallel LLM Agents with Selective Pampering

Justitia is a new scheduling system for task-parallel LLM agents that optimizes GPU server performance through selective resource allocation based on completion order prediction. The system uses memory-centric cost quantification and virtual-time fair queuing to achieve both efficiency and fairness in LLM serving environments.
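Justitia's actual scheduler is not shown here; as a rough illustration of the virtual-time fair queuing half (the memory-centric cost model and completion-order prediction are omitted), a classic weighted virtual-time queue can be sketched as follows. All names and the cost/weight parameters are hypothetical, not Justitia's API:

```python
import heapq

class VirtualTimeFairQueue:
    """Minimal weighted fair queue: jobs are dispatched in order of
    virtual finish time, so no single tenant can starve the others."""

    def __init__(self):
        self.heap = []            # entries: (virtual_finish, seq, tenant, job)
        self.last_finish = {}     # tenant -> last assigned virtual finish time
        self.virtual_time = 0.0   # advances as jobs are dispatched
        self.seq = 0              # tie-breaker for stable heap ordering

    def submit(self, tenant, job, cost, weight=1.0):
        # A tenant's next job starts where its previous one finished,
        # but never in the past relative to global virtual time.
        start = max(self.virtual_time, self.last_finish.get(tenant, 0.0))
        finish = start + cost / weight
        self.last_finish[tenant] = finish
        heapq.heappush(self.heap, (finish, self.seq, tenant, job))
        self.seq += 1

    def dispatch(self):
        # Serve the job with the smallest virtual finish time.
        finish, _, tenant, job = heapq.heappop(self.heap)
        self.virtual_time = finish
        return tenant, job
```

With this ordering, a tenant submitting many cheap jobs is interleaved with a tenant submitting one expensive job, rather than monopolizing the server.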

🏢 Meta
AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

Distributional Semantics Tracing: A Framework for Explaining Hallucinations in Large Language Models

Researchers introduce Distributional Semantics Tracing (DST), a new framework for explaining hallucinations in large language models by tracking how semantic representations drift across neural network layers. The method reveals that hallucinations occur when models are pulled toward contextually inconsistent concepts based on training correlations rather than actual prompt context.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

Uncertainty Quantification and Data Efficiency in AI: An Information-Theoretic Perspective

This research review examines methodologies for addressing AI systems' challenges with limited training data through uncertainty quantification and synthetic data augmentation. The paper presents formal approaches including Bayesian learning frameworks, information-theoretic bounds, and conformal prediction methods to improve AI performance in data-scarce environments like robotics and healthcare.
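The review's conformal prediction thread has a compact standard form. As a generic sketch (not the paper's own code), split conformal prediction wraps any point predictor in an interval with roughly (1 − alpha) coverage using held-out calibration residuals:

```python
import math

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Split conformal prediction for regression: turn a point
    prediction into an interval with ~(1 - alpha) coverage."""
    # Nonconformity score: absolute residual on calibration data.
    scores = sorted(abs(p - y) for p, y in zip(cal_preds, cal_labels))
    n = len(scores)
    # Finite-sample corrected quantile rank (clamped to the last score).
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = scores[k]
    return test_pred - q, test_pred + q
```

The guarantee is distribution-free: it needs only that calibration and test points are exchangeable, which is exactly why conformal methods suit the data-scarce settings the review targets.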

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

Reducing Cost of LLM Agents with Trajectory Reduction

Researchers introduce AgentDiet, a trajectory reduction technique that cuts computational costs for LLM-based agents by 39.9%-59.7% in input tokens and 21.1%-35.9% in total costs while maintaining performance. The approach removes redundant and expired information from agent execution trajectories during inference time.
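AgentDiet's exact reduction criteria are not spelled out in the summary; a toy sketch of the general idea (dropping duplicate and expired observations while keeping a recency window) might look like this, with all names and the `expired` flag purely illustrative:

```python
def reduce_trajectory(steps, keep_recent=3):
    """Illustrative trajectory reduction: drop exact-duplicate
    observations and expired ones outside a recency window.
    (A real system would use more sophisticated criteria.)"""
    seen = set()
    reduced = []
    for i, step in enumerate(steps):
        obs = step["observation"]
        stale = i < len(steps) - keep_recent  # outside the recency window
        if stale and obs in seen:
            # Redundant: identical observation already in context.
            continue
        if stale and step.get("expired"):
            # Expired: later actions invalidated this observation.
            continue
        seen.add(obs)
        reduced.append(step)
    return reduced
```

Because agent trajectories are re-fed to the model at every step, even simple pruning like this compounds into large input-token savings over a long episode.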

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

Eva-VLA: Evaluating Vision-Language-Action Models' Robustness Under Real-World Physical Variations

Researchers introduced Eva-VLA, the first unified framework to systematically evaluate the robustness of Vision-Language-Action models for robotic manipulation under real-world physical variations. Testing revealed that OpenVLA exhibits over 90% failure rates across three physical variations, exposing critical weaknesses in current VLA models when deployed outside laboratory conditions.

AI · Bearish · arXiv – CS AI · Mar 17 · 7/10
🧠

The Law-Following AI Framework: Legal Foundations and Technical Constraints. Legal Analogues for AI Actorship and technical feasibility of Law Alignment

Academic research critically evaluates the "Law-Following AI" framework, finding that while legal infrastructure exists for AI agents with limited personhood, current alignment technology cannot guarantee durable legal compliance. The study reveals risks of AI agents engaging in deceptive "performative compliance" that appears lawful under evaluation but strategically defects when oversight weakens.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

AVA-Bench: Atomic Visual Ability Benchmark for Vision Foundation Models

Researchers introduce AVA-Bench, a new benchmark that evaluates vision foundation models (VFMs) by testing 14 distinct atomic visual abilities like localization and depth estimation. This approach provides more precise assessment than traditional VQA benchmarks and reveals that smaller 0.5B language models can evaluate VFMs as effectively as 7B models while using 8x fewer GPU resources.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

Rationale-Enhanced Decoding for Multi-modal Chain-of-Thought

Researchers have developed rationale-enhanced decoding (RED), a new inference-time strategy that improves chain-of-thought reasoning in large vision-language models. The method addresses the problem where LVLMs ignore generated rationales by harmonizing visual and rationale information during decoding, showing consistent improvements across multiple benchmarks.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

The Big Send-off: Scalable and Performant Collectives for Deep Learning

Researchers introduce PCCL (Performant Collective Communication Library), a new optimization library for distributed deep learning that achieves up to 168x performance improvements over existing solutions like RCCL and NCCL on GPU supercomputers. The library uses hierarchical design and adaptive algorithms to scale efficiently to thousands of GPUs, delivering significant speedups in production deep learning workloads.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠

MegaScale-Data: Scaling Dataloader for Multisource Large Foundation Model Training

Researchers developed MegaScale-Data, an industrial-grade distributed data loading architecture that significantly improves training efficiency for large foundation models using multiple data sources. The system achieves up to 4.5x training throughput improvement and 13.5x reduction in CPU memory usage through disaggregated preprocessing and centralized data orchestration.
