y0news

AI × Crypto News Feed

Real-time AI-curated news from 31,636+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 2 min

CoBELa: Steering Transparent Generation via Concept Bottlenecks on Energy Landscapes

Researchers introduce CoBELa, a new AI framework for interpretable image generation that uses concept bottlenecks on energy landscapes to enable transparent, controllable synthesis without requiring decoder retraining. The system achieves strong performance on benchmark datasets while allowing users to compositionally manipulate concepts through energy function combinations.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 3 min

Improving Classifier-Free Guidance in Masked Diffusion: Low-Dim Theoretical Insights with High-Dim Impact

Researchers have developed an improved Classifier-Free Guidance mechanism for masked diffusion models that addresses quality degradation issues in AI generation. The study reveals that high guidance early in sampling harms quality while late-stage guidance improves it, leading to a simple one-line code fix that enhances conditional image and text generation.
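The finding can be pictured as a guidance-weight schedule: weak guidance early in sampling, full strength late. A minimal sketch in Python, assuming a simple linear ramp (the function names and the ramp shape are illustrative, not the paper's exact rule):

```python
def cfg_weight(step: int, total_steps: int, w_max: float = 3.0) -> float:
    """Hypothetical guidance schedule: ramp classifier-free guidance
    from 0 at the first sampling step up to w_max at the last step."""
    return w_max * step / (total_steps - 1)

def guided_logits(uncond, cond, w):
    # Standard CFG combination: uncond + w * (cond - uncond).
    return [u + w * (c - u) for u, c in zip(uncond, cond)]

# Early step: weight is ~0, so the guided output matches the unconditional one.
early = guided_logits([0.0, 1.0], [1.0, 0.0], cfg_weight(0, 10))
# Late step: full guidance strength is applied.
late = guided_logits([0.0, 1.0], [1.0, 0.0], cfg_weight(9, 10))
```

The "one-line fix" flavor comes from the fact that only the scalar weight passed to the usual CFG combination changes per step.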

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 2 min

DMTrack: Spatio-Temporal Multimodal Tracking via Dual-Adapter

Researchers introduce DMTrack, a novel dual-adapter architecture for spatio-temporal multimodal tracking that achieves state-of-the-art performance with only 0.93M trainable parameters. The system uses two key modules, a spatio-temporal modality adapter and a progressive modality complementary adapter, to bridge gaps between different modalities and enable better cross-modality fusion.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 3 min

The Choice of Divergence: A Neglected Key to Mitigating Diversity Collapse in Reinforcement Learning with Verifiable Reward

Researchers have identified a critical flaw in reinforcement learning fine-tuning of large language models that causes degradation in multi-attempt performance despite improvements in single attempts. Their proposed solution, Diversity-Preserving Hybrid RL (DPH-RL), uses mass-covering f-divergences to maintain model diversity and prevent catastrophic forgetting while improving training efficiency.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10 · 2 min

No Answer Needed: Predicting LLM Answer Accuracy from Question-Only Linear Probes

Researchers developed linear probes that can predict whether large language models will answer questions correctly by analyzing neural activations before any answer is generated. The method works across different model sizes and generalizes to out-of-distribution datasets, though it struggles with mathematical reasoning tasks.
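Such a probe is essentially a logistic regression on the model's hidden activation for the question alone. A hedged sketch, with the weights and bias assumed to have been fit beforehand on labeled questions (all names and values here are hypothetical):

```python
import math

def predict_answer_accuracy(activation, weights, bias):
    """Sketch of a question-only linear probe: a logistic regression on
    the hidden activation of the question, evaluated before any answer
    tokens are generated (weights/bias assumed pre-fit)."""
    z = sum(w * a for w, a in zip(weights, activation)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # estimated P(answer will be correct)

# Toy activation vector and probe parameters, purely illustrative.
p = predict_answer_accuracy([0.5, -1.2, 2.0], [1.0, 0.3, 0.8], -0.4)
```

Because the probe runs on the question representation only, it can gate whether to generate at all, or route hard questions to a stronger model.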

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 2 min

ScaleDoc: Scaling LLM-based Predicates over Large Document Collections

ScaleDoc is a new system that enables efficient semantic analysis of large document collections using LLMs by combining offline document representation with lightweight online filtering. The system achieves 2x speedup and reduces expensive LLM calls by up to 85% through contrastive learning and adaptive cascade mechanisms.
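The cascade idea can be sketched in a few lines: a cheap offline score decides the confident documents directly, and only ambiguous ones fall through to the expensive LLM call. The thresholds and names below are illustrative assumptions, not ScaleDoc's actual interface:

```python
def cascade_filter(docs, proxy_score, llm_predicate, low=0.2, high=0.8):
    """Sketch of an adaptive cascade: cheap scores resolve confident
    documents; only ambiguous ones trigger an LLM call."""
    accepted, llm_calls = [], 0
    for doc in docs:
        s = proxy_score(doc)
        if s >= high:            # confidently relevant: skip the LLM
            accepted.append(doc)
        elif s <= low:           # confidently irrelevant: skip the LLM
            continue
        else:                    # ambiguous: ask the LLM
            llm_calls += 1
            if llm_predicate(doc):
                accepted.append(doc)
    return accepted, llm_calls

# Toy "documents" represented by their relevance scores.
docs = [0.95, 0.05, 0.5, 0.9, 0.1, 0.6]
accepted, calls = cascade_filter(docs, proxy_score=lambda d: d,
                                 llm_predicate=lambda d: d > 0.55)
```

Here only two of six documents reach the LLM, which is the mechanism behind the reported reduction in expensive calls.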

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 3 min

SiNGER: A Clearer Voice Distills Vision Transformers Further

Researchers introduce SiNGER, a new knowledge distillation framework for Vision Transformers that suppresses harmful high-norm artifacts while preserving informative signals. The technique uses nullspace-guided perturbation and LoRA-based adapters to achieve state-of-the-art performance in downstream tasks.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 4 min

Best-of-∞: Asymptotic Performance of Test-Time Compute

Researchers propose a 'best-of-∞' approach for large language models: majority voting in the limit of infinitely many samples, which attains the best asymptotic performance but is not directly computable. They develop an adaptive generation scheme that dynamically selects the number of samples based on answer agreement and extend the framework to weighted ensembles of multiple LLMs.
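The adaptive idea can be sketched as a sequential stopping rule: keep drawing answers and stop once the leading answer's vote share is high enough, approximating the infinite-sample majority vote. The thresholds below are assumptions for illustration, not the paper's calibrated rule:

```python
from collections import Counter

def adaptive_majority(sample_answer, max_samples=64, agreement=0.6,
                      min_samples=8):
    """Sketch of adaptive best-of-n: draw answers one at a time and stop
    early once the leading answer's share reaches the agreement threshold."""
    counts = Counter()
    for n in range(1, max_samples + 1):
        counts[sample_answer()] += 1
        leader, votes = counts.most_common(1)[0]
        if n >= min_samples and votes / n >= agreement:
            return leader, n
    return counts.most_common(1)[0][0], max_samples

# Deterministic toy "model" that answers '42' three times out of four.
answers = iter(['42', '42', '7', '42'] * 16)
best, used = adaptive_majority(lambda: next(answers))
```

Easy questions stop after few samples; contested ones spend more of the budget, which is where the efficiency gain comes from.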

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10 · 3 min

Bridging Kolmogorov Complexity and Deep Learning: Asymptotically Optimal Description Length Objectives for Transformers

Researchers introduce a theoretical framework connecting Kolmogorov complexity to Transformer neural networks through asymptotically optimal description length objectives. The work demonstrates computational universality of Transformers and proposes a variational objective that achieves optimal compression, though current optimization methods struggle to find such solutions from random initialization.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 6/10 · 3 min

Death of the Novel(ty): Beyond n-Gram Novelty as a Metric for Textual Creativity

Research analyzing 8,618 expert annotations reveals that n-gram novelty, commonly used to evaluate AI text generation, is insufficient for measuring textual creativity. While positively correlated with creativity, 91% of high n-gram novel expressions were not judged as creative by experts, and higher novelty in open-source LLMs correlates with lower pragmatic quality.
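The metric under critique is easy to state: the fraction of a text's word n-grams that never occur in a reference corpus. A minimal sketch (word-level tokenization and n=5 are illustrative choices) shows why a high score says nothing about whether the text reads as creative:

```python
def ngram_novelty(text, corpus, n=5):
    """Sketch of the n-gram novelty metric: the share of a text's word
    n-grams that never appear in a reference corpus."""
    def ngrams(words):
        return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    seen = set()
    for doc in corpus:
        seen.update(ngrams(doc.split()))
    grams = ngrams(text.split())
    if not grams:
        return 0.0
    return sum(1 for g in grams if g not in seen) / len(grams)

corpus = ["the cat sat on the mat"]
# One of the two 5-grams is unseen, so novelty is 0.5 -- yet a single
# swapped word is hardly "creative", which is the study's point.
score = ngram_novelty("the cat sat on the hat", corpus)
```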

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 2 min

Fine-Tuning Diffusion Models via Intermediate Distribution Shaping

Researchers present P-GRAFT, a new method for fine-tuning diffusion models by shaping distributions at intermediate noise levels, showing improved performance on text-to-image generation tasks. The framework achieved an 8.81% relative improvement over the base Stable Diffusion v2 model on popular benchmarks.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 3 min

LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning

Researchers introduce LaDiR (Latent Diffusion Reasoner), a novel framework that combines continuous latent representation with iterative refinement capabilities to enhance Large Language Models' reasoning abilities. The system uses a Variational Autoencoder to encode reasoning steps and a latent diffusion model for parallel generation of diverse reasoning trajectories, showing improved accuracy and interpretability in mathematical reasoning benchmarks.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10 · 3 min

Spectrum Tuning: Post-Training for Distributional Coverage and In-Context Steerability

Researchers introduce Spectrum Tuning, a new post-training method that improves AI language models' ability to generate diverse outputs and follow in-context steering instructions. The technique addresses limitations in current post-training approaches that reduce models' distributional coverage and flexibility when tasks require multiple valid answers rather than single correct responses.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 3 min

Mitigating Over-Refusal in Aligned Large Language Models via Inference-Time Activation Energy

Researchers introduce Energy Landscape Steering (ELS), a new framework that reduces false refusals in AI safety-aligned language models without compromising security. The method uses an external Energy-Based Model to dynamically guide model behavior during inference, improving compliance from 57.3% to 82.6% on safety benchmarks.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 3 min

Self-Aug: Query and Entropy Adaptive Decoding for Large Vision-Language Models

Researchers developed a new training-free decoding strategy for Large Vision-Language Models that reduces hallucinations by using query-adaptive visual augmentation and entropy-based token selection. The method showed significant improvements in factual consistency across four LVLMs and seven benchmarks compared to existing approaches.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 6/10 · 3 min

Narrow Finetuning Leaves Clearly Readable Traces in Activation Differences

Researchers found that narrow finetuning of Large Language Models leaves detectable traces in model activations that can reveal information about the training domain. The study demonstrates that these biases can be used to understand what data was used for finetuning and suggests mixing pretraining data into finetuning to reduce these traces.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10 · 3 min

Every Language Model Has a Forgery-Resistant Signature

Researchers have discovered that language models produce outputs with unique geometric signatures that lie on high-dimensional ellipses, which can be used to identify the source model. This signature is forgery-resistant and naturally occurring, potentially enabling cryptographic-like verification of AI model outputs.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 4 min

xLLM Technical Report

xLLM is a new open-source Large Language Model inference framework that delivers significantly improved performance for enterprise AI deployments. The framework achieves 1.7-2.2x higher throughput compared to existing solutions like MindIE and vLLM-Ascend through novel architectural optimizations including decoupled service-engine design and intelligent scheduling.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 4 min

VeriStruct: AI-assisted Automated Verification of Data-Structure Modules in Verus

VeriStruct is a new AI framework that automates formal verification of complex data structure modules in the Verus programming language. The system achieved a 99.2% success rate in verifying 128 out of 129 functions across eleven Rust data structure modules, representing significant progress in AI-assisted formal verification.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 3 min

FAST: Topology-Aware Frequency-Domain Distribution Matching for Coreset Selection

Researchers propose FAST, a new DNN-free framework for coreset selection that compresses large datasets into representative subsets for training deep neural networks. The method uses frequency-domain distribution matching and achieves 9.12% average accuracy improvement while reducing power consumption by 96.57% compared to existing methods.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10 · 2 min

WARP: Weight Teleportation for Attack-Resilient Unlearning Protocols

Researchers introduce WARP, a new defense mechanism for machine unlearning protocols that protects against privacy attacks where adversaries can exploit differences between pre- and post-unlearning AI models. The technique reduces attack success rates by up to 92% while maintaining model accuracy on retained data.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 3 min

Dual Randomized Smoothing: Beyond Global Noise Variance

Researchers propose a dual Randomized Smoothing framework that overcomes limitations of standard neural network robustness certification by using input-dependent noise variances instead of global ones. The method achieves strong performance at both small and large radii with gains of 15-20% on CIFAR-10 and 8-17% on ImageNet, while adding only 60% computational overhead.
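The core mechanic of randomized smoothing, with the dual framework's key change of a per-input rather than global noise scale, can be sketched as follows. The `sigma_fn` rule here is an assumed illustration, not the paper's actual variance assignment:

```python
import random
from collections import Counter

def smoothed_predict(classifier, x, sigma_fn, n_samples=200, seed=0):
    """Sketch of randomized smoothing with an input-dependent noise
    scale: classify many noisy copies of x and return the majority
    class. A certified radius would additionally be derived from the
    vote margin and sigma (omitted here)."""
    rng = random.Random(seed)
    sigma = sigma_fn(x)          # per-input variance, not a global one
    votes = Counter()
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes[classifier(noisy)] += 1
    return votes.most_common(1)[0][0]

# Toy 1-D classifier (sign of the coordinate) with an assumed
# input-dependent sigma rule.
label = smoothed_predict(lambda v: int(v[0] > 0), [2.0],
                         sigma_fn=lambda v: 0.5 + 0.1 * abs(v[0]))
```

Letting sigma vary with the input is what allows strong certificates at both small and large radii, where a single global variance must trade one off against the other.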

🧠 AI · Bearish · arXiv – CS AI · Mar 4 · 7/10 · 4 min

Zero-Permission Manipulation: Can We Trust Large Multimodal Model Powered GUI Agents?

Researchers discovered a critical security vulnerability in AI-powered GUI agents on Android, where malicious apps can hijack agent actions without requiring dangerous permissions. The 'Action Rebinding' attack exploits timing gaps between AI observation and action, achieving 100% success rates in tests across six popular Android GUI agents.

Page 295 of 1266