y0news

AI × Crypto News Feed

Real-time AI-curated news from 26,571+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

Large Language Models for Market Research: A Data-augmentation Approach

Researchers propose a novel statistical framework for integrating Large Language Model-generated data with real human data in conjoint analysis, addressing the bias gap between synthetic and authentic consumer responses. The approach delivers 24.9-79.8% cost and data savings while maintaining statistical robustness, validating that LLM data serves as a complement rather than substitute for human market research.
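
The paper's exact estimator is not spelled out in this summary. As a rough illustration of the data-augmentation idea only, one could learn a bias correction for LLM-generated conjoint responses from a small human sample and then pool the corrected synthetic data with the real data; the correction scheme, weights, and variable names below are assumptions, not the paper's method.

```python
# Illustrative only: a simple mean-shift bias correction for LLM-generated
# conjoint responses, pooled with a small human sample. The paper's actual
# statistical framework is more sophisticated; all names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Ratings (e.g., purchase intent on a 1-7 scale) for the same product profiles.
human = rng.normal(4.0, 1.0, size=200)    # small, expensive human sample
llm = rng.normal(4.6, 0.8, size=2000)     # large, cheap LLM-generated sample

# Estimate the systematic gap between synthetic and human responses on the
# overlapping profiles, then shift the synthetic data to close it.
bias = llm[:200].mean() - human.mean()
llm_corrected = llm - bias

# Pool corrected synthetic data with human data, down-weighting synthetic rows.
pooled = np.concatenate([human, llm_corrected])
weights = np.concatenate([np.ones_like(human), np.full_like(llm_corrected, 0.25)])

estimate = np.average(pooled, weights=weights)
print(f"human-only mean: {human.mean():.2f}, augmented estimate: {estimate:.2f}")
```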

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

OjaKV: Context-Aware Online Low-Rank KV Cache Compression

OjaKV introduces a novel framework for compressing key-value caches in large language models through online low-rank projection, addressing a critical memory bottleneck in long-context inference. The method combines selective full-rank storage for important tokens with adaptive compression for intermediate tokens, maintaining accuracy while reducing memory consumption without requiring model fine-tuning.

🧠 Llama
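
The summary describes online low-rank projection of the KV cache with full-rank storage for important tokens. Below is a minimal sketch of how an Oja-style online subspace update over incoming key vectors could work; the update rule, rank, and token-selection heuristic are assumptions rather than the paper's exact algorithm.

```python
# Illustrative sketch: Oja-style online subspace tracking for key vectors,
# compressing "ordinary" tokens to rank r while keeping flagged tokens full-rank.
# This mirrors the idea in the summary, not the paper's exact method.
import numpy as np

d, r, lr = 64, 8, 1e-2
Q = np.linalg.qr(np.random.randn(d, r))[0]   # orthonormal projection basis

full_rank_cache = []    # important tokens stored exactly
low_rank_cache = []     # other tokens stored as r-dim coefficients

def ingest(key: np.ndarray, important: bool) -> None:
    """Update the subspace with Oja's rule, then cache the key."""
    global Q
    y = Q.T @ key                          # project onto the current basis
    Q += lr * np.outer(key - Q @ y, y)     # Oja update: pull basis toward the key
    Q, _ = np.linalg.qr(Q)                 # re-orthonormalize for stability
    if important:
        full_rank_cache.append(key)
    else:
        low_rank_cache.append(Q.T @ key)   # store only r coefficients

def reconstruct_low_rank() -> np.ndarray:
    """Approximate the compressed keys back in d dimensions.
    Simplification: older coefficients are decoded with the current basis."""
    return np.stack([Q @ c for c in low_rank_cache])

for t in range(512):
    k = np.random.randn(d)
    ingest(k, important=(t % 64 == 0))     # e.g., keep every 64th token exact
```
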
AI · Neutral · arXiv – CS AI · 6d ago · 7/10

AI Agents and Hard Choices

A research paper identifies fundamental limitations in current AI agent design when handling multiple conflicting objectives simultaneously. The study proposes that optimization-based AI agents cannot properly identify incommensurable choices and lack autonomy to resolve them, creating alignment and reliability problems that standard safeguards like human oversight cannot fully address.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units

Researchers have developed AscendKernelGen, an LLM-based framework that dramatically improves code generation for neural processing units (NPUs) by combining domain-specific training data with reinforcement learning. The system achieves 95.5% compilation success on complex kernels, up from near-zero baseline performance, addressing a critical bottleneck in AI hardware optimization.

🏢 Hugging Face
AI · Bearish · arXiv – CS AI · 6d ago · 7/10

Chain-of-Thought Degrades Visual Spatial Reasoning Capabilities of Multimodal LLMs

Researchers found that Chain-of-Thought prompting, a technique that improves logical reasoning in multimodal AI models, actually degrades performance on visual spatial tasks. The study evaluated seventeen models across thirteen benchmarks and discovered these systems suffer from shortcut learning, hallucinating visual details from text even when images are absent, indicating a fundamental limitation in current AI reasoning paradigms.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

Prototype-Grounded Concept Models for Verifiable Concept Alignment

Researchers introduce Prototype-Grounded Concept Models (PGCMs), a new approach to interpretable AI that grounds abstract concepts in visual prototypes—concrete image parts that serve as evidence. Unlike previous Concept Bottleneck Models, PGCMs enable direct verification of whether learned concepts match human intentions, substantially improving transparency and allowing targeted corrections without sacrificing predictive performance.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

AgentV-RL: Scaling Reward Modeling with Agentic Verifier

Researchers introduce AgentV-RL, an agentic verifier framework that enhances reward modeling for large language models by combining bidirectional reasoning agents with tool-use capabilities. The system addresses critical limitations in LLM verification by enabling forward and backward tracing of solutions, achieving 25.2% performance gains over existing methods and positioning agentic reward modeling as a promising new paradigm.

AI · Neutral · arXiv – CS AI · 6d ago · 7/10

Towards Intrinsic Interpretability of Large Language Models: A Survey of Design Principles and Architectures

A new survey examines intrinsic interpretability approaches for Large Language Models, categorizing design methods that build transparency directly into model architectures rather than applying post-hoc explanations. The research identifies five key paradigms—functional transparency, concept alignment, representational decomposability, explicit modularization, and latent sparsity induction—addressing the critical challenge of making LLMs more trustworthy and safer for deployment.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

From Seeing to Simulating: Generative High-Fidelity Simulation with Digital Cousins for Generalizable Robot Learning and Evaluation

Researchers present a generative framework that converts real-world panoramic images into high-fidelity simulation scenes for robot training, using semantic and geometric editing to create diverse training variants. The approach demonstrates strong sim-to-real correlation and enables robots to generalize better to unseen environments and objects through scaled synthetic data generation.

AI · Bearish · arXiv – CS AI · 6d ago · 7/10

Polarization by Default: Auditing Recommendation Bias in LLM-Based Content Curation

Researchers audited models from three major LLM providers (OpenAI, Anthropic, and Google) to assess content curation biases across Twitter/X, Bluesky, and Reddit. The study found that LLMs systematically amplify polarization, exhibit negative sentiment bias, and show political leaning bias favoring left-leaning authors, with varying degrees of mitigation through prompt design.

🏢 OpenAI · 🏢 Anthropic · 🧠 GPT-4
AI · Bullish · arXiv – CS AI · 6d ago · 7/10

EVIL: Evolving Interpretable Algorithms for Zero-Shot Inference on Event Sequences and Time Series with LLMs

Researchers introduce EVIL, an LLM-guided evolutionary approach that discovers interpretable Python algorithms for zero-shot inference on time series and event sequences without traditional neural network training. The evolved algorithms match or exceed deep learning performance while remaining transparent and significantly faster, demonstrating a novel paradigm for dynamical systems inference.
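
The summary describes an LLM-guided evolutionary search over interpretable Python programs. A toy version of that outer loop is sketched below; the LLM proposal step is stubbed out with a random choice over hand-written candidates, and all function names are hypothetical.

```python
# Toy evolutionary loop in the spirit of the summary: candidate programs are plain
# Python functions scored on held-out sequences; the "LLM mutation" step is stubbed.
import random
import statistics

def last_value(seq):        # candidate 1: persistence forecast
    return seq[-1]

def mean_value(seq):        # candidate 2: running mean
    return statistics.fmean(seq)

def linear_trend(seq):      # candidate 3: extrapolate the last difference
    return seq[-1] + (seq[-1] - seq[-2])

POPULATION = [last_value, mean_value, linear_trend]

def propose_variant(fn):
    """Stand-in for the LLM proposal step: here we just pick another candidate."""
    return random.choice(POPULATION)

def score(fn, series):
    """Mean absolute error of one-step-ahead predictions."""
    errs = [abs(fn(series[:t]) - series[t]) for t in range(2, len(series))]
    return sum(errs) / len(errs)

series = [x * 0.5 + random.gauss(0, 0.1) for x in range(50)]  # toy upward trend

best = min(POPULATION, key=lambda f: score(f, series))
for _ in range(20):                       # evolutionary refinement loop
    challenger = propose_variant(best)
    if score(challenger, series) < score(best, series):
        best = challenger

print("selected algorithm:", best.__name__)
```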

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

Closing the Theory-Practice Gap in Spiking Transformers via Effective Dimension

Researchers establish the first comprehensive theoretical framework for spiking transformers, proving their universal approximation capabilities and deriving tight spike-count lower bounds. Using effective dimension analysis, they explain why spiking transformers achieve 38-57× energy-efficiency gains on neuromorphic hardware and provide concrete design rules validated across vision and language benchmarks with 97% prediction accuracy.

AI · Bearish · arXiv – CS AI · 6d ago · 7/10

Reasoning-targeted Jailbreak Attacks on Large Reasoning Models via Semantic Triggers and Psychological Framing

Researchers have discovered a critical vulnerability in Large Reasoning Models (LRMs) like DeepSeek R1 and OpenAI o4-mini that allows attackers to inject harmful content into the reasoning process while keeping final answers unchanged. The Psychology-based Reasoning-targeted Jailbreak Attack (PRJA) framework achieves an 83.6% success rate by exploiting semantic triggers and psychological principles, revealing a previously understudied safety gap in AI systems deployed in high-stakes domains.

🏢 OpenAI
AI · Bullish · arXiv – CS AI · 6d ago · 7/10

Learning Uncertainty from Sequential Internal Dispersion in Large Language Models

Researchers introduce Sequential Internal Variance Representation (SIVR), a novel supervised framework for detecting hallucinations in large language models by analyzing token-wise and layer-wise variance patterns in hidden states. The method demonstrates superior generalization compared to existing approaches while requiring smaller training datasets, potentially enabling practical deployment of hallucination detection systems.
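
Only the high-level idea is given above: token-wise and layer-wise dispersion of hidden states feeding a supervised detector. A rough sketch of such a feature extractor follows; the shapes, aggregation choices, and classifier are illustrative assumptions, not the paper's exact SIVR construction.

```python
# Rough sketch: turn layer-wise / token-wise dispersion of hidden states into
# features for a supervised hallucination detector. All specifics are assumed.
import torch
from sklearn.linear_model import LogisticRegression

def dispersion_features(hidden_states: torch.Tensor) -> torch.Tensor:
    """hidden_states: [num_layers, seq_len, hidden_dim] for one generated answer."""
    layer_var = hidden_states.var(dim=0)   # dispersion across layers -> [seq_len, hidden_dim]
    token_var = hidden_states.var(dim=1)   # dispersion across tokens -> [num_layers, hidden_dim]
    return torch.stack([
        layer_var.mean(), layer_var.max(),
        token_var.mean(), token_var.max(),
    ])

# Train a small supervised detector on labeled (hidden_states, is_hallucination) pairs.
examples = [torch.randn(24, 32, 256) for _ in range(64)]   # stand-in activations
labels = [i % 2 for i in range(64)]                        # stand-in labels
X = torch.stack([dispersion_features(h) for h in examples]).numpy()
clf = LogisticRegression().fit(X, labels)
print("hallucination probability:", clf.predict_proba(X[:1])[0, 1])
```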

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

FineSteer: A Unified Framework for Fine-Grained Inference-Time Steering in Large Language Models

Researchers introduce FineSteer, a novel framework for controlling Large Language Model behavior at inference time through two-stage steering: conditional guidance and expert-based vector synthesis. The method achieves superior safety and truthfulness performance while preserving model utility more effectively than existing approaches, without requiring parameter updates.
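
The summary mentions inference-time steering without parameter updates. A generic activation-steering sketch is shown below, using a forward hook to add a vector to one layer's hidden states; it does not reproduce FineSteer's two-stage conditional guidance and expert-vector synthesis, and the model, layer index, and vector are placeholders.

```python
# Generic activation-steering sketch (not FineSteer's method): add a fixed
# steering vector to one transformer layer's output at inference time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                                 # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

layer = model.transformer.h[6]                      # layer index chosen arbitrarily
steer = torch.randn(model.config.n_embd) * 0.05     # stand-in for a learned direction

def add_steering(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + steer                         # nudge every position along the direction
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = layer.register_forward_hook(add_steering)
ids = tok("The safest way to respond is", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```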

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

Symbolic Guardrails for Domain-Specific Agents: Stronger Safety and Security Guarantees Without Sacrificing Utility

Researchers present symbolic guardrails as a practical approach to enforce safety and security constraints on AI agents that use external tools. Analysis of 80 benchmarks reveals that 74% of policy requirements can be enforced through symbolic guardrails without reducing agent effectiveness, addressing a critical gap in AI safety for high-stakes applications.
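
As a minimal illustration of the general idea (not the paper's system), a symbolic guardrail can be a set of declarative rules checked against every proposed tool call before execution; the rule schema and tool names below are invented.

```python
# Minimal illustration of a symbolic guardrail layer: declarative rules are
# checked against each proposed tool call before it runs. Rules and tools are invented.
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str
    args: dict

RULES = [
    # (description, predicate returning True when the call is ALLOWED)
    ("no outbound transfers above $500",
     lambda c: not (c.tool == "send_payment" and c.args.get("amount", 0) > 500)),
    ("email only to the company domain",
     lambda c: c.tool != "send_email" or c.args.get("to", "").endswith("@example.com")),
]

def check(call: ToolCall):
    violations = [desc for desc, ok in RULES if not ok(call)]
    return (len(violations) == 0, violations)

allowed, why = check(ToolCall("send_payment", {"amount": 900, "to": "acct-123"}))
print(allowed, why)   # False ['no outbound transfers above $500']
```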

AI · Neutral · arXiv – CS AI · 6d ago · 7/10

Why Fine-Tuning Encourages Hallucinations and How to Fix It

Researchers identify that supervised fine-tuning of large language models increases hallucinations by degrading pre-existing knowledge through semantic interference. The study proposes self-distillation and parameter freezing techniques to mitigate this problem while preserving task performance.
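
The summary names two mitigations, self-distillation and parameter freezing. A compact sketch of how they could be combined in a fine-tuning loop is below; the layer split, loss weighting, and toy model are assumptions, not the paper's recipe.

```python
# Sketch of the two mitigations named above: freeze most parameters and add a
# self-distillation term keeping the fine-tuned model close to its frozen copy.
# The 0.5 weighting and "train only later layers" split are assumptions.
import copy
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(                 # stand-in for an LLM
    torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)
teacher = copy.deepcopy(model).eval()        # frozen pre-fine-tuning copy
for p in teacher.parameters():
    p.requires_grad_(False)

for p in model[0].parameters():              # parameter freezing: keep early layers fixed
    p.requires_grad_(False)

opt = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-3)

x = torch.randn(16, 32)
y = torch.randint(0, 10, (16,))
for _ in range(5):
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    distill_loss = F.kl_div(                 # self-distillation toward the frozen copy
        F.log_softmax(logits, dim=-1),
        F.softmax(teacher(x), dim=-1),
        reduction="batchmean",
    )
    loss = task_loss + 0.5 * distill_loss
    opt.zero_grad(); loss.backward(); opt.step()
```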

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

PolicyBank: Evolving Policy Understanding for LLM Agents

Researchers introduce PolicyBank, a memory mechanism that allows LLM agents to autonomously refine their understanding of organizational policies through iterative feedback and testing, rather than treating policies as immutable rules. The system addresses a critical AI alignment challenge where natural-language policy specifications contain ambiguities and gaps that cause agent behavior to diverge from intended requirements, achieving up to 82% closure of specification gaps compared to near-zero success with existing memory mechanisms.

AI · Bearish · arXiv – CS AI · 6d ago · 7/10

Reckoning with the Political Economy of AI: Avoiding Decoys in Pursuit of Accountability

A research paper argues that the AI industry uses rhetorical 'decoys'—seemingly critical frameworks around fairness and accountability—that actually reinforce existing power structures rather than challenge them. The authors contend that meaningful AI accountability requires examining the underlying political economy and networks of wealth concentration driving AI development, not just surface-level governance discussions.

AI · Bearish · arXiv – CS AI · 6d ago · 7/10

HarmfulSkillBench: How Do Harmful Skills Weaponize Your Agents?

Researchers have identified that 4.93% of skills in major LLM agent ecosystems are harmful and can be weaponized for cyberattacks, fraud, and privacy violations. The study reveals that presenting harmful tasks through pre-installed skills dramatically reduces AI model refusal rates, with harm scores increasing from 0.27 to 0.76 when intent is implicit rather than explicit.

AI · Bearish · arXiv – CS AI · 6d ago · 7/10

The Illusion of Equivalence: Systematic FP16 Divergence in KV-Cached Autoregressive Inference

Researchers have discovered that FP16 floating-point precision causes systematic numerical divergence between KV-cached and cache-free inference in transformer models, producing 100% token divergence across multiple architectures. This challenges the long-held assumption that KV caching is numerically equivalent to standard computation, with controlled FP32 experiments confirming FP16 non-associativity as the causal mechanism.
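
The claimed mechanism is FP16 non-associativity: summing the same contributions in a different order, which is effectively what cached versus cache-free attention does, can give bit-different results. That effect is easy to reproduce in isolation, as in the small demonstration below (not the paper's experiment).

```python
# FP16 addition is not associative, so reordering the same reduction can change
# the bits of the result; this toy loop shows the effect in isolation.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096).astype(np.float16)

forward = np.float16(0)
for v in x:                 # one summation order
    forward += v

backward = np.float16(0)
for v in x[::-1]:           # same values, reversed order
    backward += v

print(forward, backward, forward == backward)
# Typically prints two slightly different values; repeating this in FP32 shrinks
# the gap by orders of magnitude, consistent with precision being the cause.
```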

AI · Neutral · arXiv – CS AI · 6d ago · 7/10

Hallucination as Trajectory Commitment: Causal Evidence for Asymmetric Attractor Dynamics in Transformer Generation

Researchers demonstrate through causal experiments that hallucinations in language models arise from early trajectory commitments governed by asymmetric attractor dynamics. Using controlled prompt bifurcation on Qwen2.5-1.5B, they show that 44% of test prompts diverge into factual or hallucinated outputs at the first token, with activation patterns revealing that corrupting correct trajectories is far easier than recovering hallucinated ones—suggesting hallucination represents a stable but difficult-to-escape attractor state.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

StoSignSGD: Unbiased Structural Stochasticity Fixes SignSGD for Training Large Language Models

Researchers introduce StoSignSGD, a novel optimization algorithm that fixes convergence issues in SignSGD by injecting structural stochasticity while maintaining unbiased updates. The algorithm demonstrates 1.44x to 2.14x speedup in low-precision FP8 LLM pretraining where AdamW fails, and outperforms existing optimizers in mathematical reasoning fine-tuning tasks.
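
The summary only says the fix is injected stochasticity that keeps sign-style updates unbiased. One textbook way to get an unbiased sign update is to compare each gradient entry against symmetric uniform noise, so the expected sign is proportional to the gradient; the sketch below shows that construction, which may differ from StoSignSGD's exact mechanism.

```python
# Illustrative "stochastic sign" step: compare each gradient entry to symmetric
# uniform noise on [-B, B]; then E[sign(g - u)] = g / B for |g| <= B, so the sign
# update is unbiased up to a scale. This is a generic construction and may not
# match StoSignSGD's actual algorithm.
import torch

def stochastic_sign_step(params, lr=1e-3, clip=1.0):
    for p in params:
        if p.grad is None:
            continue
        g = p.grad.clamp(-clip, clip)                 # keep |g| <= B for unbiasedness
        noise = (torch.rand_like(g) * 2 - 1) * clip   # u ~ Uniform(-B, B)
        p.data.add_(torch.sign(g - noise), alpha=-lr)

# Usage on a toy least-squares problem:
w = torch.randn(10, requires_grad=True)
target = torch.ones(10)
for _ in range(200):
    loss = ((w - target) ** 2).sum()
    loss.backward()
    stochastic_sign_step([w], lr=0.01)
    w.grad.zero_()
print(w.detach().round())
```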

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

Exascale Multi-Task Graph Foundation Models for Imbalanced, Multi-Fidelity Atomistic Data

Researchers have developed an exascale workflow using graph foundation models trained on 544+ million atomistic structures to accelerate materials discovery. The system can screen 1.1 billion structures in 50 seconds—a task requiring years of traditional computation—and demonstrates strong transfer learning capabilities across diverse chemical applications.

Page 67 of 1,063