y0news

AI × Crypto News Feed

Real-time AI-curated news from 30,356+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

SPARQ: Spiking Early-Exit Neural Networks for Energy-Efficient Edge AI

SPARQ introduces a unified framework combining spiking neural networks, quantization-aware training, and reinforcement learning-guided early exits for energy-efficient edge AI. The system achieves up to 5.15% higher accuracy than conventional quantized SNNs while reducing system energy consumption by more than 330× and cutting synaptic operations by over 90%.
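The early-exit idea in this summary can be sketched in a few lines: attach a classification head at intermediate depths and stop as soon as one is confident. This is a minimal illustrative sketch of confidence-thresholded early exit, not SPARQ's RL-guided policy; all names and the threshold are assumptions.

```python
def forward_with_early_exit(layers, exit_heads, x, threshold=0.9):
    """Run layers in order; return at the first exit head whose
    top-class confidence clears the threshold."""
    probs = None
    for layer, head in zip(layers, exit_heads):
        x = layer(x)
        probs = head(x)              # class probabilities at this depth
        if max(probs) >= threshold:  # confident enough: exit early,
            return probs, True       # skipping the remaining layers
    return probs, False              # fell through to the final head
```

Skipping later layers is where the energy savings come from: a confident sample never pays for the deeper computation.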

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

Membership Inference for Contrastive Pre-training Models with Text-only PII Queries

Researchers developed UMID, a new text-only auditing framework to detect whether personally identifiable information was memorized during the training of multimodal AI models such as CLIP and CLAP. The method significantly improves the efficiency and effectiveness of membership inference attacks while maintaining privacy constraints.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

Efficient Federated Conformal Prediction with Group-Conditional Guarantee

Researchers propose group-conditional federated conformal prediction (GC-FCP), a new protocol that enables trustworthy AI uncertainty quantification across distributed clients while providing coverage guarantees for specific groups. The framework addresses challenges in federated learning for applications in healthcare, finance, and mobile sensing by creating compact weighted summaries that support efficient calibration.
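The group-conditional guarantee in this summary can be illustrated with plain split conformal prediction computed per group: each group gets its own calibration threshold, so coverage holds within every group rather than only on average. This is a toy sketch of that idea, not the GC-FCP federated protocol; the data and function names are assumptions.

```python
import math

def group_thresholds(scores_by_group, alpha=0.1):
    """Per-group conformal threshold: the ceil((n+1)(1-alpha))-th smallest
    calibration nonconformity score within each group."""
    thresholds = {}
    for group, scores in scores_by_group.items():
        n = len(scores)
        k = min(math.ceil((n + 1) * (1 - alpha)), n)  # rank of the quantile
        thresholds[group] = sorted(scores)[k - 1]
    return thresholds

def prediction_set(label_scores, group, thresholds):
    """Keep every label whose nonconformity score is within the group's threshold."""
    t = thresholds[group]
    return sorted(label for label, s in label_scores.items() if s <= t)
```

In the federated setting described above, the per-group calibration scores would live on different clients; the "compact weighted summaries" the paper mentions would replace the raw score lists here.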

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

Bridging the Gap in the Responsible AI Divides

Researchers analyzed 3,550 papers to map the divide between AI Safety (AIS) and AI Ethics (AIE) communities, proposing a 'critical bridging' approach to reconcile tensions. The study identifies four engagement modes and finds overlapping concerns around transparency, reproducibility, and governance despite fundamental differences in approach.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

APEX-Searcher: Augmenting LLMs' Search Capabilities through Agentic Planning and Execution

Researchers introduce APEX-Searcher, a new framework that enhances large language models' search capabilities through a two-stage approach combining reinforcement learning for strategic planning and supervised fine-tuning for execution. The system addresses limitations in multi-hop question answering by decoupling retrieval processes into planning and execution phases, showing significant improvements across multiple benchmarks.

🧠 AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

Sirens' Whisper: Inaudible Near-Ultrasonic Jailbreaks of Speech-Driven LLMs

Researchers developed SWhisper, a framework that uses near-ultrasonic audio to deliver covert jailbreak attacks against speech-driven AI systems. The technique is inaudible to humans but can successfully bypass AI safety measures with up to 94% effectiveness on commercial models.

🧠 AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

Large Language Models Reproduce Racial Stereotypes When Used for Text Annotation

A comprehensive study of 19 large language models reveals systematic racial bias in automated text annotation, with over 4 million judgments showing LLMs consistently reproduce harmful stereotypes based on names and dialect. The research demonstrates that AI models rate texts with Black-associated names as more aggressive and those written in African American Vernacular English as less professional and more toxic.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Fine-tuning is Not Enough: A Parallel Framework for Collaborative Imitation and Reinforcement Learning in End-to-end Autonomous Driving

Researchers propose PaIR-Drive, a new parallel framework that combines imitation learning and reinforcement learning for autonomous driving, achieving 91.2 PDMS on the NAVSIMv1 benchmark. The approach addresses the limitations of sequential fine-tuning by running IL and RL in parallel branches, outperforming existing methods.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Purifying Generative LLMs from Backdoors without Prior Knowledge or Clean Reference

Researchers developed a new framework to remove backdoors from large language models without prior knowledge of triggers or clean reference models. The method uses an immunization-inspired approach that creates synthetic backdoored variants to identify and neutralize malicious components while preserving the model's generative capabilities.

🧠 AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

Widespread Gender and Pronoun Bias in Moral Judgments Across LLMs

A comprehensive study of six major LLM families reveals systematic biases in moral judgments based on gender pronouns and grammatical markers. The research found that AI models consistently favor non-binary subjects while penalizing male subjects in fairness assessments, raising concerns about embedded biases in AI ethical decision-making.

🏢 Meta · 🧠 Grok
🧠 AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

EvoClaw: Evaluating AI Agents on Continuous Software Evolution

Researchers introduce EvoClaw, a new benchmark that evaluates AI agents on continuous software evolution rather than isolated coding tasks. The study reveals a critical performance drop from >80% on isolated tasks to at most 38% in continuous settings across 12 frontier models, highlighting AI agents' struggle with long-term software maintenance.

🧠 AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

$\tau$-Voice: Benchmarking Full-Duplex Voice Agents on Real-World Domains

Researchers introduce τ-voice, a new benchmark for evaluating full-duplex voice AI agents on complex real-world tasks. The study reveals significant performance gaps, with voice agents achieving only 30-45% of text-based AI capability under realistic conditions with noise and diverse accents.

🧠 GPT-5
🧠 AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

VisualLeakBench: Auditing the Fragility of Large Vision-Language Models against PII Leakage and Social Engineering

Researchers introduced VisualLeakBench, a new evaluation suite that tests Large Vision-Language Models (LVLMs) for vulnerabilities to privacy attacks through visual inputs. The study found significant weaknesses in frontier AI systems like GPT-5.2, Claude-4, Gemini-3 Flash, and Grok-4, with Claude-4 showing the highest PII leakage rate at 74.4% despite having strong OCR attack resistance.

🧠 GPT-5 · 🧠 Claude · 🧠 Gemini
🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

Accelerating Suffix Jailbreak attacks with Prefix-Shared KV-cache

Researchers developed Prefix-Shared KV Cache (PSKV), a new technique that accelerates jailbreak attacks on Large Language Models by 40% while reducing memory usage by 50%. The method optimizes the red-teaming process by sharing cached prefixes across multiple attack attempts, enabling more efficient parallel inference without compromising attack success rates.
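The prefix-sharing idea in this summary can be shown with a toy cache: attack prompts that share a common system/prompt prefix compute the expensive per-prefix state once and reuse it on every subsequent attempt. This is an illustrative sketch of the concept only, not the PSKV implementation; the class and its counters are assumptions, and a dict value stands in for real transformer KV tensors.

```python
class PrefixKVCache:
    """Cache expensive per-prefix state (standing in for transformer
    KV tensors) so prompts sharing a prefix compute it only once."""

    def __init__(self):
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def get_state(self, prefix, compute):
        if prefix not in self._cache:
            self._cache[prefix] = compute(prefix)  # first attempt pays the cost
            self.misses += 1
        else:
            self.hits += 1                         # later attempts reuse it
        return self._cache[prefix]
```

With N suffix variations against one shared prefix, the prefix's state is computed once and reused N-1 times, which is where the reported speedup and memory savings would come from.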

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

Safety-Guided Flow (SGF): A Unified Framework for Negative Guidance in Safe Generation

Researchers introduce Safety-Guided Flow (SGF), a unified probabilistic framework that combines control barrier functions with negative guidance approaches to improve safety in AI-generated content. The framework identifies a critical time window during the denoising process where strong negative guidance is most effective for preventing harmful outputs.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

Real-World AI Evaluation: How FRAME Generates Systematic Evidence to Resolve the Decision-Maker's Dilemma

FRAME (Forum for Real World AI Measurement and Evaluation) addresses the challenge organizational leaders face in governing AI systems without systematic evidence of real-world performance. The framework combines large-scale AI trials with structured observation of contextual use and outcomes, utilizing a Testing Sandbox and Metrics Hub to provide actionable insights.

$MKR
🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Preventing Curriculum Collapse in Self-Evolving Reasoning Systems

Researchers introduce Prism, a new self-evolving AI reasoning system that prevents diversity collapse in problem generation by maintaining semantic coverage across mathematical problem spaces. The system achieved significant accuracy improvements over existing methods on mathematical reasoning benchmarks and generated 100k diverse mathematical questions.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

RelayCaching: Accelerating LLM Collaboration via Decoding KV Cache Reuse

Researchers introduce RelayCaching, a training-free method that accelerates multi-agent LLM systems by reusing KV cache data from previous agents to eliminate redundant computation. The technique achieves over 80% cache reuse and reduces time-to-first-token by up to 4.7x while maintaining accuracy across mathematical reasoning, knowledge tasks, and code generation.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

UniVid: Pyramid Diffusion Model for High Quality Video Generation

Researchers have developed UniVid, a new pyramid diffusion model that unifies text-to-video and image-to-video generation into a single system. The model uses dual-stream cross-attention mechanisms to process both text prompts and reference images, achieving superior temporal coherence across different video generation tasks.

🤖 AI × Crypto · Bullish · arXiv – CS AI · Mar 17 · 7/10

TAS-GNN: A Status-Aware Signed Graph Neural Network for Anomaly Detection in Bitcoin Trust Systems

Researchers developed TAS-GNN, a novel Graph Neural Network framework specifically designed to detect fraudulent behavior in Bitcoin trust systems. The system addresses critical limitations in existing anomaly detection methods by using a dual-channel architecture that separately processes trust and distrust signals to better identify Sybil attacks and exit scams.

$BTC
🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

The AI Transformation Gap Index (AITG): An Empirical Framework for Measuring AI Transformation Opportunity, Disruption Risk, and Value Creation at the Industry and Firm Level

Researchers introduce the AI Transformation Gap Index (AITG), the first empirical framework to measure firms' AI readiness relative to competitors and translate it into quantifiable financial outcomes. The framework analyzes 22 industries and shows that larger AI transformation gaps don't always create the highest value due to implementation challenges and timing issues.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

ICaRus: Identical Cache Reuse for Efficient Multi Model Inference

ICaRus introduces a novel architecture enabling multiple AI models to share identical Key-Value (KV) caches, addressing memory explosion issues in multi-model inference systems. The solution achieves up to 11.1x lower latency and 3.8x higher throughput by allowing cross-model cache reuse while maintaining comparable accuracy to task-specific fine-tuned models.

Page 217 of 1215