🧠 AI

12,898 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 12 · Importance 6/10

HEAL: Hindsight Entropy-Assisted Learning for Reasoning Distillation

Researchers introduce HEAL (Hindsight Entropy-Assisted Learning), a new framework for distilling reasoning capabilities from large AI models into smaller ones. The method overcomes traditional limitations by using three core modules to bridge reasoning gaps and significantly outperforms standard distillation techniques.

🏢 Perplexity
AI · Neutral · arXiv – CS AI · Mar 12 · Importance 6/10

Verbalizing LLM's Higher-order Uncertainty via Imprecise Probabilities

Researchers propose new uncertainty elicitation techniques for large language models using imprecise probabilities framework to better capture higher-order uncertainty. The approach addresses systematic failures in ambiguous question-answering and self-reflection by quantifying both first-order uncertainty over responses and second-order uncertainty about the probability model itself.
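
The paper's elicitation protocol isn't spelled out in this summary; as a loose illustration of the imprecise-probabilities idea, repeated verbalized confidences for one question can be collapsed into an interval whose width signals second-order uncertainty (the function name and all numbers below are hypothetical):

```python
def credal_interval(confidences):
    """Summarise repeated verbalized confidences for one question as an
    imprecise-probability interval [lower, upper]. The level of the values
    reflects first-order uncertainty over responses; a wide interval flags
    high second-order uncertainty about the probability estimate itself."""
    lo, hi = min(confidences), max(confidences)
    return lo, hi, hi - lo

# Five hypothetical elicitations of P(answer is correct) for an ambiguous question
lo, hi, width = credal_interval([0.3, 0.7, 0.5, 0.9, 0.4])
print(f"interval=[{lo}, {hi}], width={width:.1f}")
```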

AI · Bullish · arXiv – CS AI · Mar 12 · Importance 6/10

When Fine-Tuning Fails and when it Generalises: Role of Data Diversity and Mixed Training in LLM-based TTS

Research demonstrates that LoRA fine-tuning of large language models significantly improves text-to-speech systems, achieving up to 0.42 DNS-MOS gains and 34% SNR improvements when training data has sufficient acoustic diversity. The study establishes LoRA as an effective mechanism for speaker adaptation in compact LLM-based TTS systems, outperforming frozen base models across perceptual quality, speaker fidelity, and signal quality metrics.
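
The summary names LoRA as the adaptation mechanism; the following is a minimal NumPy sketch of how a LoRA update works in general, not the paper's TTS setup (dimensions, rank, and scaling are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4                  # illustrative sizes only

W = rng.standard_normal((d_out, d_in))         # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, rank))                    # trainable up-projection, zero-init

def lora_forward(x, alpha=8.0):
    """Base output plus scaled low-rank update: (W + (alpha/rank) * B @ A) @ x."""
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Zero-initialised B makes the adapter an exact no-op at the start of training,
# while only rank * (d_in + d_out) = 512 parameters are trained vs 4096 in W.
assert np.allclose(lora_forward(x), W @ x)
```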

AI · Bullish · arXiv – CS AI · Mar 12 · Importance 6/10

LookaheadKV: Fast and Accurate KV Cache Eviction by Glimpsing into the Future without Generation

Researchers have developed LookaheadKV, a new framework that significantly improves memory efficiency in large language models by intelligently evicting less important cached data. The method achieves superior accuracy while reducing computational costs by up to 14.5x compared to existing approaches, making long-context AI tasks more practical.
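
The summary doesn't describe LookaheadKV's scoring rule; as a generic sketch of attention-mass-based KV cache eviction (a common baseline in this line of work, not the paper's method):

```python
import numpy as np

def evict_kv(keys, values, attn_weights, budget):
    """Keep the `budget` cache entries with the largest accumulated attention
    mass and drop the rest. attn_weights has shape (num_queries, num_entries);
    entries that past queries attended to most are treated as most important."""
    scores = attn_weights.sum(axis=0)             # importance per cache entry
    keep = np.sort(np.argsort(scores)[-budget:])  # top-budget, original order
    return keys[keep], values[keep], keep

rng = np.random.default_rng(1)
T, d = 8, 4                                       # toy cache: 8 entries, dim 4
keys = rng.standard_normal((T, d))
values = rng.standard_normal((T, d))
attn = rng.random((5, T))                         # 5 queries over 8 entries

kept_k, kept_v, kept_idx = evict_kv(keys, values, attn, budget=3)
assert kept_k.shape == (3, d)
```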

AI · Bullish · arXiv – CS AI · Mar 12 · Importance 6/10

Dynamics-Predictive Sampling for Active RL Finetuning of Large Reasoning Models

Researchers propose Dynamics-Predictive Sampling (DPS), a new method that improves reinforcement learning finetuning of large language models by predicting which training prompts will be most informative without expensive computational rollouts. The technique models each prompt's learning progress as a dynamical system and uses Bayesian inference to select better training data, reducing computational overhead while achieving superior reasoning performance.
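
DPS's dynamical-system model isn't detailed in this summary; a toy stand-in for rollout-free Bayesian prompt selection is to keep a Beta posterior over each prompt's solve rate and pick the prompt whose estimated rate is closest to 0.5, where a rollout is expected to be most informative (illustrative only, not the paper's algorithm):

```python
import numpy as np

class BetaPromptSelector:
    """Toy Bayesian selector: a Beta(a, b) posterior per prompt, updated from
    observed solve/fail outcomes instead of fresh rollouts at selection time."""

    def __init__(self, n_prompts):
        self.a = np.ones(n_prompts)   # uniform Beta(1, 1) priors
        self.b = np.ones(n_prompts)

    def select(self):
        """Pick the prompt whose posterior mean solve rate is nearest 0.5."""
        mean = self.a / (self.a + self.b)
        return int(np.argmin(np.abs(mean - 0.5)))

    def update(self, i, solved):
        if solved:
            self.a[i] += 1
        else:
            self.b[i] += 1

sel = BetaPromptSelector(3)
sel.update(0, True); sel.update(0, True)    # prompt 0 looks easy (mean 0.75)
sel.update(1, False); sel.update(1, False)  # prompt 1 looks hard (mean 0.25)
print(sel.select())  # → 2: the unexplored prompt, still nearest 0.5
```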

AI · Bullish · arXiv – CS AI · Mar 12 · Importance 6/10

Speaker Verification with Speech-Aware LLMs: Evaluation and Augmentation

Researchers developed a protocol to evaluate speaker verification capabilities in speech-aware large language models, finding weak performance with error rates above 20%. They introduced ECAPA-LLM, a lightweight augmentation that achieves 1.03% error rate by integrating speaker embeddings while maintaining natural language interface.
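
ECAPA-LLM's integration details aren't in this summary; at its core, embedding-based speaker verification reduces to thresholding the cosine similarity of two speaker embeddings, sketched here with made-up vectors and an illustrative threshold:

```python
import numpy as np

def verify(emb_a, emb_b, threshold=0.7):
    """Accept a trial when the cosine similarity of two speaker embeddings
    exceeds the threshold (0.7 is illustrative; in practice it is tuned on
    development data to hit a target error rate)."""
    sim = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return sim > threshold, sim

same = np.array([1.0, 0.2, 0.1])      # hypothetical enrollment embedding
near = same + 0.05                    # same speaker, slight channel variation
other = np.array([-0.3, 1.0, 0.8])    # hypothetical different speaker

assert verify(same, near)[0]          # accepted
assert not verify(same, other)[0]     # rejected
```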

AI · Bullish · arXiv – CS AI · Mar 12 · Importance 6/10

Towards Cold-Start Drafting and Continual Refining: A Value-Driven Memory Approach with Application to NPU Kernel Synthesis

Researchers introduce EvoKernel, a self-evolving AI framework that addresses the 'Data Wall' problem in deploying Large Language Models for kernel synthesis on data-scarce hardware platforms like NPUs. The system uses memory-based reinforcement learning to improve correctness from 11% to 83% and achieves 3.60x speedup through iterative refinement.

AI · Neutral · arXiv – CS AI · Mar 12 · Importance 6/10

Towards Robust Speech Deepfake Detection via Human-Inspired Reasoning

Researchers propose HIR-SDD, a new framework combining Large Audio Language Models with human-inspired reasoning to detect speech deepfakes. The method aims to improve generalization across different audio domains and provide interpretable explanations for deepfake detection decisions.

AI · Neutral · arXiv – CS AI · Mar 12 · Importance 6/10

Probabilistic Verification of Voice Anti-Spoofing Models

Researchers have developed PV-VASM, a probabilistic framework for verifying the robustness of voice anti-spoofing models against deepfake attacks. The model-agnostic approach estimates misclassification probability under various speech synthesis techniques including text-to-speech and voice cloning, providing formal robustness guarantees against unseen generation methods.

AI · Neutral · arXiv – CS AI · Mar 12 · Importance 6/10

RandMark: On Random Watermarking of Visual Foundation Models

Researchers propose RandMark, a new method for watermarking visual foundation models to protect intellectual property rights. The approach uses a small encoder-decoder network to embed random digital watermarks into internal representations, enabling ownership verification with low false detection rates.

AI · Bullish · arXiv – CS AI · Mar 12 · Importance 6/10

Learning to Negotiate: Multi-Agent Deliberation for Collective Value Alignment in LLMs

Researchers propose a multi-agent negotiation framework for aligning large language models in scenarios involving conflicting stakeholder values. The approach uses two LLM instances with opposing personas engaging in structured dialogue to develop conflict resolution capabilities while maintaining alignment with collective values.

AI · Bullish · arXiv – CS AI · Mar 12 · Importance 6/10

Aligning Large Language Models with Searcher Preferences

Researchers introduce SearchLLM, the first large language model designed for open-ended generative search, featuring a hierarchical reward system that balances safety constraints with user alignment. The model was deployed on RedNote's AI search platform, showing significant improvements in user engagement with a 1.03% increase in Valid Consumption Rate and 2.81% reduction in Re-search Rate.

AI · Bullish · arXiv – CS AI · Mar 12 · Importance 6/10

Causal Concept Graphs in LLM Latent Space for Stepwise Reasoning

Researchers developed Causal Concept Graphs (CCG), a new method for understanding how concepts interact during multi-step reasoning in language models by creating directed graphs of causal dependencies between interpretable features. Testing on GPT-2 Medium across reasoning tasks showed CCG significantly outperformed existing methods with a Causal Fidelity Score of 5.654, demonstrating more effective intervention targeting than random approaches.

AI · Bearish · arXiv – CS AI · Mar 12 · Importance 6/10

Reactive Writers: How Co-Writing with AI Changes How We Engage with Ideas

A research study reveals that AI co-writing tools fundamentally change how people write by shifting them into 'Reactive Writing' mode, where writers evaluate AI suggestions rather than generating original ideas first. This process influences writers' opinions and expressed views without them realizing the AI's impact, as they focus on suggestion evaluation rather than traditional ideation.

AI · Neutral · arXiv – CS AI · Mar 12 · Importance 6/10

Contract And Conquer: How to Provably Compute Adversarial Examples for a Black-Box Model?

Researchers propose Contract And Conquer (CAC), a new method for provably generating adversarial examples against black-box neural networks using knowledge distillation and search space contraction. The approach provides theoretical guarantees for finding adversarial examples within a fixed number of iterations and outperforms existing methods on ImageNet datasets including vision transformers.

AI · Bullish · arXiv – CS AI · Mar 12 · Importance 6/10

Designing Service Systems from Textual Evidence

Researchers developed PP-LUCB, an algorithm that efficiently identifies optimal service system configurations by combining biased AI evaluation with selective human audits. The method reduces human audit costs by 90% while maintaining accuracy in selecting the best performing systems from textual evidence like customer support transcripts.

AI · Bullish · arXiv – CS AI · Mar 12 · Importance 6/10

CLIPO: Contrastive Learning in Policy Optimization Generalizes RLVR

Researchers introduce CLIPO (Contrastive Learning in Policy Optimization), a new method that improves upon Reinforcement Learning with Verifiable Rewards (RLVR) for training Large Language Models. CLIPO addresses hallucination and answer-copying issues by incorporating contrastive learning to better capture correct reasoning patterns across multiple solution paths.

AI · Bearish · Fortune Crypto · Mar 11 · Importance 7/10

‘Proceed with caution’: Elon Musk offers warning after Amazon reportedly held mandatory meeting to address ‘high blast radius’ AI-related incident

Amazon reportedly held a mandatory meeting to address a significant AI-related infrastructure incident with 'high blast radius' impact. Senior VP Dave Treadwell acknowledged recent poor availability of Amazon's site and related infrastructure, while Elon Musk responded with a warning to 'proceed with caution'.

AI · Bearish · Decrypt – AI · Mar 11 · Importance 6/10

Grammarly Disables AI 'Expert Review' After Backlash From Authors and Journalists

Grammarly disabled its AI 'Expert Review' feature following criticism from authors and journalists who discovered the tool used real experts' identities, including deceased individuals, without obtaining proper consent. The company has announced it will reconsider the tool's implementation in response to the backlash.
