y0news
🧠 AI

13,292 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 27 · 10:15

Learning to Generate Secure Code via Token-Level Rewards

Researchers have developed Vul2Safe, a new framework for generating secure code using large language models, which addresses security vulnerabilities through self-reflection and token-level reinforcement learning. The approach introduces the PrimeVul+ dataset and SRCode training framework to provide more precise optimization of security patterns in code generation.
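
The paper's exact reward scheme is not shown here, but the core idea of token-level (rather than sequence-level) rewards can be sketched generically. The tokens, the "insecure span", and the reward values below are invented for illustration:

```python
# Sketch of sequence-level vs token-level reward assignment for code
# generation. Tokens, the flagged span, and reward magnitudes are
# illustrative placeholders, not the paper's actual scheme.

tokens = ["query", "=", '"SELECT * WHERE id="', "+", "user_input"]
insecure = {3, 4}  # indices forming a string-concatenated SQL injection

# Sequence-level: one scalar penalty spread uniformly over every token,
# so harmless tokens are punished as much as the vulnerable ones.
seq_reward = -1.0
uniform = [seq_reward / len(tokens)] * len(tokens)

# Token-level: the penalty is concentrated on the tokens that cause the
# vulnerability, so the policy gradient points away from exactly those.
token_level = [-0.5 if i in insecure else 0.0 for i in range(len(tokens))]

print(uniform)
print(token_level)
```

The contrast is the point: finer credit assignment lets reinforcement learning discourage the insecure construction without penalizing the surrounding correct code.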

AI · Bullish · arXiv – CS AI · Mar 27 · 10:12

Hello-Chat: Towards Realistic Social Audio Interactions

Researchers have introduced Hello-Chat, an end-to-end audio language model designed to create more realistic and emotionally resonant AI conversations. The model addresses the robotic nature of existing Large Audio Language Models by using real-life conversation data and achieving breakthrough performance in prosodic naturalness and emotional alignment.

AI · Bullish · arXiv – CS AI · Mar 27 · 10:13

Brain-OF: An Omnifunctional Foundation Model for fMRI, EEG and MEG

Researchers have developed Brain-OF, the first omnifunctional brain foundation model that can process fMRI, EEG, and MEG data simultaneously within a unified framework. The model introduces novel techniques like Any-Resolution Neural Signal Sampler and Masked Temporal-Frequency Modeling, trained on 40 datasets to achieve superior performance across diverse neuroscience tasks.

AI · Bullish · arXiv – CS AI · Mar 26 · 10:10

Long Range Frequency Tuning for QML

Researchers have developed a new quantum machine learning optimization technique using ternary encodings that significantly improves frequency tuning efficiency. The method achieves 22.8% better performance than existing approaches while requiring exponentially fewer encoding gates than traditional fixed-frequency methods.

AI · Bullish · arXiv – CS AI · Mar 26 · 10:17

Higress-RAG: A Holistic Optimization Framework for Enterprise Retrieval-Augmented Generation via Dual Hybrid Retrieval, Adaptive Routing, and CRAG

Researchers have developed Higress-RAG, a new enterprise-grade framework that addresses key challenges in Retrieval-Augmented Generation systems including low retrieval precision, hallucination, and high latency. The system introduces innovations like 50ms semantic caching, hybrid retrieval methods, and corrective evaluation to optimize the entire RAG pipeline for production use.
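
Semantic caching, one of the techniques cited, can be illustrated with a minimal sketch: answer a new query from cache when its embedding is close enough to a previously answered one. The bag-of-words embedding and threshold below are stand-ins, not what Higress-RAG actually uses:

```python
# Minimal semantic-cache sketch: reuse a stored answer when a new query's
# embedding is sufficiently similar to an already-answered query.
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words counts (a real system would use a
    # sentence-embedding model).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (query_embedding, answer)

    def get(self, query):
        qe = embed(query)
        best = max(self.entries, key=lambda e: cosine(qe, e[0]), default=None)
        if best and cosine(qe, best[0]) >= self.threshold:
            return best[1]  # cache hit: skip retrieval and generation
        return None         # cache miss: run the full RAG pipeline

    def put(self, query, answer):
        self.entries.append((embed(query), answer))

cache = SemanticCache()
cache.put("what is the refund policy", "Refunds within 30 days.")
print(cache.get("what is the refund policy"))
```

A hit replaces an end-to-end retrieval-plus-generation call with a similarity lookup, which is how sub-100ms response times become possible for repeated or near-duplicate queries.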

AI · Bullish · arXiv – CS AI · Mar 26 · 10:12

Democratizing GraphRAG: Linear, CPU-Only Graph Retrieval for Multi-Hop QA

Researchers present SPRIG, a CPU-only GraphRAG system that eliminates expensive LLM-based graph construction and GPU requirements for multi-hop question answering. The system uses lightweight NER-driven co-occurrence graphs with Personalized PageRank, achieving comparable performance while reducing computational costs by 28%.
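
Personalized PageRank, the retrieval primitive named here, runs cheaply on CPU because it is just repeated sparse vector-matrix products. A minimal sketch over a toy co-occurrence graph (entities and edges invented for illustration, not from the SPRIG paper):

```python
# Personalized PageRank over a toy entity co-occurrence graph.
# The random walk restarts at the question's seed entities, so score
# concentrates on entities reachable from them in a few hops.

def personalized_pagerank(adj, seeds, alpha=0.85, iters=50):
    """adj: {node: [neighbors]}; seeds: restart-distribution nodes."""
    nodes = list(adj)
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(restart)
    for _ in range(iters):
        new = {n: (1 - alpha) * restart[n] for n in nodes}
        for n in nodes:
            out = adj[n]
            if not out:
                continue
            share = alpha * rank[n] / len(out)  # spread mass to neighbors
            for m in out:
                new[m] += share
        rank = new
    return rank

graph = {
    "Marie Curie": ["Radium", "Sorbonne"],
    "Radium": ["Marie Curie", "Polonium"],
    "Polonium": ["Radium"],
    "Sorbonne": ["Marie Curie"],
}
scores = personalized_pagerank(graph, seeds={"Marie Curie"})
print(sorted(scores, key=scores.get, reverse=True))
```

For multi-hop QA, the seed set would be the entities recognized in the question, and the top-ranked graph nodes select which passages to feed the reader model.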

AI · Neutral · arXiv – CS AI · Mar 26 · 10:13

Human or Machine? A Preliminary Turing Test for Speech-to-Speech Interaction

Researchers conducted the first Turing test for speech-to-speech AI systems, analyzing 2,968 human judgments across 9 state-of-the-art systems. No current S2S system passed the test, with failures primarily stemming from paralinguistic features and emotional expressivity rather than semantic understanding.

AI · Neutral · arXiv – CS AI · Mar 27 · 10:12

CIRCLE: A Framework for Evaluating AI from a Real-World Lens

Researchers propose CIRCLE, a six-stage framework for evaluating AI systems through real-world deployment outcomes rather than abstract model performance metrics. The framework aims to bridge the gap between theoretical AI capabilities and actual materialized effects by providing systematic evidence for decision-makers outside the AI development stack.

AI · Neutral · arXiv – CS AI · Mar 27 · 10:20

LemmaBench: A Live, Research-Level Benchmark to Evaluate LLM Capabilities in Mathematics

Researchers have developed LemmaBench, a new benchmark for evaluating Large Language Models on research-level mathematics by automatically extracting and rewriting lemmas from arXiv papers. Current state-of-the-art LLMs achieve only 10-15% accuracy on these mathematical theorem proving tasks, revealing a significant gap between AI capabilities and human-level mathematical research.

AI · Bullish · arXiv – CS AI · Mar 26 · 10:14

Recycling Failures: Salvaging Exploration in RLVR via Fine-Grained Off-Policy Guidance

Researchers propose SCOPE, a new framework for Reinforcement Learning from Verifiable Rewards (RLVR) that improves AI reasoning by salvaging partially correct solutions rather than discarding them entirely. The method achieves 46.6% accuracy on math reasoning tasks and 53.4% on out-of-distribution problems by using step-wise correction to maintain exploration diversity.

AI · Bullish · arXiv – CS AI · Mar 27 · 10:15

Portfolio Reinforcement Learning with Scenario-Context Rollout

Researchers developed a new portfolio reinforcement learning method called macro-conditioned scenario-context rollout (SCR) that addresses market regime shifts and distribution changes. The approach generates plausible return scenarios under stress events and improves portfolio performance by up to 76% in Sharpe ratio and reduces maximum drawdown by 53%.
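
The two metrics cited, Sharpe ratio and maximum drawdown, are standard and easy to compute on any per-period return series. The return series below is made up for illustration:

```python
# Sharpe ratio and maximum drawdown on a toy per-period return series.
import math

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return divided by its standard deviation (per period)."""
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((r - mean) ** 2 for r in excess) / len(excess)
    return mean / math.sqrt(var) if var else float("inf")

def max_drawdown(returns):
    """Largest peak-to-trough fractional drop of the cumulative wealth curve."""
    wealth, peak, worst = 1.0, 1.0, 0.0
    for r in returns:
        wealth *= 1 + r
        peak = max(peak, wealth)
        worst = max(worst, (peak - wealth) / peak)
    return worst

rets = [0.02, -0.01, 0.03, -0.04, 0.01]
print(sharpe_ratio(rets))
print(max_drawdown(rets))
```

A "76% Sharpe improvement" means the mean-to-volatility ratio of the strategy's returns rose by that factor; a 53% drawdown reduction means the worst peak-to-trough loss roughly halved.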

AI · Bullish · arXiv – CS AI · Mar 26 · 10:22

RUMAD: Reinforcement-Unifying Multi-Agent Debate

Researchers introduce RUMAD, a reinforcement learning framework that optimizes multi-agent AI debate systems by dynamically controlling communication topology. The system achieves over 80% reduction in computational costs while improving reasoning accuracy across benchmark tests, with strong generalization capabilities across different task domains.

AI · Neutral · arXiv – CS AI · Mar 26 · 10:19

BRIDGE the Gap: Mitigating Bias Amplification in Automated Scoring of English Language Learners via Inter-group Data Augmentation

Researchers developed BRIDGE, a framework to reduce bias in AI-powered automated scoring systems that unfairly penalize English Language Learners (ELLs). The system addresses representation bias by generating synthetic high-scoring ELL samples, achieving fairness improvements comparable to using additional human data while maintaining overall performance.

AI · Bullish · arXiv – CS AI · Mar 26 · 10:13

RF-Agent: Automated Reward Function Design via Language Agent Tree Search

Researchers introduce RF-Agent, a framework that uses Large Language Models as agents to automatically design reward functions for control tasks through Monte Carlo Tree Search. The method improves upon existing approaches by better utilizing historical feedback and enhancing search efficiency across 17 diverse low-level control tasks.
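
The selection rule at the heart of Monte Carlo Tree Search is UCB1, which balances re-testing known-good candidates against exploring under-tried ones. A generic sketch (RF-Agent's actual search over LLM-proposed reward functions is more involved):

```python
# UCB1 selection, the tree policy used by Monte Carlo Tree Search.
# Candidates here stand in for competing reward-function designs.
import math

def ucb1_pick(children, c=1.4):
    """children: dicts with 'visits' and 'value' (total accumulated reward)."""
    total = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")  # always try an unvisited candidate first
        exploit = ch["value"] / ch["visits"]          # average reward so far
        explore = c * math.sqrt(math.log(total) / ch["visits"])
        return exploit + explore
    return max(range(len(children)), key=lambda i: score(children[i]))

candidates = [
    {"visits": 10, "value": 7.0},   # well-tried, decent average reward
    {"visits": 2,  "value": 1.9},   # promising but barely explored
    {"visits": 0,  "value": 0.0},   # never evaluated
]
print(ucb1_pick(candidates))
```

In a reward-design loop, each "visit" would be an RL training rollout under that candidate reward function, with the resulting task performance fed back as the value.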

AI · Bullish · arXiv – CS AI · Mar 26 · 10:18

Reasoning-Driven Multimodal LLM for Domain Generalization

Researchers developed RD-MLDG, a new framework that uses multimodal large language models with reasoning chains to improve domain generalization in deep learning. The approach addresses challenges in cross-domain visual recognition by leveraging reasoning capabilities rather than just visual feature invariance, achieving state-of-the-art performance on standard benchmarks.

AI · Neutral · arXiv – CS AI · Mar 26 · 10:10

Unlocking Cognitive Capabilities and Analyzing the Perception-Logic Trade-off

Researchers introduce MERaLiON2-Omni (Alpha), a 10B-parameter multilingual AI model designed for Southeast Asia that combines perception and reasoning capabilities. The study reveals an efficiency-stability paradox where reasoning enhances abstract tasks but causes instability in basic sensory processing like audio timing and visual interpretation.

AI · Bullish · arXiv – CS AI · Mar 27 · 10:16

ODAR: Principled Adaptive Routing for LLM Reasoning via Active Inference

Researchers propose ODAR-Expert, an adaptive routing framework for large language models that optimizes accuracy-efficiency trade-offs by dynamically routing queries between fast and slow processing agents. The system achieved 98.2% accuracy on MATH benchmarks while reducing computational costs by 82%, suggesting that optimal AI scaling requires adaptive resource allocation rather than simply increasing test-time compute.
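
The fast/slow routing pattern can be sketched in a few lines: try the cheap model first and escalate only when its confidence is low. The models, confidence rule, and threshold below are placeholders, not ODAR's actual active-inference criterion:

```python
# Toy fast/slow router: send a query to a cheap model first and escalate
# to an expensive one only when the cheap model's confidence is low.

def fast_model(query):
    # Stand-in model: "confident" only on short queries.
    confident = len(query.split()) <= 5
    return ("fast answer", 0.95 if confident else 0.40)

def slow_model(query):
    # Stand-in for a slower, more thorough reasoning agent.
    return ("careful answer", 0.99)

def route(query, threshold=0.8):
    answer, confidence = fast_model(query)
    if confidence >= threshold:
        return answer, "fast"
    answer, _ = slow_model(query)  # escalate: slower but more accurate
    return answer, "slow"

print(route("2 + 2"))
print(route("prove the lemma holds for all n > 3"))
```

The cost savings come from the fact that most queries take the fast path; the routing policy's job is deciding, per query, when the extra compute of the slow path actually buys accuracy.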

AI · Bullish · arXiv – CS AI · Mar 27 · 10:16

PseudoAct: Leveraging Pseudocode Synthesis for Flexible Planning and Action Control in Large Language Model Agents

Researchers introduce PseudoAct, a new framework that uses pseudocode synthesis to improve large language model agent planning and action control. The method achieves significant performance improvements over existing reactive approaches, with a 20.93% absolute gain in success rate on FEVER benchmark and new state-of-the-art results on HotpotQA.

AI · Neutral · arXiv – CS AI · Mar 26 · 10:12

AI Must Embrace Specialization via Superhuman Adaptable Intelligence

A new research paper challenges the concept of Artificial General Intelligence (AGI), arguing that AI should embrace specialization rather than generality. The authors propose Superhuman Adaptable Intelligence (SAI) as an alternative framework that focuses on AI systems that can exceed human performance in specific important tasks while filling capability gaps.

AI · Bullish · arXiv – CS AI · Mar 26 · 10:14

MMKG-RDS: Reasoning Data Synthesis via Deep Mining of Multimodal Knowledge Graphs

Researchers introduce MMKG-RDS, a framework that uses multimodal knowledge graphs to synthesize high-quality training data for improving AI model reasoning abilities. Testing on Qwen3 models showed 9.2% improvement in reasoning accuracy, with applications for complex benchmark construction involving tables and formulas.

AI · Bullish · arXiv – CS AI · Mar 26 · 10:14

SleepLM: Natural-Language Intelligence for Human Sleep

Researchers have developed SleepLM, a family of AI foundation models that combine natural language processing with sleep analysis using polysomnography data. The system can interpret and describe sleep patterns in natural language, trained on over 100K hours of sleep data from 10,000+ individuals, enabling new capabilities like language-guided sleep event detection and zero-shot generalization to novel sleep analysis tasks.
