y0news
🧠 AI

13,264 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Neutral · arXiv – CS AI · Mar 27/1012

Representing local protein environments with atomistic foundation models

Researchers developed a novel method to represent local protein environments using atomistic foundation models (AFMs), creating embeddings that capture both structural and chemical features. The approach enables construction of data-driven priors for biomolecular environments and achieves state-of-the-art accuracy in physics-informed chemical shift prediction for NMR spectroscopy.

AI · Bullish · arXiv – CS AI · Mar 27/1018

Semantic Parallelism: Redefining Efficient MoE Inference via Model-Data Co-Scheduling

Researchers propose Sem-MoE, a framework built on "semantic parallelism" that significantly improves the efficiency of Mixture-of-Experts large language model inference by optimizing how computational tasks are distributed across multiple devices. The system reduces inter-device communication overhead by collocating frequently used model components with their corresponding data, achieving superior throughput compared to existing solutions.

AI · Bullish · arXiv – CS AI · Mar 27/1012

FinBloom: Knowledge Grounding Large Language Model with Real-time Financial Data

Researchers have developed FinBloom 7B, a specialized large language model trained on 14 million financial news articles and SEC filings, designed to handle real-time financial queries. The model introduces a Financial Agent system that can access up-to-date market data and financial information to support decision-making and algorithmic trading applications.

AI · Neutral · arXiv – CS AI · Mar 27/1015

What Makes a Reward Model a Good Teacher? An Optimization Perspective

Research reveals that reward model accuracy alone doesn't determine effectiveness in RLHF systems. The study proves that low reward variance can create flat optimization landscapes, making even perfectly accurate reward models inefficient teachers that underperform less accurate models with higher variance.
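The core intuition can be seen in a toy sketch (our own illustration, not the paper's construction): two reward models that rank the same responses identically, and so have equal accuracy, can still differ sharply in how much learning signal they provide. The `advantage_spread` proxy below is a hypothetical measure we introduce for illustration.

```python
import statistics

# Two reward models that rank four responses identically (equal accuracy),
# but with very different reward variance.
rm_low = [0.50, 0.51, 0.52, 0.53]    # nearly flat rewards
rm_high = [0.10, 0.40, 0.70, 1.00]   # same ranking, spread out

def advantage_spread(rewards):
    """Range of mean-centered rewards: a crude proxy for the strength of
    the policy-gradient signal a reward model provides."""
    mean = statistics.mean(rewards)
    centered = [r - mean for r in rewards]
    return max(centered) - min(centered)

print(advantage_spread(rm_low))   # small: a flat optimization landscape
print(advantage_spread(rm_high))  # large: a much stronger learning signal
```

Both models are "perfectly accurate" on pairwise comparisons, yet the low-variance one yields nearly identical advantages for every response, which is the flat-landscape failure mode the study describes.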

AI · Bullish · arXiv – CS AI · Mar 26/1021

Multi-View Encoders for Performance Prediction in LLM-Based Agentic Workflows

Researchers developed Agentic Predictor, a lightweight AI system that uses multi-view encoding to optimize LLM-based agent workflows without expensive trial-and-error evaluations. The system incorporates code architecture, textual prompts, and interaction graphs to predict task success rates and select optimal configurations across different domains.

AI · Bullish · arXiv – CS AI · Mar 26/1015

Aletheia tackles FirstProof autonomously

Aletheia, a mathematics research agent powered by Gemini 3 Deep Think, successfully solved 6 out of 10 problems in the inaugural FirstProof challenge. The AI system demonstrated autonomous mathematical problem-solving capabilities, with expert assessments confirming its solutions though some disagreement existed on Problem 8.

AI · Bullish · arXiv – CS AI · Mar 26/1015

FineScope: SAE-guided Data Selection Enables Domain Specific LLM Pruning and Finetuning

Researchers introduce FineScope, a framework that uses Sparse Autoencoder (SAE) techniques to create smaller, domain-specific language models from larger pretrained LLMs through structured pruning and self-data distillation. The method achieves competitive performance while significantly reducing computational requirements compared to training from scratch.

AI · Neutral · arXiv – CS AI · Mar 26/1023

Spread them Apart: Towards Robust Watermarking of Generated Content

Researchers propose a new watermarking approach for AI-generated content that embeds detectable marks during model inference without requiring retraining. The method aims to address ethical concerns about ownership claims of generated content by allowing future detection and user identification.

AI · Bearish · arXiv – CS AI · Mar 27/1014

ForesightSafety Bench: A Frontier Risk Evaluation and Governance Framework towards Safe AI

Researchers have developed ForesightSafety Bench, a comprehensive AI safety evaluation framework covering 94 risk dimensions across 7 fundamental safety pillars. The benchmark evaluation of over 20 advanced large language models revealed widespread safety vulnerabilities, particularly in autonomous AI agents, AI4Science, and catastrophic risk scenarios.

AI · Bullish · arXiv – CS AI · Mar 26/1020

Stop Unnecessary Reflection: Training LRMs for Efficient Reasoning with Adaptive Reflection and Length Coordinated Penalty

Researchers developed ARLCP, a reinforcement learning framework that reduces unnecessary reflection in Large Reasoning Models, achieving 53% shorter responses while improving accuracy by 5.8% on smaller models. The method addresses computational inefficiencies in AI reasoning by dynamically balancing efficiency and accuracy through adaptive penalties.
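A length-coordinated reward penalty of this kind can be sketched as follows. This is a simplified stand-in for ARLCP's reward shaping, with `target_len` and `lam` as hypothetical parameters of our own choosing, not values from the paper.

```python
def shaped_reward(correct: bool, n_tokens: int,
                  target_len: int = 256, lam: float = 0.001) -> float:
    """Hypothetical length-coordinated penalty: wrong answers earn no
    reward, and correct answers are docked only for tokens beyond a
    target budget, so brevity is never bought at the cost of accuracy."""
    if not correct:
        return 0.0
    return 1.0 - lam * max(0, n_tokens - target_len)

print(shaped_reward(True, 100))   # within budget: full reward
print(shaped_reward(True, 756))   # 500 tokens over budget: docked
print(shaped_reward(False, 50))   # wrong: no reward regardless of length
```

The key design choice the paper's framing implies is that the penalty is coordinated with correctness, so the RL policy is steered toward shorter chains only where accuracy is preserved.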

AI · Bullish · arXiv – CS AI · Mar 26/1016

Does Your Reasoning Model Implicitly Know When to Stop Thinking?

Researchers introduce SAGE (Self-Aware Guided Efficient Reasoning), a novel sampling paradigm that improves AI reasoning efficiency by helping large reasoning models know when to stop thinking. The approach addresses the problem of redundant, lengthy reasoning chains that don't improve accuracy while reducing computational costs and response times.

AI · Neutral · arXiv – CS AI · Mar 27/1013

Efficient Ensemble Conditional Independence Test Framework for Causal Discovery

Researchers introduce E-CIT (Ensemble Conditional Independence Test), a new framework that significantly reduces computational costs in causal discovery by partitioning data into subsets and aggregating results. The method achieves linear computational complexity while maintaining competitive performance, particularly on real-world datasets.
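The partition-and-aggregate scheme can be sketched in a few lines. The base test and the use of Fisher's method here are our illustrative choices; E-CIT's actual base CI test and aggregation statistic may differ.

```python
import math

def fisher_combine(pvals):
    """Fisher's method, one standard way to aggregate independent
    per-subset p-values into a single test statistic."""
    return -2.0 * sum(math.log(p) for p in pvals)

def ensemble_ci_test(data, k, base_test):
    """E-CIT-style scheme: split the data into k subsets, run the base
    conditional-independence test on each, and aggregate the results.
    Each subset holds n/k points, so total cost grows linearly in n
    whenever the base test is super-linear."""
    n = len(data)
    subsets = [data[i * n // k:(i + 1) * n // k] for i in range(k)]
    return fisher_combine([base_test(s) for s in subsets])

# Usage with a stub base test that always returns p = 0.5:
stat = ensemble_ci_test(list(range(100)), k=4, base_test=lambda s: 0.5)
```

The linear complexity claim falls out of the structure: a quadratic base test on k subsets of size n/k costs k * (n/k)^2 = n^2 / k, which is linear in n when k is chosen proportional to n.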

AI · Bullish · arXiv – CS AI · Mar 26/1015

Robust and Efficient Tool Orchestration via Layered Execution Structures with Reflective Correction

Researchers propose a new approach to tool orchestration in AI agent systems using layered execution structures with reflective error correction. The method reduces execution complexity by using coarse-grained layer structures for global guidance while handling failures locally, eliminating the need for precise dependency graphs or fine-grained planning.

AI · Bullish · arXiv – CS AI · Mar 26/1021

Reallocating Attention Across Layers to Reduce Multimodal Hallucination

Researchers propose a training-free solution to reduce hallucinations in multimodal AI models by rebalancing attention between perception and reasoning layers. The method achieves 4.2% improvement in reasoning accuracy with minimal computational overhead.
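A minimal sketch of training-free attention rebalancing on a single attention row (which layers count as "perception" versus "reasoning", and the exact scaling rule, follow the paper and are not modeled here; `alpha` is a hypothetical boost factor):

```python
def rebalance(weights, image_idx, alpha):
    """Boost attention on image tokens by a factor alpha, then
    renormalize so the attention row still sums to 1."""
    boosted = [w * alpha if i in image_idx else w
               for i, w in enumerate(weights)]
    total = sum(boosted)
    return [w / total for w in boosted]

# A uniform row over two image tokens and two text tokens, with image
# attention doubled before renormalization:
row = rebalance([0.25, 0.25, 0.25, 0.25], image_idx={0, 1}, alpha=2.0)
```

Because the operation only rescales and renormalizes existing attention weights at inference time, no retraining is needed, which is what makes the approach essentially free in computational overhead.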

AI · Bullish · arXiv – CS AI · Mar 26/1017

MITS: Enhanced Tree Search Reasoning for LLMs via Pointwise Mutual Information

Researchers introduce MITS (Mutual Information Tree Search), a new framework that improves reasoning capabilities in large language models using information-theoretic principles. The method uses pointwise mutual information for step-wise evaluation and achieves better performance while being more computationally efficient than existing tree search methods like Tree-of-Thought.
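The pointwise-mutual-information score at the heart of this kind of method is simple to state. The candidate steps and log-probabilities below are made-up illustrative numbers, and MITS's exact estimator may differ from this textbook form.

```python
def pmi(logp_step_given_q, logp_step):
    """Pointwise mutual information between a candidate reasoning step
    and the question: log p(step | q) - log p(step). High PMI means the
    step is much more likely given the question than a priori."""
    return logp_step_given_q - logp_step

# Hypothetical log-probabilities, as if read off an LM's token
# likelihoods: a step that is likely given the question but unlikely in
# general is highly question-specific, so it scores well.
candidates = {
    "factor the quadratic": (-2.0, -5.0),   # PMI = 3.0
    "restate the problem":  (-2.5, -3.0),   # PMI = 0.5
}
best = max(candidates, key=lambda s: pmi(*candidates[s]))
```

Scoring each expansion this way lets a tree search prefer informative steps without a separate learned value model, which is where the claimed efficiency gain over Tree-of-Thought-style search comes from.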

AI · Bullish · arXiv – CS AI · Mar 27/1026

RE-PO: Robust Enhanced Policy Optimization as a General Framework for LLM Alignment

Researchers introduce RE-PO (Robust Enhanced Policy Optimization), a new framework that addresses noise in human preference data used to train large language models. The method uses expectation-maximization to identify unreliable labels and reweight training data, improving alignment algorithm performance by up to 7% on benchmarks.
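One round of the expectation-maximization idea can be sketched as below. This is a simplified Bradley-Terry-style noise model of our own construction, not RE-PO's exact formulation; `margins` are the current model's preference margins for each labeled pair and `eps` is the estimated label flip rate.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def e_step(margins, eps):
    """E-step sketch: posterior weight that each preference label is
    clean, given the model's margin for that pair and flip rate eps."""
    weights = []
    for m in margins:
        p_clean = (1.0 - eps) * sigmoid(m)
        p_flip = eps * sigmoid(-m)
        weights.append(p_clean / (p_clean + p_flip))
    return weights

def m_step(weights):
    """M-step sketch: re-estimate the flip rate from posterior weights."""
    return 1.0 - sum(weights) / len(weights)

# A pair the model confidently agrees with vs. one it confidently
# disputes: the second label gets down-weighted as likely noise.
w = e_step([2.0, -2.0], eps=0.1)
```

Iterating the two steps lets the trainer down-weight suspect labels instead of discarding them outright, which is the reweighting behavior the summary describes.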

AI · Bullish · arXiv – CS AI · Mar 27/1016

Automating the Refinement of Reinforcement Learning Specifications

Researchers introduce AutoSpec, a framework that automatically refines reinforcement learning specifications to help AI agents learn complex tasks more effectively. The system improves coarse-grained logical specifications through exploration-guided strategies while maintaining specification soundness, demonstrating promising improvements in solving complex control tasks.

AI · Neutral · arXiv – CS AI · Mar 27/1014

Demystifying the Lifecycle of Failures in Platform-Orchestrated Agentic Workflows

Researchers present AgentFail, a dataset of 307 real-world failure cases from agentic workflow platforms, analyzing how multi-agent AI systems fail and can be repaired. The study reveals that failures in these low-code orchestrated AI workflows propagate differently than traditional software, making them harder to diagnose and fix.

AI · Bullish · arXiv – CS AI · Mar 27/1015

MACD: Multi-Agent Clinical Diagnosis with Self-Learned Knowledge for LLM

Researchers developed MACD, a Multi-Agent Clinical Diagnosis framework that enables large language models to self-learn clinical knowledge and improve medical diagnosis accuracy. The system achieved up to 22.3% improvement over clinical guidelines and 16% improvement over physician-only diagnosis when tested on 4,390 real-world patient cases.

AI · Bullish · arXiv – CS AI · Mar 26/1010

CowPilot: A Framework for Autonomous and Human-Agent Collaborative Web Navigation

Researchers introduce CowPilot, a framework that combines autonomous AI agents with human collaboration for web navigation tasks. The system achieved 95% success rate while requiring humans to perform only 15.2% of total steps, demonstrating effective human-AI cooperation for complex web tasks.

AI · Neutral · arXiv – CS AI · Mar 26/1016

Do LLMs Benefit From Their Own Words?

Research reveals that large language models don't significantly benefit from conditioning on their own previous responses in multi-turn conversations. The study found that omitting assistant history can reduce context lengths by up to 10x while maintaining response quality, and in some cases even improves performance by avoiding context pollution where models over-condition on previous responses.
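The history-trimming setup being studied amounts to a small transformation of the chat transcript before each request. This is a sketch of the setting, not any specific API's behavior; the `keep_last` option reflects the common choice of retaining only the most recent reply.

```python
def trim_assistant_history(messages, keep_last=True):
    """Drop earlier assistant turns from a chat transcript before
    resending it, keeping system and user turns (and optionally the
    most recent assistant reply)."""
    last_assistant = max(
        (i for i, m in enumerate(messages) if m["role"] == "assistant"),
        default=None)
    return [m for i, m in enumerate(messages)
            if m["role"] != "assistant"
            or (keep_last and i == last_assistant)]

chat = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Q1"},
    {"role": "assistant", "content": "A1"},
    {"role": "user", "content": "Q2"},
    {"role": "assistant", "content": "A2"},
]
trimmed = trim_assistant_history(chat)
```

Since assistant replies typically dominate token counts in long conversations, dropping them is where the reported up-to-10x context-length reduction comes from.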

AI · Bullish · arXiv – CS AI · Mar 27/1013

CUDA Agent: Large-Scale Agentic RL for High-Performance CUDA Kernel Generation

Researchers developed CUDA Agent, a reinforcement learning system that significantly outperforms existing methods for GPU kernel optimization, generating kernels that run up to twice as fast as torch.compile on benchmark tests. The system uses large-scale agentic RL with automated verification and profiling to improve CUDA kernel generation, addressing a critical bottleneck in deep learning performance.

AI · Bullish · arXiv – CS AI · Mar 26/1012

Radiologist Copilot: An Agentic Framework Orchestrating Specialized Tools for Reliable Radiology Reporting

Researchers have developed Radiologist Copilot, an AI agentic framework that orchestrates specialized tools to complete the entire radiology reporting workflow beyond simple report generation. The system integrates image localization, interpretation, template selection, report composition, and quality control to support radiologists throughout the comprehensive reporting process.

Page 247 of 531