y0news

#llm News & Analysis

956 articles tagged with #llm. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 26

From Flat Logs to Causal Graphs: Hierarchical Failure Attribution for LLM-based Multi-Agent Systems

Researchers introduce CHIEF, a new framework that improves failure analysis in LLM-powered multi-agent systems by transforming execution logs into hierarchical causal graphs. The system uses oracle-guided backtracking and counterfactual attribution to better identify root causes of failures, outperforming existing methods on benchmark tests.
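The core move (recovering causal structure from a flat log, then backtracking from the failure) can be sketched in a few lines. This is an illustrative stand-in, not CHIEF's actual algorithm or API; the event schema and field names are assumptions.

```python
def root_cause(events, failed_id):
    """Given flat log events with parent links, walk from the failing
    event back through its ancestors to the earliest one -- a crude
    stand-in for causal-graph backtracking."""
    parents = {e["id"]: e.get("parent") for e in events}
    node = failed_id
    while parents.get(node) is not None:
        node = parents[node]
    return node

# Toy log: a planner agent spawned a coder, which spawned a failing test run.
log = [
    {"id": "planner"},
    {"id": "coder", "parent": "planner"},
    {"id": "test_run", "parent": "coder"},
]
```

Here the blame for `test_run` propagates up to `planner`; the paper's counterfactual attribution would additionally check whether changing an ancestor actually flips the outcome.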

AI · Bullish · arXiv – CS AI · Mar 27

Learning to Generate Secure Code via Token-Level Rewards

Researchers have developed Vul2Safe, a new framework for generating secure code using large language models, which addresses security vulnerabilities through self-reflection and token-level reinforcement learning. The approach introduces the PrimeVul+ dataset and SRCode training framework to provide more precise optimization of security patterns in code generation.

AI · Bullish · arXiv – CS AI · Mar 27

The Auton Agentic AI Framework

Researchers have introduced the Auton Agentic AI Framework, a new architecture designed to bridge the gap between stochastic LLM outputs and deterministic backend systems required for autonomous AI agents. The framework separates cognitive blueprints from runtime engines, enabling cross-platform portability and formal auditability while incorporating advanced safety mechanisms and memory systems.

AI · Bullish · arXiv – CS AI · Mar 27

ODAR: Principled Adaptive Routing for LLM Reasoning via Active Inference

Researchers propose ODAR-Expert, an adaptive routing framework for large language models that optimizes accuracy-efficiency trade-offs by dynamically routing queries between fast and slow processing agents. The system achieved 98.2% accuracy on MATH benchmarks while reducing computational costs by 82%, suggesting that optimal AI scaling requires adaptive resource allocation rather than simply increasing test-time compute.
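The routing idea reduces to a threshold decision between a cheap and an expensive model. A minimal sketch, with a toy difficulty heuristic standing in for ODAR's active-inference routing policy (all names here are hypothetical):

```python
def route(query, difficulty, fast_model, slow_model, threshold=0.2):
    """Send low-difficulty queries to the cheap model; escalate the rest."""
    if difficulty(query) < threshold:
        return fast_model(query)
    return slow_model(query)

# Toy stand-ins: "difficulty" is just the fraction of digits in the query,
# so math-heavy prompts escalate to the slow path.
fast = lambda q: ("fast", len(q))
slow = lambda q: ("slow", len(q))
difficulty = lambda q: sum(c.isdigit() for c in q) / max(len(q), 1)
```

The reported 82% cost reduction comes from most queries taking the fast path; the hard part the paper addresses is learning the difficulty signal rather than hand-coding it.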

AI · Neutral · arXiv – CS AI · Mar 27

Biases in the Blind Spot: Detecting What LLMs Fail to Mention

Researchers have developed an automated pipeline to detect hidden biases in Large Language Models that do not surface in their reasoning explanations. The system uncovered previously unreported biases, such as sensitivity to Spanish fluency and writing formality, across seven LLMs in hiring, loan approval, and university admission tasks.

AI · Bullish · arXiv – CS AI · Mar 26

LLM-Driven Multi-Turn Task-Oriented Dialogue Synthesis for Realistic Reasoning

Researchers propose an LLM-driven framework for generating multi-turn task-oriented dialogues to create more realistic reasoning benchmarks. The framework addresses limitations in current AI evaluation methods by producing synthetic datasets that better reflect real-world complexity and contextual coherence.

AI · Bullish · arXiv – CS AI · Mar 27

Capabilities Ain't All You Need: Measuring Propensities in AI

Researchers introduce the first formal framework for measuring AI propensities (the tendencies of models to exhibit particular behaviors), going beyond traditional capability measurements. The new bilogistic approach successfully predicts AI behavior on held-out tasks and shows stronger predictive power when combining propensities with capabilities than using either measure alone.
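One plausible reading of a bilogistic model (an assumption on my part; the paper's exact parameterization may differ) is a product of two logistic factors: the behavior appears only if the model both *can* do it and *tends* to do it.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def p_behavior(capability_logit, propensity_logit):
    """Hypothetical bilogistic form: probability of observing a behavior
    as the product of a capability factor and a propensity factor."""
    return sigmoid(capability_logit) * sigmoid(propensity_logit)
```

Under this form, a highly capable model with a low propensity logit almost never exhibits the behavior, which is exactly the case capability-only evaluations miss.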

AI · Bullish · arXiv – CS AI · Mar 26

SAGE-LLM: Towards Safe and Generalizable LLM Controller with Fuzzy-CBF Verification and Graph-Structured Knowledge Retrieval for UAV Decision

Researchers propose SAGE-LLM, a novel framework that combines Large Language Models with Control Barrier Functions for safe UAV autonomous decision-making. The system addresses LLM safety limitations through formal verification mechanisms and graph-based knowledge retrieval, demonstrating improved safety and generalization in drone control scenarios.

AI · Neutral · arXiv – CS AI · Mar 27

HumanMCP: A Human-Like Query Dataset for Evaluating MCP Tool Retrieval Performance

Researchers have released HumanMCP, the first large-scale dataset designed to evaluate tool retrieval performance in Model Context Protocol (MCP) servers. The dataset addresses a critical gap by providing realistic, human-like queries paired with 2,800 tools across 308 MCP servers, improving upon existing benchmarks that lack authentic user interaction patterns.

AI · Neutral · arXiv – CS AI · Mar 27

An Agentic LLM Framework for Adverse Media Screening in AML Compliance

Researchers have developed an agentic LLM framework using Retrieval-Augmented Generation to automate adverse media screening for anti-money laundering compliance in financial institutions. The system addresses high false-positive rates in traditional keyword-based approaches by implementing multi-step web searches and computing Adverse Media Index scores to distinguish between high-risk and low-risk individuals.

AI · Bullish · arXiv – CS AI · Mar 27

Training Generalizable Collaborative Agents via Strategic Risk Aversion

Researchers developed a new multi-agent reinforcement learning algorithm that uses strategic risk aversion to create AI agents that can reliably collaborate with unseen partners. The approach addresses the problem of brittle AI collaboration systems that fail when working with new partners by incorporating robustness against behavioral deviations.

AI · Bullish · arXiv – CS AI · Mar 26

Enhancing Continual Learning for Software Vulnerability Prediction: Addressing Catastrophic Forgetting via Hybrid-Confidence-Aware Selective Replay for Temporal LLM Fine-Tuning

Researchers developed Hybrid Class-Aware Selective Replay (Hybrid-CASR), a continual learning method that improves AI-based software vulnerability detection by addressing catastrophic forgetting in temporal scenarios. The method achieved 0.667 Macro-F1 score while reducing training time by 17% compared to baseline approaches on CVE data from 2018-2024.
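The selective-replay idea, in its simplest form, is to keep replaying the past examples the model is least sure about. A minimal sketch; the selection criterion and names are illustrative, not Hybrid-CASR's actual scoring.

```python
def select_replay(samples, confidences, k):
    """Keep the k least-confident past samples for replay, a crude
    confidence-aware stand-in for selective replay buffers."""
    order = sorted(range(len(samples)), key=lambda i: confidences[i])
    return [samples[i] for i in order[:k]]

# Toy buffer: older CVE examples with the model's confidence on each.
buffer = ["cve_a", "cve_b", "cve_c", "cve_d"]
conf = [0.9, 0.2, 0.5, 0.95]
```

Replaying only uncertain examples is also how the method cuts training time: confidently-remembered samples are skipped rather than retrained.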

AI · Bullish · arXiv – CS AI · Mar 27

KEEP: A KV-Cache-Centric Memory Management System for Efficient Embodied Planning

Researchers from PKU-SEC-Lab have developed KEEP, a new memory management system that significantly improves the efficiency of AI-powered embodied planning by optimizing KV cache usage. The system achieves 2.68x speedup compared to text-based memory methods while maintaining accuracy, addressing a key bottleneck in memory-augmented Large Language Models for complex planning tasks.

AI · Bullish · arXiv – CS AI · Mar 26

MITS: Enhanced Tree Search Reasoning for LLMs via Pointwise Mutual Information

Researchers introduce MITS (Mutual Information Tree Search), a new framework that improves reasoning capabilities in large language models using information-theoretic principles. The method uses pointwise mutual information for step-wise evaluation and achieves better performance while being more computationally efficient than existing tree search methods like Tree-of-Thought.
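The scoring signal itself is standard pointwise mutual information; here is the textbook formula, which MITS applies step-wise during tree search (how the probabilities are estimated from the LLM is the paper's contribution, not shown here):

```python
import math

def pmi(p_joint, p_x, p_y):
    """Pointwise mutual information: log p(x, y) / (p(x) * p(y)).
    Positive when a reasoning step and a correct answer co-occur
    more often than chance; zero under independence."""
    return math.log(p_joint / (p_x * p_y))
```

A step whose presence raises the joint probability of a correct answer above the independence baseline gets a positive score and is expanded first.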

AI · Bullish · arXiv – CS AI · Mar 26

Trust Region Masking for Long-Horizon LLM Reinforcement Learning

Researchers propose Trust Region Masking (TRM) to address off-policy mismatch problems in Large Language Model reinforcement learning pipelines. The method provides the first non-vacuous monotonic improvement guarantees for long-horizon LLM-RL tasks by masking entire sequences that violate trust region constraints.
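The masking rule can be sketched as: compute each sequence's total importance ratio between the new and old policy, and zero out any sequence that leaves the trust region. This is a simplified reading (per-sequence ratio, symmetric interval); TRM's actual constraint may be stated differently.

```python
import math

def trust_region_mask(logp_new, logp_old, epsilon=0.2):
    """For each sequence (a list of per-token log-probs), mask it to 0.0
    if its total importance ratio exp(sum(new) - sum(old)) falls outside
    [1 - eps, 1 + eps], else keep it with weight 1.0."""
    masks = []
    for new, old in zip(logp_new, logp_old):
        ratio = math.exp(sum(new) - sum(old))
        masks.append(1.0 if (1 - epsilon) <= ratio <= (1 + epsilon) else 0.0)
    return masks
```

Masking whole sequences, rather than clipping per token as PPO does, is what lets the analysis recover a monotonic-improvement guarantee over long horizons.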

AI · Bullish · arXiv – CS AI · Mar 26

LIA: Supervised Fine-Tuning of Large Language Models for Automatic Issue Assignment

Researchers developed LIA, a supervised fine-tuning approach using DeepSeek-R1-Distill-Llama-8B to automatically assign software issues to developers. The system achieved up to 187.8% improvement over the base model and 211.2% better performance than existing methods in developer recommendation accuracy.

AI · Bullish · arXiv – CS AI · Mar 27

Thompson Sampling via Fine-Tuning of LLMs

Researchers developed ToSFiT (Thompson Sampling via Fine-Tuning), a new Bayesian optimization method that uses fine-tuned large language models to improve search efficiency in complex discrete spaces. The approach eliminates computational bottlenecks by directly parameterizing reward probabilities and demonstrates superior performance across diverse applications including protein search and quantum circuit design.
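Classic Thompson sampling over a discrete space looks like the sketch below (a Beta-Bernoulli bandit); ToSFiT's twist is replacing the explicit posterior with a fine-tuned LLM that parameterizes reward probabilities directly, which this toy version does not attempt.

```python
import random

def thompson_pick(successes, failures):
    """Draw one sample from each arm's Beta posterior and pick the best draw."""
    draws = [random.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)

random.seed(0)
true_p = [0.2, 0.8]            # hidden reward probabilities of two "designs"
wins, losses = [0, 0], [0, 0]
for _ in range(300):
    arm = thompson_pick(wins, losses)
    if random.random() < true_p[arm]:
        wins[arm] += 1
    else:
        losses[arm] += 1
```

After a few hundred rounds, nearly all pulls concentrate on the better arm; the paper's point is that maintaining such posteriors becomes the bottleneck in spaces like protein sequences, which fine-tuning sidesteps.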

AI · Bullish · arXiv – CS AI · Mar 26

VISTA: Knowledge-Driven Vessel Trajectory Imputation with Repair Provenance

Researchers introduce VISTA, a framework for vessel trajectory imputation that uses knowledge-driven LLM reasoning to repair incomplete maritime tracking data. The system provides 'repair provenance' (documented reasoning behind each data repair), achieving 5-91% accuracy improvements over existing methods while reducing inference time by 51-93%.

AI · Bullish · arXiv – CS AI · Mar 27

VCWorld: A Biological World Model for Virtual Cell Simulation

Researchers have developed VCWorld, a new AI-powered biological simulation system that combines large language models with structured biological knowledge to predict cellular responses to drug perturbations. The system operates as a 'white-box' model, providing interpretable predictions and mechanistic insights while achieving state-of-the-art performance in drug perturbation benchmarks.

AI · Neutral · arXiv – CS AI · Mar 27

LumiMAS: A Comprehensive Framework for Real-Time Monitoring and Enhanced Observability in Multi-Agent Systems

Researchers have developed LumiMAS, a comprehensive framework for monitoring and detecting failures in multi-agent systems that incorporate large language models. The framework features three layers: monitoring and logging, anomaly detection, and anomaly explanation with root cause analysis, addressing the unique challenges of observing entire multi-agent systems rather than individual agents.

AI · Bullish · arXiv – CS AI · Mar 26

Latent Self-Consistency for Reliable Majority-Set Selection in Short- and Long-Answer Reasoning

Researchers introduce Latent Self-Consistency (LSC), a new method for improving Large Language Model output reliability across both short and long-form reasoning tasks. LSC uses learnable token embeddings to select semantically consistent responses with only 0.9% computational overhead, outperforming existing consistency methods like Self-Consistency and Universal Self-Consistency.
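The selection step can be approximated as: embed each sampled response, then return the one closest to the centroid of all embeddings. This is a simplification; LSC's learnable summary tokens replace the generic embeddings assumed here.

```python
def most_consistent(embeddings):
    """Return the index of the embedding nearest the mean of all
    embeddings -- a crude proxy for picking the semantically
    majority-consistent response."""
    n, dim = len(embeddings), len(embeddings[0])
    centroid = [sum(e[i] for e in embeddings) / n for i in range(dim)]
    def dist(e):
        return sum((a - b) ** 2 for a, b in zip(e, centroid))
    return min(range(n), key=lambda i: dist(embeddings[i]))
```

Unlike exact-match Self-Consistency, this works for long-form answers where no two samples are string-identical, which is the gap LSC targets.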

AI · Neutral · arXiv – CS AI · Mar 27

Once4All: Skeleton-Guided SMT Solver Fuzzing with LLM-Synthesized Generators

Researchers developed Once4All, an LLM-assisted fuzzing framework for testing SMT solvers that addresses syntax validity issues and computational overhead. The system found 43 confirmed bugs in leading solvers Z3 and cvc5, with 40 already fixed by developers.

AI · Bullish · arXiv – CS AI · Mar 27

DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher

Researchers propose DUET, a new distillation-based method for LLM unlearning that removes undesirable knowledge from AI models without full retraining. The technique combines computational efficiency with security advantages, achieving better performance in both knowledge removal and utility preservation while being significantly more data-efficient than existing methods.

AI · Bullish · arXiv – CS AI · Mar 27

FinBloom: Knowledge Grounding Large Language Model with Real-time Financial Data

Researchers have developed FinBloom 7B, a specialized large language model trained on 14 million financial news articles and SEC filings, designed to handle real-time financial queries. The model introduces a Financial Agent system that can access up-to-date market data and financial information to support decision-making and algorithmic trading applications.

AI · Bullish · arXiv – CS AI · Mar 26

Does Your Reasoning Model Implicitly Know When to Stop Thinking?

Researchers introduce SAGE (Self-Aware Guided Efficient Reasoning), a novel sampling paradigm that improves AI reasoning efficiency by helping large reasoning models know when to stop thinking. The approach addresses the problem of redundant, lengthy reasoning chains that don't improve accuracy while reducing computational costs and response times.