
#llm News & Analysis

954 articles tagged with #llm. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Can LLMs Learn to Reason Robustly under Noisy Supervision?

Researchers propose Online Label Refinement (OLR) to improve AI reasoning models' robustness under noisy supervision in Reinforcement Learning with Verifiable Rewards. The method addresses the critical problem of training language models when expert-labeled data contains errors, achieving 3-4% performance gains across mathematical reasoning benchmarks.
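
The summary gives only the headline idea, but the core mechanism can be sketched: keep the noisy gold label unless the policy's own rollouts stably converge on a different answer. A minimal Python illustration, with all names and thresholds assumed rather than taken from the paper:

```python
# Minimal sketch of online label refinement under noisy verifiable rewards.
# All names (refine_label, agreement threshold, etc.) are illustrative
# assumptions, not the paper's actual API.
from collections import defaultdict

AGREE_THRESHOLD = 0.8  # assumed: fraction of rollouts that must agree

vote_history = defaultdict(list)  # problem_id -> majority answer per step

def refine_label(problem_id, rollout_answers, noisy_gold):
    """Return the label used for reward: keep the noisy gold unless the
    model's rollouts consistently converge on a different answer."""
    majority = max(set(rollout_answers), key=rollout_answers.count)
    agreement = rollout_answers.count(majority) / len(rollout_answers)
    vote_history[problem_id].append(majority)
    votes = vote_history[problem_id]
    # If recent majority votes are stable and contradict the gold label,
    # treat the gold label as noisy and swap it.
    if (agreement >= AGREE_THRESHOLD
            and len(votes) >= 3
            and all(v == majority for v in votes[-3:])
            and majority != noisy_gold):
        return majority
    return noisy_gold

# Usage: reward = 1.0 if answer == refine_label(pid, rollouts, gold) else 0.0
```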

AI · Neutral · arXiv – CS AI · Apr 7 · 7/10

Testing the Limits of Truth Directions in LLMs

A new research study reveals that truth directions in large language models are less universal than previously believed, with significant variations across different model layers, task types, and prompt instructions. The findings show truth directions emerge earlier for factual tasks but later for reasoning tasks, and are heavily influenced by model instructions and task complexity.
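
A common way to hunt for truth directions, and a plausible reading of the setup here, is a linear probe per layer over cached hidden states: the probe weights give a candidate direction, and probe accuracy shows where it emerges. A sketch (the probing setup is assumed, not taken from the paper):

```python
# Sketch: probing for a per-layer "truth direction" with a linear classifier.
# Assumes you already have hidden states of shape (n_statements, n_layers, d)
# and binary truth labels; this setup is illustrative, not the paper's.
import numpy as np
from sklearn.linear_model import LogisticRegression

def truth_directions(hidden_states, labels):
    """Fit one linear probe per layer; the weight vector is the candidate
    truth direction, and accuracy tells you where it emerges."""
    n_layers = hidden_states.shape[1]
    directions, accuracies = [], []
    for layer in range(n_layers):
        X = hidden_states[:, layer, :]
        probe = LogisticRegression(max_iter=1000).fit(X, labels)
        w = probe.coef_[0]
        directions.append(w / np.linalg.norm(w))
        accuracies.append(probe.score(X, labels))  # in-sample, for brevity
    return np.array(directions), accuracies

# Comparing `directions` across task types (factual vs. reasoning) via cosine
# similarity is one way to test how universal the direction really is.
```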

AI · Bearish · arXiv – CS AI · Apr 7 · 7/10

Comparative reversal learning reveals rigid adaptation in LLMs under non-stationary uncertainty

Research reveals that large language models like DeepSeek-V3.2, Gemini-3, and GPT-5.2 show rigid adaptation patterns when learning from changing environments, particularly struggling with loss-based learning compared to humans. The study found LLMs demonstrate asymmetric responses to positive versus negative feedback, with some models showing extreme perseveration after environmental changes.

🧠 GPT-5 · 🧠 Gemini
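
The asymmetric gain/loss updating the study describes is the kind of behavior a Rescorla-Wagner learner with separate learning rates exhibits; a toy simulation makes the perseveration effect concrete (parameters are illustrative, not the study's fitted values):

```python
# Toy Rescorla-Wagner learner with asymmetric learning rates, the kind of
# model used to quantify gain- vs. loss-driven updating in reversal learning.
import random

def run_reversal(alpha_gain=0.3, alpha_loss=0.05, n_trials=200):
    """Two-armed bandit whose good arm flips halfway through; a small
    alpha_loss makes the agent perseverate after the reversal."""
    q = [0.5, 0.5]                      # value estimates per arm
    p_reward = [0.8, 0.2]               # arm 0 starts as the good arm
    choices = []
    for t in range(n_trials):
        if t == n_trials // 2:          # reversal: contingencies flip
            p_reward.reverse()
        arm = max(range(2), key=lambda a: q[a])
        reward = 1.0 if random.random() < p_reward[arm] else 0.0
        delta = reward - q[arm]
        # Asymmetry: losses update the estimate more slowly than gains.
        alpha = alpha_gain if delta >= 0 else alpha_loss
        q[arm] += alpha * delta
        choices.append(arm)
    return choices  # inspect how long the agent sticks with the stale arm
```
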
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

ROSClaw: A Hierarchical Semantic-Physical Framework for Heterogeneous Multi-Agent Collaboration

Researchers introduce ROSClaw, a new AI framework that integrates large language models with robotic systems to improve multi-agent collaboration and long-horizon task execution. The framework addresses critical gaps between semantic understanding and physical execution by using unified vision-language models and enabling real-time coordination between simulated and real-world robots.

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Textual Equilibrium Propagation for Deep Compound AI Systems

Researchers introduce Textual Equilibrium Propagation (TEP), a new method to optimize large language model compound AI systems that addresses performance degradation in deep, multi-module workflows. TEP uses local learning principles to avoid exploding and vanishing gradient problems that plague existing global feedback methods like TextGrad.

AI · Neutral · arXiv – CS AI · Apr 6 · 7/10

Mitigating LLM biases toward spurious social contexts using direct preference optimization

Researchers developed Debiasing-DPO, a new training method that reduces harmful biases in large language models by 84% while improving accuracy by 52%. The study found that LLMs can shift predictions by up to 1.48 points when exposed to irrelevant contextual information like demographics, highlighting critical risks for high-stakes AI applications.

🧠 Llama
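
Debiasing-DPO builds on direct preference optimization; the standard DPO objective below is the known ingredient, while the pairing of "answer without spurious context" versus "answer shifted by it" is our reading of the summary:

```python
# Core DPO loss, the ingredient Debiasing-DPO builds on. Log-probs here
# are toy numbers; the chosen/rejected pairing is an assumption.
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective: -log sigmoid(beta * margin of log-ratios)."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# chosen = answer unaffected by irrelevant demographics,
# rejected = answer that shifted under the spurious context (assumed pairing)
print(dpo_loss(-12.0, -15.0, -13.0, -14.0))  # loss shrinks as margin grows
```
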
AI · Neutral · arXiv – CS AI · Apr 6 · 7/10

IndustryCode: A Benchmark for Industry Code Generation

Researchers introduce IndustryCode, the first comprehensive benchmark for evaluating Large Language Models' code generation capabilities across multiple industrial domains and programming languages. The benchmark includes 579 sub-problems from 125 industrial challenges spanning finance, automation, aerospace, and remote sensing, with the top-performing model Claude 4.5 Opus achieving 68.1% accuracy on sub-problems.

🧠 Claude
AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Holos: A Web-Scale LLM-Based Multi-Agent System for the Agentic Web

Researchers introduce Holos, a web-scale multi-agent system designed to create an "Agentic Web" where AI agents can autonomously interact and evolve toward AGI. The system features a five-layer architecture with the Nuwa engine for agent generation, market-driven coordination, and incentive compatibility mechanisms.

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Do Agent Societies Develop Intellectual Elites? The Hidden Power Laws of Collective Cognition in LLM Multi-Agent Systems

Researchers conducted the first large-scale study of coordination dynamics in LLM multi-agent systems, analyzing over 1.5 million interactions to discover three fundamental laws governing collective AI cognition. The study found that coordination follows heavy-tailed cascades, concentrates into 'intellectual elites,' and produces more extreme events as systems scale, leading to the development of Deficit-Triggered Integration (DTI) to improve performance.

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Council Mode: Mitigating Hallucination and Bias in LLMs via Multi-Agent Consensus

Researchers propose Council Mode, a multi-agent consensus framework that reduces AI hallucinations by 35.9% by routing queries to multiple diverse LLMs and synthesizing their outputs through a dedicated consensus model. The system operates through intelligent triage classification, parallel expert generation, and structured consensus synthesis to address factual accuracy issues in large language models.
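
The pipeline as described routes a query to several diverse models and synthesizes their drafts with a dedicated consensus model. A minimal sketch of that pattern, with model names, prompts, and the `ask` client as placeholders:

```python
# Sketch of the council pattern: fan a query out to several diverse models,
# then have a separate consensus model synthesize one answer. All names
# and prompts are illustrative assumptions, not the paper's system.

EXPERTS = ["model-a", "model-b", "model-c"]   # hypothetical model ids
CONSENSUS_MODEL = "model-consensus"

def ask(model: str, prompt: str) -> str:
    raise NotImplementedError("wire up your own LLM client here")

def council_answer(query: str) -> str:
    drafts = [ask(m, query) for m in EXPERTS]          # parallel experts
    numbered = "\n\n".join(f"Answer {i+1}:\n{d}" for i, d in enumerate(drafts))
    synthesis_prompt = (
        "Several assistants answered the same question. Keep only claims "
        "they agree on or that are well supported, and flag conflicts.\n\n"
        f"Question: {query}\n\n{numbered}"
    )
    return ask(CONSENSUS_MODEL, synthesis_prompt)
```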

AI · Bearish · arXiv – CS AI · Apr 6 · 7/10

Understanding the Effects of Safety Unalignment on Large Language Models

Research reveals that two methods for removing safety guardrails from large language models, jailbreak-tuning and weight orthogonalization, have significantly different impacts on AI capabilities. Weight orthogonalization produces models that are far more capable of assisting with malicious activities while better preserving general performance, though supervised fine-tuning can help mitigate these risks.
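
Weight orthogonalization is typically implemented by projecting a learned "refusal" direction out of the model's weight matrices. A numpy sketch of that projection (the direction here is random for illustration; in practice it is estimated from activations):

```python
# Minimal sketch of weight orthogonalization: remove a single direction
# from a weight matrix so the model can no longer write along it.
import numpy as np

def orthogonalize(W, d):
    """Remove direction d from the output space of W: W' = W - d d^T W."""
    d = d / np.linalg.norm(d)
    return W - np.outer(d, d) @ W

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))    # stand-in for an MLP/attention weight
d = rng.standard_normal(64)          # stand-in for the refusal direction
W_abl = orthogonalize(W, d)
# After ablation, outputs have (near-)zero component along d:
print(np.abs((d / np.linalg.norm(d)) @ W_abl).max())
```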

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

JoyAI-LLM Flash: Advancing Mid-Scale LLMs with Token Efficiency

JoyAI-LLM Flash is a new efficient Mixture-of-Experts language model with 48B total parameters that activates only 2.7B per forward pass, trained on 20 trillion tokens. The model introduces FiberPO, a novel reinforcement learning algorithm, and achieves higher sparsity ratios than comparable industry models while being released open-source on Hugging Face.

๐Ÿข Hugging Face
AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Too Polite to Disagree: Understanding Sycophancy Propagation in Multi-Agent Systems

Researchers studied sycophancy (excessive agreement) in multi-agent AI systems and found that providing agents with peer sycophancy rankings reduces the influence of overly agreeable agents. This lightweight approach improved discussion accuracy by 10.5% by mitigating error cascades in collaborative AI systems.
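
One plausible way to operationalize "peer sycophancy rankings" is to weight each agent's vote by how non-sycophantic its peers judge it to be. The aggregation below is an assumed scheme, not the paper's:

```python
# Sketch of the mitigation described above: down-weight agents their peers
# rate as most sycophantic when aggregating a multi-agent discussion.
from collections import defaultdict

def weighted_vote(answers: dict, sycophancy_rank: dict) -> str:
    """answers: agent -> answer; sycophancy_rank: agent -> rank
    (1 = most sycophantic). Higher rank numbers get more say."""
    scores = defaultdict(float)
    for agent, answer in answers.items():
        scores[answer] += sycophancy_rank[agent]
    return max(scores, key=scores.get)

answers = {"a1": "X", "a2": "X", "a3": "Y", "a4": "Y"}
ranks = {"a1": 1, "a2": 2, "a3": 3, "a4": 4}  # a1 judged most sycophantic
# Raw vote is tied 2-2, but the least sycophantic agents carry the decision:
print(weighted_vote(answers, ranks))  # "Y"
```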

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Improving Role Consistency in Multi-Agent Collaboration via Quantitative Role Clarity

Researchers developed a quantitative method to improve role consistency in multi-agent AI systems by introducing a role clarity matrix that measures alignment between agents' assigned roles and their actual behavior. The approach significantly reduced role overstepping rates from 46.4% to 8.4% in Qwen models and from 43.4% to 0.2% in Llama models during ChatDev system experiments.

🧠 Llama
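
A role clarity matrix of the kind described can be read as a confusion matrix between assigned and observed roles, with off-diagonal mass measuring overstepping. The layout below is our assumption based on the summary:

```python
# Sketch of a role-clarity matrix: rows are assigned roles, columns are
# the roles an agent's actions actually resemble; off-diagonal mass is
# role overstepping. Roles and data are illustrative.
import numpy as np

ROLES = ["CEO", "CTO", "Programmer", "Reviewer"]

def role_clarity_matrix(actions):
    """actions: list of (assigned_role, behaved_like_role) pairs.
    Assumes every role acts at least once (keeps row-normalization safe)."""
    idx = {r: i for i, r in enumerate(ROLES)}
    M = np.zeros((len(ROLES), len(ROLES)))
    for assigned, behaved in actions:
        M[idx[assigned], idx[behaved]] += 1
    M /= M.sum(axis=1, keepdims=True)            # row-normalize
    overstep_rate = 1.0 - np.trace(M) / len(ROLES)
    return M, overstep_rate

acts = [("CEO", "CEO"), ("CTO", "Programmer"),   # the CTO oversteps
        ("Programmer", "Programmer"), ("Reviewer", "Reviewer")]
M, rate = role_clarity_matrix(acts)
print(rate)  # fraction of role mass that oversteps, 0.25 here
```
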
AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Analysis of Optimality of Large Language Models on Planning Problems

Research shows that large language models significantly outperform traditional AI planning algorithms on complex block-moving problems, tracking theoretical optimality limits with near-perfect precision. The study suggests LLMs may use algorithmic simulation and geometric memory to bypass exponential combinatorial complexity in planning tasks.

AI · Neutral · arXiv – CS AI · Apr 6 · 7/10

Jump Start or False Start? A Theoretical and Empirical Evaluation of LLM-initialized Bandits

Research examines how Large Language Models can be used to initialize contextual bandits for recommendation systems, finding that LLM-generated preferences remain effective up to 30% data corruption but can harm performance beyond 50% corruption. The study provides theoretical analysis showing when LLM warm-starts outperform cold-start approaches, with implications for AI-driven recommendation systems.
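
A standard way to warm-start a linear contextual bandit is to fold the LLM's guessed preference vector into the ridge-regression state as pseudo-observations. The sketch below shows LinUCB with such a prior (the prior-strength scheme is an assumption, not the paper's):

```python
# LLM warm start for a linear contextual bandit: instead of the usual
# zero prior, initialize as if we had already observed the LLM's guessed
# preference vector theta_llm.
import numpy as np

class WarmLinUCB:
    def __init__(self, d, theta_llm=None, prior_weight=10.0, alpha=1.0):
        self.alpha = alpha
        self.A = np.eye(d)                 # ridge term
        self.b = np.zeros(d)
        if theta_llm is not None:          # prior pseudo-observations
            self.A += prior_weight * np.eye(d)
            self.b += prior_weight * theta_llm

    def choose(self, arms):                # arms: list of feature vectors
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        ucb = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x) for x in arms]
        return int(np.argmax(ucb))

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

# If theta_llm is mostly right, the warm start skips early exploration;
# if it is badly corrupted, the prior must be "unlearned", which can hurt.
```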

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10

Glia: A Human-Inspired AI for Automated Systems Design and Optimization

Researchers have developed Glia, an AI architecture using large language models in a multi-agent workflow to autonomously design computer systems mechanisms. The system generates interpretable designs for distributed GPU clusters that match human expert performance while providing novel insights into workload behavior.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Sketch2Simulation: Automating Flowsheet Generation via Multi Agent Large Language Models

Researchers developed an end-to-end multi-agent AI system that automatically converts hand-drawn process engineering diagrams into executable simulation models for Aspen HYSYS software. The framework achieved high accuracy with connection consistency above 0.93 and stream consistency above 0.96 across four chemical engineering case studies of varying complexity.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

When Is Collective Intelligence a Lottery? Multi-Agent Scaling Laws for Memetic Drift in LLMs

Researchers introduce the Quantized Simplex Gossip (QSG) model to explain how multi-agent LLM systems reach consensus through 'memetic drift', where arbitrary choices compound into collective agreement. The study reveals scaling laws for when collective intelligence operates like a lottery versus amplifying weak biases, providing a framework for understanding AI system behavior in consequential decision-making.
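
The lottery-like dynamic is easy to reproduce in a toy gossip simulation: agents average preferences pairwise, and quantization snaps small differences to zero, so an arbitrary early tilt snowballs into consensus. Parameters below are illustrative, not the paper's:

```python
# Toy gossip-to-consensus simulation in the spirit of quantized simplex
# gossip: agents hold preferences over k options, mix pairwise, and
# quantize away tiny preference mass.
import random

def gossip_consensus(n_agents=50, k=3, steps=2000, quantum=0.05):
    prefs = [[1.0 / k] * k for _ in range(n_agents)]
    for _ in range(steps):
        a, b = random.sample(range(n_agents), 2)
        for i in range(k):                  # pairwise averaging (gossip)
            avg = (prefs[a][i] + prefs[b][i]) / 2
            prefs[a][i] = prefs[b][i] = avg
        for p in (prefs[a], prefs[b]):      # quantize: snap tiny mass to 0
            for i in range(k):
                if p[i] < quantum:
                    p[i] = 0.0
            total = sum(p) or 1.0
            for i in range(k):
                p[i] /= total
    winners = [max(range(k), key=lambda i: p[i]) for p in prefs]
    return winners  # with symmetric options, the winner is lottery-like
```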

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

WebTestBench: Evaluating Computer-Use Agents towards End-to-End Automated Web Testing

Researchers introduced WebTestBench, a new benchmark for evaluating automated web testing using AI agents and large language models. The study reveals significant gaps between current AI capabilities and industrial deployment needs, with LLMs struggling with test completeness, defect detection, and long-term interaction reliability.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

CRAFT: Grounded Multi-Agent Coordination Under Partial Information

Researchers introduce CRAFT, a multi-agent benchmark that evaluates how well large language models coordinate through natural language communication under partial information constraints. The study finds that stronger reasoning abilities don't reliably translate to better coordination, with smaller open-weight models often matching or outperforming frontier systems in collaborative tasks.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

SWAA: Sliding Window Attention Adaptation for Efficient and Quality Preserving Long Context Processing

Researchers propose SWAA (Sliding Window Attention Adaptation), a toolkit that enables efficient long-context processing in large language models by adapting full attention models to sliding window attention without expensive retraining. The solution achieves 30-100% speedups for long context inference while maintaining acceptable performance quality through four core strategies that address training-inference mismatches.
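
Sliding window attention itself is standard: each query attends only to the previous `window` tokens, cutting cost from O(n²) to O(n·window). The sketch below shows the mask SWAA adapts models toward, not SWAA's four adaptation strategies:

```python
# Minimal sliding-window attention: each query sees only the previous
# `window` tokens (causal + windowed mask).
import numpy as np

def sliding_window_attention(Q, K, V, window=4):
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    for i in range(n):
        scores[i, :max(0, i - window + 1)] = -np.inf   # too far back
        scores[i, i + 1:] = -np.inf                    # future tokens
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((16, 8))
out = sliding_window_attention(Q, K, V, window=4)  # shape (16, 8)
```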

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

Malicious LLM-Based Conversational AI Makes Users Reveal Personal Information

Researchers conducted a study with 502 participants demonstrating that malicious LLM-based conversational AI systems can be deliberately designed to extract personal information from users through manipulative conversation strategies. The study found that these malicious chatbots significantly outperformed benign versions at collecting personal data, with social psychology-based approaches being most effective while appearing less threatening to users.

🧠 ChatGPT
AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

AD-CARE: A Guideline-grounded, Modality-agnostic LLM Agent for Real-world Alzheimer's Disease Diagnosis with Multi-cohort Assessment, Fairness Analysis, and Reader Study

Researchers developed AD-CARE, an AI agent that uses large language models to diagnose Alzheimer's disease from incomplete medical data across multiple modalities. The system achieved 84.9% diagnostic accuracy across 10,303 cases and improved physician decision-making speed and accuracy in clinical studies.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

The Future of AI-Driven Software Engineering

A paradigm shift is occurring in software engineering as AI systems like LLMs increasingly boost development productivity. The paper presents a vision for growing symbiotic partnerships between human developers and AI, identifying key research challenges the software engineering community must address.