y0news

#llm News & Analysis

944 articles tagged with #llm. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Steering at the Source: Style Modulation Heads for Robust Persona Control

Researchers have identified a method to control Large Language Model behavior by targeting only three specific attention heads, called 'Style Modulation Heads', rather than the entire residual stream. This approach maintains model coherence while enabling precise persona and style control, offering a more efficient alternative to fine-tuning.

AI × Crypto · Bullish · arXiv – CS AI · Mar 17 · 7/10

Benchmarking Zero-Shot Reasoning Approaches for Error Detection in Solidity Smart Contracts

Researchers benchmarked state-of-the-art LLMs for detecting vulnerabilities in Solidity smart contracts using zero-shot prompting strategies. The study found that Chain-of-Thought and Tree-of-Thought approaches significantly improved recall (95-99%) but reduced precision, while Claude 3 Opus achieved the best performance with a 90.8 F1-score in vulnerability classification.

🧠 Claude
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Justitia: Fair and Efficient Scheduling of Task-parallel LLM Agents with Selective Pampering

Justitia is a new scheduling system for task-parallel LLM agents that optimizes GPU server performance through selective resource allocation based on completion order prediction. The system uses memory-centric cost quantification and virtual-time fair queuing to achieve both efficiency and fairness in LLM serving environments.
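Virtual-time fair queuing, which the summary names, is a classic scheduling idea that can be illustrated generically. This is a minimal sketch under assumed semantics, not Justitia's implementation: each agent's tasks are served in order of virtual finish time, computed from task cost scaled by the agent's weight.

```python
import heapq

class VirtualTimeFairQueue:
    """Minimal weighted fair queue: serve tasks in order of virtual finish time."""

    def __init__(self):
        self.heap = []            # entries: (virtual_finish, seq, agent, task)
        self.vtime = 0.0          # global virtual clock
        self.agent_finish = {}    # last virtual finish time per agent
        self.seq = 0              # insertion counter, breaks ties

    def enqueue(self, agent, task, cost, weight=1.0):
        # A task's virtual start is the later of the global clock and the
        # agent's previous finish; its finish adds cost scaled by 1/weight.
        start = max(self.vtime, self.agent_finish.get(agent, 0.0))
        finish = start + cost / weight
        self.agent_finish[agent] = finish
        heapq.heappush(self.heap, (finish, self.seq, agent, task))
        self.seq += 1

    def dequeue(self):
        finish, _, agent, task = heapq.heappop(self.heap)
        self.vtime = finish       # advance the virtual clock
        return agent, task
```

With equal weights and equal costs, two agents that each enqueue two tasks are served alternately rather than in submission order, which is the fairness property virtual time buys.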

๐Ÿข Meta
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

APEX-Searcher: Augmenting LLMs' Search Capabilities through Agentic Planning and Execution

Researchers introduce APEX-Searcher, a new framework that enhances large language models' search capabilities through a two-stage approach combining reinforcement learning for strategic planning and supervised fine-tuning for execution. The system addresses limitations in multi-hop question answering by decoupling retrieval processes into planning and execution phases, showing significant improvements across multiple benchmarks.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Nemotron-CrossThink: Scaling Self-Learning beyond Math Reasoning

Researchers at NVIDIA developed Nemotron-CrossThink, a new AI framework that uses reinforcement learning with multi-domain data to improve language model reasoning across diverse fields beyond just mathematics. The system shows significant performance improvements on both mathematical and non-mathematical reasoning benchmarks while using 28% fewer tokens for correct answers.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

Why the Valuable Capabilities of LLMs Are Precisely the Unexplainable Ones

A research paper argues that the most valuable capabilities of large language models are precisely those that cannot be captured by human-readable rules. The thesis is supported by a proof showing that if LLM capabilities could be fully rule-encoded, they would be equivalent to expert systems, which history has shown to be weaker than LLMs.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Incentivizing Strong Reasoning from Weak Supervision

Researchers have developed a novel method to enhance large language model reasoning capabilities using supervision from weaker models, achieving 94% of expensive reinforcement learning gains at a fraction of the cost. This weak-to-strong supervision paradigm offers a promising alternative to costly traditional methods for improving LLM reasoning performance.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

SAGE: Multi-Agent Self-Evolution for LLM Reasoning

Researchers introduced SAGE, a multi-agent framework that improves large language model reasoning through self-evolution using four specialized agents. The system achieved significant performance gains on coding and mathematics benchmarks without requiring large human-labeled datasets.

AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

Large Language Models Reproduce Racial Stereotypes When Used for Text Annotation

A comprehensive study of 19 large language models reveals systematic racial bias in automated text annotation, with over 4 million judgments showing LLMs consistently reproduce harmful stereotypes based on names and dialect. The research demonstrates that AI models rate texts with Black-associated names as more aggressive and those written in African American Vernacular English as less professional and more toxic.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

Right for the Wrong Reasons: Epistemic Regret Minimization for Causal Rung Collapse in LLMs

Researchers identify a fundamental flaw in large language models called 'Rung Collapse' where AI systems achieve correct answers through flawed causal reasoning that fails under distribution shifts. They propose Epistemic Regret Minimization (ERM) as a solution that penalizes incorrect reasoning processes independently of task success, showing 53-59% recovery of reasoning errors in experiments across six frontier LLMs.

🧠 GPT-5
AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

CRASH: Cognitive Reasoning Agent for Safety Hazards in Autonomous Driving

Researchers introduced CRASH, an LLM-based agent that analyzes autonomous vehicle incidents from NHTSA data covering 2,168 cases and 80+ million miles driven between 2021-2025. The system achieved 86% accuracy in fault attribution and found that 64% of incidents stem from perception or planning failures, with rear-end collisions comprising 50% of all reported incidents.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Reducing Cost of LLM Agents with Trajectory Reduction

Researchers introduce AgentDiet, a trajectory reduction technique that cuts computational costs for LLM-based agents by 39.9%-59.7% in input tokens and 21.1%-35.9% in total costs while maintaining performance. The approach removes redundant and expired information from agent execution trajectories during inference time.
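The general idea of trajectory reduction can be sketched simply; the rules below (dropping verbatim duplicates and stale tool observations) are hypothetical illustrations, not AgentDiet's actual algorithm.

```python
def reduce_trajectory(messages, max_age=10):
    """Prune an agent trajectory before re-sending it to the model.

    messages: list of dicts with 'step', 'role', 'content'.
    max_age: steps after which a tool observation counts as expired.
    Illustrative pruning rules only, not the paper's method.
    """
    latest = max(m["step"] for m in messages)
    seen = set()
    kept = []
    for m in messages:
        # Keep non-observation messages (instructions, plans) untouched.
        if m["role"] != "observation":
            kept.append(m)
            continue
        # Drop observations older than the freshness window.
        if latest - m["step"] > max_age:
            continue
        # Drop verbatim duplicate observations.
        if m["content"] in seen:
            continue
        seen.add(m["content"])
        kept.append(m)
    return kept
```

Since every pruned message is one the model would otherwise re-read on each turn, savings compound with trajectory length, which is consistent with the token reductions the summary reports.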

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Inference-time Alignment in Continuous Space

Researchers propose Simple Energy Adaptation (SEA), a new algorithm for aligning large language models with human feedback at inference time. SEA uses gradient-based sampling in continuous latent space rather than searching discrete response spaces, achieving up to 77.51% improvement on AdvBench and 16.36% on MATH benchmarks.
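Gradient-based sampling in a continuous space typically means Langevin dynamics: stepping along the gradient of a log-density plus Gaussian noise. This is a generic sketch of that primitive, not the paper's SEA algorithm, and the Gaussian target is an assumed stand-in for an alignment energy.

```python
import numpy as np

def langevin_step(x, grad_log_p, step_size, rng):
    """One unadjusted Langevin step: x' = x + (eps/2)*grad + sqrt(eps)*noise."""
    noise = rng.standard_normal(x.shape)
    return x + 0.5 * step_size * grad_log_p(x) + np.sqrt(step_size) * noise

def sample(grad_log_p, x0, steps=1000, step_size=0.01, seed=0):
    """Run a Langevin chain; returns an approximate sample from p."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = langevin_step(x, grad_log_p, step_size, rng)
    return x

# Example target: a standard Gaussian, whose score is grad log p(x) = -x.
x = sample(lambda v: -v, [3.0])
```

The appeal over discrete search is that each step uses gradient information about the whole response representation instead of enumerating candidate token sequences.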

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Boosting Large Language Models with Mask Fine-Tuning

Researchers introduce Mask Fine-Tuning (MFT), a novel approach that improves large language model performance by applying binary masks to optimized models without updating weights. The method achieves consistent performance gains across different domains and model architectures, with average improvements of 2.70/4.15 in IFEval benchmarks for LLaMA models.
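The core operation, applying a binary mask to frozen weights, is simple to illustrate. MFT learns its masks during training; the mask below is an arbitrary example chosen only to show the mechanics.

```python
import numpy as np

def apply_weight_mask(weights, mask):
    """Zero out masked entries of a frozen weight matrix.

    weights: original parameter matrix (left unchanged).
    mask: binary matrix of the same shape; 1 keeps a weight, 0 prunes it.
    Returns the masked weights used at inference time.
    """
    assert weights.shape == mask.shape
    return weights * mask

W = np.array([[0.5, -1.2],
              [0.3,  0.8]])
M = np.array([[1, 0],
              [1, 1]])                 # illustrative mask, not a learned one
W_masked = apply_weight_mask(W, M)     # second column of row 0 is zeroed
```

Because the underlying weights are never updated, the only trainable state is the mask itself, which is what distinguishes this family of methods from ordinary fine-tuning.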

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

SCAN: Sparse Circuit Anchor Interpretable Neuron for Lifelong Knowledge Editing

Researchers introduce SCAN, a new framework for editing Large Language Models that prevents catastrophic forgetting during sequential knowledge updates. The method uses sparse circuit manipulation instead of dense parameter changes, maintaining model performance even after 3,000 sequential edits across major models like Gemma2, Qwen3, and Llama3.1.

🧠 Llama
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Brain-Inspired Graph Multi-Agent Systems for LLM Reasoning

Researchers propose BIGMAS (Brain-Inspired Graph Multi-Agent Systems), a new architecture that organizes specialized LLM agents in dynamic graphs with centralized coordination to improve complex reasoning tasks. The system outperformed existing approaches including ReAct and Tree of Thoughts across multiple reasoning benchmarks, demonstrating that multi-agent design provides gains complementary to model-level improvements.

AI · Bearish · arXiv – CS AI · Mar 17 · 7/10

Faithful or Just Plausible? Evaluating the Faithfulness of Closed-Source LLMs in Medical Reasoning

Researchers evaluated the faithfulness of closed-source AI models like ChatGPT and Gemini in medical reasoning, finding that their explanations often appear plausible but don't reflect actual reasoning processes. The study revealed these models frequently incorporate external hints without acknowledgment and their chain-of-thought reasoning doesn't causally drive predictions, raising safety concerns for medical applications.

🧠 ChatGPT · 🧠 Gemini
AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

CCTU: A Benchmark for Tool Use under Complex Constraints

Researchers introduce CCTU, a new benchmark for evaluating large language models' ability to use tools under complex constraints. The study reveals that even state-of-the-art LLMs achieve less than 20% task completion rates when strict constraint adherence is required, with models violating constraints in over 50% of cases.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Mixture-of-Depths Attention

Researchers introduce Mixture-of-Depths Attention (MoDA), a new mechanism for large language models that allows attention heads to access key-value pairs from both current and preceding layers to combat signal degradation in deeper models. Testing on 1.5B-parameter models shows MoDA improves perplexity by 0.2 and downstream task performance by 2.11% with only 3.7% computational overhead while maintaining 97.3% of FlashAttention-2's efficiency.

๐Ÿข Perplexity
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

To See is Not to Master: Teaching LLMs to Use Private Libraries for Code Generation

Researchers introduced PriCoder, a new approach that improves Large Language Models' ability to generate code using private library APIs by over 20%. The method uses automatically synthesized training data through graph-based operators to teach LLMs private library usage, addressing a key limitation in current AI coding capabilities.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

POLCA: Stochastic Generative Optimization with LLM

Researchers introduce POLCA (Prioritized Optimization with Local Contextual Aggregation), a new framework that uses large language models as optimizers for complex systems like AI agents and code generation. The method addresses stochastic optimization challenges through priority queuing and meta-learning, demonstrating superior performance across multiple benchmarks including agent optimization and CUDA kernel generation.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

$p^2$RAG: Privacy-Preserving RAG Service Supporting Arbitrary Top-$k$ Retrieval

Researchers propose p²RAG, a new privacy-preserving Retrieval-Augmented Generation system that supports arbitrary top-k retrieval while being 3-300x faster than existing solutions. The system uses an interactive bisection method instead of sorting and employs secret sharing across two servers to protect user prompts and database content.
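The bisection idea, finding a score threshold that admits exactly k items without sorting the full score list, can be sketched in the clear. The paper runs these comparisons under secret sharing across two servers; this plaintext version only shows why bisection replaces sorting.

```python
def topk_threshold(scores, k, iters=50):
    """Bisect for a threshold t with exactly k scores >= t (no full sort).

    Assumes the scores around the k-th boundary are distinct, so a valid
    threshold interval exists. Plaintext illustration only.
    """
    lo, hi = min(scores), max(scores)
    for _ in range(iters):
        mid = (lo + hi) / 2
        count = sum(s >= mid for s in scores)  # one pass of comparisons
        if count > k:
            lo = mid   # threshold too low: too many documents admitted
        elif count < k:
            hi = mid   # threshold too high: too few documents admitted
        else:
            return mid
    return lo

scores = [0.1, 0.9, 0.4, 0.7, 0.2]
t = topk_threshold(scores, k=2)
top2 = [s for s in scores if s >= t]
```

Each iteration needs only comparisons and a count, operations that are cheap under secret sharing, whereas a full oblivious sort is far more expensive.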

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Memory as Asset: From Agent-centric to Human-centric Memory Management

Researchers introduce Memory-as-Asset, a new paradigm for human-centric artificial general intelligence that treats personal memory as a digital asset. The framework features three key components: human-centric memory ownership, collaborative knowledge formation, and collective memory evolution, supported by a three-layer infrastructure including decentralized memory exchange networks.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10

LLMs as Signal Detectors: Sensitivity, Bias, and the Temperature-Criterion Analogy

Researchers applied Signal Detection Theory to analyze three large language models across 168,000 trials, finding that the temperature parameter shifts both sensitivity and response bias simultaneously. The study reveals that traditional calibration metrics miss important diagnostic information that SDT's full parametric framework can provide.