y0news

#language-models News & Analysis

350 articles tagged with #language-models. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

AdaMCoT: Rethinking Cross-Lingual Factual Reasoning through Adaptive Multilingual Chain-of-Thought

Researchers introduce AdaMCoT, a framework that improves multilingual reasoning in large language models by dynamically routing intermediate thoughts through optimal 'thinking languages' before generating target-language responses. The approach achieves significant performance gains in low-resource languages without requiring additional pretraining, addressing a key limitation in current multilingual AI systems.
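The routing idea above can be sketched as a two-call pipeline: score candidate thinking languages for the query, reason in the best-scoring one, then answer in the target language. The `scorer` and `model` interfaces and the prompt wording are illustrative assumptions, not AdaMCoT's actual API.

```python
# Hedged sketch of adaptive thinking-language routing (interfaces assumed).
def adaptive_mcot(model, scorer, query, target_lang, candidates):
    # Pick the language the router expects to reason best in for this query.
    think_lang = max(candidates, key=lambda lang: scorer(query, lang))
    chain = model(f"Reason step by step in {think_lang}: {query}")
    return model(f"Answer in {target_lang} using this reasoning: {chain}")
```

With a router that favors a high-resource language, a low-resource query is reasoned through in that language before the final target-language answer is produced.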

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

How Transformers Learn to Plan via Multi-Token Prediction

Researchers demonstrate that multi-token prediction (MTP) outperforms standard next-token prediction (NTP) for training language models on reasoning tasks like planning and pathfinding. Through theoretical analysis of simplified Transformers, they reveal that MTP enables a reverse reasoning process where models first identify end states then reconstruct paths backward, suggesting MTP induces more interpretable and robust reasoning circuits.
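The MTP objective can be sketched as an average of per-head cross-entropies over the next k ground-truth tokens, with standard next-token prediction as the k=1 special case. The distribution format below is a toy assumption; real implementations work on batched logits.

```python
# Minimal sketch of a multi-token-prediction (MTP) loss, assuming a model
# that exposes k prediction heads per position (names are illustrative).
import math

def cross_entropy(probs, target):
    # Negative log-likelihood of the target token under one head's distribution.
    return -math.log(probs[target])

def mtp_loss(head_probs, future_tokens):
    """head_probs[i] is head i's distribution over the vocabulary;
    future_tokens[i] is the ground-truth token i+1 steps ahead.
    Next-token prediction (NTP) is the k=1 special case."""
    k = len(future_tokens)
    return sum(cross_entropy(head_probs[i], future_tokens[i]) for i in range(k)) / k
```

Because every head is supervised on a later position, the model is pushed to represent where the sequence is going, which is the property the paper links to backward path reconstruction.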

AI · Neutral · arXiv – CS AI · 1d ago · 7/10

LASA: Language-Agnostic Semantic Alignment at the Semantic Bottleneck for LLM Safety

Researchers have identified a critical vulnerability in large language models where safety guardrails fail across low-resource languages despite strong performance in high-resource ones. The team proposes LASA (Language-Agnostic Semantic Alignment), a new method that anchors safety protocols at the semantic bottleneck layer, dramatically reducing attack success rates from 24.7% to 2.8% on tested models.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Instructions are all you need: Self-supervised Reinforcement Learning for Instruction Following

Researchers propose a label-free self-supervised reinforcement learning framework that enables language models to follow complex multi-constraint instructions without external supervision. The approach derives reward signals directly from instructions and uses constraint decomposition strategies to address sparse reward challenges, demonstrating strong performance across both in-domain and out-of-domain instruction-following tasks.
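The constraint-decomposition idea can be sketched as mapping an instruction to a list of checkable predicates and scoring a response by the fraction it satisfies, giving a dense signal instead of an all-or-nothing reward. The checkers below are toy stand-ins, not the paper's verifiers.

```python
# Hedged sketch: derive a dense reward directly from an instruction by
# decomposing it into independently checkable constraints (toy checkers).

def decompose(instruction):
    """Map an instruction to a list of predicate functions over the response."""
    checks = []
    if "lowercase" in instruction:
        checks.append(lambda r: r == r.lower())
    if "under 10 words" in instruction:
        checks.append(lambda r: len(r.split()) < 10)
    if "mention Paris" in instruction:
        checks.append(lambda r: "paris" in r.lower())
    return checks

def reward(instruction, response):
    checks = decompose(instruction)
    if not checks:
        return 0.0
    # Fraction of satisfied constraints: partial credit instead of
    # a sparse all-or-nothing signal.
    return sum(c(response) for c in checks) / len(checks)
```

A response that meets two of three constraints still earns 2/3 reward, which is what makes the signal learnable under RL.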

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Reasoning Graphs: Self-Improving, Deterministic RAG through Evidence-Centric Feedback

Researchers introduce reasoning graphs, a persistent knowledge structure that improves language model reasoning accuracy by storing and reusing chains of thought tied to evidence items. The system achieves 47% error reduction on multi-hop questions and maintains deterministic outputs without model retraining, using only context engineering.
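The reuse mechanism can be sketched as a persistent store that keys a reasoning chain by the set of evidence items it was grounded in, so a repeat question replays the stored chain deterministically instead of resampling one. The data structure below is illustrative, not the paper's format.

```python
# Hedged sketch of evidence-keyed reasoning reuse (structure assumed).

class ReasoningGraph:
    def __init__(self):
        # frozenset of evidence ids -> stored chain of thought
        self.chains = {}

    def store(self, evidence_ids, chain):
        self.chains[frozenset(evidence_ids)] = chain

    def lookup(self, evidence_ids):
        # Deterministic replay: same evidence set, same chain, no resampling;
        # order of evidence ids does not matter.
        return self.chains.get(frozenset(evidence_ids))
```

Keying on a frozenset makes the lookup order-independent, which matches the deterministic-output claim: identical evidence always maps to one stored chain.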

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Introspective Diffusion Language Models

Researchers introduce Introspective Diffusion Language Models (I-DLM), a new approach that combines the parallel generation speed of diffusion models with the quality of autoregressive models by ensuring models verify their own outputs. I-DLM achieves performance matching conventional large language models while delivering 3x higher throughput, potentially reshaping how AI systems are deployed at scale.

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Think in Sentences: Explicit Sentence Boundaries Enhance Language Model's Capabilities

Researchers demonstrate that inserting sentence boundary delimiters in LLM inputs significantly enhances model performance across reasoning tasks, with improvements up to 12.5% on specific benchmarks. This technique leverages the natural sentence-level structure of human language to enable better processing during inference, tested across model scales from 7B to 600B parameters.
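The preprocessing step described is simple enough to sketch directly: split the input on sentence-ending punctuation and rejoin with an explicit boundary token. The delimiter string here is an assumption; the paper's exact token may differ.

```python
# Minimal sketch of sentence-boundary insertion before inference.
import re

SENT_DELIM = "<sent>"  # hypothetical boundary marker

def mark_sentences(text):
    # Split on sentence-ending punctuation followed by whitespace,
    # then rejoin with the explicit delimiter.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return f" {SENT_DELIM} ".join(sentences)
```

The model then sees the sentence structure explicitly rather than having to infer it from punctuation alone.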

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

AI Achieves a Perfect LSAT Score

A frontier language model has achieved a perfect score on the LSAT, marking the first documented instance of an AI system answering all questions without error on the standardized law school admission test. Research shows that extended reasoning and thinking processes are critical to this performance, with ablation studies revealing up to 8 percentage point drops in accuracy when these mechanisms are removed.

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Learning and Enforcing Context-Sensitive Control for LLMs

Researchers introduce a framework that automatically learns context-sensitive constraints from LLM interactions, eliminating the need for manual specification while ensuring perfect constraint adherence during generation. The method enables even 1B-parameter models to outperform larger models and state-of-the-art reasoning systems in constraint-compliant generation.

AI · Bearish · arXiv – CS AI · 2d ago · 7/10

Dead Cognitions: A Census of Misattributed Insights

Researchers identify 'attribution laundering,' a failure mode in AI chat systems where models perform cognitive work but rhetorically credit users for the insights, systematically obscuring this misattribution and eroding users' ability to assess their own contributions. The phenomenon operates across individual interactions and institutional scales, reinforced by interface design and adoption-focused incentives rather than accountability mechanisms.

🧠 Claude
AI · Bearish · arXiv – CS AI · 2d ago · 7/10

Too Nice to Tell the Truth: Quantifying Agreeableness-Driven Sycophancy in Role-Playing Language Models

Researchers at y0.exchange have quantified how agreeableness in AI persona role-play directly correlates with sycophantic behavior, finding that 9 of 13 language models exhibit statistically significant positive correlations between persona agreeableness and tendency to validate users over factual accuracy. The study tested 275 personas against 4,950 prompts across 33 topic categories, revealing effect sizes as large as Cohen's d = 2.33, with implications for AI safety and alignment in conversational agent deployment.

AI · Neutral · arXiv – CS AI · 2d ago · 7/10

From GPT-3 to GPT-5: Mapping their capabilities, scope, limitations, and consequences

A comprehensive comparative study traces the evolution of OpenAI's GPT models from GPT-3 through GPT-5, revealing that successive generations represent far more than incremental capability improvements. The research demonstrates a fundamental shift from simple text predictors to integrated, multimodal systems with tool access and workflow capabilities, while persistent limitations like hallucination and benchmark fragility remain largely unresolved across all versions.

🧠 GPT-4 · 🧠 GPT-5
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Retrieval as Generation: A Unified Framework with Self-Triggered Information Planning

Researchers introduce GRIP, a unified framework that integrates retrieval decisions directly into language model generation through control tokens, eliminating the need for external retrieval controllers. The system enables models to autonomously decide when to retrieve information, reformulate queries, and terminate retrieval within a single autoregressive process, achieving competitive performance with GPT-4o while using substantially fewer parameters.
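The single-process control flow described above can be sketched as a decoding loop that watches for control tokens emitted by the model itself. The token names (`<search>`, `<done>`) and the model/retriever call signatures are assumptions for illustration, not GRIP's actual vocabulary.

```python
# Hedged sketch of retrieval-as-generation via model-emitted control tokens.

def generate_with_retrieval(model, retriever, prompt, max_steps=50):
    context = prompt
    for _ in range(max_steps):
        step = model(context)          # next segment, may end in a control token
        if step.endswith("<search>"):
            query = step[: -len("<search>")].strip()   # model-written query
            context += " " + retriever(query)          # splice evidence back in
        elif step.endswith("<done>"):
            return context + " " + step[: -len("<done>")].strip()
        else:
            context += " " + step
    return context
```

Because the retrieval decision is just another token the model generates, no external controller decides when or what to retrieve.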

🧠 GPT-4
AI · Neutral · arXiv – CS AI · 2d ago · 7/10

LiveCLKTBench: Towards Reliable Evaluation of Cross-Lingual Knowledge Transfer in Multilingual LLMs

Researchers introduce LiveCLKTBench, an automated benchmark for evaluating how well multilingual large language models transfer knowledge across languages, addressing the challenge of distinguishing genuine cross-lingual transfer from pre-training artifacts. Testing across five languages reveals that transfer effectiveness depends heavily on linguistic distance, model scale, and domain, with improvements plateauing in larger models.

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

FS-DFM: Fast and Accurate Long Text Generation with Few-Step Diffusion Language Models

Researchers introduce FS-DFM, a discrete flow-matching model that generates long text 128x faster than standard diffusion models while maintaining quality parity. The breakthrough uses few-step sampling with teacher guidance distillation, achieving in 8 steps what previously required 1,024 evaluations.

๐Ÿข Perplexity
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

GIANTS: Generative Insight Anticipation from Scientific Literature

Researchers introduce GIANTS, a framework for training language models to anticipate scientific breakthroughs by synthesizing insights from foundational papers. The team releases GiantsBench, a 17k-example benchmark across eight scientific domains, and GIANTS-4B, a 4B-parameter model that outperforms larger proprietary baselines by 34% while generalizing to unseen research areas.

AI · Bearish · arXiv – CS AI · 2d ago · 7/10

Powerful Training-Free Membership Inference Against Autoregressive Language Models

Researchers have developed EZ-MIA, a training-free membership inference attack that dramatically improves detection of memorized data in fine-tuned language models by analyzing probability shifts at error positions. The method achieves 3.8x higher detection rates than previous approaches on GPT-2 and demonstrates that privacy risks in fine-tuned models are substantially greater than previously understood.

🧠 Llama
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

EdgeCIM: A Hardware-Software Co-Design for CIM-Based Acceleration of Small Language Models

EdgeCIM presents a specialized hardware-software framework designed to accelerate Small Language Model inference on edge devices by addressing memory-bandwidth bottlenecks inherent in autoregressive decoding. The system achieves significant performance and energy improvements over existing mobile accelerators, reaching 7.3x higher throughput than NVIDIA Orin Nano on 1B-parameter models.

๐Ÿข Nvidia
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Three Roles, One Model: Role Orchestration at Inference Time to Close the Performance Gap Between Small and Large Agents

Researchers demonstrate that inference-time scaffolding can double the performance of small 8B language models on complex tool-use tasks without additional training, by deploying the same frozen model in three specialized roles: summarization, reasoning, and code correction. On a single 24GB GPU, this approach enables an 8B model to match or exceed much larger systems like DeepSeek-Coder 33B, suggesting efficient deployment paths for capable AI agents on modest hardware.
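The three-role scaffold can be sketched as one frozen model called with three different role prompts, looped until the correction role approves. The role prompts and the `llm(system, user)` call signature are illustrative assumptions.

```python
# Hedged sketch of inference-time role orchestration with one frozen model.

ROLES = {
    "summarizer": "Condense the transcript so far into key facts.",
    "reasoner": "Given the summary, propose the next tool call or answer.",
    "corrector": "Check the proposal; reply OK or describe a fix.",
}

def orchestrate(llm, task, max_rounds=3):
    transcript = task
    proposal = ""
    for _ in range(max_rounds):
        summary = llm(ROLES["summarizer"], transcript)   # role 1: compress
        proposal = llm(ROLES["reasoner"], summary)       # role 2: propose
        verdict = llm(ROLES["corrector"], proposal)      # role 3: verify
        if verdict.strip() == "OK":
            return proposal
        # Feed the critique back in and try again with the same frozen model.
        transcript += f"\n{proposal}\nFeedback: {verdict}"
    return proposal
```

No weights change between roles; the scaffold buys its performance purely from specialization of the prompt at inference time.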

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

The Two-Stage Decision-Sampling Hypothesis: Understanding the Emergence of Self-Reflection in RL-Trained LLMs

Researchers introduce the Two-Stage Decision-Sampling Hypothesis to explain how reinforcement learning enables self-reflection capabilities in large language models, demonstrating that RL's superior performance stems from improved decision-making rather than generation quality. The theory shows that reward gradients distribute asymmetrically across policy components, explaining why RL succeeds where supervised fine-tuning fails.

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

Webscale-RL: Automated Data Pipeline for Scaling RL Data to Pretraining Levels

Researchers introduced Webscale-RL, a data pipeline that converts large-scale pre-training documents into 1.2 million diverse question-answer pairs for reinforcement learning training. The approach enables RL models to achieve pre-training-level performance with up to 100x fewer tokens, addressing a critical bottleneck in scaling RL data and potentially advancing more efficient language model development.

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

Dynamic sparsity in tree-structured feed-forward layers at scale

Researchers demonstrate that tree-structured sparse feed-forward layers can replace dense MLPs in large transformer models while maintaining performance, activating less than 5% of parameters per token. The work reveals an emergent auto-pruning mechanism where hard routing progressively converts dynamic sparsity into static structure, offering a scalable approach to reducing computational costs in language models beyond 1 billion parameters.
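The tree routing can be sketched as a walk down a binary tree of linear routers that selects a single leaf expert per token, so only that leaf's parameters are touched. Dimensions and the sign-based routing rule below are toy assumptions.

```python
# Toy sketch of a tree-structured sparse feed-forward layer.

def route(x, routers):
    """Walk a depth-d binary tree; routers[d] is a weight vector whose dot
    product with x picks the left (<= 0) or right (> 0) child."""
    index = 0
    for w in routers:
        dot = sum(a * b for a, b in zip(w, x))
        index = 2 * index + (1 if dot > 0 else 0)
    return index   # leaf id among 2**depth experts

def tree_ffn(x, routers, leaf_mlps):
    leaf = route(x, routers)
    return leaf_mlps[leaf](x)   # only this leaf's parameters are active
```

With depth d, each token activates one of 2**d leaf MLPs, which is how the active-parameter fraction stays small; hard routing like this is also what lets dynamic sparsity freeze into static structure over training.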

AI · Bearish · arXiv – CS AI · 3d ago · 7/10

Re-Mask and Redirect: Exploiting Denoising Irreversibility in Diffusion Language Models

Researchers demonstrate a critical vulnerability in diffusion-based language models where safety mechanisms can be bypassed by re-masking committed refusal tokens and injecting affirmative prefixes, achieving 76-82% attack success rates without gradient optimization. The findings reveal that dLLM safety relies on a fragile architectural assumption rather than robust adversarial defenses.

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

SkillFactory: Self-Distillation For Learning Cognitive Behaviors

SkillFactory is a novel fine-tuning method that enables language models to learn cognitive behaviors like verification and backtracking without requiring distillation from stronger models. The approach uses self-rearranged training samples during supervised fine-tuning to prime models for subsequent reinforcement learning, resulting in better generalization and robustness.

Page 1 of 14