
LLM Probability Concentration: How Alignment Shrinks the Generative Horizon

arXiv – CS AI | Chenghao Yang, Sida Li, Ari Holtzman
🤖 AI Summary

Researchers introduce the Branching Factor (BF), a metric that measures how alignment tuning reduces output diversity in large language models by concentrating their next-token probability distributions. The study finds that aligned models produce outputs that are 2–5x less diverse and grow more predictable as generation proceeds, which explains why alignment reduces sensitivity to decoding strategies and enables more stable Chain-of-Thought reasoning.

Key Takeaways
  • β†’Alignment tuning reduces LLM output diversity by a factor of 2-5 overall and up to 10x at beginning positions through probability concentration.
  • β†’The Branching Factor (BF) metric quantifies the effective number of plausible next tokens, typically decreasing as generation progresses.
  • β†’Aligned Chain-of-Thought models achieve more stable outputs by generating longer reasoning chains that push generation into more deterministic stages.
  • β†’Alignment appears to steer models toward stylistic tokens that unlock low-entropy trajectories already present in base models rather than fundamentally changing behavior.
  • β†’Base models can be nudged toward similar low-diversity behavior by prompting with alignment-style tokens like 'Sure'.