y0news

#scaling-laws News & Analysis

22 articles tagged with #scaling-laws. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 4d ago · 7/10
🧠

Dynamic sparsity in tree-structured feed-forward layers at scale

Researchers demonstrate that tree-structured sparse feed-forward layers can replace dense MLPs in large transformer models while maintaining performance, activating less than 5% of parameters per token. The work reveals an emergent auto-pruning mechanism where hard routing progressively converts dynamic sparsity into static structure, offering a scalable approach to reducing computational costs in language models beyond 1 billion parameters.

AI · Bullish · arXiv – CS AI · Apr 6 · 7/10
🧠

Do Agent Societies Develop Intellectual Elites? The Hidden Power Laws of Collective Cognition in LLM Multi-Agent Systems

Researchers conducted the first large-scale study of coordination dynamics in LLM multi-agent systems, analyzing over 1.5 million interactions to identify three fundamental laws governing collective AI cognition. They found that coordination follows heavy-tailed cascades, concentrates into 'intellectual elites,' and produces more extreme events as systems scale, findings that motivated the authors to develop Deficit-Triggered Integration (DTI) to improve performance.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10
🧠

When Is Collective Intelligence a Lottery? Multi-Agent Scaling Laws for Memetic Drift in LLMs

Researchers introduce the Quantized Simplex Gossip (QSG) model to explain how multi-agent LLM systems reach consensus through 'memetic drift,' in which arbitrary choices compound into collective agreement. The study derives scaling laws for when collective intelligence behaves like a lottery versus when it amplifies weak biases, providing a framework for understanding AI system behavior in consequential decision-making.
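The lottery dynamic described above can be illustrated with a minimal voter-model simulation; this is our own toy construction, not the paper's QSG model, and all parameters are invented.

```python
import numpy as np

# Toy voter-model gossip (illustrative only; not the paper's QSG dynamics):
# agents start with one of two arbitrary conventions, and each round one
# agent copies another chosen at random. The group always reaches consensus,
# but which convention wins is a lottery settled by drift.
rng = np.random.default_rng(0)
n_agents = 50
state = rng.integers(0, 2, size=n_agents)  # arbitrary initial choices

for _ in range(200_000):                   # generous cap; absorption is fast
    if np.unique(state).size == 1:         # consensus reached
        break
    i, j = rng.integers(0, n_agents, size=2)
    state[i] = state[j]                    # agent i adopts agent j's choice
```

Rerunning with different seeds changes which convention wins, which is the "lottery" behavior in miniature.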

AI · Bullish · Apple Machine Learning · Mar 26 · 7/10
🧠

Revisiting the Scaling Properties of Downstream Metrics in Large Language Model Training

Researchers propose a new framework for predicting Large Language Model performance on downstream tasks directly from training budget, finding that simple power laws can accurately model scaling behavior. This challenges the traditional view that downstream task performance prediction is unreliable, offering better extrapolation than previous two-stage methods.
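The kind of direct power-law fit the summary describes can be sketched on synthetic data; the compute budgets, losses, and exponent below are invented for illustration and are not the paper's.

```python
import numpy as np

# Illustrative only: synthetic (compute, loss) pairs following a clean
# power law L(C) = a * C**(-b).
compute = np.array([1e18, 1e19, 1e20, 1e21])
loss = 5.0 * compute ** -0.05

# A power law is linear in log-log space, so ordinary least squares on
# (log C, log L) recovers the exponent and prefactor in one step.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a_hat, b_hat = np.exp(intercept), -slope

# Extrapolate to a budget 10x beyond the largest observed point.
predicted_loss = a_hat * (1e22) ** (-b_hat)
```

A single-stage fit like this is what "predicting downstream metrics directly from training budget" amounts to, versus first predicting loss and then mapping loss to task accuracy.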

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10
🧠

Fragile Thoughts: How Large Language Models Handle Chain-of-Thought Perturbations

Research reveals that Large Language Models show varying vulnerability to different types of Chain-of-Thought reasoning perturbations, with math errors causing 50-60% accuracy loss in small models while unit conversion issues remain challenging even for the largest models. The study tested 13 models ranging from 3B to 1.5T parameters, finding that scale protects against some perturbations but offers limited defense on dimensional reasoning tasks.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠

Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks

Researchers analyzed Mixture-of-Experts (MoE) language models to determine optimal sparsity levels for different tasks. They found that reasoning tasks require balancing active compute (FLOPs) with optimal data-to-parameter ratios, while memorization tasks benefit from more parameters regardless of sparsity.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

Scaling with Collapse: Efficient and Predictable Training of LLM Families

Researchers demonstrate that training loss curves for large language models can collapse onto universal trajectories when hyperparameters are optimally set, enabling more efficient LLM training. They introduce Celerity, a competitive LLM family developed using these insights, and show that deviation from collapse can serve as an early diagnostic for training issues.
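A minimal sketch of what "collapse" means, using an invented loss-curve shape (not the Celerity paper's data): two runs of different lengths coincide once training progress and loss are both normalized.

```python
import numpy as np

# Hypothetical loss curves for two runs of different lengths. The shape
# below is made up for illustration; only the normalization idea matters.
def loss_curve(steps, total_steps, init_loss, final_loss):
    t = steps / total_steps            # normalized training progress in [0, 1]
    return final_loss + (init_loss - final_loss) * (1.0 - t) ** 2

small = loss_curve(np.linspace(0.0, 1_000.0, 50), 1_000.0, 3.5, 2.4)
large = loss_curve(np.linspace(0.0, 8_000.0, 50), 8_000.0, 3.1, 2.0)

def normalize(curve):                  # rescale loss to [0, 1]
    return (curve - curve.min()) / (curve.max() - curve.min())

# After normalization the two curves lie on one universal trajectory;
# a run that drifts off this shared curve would flag a training problem early.
collapsed_small, collapsed_large = normalize(small), normalize(large)
```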

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠

What Scales in Cross-Entropy Scaling Law?

Researchers discovered that the traditional cross-entropy scaling law for large language models breaks down at very large scales because only one component (error-entropy) actually follows power-law scaling, while other components remain constant. This finding explains why model performance improvements become less predictable as models grow larger and establishes a new error-entropy scaling law for better understanding LLM development.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

Scaling Laws Meet Model Architecture: Toward Inference-Efficient LLMs

Researchers developed a new scaling law for large language models that optimizes both accuracy and inference efficiency by examining architectural factors like hidden size, MLP-to-attention ratios, and grouped-query attention. Testing over 200 models from 80M to 3B parameters, they found optimized architectures achieve 2.1% higher accuracy and 42% greater inference throughput compared to LLaMA-3.2.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10
🧠

On the Complexity of Neural Computation in Superposition

Researchers establish theoretical foundations for neural network superposition, proving lower bounds of Ω(√(m′ log m′)) neurons and Ω(m′ log m′) parameters to compute m′ features. The work demonstrates exponential complexity gaps between computing and merely representing features and provides the first subexponential bounds on network capacity.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10
🧠

Compute-Optimal Quantization-Aware Training

Researchers developed a new approach to quantization-aware training (QAT) that optimizes compute allocation between full-precision and quantized training phases. They discovered that contrary to previous findings, the optimal ratio of QAT to full-precision training increases with total compute budget, and derived scaling laws to predict optimal configurations across different model sizes and bit widths.

AI · Bearish · Crypto Briefing · 6d ago · 6/10
🧠

Ranjan Roy: AI marketing hype often overshadows substance, concerns about AI exploiting software vulnerabilities, and the significance of scaling laws in model performance | Big Technology

Ranjan Roy highlights how AI marketing hype often obscures substantive security concerns, particularly regarding AI systems exploiting software vulnerabilities. The analysis emphasizes the importance of scaling laws in model performance and urges critical evaluation of AI breakthroughs beyond promotional claims.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠

What Is the Geometry of the Alignment Tax?

Researchers present a formal geometric theory for quantifying the alignment tax, the tradeoff between AI safety and capability performance. They derive a mathematical framework showing how safety-capability conflicts can be measured as angles between representation subspaces, and provide scaling laws for how these tradeoffs evolve with model size.
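The "angles between representation subspaces" idea can be sketched numerically; the principal-angle computation below is a standard construction supplied for illustration, not the paper's exact formalism, and the subspaces are invented.

```python
import numpy as np

# Toy sketch: the angle between a hypothetical "safety" subspace and a
# "capability" subspace of a representation space. A small angle means the
# two objectives compete for the same directions; orthogonal subspaces
# would mean no tax.
def principal_angles(A, B):
    # Columns of A and B span the two subspaces.
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))

theta = 0.3  # radians between two one-dimensional subspaces, chosen arbitrarily
safety = np.array([[1.0], [0.0]])
capability = np.array([[np.cos(theta)], [np.sin(theta)]])
angles = principal_angles(safety, capability)
```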

AI · Neutral · arXiv – CS AI · Feb 27 · 5/10
🧠

Scaling Laws for Precision in High-Dimensional Linear Regression

Researchers developed theoretical scaling laws for low-precision AI model training, analyzing how quantization affects model performance in high-dimensional linear regression. The study reveals that multiplicative and additive quantization schemes have distinct effects on effective model size, with multiplicative maintaining full precision while additive reduces it.

AI · Neutral · Google Research Blog · Jan 27 · 6/10
🧠

ATLAS: Practical scaling laws for multilingual models

ATLAS presents new scaling laws for multilingual generative AI models, providing practical frameworks for understanding how model performance scales across different languages and model sizes. This research offers valuable insights for optimizing multilingual AI system development and deployment strategies.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠

Scaling Laws of SignSGD in Linear Regression: When Does It Outperform SGD?

Researchers analyzed scaling laws for signSGD optimization in machine learning, comparing it to standard SGD under a power-law random features model. The study identifies unique effects in signSGD that can lead to steeper compute-optimal scaling laws than SGD in noise-dominant regimes.
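The two update rules under comparison can be sketched on a toy linear-regression problem; the setup below is illustrative and is not the paper's power-law random-features model.

```python
import numpy as np

# Toy noiseless linear regression to contrast the two optimizers.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

def grad(w):
    return X.T @ (X @ w - y) / n        # gradient of mean squared error / 2

w_sgd, w_sign = np.zeros(d), np.zeros(d)
for _ in range(3000):
    w_sgd = w_sgd - 0.1 * grad(w_sgd)                # SGD: step scales with |g|
    w_sign = w_sign - 0.01 * np.sign(grad(w_sign))   # signSGD: direction only
```

SGD converges to the exact minimizer here, while signSGD oscillates in a small neighborhood whose size is set by the step size; the paper's question is how these behaviors trade off under compute-optimal scaling.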

AI · Neutral · OpenAI News · Oct 19 · 1/10
🧠

Scaling laws for reward model overoptimization

The article appears to discuss scaling laws related to reward model overoptimization in AI systems. However, the article body is empty, making it impossible to provide meaningful analysis of the content or implications.

AI · Neutral · OpenAI News · Jan 23 · 1/10
🧠

Scaling laws for neural language models

The article title references scaling laws for neural language models, which are fundamental principles governing how AI model performance improves with increased computational resources, data, and model size. However, no article body content was provided for analysis.