AINeutral · arXiv CS AI · 7h ago · 7/10
The Institutional Scaling Law: Non-Monotonic Fitness, Capability-Trust Divergence, and Symbiogenetic Scaling in Generative AI
Researchers propose the Institutional Scaling Law, challenging the assumption that AI performance improves monotonically with model size. The framework argues that institutional fitness (a composite of capability, trust, affordability, and sovereignty) peaks at an optimal scale, beyond which capability and trust diverge, suggesting that orchestrated domain-specific models may outperform large generalist models.
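The core claim, that fitness is non-monotonic in scale even when raw capability keeps rising, can be illustrated with a toy model. The functional forms below (logarithmic capability, hyperbolic trust decay, fitness as their product) are illustrative assumptions for this sketch, not the paper's actual formulation:

```python
import numpy as np

# Toy model only: the summary does not specify the paper's functional forms.
# Assumed here: capability grows with log scale, trust decays past a
# threshold scale s_star, and institutional fitness is their product.

def capability(s):
    return np.log1p(s)                 # monotonically increasing in scale s

def trust(s, s_star=100.0):
    return 1.0 / (1.0 + s / s_star)    # monotonically decreasing in scale s

def fitness(s):
    return capability(s) * trust(s)    # their product is non-monotonic

scales = np.logspace(0, 5, 500)        # scales from 1 to 100,000
f = fitness(scales)
optimal_scale = scales[np.argmax(f)]   # interior peak: neither smallest nor largest
```

Under these assumptions the fitness curve rises, peaks at an intermediate scale, and then falls as eroding trust outweighs further capability gains, which is the capability-trust divergence the summary describes.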