
#cross-entropy News & Analysis

2 articles tagged with #cross-entropy. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠

The Phenomenology of Hallucinations

Researchers discovered that AI language models hallucinate not from a failure to detect uncertainty, but from an inability to integrate uncertainty signals into output generation. The study shows that models can identify uncertain inputs internally, but these signals become geometrically amplified yet functionally silent because of weak coupling with the output layers.
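As a toy illustration of one common uncertainty signal (not the paper's method), the entropy of a model's output distribution rises when the model spreads probability across many tokens. The function and numbers below are hypothetical:

```python
import numpy as np

def predictive_entropy(logits):
    # Softmax over the logits, then Shannon entropy of the resulting
    # distribution -- a standard proxy for a model's output uncertainty.
    z = logits - np.max(logits)          # shift for numerical stability
    p = np.exp(z) / np.sum(np.exp(z))
    return -np.sum(p * np.log(p + 1e-12))

# A peaked (confident) distribution vs. a flat (uncertain) one.
confident = predictive_entropy(np.array([10.0, 0.0, 0.0]))
uncertain = predictive_entropy(np.array([1.0, 1.0, 1.0]))
print(confident < uncertain)  # True: the flat distribution has higher entropy
```

The hallucination finding is that a signal like this can exist inside the model while still failing to influence what the output layer actually generates.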

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠

What Scales in Cross-Entropy Scaling Law?

Researchers discovered that the traditional cross-entropy scaling law for large language models breaks down at very large scales because only one component, error-entropy, actually follows power-law scaling, while the other components remain constant. This finding explains why performance improvements become less predictable as models grow, and it establishes a new error-entropy scaling law for better understanding LLM development.
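The decomposition above can be sketched numerically: if loss follows L(N) = E + A·N^(-α), with a constant floor E plus a power-law error term, the exponent is recoverable by a log-log fit after subtracting the floor. All constants here are made up for illustration; this is not the paper's fitting procedure:

```python
import numpy as np

def fit_power_law(ns, losses, floor):
    # Subtract the assumed constant floor E, then fit a straight line in
    # log-log space: log(L - E) = log(A) - alpha * log(N).
    excess = np.asarray(losses) - floor
    slope, intercept = np.polyfit(np.log(ns), np.log(excess), 1)
    return -slope, np.exp(intercept)  # (alpha, A)

# Synthetic loss curve with E=1.7, A=5.0, alpha=0.3 (hypothetical values).
ns = np.array([1e7, 1e8, 1e9, 1e10])
losses = 1.7 + 5.0 * ns ** -0.3
alpha, A = fit_power_law(ns, losses, floor=1.7)
print(round(alpha, 3), round(A, 3))  # recovers alpha = 0.3, A = 5.0
```

The point of the finding is that only the error term scales this way; the constant components dominate at very large N, which is why extrapolating the total cross-entropy alone becomes unreliable.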