y0news

#model-capabilities News & Analysis

2 articles tagged with #model-capabilities. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Explainable LLM Unlearning Through Reasoning

Researchers introduce Targeted Reasoning Unlearning (TRU), a new method for removing specific knowledge from large language models while preserving general capabilities. The approach uses reasoning-based targets to guide the unlearning process, addressing issues with previous gradient ascent methods that caused unintended capability degradation.
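The failure mode the summary mentions, gradient ascent "forgetting" that drags the whole model off course, versus unlearning toward an explicit target, can be illustrated on a toy one-parameter logistic model. This is a hedged sketch under strong simplifying assumptions, not the paper's TRU algorithm; the neutral-target loss here is an illustrative stand-in for its reasoning-based targets:

```python
# Toy contrast (NOT the paper's TRU method): gradient-ascent unlearning vs.
# unlearning toward a neutral target, on a one-parameter logistic model.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_log_loss(w, x, y):
    # d/dw of the log loss -[y*log(p) + (1-y)*log(1-p)] with p = sigmoid(w*x)
    return (sigmoid(w * x) - y) * x

def ascent_unlearn(w, forget, lr=1.0, steps=50):
    # Gradient ASCENT: maximize loss on the forget set. The loss is
    # unbounded, so w drifts without limit and the model degrades.
    for _ in range(steps):
        for x, y in forget:
            w += lr * grad_log_loss(w, x, y)
    return w

def targeted_unlearn(w, forget, target=0.5, lr=1.0, steps=50):
    # Illustrative targeted variant: descend toward a neutral output
    # probability on forget examples, which bounds the drift.
    for _ in range(steps):
        for x, _y in forget:
            p = sigmoid(w * x)
            w -= lr * (p - target) * x
    return w

forget = [(1.0, 1.0)]   # "fact" to remove: input 1.0 should no longer map to 1
w0 = 3.0                # initially confident: sigmoid(3.0) is about 0.95

w_ascent = ascent_unlearn(w0, forget)
w_target = targeted_unlearn(w0, forget)
```

After ascent the prediction overshoots to the opposite extreme (the model now confidently asserts the negation), while the targeted variant settles near an uninformative 0.5, which is the intuition behind guiding unlearning with explicit targets.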

AI · Neutral · arXiv – CS AI · Apr 14 · 6/10

Human-like Working Memory Interference in Large Language Models

Researchers discovered that large language models exhibit working memory limitations similar to humans, encoding multiple memory items in entangled representations that require interference control rather than direct retrieval. This finding reveals a shared computational constraint between biological and artificial systems, suggesting that working memory capacity may be a fundamental bottleneck in intelligent systems rather than a limitation unique to biological brains.
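The "entangled representations" the summary describes echo classic superposition memories, where several items share one trace and leak into each other at retrieval. A minimal sketch under that assumption (Hadamard-bound random sign vectors, not the paper's actual setup) shows retrieval quality fall as more items are stored:

```python
# Toy superposition memory: several key/value pairs summed into one trace.
# A hedged illustration of retrieval interference, not the paper's experiment.
import random

def rand_vec(d, rng):
    # Random vector of +/-1 components.
    return [rng.choice((-1.0, 1.0)) for _ in range(d)]

def bind(a, b):
    # Elementwise (Hadamard) binding; self-inverse for +/-1 vectors.
    return [x * y for x, y in zip(a, b)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def recall_quality(n_items, d=256, seed=0):
    # Store n_items bound key/value pairs in a single summed trace, then
    # unbind the first key. The other items leak in as crosstalk noise,
    # so recall degrades as the trace holds more items.
    rng = random.Random(seed)
    keys = [rand_vec(d, rng) for _ in range(n_items)]
    vals = [rand_vec(d, rng) for _ in range(n_items)]
    trace = [0.0] * d
    for k, v in zip(keys, vals):
        for i in range(d):
            trace[i] += k[i] * v[i]
    est = bind(keys[0], trace)
    return cosine(est, vals[0])

print(recall_quality(1))   # single item: clean retrieval
print(recall_quality(8))   # eight items: degraded by interference
```

With one stored item retrieval is exact; with eight, crosstalk from the other pairs drags the match down, a simple version of the capacity bottleneck the finding points to.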