13 articles tagged with #llm-interpretability. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv · CS AI · 2d ago · 7/10
🧠 Researchers introduce IDEA, a framework that converts Large Language Model decision-making into interpretable, editable parametric models with calibrated probabilities. The approach outperforms major LLMs like GPT-5.2 and DeepSeek R1 on benchmarks while enabling direct expert knowledge integration and precise human-AI collaboration.
AI · Neutral · arXiv · CS AI · 3d ago · 7/10
🧠 Researchers demonstrate that Mixture-of-Experts (MoE) specialization in large language models emerges from hidden-state geometry rather than from specialized routing architecture, challenging assumptions about how these systems work. Expert routing patterns resist human interpretation across models and tasks, suggesting that understanding MoE specialization remains as difficult as the broader unsolved problem of interpreting LLM internal representations.
AI · Neutral · arXiv · CS AI · 3d ago · 7/10
🧠 Researchers demonstrate that interpreting large language model reasoning requires analyzing distributions of possible reasoning chains rather than single examples. By resampling text after specific points, they show that stated reasons often don't causally drive model decisions, off-policy interventions are unstable, and hidden contextual hints exert cumulative influence even when explicitly removed.
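A minimal sketch of this resampling idea, with a random stub (`sample_answer`) standing in for a real model call; the function names, prompts, and bias values are illustrative assumptions, not details from the paper:

```python
import random
from collections import Counter

def sample_answer(prompt, rng):
    # Stub in place of an LLM call: biased toward "B" when a hint is present.
    p_b = 0.8 if "hint" in prompt else 0.5
    return "B" if rng.random() < p_b else "A"

def answer_distribution(prompt, n=1000, seed=0):
    # Resample many completions from the same truncation point and tally
    # final answers, rather than trusting any single reasoning chain.
    rng = random.Random(seed)
    return Counter(sample_answer(prompt, rng) for _ in range(n))

with_hint = answer_distribution("question ... hint ... reasoning so far:")
without_hint = answer_distribution("question ... reasoning so far:")
# The shift in P(answer = "B") estimates the hint's causal influence on
# the decision, independent of what the stated reasoning claims.
effect = with_hint["B"] / 1000 - without_hint["B"] / 1000
```

Comparing the two answer distributions, rather than two individual chains, is what makes the causal claim statistically meaningful.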
AI · Bullish · arXiv · CS AI · 4d ago · 7/10
🧠 Researchers introduce NeuronLens, a framework that interprets neural networks by analyzing activation ranges rather than individual neurons, addressing the widespread polysemanticity problem in large language models. The range-based approach enables more precise concept manipulation while minimizing unintended degradation of model performance.
AI · Bullish · arXiv · CS AI · 4d ago · 7/10
🧠 Researchers propose a cost-effective proxy-model framework that uses smaller, efficient models to approximate the interpretability explanations of expensive Large Language Models (LLMs), achieving over 90% fidelity at just 11% of the computational cost. The framework includes verification mechanisms and demonstrates practical applications in prompt compression and data cleaning, making interpretability tools viable for real-world LLM development.
AI · Bullish · arXiv · CS AI · Apr 10 · 7/10
🧠 Researchers have developed a scalable system for interpreting and controlling large language models distributed across multiple GPUs, achieving up to 7x memory reduction and 41x throughput improvements. The method enables real-time behavioral steering of frontier LLMs like LLaMA and Qwen without fine-tuning, with results released as open-source tooling.
AI · Neutral · arXiv · CS AI · 2d ago · 6/10
🧠 Researchers demonstrate that large language models develop attractor-like geometric patterns in their activation space when processing identity documents describing persistent agents. Experiments on Llama 3.1 and Gemma 2 show paraphrased identity descriptions cluster significantly tighter than structural controls, suggesting LLMs encode semantic agent identity as stable attractors independent of linguistic variation.
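The clustering comparison can be sketched with a simple tightness metric; the toy vectors below stand in for real hidden states, and the perturbation scales are illustrative assumptions:

```python
import numpy as np

def mean_pairwise_cosine_dist(X):
    # Average cosine distance over all pairs of rows: lower = tighter cluster.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = X @ X.T
    iu = np.triu_indices(len(X), k=1)
    return float(np.mean(1.0 - sims[iu]))

rng = np.random.default_rng(0)
center = rng.normal(size=64)
# "Paraphrases": small perturbations around one shared identity direction,
# a toy stand-in for activations of reworded identity documents.
paraphrases = center + 0.1 * rng.normal(size=(20, 64))
# "Structural controls": unrelated directions with similar surface statistics.
controls = rng.normal(size=(20, 64))

tight = mean_pairwise_cosine_dist(paraphrases)
loose = mean_pairwise_cosine_dist(controls)
```

An attractor-style result corresponds to `tight` being much smaller than `loose`: paraphrase variation collapses onto one direction while controls spread out.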
AI · Neutral · arXiv · CS AI · 2d ago · 6/10
🧠 Researchers introduce Safe-SAIL, a framework that uses sparse autoencoders to interpret safety features in large language models across four domains (pornography, politics, violence, terror). The work reduces interpretation costs by 55% and identifies 1,758 safety-related features with human-readable explanations, advancing mechanistic understanding of AI safety.
AI · Neutral · arXiv · CS AI · 2d ago · 6/10
🧠 Researchers propose a novel framework treating Large Language Models as attention-informed Neural Topic Models, enabling interpretable topic extraction from documents. The approach combines white-box interpretability analysis with black-box long-context LLM capabilities, demonstrating competitive performance on topic modeling tasks while maintaining semantic clarity.
AI · Neutral · arXiv · CS AI · 2d ago · 6/10
🧠 Researchers propose a method for large language models to handle ambiguous user requests by generating structured responses that enumerate multiple valid interpretations with corresponding answers, trained via reinforcement learning with dual reward objectives for coverage and precision.
AI · Neutral · arXiv · CS AI · 3d ago · 6/10
🧠 Researchers investigate how large language models represent emotions in their latent spaces, discovering that LLMs develop coherent emotional representations aligned with established psychological models of valence and arousal. The findings support the linear representation hypothesis used in AI transparency methods and demonstrate practical applications for uncertainty quantification in emotion processing tasks.
AI · Bullish · arXiv · CS AI · Mar 12 · 6/10
🧠 Researchers developed Causal Concept Graphs (CCG), a new method for understanding how concepts interact during multi-step reasoning in language models by creating directed graphs of causal dependencies between interpretable features. Testing on GPT-2 Medium across reasoning tasks showed CCG significantly outperformed existing methods with a Causal Fidelity Score of 5.654, demonstrating more effective intervention targeting than random approaches.
AI · Neutral · arXiv · CS AI · Mar 3 · 7/10
🧠 New research reveals that large language models often determine their final answers before generating chain-of-thought reasoning, challenging the assumption that CoT reflects the model's actual decision process. Linear probes can predict model answers with 0.9 AUC before CoT generation, and steering these activations can flip answers in over 50% of cases.
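A linear probe of the kind described can be sketched end to end on synthetic data; the toy "pre-CoT activations" and training hyperparameters below are assumptions standing in for real residual-stream activations:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32
# Toy pre-CoT activations: the answer is encoded along one linear direction,
# mimicking the linearly decodable signal the probes exploit.
w_true = rng.normal(size=d)
X = rng.normal(size=(500, d))
y = (X @ w_true + 0.5 * rng.normal(size=500) > 0).astype(float)

# Train the probe with plain logistic-regression gradient descent.
w = np.zeros(d)
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

def auc(scores, labels):
    # AUC = probability a random positive scores above a random negative.
    pos, neg = scores[labels == 1], scores[labels == 0]
    return float(np.mean(pos[:, None] > neg[None, :]))

probe_auc = auc(X @ w, y)  # high AUC: the answer is readable before any CoT
```

A probe this simple scoring well before any reasoning tokens are emitted is exactly what makes the "answer first, rationalize after" interpretation hard to dismiss.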