y0news

#mechanistic-interpretability News & Analysis

23 articles tagged with #mechanistic-interpretability. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · 2d ago · 7/10

Why Do Large Language Models Generate Harmful Content?

Researchers used causal mediation analysis to identify why large language models generate harmful content, discovering that harmful outputs originate in later model layers, primarily through MLP blocks rather than attention mechanisms. Early layers develop a contextual understanding of harmfulness that propagates through the network to sparse neurons in the final layers, which act as gating mechanisms for harmful generation.
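
To make the technique concrete, here is a toy numpy sketch of causal mediation via activation patching: copy one component's activation from a "harmful" run into a "benign" run and measure how far the output moves. The layer structure, shapes, and weights below are invented for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "transformer layer": an attention-like linear mix plus an MLP block,
# each added to the residual stream. All shapes and weights are invented.
W_attn = rng.normal(size=(8, 8)) * 0.1
W_mlp = rng.normal(size=(8, 8)) * 0.1

def layer(x, attn_patch=None, mlp_patch=None):
    attn_out = x @ W_attn if attn_patch is None else attn_patch
    h = x + attn_out
    mlp_out = np.maximum(h @ W_mlp, 0.0) if mlp_patch is None else mlp_patch
    return h + mlp_out, attn_out, mlp_out

x_harmful = rng.normal(size=8)   # stand-in for a harmful prompt's activations
x_benign = rng.normal(size=8)    # stand-in for a benign prompt's activations

out_h, attn_h, mlp_h = layer(x_harmful)
out_b, _, _ = layer(x_benign)

# Mediation effect of each component: patch the harmful-run activation into
# the benign run and measure how much the output shifts.
out_mlp, _, _ = layer(x_benign, mlp_patch=mlp_h)
out_attn, _, _ = layer(x_benign, attn_patch=attn_h)

effect_mlp = float(np.linalg.norm(out_mlp - out_b))
effect_attn = float(np.linalg.norm(out_attn - out_b))
print(f"MLP patch effect:       {effect_mlp:.3f}")
print(f"attention patch effect: {effect_attn:.3f}")
```

Comparing the two effect sizes per component and per layer is how such studies localize where harmful outputs are mediated.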

AI · Neutral · arXiv – CS AI · 2d ago · 7/10

Pando: Do Interpretability Methods Work When Models Won't Explain Themselves?

Researchers introduce Pando, a benchmark that evaluates mechanistic interpretability methods by controlling for the 'elicitation confounder': the possibility that black-box prompting alone might explain model behavior without requiring white-box tools. Testing 720 models, they find gradient-based attribution and relevance patching improve accuracy by 3-5% when explanations are absent or misleading, but perform poorly when models provide faithful explanations, suggesting interpretability tools may offer limited value for alignment auditing.

AI · Neutral · arXiv – CS AI · 2d ago · 7/10

Do LLMs Know Tool Irrelevance? Demystifying Structural Alignment Bias in Tool Invocations

Researchers identify structural alignment bias, a mechanistic flaw where large language models invoke tools even when they are irrelevant to the user's query, simply because query attributes match tool parameters. The study introduces the SABEval dataset and a rebalancing strategy that mitigates this bias without degrading general tool-use capabilities.

AI · Neutral · arXiv – CS AI · 3d ago · 7/10

Large Language Models Generate Harmful Content Using a Distinct, Unified Mechanism

Researchers using weight pruning techniques discovered that large language models generate harmful content through a compact, unified set of internal weights that are distinct from benign capabilities. The findings reveal that aligned models compress harmful representations more than unaligned ones, explaining why safety guardrails remain brittle despite alignment training and why fine-tuning on narrow domains can trigger broad misalignment.

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

The Two-Stage Decision-Sampling Hypothesis: Understanding the Emergence of Self-Reflection in RL-Trained LLMs

Researchers introduce the Two-Stage Decision-Sampling Hypothesis to explain how reinforcement learning enables self-reflection capabilities in large language models, demonstrating that RL's superior performance stems from improved decision-making rather than generation quality. The theory shows that reward gradients distribute asymmetrically across policy components, explaining why RL succeeds where supervised fine-tuning fails.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

SALLIE: Safeguarding Against Latent Language & Image Exploits

Researchers introduce SALLIE, a lightweight runtime defense framework that detects and mitigates jailbreak attacks and prompt injections in large language and vision-language models simultaneously. Using mechanistic interpretability and internal model activations, SALLIE achieves robust protection across multiple architectures without degrading performance or requiring architectural changes.
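
The summary does not give SALLIE's internals, but the general idea of detecting jailbreaks from internal activations can be sketched with a simple linear probe. Everything below (the synthetic "activations", the shift direction, dimensions, and training loop) is invented for illustration; it is not SALLIE's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16

# Synthetic hidden states: jailbreak prompts shift activations along one
# fixed "harmfulness" direction (a stand-in for a real internal feature).
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

def activations(n, jailbreak):
    return rng.normal(size=(n, d)) + (3.0 * direction if jailbreak else 0.0)

X = np.vstack([activations(200, False), activations(200, True)])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Logistic-regression probe on the activations, trained by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * float(grad.mean())

acc = float((((X @ w + b) > 0) == (y == 1)).mean())
print(f"probe accuracy on training prompts: {acc:.2f}")
```

A runtime defense of this shape is cheap because it reads activations the model already computes, which is why it needs no architectural changes.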

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Closing the Confidence-Faithfulness Gap in Large Language Models

Researchers have identified a fundamental issue in large language models where verbalized confidence scores don't align with actual accuracy due to orthogonal encoding of these signals. They discovered a 'Reasoning Contamination Effect' where simultaneous reasoning disrupts confidence calibration, and developed a two-stage adaptive steering pipeline to improve alignment.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Directional Routing in Transformers

Researchers introduce directional routing, a lightweight mechanism for transformer models that adds only 3.9% parameter cost but significantly improves performance. The technique gives attention heads learned suppression directions controlled by a shared router, reducing perplexity by 31-56% and becoming the dominant computational pathway in the model.
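
One plausible reading of the mechanism can be sketched in numpy: each head carries a learned suppression direction, and a shared router gates how strongly the head's output is projected out along that direction. The sigmoid gate, shapes, and names below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_heads = 16, 4

# Per-head suppression directions (unit vectors) and one shared router
# mapping the residual stream to a gate per head. All invented shapes.
U = rng.normal(size=(n_heads, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)
W_router = rng.normal(size=(d, n_heads)) * 0.1

def directional_route(head_outputs, residual):
    gates = 1.0 / (1.0 + np.exp(-(residual @ W_router)))  # one gate per head
    out = np.empty_like(head_outputs)
    for h in range(n_heads):
        along = (head_outputs[h] @ U[h]) * U[h]        # component along direction
        out[h] = head_outputs[h] - gates[h] * along    # gated suppression
    return out

residual = rng.normal(size=d)
heads = rng.normal(size=(n_heads, d))
routed = directional_route(heads, residual)

# The routed outputs shrink only along each head's suppression direction.
print("norm along direction, before vs after:",
      round(abs(heads[0] @ U[0]), 3), round(abs(routed[0] @ U[0]), 3))
```

The parameter cost of such a scheme is small because it adds only one direction per head plus one router matrix, consistent with the low overhead the paper reports.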

🏢 Perplexity
AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Disentangling Recall and Reasoning in Transformer Models through Layer-wise Attention and Activation Analysis

Researchers used mechanistic interpretability techniques to demonstrate that transformer language models have distinct but interacting neural circuits for recall (retrieving memorized facts) and reasoning (multi-step inference). Through controlled experiments on Qwen and LLaMA models, they showed that disabling specific circuits can selectively impair one ability while leaving the other intact.
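
The core experimental move, zero-ablating a circuit and checking which ability degrades, can be shown in a toy numpy model. Here the "heads", the orthogonal recall/reasoning signals, and the cosine metric are all invented for illustration; real experiments ablate actual attention heads in Qwen or LLaMA.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: the output is a sum of per-head contributions. Head 0 writes
# the "recall" signal, head 1 the "reasoning" signal (orthogonal by
# construction); head 2 contributes only noise.
recall_target = np.eye(8)[0]
reason_target = np.eye(8)[1]
contrib = {0: recall_target, 1: reason_target, 2: 0.01 * rng.normal(size=8)}

def forward(ablate=()):
    # zero-ablation: drop the listed heads' contributions entirely
    return sum(v for h, v in contrib.items() if h not in ablate)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

full = forward()
no_recall_head = forward(ablate=(0,))

print("recall signal, full vs head-0 ablated:",
      round(cosine(full, recall_target), 2),
      round(cosine(no_recall_head, recall_target), 2))
print("reason signal, full vs head-0 ablated:",
      round(cosine(full, reason_target), 2),
      round(cosine(no_recall_head, reason_target), 2))
```

The selective pattern, one signal collapsing while the other survives ablation, is what licenses the claim that recall and reasoning use distinct circuits.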

AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Dissecting Chronos: Sparse Autoencoders Reveal Causal Feature Hierarchies in Time Series Foundation Models

Researchers applied sparse autoencoders to analyze Chronos-T5-Large, a 710M parameter time series foundation model, revealing how different layers process temporal data. The study found that mid-encoder layers contain the most causally important features for change detection, while early layers handle frequency patterns and final layers compress semantic concepts.
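
A sparse autoencoder of the kind applied here can be sketched in a few lines: a ReLU encoder and linear decoder trained with reconstruction error plus an L1 sparsity penalty. The synthetic activations, sizes, and learning rate below are invented stand-ins; in the actual study, `acts` would be hidden states from Chronos-T5-Large.

```python
import numpy as np

rng = np.random.default_rng(4)
d_model, d_sae, n = 16, 64, 512

# Synthetic "activations" built from sparse ground-truth features.
features = rng.normal(size=(d_sae, d_model))
codes = (rng.random((n, d_sae)) < 0.05) * rng.random((n, d_sae))
acts = codes @ features

# Minimal sparse autoencoder: ReLU encoder, linear decoder.
W_enc = rng.normal(size=(d_model, d_sae)) * 0.1
W_dec = rng.normal(size=(d_sae, d_model)) * 0.1
lr, l1 = 0.02, 1e-3

def mse():
    z = np.maximum(acts @ W_enc, 0.0)
    return float(np.mean((z @ W_dec - acts) ** 2))

loss_before = mse()
for _ in range(300):
    z = np.maximum(acts @ W_enc, 0.0)
    err = z @ W_dec - acts
    grad_z = err @ W_dec.T + l1 * np.sign(z)   # recon + L1 sparsity terms
    grad_z[z <= 0.0] = 0.0                     # ReLU subgradient
    W_enc -= lr * acts.T @ grad_z / n
    W_dec -= lr * z.T @ err / n
loss_after = mse()

print(f"reconstruction MSE: {loss_before:.3f} -> {loss_after:.3f}")
```

Once trained, the individual SAE latents are the "features" whose causal importance per layer such studies measure by ablating them.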

AI · Neutral · arXiv – CS AI · Mar 11 · 7/10

From Data Statistics to Feature Geometry: How Correlations Shape Superposition

Researchers introduce Bag-of-Words Superposition (BOWS) to study how neural networks arrange features in superposition when using realistic correlated data. The study reveals that interference between features can be constructive rather than just noise, leading to semantic clusters and cyclical structures observed in language models.
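
The basic geometry of superposition, and the interference the summary refers to, can be shown exactly with three features packed into two dimensions. This is the standard toy construction, not the BOWS setup itself.

```python
import numpy as np

# Three features packed into two dimensions at 120° apart: the classic
# superposition geometry. Off-diagonals of W @ W.T measure interference.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
W = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (3 features, 2 dims)

gram = W @ W.T
print(np.round(gram, 2))  # 1 on the diagonal, -0.5 interference elsewhere

# Reconstructing feature 0 through the bottleneck leaks -0.5 of it into
# features 1 and 2; with correlated data such leakage can be constructive
# rather than pure noise, which is the paper's point.
x = np.array([1.0, 0.0, 0.0])
print(np.round(gram @ x, 2))
```

BOWS asks how this picture changes when features co-occur with realistic correlations instead of independently.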

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5

Certified Circuits: Stability Guarantees for Mechanistic Circuits

Researchers introduce Certified Circuits, a framework that provides provable stability guarantees for neural network circuit discovery. The method wraps existing algorithms with randomized data subsampling to ensure circuit components remain consistent across dataset variations, achieving 91% higher accuracy while using 45% fewer neurons.
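
The wrapper idea, rerunning discovery on random data subsamples and certifying only the components that are selected consistently, resembles stability selection and can be sketched as follows. The "discovery" procedure, effect sizes, and 90% threshold below are invented for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
n_components, n_data = 20, 200

# Stand-in "circuit discovery": score 20 candidate components on a data
# subsample and keep the top 5. Components 0-4 have a real effect; the
# rest are noise, so single runs can include spurious picks.
true_effect = np.zeros(n_components)
true_effect[:5] = 1.0
point_scores = true_effect + rng.normal(scale=1.0, size=(n_data, n_components))

def discover(idx):
    return set(np.argsort(point_scores[idx].mean(axis=0))[-5:])

# Certification wrapper: rerun discovery on random half-size subsamples
# and keep only components selected in at least 90% of runs.
runs = [discover(rng.choice(n_data, size=100, replace=False)) for _ in range(50)]
counts = np.zeros(n_components)
for r in runs:
    for c in r:
        counts[c] += 1
certified = sorted(int(c) for c in np.flatnonzero(counts / len(runs) >= 0.9))
print("certified components:", certified)
```

Because the wrapper only re-invokes the inner algorithm, it can be layered on top of any existing circuit-discovery method, which is what makes the framework general.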

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 5

Transformers converge to invariant algorithmic cores

Researchers have discovered that transformer models converge to the same compact 'algorithmic cores', low-dimensional subspaces essential for task performance, even though different training runs produce different weights. The study shows these invariant structures persist across scales and training runs, suggesting transformer computations are organized around shared algorithmic patterns rather than implementation-specific details.
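
One standard way to test such a claim is to compare the subspaces spanned by different runs via principal angles. The toy setup below simulates two runs that share a core subspace under different bases; the construction and the overlap statistic are illustrative, not the paper's exact analysis.

```python
import numpy as np

rng = np.random.default_rng(10)
d, k = 32, 4

# Two simulated "training runs": different weights, but both concentrate
# computation in the same k-dimensional core subspace (each run uses a
# randomly rotated basis of that core). A third subspace is unrelated.
core = np.linalg.qr(rng.normal(size=(d, k)))[0]
run_a = core @ np.linalg.qr(rng.normal(size=(k, k)))[0]
run_b = core @ np.linalg.qr(rng.normal(size=(k, k)))[0]
unrelated = np.linalg.qr(rng.normal(size=(d, k)))[0]

def subspace_overlap(A, B):
    # mean squared cosine of the principal angles between column spaces
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    return float(np.mean(s ** 2))

print(f"run A vs run B:     {subspace_overlap(run_a, run_b):.2f}")
print(f"run A vs unrelated: {subspace_overlap(run_a, unrelated):.2f}")
```

High overlap between runs despite different weight matrices is the signature of a shared algorithmic core; unrelated subspaces of the same size overlap only at the chance level of roughly k/d.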

AI · Bullish · arXiv – CS AI · 2d ago · 6/10

CoSToM: Causal-oriented Steering for Intrinsic Theory-of-Mind Alignment in Large Language Models

Researchers introduce CoSToM, a framework that uses causal tracing and activation steering to improve Theory of Mind alignment in large language models. The work addresses a critical gap between LLMs' internal knowledge and external behavior, demonstrating that targeted interventions in specific neural layers can enhance social reasoning capabilities and dialogue quality.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10

A Unified Theory of Sparse Dictionary Learning in Mechanistic Interpretability: Piecewise Biconvexity and Spurious Minima

Researchers develop the first unified theoretical framework for sparse dictionary learning (SDL) methods used in AI interpretability, proving these optimization problems are piecewise biconvex and characterizing why they produce flawed features. The work explains long-standing practical failures in sparse autoencoders and proposes feature anchoring as a solution to improve feature disentanglement in neural networks.

AI · Bullish · arXiv – CS AI · 6d ago · 6/10

Improving Robustness In Sparse Autoencoders via Masked Regularization

Researchers propose a masked regularization technique to improve the robustness and interpretability of Sparse Autoencoders (SAEs) used in large language model analysis. The method addresses feature absorption and out-of-distribution performance failures by randomly replacing tokens during training to disrupt co-occurrence patterns, offering a practical path toward more reliable mechanistic interpretability tools.
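
The data-level mechanism, randomly replacing tokens to break co-occurrence statistics before SAE training, can be demonstrated directly. The toy corpus, vocabulary size, masking rate, and co-occurrence statistic below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
vocab, seq_len, n_seqs, p_mask = 50, 32, 400, 0.25

# Toy corpus in which token 1 always follows token 0, the kind of
# co-occurrence pattern linked to SAE feature absorption.
seqs = np.tile(np.array([0, 1]), (n_seqs, seq_len // 2))

def cooccurrence(s):
    # fraction of adjacent positions where token 0 is followed by token 1
    return float(np.mean((s[:, :-1] == 0) & (s[:, 1:] == 1)))

# Masked regularization: before SAE training, replace a random fraction
# of tokens with random vocabulary tokens to disrupt the co-occurrence.
mask = rng.random(seqs.shape) < p_mask
masked = np.where(mask, rng.integers(0, vocab, size=seqs.shape), seqs)

print(f"co-occurrence before masking: {cooccurrence(seqs):.2f}")
print(f"co-occurrence after masking:  {cooccurrence(masked):.2f}")
```

Weakening these pairwise statistics is what discourages one SAE feature from "absorbing" another that merely co-occurs with it.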

AI · Neutral · arXiv – CS AI · 6d ago · 6/10

ConceptTracer: Interactive Analysis of Concept Saliency and Selectivity in Neural Representations

ConceptTracer is an interactive tool for analyzing neural network representations through human-interpretable concepts, using information-theoretic measures to identify neurons responsive to specific ideas. The tool demonstrates how foundation models like TabPFN encode conceptual information, advancing mechanistic interpretability research.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Automated Attention Pattern Discovery at Scale in Large Language Models

Researchers developed AP-MAE, a vision transformer model that analyzes attention patterns in large language models at scale to improve interpretability. The system can predict code generation accuracy with 55-70% precision and enable targeted interventions that increase model accuracy by 13.6%.

AI · Neutral · arXiv – CS AI · Mar 2 · 6/10 · 15

Understanding In-Context Learning Beyond Transformers: An Investigation of State Space and Hybrid Architectures

Researchers conducted an in-depth analysis of in-context learning capabilities across different AI architectures including transformers, state-space models, and hybrid systems. The study reveals that while these models perform similarly on tasks, their internal mechanisms differ significantly, with function vectors playing key roles in self-attention and Mamba layers.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7

Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility

Researchers have identified 'modal difference vectors' in language models that can distinguish between possible, impossible, and nonsensical statements, revealing better modal categorization abilities than previously thought. The study shows these vectors emerge consistently as models become more capable and can even predict human judgment patterns about event plausibility.
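
Extracting such a direction typically amounts to difference-in-means probing: subtract the mean representation of one class from the other. The sketch below uses synthetic "representations" with an invented modal direction and shift; real studies would use hidden states for possible vs. impossible statements.

```python
import numpy as np

rng = np.random.default_rng(9)
d = 32

# Synthetic "representations": possible and impossible statements differ
# along one hidden modal direction. Dimensions, shift, and data invented.
modal_dir = rng.normal(size=d)
modal_dir /= np.linalg.norm(modal_dir)
possible = rng.normal(size=(100, d)) + 2.0 * modal_dir
impossible = rng.normal(size=(100, d)) - 2.0 * modal_dir

# Modal difference vector: the difference of class means.
v = possible.mean(axis=0) - impossible.mean(axis=0)
v /= np.linalg.norm(v)

# Held-out statements project positively on v when "possible".
test_pos = rng.normal(size=(50, d)) + 2.0 * modal_dir
test_imp = rng.normal(size=(50, d)) - 2.0 * modal_dir
acc = float(np.mean(np.concatenate([test_pos @ v > 0, test_imp @ v < 0])))
print(f"held-out accuracy of the modal vector: {acc:.2f}")
```

That a single linear direction generalizes to held-out items is what supports claims that the model encodes modality, and such projections can then be correlated with human plausibility judgments.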

AI · Bullish · OpenAI News · Nov 13 · 6/10 · 7

Understanding neural networks through sparse circuits

OpenAI is researching mechanistic interpretability through sparse neural network models to better understand AI reasoning processes. This approach aims to make AI systems more transparent and improve their safety and reliability.

AI · Neutral · arXiv – CS AI · Mar 5 · 4/10

Circuit Insights: Towards Interpretability Beyond Activations

Researchers introduce WeightLens and CircuitLens, two new methods for analyzing neural network interpretability that go beyond traditional activation-based approaches. These tools aim to provide more systematic and scalable analysis of neural network circuits by interpreting features directly from weights and capturing feature interactions.