
#interpretability News & Analysis

79 articles tagged with #interpretability. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠

FaCT: Faithful Concept Traces for Explaining Neural Network Decisions

Researchers introduce FaCT, a new approach for explaining neural network decisions through faithful concept-based explanations that don't rely on restrictive assumptions about how models learn. The method includes a new evaluation metric (C²-Score) and demonstrates improved interpretability while maintaining competitive performance on ImageNet.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Shared Emotion Geometry Across Small Language Models: A Cross-Architecture Study of Representation, Behavior, and Methodological Confounds

Researchers demonstrate that five mature small language model architectures (1.5B-8B parameters) share nearly identical emotion vector representations despite exhibiting opposite behavioral profiles, suggesting emotion geometry is a universal feature organized early in model development. The study also deconstructs prior emotion-vector research methodology into four distinct layers of confounding factors, revealing that single correlations between studies cannot safely establish comparability.

🧠 Llama
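
The paper's extraction pipeline isn't reproduced in the summary; as a rough sketch of how an "emotion vector" can be read out of a Hugging Face-style model and compared across architectures, one can take the mean hidden-state difference between emotional and neutral prompts. The layer choice, prompt sets, and the assumption of matching hidden sizes are all illustrative, not the paper's setup.

```python
import torch
from torch.nn.functional import cosine_similarity

@torch.no_grad()
def emotion_vector(model, tokenizer, emotional_prompts, neutral_prompts, layer=-1):
    """Mean hidden-state difference between emotional and neutral prompts."""
    def mean_hidden(prompts):
        reps = []
        for p in prompts:
            ids = tokenizer(p, return_tensors="pt")
            out = model(**ids, output_hidden_states=True)
            reps.append(out.hidden_states[layer][0].mean(dim=0))  # average over tokens
        return torch.stack(reps).mean(dim=0)
    return mean_hidden(emotional_prompts) - mean_hidden(neutral_prompts)

# Cross-model comparison (assumes the two models share a hidden size):
#   sim = cosine_similarity(emotion_vector(m1, t1, emo, neu),
#                           emotion_vector(m2, t2, emo, neu), dim=0)
```
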
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Hubble: An LLM-Driven Agentic Framework for Safe and Automated Alpha Factor Discovery

Researchers introduce Hubble, an LLM-driven framework that automates alpha factor discovery in quantitative finance by using large language models constrained by safety mechanisms to generate and refine predictive trading factors. The system achieved a composite score of 0.827 across 181 evaluated factors on U.S. equities, demonstrating that combining AI-driven generation with deterministic safety constraints enables interpretable and reproducible factor discovery.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Layerwise Dynamics for In-Context Classification in Transformers

Researchers have developed a method to make transformer neural networks interpretable by studying how they perform in-context classification from few examples. By enforcing permutation equivariance constraints, they extracted an explicit algorithmic update rule that reveals how transformers dynamically adjust to new data, offering the first identifiable recursion of this kind.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Explainable Human Activity Recognition: A Unified Review of Concepts and Mechanisms

A comprehensive review examines explainable AI methods for human activity recognition (HAR) systems across wearable, ambient, and physiological sensors. The paper addresses the critical gap between deep learning's performance improvements and the opacity that limits real-world deployment, proposing a unified framework for understanding XAI mechanisms in HAR applications.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Principles Do Not Apply Themselves: A Hermeneutic Perspective on AI Alignment

A new arXiv paper argues that AI alignment cannot rely solely on stated principles because their real-world application requires contextual judgment and interpretation. The research shows that a significant portion of preference-labeling data involves principle conflicts or indifference, meaning principles alone cannot determine decisions, and these interpretive choices often emerge only during model deployment rather than in training data.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Towards Reasonable Concept Bottleneck Models

Researchers introduce CREAM (Concept Reasoning Models), an advanced framework for Concept Bottleneck Models that allows explicit encoding of concept relationships and concept-to-task mappings. The model maintains interpretability while achieving competitive performance even with incomplete concept sets through an optional side-channel, addressing a key limitation in explainable AI systems.
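
CREAM's exact architecture is in the paper; the sketch below only illustrates the general concept-bottleneck-plus-side-channel idea it builds on, with layer sizes, the sigmoid concept head, and the way the side channel is concatenated all chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

class ConceptBottleneckWithSideChannel(nn.Module):
    """Input -> concepts -> task head, with an optional residual path for
    information the (possibly incomplete) concept set does not cover."""
    def __init__(self, in_dim, n_concepts, n_classes, side_dim=8, use_side=True):
        super().__init__()
        self.to_concepts = nn.Linear(in_dim, n_concepts)   # interpretable bottleneck
        self.to_side = nn.Linear(in_dim, side_dim)          # optional side channel
        self.use_side = use_side
        head_in = n_concepts + (side_dim if use_side else 0)
        self.head = nn.Linear(head_in, n_classes)

    def forward(self, x):
        concepts = torch.sigmoid(self.to_concepts(x))       # concept predictions
        features = [concepts] + ([self.to_side(x)] if self.use_side else [])
        return self.head(torch.cat(features, dim=-1)), concepts
```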

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Detecting Invariant Manifolds in ReLU-Based RNNs

Researchers have developed a novel algorithm for detecting invariant manifolds in ReLU-based recurrent neural networks (RNNs), enabling analysis of dynamical system behavior through topological and geometrical properties. The method identifies basin boundaries, multistability, and chaotic dynamics, with applications to scientific computing and explainable AI.
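
The paper's detection algorithm is not spelled out in this summary; the sketch below shows a standard building block for this kind of piecewise-linear analysis: within one ReLU activation pattern the update h -> relu(Wh + b) is affine, so a candidate fixed point can be solved for directly and then checked for consistency and linear stability. The toy dimensions and weights are illustrative only.

```python
import numpy as np

def fixed_point_for_pattern(W, b, pattern):
    """Candidate fixed point of h = relu(W @ h + b) for a given 0/1 ReLU pattern.
    Returns (h, is_consistent, is_stable), or None if the region's system is singular."""
    D = np.diag(pattern.astype(float))
    try:
        h = np.linalg.solve(np.eye(len(b)) - D @ W, D @ b)
    except np.linalg.LinAlgError:
        return None
    pre = W @ h + b                                    # pre-activations at the candidate
    consistent = np.all((pre > 0) == (pattern == 1))   # point really lies in this region
    stable = np.max(np.abs(np.linalg.eigvals(D @ W))) < 1.0
    return h, consistent, stable

# Toy example: enumerate all 2^3 activation regions of a 3-unit ReLU map.
rng = np.random.default_rng(0)
W, b = 0.5 * rng.normal(size=(3, 3)), rng.normal(size=3)
for bits in range(2 ** 3):
    pattern = np.array([(bits >> i) & 1 for i in range(3)])
    result = fixed_point_for_pattern(W, b, pattern)
    if result is not None and result[1]:
        print(pattern, result[0], "stable" if result[2] else "unstable")
```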

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Diffusion-CAM: Faithful Visual Explanations for dMLLMs

Researchers introduce Diffusion-CAM, a novel interpretability method designed specifically for diffusion-based Multimodal Large Language Models (dMLLMs). Unlike existing visualization techniques optimized for sequential models, this approach accounts for the parallel denoising process inherent to diffusion architectures, achieving superior localization accuracy and visual fidelity in model explanations.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

From Attribution to Action: A Human-Centered Application of Activation Steering

Researchers introduce an interactive workflow that combines sparse autoencoders (SAEs) with activation steering to make AI explainability actionable for practitioners. In expert interviews built around CLIP debugging tasks, the study finds that activation steering supports hypothesis testing and intervention-based debugging, though practitioners trust observed model behavior more than explanation plausibility and point to risks such as ripple effects and limited generalization.

$XRP
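
As a rough illustration of what activation steering means mechanically (not the paper's interactive workflow), one can add a scaled SAE decoder direction to a layer's output with a forward hook. The model path, feature index, and scale below are placeholders.

```python
import torch

def add_steering_hook(module, direction, alpha=5.0):
    """Add `alpha * direction` to the module's output on every forward pass.
    Assumes the hooked module returns a single tensor."""
    direction = direction / direction.norm()
    def hook(mod, inputs, output):
        return output + alpha * direction.to(output.dtype)
    return module.register_forward_hook(hook)

# Usage sketch (all names are placeholders, not the paper's setup):
#   feature_dir = sae.decoder.weight[:, feature_idx]   # one SAE feature's decoder direction
#   handle = add_steering_hook(model.visual.transformer.resblocks[8], feature_dir)
#   steered_output = model(images)                      # behavior with the feature boosted
#   handle.remove()
```
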
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Computational Lesions in Multilingual Language Models Separate Shared and Language-specific Brain Alignment

Researchers used computational lesions on multilingual large language models to identify how the brain processes language across different languages. By selectively disabling parameters, they found that a shared computational core handles 60% of multilingual processing, while language-specific components fine-tune predictions for individual languages, providing new insights into how multilingual AI aligns with human neurobiology.
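
A minimal sketch of the "computational lesion" idea, assuming a PyTorch model whose parameters can be zeroed in place; the selection rule (a random fraction of weights matching a name filter) and the downstream alignment metric are placeholders, not the paper's procedure.

```python
import torch

@torch.no_grad()
def lesion_parameters(model, name_filter, fraction=0.10, seed=0):
    """Zero a random fraction of the weights whose names contain `name_filter`.
    Returns the masks and original values so the lesion can be undone."""
    gen = torch.Generator().manual_seed(seed)
    saved = {}
    for name, param in model.named_parameters():
        if name_filter in name:
            mask = (torch.rand(param.shape, generator=gen) < fraction).to(param.device)
            saved[name] = (mask, param[mask].clone())
            param[mask] = 0.0
    return saved

# Sketch (placeholders): lesion 10% of MLP weights, then recompute the metric of interest.
#   saved = lesion_parameters(model, name_filter="mlp", fraction=0.10)
#   score_after = alignment_metric(model, stimuli)   # hypothetical metric function
```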

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Relational Preference Encoding in Looped Transformer Internal States

Researchers demonstrate that looped transformers like Ouro-2.6B encode human preferences relationally rather than independently, with pairwise evaluators achieving 95.2% accuracy compared to 21.75% for independent classification. The study reveals that preference encoding is fundamentally relational, functioning as an internal consistency probe rather than a direct predictor of human annotations.

🏢 Anthropic
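
The contrast the paper draws can be sketched as two probes over a model's internal states: an independent scorer that rates each response on its own versus a pairwise evaluator that conditions on both states at once. The hidden size and probe shapes below are arbitrary, not Ouro-2.6B's.

```python
import torch
import torch.nn as nn

HIDDEN = 2560  # placeholder hidden size

# Independent probe: score each response's internal state on its own.
independent_probe = nn.Linear(HIDDEN, 1)

# Pairwise probe: see both internal states at once and pick the preferred one.
pairwise_probe = nn.Sequential(nn.Linear(2 * HIDDEN, 512), nn.ReLU(), nn.Linear(512, 2))

def independent_choice(h_a, h_b):
    """Returns 1 where response B is preferred, 0 where A is (scored separately)."""
    return (independent_probe(h_b) > independent_probe(h_a)).squeeze(-1).long()

def pairwise_choice(h_a, h_b):
    """Same convention, but the probe conditions on both states jointly."""
    return pairwise_probe(torch.cat([h_a, h_b], dim=-1)).argmax(dim=-1)
```
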
AI · Bearish · arXiv – CS AI · 3d ago · 6/10
🧠

Adversarial Evasion Attacks on Computer Vision using SHAP Values

Researchers demonstrate a white-box adversarial evasion attack on computer vision models that uses SHAP values to identify and exploit critical input features, showing the attack is more robust than the Fast Gradient Sign Method, particularly when gradient information is obscured or hidden.
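
As a sketch of the attack's general shape: rank input features by an attribution score and perturb only the top-k. For brevity the attribution below is gradient-times-input rather than exact SHAP values, so treat it as an illustration of the idea, not the paper's method.

```python
import torch
import torch.nn.functional as F

def attribution_guided_attack(model, x, y, eps=0.03, top_k=500):
    """Perturb only the input features with the largest attribution magnitude."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    attribution = (x.grad * x).abs().flatten(1)      # gradient x input as a SHAP stand-in
    k = min(top_k, attribution.shape[1])
    idx = attribution.topk(k, dim=1).indices
    mask = torch.zeros_like(attribution).scatter_(1, idx, 1.0).view_as(x)
    return (x + eps * x.grad.sign() * mask).detach().clamp(0, 1)
```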

AI · Bullish · arXiv – CS AI · 3d ago · 6/10
🧠

Learning Vision-Language-Action World Models for Autonomous Driving

Researchers present VLA-World, a vision-language-action model that combines predictive world modeling with reflective reasoning for autonomous driving. The system generates future frames guided by action trajectories and then reasons over imagined scenarios to refine predictions, achieving state-of-the-art performance on planning and future-generation benchmarks.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠

Improved Evidence Extraction and Metrics for Document Inconsistency Detection with LLMs

Researchers introduce improved methods for detecting inconsistencies in documents using large language models, including new evaluation metrics and a redact-and-retry framework. The work addresses a research gap in LLM-based document analysis and includes a new semi-synthetic dataset for benchmarking evidence extraction capabilities.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠

SymptomWise: A Deterministic Reasoning Layer for Reliable and Efficient AI Systems

SymptomWise introduces a deterministic reasoning framework that separates language understanding from diagnostic inference in AI-driven medical systems, combining expert-curated knowledge with constrained LLM use to improve reliability and reduce hallucinations. The system achieved 88% accuracy in placing correct diagnoses in top-five differentials on challenging pediatric neurology cases, demonstrating how structured approaches can enhance AI safety in critical domains.
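
The separation the paper describes can be sketched as follows: the LLM only maps free text onto a fixed symptom vocabulary, while a transparent scoring function over a curated knowledge base produces the ranked differential. The tiny knowledge base, scoring rule, and llm_extract call below are illustrative placeholders, not SymptomWise's actual content.

```python
# Curated knowledge base: diagnosis -> expected symptoms (entries are illustrative only).
KNOWLEDGE_BASE = {
    "migraine": {"headache", "photophobia", "nausea"},
    "absence seizure": {"staring spells", "brief unresponsiveness"},
    "tension headache": {"headache", "neck tightness"},
}

def rank_differentials(observed_symptoms, top_n=5):
    """Deterministic scoring: fraction of each diagnosis's expected symptoms observed."""
    observed = set(observed_symptoms)
    scored = [
        (dx, len(observed & expected) / len(expected))
        for dx, expected in KNOWLEDGE_BASE.items()
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_n]

# The LLM's only job is extraction, constrained to the known vocabulary, e.g.:
#   symptoms = llm_extract(note, allowed_terms=ALL_SYMPTOM_TERMS)   # hypothetical call
print(rank_differentials({"headache", "photophobia"}))
```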

AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠

Reasoning Fails Where Step Flow Breaks

Researchers introduce Step-Saliency, a diagnostic tool that reveals how large reasoning models fail during multi-step reasoning tasks by identifying two critical information-flow breakdowns: shallow layers that ignore context and deep layers that lose focus on reasoning. They propose StepFlow, a test-time intervention that repairs these flows and improves model accuracy without retraining.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠

Explaining Neural Networks in Preference Learning: a Post-hoc Inductive Logic Programming Approach

Researchers propose using Inductive Learning of Answer Set Programs (ILASP) to create interpretable approximations of neural networks trained on preference learning tasks. The approach combines dimensionality reduction through Principal Component Analysis with logic-based explanations, addressing the challenge of explaining black-box AI models while maintaining computational efficiency.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠

How Much LLM Does a Self-Revising Agent Actually Need?

Researchers introduce a declarative runtime protocol that externalizes agent state to measure how much of an LLM-based agent's competence actually derives from the language model versus explicit structural components. Testing on Collaborative Battleship, they find that explicit world-model planning drives most performance gains, while sparse LLM-based revision at 4.3% of turns yields minimal and sometimes negative returns.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠

Automated Attention Pattern Discovery at Scale in Large Language Models

Researchers developed AP-MAE, a vision transformer model that analyzes attention patterns in large language models at scale to improve interpretability. The system can predict code generation accuracy with 55-70% precision and enable targeted interventions that increase model accuracy by 13.6%.
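
A minimal sketch of the data such a pipeline starts from: per-head attention maps collected from a Hugging Face causal LM (GPT-2 is used here only as a stand-in). The AP-MAE model itself is not reproduced.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in; the paper targets larger code-generating LLMs
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

ids = tok("def add(a, b):\n    return", return_tensors="pt")
with torch.no_grad():
    out = model(**ids, output_attentions=True)

# out.attentions: one (batch, heads, seq, seq) tensor per layer.
maps = torch.stack(out.attentions)        # (layers, batch, heads, seq, seq)
patches = maps.squeeze(1).flatten(0, 1)   # one seq x seq map per (layer, head)
print(patches.shape)                      # input for a downstream pattern model
```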

AI · Bearish · arXiv – CS AI · Apr 6 · 6/10
🧠

Do Audio-Visual Large Language Models Really See and Hear?

A new research study reveals that Audio-Visual Large Language Models (AVLLMs) exhibit a fundamental bias toward visual information over audio when the modalities conflict. The research shows that while these models encode rich audio semantics in intermediate layers, visual representations dominate during the final text generation phase, indicating limited effectiveness of current multimodal AI training approaches.

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠

Explainable embeddings with Distance Explainer

Researchers introduce Distance Explainer, a new method for explaining how AI models make decisions in embedded vector spaces by identifying which features contribute to similarity between data points. The technique adapts existing explainability methods to work with complex multi-modal embeddings like image-caption pairs, addressing a critical gap in AI interpretability research.
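
The general idea can be sketched with a simple occlusion probe: mask parts of one input and record how the cosine similarity to the other embedding changes. The toy encoder and feature segments below are placeholders, not the Distance Explainer implementation.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def similarity_attribution(encode, x, y_embedding, segments):
    """Drop each segment of `x` in turn and record the change in cosine similarity.
    `encode` maps an input to an embedding; `segments` are slices of features to mask."""
    base = cosine(encode(x), y_embedding)
    scores = []
    for seg in segments:
        x_masked = x.copy()
        x_masked[seg] = 0.0                              # occlude this segment
        scores.append(base - cosine(encode(x_masked), y_embedding))
    return scores                                        # positive = segment supports similarity

# Toy usage: the "encoder" is a fixed random projection, segments are two feature blocks.
rng = np.random.default_rng(0)
P = rng.normal(size=(32, 128))
encode = lambda v: P @ v
x, y = rng.normal(size=128), rng.normal(size=128)
print(similarity_attribution(encode, x, encode(y), [slice(0, 64), slice(64, 128)]))
```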

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

Feature-level Interaction Explanations in Multimodal Transformers

Researchers introduce FL-I2MoE, a new Mixture-of-Experts layer for multimodal Transformers that explicitly identifies synergistic and redundant cross-modal feature interactions. The method provides more interpretable explanations for how different data modalities contribute to AI decision-making compared to existing approaches.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

GradCFA: A Hybrid Gradient-Based Counterfactual and Feature Attribution Explanation Algorithm for Local Interpretation of Neural Networks

Researchers introduce GradCFA, a new hybrid AI explanation framework that combines counterfactual explanations and feature attribution to improve transparency in neural network decisions. The algorithm extends beyond binary classification to multi-class scenarios and demonstrates superior performance in generating feasible, plausible, and diverse explanations compared to existing methods.
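
As a rough sketch of the two ingredients the summary names (not GradCFA itself): a gradient-descent counterfactual that pushes the input toward a target class while staying close to the original, followed by a gradient-times-input attribution of the edit. The losses, penalty weight, and batch-of-one assumption are illustrative.

```python
import torch
import torch.nn.functional as F

def counterfactual_and_attribution(model, x, target_class, steps=100, lr=0.05, lam=0.1):
    """Gradient-based counterfactual plus a simple attribution of the edit.
    Assumes a single example: x has shape (1, ...) and model(x) returns (1, n_classes)."""
    target = torch.tensor([target_class])
    x_cf = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x_cf), target) + lam * (x_cf - x).abs().sum()
        loss.backward()
        opt.step()
    delta = (x_cf - x).detach()                 # the counterfactual edit
    attribution = x_cf.grad.detach() * delta    # which edited features drove the change
    return x_cf.detach(), attribution
```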

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

A Closer Look into LLMs for Table Understanding

Researchers conducted an empirical study on 16 Large Language Models to understand how they process tabular data, revealing a three-phase attention pattern and finding that tabular tasks require deeper neural network layers than math reasoning. The study analyzed attention dynamics, layer depth requirements, expert activation in MoE models, and the impact of different input designs on table understanding performance.