79 articles tagged with #interpretability. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv → CS AI · 1d ago · 6/10
🧠 Researchers introduce FaCT, a new approach for explaining neural network decisions through faithful concept-based explanations that don't rely on restrictive assumptions about how models learn. The method includes a new evaluation metric (C²-Score) and demonstrates improved interpretability while maintaining competitive performance on ImageNet.
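FaCT's method and the C²-Score are defined in the paper; the sketch below only illustrates the underlying faithfulness idea with made-up shapes and random stand-ins: when the readout is linear, decomposing an activation into concept directions yields per-concept logit contributions that sum exactly to the logit.

```python
import torch

# Hypothetical shapes: one layer's activation, a linear classification head,
# and unit-norm "concept" directions (all stand-ins, not FaCT itself).
d, n_classes, n_concepts = 512, 10, 6
activation = torch.randn(d)                       # one sample's features
head = torch.nn.Linear(d, n_classes, bias=False)  # downstream classifier
concepts = torch.nn.functional.normalize(torch.randn(n_concepts, d), dim=1)

# Decompose the activation into concept components plus a residual.
coeffs = concepts @ activation                    # projection coefficient per concept
recon = coeffs @ concepts                         # the part the concepts explain
residual = activation - recon

# With a linear readout, per-concept logit contributions plus the residual's
# contribution sum exactly to the logit: a simple notion of faithfulness.
pred = head(activation).argmax()
w = head.weight[pred]                             # readout direction for predicted class
contrib = coeffs * (concepts @ w)                 # per-concept logit contribution
assert torch.allclose(contrib.sum() + w @ residual, w @ activation, atol=1e-4)
print({f"concept_{i}": round(c.item(), 3) for i, c in enumerate(contrib)})
```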
AI · Neutral · arXiv → CS AI · 2d ago · 6/10
🧠 Researchers demonstrate that five mature small language model architectures (1.5B-8B parameters) share nearly identical emotion vector representations despite exhibiting opposite behavioral profiles, suggesting emotion geometry is a universal feature organized early in model development. The study also deconstructs prior emotion-vector research methodology into four distinct layers of confounding factors, revealing that single correlations between studies cannot safely establish comparability.
🧠 Llama
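The paper's extraction pipeline isn't reproduced in this summary; a minimal sketch of the generic recipe, assuming hypothetical hidden states and a difference-of-means "emotion vector", shows how such directions could be compared across two same-width models via cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def emotion_vector(hidden_emotional, hidden_neutral):
    """Difference-of-means direction, a common way to extract a steering/
    'emotion' vector from hidden states (a generic recipe, not the paper's)."""
    v = hidden_emotional.mean(axis=0) - hidden_neutral.mean(axis=0)
    return v / np.linalg.norm(v)

# Stand-in hidden states for two hypothetical models with the same width.
d = 256
shared_direction = rng.normal(size=d)             # pretend geometry shared across models
vecs = []
for _ in range(2):
    emotional = shared_direction + 0.3 * rng.normal(size=(100, d))  # emotional prompts
    neutral = 0.3 * rng.normal(size=(100, d))                       # neutral prompts
    vecs.append(emotion_vector(emotional, neutral))

cosine = float(vecs[0] @ vecs[1])
print(f"cross-model cosine similarity: {cosine:.3f}")  # near 1 if geometry is shared
```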
AI · Neutral · arXiv → CS AI · 2d ago · 6/10
🧠 Researchers introduce Hubble, an LLM-driven framework that automates alpha factor discovery in quantitative finance by using large language models constrained by safety mechanisms to generate and refine predictive trading factors. The system achieved a composite score of 0.827 across 181 evaluated factors on U.S. equities, demonstrating that combining AI-driven generation with deterministic safety constraints enables interpretable and reproducible factor discovery.
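Hubble's actual grammar and safety mechanisms aren't specified here; the sketch below illustrates the general pattern with assumed names (`validate`, a tiny op whitelist, toy price data): an LLM-proposed factor expression passes a deterministic AST gate before it is ever evaluated.

```python
import ast
import numpy as np

ALLOWED_CALLS = {"rank", "delay", "log"}          # assumed whitelist of factor ops
ALLOWED_NAMES = {"close", "volume"} | ALLOWED_CALLS

def validate(expr: str) -> ast.Expression:
    """Deterministic safety gate: parse the LLM-proposed factor and reject
    anything outside a small arithmetic + whitelisted-function grammar."""
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id not in ALLOWED_NAMES:
            raise ValueError(f"disallowed name: {node.id}")
        if isinstance(node, ast.Call) and not (
            isinstance(node.func, ast.Name) and node.func.id in ALLOWED_CALLS
        ):
            raise ValueError("disallowed call")
        if isinstance(node, (ast.Attribute, ast.Subscript)):
            raise ValueError("disallowed syntax")
    return tree

# Toy market data and factor primitives (stand-ins, not Hubble's operators).
rng = np.random.default_rng(1)
env = {
    "close": rng.lognormal(size=500),
    "volume": rng.lognormal(size=500),
    "rank": lambda x: x.argsort().argsort() / len(x),
    "delay": lambda x, k=1: np.roll(x, k),
    "log": np.log,
}

proposal = "rank(log(close / delay(close))) - rank(volume)"  # as if from the LLM
factor = eval(compile(validate(proposal), "<factor>", "eval"), {"__builtins__": {}}, env)
print(factor[:5])
```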
AI · Neutral · arXiv → CS AI · 2d ago · 6/10
🧠 Researchers have developed a method to make transformer neural networks interpretable by studying how they perform in-context classification from few examples. By enforcing permutation equivariance constraints, they extracted an explicit algorithmic update rule that reveals how transformers dynamically adjust to new data, offering the first identifiable recursion of this kind.
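The paper's identifiable recursion is its own result; as a loose illustration of the flavor, the sketch below treats softmax attention over labeled examples as an explicit, permutation-equivariant update rule (all data and the `beta` temperature are stand-ins).

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_classifier(query, xs, ys, beta=4.0):
    """Explicit 'update rule' view of in-context classification: the prediction
    is a similarity-weighted average of example labels, which is permutation
    equivariant in the examples (a simplified stand-in for the paper's recursion)."""
    weights = softmax(beta * xs @ query)          # attention scores over examples
    return weights @ ys                           # soft label estimate

rng = np.random.default_rng(0)
xs = np.concatenate([rng.normal(+1, 1, (8, 4)), rng.normal(-1, 1, (8, 4))])
ys = np.array([1.0] * 8 + [0.0] * 8)
query = rng.normal(+1, 1, 4)

p = attention_classifier(query, xs, ys)
perm = rng.permutation(16)                        # shuffling examples changes nothing
assert np.isclose(p, attention_classifier(query, xs[perm], ys[perm]))
print(f"P(class 1) ~ {p:.3f}")
```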
AI · Neutral · arXiv → CS AI · 2d ago · 6/10
🧠 A comprehensive review examines explainable AI methods for human activity recognition (HAR) systems across wearable, ambient, and physiological sensors. The paper addresses the critical gap between deep learning's performance improvements and the opacity that limits real-world deployment, proposing a unified framework for understanding XAI mechanisms in HAR applications.
AI · Neutral · arXiv → CS AI · 2d ago · 6/10
🧠 A new arXiv paper argues that AI alignment cannot rely solely on stated principles because their real-world application requires contextual judgment and interpretation. The research shows that a significant portion of preference-labeling data involves principle conflicts or indifference, meaning principles alone cannot determine decisions, and these interpretive choices often emerge only during model deployment rather than in training data.
AI · Neutral · arXiv → CS AI · 2d ago · 6/10
🧠 Researchers introduce CREAM (Concept Reasoning Models), an advanced framework for Concept Bottleneck Models that allows explicit encoding of concept relationships and concept-to-task mappings. The model maintains interpretability while achieving competitive performance even with incomplete concept sets through an optional side-channel, addressing a key limitation in explainable AI systems.
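CREAM's exact architecture isn't reproduced here; the sketch below shows the generic shape of a concept bottleneck with an optional low-capacity side channel, with every layer size and name an assumption.

```python
import torch
import torch.nn as nn

class ConceptBottleneckWithSideChannel(nn.Module):
    """Generic CBM: predict human-interpretable concepts, then predict the task
    from them; an optional low-capacity side channel carries information the
    concept set misses (a sketch of the idea, not CREAM's actual design)."""

    def __init__(self, d_in, n_concepts, n_classes, side_dim=4):
        super().__init__()
        self.to_concepts = nn.Linear(d_in, n_concepts)
        self.side = nn.Linear(d_in, side_dim)          # optional escape hatch
        self.head = nn.Linear(n_concepts + side_dim, n_classes)

    def forward(self, x, use_side=True):
        c = torch.sigmoid(self.to_concepts(x))         # concept predictions in [0, 1]
        s = self.side(x) if use_side else torch.zeros(x.shape[0], self.side.out_features)
        return self.head(torch.cat([c, s], dim=-1)), c

model = ConceptBottleneckWithSideChannel(d_in=32, n_concepts=8, n_classes=3)
x = torch.randn(5, 32)
logits, concepts = model(x)
logits_interpretable, _ = model(x, use_side=False)     # fully concept-mediated path
print(logits.shape, concepts.shape)
```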
AI · Neutral · arXiv → CS AI · 2d ago · 6/10
🧠 Researchers have developed a novel algorithm for detecting invariant manifolds in ReLU-based recurrent neural networks (RNNs), enabling analysis of dynamical system behavior through topological and geometrical properties. The method identifies basin boundaries, multistability, and chaotic dynamics, with applications to scientific computing and explainable AI.
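The paper's detection algorithm is more general; one ingredient is easy to show: inside a fixed ReLU activation pattern the dynamics are affine, so candidate fixed points solve a linear system and their stability follows from the region's Jacobian. A toy sketch with an assumed 3-unit RNN:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 3
W = rng.normal(scale=0.8, size=(n, n))            # toy ReLU RNN: h' = relu(W h + b)
b = rng.normal(size=n)

fixed_points = []
for pattern in product([0, 1], repeat=n):         # enumerate ReLU on/off regions
    D = np.diag(pattern)                          # active-unit mask for this region
    # Inside the region the map is affine: h' = D(W h + b); solve h = D(W h + b).
    A = np.eye(n) - D @ W
    if abs(np.linalg.det(A)) < 1e-9:
        continue
    h = np.linalg.solve(A, D @ b)
    # Keep the solution only if it actually lies in the assumed region.
    if np.all((W @ h + b > 0) == np.array(pattern, dtype=bool)):
        eigs = np.linalg.eigvals(D @ W)           # Jacobian within the region
        stable = np.all(np.abs(eigs) < 1)
        fixed_points.append((h, "stable" if stable else "unstable"))

for h, kind in fixed_points:
    print(kind, np.round(h, 3))
```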
AI · Neutral · arXiv → CS AI · 2d ago · 6/10
🧠 Researchers introduce Diffusion-CAM, a novel interpretability method designed specifically for diffusion-based Multimodal Large Language Models (dMLLMs). Unlike existing visualization techniques optimized for sequential models, this approach accounts for the parallel denoising process inherent to diffusion architectures, achieving superior localization accuracy and visual fidelity in model explanations.
AI · Neutral · arXiv → CS AI · 2d ago · 6/10
🧠 Researchers introduce an interactive workflow combining Sparse Autoencoders (SAE) and activation steering to make AI explainability actionable for practitioners. In expert interviews built around debugging tasks on CLIP, the study finds that activation steering enables hypothesis testing and intervention-based debugging, though practitioners trust observed model behavior more than explanation plausibility and point to risks such as ripple effects and limited generalization.
$XRP
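Neither the study's SAE nor CLIP is used below; a minimal sketch with a toy autoencoder shows the core steering move the workflow builds on: add a feature's decoder direction to an activation and observe the intervention's effect.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Tiny SAE over a model activation (a stand-in, not the study's trained SAE)."""

    def __init__(self, d_model=64, d_dict=256):
        super().__init__()
        self.enc = nn.Linear(d_model, d_dict)
        self.dec = nn.Linear(d_dict, d_model, bias=False)

    def forward(self, h):
        f = torch.relu(self.enc(h))               # sparse, nonnegative feature codes
        return self.dec(f), f

sae = SparseAutoencoder()
h = torch.randn(1, 64)                            # some layer's activation
recon, features = sae(h)

# Activation steering: push the activation along one feature's decoder direction.
feature_id, strength = 17, 3.0                    # hypothetical feature of interest
direction = sae.dec.weight[:, feature_id]         # decoder column = feature direction
h_steered = h + strength * direction              # feed this back into the model

# Intervention-based check: how did the targeted feature's code change?
_, features_after = sae(h_steered)
print(features[0, feature_id].item(), "->", features_after[0, feature_id].item())
```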
AI · Neutral · arXiv → CS AI · 2d ago · 6/10
🧠 Researchers applied computational lesions to multilingual large language models as a window into how the brain might process language across different languages. By selectively disabling parameters, they found that a shared computational core handles 60% of multilingual processing while language-specific components fine-tune predictions for individual languages, providing new insights into how multilingual AI aligns with human neurobiology.
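The paper's lesions target specific multilingual components; the sketch below only shows the generic lesion-and-restore loop on a toy model, with the mask, model, and metric all hypothetical.

```python
import torch
import torch.nn as nn

def lesion_effect(model, mask_fn, eval_fn):
    """Generic computational lesion: zero a chosen parameter subset in place,
    measure the metric change, then restore the original weights."""
    saved = {n: p.detach().clone() for n, p in model.named_parameters()}
    baseline = eval_fn(model)
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.mul_(~mask_fn(name, p))             # zero the lesioned coordinates
    lesioned = eval_fn(model)
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.copy_(saved[name])                  # undo the lesion
    return baseline, lesioned

# Toy stand-ins: a linear "LM" and a loss on a random batch.
model = nn.Linear(16, 16)
x, y = torch.randn(32, 16), torch.randn(32, 16)
eval_fn = lambda m: nn.functional.mse_loss(m(x), y).item()
# Hypothetical lesion: knock out the first half of every weight matrix's rows.
mask_fn = lambda name, p: (
    torch.arange(p.shape[0])[:, None] < p.shape[0] // 2
) if p.dim() == 2 else torch.zeros_like(p, dtype=torch.bool)

base, les = lesion_effect(model, mask_fn, eval_fn)
print(f"loss {base:.3f} -> {les:.3f} after lesion")
```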
AI · Neutral · arXiv → CS AI · 2d ago · 6/10
🧠 Researchers demonstrate that looped transformers like Ouro-2.6B encode human preferences relationally rather than independently, with pairwise evaluators achieving 95.2% accuracy compared to 21.75% for independent classification. The study reveals that preference encoding is fundamentally relational, functioning as an internal consistency probe rather than a direct predictor of human annotations.
🏢 Anthropic
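The probing setup for Ouro-2.6B isn't reproduced here; a toy sketch illustrates why relational evaluation can beat independent classification: a global calibration offset sinks absolute judgments but cancels inside pairwise comparisons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each response has a latent quality; the model's raw score is
# quality plus noise plus a global calibration offset (all stand-ins).
quality = rng.normal(size=200)
scores = quality + 0.3 * rng.normal(size=200) + 1.5   # miscalibrated scores

# Independent classification: judge each response in isolation at a threshold.
independent_acc = np.mean((scores > 0.0) == (quality > 0.0))

# Relational evaluation: only the within-pair comparison matters, so the
# calibration offset cancels, echoing the paper's framing of preference
# encoding as an internal consistency probe rather than an absolute predictor.
pairs = rng.permutation(200).reshape(100, 2)
a, b = pairs[:, 0], pairs[:, 1]
pairwise_acc = np.mean((scores[a] > scores[b]) == (quality[a] > quality[b]))

print(f"independent accuracy: {independent_acc:.2f}")   # hurt by miscalibration
print(f"pairwise accuracy:    {pairwise_acc:.2f}")      # unaffected by the offset
```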
AI · Bearish · arXiv → CS AI · 3d ago · 6/10
🧠 Researchers demonstrate a white-box adversarial attack on computer vision models that uses SHAP values to identify and exploit critical input features. The attack proves more robust than the Fast Gradient Sign Method, remaining effective even when gradient information is obscured or hidden.
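The paper uses SHAP proper; in the hedged sketch below, gradient-times-input stands in for SHAP values, and the point is only the attack pattern: spend the perturbation budget on the most influential features rather than uniformly as FGSM does.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))  # toy classifier
x = torch.randn(1, 20, requires_grad=True)
y = torch.tensor([0])
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

eps = 0.5
with torch.no_grad():
    # FGSM baseline: perturb every feature by the gradient's sign.
    x_fgsm = x + eps * x.grad.sign()

    # Attribution-guided variant: spend a comparable budget only on the
    # most influential features. gradient*input is a cheap stand-in here
    # for SHAP values (the paper computes SHAP proper).
    attribution = (x.grad * x).abs().squeeze(0)
    topk = attribution.topk(5).indices
    x_attr = x.clone()
    x_attr[0, topk] += eps * x.grad[0, topk].sign() * (20 / 5) ** 0.5  # match L2 budget

    for name, adv in [("fgsm", x_fgsm), ("attribution", x_attr)]:
        flipped = model(adv).argmax(1) != y
        print(name, "label flipped:", bool(flipped))
```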
AI · Bullish · arXiv → CS AI · 3d ago · 6/10
🧠 Researchers present VLA-World, a vision-language-action model that combines predictive world modeling with reflective reasoning for autonomous driving. The system generates future frames guided by action trajectories and then reasons over imagined scenarios to refine predictions, achieving state-of-the-art performance on planning and future-generation benchmarks.
AI · Neutral · arXiv → CS AI · 6d ago · 6/10
🧠 Researchers introduce improved methods for detecting inconsistencies in documents using large language models, including new evaluation metrics and a redact-and-retry framework. The work addresses a research gap in LLM-based document analysis and includes a new semi-synthetic dataset for benchmarking evidence extraction capabilities.
AI · Neutral · arXiv → CS AI · 6d ago · 6/10
🧠 SymptomWise introduces a deterministic reasoning framework that separates language understanding from diagnostic inference in AI-driven medical systems, combining expert-curated knowledge with constrained LLM use to improve reliability and reduce hallucinations. The system achieved 88% accuracy in placing correct diagnoses in top-five differentials on challenging pediatric neurology cases, demonstrating how structured approaches can enhance AI safety in critical domains.
AI · Neutral · arXiv → CS AI · 6d ago · 6/10
🧠 Researchers introduce Step-Saliency, a diagnostic tool that reveals how large reasoning models fail during multi-step reasoning tasks by identifying two critical information-flow breakdowns: shallow layers that ignore context and deep layers that lose focus on reasoning. They propose StepFlow, a test-time intervention that repairs these flows and improves model accuracy without retraining.
AI · Neutral · arXiv → CS AI · 6d ago · 6/10
🧠 Researchers propose using Inductive Learning of Answer Set Programs (ILASP) to create interpretable approximations of neural networks trained on preference learning tasks. The approach combines dimensionality reduction through Principal Component Analysis with logic-based explanations, addressing the challenge of explaining black-box AI models while maintaining computational efficiency.
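ILASP learns answer-set programs, which the sketch below does not attempt; a depth-2 decision tree stands in as the interpretable surrogate after PCA, with the black-box preference model simulated by a hidden linear score.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Stand-in for a black-box preference model: the first item of a pair is
# preferred when a hidden linear score exceeds the second item's score.
X = rng.normal(size=(400, 10))
w_hidden = rng.normal(size=10)
pairs = X.reshape(200, 2, 10)
y = (pairs[:, 0] @ w_hidden > pairs[:, 1] @ w_hidden).astype(int)

# Step 1: dimensionality reduction on the pair-difference features.
diff = pairs[:, 0] - pairs[:, 1]
pca = PCA(n_components=3).fit(diff)
Z = pca.transform(diff)

# Step 2: fit a small interpretable surrogate on the reduced space.
# (The paper learns answer-set programs with ILASP; a shallow decision
# tree stands in for "logic-based explanation" in this sketch.)
tree = DecisionTreeClassifier(max_depth=2).fit(Z, y)
print(f"surrogate fidelity: {tree.score(Z, y):.2f}")
print(export_text(tree, feature_names=[f"pc{i}" for i in range(3)]))
```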
AI · Neutral · arXiv → CS AI · 6d ago · 6/10
🧠 Researchers introduce a declarative runtime protocol that externalizes agent state to measure how much of an LLM-based agent's competence actually derives from the language model versus explicit structural components. Testing on Collaborative Battleship, they find that explicit world-model planning drives most performance gains, while sparse LLM-based revision at 4.3% of turns yields minimal and sometimes negative returns.
AI · Bullish · arXiv → CS AI · Apr 7 · 6/10
🧠 Researchers developed AP-MAE, a vision transformer model that analyzes attention patterns in large language models at scale to improve interpretability. The system can predict code generation accuracy with 55-70% precision and enable targeted interventions that increase model accuracy by 13.6%.
AI · Bearish · arXiv → CS AI · Apr 6 · 6/10
🧠 A new research study reveals that Audio-Visual Large Language Models (AVLLMs) exhibit a fundamental bias toward visual information over audio when the modalities conflict. The research shows that while these models encode rich audio semantics in intermediate layers, visual representations dominate during the final text generation phase, indicating limited effectiveness of current multimodal AI training approaches.
AI · Bullish · arXiv → CS AI · Mar 26 · 6/10
🧠 Researchers introduce Distance Explainer, a new method for explaining how AI models make decisions in embedded vector spaces by identifying which features contribute to similarity between data points. The technique adapts existing explainability methods to work with complex multi-modal embeddings like image-caption pairs, addressing a critical gap in AI interpretability research.
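Distance Explainer's actual adaptation isn't shown here; the sketch below captures the perturbation idea with a stand-in embedding model: occlude one input feature at a time and record how the pairwise embedding distance shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Stand-in embedding model (a random nonlinear map, not a real encoder)."""
    v = np.tanh(W @ x)
    return v / np.linalg.norm(v)

d_in, d_emb = 12, 8
W = rng.normal(size=(d_emb, d_in))
x_a, x_b = rng.normal(size=d_in), rng.normal(size=d_in)
base = np.linalg.norm(embed(x_a, W) - embed(x_b, W))

# Perturbation-based distance explanation: mask each input feature of item A
# and record how the pairwise embedding distance changes. Large |shift| marks
# features that drive (or suppress) the similarity between the two items.
saliency = np.zeros(d_in)
for i in range(d_in):
    x_masked = x_a.copy()
    x_masked[i] = 0.0                             # simple occlusion baseline
    saliency[i] = np.linalg.norm(embed(x_masked, W) - embed(x_b, W)) - base

order = np.argsort(-np.abs(saliency))
print("most influential features for the A-B distance:", order[:3])
```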
AI · Neutral · arXiv → CS AI · Mar 17 · 6/10
🧠 Researchers introduce FL-I2MoE, a new Mixture-of-Experts layer for multimodal Transformers that explicitly identifies synergistic and redundant cross-modal feature interactions. The method provides more interpretable explanations for how different data modalities contribute to AI decision-making compared to existing approaches.
AI · Bullish · arXiv → CS AI · Mar 17 · 6/10
🧠 Researchers introduce GradCFA, a new hybrid AI explanation framework that combines counterfactual explanations and feature attribution to improve transparency in neural network decisions. The algorithm extends beyond binary classification to multi-class scenarios and demonstrates superior performance in generating feasible, plausible, and diverse explanations compared to existing methods.
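GradCFA's hybrid objective is the paper's contribution; the sketch below shows only the generic gradient-driven counterfactual recipe it extends, with a toy multi-class net and an L1 proximity term as assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 3))  # toy 3-class net
x0 = torch.randn(1, 8)
target = torch.tensor([2])                        # desired counterfactual class

x = x0.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    # Push toward the target class while staying close to the original input
    # (the L1 proximity term encourages sparse, feasible edits; a generic
    # recipe, not GradCFA's actual objective).
    loss = nn.functional.cross_entropy(model(x), target) \
        + 0.1 * (x - x0).abs().sum()
    loss.backward()
    opt.step()

delta = (x - x0).detach().squeeze(0)
print("counterfactual class:", model(x).argmax(1).item())
print("feature changes needed (attribution-like):", delta.round(decimals=2))
```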
AI · Neutral · arXiv → CS AI · Mar 17 · 6/10
🧠 Researchers conducted an empirical study of 16 Large Language Models to understand how they process tabular data, revealing a three-phase attention pattern and finding that tabular tasks engage deeper network layers than math reasoning does. The study analyzed attention dynamics, layer-depth requirements, expert activation in MoE models, and the impact of different input designs on table-understanding performance.