y0news

#interpretable-ai News & Analysis

23 articles tagged with #interpretable-ai. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

🧠 AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Minimal Embodiment Enables Efficient Learning of Number Concepts in Robot

Researchers demonstrate that robots equipped with minimal embodied sensorimotor capabilities learn numerical concepts significantly faster than vision-only systems, achieving 96.8% counting accuracy with 10% of training data. The embodied neural network spontaneously develops biologically plausible number representations matching human cognitive development, suggesting embodiment acts as a structural learning prior rather than merely an information source.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

In-Context Symbolic Regression for Robustness-Improved Kolmogorov-Arnold Networks

Researchers developed new methods for extracting symbolic formulas from Kolmogorov-Arnold Networks (KANs), addressing a key bottleneck in making AI models more interpretable. The proposed Greedy in-context Symbolic Regression (GSR) and Gated Matching Pursuit (GMP) methods achieved up to 99.8% reduction in test error while improving robustness.

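A minimal sketch of the general greedy-selection idea behind such methods, not the paper's GSR/GMP code: candidate symbolic terms are added one at a time, each step picking the term with the largest matching-pursuit-style correlation with the current residual. The term library, the target function (standing in for a KAN edge function), and the number of selected terms are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
target = 1.5 * np.sin(x) + 0.5 * x**2        # stand-in for a learned KAN edge function

# Candidate symbolic basis functions (illustrative library).
library = {"x": x, "x^2": x**2, "sin(x)": np.sin(x), "cos(x)": np.cos(x), "1": np.ones_like(x)}

chosen, residual = [], target.copy()
for _ in range(3):                            # greedily pick three terms
    best = max((k for k in library if k not in chosen),
               key=lambda k: abs(np.dot(library[k], residual)) / np.linalg.norm(library[k]))
    chosen.append(best)
    A = np.column_stack([library[k] for k in chosen])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)   # refit all chosen terms
    residual = target - A @ coef

print("selected terms:", chosen)
print("coefficients:", np.round(coef, 3))
```
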
🧠 AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

Logos: An evolvable reasoning engine for rational molecular design

Researchers introduce Logos, a compact AI model that combines multi-step logical reasoning with chemical consistency for molecular design. The model achieves strong performance in structural accuracy and chemical validity while using fewer parameters than larger language models, and provides transparent reasoning that can be inspected by humans.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

Detecting Structural Heart Disease from Electrocardiograms via a Generalized Additive Model of Interpretable Foundation-Model Predictors

Researchers developed an interpretable AI framework for detecting structural heart disease from electrocardiograms, achieving better performance than existing deep-learning methods while providing clinical transparency. The model demonstrated improvements of nearly 1% across key metrics using the EchoNext benchmark of over 80,000 ECG-ECHO pairs.

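A rough sketch of how a generalized additive model can sit on top of interpretable per-concept predictors, not the paper's pipeline: each (hypothetical) foundation-model concept score gets its own spline shape function, and a linear logistic layer combines them, so each concept's contribution stays individually inspectable. The concept names, data, and labels below are synthetic.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-concept scores produced upstream by a foundation model,
# e.g. P(LV hypertrophy), P(atrial enlargement), P(low QRS voltage).
concept_scores = rng.uniform(0, 1, size=(n, 3))
# Synthetic label standing in for "structural heart disease present".
logits = 3 * concept_scores[:, 0] + 2 * concept_scores[:, 1] ** 2 - 2
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

# Spline-expand each concept score, then fit a linear logistic model: the
# result is additive in the concepts, so each concept's shape function can
# be plotted and inspected on its own.
gam = make_pipeline(
    SplineTransformer(n_knots=5, degree=3),
    LogisticRegression(max_iter=1000),
)
gam.fit(concept_scores, y)
print("train accuracy:", gam.score(concept_scores, y))
```
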
🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

CoBELa: Steering Transparent Generation via Concept Bottlenecks on Energy Landscapes

Researchers introduce CoBELa, a new AI framework for interpretable image generation that uses concept bottlenecks on energy landscapes to enable transparent, controllable synthesis without requiring decoder retraining. The system achieves strong performance on benchmark datasets while allowing users to compositionally manipulate concepts through energy function combinations.

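A toy illustration of composing concept energies, not CoBELa itself: each concept contributes an energy term over a latent code, and minimizing the summed energy steers the latent toward the chosen concept combination. The quadratic energies, latent size, and concept names are invented for illustration.

```python
import torch

def energy_smiling(z):   # low energy when the first latent dims are near +1
    return ((z[:, :4] - 1.0) ** 2).sum(dim=1)

def energy_glasses(z):   # low energy when the next dims are near -1
    return ((z[:, 4:8] + 1.0) ** 2).sum(dim=1)

z = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.SGD([z], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    e = (energy_smiling(z) + energy_glasses(z)).sum()   # compositional: just add terms
    e.backward()
    opt.step()

# The optimized latent is pulled toward both concept targets at once.
print(z.detach()[:, :8].numpy().round(1))
```
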
🧠 AI · Bullish · arXiv – CS AI · 3d ago · 6/10

Sample-Efficient Neurosymbolic Deep Reinforcement Learning

Researchers propose a neuro-symbolic deep reinforcement learning approach that integrates logical rules and symbolic knowledge to improve sample efficiency and generalization in RL systems. The method transfers partial policies from simple tasks to complex ones, reducing training data requirements and improving performance in sparse-reward environments compared to existing baselines.

🧠 AI · Bullish · arXiv – CS AI · Apr 6 · 6/10

Hierarchical, Interpretable, Label-Free Concept Bottleneck Model

Researchers have developed HIL-CBM, a new hierarchical interpretable AI model that enhances explainability by mimicking human cognitive processes across multiple semantic levels. The model outperforms existing Concept Bottleneck Models in classification accuracy while providing more interpretable explanations without requiring manual concept annotations.

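A minimal concept-bottleneck sketch, not HIL-CBM: inputs are first mapped to a small vector of concept scores, and the label is predicted only from those scores, so every prediction can be explained in concept terms. The hierarchy and label-free concept discovery the paper adds are omitted here, and all dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    def __init__(self, in_dim, n_concepts, n_classes):
        super().__init__()
        self.to_concepts = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                         nn.Linear(64, n_concepts))
        self.to_label = nn.Linear(n_concepts, n_classes)   # label sees only the concepts

    def forward(self, x):
        concepts = torch.sigmoid(self.to_concepts(x))      # interpretable bottleneck
        return self.to_label(concepts), concepts

model = ConceptBottleneck(in_dim=32, n_concepts=8, n_classes=3)
x = torch.randn(4, 32)
logits, concepts = model(x)
print(logits.shape, concepts.shape)   # torch.Size([4, 3]) torch.Size([4, 8])
```
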
🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

From Untamed Black Box to Interpretable Pedagogical Orchestration: The Ensemble of Specialized LLMs Architecture for Adaptive Tutoring

Researchers introduced ES-LLMs, a new AI tutoring architecture that separates decision-making from language generation to create more reliable and interpretable educational AI systems. The system outperformed traditional monolithic LLMs in human evaluations (91.7% preference) while reducing costs by 54% and achieving 100% adherence to pedagogical constraints.

🧠 AI · Bullish · arXiv – CS AI · Mar 26 · 6/10

Learning To Guide Human Decision Makers With Vision-Language Models

Researchers introduce Learning to Guide (LTG), a new AI framework where machines provide interpretable guidance to human decision-makers rather than making automated decisions. The SLOG approach transforms vision-language models into guidance generators using human feedback, showing promise in medical diagnosis applications.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

From Stochastic Answers to Verifiable Reasoning: Interpretable Decision-Making with LLM-Generated Code

Researchers propose a new framework that uses LLMs as code generators rather than per-instance evaluators for high-stakes decision-making, creating interpretable and reproducible AI systems. The approach generates executable decision logic once instead of querying LLMs for each prediction, demonstrated through venture capital founder screening with competitive performance while maintaining full transparency.

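A sketch of the generate-once, execute-many pattern the summary describes, not the paper's system: an LLM is asked offline to emit an auditable decision function, which is then reviewed and run deterministically on every case. The rule, feature names, and thresholds below are invented for illustration.

```python
# Step 1 (done once, offline): an LLM is prompted to write a decision function.
# Imagine it returned the source below, which is then reviewed, checked into
# version control, and never regenerated per prediction.
GENERATED_RULE = """
def decide(candidate):
    score = 0
    score += 2 if candidate["prior_exits"] >= 1 else 0
    score += 1 if candidate["domain_years"] >= 5 else 0
    score -= 1 if candidate["team_size"] < 2 else 0
    return {"accept": score >= 2, "score": score}
"""

namespace = {}
exec(GENERATED_RULE, namespace)          # in practice: sandboxing plus human review
decide = namespace["decide"]

# Step 2 (repeated, online): every case runs through the same fixed code,
# so decisions are reproducible and the logic can be inspected line by line.
candidates = [
    {"prior_exits": 1, "domain_years": 7, "team_size": 3},
    {"prior_exits": 0, "domain_years": 2, "team_size": 1},
]
for c in candidates:
    print(decide(c))
```
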
🧠 AI · Bullish · arXiv – CS AI · Mar 9 · 6/10

A Cognitive Explainer for Fetal ultrasound images classifier Based on Medical Concepts

Researchers developed an interpretable AI framework for fetal ultrasound image classification that incorporates medical concepts and clinical knowledge. The system uses graph convolutional networks to establish relationships between key medical concepts, providing explanations that align with clinicians' cognitive processes rather than just pixel-level analysis.

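A toy one-layer graph convolution over a small concept graph, standing in for the paper's concept-relationship modelling: features of related concepts are mixed through a normalized adjacency matrix, so downstream explanations can refer to those relationships. The concepts, edges, and features are made up.

```python
import numpy as np

concepts = ["head circumference", "skull shape", "midline echo", "ventricle width"]
edges = [(0, 1), (2, 3), (1, 2)]                 # which concepts are clinically related

n = len(concepts)
A = np.eye(n)                                    # adjacency with self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))             # symmetric normalization

rng = np.random.default_rng(0)
H = rng.standard_normal((n, 8))                  # per-concept feature vectors
W = rng.standard_normal((8, 8)) * 0.1
H_next = np.maximum(A_norm @ H @ W, 0.0)         # one GCN layer with ReLU

print(H_next.shape)                              # (4, 8): updated concept features
```
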
🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

An Interpretable Local Editing Model for Counterfactual Medical Image Generation

Researchers developed InstructX2X, a new AI model for generating counterfactual medical images that provides interpretable explanations and prevents unintended modifications. The model achieves state-of-the-art performance in creating high-quality chest X-ray images with visual guidance maps for medical applications.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

GlassMol: Interpretable Molecular Property Prediction with Concept Bottleneck Models

Researchers introduce GlassMol, a new interpretable AI model for molecular property prediction that addresses the black-box problem in drug discovery. The model uses Concept Bottleneck Models with automated concept curation and LLM-guided selection, achieving performance that matches or exceeds traditional black-box models across thirteen benchmarks.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

QIME: Constructing Interpretable Medical Text Embeddings via Ontology-Grounded Questions

Researchers have developed QIME, a new framework for creating interpretable medical text embeddings that uses ontology-grounded questions to represent biomedical text. Unlike black-box AI models, QIME provides clinically meaningful explanations while achieving performance close to traditional dense embeddings in medical text analysis tasks.

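A sketch of the question-grounded embedding idea, not QIME's implementation: a note is represented by its answers to a fixed list of ontology-derived questions, so every embedding dimension has a clinical reading. The keyword matcher is only a stand-in for a real QA or entailment model, and the questions themselves are illustrative.

```python
import numpy as np

QUESTIONS = [
    ("mentions chest pain?",     ["chest pain", "angina"]),
    ("mentions hypertension?",   ["hypertension", "high blood pressure"]),
    ("mentions diabetes?",       ["diabetes", "hyperglycemia"]),
    ("mentions anticoagulants?", ["warfarin", "apixaban", "heparin"]),
]

def embed(text: str) -> np.ndarray:
    # Each dimension answers one named question about the text.
    text = text.lower()
    return np.array([float(any(k in text for k in keys)) for _, keys in QUESTIONS])

note = "Patient with long-standing hypertension presents with chest pain."
vec = embed(note)
for (question, _), value in zip(QUESTIONS, vec):
    print(f"{question:28s} -> {value}")   # every dimension is human-readable
```
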
🧠 AI · Bullish · arXiv – CS AI · Mar 2 · 7/10

Interpretable Debiasing of Vision-Language Models for Social Fairness

Researchers have developed DeBiasLens, a new framework that uses sparse autoencoders to identify and deactivate social bias neurons in Vision-Language models without degrading their performance. The model-agnostic approach addresses concerns about unintended social bias in VLMs by making the debiasing process interpretable and targeting internal model dynamics rather than surface-level fixes.

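A sketch of the intervention pattern described above, not DeBiasLens: hidden activations pass through a sparse autoencoder, a few latent units previously flagged as bias-related are zeroed, and the cleaned activation is reconstructed and handed back to the model. Layer sizes and the ablated indices are arbitrary.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model, d_latent):
        super().__init__()
        self.enc = nn.Linear(d_model, d_latent)
        self.dec = nn.Linear(d_latent, d_model)

    def forward(self, h, ablate=()):
        z = torch.relu(self.enc(h))          # sparse, (ideally) interpretable units
        if ablate:
            z = z.clone()
            z[..., list(ablate)] = 0.0       # deactivate flagged "bias" units
        return self.dec(z)

sae = SparseAutoencoder(d_model=768, d_latent=4096)
h = torch.randn(2, 768)                      # hidden activations from a VLM layer
h_debiased = sae(h, ablate=[17, 305])        # indices are purely illustrative
print(h_debiased.shape)                      # torch.Size([2, 768])
```
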
🧠 AI · Bullish · arXiv – CS AI · Mar 2 · 6/10

VISTA: Knowledge-Driven Vessel Trajectory Imputation with Repair Provenance

Researchers introduce VISTA, a framework for vessel trajectory imputation that uses knowledge-driven LLM reasoning to repair incomplete maritime tracking data. The system provides 'repair provenance' (documented reasoning behind each repair) and achieves 5-91% accuracy improvements over existing methods while reducing inference time by 51-93%.

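A toy version of imputation with repair provenance, not VISTA: gaps in a vessel track are filled (here by plain linear interpolation rather than knowledge-driven reasoning), and every inserted point carries a record of how and why it was produced so the repair can be audited later. Timestamps and positions are invented.

```python
from datetime import datetime, timedelta

track = [  # (timestamp, lat, lon); the 12:20 report is missing
    (datetime(2025, 3, 2, 12, 0), 55.10, 12.50),
    (datetime(2025, 3, 2, 12, 10), 55.12, 12.54),
    (datetime(2025, 3, 2, 12, 30), 55.16, 12.62),
]

repaired, provenance = [], []
for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
    repaired.append((t0, la0, lo0))
    gap = (t1 - t0) // timedelta(minutes=10)
    for k in range(1, gap):                       # insert points in oversized gaps
        f = k / gap
        point = (t0 + k * timedelta(minutes=10),
                 la0 + f * (la1 - la0), lo0 + f * (lo1 - lo0))
        repaired.append(point)
        provenance.append({"point": point,
                           "method": "linear interpolation",
                           "reason": f"missing report between {t0:%H:%M} and {t1:%H:%M}"})
repaired.append(track[-1])

print(len(repaired), "points,", len(provenance), "repair(s)")
for p in provenance:
    print(p["point"][0], "-", p["method"], "-", p["reason"])
```
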
🧠 AI · Neutral · arXiv – CS AI · 2d ago · 5/10

Enhanced-FQL(λ), an Efficient and Interpretable RL with novel Fuzzy Eligibility Traces and Segmented Experience Replay

Researchers propose Enhanced-FQL(λ), a fuzzy reinforcement learning framework that combines fuzzified eligibility traces and segmented experience replay to improve interpretability and efficiency in continuous control tasks. The method demonstrates competitive performance with neural network approaches while maintaining computational simplicity through interpretable fuzzy rule bases rather than complex black-box architectures.

🧠 AI · Neutral · arXiv – CS AI · Mar 9 · 5/10

Visual Words Meet BM25: Sparse Auto-Encoder Visual Word Scoring for Image Retrieval

Researchers introduce BM25-V, a new image retrieval method that combines sparse visual-word activations from Vision Transformers with BM25 scoring for efficient and interpretable image search. The approach achieves 99.3%+ recall across seven benchmarks while offering explainable results and serving as an efficient first-stage retriever for dense reranking systems.

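A minimal sketch of BM25 scoring over visual-word counts, not the paper's BM25-V pipeline: each image is a sparse bag of visual words and a query image is ranked with the standard BM25 formula, so every match can be explained by the words the two images share. The vocabulary and counts are toy values; in the paper the words come from sparse activations of a Vision Transformer.

```python
import math
from collections import Counter

images = {                                   # toy sparse visual-word counts
    "img_a": Counter({"w3": 4, "w7": 1}),
    "img_b": Counter({"w3": 1, "w9": 5}),
    "img_c": Counter({"w7": 2, "w9": 1}),
}
query = Counter({"w3": 2, "w7": 1})

N = len(images)
avg_len = sum(sum(c.values()) for c in images.values()) / N
df = Counter(w for c in images.values() for w in c)   # document frequency per word
k1, b = 1.2, 0.75

def bm25(doc: Counter) -> float:
    dl = sum(doc.values())
    score = 0.0
    for w in query:
        if w not in doc:
            continue
        idf = math.log(1 + (N - df[w] + 0.5) / (df[w] + 0.5))
        tf = doc[w]
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avg_len))
    return score

for name, doc in sorted(images.items(), key=lambda kv: -bm25(kv[1])):
    print(name, round(bm25(doc), 3))
```
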
🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 4/10

Differentiable Time-Varying IIR Filtering for Real-Time Speech Denoising

Researchers have developed TVF (Time-Varying Filtering), a lightweight 1 million parameter speech enhancement model that combines digital signal processing with deep learning for real-time speech denoising. The model uses a neural network to predict coefficients for a 35-band IIR filter cascade, offering interpretable processing while adapting dynamically to changing noise conditions.

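A sketch of the time-varying IIR idea, not the TVF model: the signal is processed frame by frame with a one-pole filter whose coefficient is updated every frame, and a hand-written noisiness rule stands in for the paper's coefficient-predicting network and its 35-band cascade. Signal, frame size, and the coefficient rule are all illustrative.

```python
import numpy as np

sr = 16000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 220 * t) + 0.3 * np.random.default_rng(0).standard_normal(sr)

frame = 256
out = np.zeros_like(signal)
y_prev = 0.0
for start in range(0, len(signal), frame):
    chunk = signal[start:start + frame]
    # Stand-in for the coefficient predictor: smooth more when the frame is noisier.
    noisiness = np.clip(np.std(np.diff(chunk)), 0.0, 1.0)
    alpha = 0.2 + 0.7 * noisiness                       # per-frame, interpretable parameter
    for i, x in enumerate(chunk):
        y_prev = (1 - alpha) * x + alpha * y_prev       # one-pole IIR, coefficient varies per frame
        out[start + i] = y_prev

print("input RMS %.3f -> output RMS %.3f" % (np.std(signal), np.std(out)))
```
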
🧠 AI · Neutral · arXiv – CS AI · Mar 2 · 5/10

Hierarchical Concept-based Interpretable Models

Researchers introduce Hierarchical Concept Embedding Models (HiCEMs), a new approach to make deep neural networks more interpretable by modeling relationships between concepts in hierarchical structures. The method includes Concept Splitting to automatically discover fine-grained sub-concepts without additional annotations, reducing the burden of manual labeling while improving model accuracy and interpretability.

🧠 AI · Neutral · arXiv – CS AI · Feb 27 · 4/10

Knob: A Physics-Inspired Gating Interface for Interpretable and Controllable Neural Dynamics

Researchers propose Knob, a new framework that applies control theory principles to neural networks by mapping gating dynamics to mechanical systems. The approach enables real-time human adjustment of AI model behavior through intuitive physical parameters like damping and frequency, offering both static and continuous processing modes.
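
A toy version of the physics-inspired gating idea, not the paper's Knob interface: a gate value follows a damped second-order (spring-damper) response toward a target, so its behaviour can be tuned with two familiar parameters, natural frequency (how fast it reacts) and damping ratio (how much it overshoots). All values are illustrative.

```python
import numpy as np

def gate_response(omega=8.0, zeta=0.7, target=1.0, dt=0.01, steps=200):
    g, v = 0.0, 0.0                       # gate value and its velocity
    out = []
    for _ in range(steps):
        a = omega**2 * (target - g) - 2 * zeta * omega * v   # spring + damper
        v += a * dt
        g += v * dt
        out.append(g)
    return np.array(out)

snappy = gate_response(omega=12.0, zeta=0.5)   # fast, slight overshoot
gentle = gate_response(omega=4.0, zeta=1.0)    # slower, no overshoot
print("snappy peak %.2f, gentle peak %.2f" % (snappy.max(), gentle.max()))
```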