y0news
#interpretability · 4 articles
🧠 AIBullish · arXiv – CS AI · 4h ago · 5

Efficient Discovery of Approximate Causal Abstractions via Neural Mechanism Sparsification

Researchers have developed a new method to extract interpretable causal mechanisms from neural networks using structured pruning as a search technique. The approach reframes network pruning as finding approximate causal abstractions, yielding closed-form criteria for simplifying networks while maintaining their causal structure under interventions.
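A minimal sketch of the pruning-as-causal-abstraction idea, assuming a toy MLP and a simple ablation-based scoring rule (the scoring, keep-count, and all names below are illustrative; the paper derives closed-form criteria rather than this brute-force search):

```python
# Sketch only: structured pruning treated as a search for an approximate
# causal abstraction. Units whose ablation barely changes outputs under
# intervention are candidates for removal.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small MLP standing in for the network to be abstracted.
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x = torch.randn(256, 8)  # observational inputs

def forward_with_mask(net, x, mask):
    """Run the MLP with hidden units zeroed where mask == 0 (structured ablation)."""
    h = torch.relu(net[0](x)) * mask  # intervene on the hidden layer
    return net[2](h)

base = forward_with_mask(net, x, torch.ones(16))

# Score each hidden unit by how much ablating it perturbs the outputs;
# low-scoring units can be pruned while approximately preserving behavior.
scores = []
for i in range(16):
    mask = torch.ones(16)
    mask[i] = 0.0
    out = forward_with_mask(net, x, mask)
    scores.append((out - base).pow(2).mean().item())

# Keep the k most causally relevant units as the abstracted mechanism.
k = 6
keep = torch.tensor(scores).argsort(descending=True)[:k]
mask = torch.zeros(16)
mask[keep] = 1.0
approx = forward_with_mask(net, x, mask)
print("abstraction error:", (approx - base).pow(2).mean().item())
```

In this framing, the pruning mask itself defines the abstraction: the surviving units play the role of the abstract model's mechanisms.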

🧠 AIBullish · arXiv – CS AI · 4h ago · 14

Reallocating Attention Across Layers to Reduce Multimodal Hallucination

Researchers propose a training-free solution to reduce hallucinations in multimodal AI models by rebalancing attention between perception and reasoning layers. The method achieves a 4.2% improvement in reasoning accuracy with minimal computational overhead.
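A minimal sketch of what reallocating attention across layers could look like, assuming post-softmax attention maps, a fixed perception/reasoning layer split, and an illustrative gain `alpha` (all assumptions; the paper's actual reallocation rule may differ):

```python
# Sketch only: training-free rescaling of how much attention flows into
# image tokens, applied in early "perception" layers and left untouched
# in later "reasoning" layers.
import torch

def reallocate(attn, image_slice, alpha):
    """attn: (heads, seq, seq) post-softmax weights. Upweight attention into
    image tokens by alpha, then renormalize rows so each still sums to 1."""
    attn = attn.clone()
    attn[..., image_slice] *= alpha
    return attn / attn.sum(dim=-1, keepdim=True)

num_layers, heads, seq = 24, 8, 32
image_slice = slice(0, 16)           # assume the first 16 tokens are visual
perception_layers = range(0, 12)     # assumed perception/reasoning split

layers = [torch.softmax(torch.randn(heads, seq, seq), dim=-1)
          for _ in range(num_layers)]
rebalanced = [
    reallocate(a, image_slice, alpha=1.5) if i in perception_layers else a
    for i, a in enumerate(layers)
]
# Share of one text token's attention landing on image tokens after rebalancing:
print(rebalanced[0][0, 20, image_slice].sum())
```

Because the weights are only rescaled and renormalized, no retraining or extra forward passes are needed, which is what keeps the overhead minimal.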

🧠 AINeutral · arXiv – CS AI · 4h ago · 0

Into the Rabbit Hull: From Task-Relevant Concepts in DINO to Minkowski Geometry

Researchers analyzed the DINOv2 vision transformer using sparse autoencoders to understand how it processes visual information, discovering that the model uses specialized concept dictionaries for different tasks such as classification and segmentation. They propose the Minkowski Representation Hypothesis as a new framework for how vision transformers combine conceptual archetypes into representations.
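A minimal sketch of the sparse-autoencoder probe used in this line of work, with random features standing in for real DINOv2 patch embeddings and an illustrative dictionary size and sparsity penalty (both assumptions):

```python
# Sketch only: a sparse autoencoder trained on ViT features to recover an
# overcomplete "concept dictionary"; each decoder column is a candidate
# concept direction in feature space.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, d_dict = 384, 2048  # feature dim / overcomplete dictionary size

class SparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(d_model, d_dict)
        self.dec = nn.Linear(d_dict, d_model, bias=False)

    def forward(self, x):
        z = torch.relu(self.enc(x))  # sparse concept activations
        return self.dec(z), z

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
feats = torch.randn(4096, d_model)  # placeholder for DINOv2 patch features

for step in range(200):
    x_hat, z = sae(feats)
    # Reconstruction loss plus an L1 penalty that encourages sparse codes.
    loss = (x_hat - feats).pow(2).mean() + 1e-3 * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("dictionary shape:", sae.dec.weight.shape)  # (d_model, d_dict)
```

Inspecting which dictionary entries fire on features from different tasks is one way to surface the kind of task-specialized concept dictionaries the summary describes.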