y0news

#representation-learning News & Analysis

35 articles tagged with #representation-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Apr 7 · 5/10
🧠

When Models Know More Than They Say: Probing Analogical Reasoning in LLMs

Researchers found an asymmetry between what large language models (LLMs) internally represent and what they report when prompted to detect analogies. Probing reveals that models encode rhetorical analogies better than their prompted responses suggest, yet both methods perform poorly on narrative analogies that require deeper abstraction.
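Probing in this sense means training a lightweight classifier directly on a model's frozen hidden states, bypassing prompting entirely. A minimal sketch with synthetic activations and labels (all numbers here are illustrative stand-ins, not the paper's data or probe):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=(200, 16))        # stand-in for frozen LLM activations
labels = (hidden[:, 0] > 0).astype(float)  # synthetic "is an analogy" label

# Ridge-regression probe with a bias column: the probe reads the label
# directly out of the representation, without prompting the model.
X = np.hstack([hidden, np.ones((200, 1))])
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(17), X.T @ labels)
preds = (X @ w > 0.5).astype(float)
accuracy = float((preds == labels).mean())
print(f"probe accuracy: {accuracy:.2f}")
```

A gap between probe accuracy and prompted accuracy on the same examples is the kind of asymmetry the paper reports.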

AI · Neutral · arXiv – CS AI · Mar 26 · 4/10
🧠

Perturbation: A simple and efficient adversarial tracer for representation learning in language models

Researchers propose a new method called 'perturbation' for understanding how language models learn representations by fine-tuning models on adversarial examples and measuring how changes spread to other examples. The approach reveals that trained language models develop structured linguistic abstractions without geometric assumptions, offering insights into how AI systems generalize language understanding.
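The tracer idea, reduced to its core: nudge a model toward one example, then watch which other examples' predictions move. A toy sketch with a linear model standing in for a language model (the setup is an assumption for illustration, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 1)) * 0.1
X = rng.normal(size=(50, 8))
before = X @ W

# Fine-tune one step on a single "adversarial" example toward label 1,
# then measure how the change spreads to every other example.
probe_x = X[0:1]
grad = probe_x.T @ (probe_x @ W - 1.0)   # squared-error gradient
W_new = W - 0.1 * grad

spread = np.abs(X @ W_new - before).ravel()
print(f"probe moved {spread[0]:.3f}, average move {spread.mean():.3f}")
```

Examples whose representations overlap with the perturbed one move the most, which is what makes the spread pattern a tracer for learned abstractions.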

AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠

Directional Neural Collapse Explains Few-Shot Transfer in Self-Supervised Learning

Researchers propose directional CDNV (decision-axis variance) as a key geometric quantity explaining why self-supervised learning representations transfer well with few labels. The study shows that small variability along class-separating directions enables strong few-shot transfer and low interference across multiple tasks.
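The key quantity can be sketched directly: project each class's within-class variance onto the unit vector between class means, and normalize by the squared mean separation. This is an illustrative form on synthetic features, not the paper's exact definition:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two classes whose means are well separated along one axis
mu_b = np.zeros(8); mu_b[0] = 4.0
a = rng.normal(size=(100, 8))          # class A, mean ~ 0
b = mu_b + rng.normal(size=(100, 8))   # class B, mean ~ mu_b

# Decision axis: unit vector between the class means
d = b.mean(0) - a.mean(0)
d /= np.linalg.norm(d)

# Within-class variance along d, normalized by squared mean separation
sep2 = float(((b.mean(0) - a.mean(0)) @ d) ** 2)
var_along = float((((a - a.mean(0)) @ d).var() + ((b - b.mean(0)) @ d).var()) / 2)
ratio = var_along / sep2
print(f"directional variability: {ratio:.3f}")
```

A small ratio means class clusters barely spread along the direction that separates them, which is the geometry the paper links to strong few-shot transfer.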

AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠

Understanding Sources of Demographic Predictability in Brain MRI via Disentangling Anatomy and Contrast

Researchers developed a framework to analyze how demographic attributes (age, sex, race) can be predicted from brain MRI scans by separating anatomical structure from acquisition-dependent contrast differences. The study found that demographic predictability primarily stems from anatomical variation rather than imaging artifacts, suggesting bias mitigation in medical AI must address both sources.

AI · Neutral · arXiv – CS AI · Mar 4 · 4/10
🧠

ITO: Images and Texts as One via Synergizing Multiple Alignment and Training-Time Fusion

Researchers propose ITO, a new framework for image-text representation learning that addresses modality gaps through multimodal alignment and training-time fusion. The method outperforms existing baselines across classification, retrieval, and multimodal benchmarks while maintaining efficiency by discarding the fusion module during inference.
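The efficiency claim hinges on one design choice: a fusion module participates in training but is dropped at inference, so retrieval costs only two independent encoders. A hypothetical two-tower sketch of that pattern (names and shapes are assumptions, not the ITO code):

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoTowerWithFusion:
    """Training-time fusion sketch: the fusion weights shape the encoders
    through an auxiliary loss, then go unused at inference."""
    def __init__(self, dim=8):
        self.W_img = rng.normal(size=(dim, dim)) * 0.1
        self.W_txt = rng.normal(size=(dim, dim)) * 0.1
        self.W_fuse = rng.normal(size=(2 * dim, dim)) * 0.1  # training only

    def encode(self, img, txt):
        return img @ self.W_img, txt @ self.W_txt

    def training_forward(self, img, txt):
        zi, zt = self.encode(img, txt)
        fused = np.concatenate([zi, zt], axis=-1) @ self.W_fuse
        return zi, zt, fused  # fused features would feed an auxiliary loss

    def inference_similarity(self, img, txt):
        zi, zt = self.encode(img, txt)  # fusion module never touched here
        zi = zi / np.linalg.norm(zi)
        zt = zt / np.linalg.norm(zt)
        return float(zi @ zt)

model = TwoTowerWithFusion()
img, txt = rng.normal(size=8), rng.normal(size=8)
sim = model.inference_similarity(img, txt)
print(f"cosine similarity: {sim:.3f}")
```

Because `inference_similarity` never calls the fusion weights, inference stays as cheap as a plain dual-encoder.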

AI · Neutral · arXiv – CS AI · Mar 4 · 4/10
🧠

Information Routing in Atomistic Foundation Models: How Equivariance Creates Linearly Disentangled Representations

Researchers introduce Composition Projection Decomposition (CPD) to analyze how atomistic foundation models organize information in their representations. The study finds that tensor product equivariant architectures like MACE create linearly disentangled representations where geometric information is easily accessible, while handcrafted descriptors entangle information nonlinearly.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠

Differential privacy representation geometry for medical image analysis

Researchers introduce DP-RGMI, a framework that analyzes how differential privacy affects medical image analysis by decomposing performance degradation into encoder geometry and task-head utilization components. The study across 594,000 chest X-ray images reveals that differential privacy alters representation structure rather than uniformly collapsing features, providing insights for privacy model selection.

AI · Bullish · arXiv – CS AI · Mar 2 · 4/10
🧠

Permutation-Invariant Representation Learning for Robust and Privacy-Preserving Feature Selection

Researchers have developed a new framework for privacy-preserving feature selection that uses permutation-invariant representation learning and federated learning techniques. The approach addresses data imbalance and privacy constraints in distributed scenarios while improving computational efficiency and downstream task performance.
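Permutation invariance means the representation must not depend on the order in which features arrive. The standard construction is a per-element transform followed by symmetric pooling, as in DeepSets; a minimal sketch (an assumption about the mechanism, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def set_encoder(features, W):
    """Per-element transform, then mean pooling: mean is symmetric in its
    inputs, so the output is invariant to feature order."""
    return np.tanh(features @ W).mean(axis=0)

features = rng.normal(size=(10, 4))   # 10 features, each a 4-dim embedding
W = rng.normal(size=(4, 6))

z1 = set_encoder(features, W)
z2 = set_encoder(features[rng.permutation(10)], W)
print(np.allclose(z1, z2))  # prints True
```

Any symmetric pooling (sum, max, attention over the set) gives the same guarantee, which is what lets distributed clients shuffle or subsample features without changing the learned representation.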

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10
🧠

Into the Rabbit Hull: From Task-Relevant Concepts in DINO to Minkowski Geometry

Researchers analyzed the DINOv2 vision transformer using sparse autoencoders to understand how it processes visual information, discovering that the model draws on specialized concept dictionaries for different tasks such as classification and segmentation. They propose the Minkowski Representation Hypothesis as a new framework for understanding how vision transformers combine conceptual archetypes to form representations.
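A sparse autoencoder decomposes a dense activation into a small number of entries from an overcomplete dictionary. The forward pass, sketched with synthetic weights (the real SAEs here are trained on DINOv2 activations with a reconstruction plus L1 sparsity objective; all shapes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_dict = 16, 64                 # overcomplete concept dictionary
W_enc = rng.normal(size=(d_model, d_dict)) * 0.1
W_dec = rng.normal(size=(d_dict, d_model)) * 0.1
b_enc = -0.5 * np.ones(d_dict)           # negative bias encourages sparsity

x = rng.normal(size=d_model)             # one token's activation vector
codes = np.maximum(x @ W_enc + b_enc, 0.0)  # sparse concept activations
recon = codes @ W_dec                    # reconstruction from active concepts
sparsity = float((codes > 0).mean())
print(f"active concepts: {sparsity:.2%}")
```

Each nonzero entry in `codes` is read as one "concept" firing for this token; inspecting which dictionary entries fire per task is how the concept dictionaries in the study are surfaced.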

โ† PrevPage 2 of 2