35 articles tagged with #representation-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · Apr 7 · 5/10
🧠 Researchers found that large language models (LLMs) have an asymmetry between their internal knowledge and their prompted responses when detecting analogies. While probing reveals that models understand rhetorical analogies better than their prompted responses suggest, both methods perform poorly on narrative analogies requiring deeper abstraction.
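The probing-vs-prompting comparison above can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's setup: the hidden states, labels, probe, and the `prompted_acc` placeholder are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend hidden states for 200 analogy examples (dim 64) with binary labels;
# a weak linear signal is injected so the probe has something to find.
labels = rng.integers(0, 2, size=200)
hidden = rng.normal(size=(200, 64))
hidden[:, 0] += 2.0 * labels

def fit_linear_probe(X, y, lr=0.1, steps=500):
    """Logistic-regression probe trained with plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y)) / len(y)
    return w

w = fit_linear_probe(hidden, labels)
probe_acc = np.mean((hidden @ w > 0) == labels)

# Prompted answers would come from parsing generated text; a stand-in here.
prompted_acc = 0.55  # hypothetical placeholder, not a measured number

print(f"probe accuracy: {probe_acc:.2f} vs prompted accuracy: {prompted_acc:.2f}")
```

The asymmetry in the paper is exactly this gap: a probe reading the hidden states outperforms what the model says when asked directly.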
AI · Neutral · arXiv – CS AI · Mar 26 · 4/10
🧠 Researchers propose a new method called 'perturbation' for understanding how language models learn representations by fine-tuning models on adversarial examples and measuring how changes spread to other examples. The approach reveals that trained language models develop structured linguistic abstractions without geometric assumptions, offering insights into how AI systems generalize language understanding.
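The core measurement can be sketched with a tiny linear stand-in for the model: nudge the weights toward one example's flipped target, then see how predictions on all other examples move. Everything here (model, data, step size) is illustrative, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))       # 50 "sentences", 8-dim features
w = rng.normal(size=8) * 0.1       # weights of a tiny linear model

base = X @ w                       # predictions before the perturbation

# Perturb: one gradient step pulling example 0 toward a flipped target,
# a crude stand-in for fine-tuning on an adversarial example.
target = -base[0]
grad = (base[0] - target) * X[0]   # squared-error gradient at example 0
w_pert = w - 0.05 * grad

# How did predictions on *all* examples move?
delta = X @ w_pert - base
align = X @ X[0]                   # similarity of each example to example 0
corr = np.corrcoef(delta, align)[0, 1]
print(f"|corr(prediction change, similarity to example 0)| = {abs(corr):.2f}")
```

For a linear model the spread is perfectly predicted by input similarity; the paper's point is that in real language models the spread pattern reveals linguistic structure without assuming any particular geometry.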
AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠 Researchers introduce ACES, a new method to analyze how automatic speech recognition systems perform differently across accents. The study finds that accent information is concentrated in early neural network layers and is deeply intertwined with speech recognition capabilities, making simple bias removal ineffective.
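The layer-wise analysis style behind a finding like "accent lives in early layers" can be sketched as below. ACES itself is not reimplemented; the synthetic features, the fading signal, and the nearest-class-mean probe are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, n_layers = 300, 16, 6
labels = rng.integers(0, 2, size=n)   # synthetic binary "accent" label

layer_accs = []
for layer in range(n_layers):
    feats = rng.normal(size=(n, d))
    # Make the accent signal strong early and fade in later layers.
    strength = 3.0 * (1.0 - layer / n_layers)
    feats[:, 0] += strength * labels
    # Nearest-class-mean probe: classify by the closer class centroid.
    mu0, mu1 = feats[labels == 0].mean(0), feats[labels == 1].mean(0)
    pred = (np.linalg.norm(feats - mu1, axis=1)
            < np.linalg.norm(feats - mu0, axis=1))
    layer_accs.append(np.mean(pred == labels))

print([round(a, 2) for a in layer_accs])
```

Decodability that decays with depth, as in this toy run, is the signature the summary describes: the attribute is concentrated in early layers.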
AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠 Researchers propose directional CDNV (decision-axis variance) as a key geometric quantity explaining why self-supervised learning representations transfer well with few labels. The study shows that small variability along class-separating directions enables strong few-shot transfer and low interference across multiple tasks.
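The quantity described above can be sketched as the variance of features projected onto the axis between two class means, normalized by the squared distance between the means. The paper's exact definition may differ; `directional_cdnv` and the synthetic data are illustrative.

```python
import numpy as np

def directional_cdnv(f_a, f_b):
    """Variance along the class-separating axis, distance-normalized."""
    mu_a, mu_b = f_a.mean(0), f_b.mean(0)
    axis = mu_b - mu_a
    axis /= np.linalg.norm(axis)               # unit class-separating direction
    var_a = np.var(f_a @ axis)                 # spread of class A along the axis
    var_b = np.var(f_b @ axis)
    return (var_a + var_b) / (2 * np.sum((mu_a - mu_b) ** 2))

rng = np.random.default_rng(3)
# Two classes: tight clusters, well separated along one dimension.
a = rng.normal(size=(500, 32)) * 0.1
b = rng.normal(size=(500, 32)) * 0.1
b[:, 0] += 5.0
score = directional_cdnv(a, b)
print(f"directional CDNV: {score:.5f}")
```

A small value, as here, is the regime the summary describes: little variability along the separating direction, which is what permits accurate few-shot classifiers.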
AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠 Researchers developed a framework to analyze how demographic attributes (age, sex, race) can be predicted from brain MRI scans by separating anatomical structure from acquisition-dependent contrast differences. The study found that demographic predictability primarily stems from anatomical variation rather than imaging artifacts, suggesting bias mitigation in medical AI must address both sources.
AI · Neutral · arXiv – CS AI · Mar 4 · 4/10 · 3
🧠 Researchers propose ITO, a new framework for image-text representation learning that addresses modality gaps through multimodal alignment and training-time fusion. The method outperforms existing baselines across classification, retrieval, and multimodal benchmarks while maintaining efficiency by discarding the fusion module during inference.
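The "fusion at training, unimodal at inference" pattern can be sketched as below. The encoders are linear stand-ins and every name (`W_img`, `W_txt`, `W_fuse`, `encode`) is hypothetical; ITO's actual architecture and losses are not shown.

```python
import numpy as np

rng = np.random.default_rng(4)
W_img = rng.normal(size=(128, 64)) * 0.1   # image encoder (linear stand-in)
W_txt = rng.normal(size=(96, 64)) * 0.1    # text encoder (linear stand-in)
W_fuse = rng.normal(size=(128, 64)) * 0.1  # fusion head: training-time only

def encode(img, txt, training):
    z_img = img @ W_img
    z_txt = txt @ W_txt
    if training:
        # Fusion path: concatenate and project. It shapes the encoders
        # during training but is thrown away afterwards.
        fused = np.concatenate([z_img, z_txt], axis=-1) @ W_fuse
        return z_img, z_txt, fused
    # Inference: fusion module discarded; only aligned unimodal embeddings.
    return z_img, z_txt

img, txt = rng.normal(size=(8, 128)), rng.normal(size=(8, 96))
train_out = encode(img, txt, training=True)
infer_out = encode(img, txt, training=False)
print(len(train_out), len(infer_out))
```

This is why the method stays efficient at deployment: the inference path touches only the two encoders, so the fusion parameters add no serving cost.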
AI · Neutral · arXiv – CS AI · Mar 4 · 4/10 · 3
🧠 Researchers introduce Composition Projection Decomposition (CPD) to analyze how atomistic foundation models organize information in their representations. The study finds that tensor product equivariant architectures like MACE create linearly disentangled representations where geometric information is easily accessible, while handcrafted descriptors entangle information nonlinearly.
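The "linearly accessible vs nonlinearly entangled" contrast can be sketched with a linear-readout test: fit ordinary least squares from each representation to a geometric property and compare fit quality. The features here are synthetic caricatures, not CPD or MACE outputs.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 400, 24
geometry = rng.normal(size=n)          # a scalar geometric property

# "Equivariant-style" features: the property appears as a linear component.
A = rng.normal(size=(n, d))
A[:, 0] = geometry

# "Handcrafted-style" features: the property is mixed in nonlinearly.
B = rng.normal(size=(n, d))
B[:, 0] = np.tanh(geometry) * B[:, 1]

def linear_r2(feats, target):
    """R^2 of an ordinary least-squares readout from feats to target."""
    coef, *_ = np.linalg.lstsq(feats, target, rcond=None)
    resid = target - feats @ coef
    return 1.0 - resid.var() / target.var()

r2_linear = linear_r2(A, geometry)
r2_entangled = linear_r2(B, geometry)
print(f"linear features R^2: {r2_linear:.2f}, entangled R^2: {r2_entangled:.2f}")
```

A high score for the first representation and a near-chance score for the second is the disentanglement gap the summary describes.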
AI · Neutral · arXiv – CS AI · Mar 3 · 4/10 · 4
🧠 Researchers introduce DP-RGMI, a framework that analyzes how differential privacy affects medical image analysis by decomposing performance degradation into encoder geometry and task-head utilization components. The study across 594,000 chest X-ray images reveals that differential privacy alters representation structure rather than uniformly collapsing features, providing insights for privacy model selection.
AI · Bullish · arXiv – CS AI · Mar 2 · 4/10 · 6
🧠 Researchers have developed a new framework for privacy-preserving feature selection that uses permutation-invariant representation learning and federated learning techniques. The approach addresses data imbalance and privacy constraints in distributed scenarios while improving computational efficiency and downstream task performance.
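Permutation invariance is typically obtained DeepSets-style: encode each element, then pool with a symmetric operation so ordering cannot matter. This sketch shows only that ingredient; the paper's federated protocol and selection objective are not shown, and `phi`/`set_embed` are illustrative names.

```python
import numpy as np

rng = np.random.default_rng(6)
phi = rng.normal(size=(8, 16))   # per-element encoder (linear stand-in)

def set_embed(features):
    """Embed a set of feature vectors; invariant to their order."""
    return np.maximum(features @ phi, 0).sum(axis=0)   # ReLU, then sum-pool

feats = rng.normal(size=(5, 8))  # 5 candidate features, 8 summary stats each
perm = rng.permutation(5)
z1 = set_embed(feats)
z2 = set_embed(feats[perm])
print(np.allclose(z1, z2))       # True: permuting the set leaves z unchanged
```

Sum pooling is what makes the embedding independent of how clients happen to order their feature columns, which is the property the summary highlights for distributed settings.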
AI · Neutral · arXiv – CS AI · Mar 2 · 4/10 · 7
🧠 Researchers analyzed the DINOv2 vision transformer using Sparse Autoencoders to understand how it processes visual information, discovering that the model uses specialized concept dictionaries for different tasks like classification and segmentation. They propose the Minkowski Representation Hypothesis as a new framework for understanding how vision transformers combine conceptual archetypes to form representations.
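A minimal sparse-autoencoder sketch of this analysis style: an overcomplete dictionary with a ReLU bottleneck, trained to reconstruct activations under an L1 sparsity penalty. Sizes, learning rate, and the synthetic "activations" are assumptions; the paper's actual SAE configuration on DINOv2 features is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)
d_model, d_dict, n = 32, 128, 512
acts = rng.normal(size=(n, d_model))        # stand-in for ViT activations

W_enc = rng.normal(size=(d_model, d_dict)) * 0.1
W_dec = W_enc.T.copy()                      # tied-at-init decoder dictionary
l1 = 1e-3                                   # sparsity penalty weight

for _ in range(200):                        # plain gradient descent
    h = np.maximum(acts @ W_enc, 0)         # sparse codes (ReLU bottleneck)
    recon = h @ W_dec
    err = recon - acts
    grad_dec = h.T @ err / n
    grad_h = err @ W_dec.T                  # backprop through the decoder
    grad_h[h <= 0] = 0                      # ReLU gate
    grad_enc = acts.T @ (grad_h + l1 * (h > 0)) / n
    W_enc -= 0.05 * grad_enc
    W_dec -= 0.05 * grad_dec

h = np.maximum(acts @ W_enc, 0)
sparsity = np.mean(h > 0)
mse = np.mean((h @ W_dec - acts) ** 2)
print(f"active fraction: {sparsity:.2f}, reconstruction MSE: {mse:.3f}")
```

The rows of the learned decoder are the "concept dictionary": each active code selects a dictionary element, and interpretability work then asks which images activate which elements for which task.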