y0news
#foundation-models · 8 articles
AI · Bullish · arXiv – CS AI · 4h ago

SleepLM: Natural-Language Intelligence for Human Sleep

Researchers have developed SleepLM, a family of AI foundation models that combines natural language processing with sleep analysis based on polysomnography data. Trained on more than 100K hours of sleep recordings from over 10,000 individuals, the system can interpret and describe sleep patterns in natural language, enabling new capabilities such as language-guided sleep event detection and zero-shot generalization to novel sleep analysis tasks.

AI · Bullish · arXiv – CS AI · 4h ago

Brain-OF: An Omnifunctional Foundation Model for fMRI, EEG and MEG

Researchers have developed Brain-OF, the first omnifunctional brain foundation model able to process fMRI, EEG, and MEG data within a single unified framework. The model introduces techniques such as an Any-Resolution Neural Signal Sampler and Masked Temporal-Frequency Modeling, and was trained on 40 datasets to achieve superior performance across diverse neuroscience tasks.
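The masked temporal-frequency objective mentioned above can be sketched in a few lines: hide random (time, frequency) patches of a neural spectrogram and train the model to reconstruct them from the visible context. The channel count, patch grid, zero-masking scheme, and mask ratio below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical EEG spectrogram: 8 channels x 16 time steps x 12 frequency
# bins (shapes are illustrative only).
spec = rng.standard_normal((8, 16, 12))

mask_ratio = 0.5
# Choose random time-frequency cells to hide, shared across channels.
mask = rng.random((16, 12)) < mask_ratio

# Zero out the masked cells to form the corrupted model input.
masked_spec = np.where(mask, 0.0, spec)

# The reconstruction target is the original content of the masked cells.
target = spec[:, mask]
```

A real model would replace the zeroed cells with a learned mask token and minimize reconstruction error on `target`; the sketch only shows the corruption/target split.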

AI · Bullish · arXiv – CS AI · 4h ago

Hyperdimensional Cross-Modal Alignment of Frozen Language and Image Models for Efficient Image Captioning

Researchers introduce HDFLIM, a framework that aligns vision and language models without computationally expensive fine-tuning: hyperdimensional computing builds cross-modal mappings while both foundation models stay frozen. The approach matches the performance of traditional training methods while using significantly fewer resources.
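The core idea of hyperdimensional cross-modal alignment can be sketched as follows: fixed random projections lift each frozen encoder's embedding into a shared high-dimensional binary space, where similarity becomes a cheap inner product. The embedding sizes, dimensionality, and sign-binarization encoding here are common HDC conventions assumed for illustration, not HDFLIM's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000  # hypervector dimensionality (typical for HDC; assumed)

# Hypothetical frozen-encoder outputs: a 512-d image embedding and a
# 768-d text embedding (dimensions are illustrative).
img_emb = rng.standard_normal(512)
txt_emb = rng.standard_normal(768)

# Fixed random projections into the shared hyperdimensional space;
# the encoders themselves are never updated.
P_img = rng.standard_normal((DIM, 512)) / np.sqrt(512)
P_txt = rng.standard_normal((DIM, 768)) / np.sqrt(768)

def to_hypervector(x, P):
    """Project into HD space and binarize to +/-1 (a common HDC encoding)."""
    return np.sign(P @ x)

hv_img = to_hypervector(img_emb, P_img)
hv_txt = to_hypervector(txt_emb, P_txt)

# Cross-modal similarity is a normalized inner product in HD space.
sim = float(hv_img @ hv_txt) / DIM
```

Because the projections are fixed, the only thing to learn (if anything) is a lightweight mapping in the HD space, which is what makes this kind of alignment cheap relative to fine-tuning.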

AI · Bullish · arXiv – CS AI · 4h ago

TradeFM: A Generative Foundation Model for Trade-flow and Market Microstructure

Researchers introduce TradeFM, a 524M-parameter generative model that learns from billions of trade events across 9,000+ equities to model market microstructure. It can generate synthetic market data and generalizes across markets without asset-specific calibration, potentially enabling new applications in trading and market simulation.

$COMP
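A generative model over trade events needs a discrete vocabulary to predict autoregressively. One plausible scheme, sketched below, buckets each trade by side, size decile, and signed price-move bucket and flattens the triple into a single token id; the bucket counts and layout are illustrative assumptions, not TradeFM's actual tokenizer.

```python
# Illustrative trade-event tokenization (assumed, not from the paper):
# each trade becomes one integer token from a small fixed vocabulary.
SIDES = 2          # buy / sell
SIZE_BUCKETS = 10  # size deciles
MOVE_BUCKETS = 5   # e.g. large-down .. large-up price moves

def tokenize(side, size_bucket, move_bucket):
    """Flatten (side, size, move) into a single token id."""
    return (side * SIZE_BUCKETS + size_bucket) * MOVE_BUCKETS + move_bucket

def detokenize(tok):
    """Invert tokenize: recover (side, size, move) from a token id."""
    move = tok % MOVE_BUCKETS
    tok //= MOVE_BUCKETS
    size = tok % SIZE_BUCKETS
    side = tok // SIZE_BUCKETS
    return side, size, move

vocab_size = SIDES * SIZE_BUCKETS * MOVE_BUCKETS  # 100 tokens
```

With events mapped to tokens, the model itself can be an ordinary autoregressive transformer over these sequences, and sampling tokens back through `detokenize` yields synthetic trade flow.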
AI · Bullish · arXiv – CS AI · 4h ago

The Geometry of Transfer: Unlocking Medical Vision Manifolds for Training-Free Model Ranking

Researchers developed a new framework for selecting optimal medical AI foundation models without costly fine-tuning, achieving 31% better performance than existing methods. The topology-driven approach evaluates manifold tractability rather than statistical overlap to better assess model transferability for medical image segmentation tasks.

AI · Bullish · arXiv – CS AI · 4h ago

Foundation World Models for Agents that Learn, Verify, and Adapt Reliably Beyond Static Environments

Researchers propose a new framework for foundation world models that enables autonomous agents to learn, verify, and adapt reliably in dynamic environments. The approach combines reinforcement learning with formal verification and adaptive abstraction to create agents that can synthesize verifiable programs and maintain correctness while adapting to novel conditions.

AI · Bullish · arXiv – CS AI · 4h ago

SocialNav: Training Human-Inspired Foundation Model for Socially-Aware Embodied Navigation

Researchers developed SocialNav, a foundation model for socially-aware robot navigation that uses a hierarchical architecture to understand social norms and generate compliant movement paths. The model was trained on 7 million samples and achieved 38% better success rates and 46% improved social compliance compared to existing methods.

AI · Neutral · arXiv – CS AI · 4h ago

DirMixE: Harnessing Test Agnostic Long-tail Recognition with Hierarchical Label Variations

Researchers introduce DirMixE, a new machine learning approach for test-agnostic long-tail recognition, where test data distributions are unknown and imbalanced. The method uses a hierarchical Mixture-of-Experts strategy with Dirichlet meta-distributions and includes a Latent Skill Finetuning framework for efficient parameter tuning of foundation models.
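The Dirichlet meta-distribution idea can be sketched briefly: rather than gating experts with one fixed weight vector, sample many plausible gatings from a Dirichlet and average the resulting mixtures, which models uncertainty over the unknown test-time label distribution. The expert/class counts, concentration parameters, and averaging scheme below are illustrative assumptions, not DirMixE's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, n_classes = 3, 10

# Hypothetical per-expert logits for one test sample (illustrative).
expert_logits = rng.standard_normal((n_experts, n_classes))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Sample mixture weights from a Dirichlet meta-distribution instead of
# using a single fixed gating vector (concentrations are assumed).
alpha = np.array([2.0, 1.0, 1.0])
weights = rng.dirichlet(alpha, size=100)   # (100, n_experts)

probs = softmax(expert_logits)             # (n_experts, n_classes)
mixed = weights @ probs                    # (100, n_classes), each row a distribution
mean_pred = mixed.mean(axis=0)             # aggregate prediction over sampled gatings
```

Each sampled gating yields a valid class distribution, so the averaged prediction is also a distribution; skewing `alpha` encodes a prior over which test-time distributions (e.g. head-heavy vs. tail-heavy) are more likely.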