y0news

#foundation-models News & Analysis

98 articles tagged with #foundation-models. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · MarkTechPost · Apr 5 · 6/10
🧠

Meet MaxToki: The AI That Predicts How Your Cells Age — and What to Do About It

MaxToki is a new AI foundation model that can predict cellular aging patterns and trajectories, addressing a key limitation in existing biological models that only analyze cells as static snapshots. The technology represents a significant advancement in computational biology by incorporating temporal dynamics into cellular analysis.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

ES-Merging: Biological MLLM Merging via Embedding Space Signals

Researchers propose ES-Merging, a new framework for combining specialized biological multimodal large language models (MLLMs) by using embedding space signals rather than traditional parameter-based methods. The approach estimates merging coefficients at both layer-wise and element-wise granularities, outperforming existing merging techniques and even task-specific fine-tuned models on cross-modal scientific problems.
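To make the idea concrete, here is a minimal sketch of signal-driven layer merging; the function names and the distance-based coefficient rule are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def embedding_coeff(emb_a, emb_b, emb_ref):
    """Hypothetical embedding-space signal: weight each source model by how
    close its embedding of a probe input lies to a reference embedding."""
    d_a = np.linalg.norm(emb_ref - emb_a)
    d_b = np.linalg.norm(emb_ref - emb_b)
    return d_b / (d_a + d_b + 1e-12)  # closer model gets a larger coefficient

def merge_layer(w_a, w_b, alpha):
    """Layer-wise merge: interpolate the two models' weights for one layer."""
    return alpha * w_a + (1.0 - alpha) * w_b
```

An element-wise variant would compute a separate `alpha` per weight entry instead of one scalar per layer.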

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠

MVHOI: Bridge Multi-view Condition to Complex Human-Object Interaction Video Reenactment via 3D Foundation Model

Researchers introduce MVHOI, a new AI framework that significantly improves human-object interaction video generation by handling complex 3D manipulations through a two-stage process using 3D foundation models. The system can create realistic long-duration videos showing intricate object manipulations from multiple viewpoints, addressing limitations of existing approaches that struggle with non-planar movements.

AI · Neutral · arXiv – CS AI · Mar 16 · 6/10
🧠

Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models

Researchers propose integrating causal methods into machine learning systems to balance competing objectives like fairness, privacy, robustness, accuracy, and explainability. The paper argues that addressing these principles in isolation leads to conflicts and suboptimal solutions, while causal approaches can help navigate trade-offs in both trustworthy ML and foundation models.

AI · Neutral · arXiv – CS AI · Mar 12 · 6/10
🧠

RandMark: On Random Watermarking of Visual Foundation Models

Researchers propose RandMark, a new method for watermarking visual foundation models to protect intellectual property rights. The approach uses a small encoder-decoder network to embed random digital watermarks into internal representations, enabling ownership verification with low false detection rates.
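The core mechanism can be sketched in a few lines; this toy version embeds random bits along secret key directions in a feature vector and decodes them by projection. The key matrix, strength parameter, and thresholding rule are assumptions for illustration, not RandMark's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_BITS = 512, 16
key = rng.standard_normal((N_BITS, DIM))  # secret key: one direction per bit

def embed_watermark(features, bits, strength=0.1):
    """Add a low-amplitude signature along the secret key directions."""
    signs = 2.0 * bits - 1.0              # map {0, 1} -> {-1, +1}
    return features + strength * signs @ key

def decode_watermark(features):
    """Project onto the key directions and threshold at zero."""
    return (features @ key.T > 0).astype(int)
```

Ownership verification then reduces to checking the bit-error rate of the decoded watermark against the registered random bits.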

AI · Bullish · arXiv – CS AI · Mar 11 · 6/10
🧠

From Spatial to Actions: Grounding Vision-Language-Action Model in Spatial Foundation Priors

FALCON introduces a novel vision-language-action model that bridges the spatial reasoning gap by injecting 3D spatial tokens into action heads while preserving language reasoning capabilities. The system achieves state-of-the-art performance across simulation benchmarks and real-world tasks by leveraging spatial foundation models to provide geometric priors from RGB input alone.

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠

Prompt Group-Aware Training for Robust Text-Guided Nuclei Segmentation

Researchers developed a new training method to improve the robustness of AI foundation models like SAM3 for medical image segmentation by reducing sensitivity to prompt variations. The approach groups semantically similar prompts together and uses consistency constraints to ensure more reliable predictions across different prompt formulations.
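The consistency idea can be illustrated with a minimal sketch: predictions from a group of equivalent prompts are pulled toward their mean. The variance-style loss below is an assumed stand-in for whatever constraint the paper actually uses.

```python
import numpy as np

def group_consistency_loss(pred_masks):
    """Penalize disagreement between segmentation outputs produced from
    semantically equivalent prompts: mean squared deviation from the
    group-average prediction."""
    group_mean = pred_masks.mean(axis=0)
    return float(((pred_masks - group_mean) ** 2).mean())
```

Identical predictions across the prompt group give a loss of zero; the loss grows as prompt phrasing changes the output.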

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠

Think with 3D: Geometric Imagination Grounded Spatial Reasoning from Limited Views

Researchers introduce 3DThinker, a new framework that enables vision-language models to perform 3D spatial reasoning from limited 2D views without requiring 3D training data. The system uses a two-stage training approach to align 3D representations with foundation models and demonstrates superior performance across multiple benchmarks.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠

Toward Graph-Tokenizing Large Language Models with Reconstructive Graph Instruction Tuning

Researchers have developed RGLM, a new approach to improve how large language models understand and process graph data by incorporating explicit graph supervision alongside text instructions. The method addresses limitations in existing Graph-Tokenizing LLMs that rely too heavily on text supervision, leading to underutilization of graph context.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

Modular Memory is the Key to Continual Learning Agents

Researchers propose combining In-Weight Learning (IWL) and In-Context Learning (ICL) through modular memory architectures to solve continual learning challenges in AI. The framework aims to enable AI agents to continuously adapt and accumulate knowledge without catastrophic forgetting, addressing key limitations of current foundation models.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4
🧠

Intention-Conditioned Flow Occupancy Models

Researchers introduce Intention-Conditioned Flow Occupancy Models (InFOM), a new reinforcement learning approach that uses flow matching to predict future states and incorporates user intention as a latent variable. The method demonstrates significant improvements with 1.8x median return improvement and 36% higher success rates across 40 benchmark tasks.
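For readers unfamiliar with flow matching, the basic training target looks like the sketch below: a point on the straight-line path between a current and a future state, plus the velocity the model learns to predict. This is generic conditional flow matching, not InFOM's specific intention-conditioned architecture.

```python
import numpy as np

def flow_matching_target(x0, x1, t):
    """Conditional flow matching: interpolate between a current state x0
    and a future state x1 at time t, and return the constant velocity
    (x1 - x0) the model is regressed onto at that point."""
    x_t = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    return x_t, v_target
```

InFOM additionally conditions the velocity network on a latent intention variable inferred from the user's behavior.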

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4
🧠

TTOM: Test-Time Optimization and Memorization for Compositional Video Generation

Researchers introduce TTOM (Test-Time Optimization and Memorization), a training-free framework that improves compositional video generation in Video Foundation Models during inference. The system uses layout-attention optimization and parametric memory to better align text prompts with generated video outputs, showing strong transferability across different scenarios.
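The test-time loop can be sketched on a toy objective: gradient steps pull an attention map toward a target layout mask, and a memory dict caches the optimized map per prompt. The quadratic loss, learning rate, and prompt-keyed cache are illustrative assumptions, not TTOM's actual implementation.

```python
import numpy as np

def ttom_step(attn, layout_mask, lr=0.5):
    """One test-time update nudging a cross-attention map toward a target
    layout mask; the loss is quadratic, so the gradient is 2*(attn - mask)."""
    return attn - lr * 2.0 * (attn - layout_mask)

memory = {}  # parametric memory: cache the optimized map per prompt

def optimize_with_memory(prompt, attn_init, layout_mask, steps=10):
    attn = memory.get(prompt, attn_init).copy()
    for _ in range(steps):
        attn = ttom_step(attn, layout_mask)
    memory[prompt] = attn
    return attn
```

Because nothing is trained, the same loop can wrap any video foundation model at inference time.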

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠

FCN-LLM: Empower LLM for Brain Functional Connectivity Network Understanding via Graph-level Multi-task Instruction Tuning

Researchers have developed FCN-LLM, a framework that enables Large Language Models to understand brain functional connectivity networks from fMRI scans through multi-task instruction tuning. The system uses a multi-scale encoder to capture brain features and demonstrates strong zero-shot generalization across unseen datasets, outperforming conventional supervised models.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠

Zero-Shot and Supervised Bird Image Segmentation Using Foundation Models: A Dual-Pipeline Approach with Grounding DINO 1.5, YOLOv11, and SAM 2.1

Researchers developed a dual-pipeline framework for bird image segmentation using foundation models including Grounding DINO 1.5, YOLOv11, and SAM 2.1. The supervised pipeline achieved state-of-the-art results with 0.912 IoU on the CUB-200-2011 dataset, while the zero-shot pipeline achieved 0.831 IoU using only text prompts.
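For reference, the IoU scores quoted above are computed as intersection over union between predicted and ground-truth masks, as in this standard sketch:

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-Union between two binary segmentation masks;
    the metric both pipelines are scored on."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0
```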

AI · Bearish · arXiv – CS AI · Mar 3 · 6/10 · 6
🧠

Knowledge without Wisdom: Measuring Misalignment between LLMs and Intended Impact

Research reveals that leading foundation models (LLMs) perform poorly on real-world educational tasks despite excelling on AI benchmarks. The study found that 50% of misalignment errors are shared across models due to common pretraining approaches, with model ensembles actually worsening performance on learning outcomes.
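The "shared across models" statistic can be made concrete with a simple set computation; this proxy (intersection over union of each model's error set) is an assumed reading of the paper's measure, not its exact definition.

```python
def shared_error_fraction(error_sets):
    """Fraction of all distinct errors that every model makes: a simple
    proxy for measuring shared misalignment across a model pool."""
    union = set().union(*error_sets)
    shared = set.intersection(*(set(s) for s in error_sets))
    return len(shared) / len(union) if union else 0.0
```

A high value explains why ensembling fails to help: when models share the same errors, voting cannot correct them.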

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 14
🧠

SleepLM: Natural-Language Intelligence for Human Sleep

Researchers have developed SleepLM, a family of AI foundation models that combine natural language processing with sleep analysis using polysomnography data. The system can interpret and describe sleep patterns in natural language, trained on over 100K hours of sleep data from 10,000+ individuals, enabling new capabilities like language-guided sleep event detection and zero-shot generalization to novel sleep analysis tasks.

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 13
🧠

Brain-OF: An Omnifunctional Foundation Model for fMRI, EEG and MEG

Researchers have developed Brain-OF, the first omnifunctional brain foundation model that can process fMRI, EEG, and MEG data simultaneously within a unified framework. The model introduces novel techniques like Any-Resolution Neural Signal Sampler and Masked Temporal-Frequency Modeling, trained on 40 datasets to achieve superior performance across diverse neuroscience tasks.

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 12
🧠

Hyperdimensional Cross-Modal Alignment of Frozen Language and Image Models for Efficient Image Captioning

Researchers introduce HDFLIM, a new framework that aligns vision and language AI models without requiring computationally expensive fine-tuning by using hyperdimensional computing to create cross-modal mappings while keeping foundation models frozen. The approach achieves comparable performance to traditional training methods while being significantly more resource-efficient.
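The hyperdimensional trick can be sketched as follows: embeddings from a frozen model are mapped into high-dimensional bipolar vectors by a fixed random projection, where similarity is robust to small perturbations. The dimensions and projection scheme here are illustrative assumptions, not HDFLIM's actual mapping.

```python
import numpy as np

rng = np.random.default_rng(1)
D, E = 4096, 512
proj = rng.standard_normal((D, E))  # fixed random map, never trained

def to_hypervector(embedding):
    """Project a frozen-model embedding into a bipolar hypervector."""
    return np.sign(proj @ embedding)

def similarity(a, b):
    """Cosine similarity of two bipolar hypervectors."""
    return float(a @ b) / len(a)
```

Because the projection is fixed, "alignment" costs only matrix multiplies at inference time; no foundation-model weights are ever updated.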

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 16
🧠

TradeFM: A Generative Foundation Model for Trade-flow and Market Microstructure

Researchers introduced TradeFM, a 524M-parameter generative AI model that learns from billions of trade events across 9,000+ equities to understand market microstructure. The model can generate synthetic market data and generalizes across different markets without asset-specific calibration, potentially enabling new applications in trading and market simulation.

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 12
🧠

The Geometry of Transfer: Unlocking Medical Vision Manifolds for Training-Free Model Ranking

Researchers developed a new framework for selecting optimal medical AI foundation models without costly fine-tuning, achieving 31% better performance than existing methods. The topology-driven approach evaluates manifold tractability rather than statistical overlap to better assess model transferability for medical image segmentation tasks.

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 11
🧠

Foundation World Models for Agents that Learn, Verify, and Adapt Reliably Beyond Static Environments

Researchers propose a new framework for foundation world models that enables autonomous agents to learn, verify, and adapt reliably in dynamic environments. The approach combines reinforcement learning with formal verification and adaptive abstraction to create agents that can synthesize verifiable programs and maintain correctness while adapting to novel conditions.

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 19
🧠

SocialNav: Training Human-Inspired Foundation Model for Socially-Aware Embodied Navigation

Researchers developed SocialNav, a foundation model for socially-aware robot navigation that uses a hierarchical architecture to understand social norms and generate compliant movement paths. The model was trained on 7 million samples and achieved 38% better success rates and 46% improved social compliance compared to existing methods.

Page 3 of 4