y0news

#computer-vision News & Analysis

507 articles tagged with #computer-vision. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 5

Shape-Interpretable Visual Self-Modeling Enables Geometry-Aware Continuum Robot Control

Researchers developed a shape-interpretable visual self-modeling framework for continuum robots that enables geometry-aware control using Bezier-curve representations and neural ordinary differential equations. The system achieves accurate shape-position regulation with shape errors within 1.56% and end-effector errors within 2% while enabling obstacle avoidance and environmental awareness.
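The geometry-aware control idea rests on representing the robot's backbone as a Bezier curve. A minimal sketch of that representation (the control points, curve degree, and units here are hypothetical, not from the paper):

```python
import numpy as np

def bezier_point(control_points: np.ndarray, t: float) -> np.ndarray:
    """Evaluate a Bezier curve at parameter t via de Casteljau's algorithm."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Repeatedly blend adjacent points until one remains.
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Hypothetical control points for a bending continuum backbone (metres).
ctrl = np.array([[0.0, 0.0], [0.0, 0.1], [0.05, 0.2], [0.12, 0.25]])

base = bezier_point(ctrl, 0.0)  # robot base
mid = bezier_point(ctrl, 0.5)   # mid-backbone sample
tip = bezier_point(ctrl, 1.0)   # end-effector position
```

Regulating shape then amounts to driving the fitted control points; the end-effector position falls out of the same curve at t = 1.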

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 5

Dataset Color Quantization: A Training-Oriented Framework for Dataset-Level Compression

Researchers propose Dataset Color Quantization (DCQ), a new framework that compresses visual datasets by reducing color-space redundancy while preserving information crucial for AI model training. The method achieves significant storage reduction across major datasets including CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet-1K while maintaining training performance.
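DCQ itself is a training-oriented, dataset-level method, but the underlying idea of shrinking color-space redundancy can be pictured with a generic palette-quantization sketch (plain k-means over pixels, not the authors' algorithm; all values hypothetical):

```python
import numpy as np

def quantize_colors(pixels: np.ndarray, k: int, iters: int = 10, seed: int = 0):
    """Map RGB pixels onto a k-colour palette with a tiny k-means loop."""
    rng = np.random.default_rng(seed)
    palette = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest palette colour.
        d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Move each palette colour to the mean of its assigned pixels.
        for j in range(k):
            if (labels == j).any():
                palette[j] = pixels[labels == j].mean(0)
    return palette, labels

# Toy "image": two well-separated colour clusters.
pixels = np.array([[250, 10, 10], [245, 5, 12], [10, 10, 240], [12, 8, 235]], float)
palette, labels = quantize_colors(pixels, k=2)
```

Storing a small palette plus per-pixel indices is what makes the compressed dataset cheaper to keep while the coarse colour structure that training relies on survives.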

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10 · 4

EgoNight: Towards Egocentric Vision Understanding at Night with a Challenging Benchmark

Researchers introduce EgoNight, the first comprehensive benchmark for nighttime egocentric vision understanding, featuring day-night aligned videos and visual question answering tasks. The benchmark reveals significant performance drops in state-of-the-art multimodal large language models when operating under low-light conditions.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3

BiMotion: B-spline Motion for Text-guided Dynamic 3D Character Generation

Researchers introduce BiMotion, a new AI framework that uses B-spline curves to generate high-quality 3D character animations from text descriptions. The method addresses limitations in existing approaches by using continuous motion representation instead of discrete frames, enabling more expressive and coherent character movements.
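The contrast with discrete frames comes from spline blending: a uniform cubic B-spline turns sparse keyframes into a C2-continuous motion. A minimal scalar sketch (the basis polynomials are the standard uniform cubic B-spline basis; the keyframe values are made up):

```python
def cubic_bspline(p0: float, p1: float, p2: float, p3: float, t: float) -> float:
    """Uniform cubic B-spline segment: a C2-continuous blend of four keyframes.

    The four basis polynomials sum to 1 for every t in [0, 1].
    """
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    b3 = t**3 / 6
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3

# Hypothetical joint-angle keyframes; sampling t densely yields smooth motion.
angle_at_quarter = cubic_bspline(0.0, 1.0, 2.0, 3.0, 0.25)
```

Because the representation is a continuous curve rather than a frame list, velocities and accelerations stay smooth across keyframe boundaries.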

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3

HIMM: Human-Inspired Long-Term Memory Modeling for Embodied Exploration and Question Answering

Researchers propose HIMM, a new memory framework for embodied AI agents that separates episodic and semantic memory to improve long-term performance. The system achieves significant gains on benchmarks, with a 7.3% improvement in LLM-Match and 11.4% in LLM-Match×SPL, addressing key challenges in deploying multimodal language models as the brains of embodied agents.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

CT-Flow: Orchestrating CT Interpretation Workflow with Model Context Protocol Servers

Researchers have developed CT-Flow, an AI framework that mimics how radiologists actually work by using tools interactively to analyze 3D CT scans. The system achieved 41% better diagnostic accuracy than existing models and 95% success in autonomous tool use, potentially revolutionizing clinical radiology workflows.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4

ChainMPQ: Interleaved Text-Image Reasoning Chains for Mitigating Relation Hallucinations

Researchers propose ChainMPQ, a training-free method to reduce relation hallucinations in Large Vision-Language Models (LVLMs) by using interleaved text-image reasoning chains. The approach addresses the most common but least studied type of AI hallucination by sequentially analyzing subjects, objects, and their relationships through multi-perspective questioning.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

QuickGrasp: Responsive Video-Language Querying Service via Accelerated Tokenization and Edge-Augmented Inference

Researchers propose QuickGrasp, a video-language querying system that combines local processing with edge computing to achieve both fast response times and high accuracy. The system achieves up to 12.8x reduction in response delay while maintaining the accuracy of large video-language models through accelerated tokenization and adaptive edge augmentation.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 6

M-Gaussian: A Magnetic Gaussian Framework for Efficient Multi-Stack MRI Reconstruction

Researchers developed M-Gaussian, a new AI framework that adapts 3D Gaussian Splatting for efficient multi-stack MRI reconstruction. The method achieves 40.31 dB PSNR while being 14 times faster than existing implicit neural representation methods, offering improved balance between quality and computational efficiency.
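The 40.31 dB figure is a standard PSNR measurement, which can be reproduced for any reconstruction as follows (a generic metric sketch, not the paper's code):

```python
import numpy as np

def psnr(ref: np.ndarray, recon: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - recon.astype(float)) ** 2)
    return float(10 * np.log10(peak**2 / mse))

# Toy example on [0, 1]-normalised images with a uniform 0.1 error.
ref = np.zeros((4, 4))
recon = np.full((4, 4), 0.1)
value = psnr(ref, recon)
```

Every extra ~6 dB halves the RMS error, so 40 dB on normalised intensities corresponds to roughly 1% average deviation from the reference volume.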

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7

Dr. Seg: Revisiting GRPO Training for Visual Large Language Models through Perception-Oriented Design

Researchers introduce Dr. Seg, a new framework that improves Group Relative Policy Optimization (GRPO) training for Visual Large Language Models by addressing key differences between language reasoning and visual perception tasks. The framework includes a Look-to-Confirm mechanism and Distribution-Ranked Reward module that enhance performance in complex visual scenarios without requiring architectural changes.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8

FlowPortrait: Reinforcement Learning for Audio-Driven Portrait Video Generation

FlowPortrait is a new reinforcement learning framework that uses Multimodal Large Language Models for evaluation to generate more realistic talking-head videos with better lip synchronization. The system combines human-aligned assessment with policy optimization techniques to address persistent issues in audio-driven portrait animation.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 6

DINOv3 Meets YOLO26 for Weed Detection in Vegetable Crops

Researchers developed a foundational crop-weed detection model combining DINOv3 vision transformer with YOLO26 architecture, achieving significant improvements in precision agriculture applications. The model showed up to 14% better performance on cross-domain datasets while maintaining real-time processing at 28.5 fps despite increased computational requirements.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 11

Multimodal Optimal Transport for Unsupervised Temporal Segmentation in Surgical Robotics

Researchers developed TASOT, an unsupervised AI method for surgical phase recognition that combines visual and textual information without requiring expensive large-scale pre-training. The approach showed significant improvements over existing zero-shot methods across multiple surgical datasets, demonstrating that effective surgical AI can be achieved with more efficient training methods.
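Optimal transport here means soft-assigning video frames to surgical phases. A minimal entropic-OT (Sinkhorn) sketch with uniform marginals — the standard machinery, not TASOT itself, with a hypothetical cost matrix:

```python
import numpy as np

def sinkhorn(cost: np.ndarray, reg: float = 0.1, iters: int = 200) -> np.ndarray:
    """Entropic optimal-transport plan between uniform marginals."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)  # uniform mass over frames
    b = np.full(m, 1.0 / m)  # uniform mass over phases
    K = np.exp(-cost / reg)  # Gibbs kernel of the cost
    v = np.ones(m)
    for _ in range(iters):
        # Alternately rescale rows and columns to match the marginals.
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Hypothetical frame-to-phase dissimilarity (e.g. visual-text distance).
cost = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
plan = sinkhorn(cost)
```

The resulting coupling `plan` is a soft segmentation: each frame's row distributes its mass over phases, with more mass on cheaper (more similar) assignments.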

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 15

DiffusionHarmonizer: Bridging Neural Reconstruction and Photorealistic Simulation with Online Diffusion Enhancer

Researchers introduce DiffusionHarmonizer, an AI framework that enhances neural reconstruction simulations for autonomous robots by converting multi-step image diffusion models into single-step enhancers. The system addresses artifacts in NeRF and 3D Gaussian Splatting methods while improving realism for applications like self-driving vehicle simulation.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 15

A Mixed Diet Makes DINO An Omnivorous Vision Encoder

Researchers have developed an 'Omnivorous Vision Encoder' that creates consistent feature representations across different visual modalities (RGB, depth, segmentation) of the same scene. The framework addresses the poor cross-modal alignment in existing vision encoders like DINOv2 by training with dual objectives to maximize feature alignment while preserving discriminative semantics.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 12

See, Act, Adapt: Active Perception for Unsupervised Cross-Domain Visual Adaptation via Personalized VLM-Guided Agent

Researchers introduce Sea² (See, Act, Adapt), a novel approach that improves AI perception models in new environments by using an intelligent pose-control agent rather than retraining the models themselves. The method keeps perception modules frozen and uses a vision-language model as a controller, achieving significant performance improvements of 13-27% across visual tasks without requiring additional training data.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 11

Less is More: AMBER-AFNO -- a New Benchmark for Lightweight 3D Medical Image Segmentation

Researchers developed AMBER-AFNO, a new lightweight architecture for 3D medical image segmentation that replaces traditional attention mechanisms with Adaptive Fourier Neural Operators. The model achieves state-of-the-art results on medical datasets while maintaining linear memory scaling and quasi-linear computational complexity.
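The appeal of Fourier-operator layers is global token mixing at O(n log n) instead of attention's O(n²). A toy sketch of frequency-domain mixing (a fixed low-pass filter for illustration; AFNO itself learns adaptive per-mode weights):

```python
import numpy as np

def fourier_mix(tokens: np.ndarray, keep: int) -> np.ndarray:
    """Mix a token sequence globally by filtering along the sequence axis
    in the Fourier domain, keeping only the `keep` lowest frequencies."""
    spec = np.fft.rfft(tokens, axis=0)      # to frequency domain
    spec[keep:] = 0                          # truncate high-frequency modes
    return np.fft.irfft(spec, n=tokens.shape[0], axis=0)

# A constant sequence is pure DC, so low-pass mixing preserves it exactly.
constant = np.ones(8)
mixed = fourier_mix(constant, keep=2)

# Shape is preserved for multi-channel token sequences too.
tokens = np.arange(16.0).reshape(8, 2)
out = fourier_mix(tokens, keep=3)
```

Because the FFT touches every position, each output token depends on the whole sequence, yet memory stays linear in sequence length — the trade the summary describes.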

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 15

Interpretable Debiasing of Vision-Language Models for Social Fairness

Researchers have developed DeBiasLens, a new framework that uses sparse autoencoders to identify and deactivate social bias neurons in Vision-Language models without degrading their performance. The model-agnostic approach addresses concerns about unintended social bias in VLMs by making the debiasing process interpretable and targeting internal model dynamics rather than surface-level fixes.
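The neuron-deactivation step can be pictured as encoding an activation into a sparse latent space, zeroing the flagged units, and decoding back. A toy sketch (matrices and indices here are hypothetical; the actual SAE architecture and bias-unit selection are DeBiasLens's own):

```python
import numpy as np

def ablate_latents(activation: np.ndarray, W_enc: np.ndarray,
                   W_dec: np.ndarray, bias_units: list) -> np.ndarray:
    """Encode into a sparse latent code, zero flagged units, decode back."""
    z = np.maximum(W_enc @ activation, 0.0)  # ReLU sparse code
    z[bias_units] = 0.0                       # deactivate flagged "bias" latents
    return W_dec @ z

# Toy identity autoencoder: ablating latent 1 removes that component only.
W_enc = np.eye(3)
W_dec = np.eye(3)
cleaned = ablate_latents(np.array([1.0, 2.0, 3.0]), W_enc, W_dec, [1])
```

Because only a handful of latent directions are zeroed, the rest of the representation — and hence task performance — is left largely intact, which is the interpretability selling point.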

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 15

PointCoT: A Multi-modal Benchmark for Explicit 3D Geometric Reasoning

Researchers introduce PointCoT, a new AI framework that enables multimodal large language models to perform explicit geometric reasoning on 3D point cloud data using Chain-of-Thought methodology. The framework addresses current limitations where AI models suffer from geometric hallucinations by implementing a 'Look, Think, then Answer' paradigm with 86k instruction-tuning samples.

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 17

SemVideo: Reconstructs What You Watch from Brain Activity via Hierarchical Semantic Guidance

Researchers introduced SemVideo, a breakthrough AI framework that can reconstruct videos from brain activity using fMRI scans. The system uses hierarchical semantic guidance to overcome previous limitations in visual consistency and temporal coherence, achieving state-of-the-art results in brain-to-video reconstruction.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 21

Reallocating Attention Across Layers to Reduce Multimodal Hallucination

Researchers propose a training-free solution to reduce hallucinations in multimodal AI models by rebalancing attention between perception and reasoning layers. The method achieves 4.2% improvement in reasoning accuracy with minimal computational overhead.
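The rebalancing can be pictured as shifting attention mass toward one group of tokens and renormalizing, applied at inference with no retraining. A deliberately simplified sketch (the paper redistributes across layers with a principled rule; the boost factor and token split here are purely illustrative):

```python
import numpy as np

def reallocate_attention(attn: np.ndarray, visual_idx: np.ndarray,
                         boost: float) -> np.ndarray:
    """Upweight attention on visual tokens, then renormalise each row.

    `attn` is a (queries x keys) row-stochastic attention matrix.
    """
    out = attn.copy()
    out[:, visual_idx] *= boost                      # amplify visual keys
    return out / out.sum(axis=1, keepdims=True)      # rows sum to 1 again

# Toy 2-token case: key 0 is "visual", key 1 is "textual".
attn = np.array([[0.5, 0.5],
                 [0.25, 0.75]])
rebalanced = reallocate_attention(attn, np.array([0]), boost=2.0)
```

Because the intervention only rescales existing attention weights, it adds essentially no compute — consistent with the "minimal overhead" claim in the summary.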

AI · Neutral · arXiv – CS AI · Mar 2 · 6/10 · 12

Ref-Adv: Exploring MLLM Visual Reasoning in Referring Expression Tasks

Researchers introduce Ref-Adv, a new benchmark for testing multimodal large language models' visual reasoning capabilities in referring expression tasks. The benchmark reveals that current MLLMs, despite performing well on standard datasets like RefCOCO, rely heavily on shortcuts and show significant gaps in genuine visual reasoning and grounding abilities.

Page 13 of 21