507 articles tagged with #computer-vision. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 5
🧠Researchers developed a shape-interpretable visual self-modeling framework for continuum robots that enables geometry-aware control using Bézier-curve representations and neural ordinary differential equations. The system achieves accurate shape-position regulation, with shape errors within 1.56% and end-effector errors within 2%, while enabling obstacle avoidance and environmental awareness.
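The Bézier representation is easy to picture: the robot's backbone is a smooth curve fully determined by a handful of control points, so regulating shape reduces to regulating those points. A minimal sketch of Bézier evaluation via de Casteljau's algorithm (the control points below are hypothetical, not taken from the paper):

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a Bezier curve at parameter t via de Casteljau's algorithm."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points.
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Hypothetical control points for a planar continuum-robot backbone.
ctrl = [(0.0, 0.0), (0.0, 0.5), (0.4, 0.8), (0.8, 0.9)]

base = bezier(ctrl, 0.0)   # curve starts at the first control point
tip = bezier(ctrl, 1.0)    # end-effector lies at the last control point
mid = bezier(ctrl, 0.5)    # (0.25, 0.6) for these control points
```

Because the curve interpolates its endpoints, the end-effector position falls out of the same few parameters that describe the whole shape.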
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 5
🧠Researchers propose Dataset Color Quantization (DCQ), a new framework that compresses visual datasets by reducing color-space redundancy while preserving information crucial for AI model training. The method achieves significant storage reduction across major datasets including CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet-1K while maintaining training performance.
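To give a sense of what "reducing color-space redundancy" can mean, here is a toy uniform quantizer that snaps each 8-bit channel to bucket centers. DCQ itself is not described at this level of detail in the summary, so this is only the generic idea, not the paper's method:

```python
import numpy as np

def quantize_colors(img, bits=4):
    """Snap each 8-bit channel to the center of one of 2**bits uniform buckets.

    A toy stand-in for color-space redundancy reduction; adaptive schemes
    (median-cut, learned palettes) choose the levels per dataset instead.
    """
    step = 256 // (2 ** bits)
    return (img // step) * step + step // 2

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)

q = quantize_colors(img)
n_before = len(np.unique(img.reshape(-1, 3), axis=0))
n_after = len(np.unique(q.reshape(-1, 3), axis=0))  # never more colors than before
```

Fewer distinct colors means the image compresses to fewer bytes on disk, which is where the storage savings come from.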
AI Neutral · arXiv – CS AI · Mar 3 · 6/10 · 4
🧠Researchers introduce EgoNight, the first comprehensive benchmark for nighttime egocentric vision understanding, featuring day-night aligned videos and visual question answering tasks. The benchmark reveals significant performance drops in state-of-the-art multimodal large language models when operating under low-light conditions.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3
🧠Researchers introduce BiMotion, a new AI framework that uses B-spline curves to generate high-quality 3D character animations from text descriptions. The method addresses limitations in existing approaches by using continuous motion representation instead of discrete frames, enabling more expressive and coherent character movements.
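The contrast with discrete frames is that a B-spline gives a pose at any continuous time, with smoothness guaranteed by overlapping control points. A minimal sketch of one uniform cubic B-spline segment (the joint-angle keyframes are hypothetical; the paper's actual parameterization may differ):

```python
def cubic_bspline_segment(p0, p1, p2, p3, u):
    """Evaluate one uniform cubic B-spline segment at u in [0, 1].

    Adjacent segments share three control points, which is what gives
    B-spline motion its smoothness across keyframes.
    """
    b0 = (1 - u) ** 3 / 6
    b1 = (3 * u**3 - 6 * u**2 + 4) / 6
    b2 = (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6
    b3 = u**3 / 6
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3

# Hypothetical 1-D joint-angle keyframes acting as control points.
k0, k1, k2, k3 = 0.0, 0.2, 0.5, 0.4
pose_start = cubic_bspline_segment(k0, k1, k2, k3, 0.0)  # (k0 + 4*k1 + k2) / 6
pose_mid = cubic_bspline_segment(k0, k1, k2, k3, 0.5)
```

Sampling u densely yields arbitrarily fine animation frames from a small set of generated control points.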
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3
🧠Researchers propose HIMM, a new memory framework for AI embodied agents that separates episodic and semantic memory to improve long-term performance. The system achieves significant gains on benchmarks, with a 7.3% improvement in LLM-Match and 11.4% in LLM-Match×SPL, addressing key challenges in deploying multimodal language models as embodied agent brains.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠Researchers developed SurgFusion-Net, a multimodal AI system for assessing surgical skills in robotic-assisted surgery. The system introduces new clinical datasets and fusion techniques that outperform existing baselines, addressing the domain gap between simulation and real clinical environments.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠Researchers have developed CT-Flow, an AI framework that mimics how radiologists actually work by using tools interactively to analyze 3D CT scans. The system achieved 41% better diagnostic accuracy than existing models and 95% success in autonomous tool use, potentially revolutionizing clinical radiology workflows.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4
🧠Researchers propose ChainMPQ, a training-free method to reduce relation hallucinations in Large Vision-Language Models (LVLMs) by using interleaved text-image reasoning chains. The approach addresses the most common but least studied type of AI hallucination by sequentially analyzing subjects, objects, and their relationships through multi-perspective questioning.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠Researchers propose QuickGrasp, a video-language querying system that combines local processing with edge computing to achieve both fast response times and high accuracy. The system achieves up to 12.8x reduction in response delay while maintaining the accuracy of large video-language models through accelerated tokenization and adaptive edge augmentation.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠Researchers developed TinyVLM, the first framework enabling zero-shot object detection on microcontrollers with less than 1MB memory. The system achieves real-time inference at 26 FPS on STM32H7 and over 1,000 FPS on MAX78000, making AI vision capabilities practical for resource-constrained edge devices.
AI Bullish · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠Researchers developed M-Gaussian, a new AI framework that adapts 3D Gaussian Splatting for efficient multi-stack MRI reconstruction. The method achieves 40.31 dB PSNR while being 14 times faster than existing implicit neural representation methods, offering improved balance between quality and computational efficiency.
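For context on the 40.31 dB figure: PSNR is the standard reconstruction-fidelity metric, computed from the mean squared error against a reference image. A minimal implementation of the standard formula:

```python
import numpy as np

def psnr(ref, rec, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(rec, float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# A reconstruction off by a uniform 0.01 on a [0, 1] scale scores exactly 40 dB.
val = psnr(np.zeros((8, 8)), np.full((8, 8), 0.01))
```

Each extra 10 dB corresponds to a 10x reduction in mean squared error, so 40+ dB indicates a very close pixel-level match.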
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers introduce Dr. Seg, a new framework that improves Group Relative Policy Optimization (GRPO) training for Visual Large Language Models by addressing key differences between language reasoning and visual perception tasks. The framework includes a Look-to-Confirm mechanism and Distribution-Ranked Reward module that enhance performance in complex visual scenarios without requiring architectural changes.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠FlowPortrait is a new reinforcement learning framework that uses Multimodal Large Language Models as evaluators to generate more realistic talking-head videos with better lip synchronization. The system combines human-aligned assessment with policy optimization techniques to address persistent issues in audio-driven portrait animation.
AI Bullish · arXiv – CS AI · Mar 3 · 6/10 · 6
🧠Researchers developed a foundational crop-weed detection model combining DINOv3 vision transformer with YOLO26 architecture, achieving significant improvements in precision agriculture applications. The model showed up to 14% better performance on cross-domain datasets while maintaining real-time processing at 28.5 fps despite increased computational requirements.
AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 11
🧠Researchers developed TASOT, an unsupervised AI method for surgical phase recognition that combines visual and textual information without requiring expensive large-scale pre-training. The approach showed significant improvements over existing zero-shot methods across multiple surgical datasets, demonstrating that effective surgical AI can be achieved with more efficient training methods.
AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 15
🧠Researchers introduce DiffusionHarmonizer, an AI framework that enhances neural reconstruction simulations for autonomous robots by converting multi-step image diffusion models into single-step enhancers. The system addresses artifacts in NeRF and 3D Gaussian Splatting methods while improving realism for applications like self-driving vehicle simulation.
AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 15
🧠Researchers have developed an 'Omnivorous Vision Encoder' that creates consistent feature representations across different visual modalities (RGB, depth, segmentation) of the same scene. The framework addresses the poor cross-modal alignment in existing vision encoders like DINOv2 by training with dual objectives to maximize feature alignment while preserving discriminative semantics.
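The summary does not spell out the dual objectives, but "maximize feature alignment" for paired views of the same scene is commonly expressed as a cosine-similarity loss between per-patch features. A hedged sketch of one plausible form (the function and data below are illustrative, not the paper's loss):

```python
import numpy as np

def alignment_loss(f_a, f_b):
    """1 - mean cosine similarity between paired per-patch features from two
    modalities (e.g. RGB and depth). One plausible alignment objective;
    the paper's exact formulation is not given in the summary.
    """
    a = f_a / np.linalg.norm(f_a, axis=-1, keepdims=True)
    b = f_b / np.linalg.norm(f_b, axis=-1, keepdims=True)
    return 1.0 - (a * b).sum(axis=-1).mean()

rgb_feats = np.array([[1.0, 0.0], [0.0, 1.0]])
aligned = alignment_loss(rgb_feats, rgb_feats)            # 0.0: identical features
misaligned = alignment_loss(rgb_feats, rgb_feats[::-1])   # 1.0: orthogonal features
```

Minimizing this term pulls the RGB, depth, and segmentation embeddings of the same patch together, while a second, preservation-style objective would keep the features discriminative.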
AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 12
🧠Researchers introduce Sea² (See, Act, Adapt), a novel approach that improves AI perception models in new environments by using an intelligent pose-control agent rather than retraining the models themselves. The method keeps perception modules frozen and uses a vision-language model as a controller, achieving significant performance improvements of 13-27% across visual tasks without requiring additional training data.
AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 11
🧠Researchers developed AMBER-AFNO, a new lightweight architecture for 3D medical image segmentation that replaces traditional attention mechanisms with Adaptive Fourier Neural Operators. The model achieves state-of-the-art results on medical datasets while maintaining linear memory scaling and quasi-linear computational complexity.
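The replacement works because mixing tokens in the Fourier domain costs O(n log n) rather than attention's O(n²). A minimal sketch of Fourier-domain token mixing (real AFNO applies small block-diagonal MLPs with soft thresholding to the modes; the per-mode filter `w` here is only a hypothetical stand-in):

```python
import numpy as np

def fourier_token_mix(tokens, w):
    """Mix tokens in the frequency domain: FFT over the token axis, apply a
    per-mode filter, inverse FFT back to token space."""
    modes = np.fft.rfft(tokens, axis=0)      # (n//2 + 1, dim) complex modes
    return np.fft.irfft(modes * w[:, None], n=tokens.shape[0], axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))                 # 16 tokens, 8 channels
identity = np.ones(16 // 2 + 1, dtype=complex)   # all-pass filter

y = fourier_token_mix(x, identity)               # all-pass filter reconstructs x
```

Because the filter acts per frequency rather than per token pair, memory grows linearly with sequence length, matching the scaling behavior the summary describes.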
AI Bullish · arXiv – CS AI · Mar 2 · 7/10 · 15
🧠Researchers have developed DeBiasLens, a new framework that uses sparse autoencoders to identify and deactivate social bias neurons in Vision-Language models without degrading their performance. The model-agnostic approach addresses concerns about unintended social bias in VLMs by making the debiasing process interpretable and targeting internal model dynamics rather than surface-level fixes.
AI Bullish · arXiv – CS AI · Mar 2 · 7/10 · 15
🧠Researchers introduce PointCoT, a new AI framework that enables multimodal large language models to perform explicit geometric reasoning on 3D point cloud data using Chain-of-Thought methodology. The framework addresses current limitations where AI models suffer from geometric hallucinations by implementing a 'Look, Think, then Answer' paradigm with 86k instruction-tuning samples.
AI Bullish · arXiv – CS AI · Mar 2 · 7/10 · 17
🧠Researchers introduced SemVideo, a breakthrough AI framework that can reconstruct videos from brain activity using fMRI scans. The system uses hierarchical semantic guidance to overcome previous limitations in visual consistency and temporal coherence, achieving state-of-the-art results in brain-to-video reconstruction.
AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 21
🧠Researchers propose a training-free solution to reduce hallucinations in multimodal AI models by rebalancing attention between perception and reasoning layers. The method achieves 4.2% improvement in reasoning accuracy with minimal computational overhead.
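One training-free way to rebalance attention is to shift logits before the softmax: adding log(α) to the logits of image-token positions multiplies their unnormalized mass by α. This is a toy illustration of the general mechanism; the paper's layer-wise scheme, mask, and constants are not given in the summary, so everything below is hypothetical:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def rebalance(logits, visual_mask, alpha=2.0):
    """Boost attention on visual (perception) tokens by adding log(alpha)
    to their logits, scaling their pre-normalization softmax mass by alpha."""
    return softmax(logits + np.log(alpha) * visual_mask)

logits = np.array([2.0, 1.0, 0.5, 0.1])
mask = np.array([1.0, 1.0, 0.0, 0.0])    # first two positions are image tokens

base = softmax(logits)
boosted = rebalance(logits, mask)        # more mass on image tokens, still sums to 1
```

Because only the logits move, no weights are retrained, which is what makes this family of fixes cheap at inference time.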
AI Bullish · arXiv – CS AI · Mar 2 · 6/10 · 9
🧠Researchers propose ProtoDCS, a new framework for robust test-time adaptation of Vision-Language Models in open-set scenarios. The method uses Gaussian Mixture Model verification and uncertainty-aware learning to better handle distribution shifts while maintaining computational efficiency.
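The summary mentions Gaussian Mixture Model verification; the single-Gaussian special case of that idea, fit a density to known-class features and reject low-likelihood samples as open-set, can be sketched as follows (the feature clusters below are synthetic, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical feature vectors: a known-class cluster plus a few open-set samples.
known = rng.normal(0.0, 1.0, size=(500, 2))
unknown = rng.normal(6.0, 1.0, size=(5, 2))

mu = known.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(known, rowvar=False))

def maha_sq(x):
    """Squared Mahalanobis distance to the fitted known-class Gaussian."""
    d = x - mu
    return np.einsum('ij,jk,ik->i', d, inv_cov, d)

# Reject samples far from the fitted density as open-set.
threshold = np.quantile(maha_sq(known), 0.99)
is_open_set = maha_sq(unknown) > threshold
```

A full GMM extends this to several components per class, and an uncertainty-aware scheme would additionally down-weight updates from samples near the threshold.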
AI Neutral · arXiv – CS AI · Mar 2 · 6/10 · 12
🧠Researchers introduce Ref-Adv, a new benchmark for testing multimodal large language models' visual reasoning capabilities in referring expression tasks. The benchmark reveals that current MLLMs, despite performing well on standard datasets like RefCOCO, rely heavily on shortcuts and show significant gaps in genuine visual reasoning and grounding abilities.