13,307 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers introduced ViCLIP-OT, the first foundation vision-language model specifically designed for Vietnamese image-text retrieval. The model integrates CLIP-style contrastive learning with Similarity-Graph Regularized Optimal Transport (SIGROT) loss, achieving significant improvements over existing baselines, with 67.34% average Recall@K on the UIT-OpenViIC benchmark.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠BetterScene is a new AI approach that enhances 3D scene synthesis and novel view generation from sparse photos by leveraging Stable Video Diffusion with improved regularization techniques. The method integrates 3D Gaussian Splatting and addresses consistency issues in existing diffusion-based solutions through temporal equivariance and vision foundation model alignment.
$RNDR
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers propose ContextRL, a new framework that uses context augmentation to improve machine learning model efficiency in knowledge discovery. The framework enables smaller models like Qwen3-VL-8B to achieve performance comparable to much larger 32B models through enhanced reward modeling and multi-turn sampling strategies.
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠A study of 1,201 climate-concerned individuals found that personalized AI conversations using climate-equipped large language models significantly improved understanding of climate action impacts and increased intentions to adopt high-impact behaviors. The personalized climate LLM outperformed web searches, unspecialized LLMs, and control groups in motivating behavior change through tailored guidance.
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 6
🧠Researchers propose QARMVC, a new AI framework for multi-view clustering that addresses heterogeneous noise in real-world data. The system uses quality scores to identify contamination levels and employs hierarchical learning to improve clustering performance, showing superior results across benchmark datasets.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠DrivePTS introduces a new AI framework for generating diverse driving scenes to improve autonomous vehicle testing. The system uses progressive learning, multi-view descriptions, and frequency-guided structure loss to overcome limitations in current scene generation methods.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 3
🧠Researchers developed DisQ-HNet, a new AI framework that synthesizes tau-PET brain scans from MRI data to detect Alzheimer's disease pathology. The method uses advanced neural network architectures to generate cost-effective alternatives to expensive PET imaging while maintaining diagnostic accuracy.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 8
🧠Researchers developed AVDE, a lightweight framework for decoding visual information from EEG brain signals using autoregressive generation. The system outperforms existing methods while using only 10% of the parameters, potentially advancing practical brain-computer interface applications.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 4
🧠Researchers developed HARU-Net, a novel AI architecture for denoising cone-beam computed tomography (CBCT) medical images that outperforms existing state-of-the-art methods while using less computational resources. The system addresses critical noise issues in low-dose dental and maxillofacial imaging by combining hybrid attention mechanisms with residual U-Net architecture.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers introduce dLLM, an open-source framework that unifies core components of diffusion language modeling including training, inference, and evaluation. The framework enables users to reproduce, finetune, and deploy large diffusion language models like LLaDA and Dream while providing tools to build smaller models from scratch with accessible compute resources.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers developed a two-stage framework to optimize large reasoning models, reducing overthinking on simple queries while maintaining accuracy on complex problems. The approach achieved up to 3.7 accuracy point improvements while reducing token generation by over 40% through hybrid fine-tuning and adaptive reinforcement learning techniques.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers demonstrated that prompt optimization using Genetic-Pareto (GEPA) significantly improves language models' ability to detect errors in medical notes. The technique boosted accuracy from 0.669 to 0.785 with GPT-5 and from 0.578 to 0.690 with Qwen3-32B, achieving state-of-the-art performance on medical error detection benchmarks.
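The GEPA result above can be pictured as an evolutionary search over prompt variants. A minimal, illustrative sketch (the snippet texts, scoring function, and loop below are hypothetical stand-ins; GEPA's actual genetic-Pareto algorithm is more involved):

```python
import random

random.seed(0)

# Hypothetical instruction fragments a mutation step might splice in.
SNIPPETS = ["Check dosages.", "Cite the note span.", "Flag negations.",
            "List differential causes.", "Verify units."]

def score(prompt):
    # Stand-in fitness: real use would run the LLM on labeled medical notes.
    return sum(s in prompt for s in SNIPPETS)

def mutate(prompt):
    # Append one randomly chosen instruction fragment.
    return prompt + " " + random.choice(SNIPPETS)

def evolve(base, generations=20, population=8):
    pool = [base]
    for _ in range(generations):
        children = [mutate(random.choice(pool)) for _ in range(population)]
        # Keep the highest-scoring prompts; parents compete with children.
        pool = sorted(pool + children, key=score, reverse=True)[:population]
    return pool[0]

best = evolve("Find errors in this medical note.")
```

The selection step guarantees the best score never decreases across generations, which is the property the reported 0.669-to-0.785 accuracy jump relies on at much larger scale.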
AI · Neutral · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers created a corpus of 4.5k texts analyzing how different AI personas, including Microsoft's controversial Sydney chatbot, express views on human-AI relationships across 12 major language models. The study examines how the Sydney persona has spread memetically through training data, allowing newer models to simulate its distinctive characteristics and perspectives.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers propose RL-aware distillation (RLAD), a new method to efficiently transfer knowledge from large language models to smaller ones during reinforcement learning training. The approach uses Trust Region Ratio Distillation (TRRD) to selectively guide student models only when it improves policy updates, outperforming existing distillation methods across reasoning benchmarks.
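The "guide the student only when it helps" idea can be sketched as a gated loss term. A toy illustration (the names and the PPO-style trust-region gate are assumptions, not the paper's exact TRRD objective):

```python
import math

def gated_distill_loss(student_logp, teacher_logp, old_logp, advantage,
                       eps=0.2, distill_weight=0.1):
    """Toy sketch: add a teacher-matching term only when the policy ratio
    stays inside a clipped trust region, so distillation never fights a
    clipped policy update."""
    ratio = math.exp(student_logp - old_logp)
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    # Standard PPO-style clipped surrogate loss.
    policy_loss = -min(ratio * advantage, clipped * advantage)
    # Gate: distill only while the update is inside the trust region.
    in_trust_region = (1 - eps) <= ratio <= (1 + eps)
    distill_loss = (student_logp - teacher_logp) ** 2 if in_trust_region else 0.0
    return policy_loss + distill_weight * distill_loss
```

Inside the region, the student is pulled toward the teacher; once the ratio leaves the clip range, only the (already clipped) policy term remains active.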
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 3
🧠Researchers have developed SignVLA, the first sign language-driven Vision-Language-Action framework for human-robot interaction that directly translates sign gestures into robotic commands without requiring intermediate gloss annotations. The system currently focuses on real-time alphabet-level finger-spelling for robotic control and is designed to support future expansion to word and sentence-level understanding.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers developed LEREDD, an LLM-based system that automates the detection of dependencies between software requirements using Retrieval-Augmented Generation and In-Context Learning. The system achieved 93% accuracy in classifying requirement dependencies, significantly outperforming existing baselines with relative gains of over 94% in F1 scores for specific dependency types.
AI · Neutral · arXiv – CS AI · Feb 27 · 5/10 · 6
🧠Researchers developed Fair-PaperRec, an AI system that uses fairness regularization to reduce bias in academic peer review processes. The system achieved up to 42% increased participation from underrepresented groups while maintaining scholarly quality with minimal utility loss.
$NEAR
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers introduce ECHO, a new Graph Neural Network architecture that solves community detection in large networks by overcoming computational bottlenecks and memory constraints. The system can process networks with over 1.6 million nodes and 30 million edges in minutes, achieving throughputs exceeding 2,800 nodes per second.
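As a quick sanity check, the two reported figures are consistent: 1.6 million nodes at roughly 2,800 nodes per second does land in the "minutes" range.

```python
# Back-of-envelope check of the reported ECHO throughput.
nodes = 1_600_000
throughput = 2_800            # nodes per second, as reported
minutes = nodes / throughput / 60
# roughly nine and a half minutes for the full 1.6M-node graph
```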
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers introduce GetBatch, a new object store API that optimizes machine learning data loading by replacing thousands of individual GET requests with a single batch operation. The system achieves up to 15x throughput improvement for small objects and reduces batch retrieval latency by 2x in production ML training workloads.
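The claimed win comes from amortizing fixed per-request overhead across many objects. A minimal, illustrative mock (class and method names are hypothetical, not the paper's API), with a dict standing in for the remote store:

```python
class ObjectStore:
    """Mock store: every round trip pays a fixed overhead, so one batched
    call beats thousands of individual GETs."""
    PER_REQUEST_OVERHEAD_US = 500   # assumed fixed cost per round trip

    def __init__(self, objects):
        self._objects = objects      # key -> bytes

    def get(self, key):
        # One round trip per object.
        return self.PER_REQUEST_OVERHEAD_US, self._objects[key]

    def get_batch(self, keys):
        # One round trip returns every requested object.
        return self.PER_REQUEST_OVERHEAD_US, [self._objects[k] for k in keys]

store = ObjectStore({f"shard/{i}": b"x" * 64 for i in range(1000)})
keys = [f"shard/{i}" for i in range(1000)]

naive_cost = sum(store.get(k)[0] for k in keys)   # 1000 round trips
batch_cost, blobs = store.get_batch(keys)         # 1 round trip
speedup = naive_cost / batch_cost
```

In this toy model the overhead savings scale linearly with batch size; the paper's 15x figure for small objects reflects that real stores also pay variable transfer and per-request processing costs.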
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers developed a new AI framework using RNN-T architecture to improve speech recognition for Taiwanese Hakka, an endangered low-resource language with high dialectal variability. The system achieved 57% and 40% relative error rate reductions for two different writing systems, marking the first systematic investigation into Hakka dialect variations in ASR.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers have developed AeroDGS, a physics-guided 4D Gaussian splatting framework that enables accurate dynamic scene reconstruction from single-view aerial UAV footage. The system addresses key challenges in monocular aerial reconstruction by incorporating physics-based optimization and geometric constraints to resolve depth ambiguity and improve motion estimation.
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers developed EyeLayer, a module that integrates human eye-tracking patterns into large language models to improve code summarization. The system achieved up to 13.17% improvement on BLEU-4 metrics by using human gaze data to guide AI attention mechanisms.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers developed a deep learning framework using Organ Focused Attention (OFA) to predict renal tumor malignancy from 3D CT scans without requiring manual segmentation. The system achieved AUC scores of 0.685-0.760 across datasets, outperforming traditional segmentation-based approaches while reducing labor and costs.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 8
🧠Researchers propose GRAU, a new reconfigurable activation unit design for neural network hardware accelerators that uses piecewise linear fitting with power-of-two slopes. The design reduces LUT consumption by over 90% compared to traditional multi-threshold activators while supporting mixed-precision quantization and nonlinear functions.
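The power-of-two-slope idea can be sketched in a few lines: snap each fitted segment slope to the nearest power of two, so the hardware multiply reduces to a bit shift. A rough NumPy illustration approximating a sigmoid (the midpoint-anchored fitting here is a simplistic assumption, not the paper's procedure):

```python
import numpy as np

def pow2_quantize(slope):
    """Snap a slope to the nearest power of two, so a hardware
    multiply becomes a bit shift."""
    if slope == 0:
        return 0.0
    return np.sign(slope) * 2.0 ** np.round(np.log2(abs(slope)))

def pwl_pow2(x, thresholds, slopes, intercepts):
    """Evaluate a piecewise linear function with power-of-two slopes."""
    idx = np.searchsorted(thresholds, x)
    return slopes[idx] * x + intercepts[idx]

# Fit a sigmoid on [-4, 4] with 8 segments, then snap slopes to powers of two.
sig = lambda t: 1 / (1 + np.exp(-t))
xs = np.linspace(-4, 4, 9)                 # segment boundaries
mids = (xs[:-1] + xs[1:]) / 2
raw_slopes = (sig(xs[1:]) - sig(xs[:-1])) / (xs[1:] - xs[:-1])
slopes = np.array([pow2_quantize(s) for s in raw_slopes])
intercepts = sig(mids) - slopes * mids     # re-anchor each segment at its midpoint
thresholds = xs[1:-1]

# Worst-case approximation error over a dense grid.
err = max(abs(pwl_pow2(t, thresholds, slopes, intercepts) - sig(t))
          for t in np.linspace(-4, 4, 401))
```

Because every slope is an exact power of two, the per-segment multiply in hardware is a shifter plus an adder, which is where the reported 90%+ LUT savings over multi-threshold activators would come from.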
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers developed MomentMix and Length-Aware DETR to improve video moment retrieval, addressing challenges in localizing short video segments based on natural language queries. The method achieves significant performance gains on benchmark datasets, with up to 16.9% improvement in average mAP on QVHighlights.