9,287 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers introduce ECHO, a new Graph Neural Network architecture that solves community detection in large networks by overcoming computational bottlenecks and memory constraints. The system can process networks with over 1.6 million nodes and 30 million edges in minutes, achieving throughputs exceeding 2,800 nodes per second.
AI Bearish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠A new research study reveals that Large Language Models' moral decision-making can be significantly influenced by contextual cues in prompts, even when the models claim neutrality. The research shows that LLMs exhibit systematic bias when given directed contextual influences in moral dilemma scenarios, challenging assumptions about AI moral consistency.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 3
🧠Researchers have developed SignVLA, the first sign language-driven Vision-Language-Action framework for human-robot interaction that directly translates sign gestures into robotic commands without requiring intermediate gloss annotations. The system currently focuses on real-time alphabet-level finger-spelling for robotic control and is designed to support future expansion to word and sentence-level understanding.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers introduce GetBatch, a new object store API that optimizes machine learning data loading by replacing thousands of individual GET requests with a single batch operation. The system achieves up to 15x throughput improvement for small objects and reduces batch retrieval latency by 2x in production ML training workloads.
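The batching idea above can be sketched in a few lines. This is purely illustrative: the `ObjectStore` class and its `get`/`get_batch` methods are hypothetical stand-ins, not the paper's actual API; the point is that one batched call amortizes per-request overhead that dominates when objects are small.

```python
# Hypothetical sketch of batched vs. per-object retrieval.
# `ObjectStore`, `get`, and `get_batch` are illustrative names,
# not the API proposed in the paper.

class ObjectStore:
    """Toy in-memory store standing in for a remote object store."""

    def __init__(self, objects):
        self._objects = dict(objects)

    def get(self, key):
        # One round trip per object: per-request overhead dominates
        # when objects are small.
        return self._objects[key]

    def get_batch(self, keys):
        # One round trip for the whole batch: overhead is amortized
        # across all requested objects.
        return {k: self._objects[k] for k in keys}

store = ObjectStore({f"sample-{i}": bytes([i]) for i in range(256)})

# Baseline: many individual GETs per training step.
batch_keys = [f"sample-{i}" for i in range(32)]
individually = {k: store.get(k) for k in batch_keys}

# Batched: the same data in a single operation.
batched = store.get_batch(batch_keys)
assert individually == batched
```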
AI Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers developed EyeLayer, a module that integrates human eye-tracking patterns into large language models to improve code summarization. The system achieved up to a 13.17% improvement in BLEU-4 by using human gaze data to guide AI attention mechanisms.
AI Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers have developed Decoder-based Sense Knowledge Distillation (DSKD), a new framework that integrates lexical resources into decoder-style large language models during training. The method enhances knowledge distillation performance while enabling generative models to inherit structured semantics without requiring dictionary lookup during inference.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 8
🧠Researchers propose GRAU, a new reconfigurable activation unit design for neural network hardware accelerators that uses piecewise linear fitting with power-of-two slopes. The design reduces LUT consumption by over 90% compared to traditional multi-threshold activators while supporting mixed-precision quantization and nonlinear functions.
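The power-of-two-slope idea can be illustrated with a minimal sketch (this is not the GRAU design itself, and the breakpoints and slopes below are made up): because every segment slope is a power of two, the per-segment multiply reduces to a bit shift in hardware, which is what saves LUTs.

```python
# Illustrative sketch (not the paper's GRAU circuit): a piecewise
# linear function whose segment slopes are powers of two, so each
# multiply is implementable as a bit shift.

def pwl_pow2(x, breakpoints, shifts, intercepts):
    """Evaluate a piecewise linear function with power-of-two slopes.

    breakpoints: ascending segment boundaries
    shifts:      per-segment slope exponents (slope = 2**shift)
    intercepts:  per-segment offsets
    """
    # Find the segment containing x.
    seg = 0
    for b in breakpoints:
        if x >= b:
            seg += 1
    slope = 2.0 ** shifts[seg]  # in hardware: a shift, not a multiplier
    return slope * x + intercepts[seg]

# Made-up 3-segment curve with a near-zero, a unit, and a 1/4 slope.
bps = [0.0, 2.0]       # segments: x < 0, 0 <= x < 2, x >= 2
shs = [-10, 0, -2]     # slopes 2**-10 (~0), 1, 0.25
ics = [0.0, 0.0, 1.5]  # offsets chosen so segments roughly meet
```

With these parameters, `pwl_pow2(1.0, bps, shs, ics)` evaluates the middle unit-slope segment and returns `1.0`.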
AI Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers developed a multimodal AI framework using transformer-based large language models to analyze the critical first three seconds of video advertisements. The system combines visual, auditory, and textual analysis to predict ad performance metrics and optimize video advertising strategies.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers introduce UpSkill, a new training method that uses Mutual Information Skill Learning to improve large language models' ability to generate diverse correct responses across multiple attempts. The technique shows ~3% improvements in pass@k metrics on mathematical reasoning tasks using models like Llama 3.1-8B and Qwen 2.5-7B without degrading single-attempt accuracy.
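For readers unfamiliar with the pass@k metric mentioned above, here is the standard unbiased estimator (1 − C(n−c, k)/C(n, k) over n samples with c correct). This is the commonly used definition, not necessarily the paper's exact evaluation code.

```python
# Standard unbiased pass@k estimator: the probability that at least
# one of k sampled attempts is correct, given c correct answers
# among n independent samples.

from math import comb

def pass_at_k(n, c, k):
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples and 3 correct, drawing k=1 vs. k=5 attempts:
print(round(pass_at_k(10, 3, 1), 3))  # 0.3
print(round(pass_at_k(10, 3, 5), 3))  # 0.917
```

Methods like the one above aim to raise pass@k for k > 1 (diverse, repeated attempts) without hurting pass@1.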
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers propose a new approach using Adversarial Inverse Reinforcement Learning for machinery fault detection that learns from healthy operational data without requiring manual fault labels. The framework treats fault detection as a sequential decision-making problem and demonstrates effective early fault detection on three benchmark datasets.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers developed a hybrid system combining machine learning ensembles with large language models for heart disease prediction, achieving 96.62% accuracy. The study found that traditional ML models (95.78% accuracy) outperformed standalone LLMs (78.9% accuracy), but combining both approaches yielded the best results for clinical decision-support tools.
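One simple way such a hybrid can be wired up is a weighted combination of the two models' probabilities. This is a hypothetical sketch, not the paper's pipeline; the 0.8 weight and the `hybrid_predict` function are assumptions chosen only to reflect the ML ensemble's higher standalone accuracy.

```python
# Hypothetical hybrid sketch (not the paper's method): weight the
# stronger traditional ML model's probability more heavily than the
# LLM's, then classify at a 0.5 threshold.

def hybrid_predict(ml_prob, llm_prob, ml_weight=0.8):
    """Combine two disease probabilities by weighted averaging."""
    combined = ml_weight * ml_prob + (1 - ml_weight) * llm_prob
    return combined, combined >= 0.5

score, positive = hybrid_predict(0.7, 0.4)
print(round(score, 2), positive)  # 0.64 True
```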
AI Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers have developed a self-supervised learning method that can reconstruct audio and images from clipped/saturated measurements without requiring ground truth training data. The approach extends self-supervised learning to non-linear inverse problems and performs nearly as well as fully supervised methods while using only clipped measurements for training.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠CryoNet.Refine introduces a deep learning framework that uses one-step diffusion models to rapidly refine molecular structures in cryo-electron microscopy. The AI system automates and accelerates the traditionally manual and computationally expensive process of fitting atomic models into experimental density maps.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 4
🧠Researchers decoded the internal representations of scGPT, a single-cell foundation model, revealing it organizes genes into interpretable biological coordinate systems rather than opaque features. The model encodes cellular organization patterns including protein localization, interaction networks, and regulatory relationships across its transformer layers.
AI Bearish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers evaluated prompt injection and jailbreak vulnerabilities across multiple open-source LLMs including Phi, Mistral, DeepSeek-R1, Llama 3.2, Qwen, and Gemma. The study found significant behavioral variations across models and that lightweight defense mechanisms can be consistently bypassed by long, reasoning-heavy prompts.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 8
🧠Researchers introduce a quantum-inspired sequence modeling framework that uses complex-valued wave functions and quantum interference for language processing. The approach shows theoretical advantages over traditional recurrent neural networks by utilizing quantum dynamics and the Born rule for token probability extraction.
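The Born-rule step mentioned above is simple to illustrate: token scores are complex amplitudes, and probabilities come from squared magnitudes. The sketch below is purely illustrative of that extraction step, not the paper's full model; it also shows why complex amplitudes matter, since two "paths" to a token can cancel.

```python
# Born-rule probability extraction: p_i = |a_i|^2 / sum_j |a_j|^2.
# Illustrative only -- the paper's wave-function model is far richer.

import cmath

def born_probs(amplitudes):
    """Map complex token amplitudes to a probability distribution."""
    mags = [abs(a) ** 2 for a in amplitudes]
    total = sum(mags)
    return [m / total for m in mags]

# Two interfering paths to the same token add as amplitudes, not
# probabilities: e^{i*0} + e^{i*pi} = 1 + (-1) cancels destructively.
path1, path2 = cmath.exp(0j), cmath.exp(1j * cmath.pi)
amps = [path1 + path2, cmath.exp(0.3j)]  # first token: destructive
probs = born_probs(amps)
print([round(p, 3) for p in probs])  # first probability ~0
```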
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 8
🧠Researchers developed a new framework called 'Stitching Noisy Diffusion Thoughts' that improves AI reasoning by combining the best parts of multiple solution attempts rather than just selecting complete answers. The method achieves up to 23.8% accuracy improvement on math and coding tasks while running 1.8x faster than existing approaches.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers have developed AeroDGS, a physics-guided 4D Gaussian splatting framework that enables accurate dynamic scene reconstruction from single-view aerial UAV footage. The system addresses key challenges in monocular aerial reconstruction by incorporating physics-based optimization and geometric constraints to resolve depth ambiguity and improve motion estimation.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers introduce AOT (Adversarial Opponent Training), a self-play framework that improves Multimodal Large Language Models' robustness by having an AI attacker generate adversarial image manipulations to train a defender model. The method addresses perceptual fragility in MLLMs when processing visually complex scenes, reducing hallucinations through dynamic adversarial training.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers developed improved neural retriever-reranker pipelines for Retrieval-Augmented Generation (RAG) systems over knowledge graphs in e-commerce applications. The study achieved 20.4% higher Hit@1 and 14.5% higher Mean Reciprocal Rank compared to existing benchmarks, providing a framework for production-ready RAG systems.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers have developed SmartChunk retrieval, a query-adaptive framework that improves retrieval-augmented generation (RAG) systems by dynamically adjusting chunk sizes and compression for document question answering. The system uses a planner to predict optimal chunk abstraction levels and a compression module to create efficient embeddings, outperforming existing RAG baselines while reducing costs.
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠DS-Serve is a new framework that converts massive text datasets (up to half a trillion tokens) into efficient neural retrieval systems. The framework provides web interfaces and APIs with low latency and supports applications like retrieval-augmented generation (RAG) and training data attribution.
AI Bearish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers analyzed factual accuracy of Chinese web information systems, comparing traditional search engines, standalone LLMs, and AI overviews using 12,161 real-world queries. The study found substantial differences in factual accuracy across systems and estimated potential misinformation exposure for Chinese users.
AI Bullish · arXiv – CS AI · Feb 27 · 5/10 · 6
🧠Researchers propose a new AI inference method that uses invariant transformations and resampling to reduce epistemic uncertainty and improve model accuracy. The approach involves applying multiple transformed versions of an input to a trained AI model and aggregating the outputs for more reliable results.
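The transform-and-aggregate loop described above can be sketched minimally. Everything here is a toy stand-in (a scalar "model" and a tiny shift as the invariant transform), not the paper's method; the point is the pattern of resampling the input through invariant transformations and aggregating the model's outputs.

```python
# Toy sketch of test-time aggregation over invariant transforms.
# `predict`, `small_shift`, and the mean aggregator are stand-ins.

import math
import statistics

def predict(x):
    # Toy stand-in for a trained model's scalar output.
    return math.sin(x)

def identity(x):
    return x

def small_shift(x):
    # A transform the task is assumed to be invariant to.
    return x + 1e-3

def tta_predict(x, transforms):
    """Run the model on each transformed input and aggregate the
    outputs (mean here; median or voting are common alternatives)."""
    return statistics.mean(predict(t(x)) for t in transforms)

aggregated = tta_predict(2.0, [identity, small_shift])
```

Averaging over transforms the model should be invariant to reduces the variance contributed by any single view of the input.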
AI Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers introduce Duel-Evolve, a new optimization algorithm that improves LLM performance at test time without requiring external rewards or labels. The method uses self-generated pairwise comparisons and achieved 20 percentage points higher accuracy on MathBench and 12 percentage points improvement on LiveCodeBench.