13,316 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers introduce ECHO, a new Graph Neural Network architecture for community detection in large networks that overcomes the computational bottlenecks and memory constraints of prior methods. The system can process networks with over 1.6 million nodes and 30 million edges in minutes, achieving throughputs exceeding 2,800 nodes per second.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers developed LEREDD, an LLM-based system that automates the detection of dependencies between software requirements using Retrieval-Augmented Generation and In-Context Learning. The system achieved 93% accuracy in classifying requirement dependencies, significantly outperforming existing baselines with relative gains of over 94% in F1 scores for specific dependency types.
AI · Neutral · arXiv – CS AI · Feb 27 · 5/10 · 6
🧠Researchers developed Fair-PaperRec, an AI system that uses fairness regularization to reduce bias in academic peer review processes. The system achieved up to 42% increased participation from underrepresented groups while maintaining scholarly quality with minimal utility loss.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers introduce GetBatch, a new object store API that optimizes machine learning data loading by replacing thousands of individual GET requests with a single batch operation. The system achieves up to 15x throughput improvement for small objects and reduces batch retrieval latency by 2x in production ML training workloads.
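The win from a batched GET is fewer round trips. A minimal sketch of the idea, using a hypothetical in-memory `ObjectStore` with illustrative `get`/`get_batch` methods (not the paper's actual API):

```python
# Hypothetical sketch: one batched call replaces many per-object GETs.
# `ObjectStore`, `get`, and `get_batch` are illustrative stand-ins.

class ObjectStore:
    def __init__(self, blobs):
        self._blobs = blobs   # key -> bytes
        self.requests = 0     # counts round trips to the store

    def get(self, key):
        self.requests += 1    # one round trip per object
        return self._blobs[key]

    def get_batch(self, keys):
        self.requests += 1    # one round trip for the whole batch
        return [self._blobs[k] for k in keys]


store = ObjectStore({f"sample/{i}": bytes([i % 256]) for i in range(1000)})
keys = [f"sample/{i}" for i in range(1000)]

naive = [store.get(k) for k in keys]   # 1000 round trips
store.requests = 0
batched = store.get_batch(keys)        # 1 round trip
assert naive == batched and store.requests == 1
```

For small objects, per-request overhead (connection setup, metadata lookup) dominates transfer time, which is why collapsing a thousand requests into one can yield the reported order-of-magnitude throughput gains.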
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers developed a deep learning framework using Organ Focused Attention (OFA) to predict renal tumor malignancy from 3D CT scans without requiring manual segmentation. The system achieved AUC scores of 0.685-0.760 across datasets, outperforming traditional segmentation-based approaches while reducing labor and costs.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers have developed AeroDGS, a physics-guided 4D Gaussian splatting framework that enables accurate dynamic scene reconstruction from single-view aerial UAV footage. The system addresses key challenges in monocular aerial reconstruction by incorporating physics-based optimization and geometric constraints to resolve depth ambiguity and improve motion estimation.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 8
🧠Researchers propose GRAU, a new reconfigurable activation unit design for neural network hardware accelerators that uses piecewise linear fitting with power-of-two slopes. The design reduces LUT consumption by over 90% compared to traditional multi-threshold activators while supporting mixed-precision quantization and nonlinear functions.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers demonstrated that prompt optimization using Genetic-Pareto (GEPA) significantly improves language models' ability to detect errors in medical notes. The technique boosted accuracy from 0.669 to 0.785 with GPT-5 and from 0.578 to 0.690 with Qwen3-32B, achieving state-of-the-art performance on medical error detection benchmarks.
AI · Neutral · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers created a corpus of 4.5k texts analyzing how different AI personas, including Microsoft's controversial Sydney chatbot, express views on human-AI relationships across 12 major language models. The study examines how the Sydney persona has spread memetically through training data, allowing newer models to simulate its distinctive characteristics and perspectives.
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers developed EyeLayer, a module that integrates human eye-tracking patterns into large language models to improve code summarization. The system achieved up to 13.17% improvement on BLEU-4 metrics by using human gaze data to guide AI attention mechanisms.
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers developed a multimodal AI framework using transformer-based large language models to analyze the critical first three seconds of video advertisements. The system combines visual, auditory, and textual analysis to predict ad performance metrics and optimize video advertising strategies.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers introduce UpSkill, a new training method that uses Mutual Information Skill Learning to improve large language models' ability to generate diverse correct responses across multiple attempts. The technique shows ~3% improvements in pass@k metrics on mathematical reasoning tasks using models like Llama 3.1-8B and Qwen 2.5-7B without degrading single-attempt accuracy.
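The pass@k metric cited above is standard in LLM evaluation: the probability that at least one of k sampled attempts is correct. Its usual unbiased estimator (not specific to this paper) can be computed as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn without replacement from n attempts, of which c are
    correct, is correct. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k incorrect attempts: a success is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 attempts and 3 correct, pass@1 is the raw accuracy (0.3),
# while pass@5 is much higher because any single success counts.
assert abs(pass_at_k(10, 3, 1) - 0.3) < 1e-9
assert pass_at_k(10, 3, 5) > pass_at_k(10, 3, 1)
```

This is why diversity across attempts matters: a model can raise pass@k without changing pass@1 simply by making its samples less correlated, which is the behavior UpSkill targets.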
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers propose a new approach using Adversarial Inverse Reinforcement Learning for machinery fault detection that learns from healthy operational data without requiring manual fault labels. The framework treats fault detection as a sequential decision-making problem and demonstrates effective early fault detection on three benchmark datasets.
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers have developed a self-supervised learning method that can reconstruct audio and images from clipped/saturated measurements without requiring ground truth training data. The approach extends self-supervised learning to non-linear inverse problems and performs nearly as well as fully supervised methods while using only clipped measurements for training.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠CryoNet.Refine introduces a deep learning framework that uses one-step diffusion models to rapidly refine molecular structures in cryo-electron microscopy. The AI system automates and accelerates the traditionally manual and computationally expensive process of fitting atomic models into experimental density maps.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers propose RL-aware distillation (RLAD), a new method to efficiently transfer knowledge from large language models to smaller ones during reinforcement learning training. The approach uses Trust Region Ratio Distillation (TRRD) to selectively guide student models only when it improves policy updates, outperforming existing distillation methods across reasoning benchmarks.
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers have developed Decoder-based Sense Knowledge Distillation (DSKD), a new framework that integrates lexical resources into decoder-style large language models during training. The method enhances knowledge distillation performance while enabling generative models to inherit structured semantics without requiring dictionary lookup during inference.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers developed a hybrid system combining machine learning ensembles with large language models for heart disease prediction, achieving 96.62% accuracy. The study found that traditional ML models (95.78% accuracy) outperformed standalone LLMs (78.9% accuracy), but combining both approaches yielded the best results for clinical decision-support tools.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers introduce AOT (Adversarial Opponent Training), a self-play framework that improves Multimodal Large Language Models' robustness by having an AI attacker generate adversarial image manipulations to train a defender model. The method addresses perceptual fragility in MLLMs when processing visually complex scenes, reducing hallucinations through dynamic adversarial training.
AI · Bearish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers evaluated prompt injection and jailbreak vulnerabilities across multiple open-source LLMs including Phi, Mistral, DeepSeek-R1, Llama 3.2, Qwen, and Gemma. The study found significant behavioral variations across models and that lightweight defense mechanisms can be consistently bypassed by long, reasoning-heavy prompts.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers have developed SmartChunk retrieval, a query-adaptive framework that improves retrieval-augmented generation (RAG) systems by dynamically adjusting chunk sizes and compression for document question answering. The system uses a planner to predict optimal chunk abstraction levels and a compression module to create efficient embeddings, outperforming existing RAG baselines while reducing costs.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers developed improved neural retriever-reranker pipelines for Retrieval-Augmented Generation (RAG) systems over knowledge graphs in e-commerce applications. The study achieved 20.4% higher Hit@1 and 14.5% higher Mean Reciprocal Rank compared to existing benchmarks, providing a framework for production-ready RAG systems.
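Hit@1 and Mean Reciprocal Rank are the standard retrieval metrics behind those numbers; a minimal reference implementation (not from the paper) looks like:

```python
def hit_at_1(ranked_lists, gold):
    """Fraction of queries whose top-ranked item is the gold answer."""
    return sum(r[0] == g for r, g in zip(ranked_lists, gold)) / len(gold)

def mrr(ranked_lists, gold):
    """Mean Reciprocal Rank: average of 1/rank of the gold answer,
    counting 0 when the gold answer is not retrieved at all."""
    total = 0.0
    for ranked, g in zip(ranked_lists, gold):
        if g in ranked:
            total += 1.0 / (ranked.index(g) + 1)
    return total / len(gold)

# Toy example: gold answers land at ranks 1, 2, and 3 respectively.
ranked = [["a", "b", "c"], ["x", "y", "z"], ["p", "q", "r"]]
gold = ["a", "y", "r"]
assert abs(hit_at_1(ranked, gold) - 1 / 3) < 1e-9
assert abs(mrr(ranked, gold) - (1 + 1 / 2 + 1 / 3) / 3) < 1e-9
```

MRR rewards near-misses (gold at rank 2 or 3 still scores), while Hit@1 only credits exact top-rank retrieval, which is why papers typically report both.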
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 4
🧠Researchers decoded the internal representations of scGPT, a single-cell foundation model, revealing it organizes genes into interpretable biological coordinate systems rather than opaque features. The model encodes cellular organization patterns including protein localization, interaction networks, and regulatory relationships across its transformer layers.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠DS-Serve is a new framework that converts massive text datasets (up to half a trillion tokens) into efficient neural retrieval systems. The framework provides web interfaces and APIs with low latency and supports applications like retrieval-augmented generation (RAG) and training data attribution.
AI · Bearish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers analyzed factual accuracy of Chinese web information systems, comparing traditional search engines, standalone LLMs, and AI overviews using 12,161 real-world queries. The study found substantial differences in factual accuracy across systems and estimated potential misinformation exposure for Chinese users.