9,304 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers propose a new approach using Adversarial Inverse Reinforcement Learning for machinery fault detection that learns from healthy operational data without requiring manual fault labels. The framework treats fault detection as a sequential decision-making problem and demonstrates effective early fault detection on three benchmark datasets.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers introduce UpSkill, a new training method that uses Mutual Information Skill Learning to improve large language models' ability to generate diverse correct responses across multiple attempts. The technique shows ~3% improvements in pass@k metrics on mathematical reasoning tasks using models like Llama 3.1-8B and Qwen 2.5-7B without degrading single-attempt accuracy.
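The pass@k metric reported above has a standard unbiased estimator (popularized by the Codex paper); a minimal sketch, independent of any particular model or of UpSkill itself:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one
    of k completions sampled (without replacement) from n attempts,
    of which c are correct, passes.  p = 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k incorrect attempts exist, so any k-sample
        # must contain a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 3 correct answers out of 10 attempts, pass@1 is 0.3, and pass@5 rises to about 0.92, which is why diversity across attempts matters for this metric.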
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers developed a multimodal AI framework using transformer-based large language models to analyze the critical first three seconds of video advertisements. The system combines visual, auditory, and textual analysis to predict ad performance metrics and optimize video advertising strategies.
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7
🧠Researchers have developed a self-supervised learning method that can reconstruct audio and images from clipped/saturated measurements without requiring ground truth training data. The approach extends self-supervised learning to non-linear inverse problems and performs nearly as well as fully supervised methods while using only clipped measurements for training.
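The clipped-measurement setting can be made concrete with a toy saturation operator and the kind of data-consistency penalty declipping methods build on. This is an illustrative sketch under assumed clip levels, not the paper's actual training loss:

```python
import numpy as np

def clip_op(x: np.ndarray, lo: float = -0.5, hi: float = 0.5) -> np.ndarray:
    """Saturation/clipping measurement operator: values outside
    [lo, hi] are flattened to the clip levels."""
    return np.clip(x, lo, hi)

def consistency_loss(x_hat: np.ndarray, y: np.ndarray,
                     lo: float = -0.5, hi: float = 0.5) -> float:
    """Data-consistency term for a candidate reconstruction x_hat
    against clipped measurements y: match y exactly on unsaturated
    samples, and lie beyond the clip level on saturated ones."""
    unsat = (y > lo) & (y < hi)
    loss = np.sum((x_hat[unsat] - y[unsat]) ** 2)
    # Saturated-high samples must be reconstructed at or above hi.
    loss += np.sum(np.maximum(hi - x_hat[y >= hi], 0.0) ** 2)
    # Saturated-low samples must be reconstructed at or below lo.
    loss += np.sum(np.maximum(x_hat[y <= lo] - lo, 0.0) ** 2)
    return float(loss)
```

A reconstruction equal to the true signal incurs zero loss even though training only ever sees the clipped y, which is what makes the self-supervised formulation possible.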
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers developed a hybrid system combining machine learning ensembles with large language models for heart disease prediction, achieving 96.62% accuracy. The study found that traditional ML models (95.78% accuracy) outperformed standalone LLMs (78.9% accuracy), but combining both approaches yielded the best results for clinical decision-support tools.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 8
🧠Researchers introduce a quantum-inspired sequence modeling framework that uses complex-valued wave functions and quantum interference for language processing. The approach shows theoretical advantages over traditional recurrent neural networks by utilizing quantum dynamics and the Born rule for token probability extraction.
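The Born rule step mentioned above is straightforward to illustrate: complex-valued amplitudes over the vocabulary are squared in magnitude and normalized into token probabilities, so amplitudes arriving with opposite phases can cancel (interference). A minimal NumPy sketch with a made-up three-token amplitude vector:

```python
import numpy as np

def born_probabilities(amplitudes: np.ndarray) -> np.ndarray:
    """Map complex token amplitudes to a probability distribution
    via the Born rule: p_i = |psi_i|^2 / sum_j |psi_j|^2."""
    mag2 = np.abs(amplitudes) ** 2
    return mag2 / mag2.sum()

# Toy wave function over a 3-token vocabulary (illustrative values).
psi = np.array([1 + 1j, 1 - 1j, 0.0 + 0.0j])
p = born_probabilities(psi)  # two equal-magnitude tokens, one zero
```

Unlike a softmax over real logits, phase information survives up to the final squaring, which is the mechanism the paper's interference argument rests on.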
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠CryoNet.Refine introduces a deep learning framework that uses one-step diffusion models to rapidly refine molecular structures in cryo-electron microscopy. The AI system automates and accelerates the traditionally manual and computationally expensive process of fitting atomic models into experimental density maps.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 4
🧠Researchers decoded the internal representations of scGPT, a single-cell foundation model, revealing it organizes genes into interpretable biological coordinate systems rather than opaque features. The model encodes cellular organization patterns including protein localization, interaction networks, and regulatory relationships across its transformer layers.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers have developed SmartChunk, a query-adaptive retrieval framework that improves retrieval-augmented generation (RAG) for document question answering by dynamically adjusting chunk sizes and compression. The system uses a planner to predict the optimal chunk abstraction level per query and a compression module to create efficient embeddings, outperforming existing RAG baselines while reducing costs.
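Query-adaptive chunking can be sketched roughly as follows; the `plan_chunk_size` heuristic below is a toy stand-in for the paper's learned planner, which this summary does not specify:

```python
def chunk(text: str, size: int, overlap: int = 0) -> list[str]:
    """Split text into fixed-size character chunks with optional
    overlap between consecutive chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def plan_chunk_size(query: str) -> int:
    """Hypothetical planner: short, factual-looking questions get
    fine-grained chunks; open-ended requests get coarser context.
    A real system would learn this mapping rather than hard-code it."""
    if query.strip().endswith("?") and len(query.split()) < 8:
        return 64
    return 256
```

The point of the design is that chunk granularity becomes a per-query decision made before retrieval, instead of a single corpus-wide constant.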
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠DS-Serve is a new framework that converts massive text datasets (up to half a trillion tokens) into efficient neural retrieval systems. The framework provides web interfaces and APIs with low latency and supports applications like retrieval-augmented generation (RAG) and training data attribution.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers introduce AOT (Adversarial Opponent Training), a self-play framework that improves Multimodal Large Language Models' robustness by having an AI attacker generate adversarial image manipulations to train a defender model. The method addresses perceptual fragility in MLLMs when processing visually complex scenes, reducing hallucinations through dynamic adversarial training.
AI · Bearish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers analyzed factual accuracy of Chinese web information systems, comparing traditional search engines, standalone LLMs, and AI overviews using 12,161 real-world queries. The study found substantial differences in factual accuracy across systems and estimated potential misinformation exposure for Chinese users.
AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 6
🧠Researchers propose QARMVC, a new AI framework for multi-view clustering that addresses heterogeneous noise in real-world data. The system uses quality scores to identify contamination levels and employs hierarchical learning to improve clustering performance, showing superior results across benchmark datasets.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 2
🧠Researchers developed a Retrieval-Augmented Generation (RAG) assistant for anatomical pathology laboratories to replace outdated static documentation with dynamic, searchable protocol guidance. The system achieved strong performance using biomedical-specific embeddings and could transform healthcare laboratory workflows by providing technicians with accurate, context-grounded answers to protocol queries.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers developed improved neural retriever-reranker pipelines for Retrieval-Augmented Generation (RAG) systems over knowledge graphs in e-commerce applications. The study achieved 20.4% higher Hit@1 and 14.5% higher Mean Reciprocal Rank compared to existing benchmarks, providing a framework for production-ready RAG systems.
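The two reported numbers are standard retrieval measures; a small sketch of how Hit@k and Mean Reciprocal Rank are computed over a batch of queries:

```python
def hit_at_k(ranked: list[str], gold: str, k: int = 1) -> float:
    """1.0 if the gold item appears in the top-k results, else 0.0."""
    return float(gold in ranked[:k])

def mean_reciprocal_rank(batches: list[tuple[list[str], str]]) -> float:
    """batches: list of (ranked_ids, gold_id) pairs.  MRR averages
    1/rank of the first correct result (0 if it never appears)."""
    total = 0.0
    for ranked, gold in batches:
        for i, doc in enumerate(ranked, start=1):
            if doc == gold:
                total += 1.0 / i
                break
    return total / len(batches)
```

Hit@1 only rewards a correct top result, while MRR gives partial credit for near-misses, which is why papers usually report both.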
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠ColoDiff is a new AI framework that uses diffusion models to generate high-quality colonoscopy videos for medical training and diagnosis. The system addresses data scarcity in medical imaging by creating synthetic videos with temporal consistency and precise clinical attribute control, achieving 90% faster generation through optimized sampling.
AI · Neutral · arXiv – CS AI · Feb 27 · 5/10 · 6
🧠Researchers have developed Taxoria, a new taxonomy enrichment pipeline that uses Large Language Models to enhance existing taxonomies by proposing, validating, and integrating new nodes. The system addresses limitations in current taxonomies such as limited coverage and outdated information while including hallucination mitigation and provenance tracking.
AI · Bearish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers evaluated prompt injection and jailbreak vulnerabilities across multiple open-source LLMs including Phi, Mistral, DeepSeek-R1, Llama 3.2, Qwen, and Gemma. The study found significant behavioral variations across models and that lightweight defense mechanisms can be consistently bypassed by long, reasoning-heavy prompts.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers introduce Duel-Evolve, a new optimization algorithm that improves LLM performance at test time without requiring external rewards or labels. The method uses self-generated pairwise comparisons and achieved 20 percentage points higher accuracy on MathBench and 12 percentage points improvement on LiveCodeBench.
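Selection via self-generated pairwise comparisons can be sketched as a round-robin tournament over candidate answers. The `judge` callable below stands in for an LLM self-comparison call and is an assumption for illustration, not the paper's implementation:

```python
from itertools import combinations

def select_by_duels(candidates: list[str], judge) -> str:
    """Run pairwise 'duels' between all candidate answers.  `judge`
    takes two candidates and returns the preferred one; the candidate
    with the most wins is selected, so no external reward or label
    is needed at test time."""
    wins = {i: 0 for i in range(len(candidates))}
    for i, j in combinations(range(len(candidates)), 2):
        winner = judge(candidates[i], candidates[j])
        wins[i if winner == candidates[i] else j] += 1
    return candidates[max(wins, key=wins.get)]
```

Relative judgments ("which of these two is better?") are often easier for a model to make reliably than absolute scoring, which is the intuition behind comparison-based test-time optimization.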
AI · Neutral · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers identified stochasticity (variability) as a critical barrier to deploying Deep Research Agents in real-world applications like financial decision-making and medical analysis. The study proposes mitigation strategies that reduce output variance by 22% while maintaining research quality, addressing a key obstacle for enterprise AI agent adoption.
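One generic variance-reduction strategy in this space (illustrative only; the summary does not specify the paper's mitigations) is self-consistency-style majority voting over repeated agent runs:

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Run the agent several times and return the most common final
    answer, trading compute for lower output variance."""
    return Counter(answers).most_common(1)[0][0]
```

For high-stakes settings like the financial and medical use cases mentioned above, the vote margin can also serve as a cheap confidence signal.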
AI · Neutral · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers developed ReCoN-Ipsundrum, an AI agent architecture designed to exhibit consciousness-like behaviors through recurrent persistence loops and affect-coupled control mechanisms. The study demonstrates how engineered systems can display preference stability, exploratory scanning, and sustained caution behaviors that mimic aspects of conscious experience.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers have introduced ESAA (Event Sourcing for Autonomous Agents), a new architecture that improves LLM-based autonomous agents by separating cognitive intention from state mutation using structured JSON events and deterministic orchestration. The system addresses key limitations like context degradation and execution reliability, with successful validation through multi-agent case studies using various LLMs including Claude Sonnet and GPT-5.
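The event-sourcing pattern ESAA builds on can be sketched in a few lines: agent actions append structured JSON events to an immutable log, and state is never mutated directly but rebuilt deterministically by replaying the log. The event types below are hypothetical, for illustration only:

```python
import json
from dataclasses import dataclass, field

@dataclass
class EventLog:
    """Append-only log of structured JSON events.  Replaying the
    log always reproduces the same state, which gives the
    deterministic orchestration the architecture relies on."""
    events: list = field(default_factory=list)

    def append(self, event_type: str, payload: dict) -> None:
        # Events are serialized JSON, never in-place state edits.
        self.events.append(json.dumps({"type": event_type,
                                       "payload": payload}))

    def replay(self) -> dict:
        """Fold the event stream into current state."""
        state = {"tasks": {}}
        for raw in self.events:
            ev = json.loads(raw)
            if ev["type"] == "task_created":
                state["tasks"][ev["payload"]["id"]] = "open"
            elif ev["type"] == "task_done":
                state["tasks"][ev["payload"]["id"]] = "done"
        return state
```

Because the log is the single source of truth, an agent whose context has degraded can recover its working state by replay rather than by trusting its own possibly stale memory.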
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers have developed PATRA, a new AI model that improves time series question answering by better understanding patterns like trends and seasonality. The model addresses limitations in existing LLM approaches that treat time series data as simple text or images, introducing pattern-aware mechanisms and balanced learning across tasks of varying difficulty.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers propose a new approach to generalized planning that learns explicit transition models rather than directly predicting action sequences. This method achieves better out-of-distribution performance with fewer training instances and smaller models compared to Transformer-based planners like PlanGPT.
AI · Neutral · arXiv – CS AI · Feb 27 · 6/10 · 3
🧠Researchers developed CXReasonAgent, a diagnostic AI agent that combines large language models with clinical diagnostic tools to provide evidence-based chest X-ray analysis. The system addresses limitations of current vision-language models that generate plausible but ungrounded medical diagnoses, introducing a new benchmark with 1,946 diagnostic dialogues.