y0news
🧠 AI

13,306 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 5/10
Make It Hard to Hear, Easy to Learn: Long-Form Bengali ASR and Speaker Diarization via Extreme Augmentation and Perfect Alignment

Researchers developed Lipi-Ghor-882, an 882-hour Bengali speech dataset, and demonstrated that targeted fine-tuning with synthetic acoustic degradation significantly improves automatic speech recognition for long-form Bengali audio. Their dual pipeline achieved a 0.019 Real-Time Factor, establishing new benchmarks for low-resource speech processing.
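The "extreme augmentation" in the title refers to fine-tuning on synthetically degraded audio. The paper's exact degradation pipeline isn't detailed in this summary; below is a minimal additive-noise sketch of the idea, where the function name and parameters are illustrative, not the authors':

```python
import numpy as np

def degrade(wave: np.ndarray, snr_db: float, rng: np.random.Generator) -> np.ndarray:
    """Add white noise to a waveform at a target signal-to-noise ratio (dB)."""
    signal_power = np.mean(wave ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=wave.shape)
    return wave + noise

rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
clean = np.sin(2 * np.pi * 440.0 * t)   # 1 s of a 440 Hz tone at 16 kHz
noisy = degrade(clean, snr_db=5.0, rng=rng)
```

A real pipeline would also mix in reverberation, codec artifacts, and competing speech; additive noise at a controlled SNR is just the simplest member of that family.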

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
MoDora: Tree-Based Semi-Structured Document Analysis System

Researchers introduce MoDora, an AI-powered system that uses tree-based analysis to understand and answer questions about semi-structured documents containing mixed data elements like tables, charts, and text. The system addresses challenges in processing fragmented OCR data and hierarchical document structures, achieving 5.97%-61.07% accuracy improvements over existing baselines.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
Exploratory Memory-Augmented LLM Agent via Hybrid On- and Off-Policy Optimization

Researchers propose EMPO², a new hybrid reinforcement learning framework that improves exploration capabilities for large language model agents by combining memory augmentation with on- and off-policy optimization. The framework achieves significant performance improvements of 128.6% on ScienceWorld and 11.3% on WebShop compared to existing methods, while demonstrating superior adaptability to new tasks without requiring parameter updates.

🧠 AI · Bearish · arXiv – CS AI · Feb 27 · 6/10
Moral Preferences of LLMs Under Directed Contextual Influence

A new study reveals that large language models' moral decision-making can be significantly influenced by contextual cues in prompts, even when the models claim neutrality. The research shows that LLMs exhibit systematic bias when given directed contextual influences in moral dilemma scenarios, challenging assumptions about AI moral consistency.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
Hierarchy-of-Groups Policy Optimization for Long-Horizon Agentic Tasks

Researchers have developed Hierarchy-of-Groups Policy Optimization (HGPO), a new reinforcement learning method that improves AI agents' performance on long-horizon tasks by addressing context inconsistency issues in stepwise advantage estimation. The method shows significant improvements over existing approaches when tested on challenging agentic tasks using Qwen2.5 models.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
TCM-DiffRAG: Personalized Syndrome Differentiation Reasoning Method for Traditional Chinese Medicine based on Knowledge Graph and Chain of Thought

Researchers developed TCM-DiffRAG, a novel AI framework that combines knowledge graphs with chain-of-thought reasoning to improve large language models' performance in Traditional Chinese Medicine diagnosis. The system significantly outperformed standard LLMs and other RAG methods in personalized medical reasoning tasks.

🧠 AI · Neutral · arXiv – CS AI · Feb 27 · 6/10
Probing for Knowledge Attribution in Large Language Models

Researchers developed a method to identify whether large language model outputs come from user prompts or internal training data, addressing the problem of AI hallucinations. Their linear classifier probe achieved up to 96% accuracy in determining knowledge sources, with attribution mismatches increasing error rates by up to 70%.
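The summary describes a linear classifier probe over model internals. On synthetic stand-in features (a real probe would read LLM hidden states), the idea looks roughly like this; names, dimensions, and the class-mean shift are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n = 32, 400

# Synthetic stand-in for hidden states: activations whose mean shifts
# depending on whether the "knowledge" came from the prompt (label 1)
# or from parametric memory (label 0).
labels = rng.integers(0, 2, size=n)
states = rng.normal(0.0, 1.0, size=(n, dim)) + labels[:, None] * 1.5

# Logistic-regression probe trained with plain gradient descent.
w, b = np.zeros(dim), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(states @ w + b)))  # predicted P(label = 1)
    grad = p - labels
    w -= 0.1 * (states.T @ grad) / n
    b -= 0.1 * grad.mean()

accuracy = np.mean((states @ w + b > 0) == labels)
```

The probe is deliberately linear: if a single hyperplane in activation space separates prompt-derived from memory-derived outputs, the attribution signal is encoded explicitly rather than needing a deep decoder.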

🧠 AI · Neutral · arXiv – CS AI · Feb 27 · 5/10
QSIM: Mitigating Overestimation in Multi-Agent Reinforcement Learning via Action Similarity Weighted Q-Learning

Researchers propose QSIM, a new framework that addresses systematic Q-value overestimation in multi-agent reinforcement learning by using action similarity weighted Q-learning instead of traditional greedy approaches. The method demonstrates improved performance and stability across various value decomposition algorithms through similarity-weighted target calculations.
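QSIM's core move, as summarized, is to replace the greedy max in the Q-learning target with a similarity-weighted average over candidate actions. A toy sketch of that idea; the action features, cosine similarity measure, and temperature here are assumptions, not the paper's exact formulation:

```python
import numpy as np

def similarity_weighted_target(q_values: np.ndarray, actions: np.ndarray,
                               tau: float = 1.0) -> float:
    """Soften the greedy max target with a similarity-weighted average.

    q_values: Q-estimates for each candidate action.
    actions:  feature vectors for the candidate actions.
    Weights favour actions similar to the greedy one, damping the max
    operator that drives systematic overestimation.
    """
    greedy = actions[np.argmax(q_values)]
    # Cosine similarity of each candidate action to the greedy action.
    sims = actions @ greedy / (
        np.linalg.norm(actions, axis=1) * np.linalg.norm(greedy) + 1e-8
    )
    weights = np.exp(sims / tau)
    weights /= weights.sum()
    return float(weights @ q_values)

q = np.array([1.0, 2.0, 5.0])
acts = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
target = similarity_weighted_target(q, acts)  # strictly below q.max()
```

Because every candidate contributes in proportion to its similarity, a single noisily inflated Q-value can no longer dominate the target on its own.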

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
Test-Time Scaling with Diffusion Language Models via Reward-Guided Stitching

Researchers developed a new framework called 'Stitching Noisy Diffusion Thoughts' that improves AI reasoning by combining the best parts of multiple solution attempts rather than just selecting complete answers. The method achieves up to 23.8% accuracy improvement on math and coding tasks while reducing computation time by 1.8x compared to existing approaches.
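The stitching idea, stripped of the diffusion machinery: score each segment of several solution attempts with a reward model and keep the best-scoring segment at each position. A toy illustration only; the segments and rewards below are made up, and the actual method stitches noisy intermediate diffusion states rather than final text:

```python
# Two complete solution attempts, each split into three segments,
# with a (hypothetical) per-segment reward for each.
attempts = [
    ["setup A", "derivation A", "answer A"],
    ["setup B", "derivation B", "answer B"],
]
rewards = [
    [0.9, 0.2, 0.4],
    [0.3, 0.8, 0.7],
]

# At each step, keep the segment from whichever attempt scored highest.
stitched = [
    attempts[max(range(len(attempts)), key=lambda i: rewards[i][step])][step]
    for step in range(len(attempts[0]))
]
# Mixes the strong setup from attempt A with the strong finish from B.
```

The payoff over best-of-n selection is that no single attempt needs to be good everywhere, which is where the reported accuracy gains come from.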

🧠 AI · Neutral · arXiv – CS AI · Feb 27 · 5/10
Towards Simulating Social Media Users with LLMs: Evaluating the Operational Validity of Conditioned Comment Prediction

Researchers introduced Conditioned Comment Prediction (CCP) to evaluate how well Large Language Models can simulate social media user behavior by predicting user comments. The study found that supervised fine-tuning improves text structure but degrades semantic accuracy, and that behavioral histories are more effective than descriptive personas for user simulation.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
SoPE: Spherical Coordinate-Based Positional Embedding for Enhancing Spatial Perception of 3D LVLMs

Researchers introduce SoPE (Spherical Coordinate-based Positional Embedding), a new method that enhances 3D Large Vision-Language Models by mapping point-cloud data into spherical coordinate space. This approach overcomes limitations of existing Rotary Position Embedding (RoPE) by better preserving spatial structures and directional variations in 3D multimodal understanding.
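The spherical mapping at the heart of SoPE is plain coordinate conversion; the positional embedding built on top of it is not shown here. A minimal sketch:

```python
import numpy as np

def to_spherical(points: np.ndarray) -> np.ndarray:
    """Map Cartesian point-cloud coordinates (x, y, z) to spherical
    (r, theta, phi): radius, polar angle from +z, azimuth in the xy-plane."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    phi = np.arctan2(y, x)
    return np.stack([r, theta, phi], axis=1)

pts = np.array([[1.0, 0.0, 0.0],   # on the x-axis: r=1, theta=pi/2, phi=0
                [0.0, 0.0, 2.0]])  # on the z-axis: r=2, theta=0,    phi=0
sph = to_spherical(pts)
```

The appeal for positional encoding is that distance from the viewpoint and viewing direction become separate axes, rather than being entangled across x, y, z as in Cartesian RoPE variants.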

🧠 AI · Neutral · arXiv – CS AI · Feb 27 · 5/10
Same Words, Different Judgments: Modality Effects on Preference Alignment

Researchers conducted a cross-modal study comparing human preference annotations between text and audio formats for AI alignment. The study found that while audio preferences are as reliable as text, different modalities lead to different judgment patterns, with synthetic ratings showing promise as replacements for human annotations.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
ViCLIP-OT: The First Foundation Vision-Language Model for Vietnamese Image-Text Retrieval with Optimal Transport

Researchers introduced ViCLIP-OT, the first foundation vision-language model specifically designed for Vietnamese image-text retrieval. The model integrates CLIP-style contrastive learning with Similarity-Graph Regularized Optimal Transport (SIGROT) loss, achieving significant improvements over existing baselines with 67.34% average Recall@K on UIT-OpenViIC benchmark.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
dLLM: Simple Diffusion Language Modeling

Researchers introduce dLLM, an open-source framework that unifies core components of diffusion language modeling including training, inference, and evaluation. The framework enables users to reproduce, finetune, and deploy large diffusion language models like LLaDA and Dream while providing tools to build smaller models from scratch with accessible compute resources.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
ContextRL: Enhancing MLLM's Knowledge Discovery Efficiency with Context-Augmented RL

Researchers propose ContextRL, a new framework that uses context augmentation to improve multimodal large language models' (MLLMs') efficiency in knowledge discovery. The framework enables smaller models like Qwen3-VL-8B to achieve performance comparable to much larger 32B models through enhanced reward modeling and multi-turn sampling strategies.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
BetterScene: 3D Scene Synthesis with Representation-Aligned Generative Model

BetterScene is a new AI approach that enhances 3D scene synthesis and novel view generation from sparse photos by leveraging Stable Video Diffusion with improved regularization techniques. The method integrates 3D Gaussian Splatting and addresses consistency issues in existing diffusion-based solutions through temporal equivariance and vision foundation model alignment.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
pMoE: Prompting Diverse Experts Together Wins More in Visual Adaptation

Researchers developed pMoE, a novel parameter-efficient fine-tuning method that combines multiple expert domains through specialized prompt tokens and dynamic dispatching. Testing across 47 visual adaptation tasks in classification and segmentation shows superior performance with improved computational efficiency compared to existing methods.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
Utilizing LLMs for Industrial Process Automation

This research explores the application of Large Language Models (LLMs) to industrial process automation, focusing on specialized programming languages used in manufacturing contexts. Unlike previous work that concentrated on general-purpose languages like Python, this study aims to integrate LLMs into industrial development workflows to solve real-world automation tasks such as robotic arm programming.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 5/10
Addressing Climate Action Misperceptions with Generative AI

A study of 1,201 climate-concerned individuals found that personalized AI conversations using climate-equipped large language models significantly improved understanding of climate action impacts and increased intentions to adopt high-impact behaviors. The personalized climate LLM outperformed web searches, unspecialized LLMs, and control groups in motivating behavior change through tailored guidance.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
Stable Adaptive Thinking via Advantage Shaping and Length-Aware Gradient Regulation

Researchers developed a two-stage framework to optimize large reasoning models, reducing overthinking on simple queries while maintaining accuracy on complex problems. The approach achieved up to 3.7 accuracy point improvements while reducing token generation by over 40% through hybrid fine-tuning and adaptive reinforcement learning techniques.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 6/10
Reinforcing Real-world Service Agents: Balancing Utility and Cost in Task-oriented Dialogue

Researchers introduce InteractCS-RL, a new reinforcement learning framework that helps AI agents balance empathetic communication with cost-effective decision-making in task-oriented dialogue. The system uses a multi-granularity approach with persona-driven user interactions and cost-aware policy optimization to achieve better performance across business scenarios.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 5/10
Quality-Aware Robust Multi-View Clustering for Heterogeneous Observation Noise

Researchers propose QARMVC, a new AI framework for multi-view clustering that addresses heterogeneous noise in real-world data. The system uses quality scores to identify contamination levels and employs hierarchical learning to improve clustering performance, showing superior results across benchmark datasets.
