y0news
🧠 AI

9,287 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7

ECHO: Encoding Communities via High-order Operators

Researchers introduce ECHO, a Graph Neural Network architecture for community detection in large networks that overcomes the computational and memory bottlenecks of prior methods. The system processes networks with over 1.6 million nodes and 30 million edges in minutes, achieving throughputs exceeding 2,800 nodes per second.

AI · Bearish · arXiv – CS AI · Feb 27 · 6/10 · 5

Moral Preferences of LLMs Under Directed Contextual Influence

A new study reveals that Large Language Models' moral decision-making can be significantly influenced by contextual cues in prompts, even when the models claim neutrality. LLMs exhibit systematic bias when given directed contextual influences in moral-dilemma scenarios, challenging assumptions about AI moral consistency.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 3

SignVLA: A Gloss-Free Vision-Language-Action Framework for Real-Time Sign Language-Guided Robotic Manipulation

Researchers have developed SignVLA, the first sign language-driven Vision-Language-Action framework for human-robot interaction that directly translates sign gestures into robotic commands without requiring intermediate gloss annotations. The system currently focuses on real-time alphabet-level finger-spelling for robotic control and is designed to support future expansion to word and sentence-level understanding.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7

GetBatch: Distributed Multi-Object Retrieval for ML Data Loading

Researchers introduce GetBatch, a new object store API that optimizes machine learning data loading by replacing thousands of individual GET requests with a single batch operation. The system achieves up to 15x throughput improvement for small objects and reduces batch retrieval latency by 2x in production ML training workloads.
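
The batching idea is easy to sketch (the store and method names below are illustrative, not the paper's actual API): one round trip returns many small objects instead of one GET per object.

```python
class ObjectStore:
    """Toy in-memory store standing in for a remote object store."""

    def __init__(self, objects):
        self._objects = dict(objects)
        self.requests = 0  # round trips issued

    def get(self, key):
        self.requests += 1
        return self._objects[key]

    def get_batch(self, keys):
        # One request retrieves every key in the batch.
        self.requests += 1
        return {k: self._objects[k] for k in keys}


store = ObjectStore({f"shard/{i}": bytes([i]) for i in range(256)})
keys = [f"shard/{i}" for i in range(256)]

# Per-object GETs: 256 round trips.
_ = [store.get(k) for k in keys]
per_object_requests = store.requests

store.requests = 0
# Batched retrieval: one round trip for the whole batch.
batch = store.get_batch(keys)
print(per_object_requests, store.requests, len(batch))  # 256 1 256
```

For small objects, the saving is dominated by per-request overhead, which is why the reported throughput gain concentrates there.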

AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7

Decoder-based Sense Knowledge Distillation

Researchers have developed Decoder-based Sense Knowledge Distillation (DSKD), a new framework that integrates lexical resources into decoder-style large language models during training. The method enhances knowledge distillation performance while enabling generative models to inherit structured semantics without requiring dictionary lookup during inference.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 8

GRAU: Generic Reconfigurable Activation Unit Design for Neural Network Hardware Accelerators

Researchers propose GRAU, a new reconfigurable activation unit design for neural network hardware accelerators that uses piecewise linear fitting with power-of-two slopes. The design reduces LUT consumption by over 90% compared to traditional multi-threshold activators while supporting mixed-precision quantization and nonlinear functions.
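
The underlying trick can be sketched in a few lines (the breakpoint/slope table below is invented for illustration, not taken from the paper): a piecewise-linear curve whose slopes are all powers of two approximates a smooth activation, so each hardware multiply reduces to a bit shift.

```python
import math

# (start, slope) per segment of x >= 0; the function is odd-symmetric.
# Every slope is 2**-k, so slope * span is a shift-add in hardware.
SEGMENTS = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.25), (1.75, 0.0625), (2.75, 0.0)]


def pwl_tanh(x: float) -> float:
    """Piecewise-linear tanh approximation with power-of-two slopes."""
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    y = 0.0
    for i, (start, slope) in enumerate(SEGMENTS):
        end = SEGMENTS[i + 1][0] if i + 1 < len(SEGMENTS) else math.inf
        if x <= start:
            break
        y += slope * (min(x, end) - start)
    return sign * y


print(round(pwl_tanh(1.0), 4), round(math.tanh(1.0), 4))  # 0.75 0.7616
```

Even this crude five-segment table stays within a few percent of tanh; a reconfigurable unit like GRAU would swap in different tables per function and precision.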

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6

UpSkill: Mutual Information Skill Learning for Structured Response Diversity in LLMs

Researchers introduce UpSkill, a new training method that uses Mutual Information Skill Learning to improve large language models' ability to generate diverse correct responses across multiple attempts. The technique shows ~3% improvements in pass@k metrics on mathematical reasoning tasks using models like Llama 3.1-8B and Qwen 2.5-7B without degrading single-attempt accuracy.
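
For readers unfamiliar with the pass@k metric cited here, the standard unbiased estimator (from Chen et al.'s Codex paper, not from UpSkill itself) is:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n attempts of which c are correct,
    passes. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # too few failures to fill a k-sample with all failures
    return 1.0 - comb(n - c, k) / comb(n, k)


print(round(pass_at_k(10, 3, 1), 3))  # 0.3
print(round(pass_at_k(10, 3, 5), 3))  # 0.917
```

The metric rewards diverse correct samples: with the same per-sample accuracy, more distinct successes across attempts raise pass@k, which is exactly the axis UpSkill targets.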

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6

Integrating Machine Learning Ensembles and Large Language Models for Heart Disease Prediction Using Voting Fusion

Researchers developed a hybrid system combining machine learning ensembles with large language models for heart disease prediction, achieving 96.62% accuracy. The study found that traditional ML models (95.78% accuracy) outperformed standalone LLMs (78.9% accuracy), but combining both approaches yielded the best results for clinical decision-support tools.
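
The fusion step can be sketched as weighted soft voting (the probabilities and weights below are invented; the paper's exact scheme may differ):

```python
def fused_prediction(probs, weights):
    """Weighted average of per-model 'disease' probabilities, thresholded."""
    total = sum(weights)
    p = sum(w * pi for w, pi in zip(weights, probs)) / total
    return p, int(p >= 0.5)


# Three ensemble members plus one LLM, with the LLM down-weighted to
# reflect its lower standalone accuracy reported in the study.
probs = [0.92, 0.88, 0.95, 0.60]  # invented example outputs
weights = [1.0, 1.0, 1.0, 0.5]
p, label = fused_prediction(probs, weights)
print(round(p, 3), label)  # 0.871 1
```
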

AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 7

Learning to reconstruct from saturated data: audio declipping and high-dynamic range imaging

Researchers have developed a self-supervised learning method that can reconstruct audio and images from clipped/saturated measurements without requiring ground truth training data. The approach extends self-supervised learning to non-linear inverse problems and performs nearly as well as fully supervised methods while using only clipped measurements for training.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 4

Multi-Dimensional Spectral Geometry of Biological Knowledge in Single-Cell Transformer Representations

Researchers decoded the internal representations of scGPT, a single-cell foundation model, revealing it organizes genes into interpretable biological coordinate systems rather than opaque features. The model encodes cellular organization patterns including protein localization, interaction networks, and regulatory relationships across its transformer layers.

AI · Bearish · arXiv – CS AI · Feb 27 · 6/10 · 7

Analysis of LLMs Against Prompt Injection and Jailbreak Attacks

Researchers evaluated prompt injection and jailbreak vulnerabilities across multiple open-source LLMs including Phi, Mistral, DeepSeek-R1, Llama 3.2, Qwen, and Gemma. The study found significant behavioral variations across models and that lightweight defense mechanisms can be consistently bypassed by long, reasoning-heavy prompts.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 8

Deep Sequence Modeling with Quantum Dynamics: Language as a Wave Function

Researchers introduce a quantum-inspired sequence modeling framework that uses complex-valued wave functions and quantum interference for language processing. The approach shows theoretical advantages over traditional recurrent neural networks by utilizing quantum dynamics and the Born rule for token probability extraction.
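
The Born-rule readout the summary mentions can be sketched directly (the amplitudes below are invented): each token carries a complex amplitude, and its probability is the normalized squared magnitude.

```python
# Complex amplitudes per candidate token (illustrative values).
amps = {"cat": 0.6 + 0.3j, "dog": 0.2 - 0.5j, "eel": 0.1 + 0.1j}

# Born rule: probability proportional to |amplitude|**2, normalized.
sq = {t: abs(a) ** 2 for t, a in amps.items()}
Z = sum(sq.values())
probs = {t: s / Z for t, s in sq.items()}
print({t: round(p, 3) for t, p in probs.items()})
# {'cat': 0.592, 'dog': 0.382, 'eel': 0.026}
```

Because amplitudes are complex, two contributions can cancel before squaring, which is the interference effect the framework exploits and real-valued scores cannot express.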

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 8

Test-Time Scaling with Diffusion Language Models via Reward-Guided Stitching

Researchers developed a new framework called 'Stitching Noisy Diffusion Thoughts' that improves AI reasoning by combining the best parts of multiple solution attempts rather than just selecting complete answers. The method achieves up to 23.8% accuracy improvement on math and coding tasks while reducing computation time by 1.8x compared to existing approaches.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7

AeroDGS: Physically Consistent Dynamic Gaussian Splatting for Single-Sequence Aerial 4D Reconstruction

Researchers have developed AeroDGS, a physics-guided 4D Gaussian splatting framework that enables accurate dynamic scene reconstruction from single-view aerial UAV footage. The system addresses key challenges in monocular aerial reconstruction by incorporating physics-based optimization and geometric constraints to resolve depth ambiguity and improve motion estimation.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5

To Deceive is to Teach? Forging Perceptual Robustness via Adversarial Reinforcement Learning

Researchers introduce AOT (Adversarial Opponent Training), a self-play framework that improves Multimodal Large Language Models' robustness by having an AI attacker generate adversarial image manipulations to train a defender model. The method addresses perceptual fragility in MLLMs when processing visually complex scenes, reducing hallucinations through dynamic adversarial training.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5

Comparative Analysis of Neural Retriever-Reranker Pipelines for Retrieval-Augmented Generation over Knowledge Graphs in E-commerce Applications

Researchers developed improved neural retriever-reranker pipelines for Retrieval-Augmented Generation (RAG) systems over knowledge graphs in e-commerce applications. The study achieved 20.4% higher Hit@1 and 14.5% higher Mean Reciprocal Rank compared to existing benchmarks, providing a framework for production-ready RAG systems.
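
The general two-stage pattern evaluated here can be sketched with toy scorers standing in for the bi-encoder retriever and cross-encoder reranker (all data and scoring functions below are invented):

```python
import math
from collections import Counter


def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity, standing in for a bi-encoder."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0


def rerank_score(query: str, doc: str) -> float:
    # Stand-in for a cross-encoder: rewards exact phrase containment.
    return cosine(query, doc) + (1.0 if query in doc else 0.0)


docs = [
    "usb c charging cable for laptops",
    "wireless charging pad",
    "laptop stand with usb hub",
]
query = "usb c charging cable"

top = sorted(docs, key=lambda d: cosine(query, d), reverse=True)[:2]  # stage 1: cheap retrieval
best = max(top, key=lambda d: rerank_score(query, d))                 # stage 2: expensive rerank
print(best)  # usb c charging cable for laptops
```

The Hit@1 and MRR gains the paper reports come from exactly this division of labor: a cheap retriever prunes the candidate set, and a slower, more accurate scorer reorders only the survivors.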

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6

SmartChunk Retrieval: Query-Aware Chunk Compression with Planning for Efficient Document RAG

Researchers have developed SmartChunk retrieval, a query-adaptive framework that improves retrieval-augmented generation (RAG) systems by dynamically adjusting chunk sizes and compression for document question answering. The system uses a planner to predict optimal chunk abstraction levels and a compression module to create efficient embeddings, outperforming existing RAG baselines while reducing costs.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6

DS SERVE: A Framework for Efficient and Scalable Neural Retrieval

DS-Serve is a new framework that converts massive text datasets (up to half a trillion tokens) into efficient neural retrieval systems. The framework provides web interfaces and APIs with low latency and supports applications like retrieval-augmented generation (RAG) and training data attribution.

AI · Bullish · arXiv – CS AI · Feb 27 · 5/10 · 6

Invariant Transformation and Resampling based Epistemic-Uncertainty Reduction

Researchers propose a new AI inference method that uses invariant transformations and resampling to reduce epistemic uncertainty and improve model accuracy. The approach involves applying multiple transformed versions of an input to a trained AI model and aggregating the outputs for more reliable results.
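
A minimal sketch of the idea, assuming a toy linear model and flip-based invariant transforms (all values invented): predictions are averaged over transformed copies of the input.

```python
W = [0.9, 0.1, 0.5, 0.3]  # imperfect learned weights (invented)


def model(x):
    """Toy fixed linear scorer standing in for a trained network."""
    return sum(wi * xi for wi, xi in zip(W, x))


def predict_tta(x, transforms):
    # Average predictions across invariance-preserving transforms.
    outs = [model(t(x)) for t in transforms]
    return sum(outs) / len(outs)


transforms = [lambda x: x, lambda x: list(reversed(x))]
x = [1.0, 2.0, 3.0, 4.0]
print(round(model(x), 3), round(predict_tta(x, transforms), 3))  # 3.8 4.5
```

Averaging over the transform group makes the ensembled prediction exactly invariant to those transforms (here, flipping the input no longer changes the output), which is one concrete way such aggregation reduces model-side uncertainty.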

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7

Duel-Evolve: Reward-Free Test-Time Scaling via LLM Self-Preferences

Researchers introduce Duel-Evolve, a new optimization algorithm that improves LLM performance at test time without requiring external rewards or labels. The method uses self-generated pairwise comparisons and achieved accuracy gains of 20 percentage points on MathBench and 12 percentage points on LiveCodeBench.
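
The reward-free selection idea can be sketched as a round-robin of pairwise comparisons (the judge heuristic below is a crude stand-in for the LLM's self-preference, and all strings are invented):

```python
from itertools import combinations


def judge(a: str, b: str) -> str:
    # Stand-in self-preference: prefer the answer that shows its work
    # (here, crudely, the one containing more '=' steps).
    return a if a.count("=") >= b.count("=") else b


candidates = [
    "42",
    "6 * 7 = 42",
    "6 * 7 = 42, check: 42 / 7 = 6",
]

# Round-robin: every candidate duels every other; keep the top winner.
wins = {c: 0 for c in candidates}
for a, b in combinations(candidates, 2):
    wins[judge(a, b)] += 1
best = max(candidates, key=lambda c: wins[c])
print(best)  # 6 * 7 = 42, check: 42 / 7 = 6
```

No external reward model or label appears anywhere in the loop; the only signal is which of two candidates the judge prefers, which is what makes the approach reward-free.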

Page 184 of 372