
AI × Crypto News Feed

Real-time AI-curated news from 34,840+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Transformer autoencoder with local attention for sparse and irregular time series with application on risk estimation

Researchers present a Transformer Autoencoder framework with local attention mechanisms designed to detect non-technical losses (electricity theft) in power grids using sparse, irregular time series data. The model demonstrates superior performance in risk estimation for Greek electrical systems compared to existing methods, achieving high recall and precision while effectively handling data collection irregularities.
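
The summary doesn't spell out the attention mechanism, but a minimal sketch of time-windowed local attention over irregularly sampled observations might look like this (the window size and masking rule are illustrative assumptions, not the paper's formulation):

```python
# Time-windowed local attention over irregular timestamps: each
# observation attends only to observations within `window` hours.
import torch

def local_attention(q, k, v, timestamps, window=24.0):
    """q, k, v: (seq, dim); timestamps: (seq,) in hours, irregular."""
    scores = q @ k.T / q.shape[-1] ** 0.5                 # (seq, seq)
    # Mask out pairs of observations farther apart than `window`.
    gap = (timestamps[:, None] - timestamps[None, :]).abs()
    scores = scores.masked_fill(gap > window, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

seq, dim = 16, 32
t = torch.sort(torch.rand(seq) * 168).values              # one irregular week
x = torch.randn(seq, dim)
out = local_attention(x, x, x, t)                         # (16, 32)
```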

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Explainable Knowledge Tracing via Probabilistic Embeddings and Pattern-based Reasoning

Researchers introduce Probabilistic Logical Knowledge Tracing (PLKT), an interpretable AI framework that uses Beta-distributed probabilistic embeddings to model student knowledge states and predict learning performance. Unlike conventional deep learning approaches that rely on opaque deterministic embeddings, PLKT constructs transparent reasoning paths showing how past interactions influence predictions while maintaining superior accuracy compared to existing methods.
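
As a toy illustration of the Beta-distributed knowledge-state idea (the update rule here is a plain pseudo-count scheme, not PLKT's actual mechanism or its reasoning-path construction):

```python
# Mastery of a skill is modeled as Beta(alpha, beta); each observed
# interaction updates the pseudo-counts, and the Beta mean is the
# predicted probability of answering correctly.
from dataclasses import dataclass

@dataclass
class SkillState:
    alpha: float = 1.0   # pseudo-count of successes
    beta: float = 1.0    # pseudo-count of failures

    def p_correct(self) -> float:
        return self.alpha / (self.alpha + self.beta)      # Beta mean

    def observe(self, correct: bool) -> None:
        if correct:
            self.alpha += 1.0
        else:
            self.beta += 1.0

s = SkillState()
for outcome in [True, True, False, True]:
    s.observe(outcome)
print(f"P(next correct) = {s.p_correct():.2f}")           # 0.67
```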

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

DAPE: Dynamic Non-uniform Alignment and Progressive Detail Enhancement Techniques for Improving the Performance of Efficient Visual Language Models

Researchers propose DAPE, a novel framework for visual-language models that uses dynamic, non-uniform alignment between text and image data rather than traditional uniform approaches. The method improves model accuracy across downstream tasks while reducing computational overhead by intelligently matching varying amounts of visual information to text segments based on their information density.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Sink vs. diagonal patterns as mechanisms for attention switch and oversmoothing prevention

Researchers analyze how attention mechanisms in transformers use sinks (special tokens) and diagonal patterns to prevent oversmoothing and enable efficient computation. The study establishes mathematical conditions for when sinks outperform alternatives and proves an equivalence between sinks and hard attention switches, providing a theoretical foundation for design choices in pretrained transformers.
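
A bare numpy sketch of the sink effect: prepending a high-affinity special token lets queries dump attention mass on it instead of smoothing over content tokens (dimensions and values here are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.array([[0.10, 0.20, 0.15],     # query 0 vs content keys
                   [0.05, 0.10, 0.12]])    # query 1 vs content keys
sink_logit = np.full((2, 1), 2.0)          # high affinity to the sink
with_sink = softmax(np.concatenate([sink_logit, scores], axis=1))
print(with_sink[:, 0])   # most mass lands on the sink column
print(softmax(scores))   # without a sink, mass spreads near-uniformly
```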

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Extrusion Segmentation Strategy to improve CAD Reconstruction from Point Cloud

Researchers have developed an end-to-end deep learning model that reconstructs CAD (Computer-Aided Design) models from point cloud data by segmenting objects into individual extrusions. This approach improves the generalization and robustness of AI models for reverse engineering and quality control applications across manufacturing industries.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

LLM Translation of Compiler Intermediate Representation

Researchers introduce IRIS-14B, a 14-billion-parameter LLM fine-tuned to translate compiler intermediate representations between GCC's GIMPLE and LLVM IR, achieving up to 44 percentage points higher accuracy than existing state-of-the-art models. The approach demonstrates how LLMs can function as interoperability layers in hybrid compiler architectures, enabling cross-toolchain workflows without modifying existing compiler infrastructure.
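
A hypothetical usage sketch of the interoperability idea; the checkpoint name and prompt format below are assumptions, not the paper's published interface:

```python
# Prompt a causal LM to translate a GIMPLE statement into LLVM IR.
from transformers import AutoModelForCausalLM, AutoTokenizer

gimple = "int_8 = a_6(D) + b_7(D);"   # toy GIMPLE statement
prompt = f"Translate the following GIMPLE to LLVM IR:\n{gimple}\nLLVM IR:\n"

tok = AutoTokenizer.from_pretrained("iris-14b")           # hypothetical name
model = AutoModelForCausalLM.from_pretrained("iris-14b")  # hypothetical name
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```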

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

CT-IDP: Segmentation-Derived Quantitative Phenotypes for Interpretable Abdominal CT Disease Classification

Researchers developed CT-IDP, a quantitative phenotyping framework that uses organ segmentation and derived descriptors to classify abdominal CT diseases through interpretable logistic regression. The approach achieved superior performance compared to vision-transformer baselines across multiple datasets, demonstrating the value of explainable AI in medical imaging.
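
The interpretability claim rests on the classifier being a plain logistic regression over named phenotypes; a sketch with made-up feature names and stand-in data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["liver_volume_ml", "liver_mean_hu", "spleen_volume_ml"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(features)))      # stand-in phenotypes
y = (X[:, 1] + 0.5 * X[:, 0] > 0).astype(int)  # stand-in labels

clf = LogisticRegression().fit(X, y)
for name, coef in zip(features, clf.coef_[0]):
    print(f"{name}: {coef:+.2f}")              # signed, inspectable weights
```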

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Fitting Is Not Enough: Smoothness in Extremely Quantized LLMs

Researchers demonstrate that extreme quantization of large language models causes degradation beyond numerical precision loss, specifically through reduced smoothness in prediction spaces. They introduce smoothness-preserving techniques in post-training and quantization-aware training that improve generation quality independent of numerical accuracy gains.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Generating Leakage-Free Benchmarks for Robust RAG Evaluation

Researchers introduce SeedRG, a benchmark generation pipeline that addresses knowledge leakage in retrieval-augmented generation (RAG) evaluation by creating novel, structurally similar test instances that cannot be answered from language models' existing parametric memory. The approach tackles the critical problem of benchmark aging, where reused datasets become less effective for evaluation as their content gets absorbed into model training.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Geometrically Constrained Stenosis Editing in Coronary Angiography via Entropic Optimal Transport

Researchers have developed OT-Bridge Editor, an AI method that uses optimal transport theory to synthesize realistic coronary angiography images with artificial stenosis lesions. The technique achieves 27.8% improvement in stenosis detection performance on benchmark datasets, addressing the critical shortage of high-quality medical imaging training data.
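
The entropic-OT primitive the method builds on is standard Sinkhorn iteration; a minimal sketch between two discrete distributions (the paper's image-editing pipeline on top of it is not shown):

```python
import numpy as np

def sinkhorn(a, b, cost, eps=0.1, iters=200):
    K = np.exp(-cost / eps)                    # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]         # transport plan

a = np.ones(5) / 5
b = np.ones(5) / 5
cost = (np.arange(5)[:, None] - np.arange(5)[None, :]) ** 2.0
plan = sinkhorn(a, b, cost)
print(plan.round(3))   # rows/cols sum to a and b; mass stays near diagonal
```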

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

The Grounding Gap: How LLMs Anchor the Meaning of Abstract Concepts Differently from Humans

Researchers studying 21 large language models found a significant 'grounding gap' in how LLMs understand abstract concepts compared to humans. While LLMs rely heavily on word associations, they systematically under-reproduce emotional and internal-state properties, achieving a maximum correlation of r=0.37 versus human-to-human baselines above r=0.9. The findings suggest current models can identify grounding dimensions when explicitly queried but fail to recruit them naturally during free generation.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Compressed Video Aggregator: Content-driven Module for Efficient Micro-Video Recommendation

Researchers propose Compressed Video Aggregator (CVA), a lightweight module that improves micro-video recommendation systems by decoupling video processing from preference learning. The method reduces training time and GPU memory by orders of magnitude while maintaining or improving performance through intelligent frame selection based on video titles.

🧠 AI · Bearish · arXiv – CS AI · 1d ago · 6/10

FraudBench: A Multimodal Benchmark for Detecting AI-Generated Fraudulent Refund Evidence

Researchers introduce FraudBench, a multimodal benchmark dataset designed to detect AI-generated fraudulent refund evidence in e-commerce, food delivery, and travel services. The study reveals that current AI detection systems struggle significantly with claim-conditioned fake-damage detection, with specialized detectors failing to reliably distinguish synthetic fraud from authentic evidence.

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

SimReg: Achieving Higher Performance in the Pretraining via Embedding Similarity Regularization

Researchers introduce SimReg, an embedding similarity regularization technique for large language model pretraining that improves training efficiency by encouraging similar token representations to cluster together while separating different tokens. The approach achieves over 30% faster training convergence and 1% improvement in zero-shot performance across standard benchmarks.
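
In the spirit of the summary, a sketch of an embedding-similarity regularizer that pulls designated similar token pairs together and pushes other tokens apart; the pairing rule and loss form are assumptions, not SimReg's published objective:

```python
import torch
import torch.nn.functional as F

def sim_reg(emb, similar_pairs, margin=0.5):
    """emb: (vocab, dim); similar_pairs: list of (i, j) index tuples."""
    e = F.normalize(emb, dim=-1)
    i, j = zip(*similar_pairs)
    pos = 1.0 - (e[list(i)] * e[list(j)]).sum(-1)         # pull together
    # Push random non-paired tokens at least `margin` apart in cosine.
    k = torch.randint(0, emb.shape[0], (len(similar_pairs),))
    neg = F.relu((e[list(i)] * e[k]).sum(-1) - margin)    # push apart
    return pos.mean() + neg.mean()

emb = torch.randn(100, 64, requires_grad=True)
loss = sim_reg(emb, [(1, 2), (3, 4)])
loss.backward()   # add this term to the pretraining loss
```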

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Curvature-Aware Captioning: Leveraging Geodesic Attention for 3D Scene Understanding

Researchers introduce Curvature-Aware Captioning, a novel framework using non-Euclidean geodesic attention mechanisms to improve 3D scene understanding from point cloud data. The approach combines Oblique and Lorentz space geometries to simultaneously achieve precise object localization and coherent scene descriptions, demonstrating state-of-the-art results on ScanRefer and Nr3D benchmarks.

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

VECTOR-Drive: Tightly Coupled Vision-Language and Trajectory Expert Routing for End-to-End Autonomous Driving

VECTOR-Drive introduces a tightly coupled vision-language-action framework for autonomous driving that balances semantic reasoning with motion planning through expert routing. Built on Qwen2.5-VL-3B, the system achieves 88.91 Driving Score on Bench2Drive by routing vision-language tokens to semantic experts while handling trajectory computation separately, demonstrating advances in multimodal AI for real-world driving tasks.
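
A bare-bones top-1 router over token features illustrates the expert-routing idea; VECTOR-Drive's coupling of semantic and trajectory experts is far more elaborate than this sketch:

```python
import torch
import torch.nn as nn

class Top1Router(nn.Module):
    def __init__(self, dim, n_experts):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))

    def forward(self, tokens):                 # (seq, dim)
        choice = self.gate(tokens).argmax(-1)  # expert id per token
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            mask = choice == e
            out[mask] = expert(tokens[mask])   # each expert sees its tokens
        return out

router = Top1Router(dim=32, n_experts=4)
print(router(torch.randn(10, 32)).shape)       # torch.Size([10, 32])
```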

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

PPU-Bench: Real World Benchmark for Personalized Partial Unlearning in Vision Language Models

Researchers introduce PPU-Bench, a benchmark for testing personalized partial unlearning in multimodal AI models, addressing the challenge of selectively removing sensitive memorized information while preserving model utility. The study reveals significant trade-offs between forgetting target knowledge and retaining non-target facts, proposing Boundary-Aware Optimization as a solution for fine-grained factual control.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Quantile Geometry Regularization for Distributional Reinforcement Learning

Researchers propose RQIQN, a new reinforcement learning method that improves quantile-based distributional RL by addressing distorted distribution estimates through Wasserstein distributionally robust optimization. The approach adds a lightweight correction to quantile targets that prevents distributional collapse while maintaining computational efficiency, demonstrating superior performance on navigation and Atari benchmarks.
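
For context, the standard quantile-regression Huber loss that quantile-based distributional RL builds on (RQIQN's Wasserstein-robust target correction itself is not sketched here):

```python
import torch
import torch.nn.functional as F

def quantile_huber_loss(pred, target, taus, kappa=1.0):
    """pred: (N,) predicted quantiles at levels taus; target: (M,) samples."""
    td = target[None, :] - pred[:, None]                  # (N, M) TD errors
    huber = F.huber_loss(pred[:, None].expand_as(td),
                         target[None, :].expand_as(td),
                         reduction="none", delta=kappa)
    weight = (taus[:, None] - (td < 0).float()).abs()     # asymmetric weight
    return (weight * huber).mean()

taus = (torch.arange(8) + 0.5) / 8
pred = torch.zeros(8, requires_grad=True)
loss = quantile_huber_loss(pred, torch.randn(32), taus)
loss.backward()
```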

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Beyond Accuracy: Evaluating Strategy Diversity in LLM Mathematical Reasoning

Researchers introduce a strategy-level evaluation framework for large language models on mathematical reasoning tasks, revealing a significant gap between high answer accuracy and actual reasoning flexibility. While frontier models achieve 95-100% accuracy on single-solution prompts, they recover substantially fewer problem-solving strategies than human references when asked to generate multiple approaches, with only 39-71% coverage depending on the model and iteration count.
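
The coverage number reduces to a simple set ratio once solutions are tagged with strategies; a sketch with plain string tags (the tagging itself is the hard part and is not shown):

```python
def strategy_coverage(model_solutions, human_strategies):
    """Fraction of human reference strategies recovered by the model."""
    recovered = {s for sol in model_solutions for s in sol["strategies"]}
    return len(recovered & set(human_strategies)) / len(human_strategies)

human = ["induction", "telescoping", "generating-function"]
samples = [{"strategies": ["induction"]},
           {"strategies": ["induction", "telescoping"]}]
print(f"coverage = {strategy_coverage(samples, human):.0%}")  # 67%
```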
