y0news

AI × Crypto News Feed

Real-time AI-curated news from 34,840+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Interactive Critique-Revision Training for Reliable Structured LLM Generation

Researchers propose DPA-GRPO, a novel training method for large language models that improves structured decision-making by using a generator-verifier framework where one model produces outputs and another validates them through safety assurance cases. The method demonstrates improved accuracy on tax calculation benchmarks and addresses the challenge of ensuring LLM outputs are locally correct, globally consistent, and auditable.
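The generator-verifier pattern described above can be illustrated with a toy loop (this is a hypothetical sketch, not the paper's DPA-GRPO training method): one component proposes a structured output, another checks local and global consistency rules, and failed checks trigger regeneration. The tax line-item example and all rule names are invented for illustration.

```python
import random

def generate(rng):
    """Hypothetical generator: proposes a tax line-item breakdown."""
    income = 50_000
    deduction = rng.choice([0, 5_000, 60_000])  # one option yields an invalid case
    taxable = income - deduction
    return {"income": income, "deduction": deduction, "taxable": taxable}

def verify(case):
    """Hypothetical verifier: local arithmetic check plus a global constraint."""
    checks = [
        case["taxable"] == case["income"] - case["deduction"],  # locally correct
        case["taxable"] >= 0,                                   # globally consistent
    ]
    return all(checks)

def generate_until_valid(max_tries=20, seed=0):
    """Regenerate until the verifier accepts, or give up after max_tries."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        case = generate(rng)
        if verify(case):
            return case
    return None

result = generate_until_valid()
```

In a real training setup the verifier's pass/fail signal would feed back into the generator's updates; here it only gates acceptance.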

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

A Quantum Inspired Variational Kernel and Explainable AI Framework for Cross Region Solar and Wind Energy Forecasting

Researchers have developed a hybrid forecasting framework combining classical machine learning, quantum-inspired variational kernels, and generative AI to predict solar and wind energy generation across different geographic regions. The system achieves competitive performance with classical baselines while demonstrating superior ability to distinguish between calm and stormy weather patterns, with potential applications for power grid management and renewable energy optimization.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

A Geometric Perspective on Next-Token Prediction in Large Language Models: Three Emerging Phases

Researchers have developed a geometric framework for understanding how large language models process information across their layers, identifying three distinct phases in next-token prediction: Seeding Multiplexing, Hoisting Overriding, and Focal Convergence. The study reveals that model depth primarily increases capacity for candidate disambiguation rather than adding fundamentally new computational stages.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Done, But Not Sure: Disentangling World Completion from Self-Termination in Embodied Agents

Researchers introduce VIGIL, an evaluation framework that separately measures whether embodied AI agents correctly complete tasks and properly report success, rather than conflating execution failures with commitment failures. Testing across 20 models reveals significant performance gaps in terminal commitment despite similar task execution, highlighting a critical blind spot in current AI agent benchmarking.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Towards Backdoor-Based Ownership Verification for Vision-Language-Action Models

Researchers introduce GuardVLA, a backdoor-based watermarking framework designed to verify ownership of Vision-Language-Action models used in robotic control systems. The technique embeds hidden triggers during training that remain detectable after model release and adaptation, enabling creators to prove intellectual property rights without compromising model performance.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

CT-IDP: Segmentation-Derived Quantitative Phenotypes for Interpretable Abdominal CT Disease Classification

Researchers developed CT-IDP, a quantitative phenotyping framework that uses organ segmentation and derived descriptors to classify abdominal CT diseases through interpretable logistic regression. The approach achieved superior performance compared to vision-transformer baselines across multiple datasets, demonstrating the value of explainable AI in medical imaging.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Research on Security Enhancement Methods for Adversarial Robust Large Language Model Intelligent Agents for Medical Decision-Making Tasks

Researchers developed ARSM-Agent, a security-enhanced framework for medical decision-making AI systems that defends against adversarial attacks through multi-module validation. The system reduces attack success rates to 8.7% while maintaining 91% knowledge consistency, demonstrating significant improvements over existing baseline approaches.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Absurd World: A Simple Yet Powerful Method to Absurdify the Real-world for Probing LLM Reasoning Capabilities

Researchers introduce Absurd World, a benchmarking framework that tests large language models' logical reasoning by creating logically coherent but unrealistic scenarios derived from real-world problems. The framework reveals whether LLMs can reason independently of learned patterns by breaking down real-world models into symbols, actions, sequences, and events, then systematically altering them while preserving underlying logic.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Phase Transitions in Affective Meaning Divergence: The Hidden Drift Before the Break

Researchers formalize 'affective meaning divergence' (AMD)—the divergence in emotional interpretation of shared words between conversation partners—and demonstrate that it undergoes a critical phase transition before conversational breakdown. Using game-theoretic modeling and empirical analysis of 652 conversations, they show that AMD exhibits critical-slowing-down signatures predictive of relationship rupture, outperforming toxicity and sentiment baselines.
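The "critical slowing down" signature mentioned here is a standard early-warning indicator: near a transition, a system recovers more slowly from perturbations, which shows up as rising lag-1 autocorrelation. As an illustrative sketch under that general idea (not the paper's AMD estimator), a simple AR(1) process makes the effect visible:

```python
import random

def lag1_autocorr(xs):
    """Lag-1 autocorrelation, a standard critical-slowing-down indicator."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs)
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    return cov / var

def ar1_series(a, n=2000):
    """AR(1) process x_t = a * x_{t-1} + noise; as a -> 1, recovery slows."""
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

random.seed(1)
stable = lag1_autocorr(ar1_series(0.2))     # far from the transition: low memory
critical = lag1_autocorr(ar1_series(0.95))  # near the transition: autocorrelation rises
```

The prediction logic in the paper would track an indicator like `critical` climbing over the course of a conversation rather than comparing two synthetic series.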

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 5/10

Improving Lexical Difficulty Prediction with Context-Aligned Contrastive Learning and Ridge Ensembling

Researchers propose Context-Aligned Contrastive Regression, a machine learning approach that combines contrastive learning with ridge regression ensembling to improve lexical difficulty prediction across multiple language backgrounds. The method addresses limitations in existing regression-only models by structuring representation spaces to better capture cross-lingual alignment and ordinal difficulty rankings, showing improved performance stability across difficulty levels.
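The ridge-ensembling half of this recipe is straightforward to sketch. Below is a minimal illustration (not the paper's pipeline): closed-form ridge regressors fitted at several penalty strengths over stand-in embedding features, with their predictions averaged. The data is synthetic and all parameter choices are assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def ridge_ensemble_predict(X_train, y_train, X_test, lams=(0.1, 1.0, 10.0)):
    """Average the predictions of ridge models fit with different penalties."""
    preds = [X_test @ ridge_fit(X_train, y_train, lam) for lam in lams]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                              # stand-in for word embeddings
y = X @ rng.normal(size=8) + 0.05 * rng.normal(size=200)   # synthetic difficulty scores
pred = ridge_ensemble_predict(X[:150], y[:150], X[150:])
```

In the paper's setting the features would come from the contrastively structured representation space rather than random vectors.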

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Tracking the Truth: Object-Centric Spatio-Temporal Monitoring for Video Large Language Models

Researchers introduce STEMO-Bench, a benchmark for evaluating video understanding in multimodal large language models (MLLMs), and propose STEMO-Track, a framework that reduces hallucinations by explicitly tracking object identities and states across time. The work addresses a critical limitation in current video AI systems: their inability to persistently monitor objects and temporal relationships in dynamic scenes.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Extrusion Segmentation Strategy to improve CAD Reconstruction from Point Cloud

Researchers have developed an end-to-end deep learning model that reconstructs CAD (Computer-Aided Design) models from point cloud data by segmenting objects into individual extrusions. This approach improves the generalization and robustness of AI models for reverse engineering and quality control applications across manufacturing industries.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Shapley Regression for Rare Disease Diagnosis Support: a case study on APDS

Researchers propose Shapley regression, a game-theoretic machine learning method for diagnosing APDS, a rare genetic immune disorder. The approach combines interpretability with predictive power by modeling symptom interactions while maintaining transparency, validated on both public datasets and a real-world cohort of 222 patients.
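Shapley values are the game-theoretic ingredient here: each feature's contribution is its average marginal effect over all coalitions of the other features. As a self-contained sketch (not the paper's clinical model), exact values can be brute-forced for a small feature count; the linear "diagnostic score" and its weights below are invented for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating every feature coalition.
    f: model on a feature vector; features outside the coalition take `baseline`."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy "diagnostic score": a linear model over three symptom features.
w = [2.0, -1.0, 0.5]
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))
phi = shapley_values(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
```

For a purely additive model the Shapley value of feature i reduces to `w[i] * (x[i] - baseline[i])`; modeling symptom *interactions*, as the paper does, is what makes the attribution non-trivial.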

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

DAPE: Dynamic Non-uniform Alignment and Progressive Detail Enhancement Techniques for Improving the Performance of Efficient Visual Language Models

Researchers propose DAPE, a novel framework for visual-language models that uses dynamic, non-uniform alignment between text and image data rather than traditional uniform approaches. The method improves model accuracy across downstream tasks while reducing computational overhead by intelligently matching varying amounts of visual information to text segments based on their information density.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Reasoning-Aware Training for Time Series Forecasting

Researchers introduce STRIDE, a framework that integrates large language model reasoning into time series foundation models by projecting LLM reasoning into continuous embedding spaces rather than discrete tokens. The approach achieves state-of-the-art forecasting performance while providing interpretable reasoning, addressing the modality gap that previously limited combining LLMs with numerical time series data.

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

TinySSL: Distilled Self-Supervised Pretraining for Sub-Megabyte MCU Models

Researchers introduce CA-DSSL, a new self-supervised learning technique that enables efficient AI model training on microcontrollers with under 500K parameters. The method surpasses existing approaches by 18 percentage points on standard benchmarks while requiring significantly fewer parameters, achieving 94% of supervised learning performance with models deployable in just 378 KB of memory.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 5/10

Multi-Level Graph Attention Network Contrastive Learning for Knowledge-Aware Recommendation

Researchers propose a multi-level graph attention network framework that uses contrastive learning to improve knowledge-graph-based recommendation systems. The approach addresses limitations in existing methods by leveraging multi-view learning and self-supervised techniques to better model user preferences and item representations.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Transformer autoencoder with local attention for sparse and irregular time series with application on risk estimation

Researchers present a Transformer Autoencoder framework with local attention mechanisms designed to detect non-technical losses (electricity theft) in power grids using sparse, irregular time series data. The model demonstrates superior performance in risk estimation for Greek electrical systems compared to existing methods, achieving high recall and precision while effectively handling data collection irregularities.
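"Local attention" typically means each position attends only to a fixed window of neighbors, which is a natural fit for sparse, irregular readings. As a generic illustration (the paper's exact masking scheme is not specified here), such a mask can be built as a banded boolean matrix:

```python
import numpy as np

def local_attention_mask(n, window):
    """Boolean (n, n) mask: position i may attend to j only when |i - j| <= window.
    In a Transformer, disallowed pairs are set to -inf before the softmax."""
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= window

mask = local_attention_mask(6, 1)  # each timestep sees itself and one neighbor each side
```

Restricting attention this way cuts the quadratic cost of full attention and biases the model toward short-range temporal structure.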

🧠 AI · Bullish · arXiv – CS AI · 1d ago · 6/10

Lattice Deduction Transformers

Researchers introduce Lattice Deduction Transformers (LDT), a specialized neural architecture that achieves near-perfect accuracy on constraint-solving puzzles like Sudoku and Mazes while remaining logically sound. The approach demonstrates that smaller models with domain-specific architectures can outperform large language models on reasoning tasks.

🧠 AI · Neutral · arXiv – CS AI · 1d ago · 5/10

Cplus2ASP: Computing Action Language C+ in Answer Set Programming

Cplus2ASP Version 2 is a new system that translates action language C+ into answer set programming, offering significant performance improvements over the Causal Calculator through modern ASP solving techniques. The tool supports incremental execution, external atoms via Lua integration, and extensible translations for other action languages, making it relevant for automated reasoning and planning applications.

Page 428 of 1394