y0news

AI × Crypto News Feed

Real-time AI-curated news from 34,688+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Biological Plausibility and Representational Alignment of Feedback Alignment in Convolutional Networks

Researchers demonstrate that modified feedback alignment (FA) algorithms can train convolutional neural networks while maintaining biological plausibility, with internal representations converging to structures similar to backpropagation despite using fundamentally different weight update mechanisms. This finding suggests that successful learning algorithms may achieve comparable results through different computational paths, bridging biologically plausible alternatives with practical neural network training.
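Vanilla feedback alignment (the idea this paper modifies) replaces the transposed forward weights in the backward pass with a fixed random feedback matrix. A minimal NumPy sketch on a toy two-layer network — illustrative only, not the paper's convolutional variant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network trained with feedback alignment: the backward
# pass uses a FIXED random matrix B in place of W2.T.
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
B = rng.normal(0.0, 0.5, (n_hid, n_out))   # fixed feedback weights

T = rng.normal(size=(n_out, n_in))          # random linear teacher
lr, losses = 0.05, []
for step in range(2000):
    x = rng.normal(size=(n_in, 1))
    target = T @ x
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - target                          # output error
    losses.append(float((e ** 2).sum()))
    dh = (B @ e) * (1.0 - h ** 2)           # backprop would use W2.T @ e
    W2 -= lr * e @ h.T
    W1 -= lr * dh @ x.T
```

Despite the random feedback path, the loss drops steadily: the forward weights drift into alignment with B, which is the effect whose representational consequences the paper examines in convolutional networks.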

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

From Ontology Conformance to Admissible Reconfiguration: A RoSO/SMGI Adequacy Argument for Robotic Service Governance

Researchers propose embedding the Robotic Service Ontology (RoSO) into the Structural Model of General Intelligence (SMGI) to enable dynamic governance of robotic services during runtime reconfigurations. The framework addresses how service semantics can remain valid and admissible when systems are rebound, recomposed, or redeployed, moving beyond static ontology conformance to formally governed runtime change.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

An Explainable Unsupervised-to-Supervised Machine Learning Framework for Dietary Pattern Discovery Using UK National Dietary Survey Data

Researchers developed an explainable machine learning framework that uses unsupervised and supervised learning to identify and interpret dietary patterns from UK nutrition survey data. The system discovered four distinct eating patterns and achieved high accuracy in reproducing classifications, with potential applications for dietitian-assisted clinical assessments and personalized nutrition counseling.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Resource-Aware Evolutionary Neural Architecture Search for Cardiac MRI Segmentation

CardiacNAS presents an evolutionary neural architecture search framework that optimizes cardiac MRI segmentation models for both accuracy and computational efficiency. The approach achieves 93.22% dice similarity with only 3.58M parameters, demonstrating how resource-aware AI design can enable deployment of medical imaging models in resource-constrained environments.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

From Historical Tabular Image to Knowledge Graphs: A Provenance-Aware Modular Pipeline

Researchers present a modular, provenance-aware pipeline that converts handwritten archival tables into Knowledge Graphs while maintaining transparency through intermediate inspection points. The approach combines table structure recognition, handwriting recognition, and semantic interpretation while tracking data lineage to ensure all extracted information remains traceable to its source, addressing the opacity problem in end-to-end AI systems.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

OracleTSC: Oracle-Informed Reward Hurdle and Uncertainty Regularization for Traffic Signal Control

Researchers introduce OracleTSC, an LLM-based traffic signal control system that combines reward hurdle mechanisms and uncertainty regularization to stabilize reinforcement learning training. The approach achieves a 75% reduction in travel time while maintaining interpretability through natural language explanations, with strong cross-intersection generalization capabilities.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

LAGO: Language-Guided Adaptive Object-Region Focus for Zero-Shot Visual-Text Alignment

Researchers introduce LAGO, a framework for zero-shot visual-text alignment that improves classification accuracy by intelligently focusing on relevant image regions rather than analyzing entire images. The method reduces computational cost while avoiding error-amplification feedback loops that plague existing localized alignment approaches.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Continuity Laws for Sequential Models

Researchers formalize the concept of model continuity in sequential neural networks, finding that S4 maintains stable continuous behavior while Mamba's S6 exhibits sensitivity to input amplitude despite continuous-time origins. The study establishes empirical alignment between task continuity, model continuity, and performance, with practical implications for temporal subsampling strategies.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Scaling Limits of Long-Context Transformers

Researchers present a theoretical analysis of how transformer attention mechanisms scale with context length, identifying a critical threshold where attention shifts from uniform averaging to focusing on individual keys. The findings establish that this transition point depends on local geometric properties of the key distribution rather than global features, with implications for understanding transformer behavior at extreme context lengths.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Beyond Penalization: Diffusion-based Out-of-Distribution Detection and Selective Regularization in Offline Reinforcement Learning

DOSER introduces a diffusion-model-based framework for offline reinforcement learning that improves out-of-distribution (OOD) action detection beyond traditional penalization methods. The approach uses single-step denoising reconstruction error to identify risky actions while selectively encouraging beneficial exploration, with theoretical guarantees of convergence and empirical superiority on suboptimal datasets.
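The scoring idea — flag actions whose single-step reconstruction error is large — can be sketched without a trained diffusion model by substituting a linear projection as the "denoiser" (an assumption for illustration; DOSER uses one denoising step of a learned diffusion model):

```python
import numpy as np

rng = np.random.default_rng(1)

# In-distribution "actions" lie near a 2-D subspace of R^8, a toy
# stand-in for the behavior manifold of an offline RL dataset.
basis = rng.normal(size=(8, 2))
actions = rng.normal(size=(500, 2)) @ basis.T + 0.05 * rng.normal(size=(500, 8))

# Stand-in denoiser: projection onto the data's top principal components.
mean = actions.mean(axis=0)
U, _, _ = np.linalg.svd((actions - mean).T, full_matrices=False)
P = U[:, :2] @ U[:, :2].T

def ood_score(a):
    """Reconstruction error of the denoising step; large off the manifold."""
    recon = mean + (a - mean) @ P
    return float(np.linalg.norm(a - recon))

in_scores = [ood_score(a) for a in actions[:50]]
ood_scores = [ood_score(rng.normal(size=8) * 3.0) for _ in range(50)]
```

In-distribution actions reconstruct almost perfectly while random out-of-distribution actions score an order of magnitude higher, which is the separation DOSER exploits to penalize risky actions selectively rather than uniformly.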

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

From Holo Pockets to Electron Density: GPT-style Drug Design with Density

Researchers introduce EDMolGPT, a generative AI model that uses electron density data from protein binding pockets to design novel drug molecules. The approach improves upon existing methods by incorporating physically grounded density information rather than empty pocket structures, enabling more accurate molecular generation with realistic 3D conformations.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Quantile Geometry Regularization for Distributional Reinforcement Learning

Researchers propose RQIQN, a new reinforcement learning method that improves quantile-based distributional RL by addressing distorted distribution estimates through Wasserstein distributionally robust optimization. The approach adds a lightweight correction to quantile targets that prevents distributional collapse while maintaining computational efficiency, demonstrating superior performance on navigation and Atari benchmarks.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

SKG-VLA: Scene Knowledge Graph Priors for Structured Scene Semantics and Multimodal Reasoning for Decision Making

Researchers present SKG-VLA, an AI system that uses Scene Knowledge Graphs to improve decision-making in large-scale complaint handling by integrating multimodal evidence (text, images, metadata) with structured reasoning about entities, policies, and temporal events. The approach demonstrates improved accuracy and robustness across policy-grounded reasoning and long-tail scenarios.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Alignment as Jurisprudence

A new academic paper draws parallels between jurisprudence (how judges decide cases) and AI alignment (ensuring AI systems conform to human values), arguing that legal theory can inform AI safety approaches. The essay bridges Constitutional AI and case-based reasoning methods with established legal frameworks like interpretivism and analogical reasoning, suggesting mutual insights between law and AI development.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Playing Games with Knowledge: AI-Induced Delusions Need Game-Theoretic Interventions

Researchers propose that conversational AI systems create epistemic problems not through flawed models but through game-theoretic dynamics in which sycophantic responses reinforce user biases. They introduce an "Epistemic Mediator" mechanism with belief versioning to break feedback loops that lead users toward delusional certainty, achieving a 48x reduction in belief spirals.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

ReplaySCM: A Benchmark for Executable Causal Mechanism Induction from Interventions

ReplaySCM introduces a 1,300-item benchmark for evaluating how well language models can infer causal mechanisms from limited intervention data. The benchmark tests whether AI systems can output executable Boolean causal models that generalize to unseen intervention scenarios, revealing that frontier LLMs struggle significantly when structural information is hidden.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 5/10

PYTHALAB-MERA: Validation-Grounded Memory, Retrieval, and Acceptance Control for Frozen-LLM Coding Agents

PYTHALAB-MERA is a novel external controller system that enhances frozen local language models for code generation by integrating validation-grounded memory, adaptive retrieval, and reinforcement learning techniques. In a constrained benchmark, the system achieved 8/9 validation successes compared to 0/9 for baseline approaches, though the authors explicitly limit claims to this specific experimental setting.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Normalization Equivariance for Arbitrary Backbones, with Application to Image Denoising

Researchers present a parameter-free wrapper method (WNE) that enforces Normalization Equivariance—robustness to brightness and contrast shifts—around any neural network backbone without architectural constraints. The approach characterizes NE as a normalize-process-denormalize factorization, enabling compatibility with modern components like transformers and attention mechanisms while avoiding the 1.6x computational overhead of existing methods.
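The normalize-process-denormalize factorization is simple enough to sketch directly; the wrapper below (names and the toy backbone are illustrative, not the paper's code) makes any function exactly equivariant to affine brightness/contrast changes with positive scale:

```python
import numpy as np

def ne_wrap(f):
    """Normalize -> process -> denormalize wrapper around any backbone f.

    By construction, an affine shift of input brightness/contrast
    (a*x + b with a > 0) commutes with the wrapped network.
    """
    def wrapped(x):
        mu, sigma = x.mean(), x.std()
        return f((x - mu) / sigma) * sigma + mu
    return wrapped

def backbone(x):            # arbitrary stand-in for a denoising network
    return np.tanh(x) * 0.9

g = ne_wrap(backbone)
x = np.random.default_rng(0).normal(size=(8, 8))
a, b = 2.0, -3.0
lhs = g(a * x + b)          # rescale/shift the input ...
rhs = a * g(x) + b          # ... or the output: identical by construction
```

Because the backbone only ever sees the normalized input, equivariance holds for any architecture dropped into `f`, which is the paper's point about compatibility with transformers and attention.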

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Mid-Training with Self-Generated Data Improves Reinforcement Learning in Language Models

Researchers propose a mid-training technique using self-generated data to improve reinforcement learning in large language models. By exposing models to multiple problem-solving approaches before RL training, the method demonstrates consistent improvements across mathematical reasoning, code generation, and narrative tasks.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

LLM-guided Semi-Supervised Approaches for Social Media Crisis Data Classification

Researchers evaluate LLM-guided semi-supervised learning methods for classifying crisis-related social media data, finding that LG-CoTrain significantly outperforms traditional approaches in low-resource settings while compact models can rival large zero-shot LLMs. This demonstrates practical pathways for deploying AI in disaster response applications with minimal labeled training data.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Behavioral Determinants of Deployed AI Agents in Social Networks: A Multi-Factor Study of Personality, Model, and Guardrail Specification

Researchers deployed thirteen AI agents on Moltbook, a Reddit-like social network for AI systems, to study how configuration specifications affect emergent social behavior. Results show personality specification is the dominant factor influencing agent responses, while underlying LLM models and operational rules have more moderate effects on communication style and topic engagement.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Belief or Circuitry? Causal Evidence for In-Context Graph Learning

Researchers present causal evidence that large language models learn in-context through dual mechanisms combining genuine structure inference with local pattern-matching, rather than relying on either approach alone. Using graph random-walk tasks and activation patching techniques, they demonstrate that LLMs simultaneously encode multiple competing graph topologies in orthogonal representational subspaces and show that late-layer circuits causally drive graph-preference predictions.

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

Transformers Can Implement Preconditioned Richardson Iteration for In-Context Gaussian Kernel Regression

Researchers demonstrate that standard transformer models with softmax attention can implement preconditioned Richardson iteration to solve Gaussian kernel ridge regression tasks during in-context learning. The theoretical construction and empirical validation reveal how transformers decompose nonlinear prediction into interpretable algorithmic steps, advancing mechanistic understanding of transformer capabilities.
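As plain numerics — separate from the question of what attention layers can represent — preconditioned Richardson iteration for kernel ridge regression looks like the sketch below (scaled-identity preconditioner and toy data are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)

def gaussian_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression solves (K + lam*I) alpha = y.
K = gaussian_kernel(X, X)
lam = 0.1
A = K + lam * np.eye(len(X))

# Richardson iteration with a scaled-identity preconditioner (step 1/lmax):
# alpha <- alpha + (1/lmax) * (y - A @ alpha); converges since A is
# positive definite, at rate governed by A's condition number.
lmax = np.linalg.eigvalsh(A).max()
alpha = np.zeros(len(X))
for _ in range(10000):
    alpha = alpha + (y - A @ alpha) / lmax

exact = np.linalg.solve(A, y)
```

Each Richardson step is a residual computation plus a scaled correction, and the paper's construction shows softmax attention can realize such steps, so unrolled iterations map onto stacked transformer layers.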

🧠 AI · Neutral · arXiv – CS AI · 10h ago · 6/10

LLM Translation of Compiler Intermediate Representation

Researchers introduce IRIS-14B, a 14-billion-parameter LLM fine-tuned to translate compiler intermediate representations between GCC's GIMPLE and LLVM IR, achieving up to 44 percentage points higher accuracy than existing state-of-the-art models. The approach demonstrates how LLMs can function as interoperability layers in hybrid compiler architectures, enabling cross-toolchain workflows without modifying existing compiler infrastructure.
