y0news

#neural-networks News & Analysis

358 articles tagged with #neural-networks. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Feb 27 · 4/10

Positional-aware Spatio-Temporal Network for Large-Scale Traffic Prediction

Researchers propose PASTN, a lightweight neural network for large-scale traffic flow prediction that uses positional-aware embeddings and temporal attention mechanisms. The model demonstrates improved efficiency and effectiveness across various geographical scales from counties to entire states.

AI · Neutral · arXiv – CS AI · Feb 27 · 4/10

Learning geometry-dependent lead-field operators for forward ECG modeling

Researchers developed a new AI-powered surrogate model for ECG simulations that combines geometry encoding with neural networks to predict lead-field gradients. The method achieves high accuracy (5° mean angular error, <2.5% relative error) while reducing computational costs and data requirements compared to traditional full-order models.

AI · Neutral · arXiv – CS AI · Feb 27 · 4/10

Scattering Transform for Auditory Attention Decoding

Researchers propose using scattering transform as a preprocessing method for EEG-based auditory attention decoding to solve the cocktail party problem in hearing aids. The two-layer scattering transform showed significant performance improvements on subject-related classification tasks, particularly on the KU Leuven dataset when compared to traditional preprocessing methods.

AI · Neutral · arXiv – CS AI · Feb 27 · 4/10

Model Agreement via Anchoring

Researchers developed a new mathematical technique called 'anchoring' to control model disagreement between machine learning models trained independently. The method provides bounds for reducing disagreement to zero across four common ML algorithms including stacked aggregation, gradient boosting, neural networks, and regression trees.

AI · Neutral · Google Research Blog · Apr 24 · 4/10

Improving brain models with ZAPBench

ZAPBench is introduced as a new benchmarking tool designed to improve brain models in artificial intelligence research. The development represents progress in neuroscience-inspired AI modeling approaches.

AI · Neutral · OpenAI News · Feb 7 · 4/10

Discovering types for entity disambiguation

Researchers have developed an automated system that uses neural networks to disambiguate entities by classifying words into approximately 100 automatically-discovered non-exclusive categories or 'types'. This approach helps determine which specific object or entity a word refers to when multiple interpretations are possible.
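The type-filtering idea can be illustrated with a toy sketch. Everything below (the entity names, the type sets, the scoring rule) is a hypothetical illustration, not the OpenAI system itself: a classifier predicts a probability for each discovered type, and candidates are ranked by the probability mass on their known types.

```python
# Hypothetical illustration of type-based entity disambiguation: a
# classifier (not shown) assigns probabilities over ~100 discovered
# types; each candidate entity is scored by the mass on its own types.

def disambiguate(type_probs, candidates):
    """type_probs: {type: probability} predicted for the mention.
    candidates: {entity: set_of_types}. Returns the best candidate."""
    def score(entity):
        return sum(type_probs.get(t, 0.0) for t in candidates[entity])
    return max(candidates, key=score)

# Toy example: "jaguar" mentioned in an automotive context.
type_probs = {"vehicle": 0.7, "company": 0.2, "animal": 0.1}
candidates = {
    "Jaguar Cars": {"vehicle", "company"},
    "jaguar (cat)": {"animal"},
}
print(disambiguate(type_probs, candidates))  # Jaguar Cars
```

The point of the filter is that type evidence alone often separates candidates before any finer-grained entity scoring is needed.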

AI · Neutral · OpenAI News · Dec 4 · 4/10

Learning sparse neural networks through L₀ regularization

The article discusses L₀ regularization techniques for creating sparse neural networks, which can reduce model complexity and computational requirements. This approach helps optimize neural network architectures by encouraging sparsity during training.
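A rough sketch of how a differentiable L₀-style penalty can work, using hard-concrete stochastic gates (the constants β, γ, ζ below are commonly used defaults, assumed here rather than taken from the article): each weight gets a clipped stochastic gate, and the penalty is the probability that a gate is nonzero.

```python
import numpy as np

# Sketch of hard-concrete gates for L0-style sparsity. Assumed defaults:
# temperature BETA, stretch interval [GAMMA, ZETA]. Each weight gets a
# gate z in [0, 1]; the L0 penalty sums P(gate > 0) over all gates.
BETA, GAMMA, ZETA = 2 / 3, -0.1, 1.1

def sample_gates(log_alpha, rng):
    # Stretched concrete sample, hard-clipped to [0, 1].
    u = rng.uniform(1e-6, 1 - 1e-6, size=log_alpha.shape)
    s = 1 / (1 + np.exp(-(np.log(u) - np.log1p(-u) + log_alpha) / BETA))
    return np.clip(s * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)

def expected_l0(log_alpha):
    # Closed-form P(gate > 0); summing this gives a differentiable
    # surrogate for the L0 norm of the gated weights.
    return 1 / (1 + np.exp(-(log_alpha - BETA * np.log(-GAMMA / ZETA))))

rng = np.random.default_rng(0)
log_alpha = np.array([-4.0, 0.0, 4.0])  # learned per-weight parameters
print(sample_gates(log_alpha, rng))     # gates near 0, mid-range, near 1
print(expected_l0(log_alpha))
```

Because the clipping can land gates exactly at zero, the corresponding weights are pruned at inference while training remains gradient-based.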

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10

Heterophily-Agnostic Hypergraph Neural Networks with Riemannian Local Exchanger

Researchers propose HealHGNN, a novel Hypergraph Neural Network that addresses limitations in traditional networks when dealing with heterophilic hypergraphs. The system uses Riemannian geometry and adaptive local heat exchangers to enable better long-range dependency modeling with linear complexity.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10

A Case Study on Concept Induction for Neuron-Level Interpretability in CNN

Researchers successfully applied a Concept Induction framework for neural network interpretability to the SUN2012 dataset, demonstrating the method's broader applicability beyond the original ADE20K dataset. The study assigns interpretable semantic labels to hidden neurons in CNNs and validates them through statistical testing and web-sourced images.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10

Decoupling Stability and Plasticity for Multi-Modal Test-Time Adaptation

Researchers propose DASP (Decoupling Adaptation for Stability and Plasticity), a novel framework for adapting multi-modal AI models to changing test environments. The method addresses key challenges of negative transfer and catastrophic forgetting by using asymmetric adaptation strategies that treat biased and unbiased modalities differently.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10

Neural Latent Arbitrary Lagrangian-Eulerian Grids for Fluid-Solid Interaction

Researchers have developed Fisale, a new AI framework for modeling complex fluid-solid interactions using neural networks inspired by classical Arbitrary Lagrangian-Eulerian methods. The system addresses limitations in existing deep learning approaches by enabling two-way interactions between fluids and solids with unified geometry-aware embeddings.

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10

Content-Aware Frequency Encoding for Implicit Neural Representations with Fourier-Chebyshev Features

Researchers propose Content-Aware Frequency Encoding (CAFE), a new method for Implicit Neural Representations that addresses spectral bias limitations through adaptive frequency selection. The technique uses parallel linear layers with Hadamard products and extends to CAFE+ with Chebyshev features, demonstrating superior performance across multiple benchmarks.
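For context on the spectral-bias problem CAFE targets, here is the standard non-adaptive baseline it improves on: random Fourier features mapping low-dimensional coordinates to a high-frequency embedding. This is a generic sketch of that baseline only; CAFE's content-aware frequency selection and Chebyshev extension are not reproduced here.

```python
import numpy as np

# Baseline random Fourier features for an implicit neural representation
# input. The scale of B sets the frequency band; a plain MLP on raw
# coordinates would otherwise be biased toward low frequencies.

def fourier_features(coords, B):
    # coords: (N, d) points in [0, 1]^d; B: (d, m) frequency matrix.
    proj = 2 * np.pi * coords @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(2, 64))  # 2-D coords -> 128 features
coords = rng.uniform(size=(5, 2))
feats = fourier_features(coords, B)
print(feats.shape)  # (5, 128)
```

Adaptive schemes like CAFE replace the fixed random matrix B with frequencies selected from the signal content.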

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10

SegReg: Latent Space Regularization for Improved Medical Image Segmentation

Researchers propose SegReg, a latent-space regularization framework for medical image segmentation that improves model generalization and continual learning capabilities. The method operates on U-Net feature maps and demonstrates consistent improvements across prostate, cardiac, and hippocampus segmentation tasks without adding extra parameters.

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10

Flowette: Flow Matching with Graphette Priors for Graph Generation

Researchers propose Flowette, a new AI framework for generating graphs with recurring structural patterns using continuous flow matching and graph neural networks. The model introduces 'graphettes' as probabilistic priors to better capture domain-specific structures like molecular patterns, showing improvements in synthetic and small-molecule generation tasks.
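The flow-matching part of this setup can be sketched generically (this shows plain conditional flow matching with a linear path; Flowette's graphette priors and graph neural network are not reproduced): interpolate a noise sample toward a data sample and regress the model's predicted velocity onto the constant target.

```python
import numpy as np

# Generic conditional flow-matching target on a linear path: x_t moves
# from noise x0 to data x1, and the regression target is u = x1 - x0.
# A trained velocity network would be evaluated at (x_t, t).

def flow_matching_pair(x0, x1, t):
    """Return the interpolated point x_t and the target velocity u."""
    x_t = (1 - t) * x0 + t * x1
    return x_t, x1 - x0

rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 3))  # noise over 4 nodes' feature vectors
x1 = rng.normal(size=(4, 3))  # data sample (node features of a graph)
x_t, u = flow_matching_pair(x0, x1, t=0.5)
print(x_t.shape, u.shape)     # (4, 3) (4, 3)
```

Integrating the learned velocity field from x0 over t in [0, 1] then transports noise to a generated sample.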

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10

Optimizer-Induced Low-Dimensional Drift and Transverse Dynamics in Transformer Training

Researchers analyzed training trajectories in small transformer models, finding that parameter updates organize into a dominant drift direction with transverse dynamics. The study reveals that different optimizers (AdamW vs SGD) create substantially different trajectory geometries, with AdamW developing multi-dimensional structures while SGD produces more linear evolution.

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10

Micro-expression Recognition Based on Dual-branch Feature Extraction and Fusion

Researchers developed a dual-branch neural network for micro-expression recognition that combines residual and Inception networks with parallel attention mechanisms. The method achieved 74.67% accuracy on the CASME II dataset, significantly outperforming existing approaches like LBP-TOP by over 11%.

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10

Intrinsic Lorentz Neural Network

Researchers propose the Intrinsic Lorentz Neural Network (ILNN), a fully intrinsic hyperbolic architecture that performs all computations within the Lorentz model for better handling of hierarchical data structures. The network introduces novel components including point-to-hyperplane layers and GyroLBN batch normalization, achieving state-of-the-art performance on CIFAR and genomic benchmarks while outperforming Euclidean baselines.
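The geometry underlying such architectures can be sketched briefly. This shows only the standard Lorentz (hyperboloid) model of hyperbolic space, not ILNN's layers: points live on the hyperboloid where the Minkowski inner product of a point with itself equals -1.

```python
import numpy as np

# The Lorentz model of hyperbolic space: points x = (x0, x1, ..., xn)
# satisfy <x, x>_L = -1 with x0 > 0. Hyperbolic distance between two
# points is arccosh(-<x, y>_L).

def lorentz_inner(x, y):
    return -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)

def lift(v):
    """Map a Euclidean vector v onto the hyperboloid via x0 = sqrt(1 + |v|^2)."""
    x0 = np.sqrt(1.0 + np.sum(v * v, axis=-1, keepdims=True))
    return np.concatenate([x0, v], axis=-1)

v = np.array([0.3, -1.2])
x = lift(v)
print(lorentz_inner(x, x))  # -1.0
```

Staying on this manifold for every operation, rather than mapping back and forth to Euclidean space, is what "fully intrinsic" refers to.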

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10

Less is more -- the Dispatcher/Executor principle for multi-task Reinforcement Learning

Researchers propose a dispatcher/executor principle for multi-task Reinforcement Learning that partitions controllers into task-understanding and device-specific components connected by a regularized communication channel. This structural approach aims to improve generalization and data efficiency as an alternative to simply scaling large neural networks with vast datasets.

AI · Bullish · arXiv – CS AI · Mar 2 · 4/10

Asymptotically Stable Quaternion-valued Hopfield-structured Neural Network with Periodic Projection-based Supervised Learning Rules

Researchers propose a quaternion-valued supervised learning Hopfield neural network (QSHNN) that leverages quaternions' geometric advantages for representing rotations and postures. The model introduces periodic projection-based learning rules to maintain quaternionic consistency while achieving high accuracy and fast convergence, with potential applications in robotics and control systems.

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10

MEDIC: a network for monitoring data quality in collider experiments

Researchers have developed MEDIC, a neural network framework for Data Quality Monitoring (DQM) in particle physics experiments that uses machine learning to automatically detect detector anomalies and identify malfunctioning components. The simulation-driven approach using modified Delphes detector simulation represents an initial step toward comprehensive ML-based DQM systems for future particle detectors.

General · Neutral · Google Research Blog · May 7 · 3/10

A new light on neural connections

The article covers new neuroscience research on neural connections. It is a general science piece; without more specific content, it appears to be basic neuroscience research with no direct implications for AI, cryptocurrency, or financial markets.

Page 14 of 15