y0news

#neural-networks News & Analysis

358 articles tagged with #neural-networks. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Mar 47/103
🧠

Structured vs. Unstructured Pruning: An Exponential Gap

Research reveals an exponential gap between structured and unstructured neural network pruning methods. While unstructured weight pruning can approximate target functions with O(d log(1/ε)) neurons, structured neuron pruning requires Ω(d/ε) neurons, demonstrating fundamental limitations of structured approaches.
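
As a rough illustration of the granularity difference (layer sizes and thresholds below are invented, not from the paper), unstructured pruning masks individual weights while structured pruning must delete whole neurons:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))   # one hidden layer: 64 neurons x 128 inputs

# Unstructured pruning: mask individual weights by magnitude.
# The sparsity pattern is irregular, but any single weight can survive.
mask = np.abs(W) > np.quantile(np.abs(W), 0.9)
W_unstructured = W * mask

# Structured pruning: delete whole neurons (rows) by their L2 norm.
# Entire units vanish, which is hardware-friendly but far more restrictive;
# the exponential gap in the paper quantifies the price of this restriction.
keep = np.linalg.norm(W, axis=1) >= np.quantile(np.linalg.norm(W, axis=1), 0.9)
W_structured = W[keep]

print(W_unstructured.shape, W_structured.shape)   # (64, 128) vs roughly (7, 128)
```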

AI · Bullish · arXiv – CS AI · Mar 46/103
🧠

PRISM: Exploring Heterogeneous Pretrained EEG Foundation Model Transfer to Clinical Differential Diagnosis

Researchers introduce PRISM, an EEG foundation model demonstrating that diverse pretraining data yields better clinical performance than narrow-source datasets. The study shows that geographically diverse EEG data outperforms larger but homogeneous datasets in medical diagnosis tasks, including a 12.3% accuracy gain in distinguishing epilepsy from similar conditions.

$COMP
AI · Neutral · arXiv – CS AI · Mar 47/102
🧠

WARP: Weight Teleportation for Attack-Resilient Unlearning Protocols

Researchers introduce WARP, a new defense mechanism for machine unlearning protocols that protects against privacy attacks where adversaries can exploit differences between pre- and post-unlearning AI models. The technique reduces attack success rates by up to 92% while maintaining model accuracy on retained data.

AI · Neutral · arXiv – CS AI · Mar 47/102
🧠

No Answer Needed: Predicting LLM Answer Accuracy from Question-Only Linear Probes

Researchers developed linear probes that can predict whether large language models will answer questions correctly by analyzing neural activations before any answer is generated. The method works across different model sizes and generalizes to out-of-distribution datasets, though it struggles with mathematical reasoning tasks.
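
A linear probe in this sense is just a linear classifier fit to frozen activations; nothing in the LLM itself is updated. A minimal sketch with synthetic stand-in data (in the paper's setting, X would be hidden states read off at the final question token and y a 0/1 label for eventual answer correctness):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in data: (n_questions, hidden_dim) activations and correctness labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))
y = (X @ rng.normal(size=256) > 0).astype(int)

# Fit the probe on frozen activations and evaluate on held-out questions.
probe = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
print("held-out accuracy:", probe.score(X[800:], y[800:]))
```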

AI · Bullish · arXiv – CS AI · Mar 46/103
🧠

Robust Heterogeneous Analog-Digital Computing for Mixture-of-Experts Models with Theoretical Generalization Guarantees

Researchers propose a heterogeneous computing framework for Mixture-of-Experts AI models that combines analog in-memory computing with digital processing to improve energy efficiency. The approach identifies noise-sensitive experts for digital computation while running the majority on analog hardware, eliminating the need for costly retraining of large models.
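
A toy sketch of the routing idea, using simulated output drift under analog weight noise as a sensitivity proxy (the paper's actual selection criterion may differ):

```python
import numpy as np

def partition_experts(expert_weights, noise_std=0.05, n_digital=2):
    """Assign the most noise-sensitive experts to exact digital hardware
    and leave the rest on noisy analog in-memory compute. Sensitivity
    here is a crude proxy, not the paper's actual criterion."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=(16, expert_weights[0].shape[1]))  # probe inputs
    drift = []
    for W in expert_weights:
        noise = noise_std * rng.normal(size=W.shape)
        drift.append(np.mean((x @ W.T - x @ (W + noise).T) ** 2))
    order = np.argsort(drift)[::-1]        # most sensitive first
    return set(order[:n_digital])          # these run digitally; the rest, analog

experts = [np.random.default_rng(i).normal(size=(32, 64)) for i in range(8)]
print(partition_experts(experts))
```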

AI · Bullish · arXiv – CS AI · Mar 47/103
🧠

Dual Randomized Smoothing: Beyond Global Noise Variance

Researchers propose a dual Randomized Smoothing framework that overcomes limitations of standard neural network robustness certification by using input-dependent noise variances instead of global ones. The method achieves strong performance at both small and large radii with gains of 15-20% on CIFAR-10 and 8-17% on ImageNet, while adding only 60% computational overhead.
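
For intuition, randomized smoothing classifies by majority vote under Gaussian noise. The sketch below swaps the usual global sigma for a per-input sigma_fn, a stand-in for however the paper assigns input-dependent variances:

```python
import numpy as np

def smoothed_predict(f, x, sigma_fn, n=1000, n_classes=3):
    """Majority vote of base classifier f under Gaussian noise whose
    scale depends on x. Low noise certifies small radii, high noise
    large ones; an input-dependent sigma lets one model do both."""
    sigma = sigma_fn(x)
    rng = np.random.default_rng(0)
    votes = np.zeros(n_classes, dtype=int)
    for _ in range(n):
        votes[f(x + sigma * rng.normal(size=x.shape))] += 1
    return votes.argmax(), votes.max() / n

# Toy base classifier and a per-input sigma that grows away from the origin.
f = lambda x: int(np.clip(np.floor(np.linalg.norm(x)), 0, 2))
sigma_fn = lambda x: 0.1 + 0.1 * np.linalg.norm(x)
print(smoothed_predict(f, np.array([0.5, 0.5]), sigma_fn))
```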

AI · Neutral · arXiv – CS AI · Mar 47/103
🧠

Loss Barcode: A Topological Measure of Escapability in Loss Landscapes

Researchers developed a new topological measure, the 'TO-score', to analyze neural network loss landscapes and understand how gradient descent escapes local minima. Their findings show that deeper and wider networks have fewer topological obstructions to learning, and that loss-barcode characteristics are linked to generalization performance.

AI · Neutral · arXiv – CS AI · Mar 47/102
🧠

Covering Numbers for Deep ReLU Networks with Applications to Function Approximation and Nonparametric Regression

Researchers have derived tight bounds on covering numbers for deep ReLU neural networks, providing fundamental insights into network capacity and approximation capabilities. The work removes a log^6(n) factor from the best known sample complexity rate for estimating Lipschitz functions via deep networks, establishing optimality in nonparametric regression.

AI · Bullish · arXiv – CS AI · Mar 47/103
🧠

On the Structural Limitations of Weight-Based Neural Adaptation and the Role of Reversible Behavioral Learning

Researchers introduce reversible behavioral learning for AI models, addressing the problem of structural irreversibility in neural network adaptation. The study demonstrates that traditional fine-tuning methods cause permanent changes to model behavior that cannot be deterministically reversed, while their new approach allows models to return to original behavior within numerical precision.

AI · Bullish · arXiv – CS AI · Mar 46/102
🧠

Expectation and Acoustic Neural Network Representations Enhance Music Identification from Brain Activity

Researchers developed a method to improve EEG-based music identification by using artificial neural networks that distinguish between acoustic and expectation-related brain representations. The approach combines both types of neural representations to achieve better performance than traditional methods, potentially advancing brain-computer interfaces and neural decoding applications.

AI · Bullish · arXiv – CS AI · Mar 46/103
🧠

cPNN: Continuous Progressive Neural Networks for Evolving Streaming Time Series

Researchers developed cPNN (Continuous Progressive Neural Networks), a new AI architecture that handles evolving data streams with temporal dependencies while avoiding catastrophic forgetting. The system addresses concept drift in time series data by combining recurrent neural networks with progressive learning techniques, showing quick adaptation to new concepts.

AI · Bullish · arXiv – CS AI · Mar 46/103
🧠

On the Expressive Power of Transformers for Maxout Networks and Continuous Piecewise Linear Functions

Researchers establish theoretical foundations for Transformer networks' expressive power by connecting them to maxout networks and continuous piecewise linear functions. The study proves Transformers inherit universal approximation capabilities of ReLU networks while revealing that self-attention layers implement max-type operations and feedforward layers perform token-wise affine transformations.
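
The max-type claim is easy to see numerically: as the inverse temperature of softmax attention grows, the attention output converges to the value at the highest-scoring position. A small demonstration (not from the paper):

```python
import numpy as np

def attention_pool(scores, values, beta):
    """Single-query softmax attention; beta is an inverse temperature."""
    w = np.exp(beta * scores - np.max(beta * scores))
    w /= w.sum()
    return w @ values

scores = np.array([0.1, 0.9, 0.4])
values = np.array([1.0, 5.0, 2.0])
for beta in (1.0, 10.0, 100.0):
    print(beta, attention_pool(scores, values, beta))
# Output approaches values[scores.argmax()] = 5.0 as beta grows:
# attention acts as a (soft) max-type operation over its inputs.
```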

AI · Bullish · arXiv – CS AI · Mar 47/103
🧠

Odin: Multi-Signal Graph Intelligence for Autonomous Discovery in Knowledge Graphs

Researchers present Odin, the first production-deployed graph intelligence engine that autonomously discovers patterns in knowledge graphs without predefined queries. The system uses a novel COMPASS scoring metric combining structural, semantic, temporal, and community-aware signals, and has been successfully deployed in regulated healthcare and insurance environments.
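
The COMPASS metric itself is not spelled out in the summary; a hypothetical stand-in showing only its general shape, a weighted combination of the four signal families, might look like:

```python
def compass_score(structural, semantic, temporal, community,
                  weights=(0.3, 0.3, 0.2, 0.2)):
    """Hypothetical stand-in for COMPASS: a convex combination of the
    four signal families named in the summary. The real metric and its
    weights are not public here; this shows only the overall shape."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in
               zip(weights, (structural, semantic, temporal, community)))

print(compass_score(structural=0.8, semantic=0.6, temporal=0.4, community=0.9))
```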

AI × Crypto · Bullish · arXiv – CS AI · Mar 37/104
🤖

TAO: Tolerance-Aware Optimistic Verification for Floating-Point Neural Networks

TAO is a verification protocol that lets users check neural network outputs from untrusted cloud services without requiring exact, bit-for-bit computation matches. The system uses tolerance-aware verification with IEEE-754 bounds and empirical profiles, and implements a dispute-resolution mechanism deployed on an Ethereum testnet.
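
A minimal sketch of the tolerance-aware acceptance test, with placeholder tolerances rather than the paper's IEEE-754-derived bounds:

```python
import numpy as np

def within_tolerance(claimed, recomputed, rel_tol=1e-5, abs_tol=1e-6):
    """Optimistic-verification acceptance test: accept a claimed output
    if it matches a reference recomputation within a floating-point
    tolerance band rather than bit-for-bit. Tolerances are placeholders."""
    return bool(np.all(np.abs(claimed - recomputed)
                       <= abs_tol + rel_tol * np.abs(recomputed)))

# A verifier recomputes the layer and opens a dispute only on failure.
print(within_tolerance(np.float32([1.0000001, 2.0]), np.float64([1.0, 2.0])))
```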

$ETH · $TAO
AI · Neutral · arXiv – CS AI · Mar 37/104
🧠

Barriers for Learning in an Evolving World: Mathematical Understanding of Loss of Plasticity

Researchers have identified the mathematical mechanisms behind 'loss of plasticity' (LoP), explaining why deep learning models struggle to continue learning in changing environments. The study reveals that properties promoting generalization in static settings actually hinder continual learning by creating parameter space traps.

AI · Bullish · arXiv – CS AI · Mar 37/102
🧠

RMAAT: Astrocyte-Inspired Memory Compression and Replay for Efficient Long-Context Transformers

Researchers introduce RMAAT (Recurrent Memory Augmented Astromorphic Transformer), a new architecture inspired by brain astrocyte cells that addresses the quadratic complexity problem in Transformer models for long sequences. The system uses recurrent memory tokens and adaptive compression to achieve linear complexity while maintaining competitive accuracy on benchmark tests.
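
Schematically, the linear-complexity recipe is chunked processing with a small carried memory: each chunk attends jointly with the memory tokens, and the memory is rewritten as a compressed summary of the past. A toy sketch (the astrocyte-inspired compression itself is not modeled here):

```python
import numpy as np

def chunked_attention_with_memory(tokens, chunk, mem_size, attend):
    """Process a long sequence chunk by chunk, carrying a small bank of
    memory tokens between chunks. `attend` stands in for any
    full-attention block; per-chunk cost is constant, so total cost is
    linear in sequence length."""
    memory = np.zeros((mem_size, tokens.shape[1]))
    outputs = []
    for i in range(0, len(tokens), chunk):
        block = np.concatenate([memory, tokens[i:i + chunk]])
        out = attend(block)
        memory = out[:mem_size]            # rewritten memory tokens
        outputs.append(out[mem_size:])
    return np.concatenate(outputs)

tokens = np.random.default_rng(0).normal(size=(1000, 16))
attend = lambda b: b + b.mean(axis=0)      # placeholder for real attention
print(chunked_attention_with_memory(tokens, 128, 8, attend).shape)  # (1000, 16)
```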

AI · Bullish · arXiv – CS AI · Mar 37/105
🧠

Expressive Power of Implicit Models: Rich Equilibria and Test-Time Scaling

Researchers provide mathematical proof that implicit models can achieve greater expressive power through increased test-time computation, explaining how these memory-efficient architectures can match larger explicit networks. The study validates this scaling property across image reconstruction, scientific computing, operations research, and LLM reasoning domains.
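
An implicit model defines its output as the fixed point of a layer, so test-time compute maps directly onto solver iterations. A toy NumPy illustration (architecture and sizes invented):

```python
import numpy as np

def implicit_layer(x, W, n_iters):
    """Output defined as the equilibrium of z = tanh(W @ z + x), found
    by fixed-point iteration. More test-time iterations buy a more
    accurate equilibrium with zero extra parameters."""
    z = np.zeros_like(x)
    for _ in range(n_iters):
        z = np.tanh(W @ z + x)
    return z

rng = np.random.default_rng(0)
W = 0.3 * rng.normal(size=(8, 8)) / np.sqrt(8)   # small norm => contraction
x = rng.normal(size=8)
z_star = implicit_layer(x, W, 1000)               # near-exact equilibrium
for k in (2, 10, 50):
    print(k, np.linalg.norm(implicit_layer(x, W, k) - z_star))
```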

AI · Neutral · arXiv – CS AI · Mar 37/104
🧠

When Bias Meets Trainability: Connecting Theories of Initialization

New research connects initial guessing bias in untrained deep neural networks to established mean field theories, proving that optimal initialization for learning requires systematic bias toward specific classes rather than neutral initialization. The study demonstrates that efficient training is fundamentally linked to architectural prejudices present before data exposure.

AI · Bullish · arXiv – CS AI · Mar 37/102
🧠

GradientStabilizer: Fix the Norm, Not the Gradient

Researchers propose GradientStabilizer, a new technique to address training instability in deep learning by replacing gradient magnitude with statistically stabilized estimates while preserving direction. The method outperforms gradient clipping across multiple AI training scenarios including LLM pre-training, reinforcement learning, and computer vision tasks.
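
One plausible reading of "fix the norm, not the gradient" is direction-preserving rescaling against a running norm estimate. The sketch below is that reading, not necessarily the paper's exact estimator:

```python
import numpy as np

class NormStabilizer:
    """Keep the gradient's direction, replace its raw magnitude with a
    running (EMA) norm estimate. A plausible reading of the summary,
    not the paper's exact method."""
    def __init__(self, beta=0.99):
        self.beta, self.ema = beta, None

    def __call__(self, grad):
        norm = np.linalg.norm(grad) + 1e-12
        self.ema = norm if self.ema is None else \
            self.beta * self.ema + (1 - self.beta) * norm
        return grad * (self.ema / norm)    # same direction, stabilized size

stab = NormStabilizer()
for g in ([3.0, 4.0], [300.0, 400.0]):     # second step is a 100x norm spike
    print(stab(np.array(g)))               # the spike is heavily damped
```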

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

Brain-IT: Image Reconstruction from fMRI via Brain-Interaction Transformer

Researchers developed Brain-IT, a new AI system using Brain Interaction Transformer technology to reconstruct images from fMRI brain recordings with significantly improved accuracy. The method requires only 1 hour of data versus 40 hours needed by current approaches while surpassing state-of-the-art results.

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

Advancing Universal Deep Learning for Electronic-Structure Hamiltonian Prediction of Materials

Researchers developed NextHAM, a deep learning method for predicting electronic-structure Hamiltonians of materials that offers significant efficiency gains over traditional DFT calculations. The system introduces a neural E(3)-symmetry architecture and a new dataset, Materials-HAM-SOC, with 17,000 material structures spanning 68 elements.

AI · Neutral · arXiv – CS AI · Mar 37/104
🧠

Topological derivative approach for deep neural network architecture adaptation

Researchers developed a novel algorithm using topological derivatives to automatically determine where and how to add new layers to neural networks during training. The approach uses mathematical principles from optimal control theory and topology optimization to adaptively grow network architecture, showing superior performance compared to baseline networks and other adaptation strategies.

AI · Bullish · arXiv – CS AI · Mar 37/104
🧠

Polynomial, trigonometric, and tropical activations

Researchers developed new activation functions for deep neural networks based on polynomial and trigonometric orthonormal bases that can successfully train models like GPT-2 and ConvNeXt. The work addresses gradient problems common with polynomial activations and shows these networks can be interpreted as multivariate polynomial mappings.
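
As a flavor of a basis-built activation (coefficients invented; the paper's constructions and gradient fixes are more involved), here is a Chebyshev-polynomial activation with the input squashed into the basis's domain:

```python
import numpy as np

def chebyshev_activation(x, coeffs=(0.0, 1.0, 0.25)):
    """Activation as a short Chebyshev expansion sum_k c_k T_k(x).
    Squashing the input into [-1, 1] first is one standard way to tame
    the exploding gradients plain polynomial activations suffer from;
    the coefficients here are illustrative, not the paper's."""
    x = np.tanh(x)                       # map into the basis's domain
    T_prev, T_curr = np.ones_like(x), x  # T_0, T_1
    out = coeffs[0] * T_prev + coeffs[1] * T_curr
    for c in coeffs[2:]:
        T_prev, T_curr = T_curr, 2 * x * T_curr - T_prev   # recurrence
        out = out + c * T_curr
    return out

print(chebyshev_activation(np.linspace(-3, 3, 5)))
```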

AI · Neutral · arXiv – CS AI · Mar 37/104
🧠

The Information-Theoretic Imperative: Compression and the Epistemic Foundations of Intelligence

Researchers propose the Compression Efficiency Principle (CEP) to explain why artificial neural networks and biological brains develop similar representations despite different substrates. The theory suggests both systems converge on efficient compression strategies that encode stable invariants rather than unstable correlations, providing a unified framework for understanding intelligence across biological and artificial systems.

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

Dream2Learn: Structured Generative Dreaming for Continual Learning

Researchers introduce Dream2Learn (D2L), a continual learning framework that enables AI models to generate synthetic training data from their own internal representations, mimicking human dreaming for knowledge consolidation. The system creates novel 'dreamed classes' using diffusion models to improve forward knowledge transfer and prevent catastrophic forgetting in neural networks.

Page 4 of 15