y0news

#deep-learning News & Analysis

257 articles tagged with #deep-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Mar
🧠

CASR-Net: An Image Processing-focused Deep Learning-based Coronary Artery Segmentation and Refinement Network for X-ray Coronary Angiogram

Researchers developed CASR-Net, a deep learning pipeline for automated coronary artery segmentation in X-ray angiograms that combines image preprocessing, UNet-based segmentation, and refinement stages. The system achieved superior performance with 61.43% IoU and 76.10% DSC on public datasets, potentially improving clinical diagnosis of coronary artery disease.
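IoU and DSC are the two standard overlap metrics for binary segmentation masks. For readers unfamiliar with them, a minimal sketch of how both are computed (plain Python, illustrative only, not the authors' code):

```python
def iou_and_dice(pred, target):
    """Compute Intersection-over-Union and Dice (DSC) for two binary masks.

    pred, target: flat sequences of 0/1 pixel labels of equal length.
    """
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice

# Toy masks: 2 overlapping pixels, 3 predicted, 3 actual
iou, dice = iou_and_dice([1, 1, 1, 0], [0, 1, 1, 1])
# iou = 2/4 = 0.5, dice = 4/6 ≈ 0.667
```

Dice weights the overlap twice, so it is always at least as large as IoU, which is why the paper's DSC (76.10%) exceeds its IoU (61.43%).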

AI · Bullish · arXiv – CS AI · Mar
🧠

Efficient Long-Sequence Diffusion Modeling for Symbolic Music Generation

Researchers developed SMDIM, a new diffusion model for symbolic music generation that efficiently handles long sequences by combining global structure construction with local refinement. The model outperforms existing approaches in both generation quality and computational efficiency across various musical styles including Western classical, popular, and folk music.

AI · Neutral · arXiv – CS AI · Mar
🧠

MAC: A Conversion Rate Prediction Benchmark Featuring Labels Under Multiple Attribution Mechanisms

Researchers have created MAC, the first public conversion rate prediction dataset featuring labels from multiple attribution mechanisms, along with PyMAL, an open-source library for multi-attribution learning approaches. The study introduces a new method called Mixture of Asymmetric Experts (MoAE) that significantly outperforms existing state-of-the-art multi-attribution learning methods.

AI · Neutral · arXiv – CS AI · Mar
🧠

Discovering Symmetry Groups with Flow Matching

Researchers introduce LieFlow, a machine learning framework that automatically discovers symmetries in data by treating symmetry discovery as a distribution learning problem on Lie groups. The approach can identify both continuous and discrete symmetries within a unified framework, significantly outperforming existing methods like LieGAN in experiments on synthetic and real datasets.

AI · Neutral · arXiv – CS AI · Mar
🧠

Latent 3D Brain MRI Counterfactual

Researchers developed a two-stage method using Structural Causal Models in latent space to generate high-quality 3D brain MRI counterfactuals, addressing the challenge of limited training data in medical imaging. The approach combines VQ-VAE encoding with causal modeling to produce diverse, high-fidelity brain MRI data beyond the original training distribution.

AI · Neutral · arXiv – CS AI · Mar
🧠

Improving Wildlife Out-of-Distribution Detection: Africa's Big Five

Researchers developed improved out-of-distribution detection methods for wildlife classification, specifically focusing on Africa's Big Five animals to reduce human-wildlife conflict. The study found that feature-based methods using Nearest Class Mean with ImageNet pre-trained features achieved significant improvements of 2%, 4%, and 22% over existing out-of-distribution detection methods.
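The Nearest Class Mean approach is simple to state: an input is flagged out-of-distribution when its feature vector lies far from every class's mean. A toy sketch in plain Python (illustrative only, not the study's implementation; extracting features with an ImageNet-pretrained network is assumed to happen upstream):

```python
def class_means(features, labels):
    """Average the feature vectors belonging to each class label."""
    sums, counts = {}, {}
    for vec, lab in zip(features, labels):
        acc = sums.setdefault(lab, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def ncm_ood_score(x, means):
    """OOD score = squared distance to the nearest class mean (higher = more OOD)."""
    return min(sum((a - b) ** 2 for a, b in zip(x, m)) for m in means.values())

# Toy 2-D features for two hypothetical classes (say, lion vs. elephant embeddings)
feats = [[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]]
labs = ["lion", "lion", "elephant", "elephant"]
means = class_means(feats, labs)
in_dist = ncm_ood_score([0.1, 0.0], means)   # sits on the lion mean -> score 0
ood = ncm_ood_score([10.0, -10.0], means)    # far from both means -> large score
```

Thresholding this score separates known species from novel inputs; the study's finding is that the quality of the upstream features matters more than the scoring rule itself.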

AI · Bullish · arXiv – CS AI · Mar
🧠

Time-Aware One Step Diffusion Network for Real-World Image Super-Resolution

Researchers propose TADSR, a Time-Aware one-step Diffusion Network that improves real-world image super-resolution by dynamically varying timesteps instead of using fixed ones. The method achieves state-of-the-art performance while allowing controllable trade-offs between image fidelity and realism in a single processing step.

AI · Neutral · arXiv – CS AI · Mar
🧠

Rejuvenating Cross-Entropy Loss in Knowledge Distillation for Recommender Systems

Researchers propose Rejuvenated Cross-Entropy for Knowledge Distillation (RCE-KD) to improve knowledge distillation in recommender systems by addressing limitations of Cross-Entropy loss when distilling teacher model rankings. The method splits teacher's top items into subsets and uses adaptive sampling to better align with theoretical assumptions.
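The underlying object here, a cross-entropy loss that pushes the student's ranking toward the teacher's top items, can be sketched in a few lines (plain Python; the subset-splitting and adaptive sampling are the paper's contribution and are not reproduced in this hypothetical example):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def topk_ce_distill_loss(student_scores, teacher_scores, k):
    """Cross-entropy between teacher and student distributions,
    restricted to the teacher's top-k items (illustrative sketch)."""
    topk = sorted(range(len(teacher_scores)),
                  key=lambda i: teacher_scores[i], reverse=True)[:k]
    t = softmax([teacher_scores[i] for i in topk])
    s = softmax([student_scores[i] for i in topk])
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

teacher = [3.0, 2.0, 0.1, -1.0]
aligned = [2.9, 2.1, 0.0, -0.5]   # student that mimics the teacher's ranking
shuffled = [0.0, 0.1, 3.0, 2.0]   # student that inverts it
loss_good = topk_ce_distill_loss(aligned, teacher, k=2)
loss_bad = topk_ce_distill_loss(shuffled, teacher, k=2)
# loss_good < loss_bad: matching the teacher's top items lowers the loss
```

RCE-KD's point is that applying this loss naively to a full item catalog violates its theoretical assumptions, which their subset construction restores.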

AI · Neutral · arXiv – CS AI · Mar
🧠

Towards Generalizable PDE Dynamics Forecasting via Physics-Guided Invariant Learning

Researchers propose iMOOE, a physics-guided invariant learning method for forecasting partial differential equation (PDE) dynamics with improved zero-shot generalization. The method addresses a limitation of existing deep learning approaches, their reliance on test-time adaptation, by incorporating fundamental physical invariance principles.

AI · Neutral · arXiv – CS AI · Mar
🧠

Data-Augmented Deep Learning for Downhole Depth Sensing and Validation

Researchers developed a data-augmented deep learning system for accurate downhole depth sensing in oil and gas wells using casing collar locator (CCL) technology. The system addresses limited real well data challenges through comprehensive preprocessing methods, achieving F1 score improvements of up to 0.057 for collar recognition models.

AI · Neutral · arXiv – CS AI · Mar
🧠

CloDS: Visual-Only Unsupervised Cloth Dynamics Learning in Unknown Conditions

Researchers introduce CloDS (Cloth Dynamics Splatting), an unsupervised AI framework that learns cloth dynamics from visual observations without requiring known physical properties. The system uses a three-stage pipeline with dual-position opacity modulation to handle complex cloth deformations and self-occlusions through mesh-based Gaussian splatting.

AI · Neutral · arXiv – CS AI · Mar
🧠

Deformation-Free Cross-Domain Image Registration via Position-Encoded Temporal Attention

Researchers developed GPEReg-Net, a new AI method for cross-domain image registration that eliminates the need for explicit deformation field estimation by decomposing images into domain-invariant scene representations and appearance statistics. The system achieves state-of-the-art performance on benchmarks while running 1.87x faster than existing methods, using position-encoded temporal attention for sequential image processing.

AI · Neutral · arXiv – CS AI · Mar
🧠

Hierarchical Concept-based Interpretable Models

Researchers introduce Hierarchical Concept Embedding Models (HiCEMs), a new approach to make deep neural networks more interpretable by modeling relationships between concepts in hierarchical structures. The method includes Concept Splitting to automatically discover fine-grained sub-concepts without additional annotations, reducing the burden of manual labeling while improving model accuracy and interpretability.

AI · Neutral · arXiv – CS AI · Mar
🧠

NuBench: An Open Benchmark for Deep Learning-Based Event Reconstruction in Neutrino Telescopes

NuBench is a new open benchmark for deep learning-based event reconstruction in neutrino telescopes, comprising seven large-scale simulated datasets with nearly 130 million neutrino interactions. The benchmark enables comparison of machine learning reconstruction methods across different detector geometries and evaluates four algorithms, including ParticleNet and DynEdge, on core reconstruction tasks.

AI · Neutral · arXiv – CS AI · Mar
🧠

General vs Domain-Specific CNNs: Understanding Pretraining Effects on Brain MRI Tumor Classification

Research comparing CNN architectures for brain tumor classification found that general-purpose models like ConvNeXt-Tiny (93% accuracy) outperformed domain-specific medical pre-trained models like RadImageNet DenseNet121 (68% accuracy). The study suggests that contemporary general-purpose CNNs with diverse pre-training may be more effective for medical imaging tasks in data-scarce scenarios.

AI · Neutral · arXiv – CS AI · Feb
🧠

Knob: A Physics-Inspired Gating Interface for Interpretable and Controllable Neural Dynamics

Researchers propose Knob, a new framework that applies control theory principles to neural networks by mapping gating dynamics to mechanical systems. The approach enables real-time human adjustment of AI model behavior through intuitive physical parameters like damping and frequency, offering both static and continuous processing modes.

AI · Neutral · arXiv – CS AI · Feb
🧠

FlexMS is a flexible framework for benchmarking deep learning-based mass spectrum prediction tools in metabolomics

Researchers have developed FlexMS, a flexible benchmark framework for evaluating deep learning models that predict mass spectra for molecular identification in drug discovery and material science. The framework addresses current challenges in assessing different prediction approaches by providing standardized evaluation methods and insights into performance factors across various model architectures.

AI · Neutral · arXiv – CS AI · Feb
🧠

Positional-aware Spatio-Temporal Network for Large-Scale Traffic Prediction

Researchers propose PASTN, a lightweight neural network for large-scale traffic flow prediction that uses positional-aware embeddings and temporal attention mechanisms. The model demonstrates improved efficiency and effectiveness across various geographical scales from counties to entire states.
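The summary does not detail PASTN's positional-aware embedding; for orientation, the classic sinusoidal encoding is one common way to give each position in a network a distinct, distance-sensitive code (hypothetical illustration only, not necessarily the paper's construction):

```python
import math

def sinusoidal_embedding(position, dim):
    """Classic sinusoidal positional embedding: interleaved sin/cos at
    geometrically spaced frequencies. dim must be even."""
    emb = []
    for i in range(dim // 2):
        freq = 1.0 / (10000 ** (2 * i / dim))
        emb.append(math.sin(position * freq))
        emb.append(math.cos(position * freq))
    return emb

e0 = sinusoidal_embedding(0, 8)   # position 0 -> alternating 0.0, 1.0
e5 = sinusoidal_embedding(5, 8)   # a different position gets a distinct code
```

Feeding such codes alongside traffic readings lets attention layers distinguish sensors by location without a learned per-sensor table, which helps keep models lightweight at large geographic scales.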

AI · Neutral · arXiv – CS AI · Feb
🧠

MEDNA-DFM: A Dual-View FiLM-MoE Model for Explainable DNA Methylation Prediction

Researchers developed MEDNA-DFM, a dual-view deep learning model that predicts DNA methylation patterns while providing biological explanations. The model achieves high accuracy across species and includes explainable AI features that reveal conserved genetic motifs and cooperative sequence-structure relationships.

AI · Neutral · arXiv – CS AI · Feb
🧠

PCReg-Net: Progressive Contrast-Guided Registration for Cross-Domain Image Alignment

Researchers have developed PCReg-Net, a lightweight AI framework for cross-domain image registration that achieves real-time performance at 141 FPS with only 2.56M parameters. The system uses a progressive contrast-guided approach with four modules to align images across different domains, showing improvements over traditional and deep learning baselines on retinal and microscopy benchmarks.

AI · Bullish · Apple Machine Learning · Feb
🧠

depyf: Open the Opaque Box of PyTorch Compiler for Machine Learning Researchers

Researchers introduce depyf, a new tool designed to make PyTorch 2.x's compiler more transparent for machine learning researchers. The tool decompiles bytecode back into readable source code, helping researchers better understand and utilize the compiler's optimization capabilities.

Page 9 of 11