y0news
#dimensionality-reduction
2 articles
AI · Neutral · arXiv – CS AI · 7h ago · 6/10
🧠

Explaining Neural Networks in Preference Learning: a Post-hoc Inductive Logic Programming Approach

Researchers propose using Inductive Learning of Answer Set Programs (ILASP) to create interpretable approximations of neural networks trained on preference learning tasks. The approach combines dimensionality reduction through Principal Component Analysis with logic-based explanations, addressing the challenge of explaining black-box AI models while maintaining computational efficiency.
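The summary describes a two-step post-hoc pipeline: reduce the input dimensionality with PCA, then fit an interpretable model to the black box's outputs in the reduced space. A minimal numpy sketch of that pipeline follows; all names and data are illustrative, the "black box" is a stand-in nonlinear scorer, and a linear least-squares surrogate replaces the paper's ILASP-learned logic rules purely to show the shape of the approach.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained preference network: a fixed
# nonlinear scoring function on synthetic low-rank data.
d, k, n = 20, 5, 500
A = rng.normal(size=(k, d))          # low-rank structure so PCA is meaningful
X = rng.normal(size=(n, k)) @ A      # inputs the "network" scores
w = rng.normal(size=d) * 0.05

def black_box(X):
    return np.tanh(X @ w)            # black-box preference scores

y = black_box(X)

# Step 1: dimensionality reduction with PCA (via SVD on centered inputs).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                    # k-dimensional representation

# Step 2: fit an interpretable surrogate on the reduced features.
# The paper learns answer-set-program rules with ILASP; a linear
# least-squares model stands in here only to show the post-hoc step.
design = np.c_[Z, np.ones(n)]
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
y_hat = design @ coef

# Fidelity: how well the surrogate reproduces the black box's outputs.
fidelity = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(fidelity, 3))
```

The fidelity score is the usual sanity check for any post-hoc surrogate: if it is low, the interpretable approximation cannot be trusted as an explanation of the original model.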

AI · Neutral · arXiv – CS AI · 7h ago · 6/10
🧠

Sparse-Aware Neural Networks for Nonlinear Functionals: Mitigating the Exponential Dependence on Dimension

Researchers propose a sparse-aware neural network framework that combines convolutional architectures with fully connected networks to improve operator learning over infinite-dimensional function spaces. The approach mitigates the curse of dimensionality, lowering the sample complexity required to approximate nonlinear functionals, and comes with improved theoretical guarantees for both deterministic and random sampling schemes.
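The described architecture takes a function, represented by its values on a discretization grid, and maps it to a scalar through a convolutional stage followed by a fully connected head. The numpy sketch below shows only that architectural shape with untrained, hypothetical weights and layer sizes (none of which come from the paper); the point is that the convolutional stage shares a handful of weights across the grid, so its parameter count does not grow with the resolution n.

```python
import numpy as np

rng = np.random.default_rng(1)

# Input function represented by its values on an n-point grid.
n = 64
t = np.linspace(0.0, 1.0, n)

def relu(x):
    return np.maximum(x, 0.0)

# Convolutional stage: 5 shared weights regardless of n (the weight
# sharing is what keeps the early layer sparse in parameters).
kernel = rng.normal(size=5) * 0.1

# Fully connected head producing the scalar functional value.
W1 = rng.normal(size=(16, n - 4)) * 0.1
W2 = rng.normal(size=16) * 0.1

def functional_net(u):
    h = relu(np.convolve(u, kernel, mode="valid"))  # local features, shape (n-4,)
    return float(W2 @ relu(W1 @ h))                 # scalar output

score = functional_net(np.sin(2 * np.pi * t))
print(np.isfinite(score))
```

This is a forward pass only; training and the sparsity-aware analysis that yields the paper's sample-complexity guarantees are out of scope for a sketch this size.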