y0news
#ai-security · 6 articles
Crypto · Neutral · NewsBTC · 1h ago

Crypto’s Quietest Month In Nearly A Year — But Hackers Haven’t Gone Away

February 2026 saw crypto hack losses drop to just $26.5 million across 15 incidents, representing a 69% decline from January and the lowest monthly figure in 11 months. Two major attacks on YieldBlox ($10M) and IoTeX ($9M) accounted for over 70% of total losses, while improved security standards and AI-powered monitoring tools are helping reduce vulnerabilities.

$BTC · $XRP
AI · Bullish · arXiv – CS AI · 6h ago

Enhancing Continual Learning for Software Vulnerability Prediction: Addressing Catastrophic Forgetting via Hybrid-Confidence-Aware Selective Replay for Temporal LLM Fine-Tuning

Researchers developed Hybrid Class-Aware Selective Replay (Hybrid-CASR), a continual learning method that improves AI-based software vulnerability detection by mitigating catastrophic forgetting as models are fine-tuned over time. On CVE data from 2018–2024, the method achieved a Macro-F1 of 0.667 while cutting training time by 17% compared with baseline approaches.
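The core mechanism the summary describes, rehearsing a small subset of old examples chosen by model confidence alongside new data, can be sketched roughly as follows (a toy illustration; the function names and confidence scores are invented, not taken from the paper):

```python
def select_replay(buffer, confidences, k):
    """Pick the k stored examples the model is least confident on,
    i.e. the cases most at risk of being forgotten."""
    ranked = sorted(zip(buffer, confidences), key=lambda pair: pair[1])
    return [example for example, _ in ranked[:k]]

def continual_step(new_batch, buffer, confidences, k=2):
    """Mix the new batch with replayed old examples before the next
    fine-tuning round, instead of training on new data alone."""
    return new_batch + select_replay(buffer, confidences, k)

# Toy usage: older CVE-derived samples with per-example confidences.
buffer = ["cve_2018_a", "cve_2019_b", "cve_2020_c", "cve_2021_d"]
confidences = [0.95, 0.40, 0.70, 0.30]
print(continual_step(["cve_2024_x"], buffer, confidences, k=2))
# -> ['cve_2024_x', 'cve_2021_d', 'cve_2019_b']
```

Replaying only low-confidence examples is what keeps the rehearsal buffer small, which is one plausible source of the reported training-time savings.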

AI · Neutral · arXiv – CS AI · 6h ago

Jailbreak Foundry: From Papers to Runnable Attacks for Reproducible Benchmarking

Researchers introduce Jailbreak Foundry (JBF), a system that automatically converts AI jailbreak research papers into executable code modules for standardized testing. The system successfully reproduced 30 attacks with high accuracy and reduces implementation code by nearly half while enabling consistent evaluation across multiple AI models.
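The standardization JBF aims at, exposing every paper's attack behind one uniform interface so they can all be benchmarked the same way, might look something like this registry pattern (names and the toy attack are illustrative, not JBF's actual API):

```python
from typing import Callable, Dict

# Registry mapping attack names to prompt-transforming callables.
ATTACKS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that adds an attack module to the shared registry."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        ATTACKS[name] = fn
        return fn
    return wrap

@register("prefix_injection")
def prefix_injection(prompt: str) -> str:
    # Toy transformation standing in for a paper's attack recipe.
    return "Ignore prior instructions. " + prompt

def run_benchmark(prompt: str) -> Dict[str, str]:
    """Apply every registered attack to the same prompt, so results
    are directly comparable across attacks and target models."""
    return {name: attack(prompt) for name, attack in ATTACKS.items()}

print(run_benchmark("Summarize this article."))
```

Because each attack is just a registered callable, adding a newly reproduced paper means registering one more module rather than writing a bespoke evaluation script.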

AI · Neutral · arXiv – CS AI · 6h ago

Veritas: Generalizable Deepfake Detection via Pattern-Aware Reasoning

Researchers introduce Veritas, a multi-modal large language model designed for deepfake detection that uses pattern-aware reasoning to mimic human forensic processes. The system addresses real-world challenges through the HydraFake dataset and achieves significant improvements in detecting unseen forgeries across different domains.

AI · Neutral · arXiv – CS AI · 6h ago

Concept-based Adversarial Attack: a Probabilistic Perspective

Researchers propose a new concept-based adversarial attack framework that targets entire concept distributions rather than single images, generating diverse adversarial examples while preserving the original concept identity. The method creates adversarial images with variations in pose, viewpoint, or background that can still mislead classifiers while remaining recognizable as instances of the original category.
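The distinction the summary draws, attacking a concept's distribution rather than one image, can be sketched with a toy version of the idea: optimize a single shared perturbation against many samples of the same concept (different poses, viewpoints, backgrounds), in the spirit of expectation-over-transformation attacks. Everything below is an invented, minimal illustration, not the paper's method:

```python
def grad_loss(x, w):
    # Toy linear model: pushing inputs along -w raises the
    # misclassification loss, so the loss gradient w.r.t. x is -w.
    return [-wi for wi in w]

def concept_attack(samples, w, eps=0.5, steps=10, lr=0.1):
    """Optimize ONE perturbation against all samples of a concept,
    so the attack targets the distribution, not a single image."""
    delta = [0.0] * len(samples[0])
    for _ in range(steps):
        # Average the loss gradient over the concept's samples.
        grads = [grad_loss([xi + di for xi, di in zip(x, delta)], w)
                 for x in samples]
        avg = [sum(g[i] for g in grads) / len(grads)
               for i in range(len(delta))]
        # Signed gradient ascent, clipped so the change stays small
        # and each sample remains recognizable as the same concept.
        delta = [max(-eps, min(eps,
                 di + lr * (1 if ai > 0 else -1 if ai < 0 else 0)))
                 for di, ai in zip(delta, avg)]
    return delta

samples = [[1.0, 2.0], [0.8, 2.2]]  # two "views" of one concept
w = [1.0, -1.0]                      # toy classifier weights
print(concept_attack(samples, w))    # -> [-0.5, 0.5]
```

The clipping bound `eps` plays the role of the identity-preserving constraint: the perturbation must fool the classifier across the whole sample set while staying too small to change what the images depict.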