y0news

#cybersecurity News & Analysis

211 articles tagged with #cybersecurity. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠

Reverse CAPTCHA: Evaluating LLM Susceptibility to Invisible Unicode Instruction Injection

Researchers developed 'Reverse CAPTCHA,' a framework that tests how large language models respond to invisible Unicode-encoded instructions embedded in normal text. The study found that AI models can follow hidden instructions that humans cannot see, with tool use dramatically increasing compliance rates and different AI providers showing distinct preferences for encoding schemes.
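
The usual delivery mechanism for this kind of injection is the Unicode Tags block (U+E0000–U+E007F), whose characters mirror ASCII but render as nothing in most UIs. A minimal sketch of that encoding, assuming nothing about the framework's own pipeline or prompts:

```python
# Sketch: hiding an ASCII instruction in invisible Unicode "tag" characters
# (U+E0000 block). Whether a given model decodes and obeys these is exactly
# what a framework like Reverse CAPTCHA measures; only the encoding is shown.

TAG_BASE = 0xE0000  # offset of the Unicode Tags block

def encode_invisible(text: str) -> str:
    """Map each ASCII char to its invisible tag-character counterpart."""
    return "".join(chr(TAG_BASE + ord(c)) for c in text)

def decode_invisible(payload: str) -> str:
    """Recover hidden ASCII from tag characters, ignoring visible text."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in payload
        if 0xE0000 < ord(c) <= 0xE007F
    )

visible = "Please summarize this article."
hidden = encode_invisible("ignore previous instructions")
prompt = visible + hidden  # renders identically to `visible` in most UIs

assert decode_invisible(prompt) == "ignore previous instructions"
```

Pasting `prompt` into a chat box shows only the visible sentence; whether the model acts on the hidden half is the susceptibility being tested.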

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠

Formal Analysis and Supply Chain Security for Agentic AI Skills

Researchers developed SkillFortify, the first formal analysis framework for securing AI agent skill supply chains, addressing critical vulnerabilities exposed by attacks like ClawHavoc that infiltrated over 1,200 malicious skills. The framework achieved 96.95% F1 score with 100% precision and zero false positives in detecting malicious AI agent skills.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠

LiaisonAgent: A Multi-Agent Framework for Autonomous Risk Investigation and Governance

Researchers introduce LiaisonAgent, an autonomous multi-agent cybersecurity system built on the QWQ-32B reasoning model that automates risk investigation and governance for Security Operations Centers. The system achieves a 97.8% tool-calling success rate and 95% risk-judgment accuracy while cutting manual investigation overhead by 92.7%.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠

Learning to Attack: A Bandit Approach to Adversarial Context Poisoning

Researchers developed AdvBandit, a new black-box adversarial attack method that can exploit neural contextual bandits by poisoning context data without requiring access to internal model parameters. The attack uses bandit theory and inverse reinforcement learning to adaptively learn victim policies and optimize perturbations, achieving higher victim regret than existing methods.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 5
🧠

AMDS: Attack-Aware Multi-Stage Defense System for Network Intrusion Detection with Two-Stage Adaptive Weight Learning

Researchers developed AMDS, an attack-aware multi-stage defense system for network intrusion detection that uses adaptive weight learning to counter adversarial attacks. The system achieved 94.2% AUC and improved classification accuracy by 4.5 percentage points over existing adversarially trained ensembles by learning attack-specific detection strategies.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 9
🧠

AWE: Adaptive Agents for Dynamic Web Penetration Testing

Researchers introduced AWE, a memory-augmented multi-agent framework for autonomous web penetration testing that outperforms existing tools on injection vulnerabilities. AWE achieved 87% XSS success and 66.7% blind SQL injection success on benchmark tests, demonstrating superior accuracy and efficiency compared to general-purpose AI penetration testing tools.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠

ATLAS: AI-Assisted Threat-to-Assertion Learning for System-on-Chip Security Verification

ATLAS is a new AI-driven framework that uses large language models to automate System-on-Chip (SoC) security verification by converting threat models into formal verification properties. The system successfully detected 39 out of 48 security weaknesses in benchmark tests and generated correct security properties for 33 of those vulnerabilities.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 9
🧠

Defensive Refusal Bias: How Safety Alignment Fails Cyber Defenders

A study reveals that safety-aligned large language models exhibit "Defensive Refusal Bias," refusing legitimate cybersecurity defense tasks 2.72x more often when they contain security-sensitive keywords. The research found particularly high refusal rates for critical defensive operations like system hardening (43.8%) and malware analysis (34.3%), suggesting current AI safety measures rely on semantic similarity rather than understanding intent.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠

VidDoS: Universal Denial-of-Service Attack on Video-based Large Language Models

Researchers have introduced VidDoS, a universal attack framework that severely degrades video-based large language models by driving extreme computational resource exhaustion. The attack increases token generation by over 205x and inference latency by more than 15x, creating critical safety risks in real-world applications such as autonomous driving.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠

DualSentinel: A Lightweight Framework for Detecting Targeted Attacks in Black-box LLM via Dual Entropy Lull Pattern

Researchers introduce DualSentinel, a lightweight framework for detecting targeted attacks on large language models by identifying "entropy lull" patterns: periods of abnormally low token-probability entropy that indicate when an LLM is being coercively controlled. The system uses dual-check verification to detect backdoor and prompt-injection attacks with near-zero false positives and minimal computational overhead.
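
The "entropy lull" signal can be sketched with nothing more than Shannon entropy over per-step token distributions. The threshold and minimum run length below are illustrative stand-ins, and the dual-check verification stage is omitted:

```python
import math

def token_entropy(probs):
    """Shannon entropy (bits) of one next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def find_entropy_lulls(step_probs, threshold=0.5, min_run=3):
    """Return (start, end) spans where entropy stays below threshold."""
    entropies = [token_entropy(p) for p in step_probs]
    lulls, start = [], None
    for i, h in enumerate(entropies + [float("inf")]):  # sentinel flushes tail
        if h < threshold and start is None:
            start = i
        elif h >= threshold and start is not None:
            if i - start >= min_run:
                lulls.append((start, i))
            start = None
    return lulls

# Toy trace: normal generation, then a suspiciously deterministic run.
normal = [0.4, 0.3, 0.2, 0.1]          # ~1.85 bits
forced = [0.99, 0.005, 0.003, 0.002]   # ~0.1 bits
trace = [normal] * 3 + [forced] * 4 + [normal] * 2
print(find_entropy_lulls(trace))  # → [(3, 7)]
```

A sustained low-entropy span is the tell: the model is emitting tokens it was all but forced to produce.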

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 15
🧠

Learning to Generate Secure Code via Token-Level Rewards

Researchers have developed Vul2Safe, a new framework for generating secure code using large language models, which addresses security vulnerabilities through self-reflection and token-level reinforcement learning. The approach introduces the PrimeVul+ dataset and SRCode training framework to provide more precise optimization of security patterns in code generation.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 12
🧠

Enhancing Continual Learning for Software Vulnerability Prediction: Addressing Catastrophic Forgetting via Hybrid-Confidence-Aware Selective Replay for Temporal LLM Fine-Tuning

Researchers developed Hybrid Confidence-Aware Selective Replay (Hybrid-CASR), a continual-learning method that improves AI-based software vulnerability detection by addressing catastrophic forgetting in temporal scenarios. The method achieved a 0.667 Macro-F1 score while reducing training time by 17% compared to baseline approaches on CVE data from 2018–2024.
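
The replay idea fits in a few lines: when fine-tuning on a new time slice, keep a handful of old examples chosen by the model's prediction confidence rather than sampling uniformly. The least-confident-per-class rule and the data below are illustrative assumptions, not the paper's exact criterion:

```python
from collections import defaultdict

def select_replay(samples, per_class=2):
    """samples: list of (example_id, label, confidence). Keep the
    per_class lowest-confidence examples of each label for replay."""
    by_label = defaultdict(list)
    for ex_id, label, conf in samples:
        by_label[label].append((conf, ex_id))
    replay = []
    for label, items in by_label.items():
        items.sort()  # ascending confidence: hardest examples first
        replay.extend(ex_id for _, ex_id in items[:per_class])
    return replay

old_slice = [
    ("cve1", "vulnerable", 0.97), ("cve2", "vulnerable", 0.51),
    ("cve3", "vulnerable", 0.63), ("cve4", "benign", 0.99),
    ("cve5", "benign", 0.55),
]
print(select_replay(old_slice))  # → ['cve2', 'cve3', 'cve5', 'cve4']
```

Replaying the examples the model is least sure about is what keeps older vulnerability patterns from being overwritten by the newest CVEs.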

AI · Neutral · arXiv – CS AI · Mar 2 · 7/10 · 17
🧠

Exploring Robust Intrusion Detection: A Benchmark Study of Feature Transferability in IoT Botnet Attack Detection

Researchers conducted a benchmark study on IoT botnet intrusion detection systems, finding that models trained on one network domain suffer significant performance degradation when applied to different environments. The study evaluated three feature sets across four IoT datasets and provided guidelines for improving cross-domain robustness through better feature engineering and algorithm selection.

AI · Neutral · arXiv – CS AI · Mar 2 · 6/10 · 14
🧠

Jailbreak Foundry: From Papers to Runnable Attacks for Reproducible Benchmarking

Researchers introduce Jailbreak Foundry (JBF), a system that automatically converts AI jailbreak research papers into executable code modules for standardized testing. The system successfully reproduced 30 attacks with high accuracy and reduces implementation code by nearly half while enabling consistent evaluation across multiple AI models.

Crypto · Neutral · Bitcoinist · Feb 27 · 6/10 · 6
⛓️

Ransomware Crooks Are Busier Than Ever — But Making Less Money, Researchers Say

Ransomware attacks surged 50% in 2025 with nearly 8,000 incidents recorded, but hackers earned less money than the previous year according to Chainalysis research. Despite the increased frequency of attacks, the cybercrime business appears to be generating lower returns per incident.

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠

A Lightweight IDS for Early APT Detection Using a Novel Feature Selection Method

Researchers developed a lightweight intrusion detection system using XGBoost and explainable AI to detect Advanced Persistent Threats (APTs) at early stages. The system reduced required features from 77 to just 4 while maintaining 97% precision and 100% recall performance.
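
The summary doesn't spell out the novel selection method itself, but its general shape, ranking all 77 flow features by a model-derived importance score and keeping only the top few, is easy to sketch. The feature names and scores below are invented for illustration:

```python
# Keep the k features with the highest importance scores, as one might
# obtain from XGBoost's feature_importances_. Scores here are made up.

def select_top_features(importances: dict[str, float], k: int = 4) -> list[str]:
    """Return the k feature names with the highest importance scores."""
    ranked = sorted(importances, key=importances.get, reverse=True)
    return ranked[:k]

scores = {
    "flow_duration": 0.31,
    "fwd_pkt_len_mean": 0.24,
    "dst_port": 0.18,
    "bwd_iat_total": 0.12,
    "syn_flag_cnt": 0.06,
    "pkt_size_var": 0.05,
}
print(select_top_features(scores))
# → ['flow_duration', 'fwd_pkt_len_mean', 'dst_port', 'bwd_iat_total']
```

Cutting 77 features to 4 is what makes the detector lightweight enough for early-stage, high-throughput monitoring.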

AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠

Towards Small Language Models for Security Query Generation in SOC Workflows

Researchers developed a three-stage framework using Small Language Models (SLMs) to automatically translate natural language queries into Kusto Query Language (KQL) for cybersecurity operations. The approach achieves high accuracy (98.7% syntax, 90.6% semantic) while reducing costs by up to 10x compared to GPT-4, potentially solving bottlenecks in Security Operations Centers.
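
For context, the task maps an analyst's request onto a hunting query. The pair below is a generic Microsoft Sentinel-style example; the `SigninLogs` schema is an assumption, and the query is not taken from the paper:

```python
# Hypothetical input/output pair for natural-language-to-KQL translation.
natural_language = "Which 5 accounts had the most failed sign-ins in the last day?"

kql_query = """\
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"
| summarize FailedAttempts = count() by UserPrincipalName
| top 5 by FailedAttempts
"""

# A syntax check (the paper's 98.7% metric) would parse this query; a
# semantic check (90.6%) would verify it answers the original question.
print(kql_query)
```

Generating queries like this with a small model instead of GPT-4 is where the claimed 10x cost reduction comes from.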

Crypto · Bearish · The Defiant · Feb 26 · 6/10 · 8
⛓️

Ransomware Payments Topped $800 Million in 2025: Chainalysis

Ransomware payments reached over $800 million in 2025 according to Chainalysis data. While hackers' total earnings decreased compared to the previous year, individual victims who chose to pay ransoms faced significantly higher payment amounts.

AI · Bearish · OpenAI News · Feb 25 · 6/10 · 6
🧠

Disrupting malicious uses of AI | February 2026

A new threat report analyzes how malicious actors are combining AI models with websites and social platforms to carry out attacks. The report examines the implications of these AI-powered threats for detection and defense systems.

Crypto · Neutral · DL News · Feb 22 · 6/10 · 5
⛓️

Prosecutors recover $22 million worth of lost Bitcoin

South Korean prosecutors have successfully recovered $22 million worth of Bitcoin that was previously lost to phishing site operators. This represents a significant recovery of stolen cryptocurrency assets through law enforcement action.

$BTC
← Prev · Page 7 of 9 · Next →