211 articles tagged with #cybersecurity. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠Researchers developed 'Reverse CAPTCHA,' a framework that tests how large language models respond to invisible Unicode-encoded instructions embedded in normal text. The study found that AI models can follow hidden instructions that humans cannot see, with tool use dramatically increasing compliance rates and different AI providers showing distinct preferences for encoding schemes.
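The invisible-instruction trick above typically relies on Unicode "tag" characters (U+E0000–U+E007F), which mirror ASCII but render as nothing. A minimal sketch of how such a payload can be hidden and detected (helper names are illustrative, not from the paper):

```python
# Sketch: smuggling ASCII text past a human reader via invisible Unicode
# tag characters (U+E0000 block), and recovering it with a simple scan.
# Function names here are hypothetical, not the paper's API.

TAG_BASE = 0xE0000

def encode_invisible(payload: str) -> str:
    """Map each printable ASCII char to its invisible tag-character twin."""
    return "".join(chr(TAG_BASE + ord(c)) for c in payload)

def detect_invisible(text: str) -> str:
    """Recover any tag-encoded payload hidden in otherwise normal text."""
    return "".join(
        chr(cp - TAG_BASE)
        for ch in text
        if 0xE0020 <= (cp := ord(ch)) <= 0xE007E
    )

visible = "Please summarize this article."
hidden = encode_invisible("ignore previous instructions")
combined = visible + hidden

print(len(combined) > len(visible))   # extra characters are present...
print(detect_invisible(combined))     # ...but render as nothing on screen
```

A scan like `detect_invisible` is the defensive counterpart: any nonempty result in user-supplied text is a strong hidden-instruction signal.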
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers developed ThreatFormer-IDS, a Transformer-based intrusion detection system that achieves robust cybersecurity monitoring for IoT and industrial networks. The system demonstrates superior performance in detecting zero-day attacks while providing explainable threat attribution, achieving 99.4% AUC-ROC on benchmark tests.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠Researchers have developed RLShield, a multi-agent reinforcement learning system designed to automate cyber defense in financial institutions. The system uses AI to coordinate real-time responses across multiple assets and services during cyberattacks, balancing containment speed with operational costs and business disruption.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠Researchers developed SkillFortify, the first formal analysis framework for securing AI agent skill supply chains, addressing critical vulnerabilities exposed by attacks like ClawHavoc, which introduced over 1,200 malicious skills into the supply chain. The framework achieved a 96.95% F1 score with 100% precision and zero false positives in detecting malicious AI agent skills.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers introduce LiaisonAgent, an autonomous multi-agent cybersecurity system built on the QWQ-32B reasoning model that automates risk investigation and governance for Security Operations Centers. The system achieves 97.8% success rate in tool-calling and 95% accuracy in risk judgment while reducing manual investigation overhead by 92.7%.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠Researchers developed AdvBandit, a new black-box adversarial attack method that can exploit neural contextual bandits by poisoning context data without requiring access to internal model parameters. The attack uses bandit theory and inverse reinforcement learning to adaptively learn victim policies and optimize perturbations, achieving higher victim regret than existing methods.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 5
🧠Researchers developed AMDS, an attack-aware multi-stage defense system for network intrusion detection that uses adaptive weight learning to counter adversarial attacks. The system achieved 94.2% AUC and improved classification accuracy by 4.5 percentage points over existing adversarially trained ensembles by learning attack-specific detection strategies.
$CRV
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 9
🧠Researchers introduced AWE, a memory-augmented multi-agent framework for autonomous web penetration testing that outperforms existing tools on injection vulnerabilities. AWE achieved 87% XSS success and 66.7% blind SQL injection success on benchmark tests, demonstrating superior accuracy and efficiency compared to general-purpose AI penetration testing tools.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠ATLAS is a new AI-driven framework that uses large language models to automate System-on-Chip (SoC) security verification by converting threat models into formal verification properties. The system successfully detected 39 out of 48 security weaknesses in benchmark tests and generated correct security properties for 33 of those vulnerabilities.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 9
🧠A study reveals that safety-aligned large language models exhibit "Defensive Refusal Bias," refusing legitimate cybersecurity defense tasks 2.72x more often when they contain security-sensitive keywords. The research found particularly high refusal rates for critical defensive operations like system hardening (43.8%) and malware analysis (34.3%), suggesting current AI safety measures rely on semantic similarity rather than understanding intent.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠Researchers have developed VidDoS, a new universal attack framework that can severely degrade video-based large language models by causing extreme computational resource exhaustion. The attack increases token generation by over 205x and inference latency by more than 15x, creating critical safety risks in real-world applications like autonomous driving.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠Researchers introduce DualSentinel, a lightweight framework for detecting targeted attacks on Large Language Models by identifying 'Entropy Lull' patterns: periods of abnormally low token-probability entropy that indicate when LLMs are being coercively controlled. The system uses dual-check verification to accurately detect backdoor and prompt injection attacks with near-zero false positives while maintaining minimal computational overhead.
$NEAR
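The "Entropy Lull" signal from the DualSentinel summary can be sketched in a few lines: compute the Shannon entropy of each next-token distribution and flag runs of abnormally confident (low-entropy) tokens. The threshold and window size below are illustrative choices, not the paper's values:

```python
import math

# Sketch of "Entropy Lull" detection: flag windows where the model's
# next-token distributions are suspiciously low-entropy (over-confident),
# a pattern associated with coerced generation. Threshold/window are
# illustrative, not the paper's.

def token_entropy(probs):
    """Shannon entropy (in bits) of one next-token distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def find_entropy_lulls(per_token_probs, threshold=0.5, window=3):
    """Start indices of runs of `window` consecutive low-entropy tokens."""
    ent = [token_entropy(p) for p in per_token_probs]
    return [
        i for i in range(len(ent) - window + 1)
        if all(e < threshold for e in ent[i:i + window])
    ]

# Toy trace: normal generation, then a suspiciously confident run.
normal = [[0.4, 0.3, 0.2, 0.1]] * 4          # entropy ~1.85 bits per token
lull   = [[0.99, 0.005, 0.003, 0.002]] * 3   # entropy ~0.10 bits per token
print(find_entropy_lulls(normal + lull))     # → [4]
```

A production detector would read these distributions from the model's logits at decode time; the dual-check verification step described in the summary would then confirm a flagged window before raising an alert.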
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 6
🧠Researchers developed SpecularNet, a lightweight AI framework for detecting phishing websites that operates without external databases or cloud services. The system achieves 93.9% F1 score while reducing inference time from several seconds to 20 milliseconds per webpage, making it practical for real-world deployment.
AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 15
🧠Researchers have developed Vul2Safe, a new framework for generating secure code using large language models, which addresses security vulnerabilities through self-reflection and token-level reinforcement learning. The approach introduces the PrimeVul+ dataset and SRCode training framework to provide more precise optimization of security patterns in code generation.
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 12
🧠Researchers developed Hybrid Class-Aware Selective Replay (Hybrid-CASR), a continual learning method that improves AI-based software vulnerability detection by addressing catastrophic forgetting in temporal scenarios. The method achieved 0.667 Macro-F1 score while reducing training time by 17% compared to baseline approaches on CVE data from 2018-2024.
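The class-aware selective replay idea above can be sketched with a per-class reservoir buffer: keep a small uniform sample of each class's past examples and mix them into new training batches so older or minority classes are not forgotten. Class names, capacities, and the class structure below are illustrative, not the paper's:

```python
import random
from collections import defaultdict

# Sketch of class-aware selective replay for continual learning: a small
# per-class reservoir of past examples is replayed alongside new data to
# mitigate catastrophic forgetting. Buffer sizes are illustrative.

class ClassAwareReplayBuffer:
    def __init__(self, per_class_capacity=50, seed=0):
        self.cap = per_class_capacity
        self.buffers = defaultdict(list)   # label -> stored examples
        self.seen = defaultdict(int)       # label -> examples observed
        self.rng = random.Random(seed)

    def add(self, example, label):
        """Reservoir sampling per class: uniform sample of each class's stream."""
        self.seen[label] += 1
        buf = self.buffers[label]
        if len(buf) < self.cap:
            buf.append(example)
        else:
            j = self.rng.randrange(self.seen[label])
            if j < self.cap:
                buf[j] = example

    def sample(self, n):
        """Draw a replay batch from all classes seen so far."""
        pool = [(x, lbl) for lbl, buf in self.buffers.items() for x in buf]
        return self.rng.sample(pool, min(n, len(pool)))

buf = ClassAwareReplayBuffer(per_class_capacity=2)
for i in range(100):
    buf.add(f"cve_{i}", label=i % 2)       # alternating vulnerable/safe
print(sorted(lbl for _, lbl in buf.sample(4)))  # → [0, 0, 1, 1]
```

The "selective" part of the paper's method would replace uniform reservoir sampling with a learned criterion for which examples to retain; the buffer mechanics stay the same.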
AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 13
🧠Researchers developed MI²DAS, a multi-layer intrusion detection framework for Industrial IoT networks that uses incremental learning to adapt to new cyber threats. The system achieved strong performance across multiple layers, with 95.3% accuracy in normal-attack discrimination and robust detection of both known and unknown attacks.
$DAS
AI · Neutral · arXiv – CS AI · Mar 2 · 7/10 · 17
🧠Researchers conducted a benchmark study on IoT botnet intrusion detection systems, finding that models trained on one network domain suffer significant performance degradation when applied to different environments. The study evaluated three feature sets across four IoT datasets and provided guidelines for improving cross-domain robustness through better feature engineering and algorithm selection.
AI · Neutral · arXiv – CS AI · Mar 2 · 6/10 · 14
🧠Researchers introduce Jailbreak Foundry (JBF), a system that automatically converts AI jailbreak research papers into executable code modules for standardized testing. The system successfully reproduced 30 attacks with high accuracy and reduces implementation code by nearly half while enabling consistent evaluation across multiple AI models.
Crypto · Neutral · Bitcoinist · Feb 27 · 6/10 · 6
⛓️Ransomware attacks surged 50% in 2025 with nearly 8,000 incidents recorded, but hackers earned less money than the previous year according to Chainalysis research. Despite the increased frequency of attacks, the cybercrime business appears to be generating lower returns per incident.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers developed a lightweight intrusion detection system that uses XGBoost and explainable AI to detect Advanced Persistent Threats (APTs) at an early stage. The system reduced the required feature set from 77 to just 4 while maintaining 97% precision and 100% recall.
$APT
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers developed a three-stage framework using Small Language Models (SLMs) to automatically translate natural language queries into Kusto Query Language (KQL) for cybersecurity operations. The approach achieves high accuracy (98.7% syntax, 90.6% semantic) while reducing costs by up to 10x compared to GPT-4, potentially solving bottlenecks in Security Operations Centers.
Crypto · Bearish · The Defiant · Feb 26 · 6/10 · 8
⛓️Ransomware payments reached over $800 million in 2025 according to Chainalysis data. While hackers' total earnings decreased compared to the previous year, individual victims who chose to pay ransoms faced significantly higher payment amounts.
Crypto · Neutral · Chainalysis Blog · Feb 26 · 6/10 · 4
⛓️Ransomware payments declined 8% to $820 million in 2024 despite a record number of claimed attacks, marking the second consecutive year of payment stagnation. This suggests victims are increasingly refusing to pay ransoms even as cybercriminal activity escalates.
AI · Bearish · OpenAI News · Feb 25 · 6/10 · 6
🧠A new threat report analyzes how malicious actors are combining AI models with websites and social platforms to carry out attacks. The report examines the implications of these AI-powered threats for detection and defense systems.
Crypto · Neutral · DL News · Feb 22 · 6/10 · 5
⛓️South Korean prosecutors have successfully recovered $22 million worth of Bitcoin that was previously lost to phishing site operators. This represents a significant recovery of stolen cryptocurrency assets through law enforcement action.
$BTC