y0news

#cybersecurity News & Analysis

195 articles tagged with #cybersecurity. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

Crypto · Bearish · CoinDesk · 4d ago · 🔥 8/10
⛓️

Why North Korea keeps stealing billions in crypto — out in the open

North Korea's cryptocurrency theft operations have evolved into a sophisticated, state-sponsored threat that operates with relative impunity despite international scrutiny. Security experts warn that the regime's unique position as a nation-state with fewer geopolitical constraints makes it fundamentally different from other cybercriminals, posing an escalating risk to crypto ecosystem security and stability.

AI × Crypto · Bearish · Bankless · 4d ago · 🔥 8/10
🤖

The Web Is About to Get Very Sick

The article warns that quantum computing and AI-powered zero-day discovery threaten to undermine cryptographic security infrastructure that protects the internet. These emerging technologies could render current encryption methods obsolete, necessitating urgent transition to quantum-resistant protocols before adversaries exploit vulnerabilities at scale.

Crypto · Bearish · Crypto Briefing · 5d ago · 🔥 8/10
⛓️

Amanda Wick: Nation-state actors are escalating cyber threats, North Korea’s hacking is a major revenue source, and crypto companies must rethink security protocols | Unchained

Amanda Wick highlights escalating nation-state cyber attacks, particularly from North Korea, which leverage cryptocurrency vulnerabilities as a significant revenue source. The analysis underscores an urgent need for crypto companies to fundamentally strengthen their security protocols against state-sponsored threats.

AI · Bearish · CoinDesk · 6d ago · 7/10
🧠

Mythos AI threat prompts Bessent, Powell to convene bank CEOs for urgent talks

Treasury Secretary Bessent and Federal Reserve Chair Powell are convening bank CEOs for urgent discussions following concerns about Mythos, an AI system capable of rapidly identifying software vulnerabilities and developing sophisticated exploits. The meeting addresses fears that such AI capabilities could pose systemic risks to financial institutions and banking infrastructure.

AI × Crypto · Bearish · CoinTelegraph – AI · 3d ago · 7/10
🤖

Researchers discover malicious AI agent routers that can steal crypto

Researcher Chaofan Shou has identified 26 malicious LLM (Large Language Model) routers that are secretly injecting harmful tool calls and stealing credentials from users. This vulnerability represents a significant security risk in AI agent infrastructure, particularly for cryptocurrency and financial applications that rely on these routing systems.
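The standard defense against this class of attack is client-side: treat every tool call proposed by an upstream router as untrusted and validate it locally before execution. A minimal sketch, with a hypothetical payload shape and tool names (the article does not describe the routers' exact interface):

```python
# Minimal client-side guard for tool calls returned by an untrusted LLM router.
# Tool names, argument keys, and the payload shape here are illustrative only.

ALLOWED_TOOLS = {
    "get_price": {"symbol"},      # allowed tool -> permitted argument keys
    "get_balance": {"address"},
}

# Argument keys that should never leave the client, regardless of tool.
SENSITIVE_KEYS = {"private_key", "seed_phrase", "password", "api_secret"}

def validate_tool_call(call: dict) -> tuple[bool, str]:
    """Return (ok, reason) for a router-proposed tool call."""
    name = call.get("tool")
    args = call.get("args", {})
    if name not in ALLOWED_TOOLS:
        return False, f"tool '{name}' not on allowlist"
    if SENSITIVE_KEYS & set(args):
        return False, "attempted credential exfiltration"
    if not set(args) <= ALLOWED_TOOLS[name]:
        return False, "unexpected argument keys"
    return True, "ok"
```

A deny-by-default allowlist like this catches both injected tool calls and credential-shaped arguments smuggled into otherwise legitimate calls.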

AI · Neutral · Crypto Briefing · 5d ago · 7/10
🧠

Brad Gerstner: Detachment from desires fosters personal achievement, Anthropic’s Mythos reveals critical vulnerabilities, and proactive AI measures are essential for cybersecurity | All-In Podcast

Brad Gerstner discussed Anthropic's AI model discoveries on the All-In Podcast, highlighting how advanced AI systems are exposing critical software vulnerabilities before they become widely exploited. The findings underscore the urgent need for companies to implement proactive cybersecurity measures as AI capabilities accelerate toward mainstream adoption.

🏢 Anthropic
AI × Crypto · Neutral · Crypto Briefing · 5d ago · 7/10
🤖

Rob May: Anthropic’s Mythos could revolutionize cybersecurity, risks of AI misuse by state actors, and the emergence of a two-tier AI economy | TWIST

Anthropic's potential release of the Mythos AI model has triggered international security concerns regarding dual-use applications in cybersecurity. The discussion highlights risks of state-actor misuse of advanced AI systems and signals the emergence of a bifurcated AI economy with different access tiers for different actors.

🏢 Anthropic
AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

ShieldNet: Network-Level Guardrails against Emerging Supply-Chain Injections in Agentic Systems

Researchers have identified a new class of supply-chain threats targeting AI agents through malicious third-party tools and MCP servers. They've created SC-Inject-Bench, a benchmark with over 10,000 malicious tools, and developed ShieldNet, a network-level security framework that achieves 99.5% detection accuracy with minimal false positives.
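A common mitigation for this threat model, independent of the paper's own ShieldNet framework, is to pin each third-party tool manifest by cryptographic hash at review time, so a silently swapped manifest fails to load. A sketch (the manifest fields are hypothetical):

```python
# Supply-chain pinning sketch: record a SHA-256 digest for each third-party
# tool manifest at review time, and refuse to load any manifest that has
# changed since. A generic mitigation, not the paper's ShieldNet itself.
import hashlib
import json

def manifest_digest(manifest: dict) -> str:
    # Canonical JSON so key order doesn't change the digest.
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def load_tool(manifest: dict, pinned: dict[str, str]) -> bool:
    """Allow a tool only if its digest matches the pinned value."""
    name = manifest.get("name", "")
    return pinned.get(name) == manifest_digest(manifest)
```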

AI × Crypto · Neutral · arXiv – CS AI · Apr 7 · 7/10
🤖

CREBench: Evaluating Large Language Models in Cryptographic Binary Reverse Engineering

Researchers introduced CREBench, a benchmark to evaluate large language models' capabilities in cryptographic binary reverse engineering. The best-performing model (GPT-5.4) achieved a 64.03% success rate, while human experts scored 92.19%, showing that AI still lags behind human expertise in cryptographic analysis tasks.

🧠 GPT-5
AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

SecPI: Secure Code Generation with Reasoning Models via Security Reasoning Internalization

Researchers have developed SecPI, a new fine-tuning pipeline that teaches reasoning language models to automatically generate secure code without requiring explicit security instructions. The approach improves secure code generation by 14 percentage points on security benchmarks while maintaining functional correctness.

DeFi · Bearish · Crypto Briefing · Apr 7 · 7/10
💎

Omer Goldberg: Time locks are essential for multisig security, the Drift attack reveals vulnerabilities in DeFi, and admin key protection is critical to prevent exploits | Unchained

Cybersecurity expert Omer Goldberg highlights critical vulnerabilities in DeFi multisig security following the Drift attack. The analysis emphasizes the urgent need for time locks and stronger admin key protection to prevent sophisticated exploits in decentralized finance protocols.

AI × Crypto · Bearish · CoinTelegraph · Apr 6 · 7/10
🤖

New AI cybercrime tool targets crypto, bank KYC systems via deepfakes

Cybercriminals on the darknet are selling a new AI-powered fraud kit designed to bypass KYC verification systems used by cryptocurrency exchanges and banks. The tool uses deepfake technology and real-time voice manipulation to trick identity verification processes on financial platforms.

AI × Crypto · Bearish · Blockonomi · Apr 6 · 7/10
🤖

AI-Powered Hackers Are Making Crypto Wallets Easy Targets — Security Expert Warns

Ledger's CTO warns that AI-powered hackers are making cryptocurrency wallets increasingly vulnerable to attacks, enabling cheaper and faster exploitation methods. The crypto industry lost $1.4 billion to hacks last year, with recent incidents like the $285 million Drift exploit highlighting the growing security threats.

AI · Bearish · arXiv – CS AI · Apr 6 · 7/10
🧠

An Independent Safety Evaluation of Kimi K2.5

An independent safety evaluation of the open-weight AI model Kimi K2.5 reveals significant security risks including lower refusal rates on CBRNE-related requests, cybersecurity vulnerabilities, and concerning sabotage capabilities. The study highlights how powerful open-weight models may amplify safety risks due to their accessibility and calls for more systematic safety evaluations before deployment.

🧠 GPT-5🧠 Claude🧠 Opus
AI · Bullish · arXiv – CS AI · Apr 6 · 7/10
🧠

SentinelAgent: Intent-Verified Delegation Chains for Securing Federal Multi-Agent AI Systems

SentinelAgent introduces a formal framework for securing multi-agent AI systems through verifiable delegation chains, achieving 100% accuracy in testing with zero false positives. The system uses seven verification properties and a non-LLM authority service to ensure secure delegation between AI agents in federal environments.
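The paper's exact protocol isn't reproduced here, but the core idea of a verifiable delegation chain can be sketched generically: a central non-LLM authority signs each delegation step (delegator, delegatee, declared intent) with a key no agent holds, and a verifier checks both the signatures and that the chain links connect. Function names and the message format below are illustrative assumptions:

```python
# Delegation-chain sketch: a non-LLM authority issues an HMAC tag over each
# delegation step, binding delegator, delegatee, and declared intent.
# Illustrates verifiable delegation generally, not SentinelAgent's protocol.
import hashlib
import hmac

AUTHORITY_KEY = b"demo-key"  # held only by the authority service

def sign_step(delegator: str, delegatee: str, intent: str) -> str:
    msg = f"{delegator}->{delegatee}:{intent}".encode()
    return hmac.new(AUTHORITY_KEY, msg, hashlib.sha256).hexdigest()

def verify_chain(chain: list[tuple[str, str, str, str]]) -> bool:
    """chain: [(delegator, delegatee, intent, tag)]; links must connect."""
    prev_agent = None
    for delegator, delegatee, intent, tag in chain:
        if prev_agent is not None and delegator != prev_agent:
            return False  # broken chain: delegator wasn't the last delegatee
        expected = sign_step(delegator, delegatee, intent)
        if not hmac.compare_digest(expected, tag):
            return False  # forged or altered step
        prev_agent = delegatee
    return True
```

Because the tag binds the intent, an agent cannot reuse a valid delegation to perform a different action, and a compromised agent cannot mint steps on its own.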

AI · Bearish · arXiv – CS AI · Apr 6 · 7/10
🧠

Supply-Chain Poisoning Attacks Against LLM Coding Agent Skill Ecosystems

Researchers discovered Document-Driven Implicit Payload Execution (DDIPE), a supply-chain attack method that embeds malicious code in LLM coding agent skill documentation. The attack achieves 11.6% to 33.5% bypass rates across multiple frameworks, with 2.5% evading both detection and security alignment measures.
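A first line of defense against document-embedded payloads is simply scanning skill documentation for executable patterns before the agent ingests it. The sketch below uses an illustrative pattern list; real DDIPE-style payloads are crafted to evade exactly this kind of static check, which is why the paper's bypass rates matter:

```python
# Naive static scanner for LLM-agent "skill" documentation: flag doc text
# that embeds executable payload patterns. The pattern list is illustrative,
# not a complete defense against document-driven payload execution.
import re

SUSPICIOUS = [
    r"subprocess\.", r"os\.system", r"eval\(", r"exec\(",
    r"curl\s+https?://", r"base64\s+-d", r"rm\s+-rf",
]

def scan_skill_doc(text: str) -> list[str]:
    """Return the suspicious patterns found in a skill document."""
    return [p for p in SUSPICIOUS if re.search(p, text)]
```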

AI · Neutral · arXiv – CS AI · Apr 6 · 7/10
🧠

Enhancing Robustness of Federated Learning via Server Learning

Researchers propose a new heuristic algorithm combining server learning with client update filtering and geometric median aggregation to improve federated learning robustness against malicious attacks. The approach maintains model accuracy even when over 50% of clients are malicious and works with non-identical data distributions across clients.
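Geometric-median aggregation is the standard robust-aggregation building block here: unlike a plain mean, a minority of malicious update vectors cannot drag the aggregate arbitrarily far. A pure-Python sketch using Weiszfeld's iteration (this illustrates the general technique, not the paper's full server-learning pipeline):

```python
# Geometric-median aggregation over client update vectors via Weiszfeld's
# algorithm. Robust: outlier vectors get down-weighted by inverse distance.
import math

def geometric_median(points, iters=100, eps=1e-9):
    """points: list of equal-length vectors (lists of floats)."""
    dim = len(points[0])
    # Start from the coordinate-wise mean.
    y = [sum(p[d] for p in points) / len(points) for d in range(dim)]
    for _ in range(iters):
        # Weight each point by 1/distance to the current estimate.
        weights = [1.0 / max(math.dist(p, y), eps) for p in points]
        total = sum(weights)
        y = [sum(w * p[d] for w, p in zip(weights, points)) / total
             for d in range(dim)]
    return y
```

With three honest updates near the origin and one malicious update at (100, 100), the mean lands near (25, 25) while the geometric median stays with the honest cluster.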

AI × Crypto · Bearish · CoinDesk · Apr 5 · 7/10
🤖

AI is making crypto's security problem even worse, Ledger CTO warns

Ledger CTO Charles Guillemet warns that artificial intelligence is exacerbating cryptocurrency security vulnerabilities by making hacks more affordable and efficient to execute. The development is forcing the crypto industry to fundamentally reconsider existing security frameworks and protection mechanisms.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10
🧠

AI Security in the Foundation Model Era: A Comprehensive Survey from a Unified Perspective

Researchers propose a unified framework for AI security threats that categorizes attacks based on four directional interactions between data and models. The comprehensive taxonomy addresses vulnerabilities in foundation models through four categories: data-to-data, data-to-model, model-to-data, and model-to-model attacks.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10
🧠

DRIFT: Dynamic Rule-Based Defense with Injection Isolation for Securing LLM Agents

Researchers introduce DRIFT, a new security framework designed to protect AI agents from prompt injection attacks through dynamic rule enforcement and memory isolation. The system uses a three-component approach with a Secure Planner, Dynamic Validator, and Injection Isolator to maintain security while preserving functionality across diverse AI models.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10
🧠

The System Prompt Is the Attack Surface: How LLM Agent Configuration Shapes Security and Creates Exploitable Vulnerabilities

Research reveals that LLM system prompt configuration creates massive security vulnerabilities, with the same model's phishing detection rates ranging from 1% to 97% based solely on prompt design. The study PhishNChips demonstrates that more specific prompts can paradoxically weaken AI security by replacing robust multi-signal reasoning with exploitable single-signal dependencies.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10
🧠

DiffuGuard: How Intrinsic Safety is Lost and Found in Diffusion Large Language Models

Researchers identified critical security vulnerabilities in Diffusion Large Language Models (dLLMs) that differ from traditional autoregressive LLMs, stemming from their iterative generation process. They developed DiffuGuard, a training-free defense framework that reduces jailbreak attack success rates from 47.9% to 14.7% while maintaining model performance.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10
🧠

PIDP-Attack: Combining Prompt Injection with Database Poisoning Attacks on Retrieval-Augmented Generation Systems

Researchers have developed PIDP-Attack, a new cybersecurity threat that combines prompt injection with database poisoning to manipulate AI responses in Retrieval-Augmented Generation (RAG) systems. The attack method demonstrated 4-16% higher success rates than existing techniques across multiple benchmark datasets and eight different large language models.

Page 1 of 8