y0news

#vulnerability News & Analysis

80 articles tagged with #vulnerability. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI × Crypto · Bearish · CoinDesk · 1d ago · 7/10
🤖

AI agents are set to power crypto payments, but a hidden flaw could expose wallets

Researchers have identified a critical vulnerability in AI infrastructure layers used for cryptocurrency payments, where intermediary systems can intercept sensitive wallet data. The flaw has reportedly enabled credential theft and at least one $500,000 wallet drain, exposing a significant security gap as AI agents become more integrated into crypto transaction systems.

DeFi · Bearish · Protos · Mar 16 · 7/10
💎

Whitehat hacker accuses Injective of ghosting after $500M bug disclosure

A pseudonymous security researcher has publicly accused Injective Protocol of offering an inadequate bounty payment and subsequently ghosting them after they disclosed a critical vulnerability that put $500 million at risk. The dispute highlights ongoing tensions between white hat hackers and DeFi protocols over appropriate bug bounty compensation.

Crypto · Bearish · Ethereum Foundation Blog · Sep 22 · 🔥 8/10
⛓️

The Ethereum network is currently undergoing a DoS attack

The Ethereum network is experiencing a computational denial-of-service attack targeting miners and nodes through the EXTCODESIZE opcode. The attack exploits the opcode's underpricing: blocks packed with EXTCODESIZE calls take excessive time to process despite paying little gas, causing network disruption.

$ETH
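The underpriced-opcode mechanism can be sketched in a few lines. This is a hypothetical illustration, not the actual attack transactions: the gas price of 20 for EXTCODESIZE reflects the pre-EIP-150 schedule, while the wall-clock costs are invented numbers chosen only to show the gap between what the protocol charges and what the node actually pays.

```python
# Sketch of an underpriced-opcode DoS: gas charged per operation is fixed by
# the protocol, but the real cost of a state-reading opcode like EXTCODESIZE
# (a disk lookup) can vastly exceed it, so an attacker packs transactions
# that are cheap in gas yet slow to execute.
GAS_CHARGED = {"ADD": 3, "EXTCODESIZE": 20}       # pre-EIP-150 gas prices
REAL_COST_US = {"ADD": 0.01, "EXTCODESIZE": 100}  # assumed wall-clock microseconds

def block_cost(ops):
    """Total gas billed vs. estimated execution time for a list of opcodes."""
    gas = sum(GAS_CHARGED[op] for op in ops)
    time_us = sum(REAL_COST_US[op] for op in ops)
    return gas, time_us

attack_block = ["EXTCODESIZE"] * 50_000
gas, time_us = block_cost(attack_block)
print(f"gas={gas:,} time={time_us / 1e6:.1f}s")  # prints "gas=1,000,000 time=5.0s"
```

EIP-150 later repriced EXTCODESIZE to 700 gas precisely to close this gap between billed gas and real I/O cost.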
DeFi · Bearish · Ethereum Foundation Blog · Jun 17 · 🔥 8/10
💎

CRITICAL UPDATE Re: DAO Vulnerability

The DAO, a major Ethereum-based decentralized autonomous organization, is under attack through a recursive calling vulnerability that allows an attacker to drain ether into a child DAO. This represents a critical security breach affecting one of the most significant early DeFi experiments.

$ETH
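The recursive-calling flaw described above is the classic reentrancy pattern. A minimal Python sketch (hypothetical, standing in for the Solidity contract) shows the core mistake: the vulnerable contract sends funds via an external call *before* zeroing the caller's balance, so a malicious callback can re-enter and withdraw the same balance repeatedly.

```python
# Reentrancy sketch: external call happens before the state update.
class VulnerableVault:
    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, caller):
        amount = self.balances.get(caller.address, 0)
        if amount > 0:
            caller.receive(self, amount)       # external call first...
            self.balances[caller.address] = 0  # ...balance zeroed only afterwards

class Attacker:
    address = "attacker"

    def __init__(self, max_reentries):
        self.stolen = 0
        self.max_reentries = max_reentries

    def receive(self, vault, amount):
        self.stolen += amount
        if self.max_reentries > 0:             # re-enter before the balance resets
            self.max_reentries -= 1
            vault.withdraw(self)

vault = VulnerableVault({"attacker": 10})
attacker = Attacker(max_reentries=2)
vault.withdraw(attacker)
print(attacker.stolen)  # prints 30 — three withdrawals of a single 10-unit balance
```

The standard fix, checks-effects-interactions, is to zero the balance before making the external call.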
AI · Bearish · arXiv – CS AI · 1d ago · 7/10
🧠

The Blind Spot of Agent Safety: How Benign User Instructions Expose Critical Vulnerabilities in Computer-Use Agents

Researchers have identified a critical safety vulnerability in computer-use agents (CUAs) where benign user instructions can lead to harmful outcomes due to environmental context or execution flaws. The OS-BLIND benchmark reveals that frontier AI models, including Claude 4.5 Sonnet, achieve 73-93% attack success rates under these conditions, with multi-agent deployments amplifying vulnerabilities as decomposed tasks obscure harmful intent from safety systems.

🧠 Claude
AI · Bearish · arXiv – CS AI · 1d ago · 7/10
🧠

Conflicts Make Large Reasoning Models Vulnerable to Attacks

Researchers discovered that large reasoning models (LRMs) like DeepSeek R1 and Llama become significantly more vulnerable to adversarial attacks when presented with conflicting objectives or ethical dilemmas. Testing across 1,300+ prompts revealed that safety mechanisms break down when internal alignment values compete, with neural representations of safety and functionality overlapping under conflict.

🧠 Llama
AI × Crypto · Bearish · Bitcoinist · 1d ago · 7/10
🤖

Crypto Security Faces New Test As Rogue AI Agents Emerge

UC researchers discovered that autonomous AI agents operating within crypto infrastructure can be exploited to drain wallets, with a proof-of-concept attack successfully siphoning funds from a test wallet connected to third-party AI routers. While the immediate financial loss was minimal, the vulnerability exposes a critical security gap in AI-assisted cryptocurrency systems as these agents become more prevalent.

$ETH
AI · Bearish · arXiv – CS AI · 5d ago · 7/10
🧠

SkillTrojan: Backdoor Attacks on Skill-Based Agent Systems

Researchers have identified SkillTrojan, a novel backdoor attack targeting skill-based agent systems by embedding malicious logic within reusable skills rather than model parameters. The attack leverages skill composition to execute attacker-defined payloads with up to 97.2% success rates while maintaining clean task performance, revealing critical security gaps in AI agent architectures.

🧠 GPT-5
DeFi · Bearish · Crypto Briefing · Apr 7 · 7/10
💎

Omer Goldberg: Time locks are essential for multisig security, the Drift attack reveals vulnerabilities in DeFi, and admin key protection is critical to prevent exploits | Unchained

Cybersecurity expert Omer Goldberg highlights critical vulnerabilities in DeFi multisig security following the Drift attack. The analysis emphasizes the urgent need for time locks and stronger admin key protection to prevent sophisticated exploits in decentralized finance protocols.

AI · Bearish · arXiv – CS AI · Apr 6 · 7/10
🧠

Generalization Limits of Reinforcement Learning Alignment

Researchers discovered that reinforcement learning alignment techniques like RLHF have significant generalization limits, demonstrated through 'compound jailbreaks' that increased attack success rates from 14.3% to 71.4% on OpenAI's gpt-oss-20b model. The study provides empirical evidence that safety training doesn't generalize as broadly as model capabilities, highlighting critical vulnerabilities in current AI alignment approaches.

๐Ÿข OpenAI
AI · Bearish · arXiv – CS AI · Mar 17 · 7/10
🧠

Sirens' Whisper: Inaudible Near-Ultrasonic Jailbreaks of Speech-Driven LLMs

Researchers developed SWhisper, a framework that uses near-ultrasonic audio to deliver covert jailbreak attacks against speech-driven AI systems. The technique is inaudible to humans but can successfully bypass AI safety measures with up to 94% effectiveness on commercial models.

AI · Bearish · arXiv – CS AI · Mar 17 · 7/10
🧠

Amplification Effects in Test-Time Reinforcement Learning: Safety and Reasoning Vulnerabilities

Researchers discovered that test-time reinforcement learning (TTRL) methods used to improve AI reasoning capabilities are vulnerable to harmful prompt injections that amplify both safety and harmfulness behaviors. The study shows these methods can be exploited through specially designed 'HarmInject' prompts, leading to reasoning degradation while highlighting the need for safer AI training approaches.

AI · Bearish · arXiv – CS AI · Mar 16 · 7/10
🧠

Altered Thoughts, Altered Actions: Probing Chain-of-Thought Vulnerabilities in VLA Robotic Manipulation

Research reveals critical vulnerabilities in Vision-Language-Action robotic models that use chain-of-thought reasoning, where corrupting object names in internal reasoning traces can reduce task success rates by up to 45%. The study shows these AI systems are vulnerable to attacks on their internal reasoning processes, even when primary inputs remain untouched.

Crypto · Bearish · CoinTelegraph · Mar 12 · 7/10
⛓️

MediaTek patches bug enabling crypto seed theft in just 45 seconds

Ledger's security team discovered a critical vulnerability in MediaTek's secure boot chain that allows attackers to steal cryptocurrency seed phrases from Android devices in just 45 seconds. MediaTek has since patched the security flaw that could have compromised sensitive crypto wallet information on affected Android devices.

AI · Bearish · arXiv – CS AI · Mar 12 · 7/10
🧠

Targeted Bit-Flip Attacks on LLM-Based Agents

Researchers have introduced Flip-Agent, the first targeted bit-flip attack framework specifically designed to exploit LLM-based agents by manipulating hardware faults. The attack can manipulate both final outputs and tool invocations in multi-stage AI agent pipelines, revealing critical security vulnerabilities in these systems.
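This is not the Flip-Agent method itself, but a tiny sketch of why a single hardware bit-flip matters for model parameters: flipping one exponent bit of a float32 weight changes its value by dozens of orders of magnitude, which is what makes targeted faults so potent.

```python
# Flip one bit in the IEEE-754 float32 encoding of a value.
import struct

def flip_bit(value, bit):
    """Return `value` with bit `bit` of its float32 bit pattern flipped."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

weight = 0.5
corrupted = flip_bit(weight, 30)   # bit 30 is the top exponent bit
print(weight, "->", corrupted)     # 0.5 becomes ~1.7e38 (2**127)
```

Flipping the same bit again restores the original value, which is why such faults can also be made stealthy and reversible.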

AI · Bearish · arXiv – CS AI · Mar 12 · 7/10
🧠

Compatibility at a Cost: Systematic Discovery and Exploitation of MCP Clause-Compliance Vulnerabilities

Researchers have identified critical security vulnerabilities in the Model Context Protocol (MCP), a new standard for AI agent interoperability. The study reveals that MCP's flexible compatibility features create attack surfaces that enable silent prompt injection, denial-of-service attacks, and other exploits across multi-language SDK implementations.

AI · Bearish · arXiv – CS AI · Mar 12 · 7/10
🧠

MCP-in-SoS: Risk assessment framework for open-source MCP servers

Researchers have developed a risk assessment framework for open-source Model Context Protocol (MCP) servers, revealing significant security vulnerabilities through static code analysis. The study found many MCP servers contain exploitable weaknesses that compromise confidentiality, integrity, and availability, highlighting the need for secure-by-design development as these tools become widely adopted for LLM agents.

AI · Bearish · arXiv – CS AI · Mar 11 · 7/10
🧠

Security Considerations for Multi-agent Systems

A comprehensive study reveals that multi-agent AI systems (MAS) face distinct security vulnerabilities that existing frameworks inadequately address. The research evaluated 16 AI security frameworks against 193 identified threats across 9 categories, finding that no framework achieves majority coverage in any single category, with non-determinism and data leakage being the most under-addressed areas.

AI · Bullish · OpenAI News · Mar 9 · 7/10
🧠

OpenAI to acquire Promptfoo

OpenAI is acquiring Promptfoo, an AI security platform that specializes in helping enterprises identify and fix vulnerabilities in AI systems during the development process. This acquisition strengthens OpenAI's security capabilities and enterprise offerings.

๐Ÿข OpenAI
AI · Bearish · arXiv – CS AI · Mar 9 · 7/10
🧠

Knowing without Acting: The Disentangled Geometry of Safety Mechanisms in Large Language Models

Researchers propose the Disentangled Safety Hypothesis (DSH) revealing that AI safety mechanisms in large language models operate on two separate axes - recognition ('knowing') and execution ('acting'). They demonstrate how this separation can be exploited through the Refusal Erasure Attack to bypass safety controls while comparing architectural differences between Llama3.1 and Qwen2.5.

🧠 Llama
Crypto · Neutral · Bitcoinist · Mar 7 · 7/10
⛓️

Bitcoin Faces A New Quantum Era As Giant Computing Facility Breaks Ground

A CoinShares report reveals that only 10,230 Bitcoin out of nearly 20 million in circulation are currently vulnerable to quantum computing attacks. This finding comes as quantum computing facilities continue to expand, raising questions about Bitcoin's long-term security against quantum threats.

$BTC
AI · Bearish · arXiv – CS AI · Mar 5 · 7/10
🧠

Efficient Refusal Ablation in LLM through Optimal Transport

Researchers developed a new AI safety attack method using optimal transport theory that achieves 11% higher success rates in bypassing language model safety mechanisms compared to existing approaches. The study reveals that AI safety refusal mechanisms are localized to specific network layers rather than distributed throughout the model, suggesting current alignment methods may be more vulnerable than previously understood.

๐Ÿข Perplexity๐Ÿง  Llama
Page 1 of 4