216 articles tagged with #ai-security. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI × Crypto · Bearish · CoinTelegraph · 1d ago · 🔥 8/10
🤖 North Korean hackers executed a sophisticated attack on Zerion using AI-enabled social engineering tactics, marking the second major long-term social engineering campaign this month, following the $280 million Drift Protocol exploit. The incident demonstrates how threat actors are leveraging artificial intelligence to enhance the effectiveness and scale of credential compromise attacks against cryptocurrency platforms.
AI × Crypto · Bearish · CoinDesk · 3d ago · 7/10
🤖 Researchers have identified a critical vulnerability in AI infrastructure layers used for cryptocurrency payments, where intermediary systems can intercept sensitive wallet data. The flaw has reportedly enabled credential theft and at least one $500,000 wallet drain, exposing a significant security gap as AI agents become more integrated into crypto transaction systems.
AI · Bearish · Fortune Crypto · 6d ago · 🔥 8/10
🧠 Anthropic's latest AI model discovered 27-year-old security vulnerabilities that human researchers missed, prompting Treasury Secretary Scott Bessent and Fed Chair Jerome Powell to convene an emergency meeting with major Wall Street CEOs. The incident highlights critical gaps in legacy system security and raises questions about AI's expanding role in identifying financial infrastructure risks.
🏢 Anthropic
AI · Bearish · CoinDesk · 6d ago · 7/10
🧠 Treasury Secretary Bessent and Federal Reserve Chair Powell are convening bank CEOs for urgent discussions following concerns about Mythos, an AI system capable of rapidly identifying software vulnerabilities and developing sophisticated exploits. The meeting addresses fears that such AI capabilities could pose systemic risks to financial institutions and banking infrastructure.
AI · Bearish · Fortune Crypto · 1d ago · 7/10
🧠 A retired general warns that America's dependence on third-party AI systems from providers like Anthropic creates critical national security vulnerabilities, as the Pentagon cannot fully control or guarantee the security of rented AI infrastructure. The U.S. military's reliance on external AI providers exposes strategic weaknesses in the AI arms race against adversaries like China and Russia.
🏢 Anthropic
AI · Bearish · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers have identified a critical privacy vulnerability in LLM-based multi-agent systems, demonstrating that communication topologies can be reverse-engineered through black-box attacks. The Communication Inference Attack (CIA) achieves up to 99% accuracy in inferring how agents communicate, exposing significant intellectual property and security risks in AI systems.
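The attack surface described above can be illustrated with a toy probe: perturb one agent's output and observe which downstream outputs change. This is a minimal sketch under assumed conditions (a deterministic three-agent pipeline with hypothetical agent names), not the paper's actual Communication Inference Attack; a naive probe like this recovers reachability between agents rather than direct edges.

```python
# Toy black-box probe of a multi-agent system's hidden communication
# topology. Agent names and the pipeline itself are hypothetical.

# Hidden topology the attacker does not know: planner -> coder -> reviewer.
HIDDEN_EDGES = {("planner", "coder"), ("coder", "reviewer")}
AGENTS = ["planner", "coder", "reviewer"]  # execution order

def run_system(perturbed=None):
    """Run the pipeline; optionally perturb one agent's output."""
    outputs = {}
    for agent in AGENTS:
        upstream = [outputs[a] for a in AGENTS
                    if (a, agent) in HIDDEN_EDGES and a in outputs]
        text = agent + "".join(upstream)
        if agent == perturbed:
            text += "<PERTURB>"
        outputs[agent] = text
    return outputs

def infer_topology():
    """Perturb each agent in turn; any agent whose output changes is
    downstream of the perturbed one. Note this yields the transitive
    closure (reachability), not only direct edges."""
    baseline = run_system()
    edges = set()
    for src in AGENTS:
        changed = run_system(perturbed=src)
        for dst in AGENTS:
            if dst != src and changed[dst] != baseline[dst]:
                edges.add((src, dst))
    return edges

print(sorted(infer_topology()))
```

Even this crude probe recovers which agents influence which, which is why the paper's far stronger black-box attack, operating without the ability to inject perturbations so freely, is notable.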
AI · Bullish · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers propose Coupled Weight and Activation Constraints (CWAC), a novel safety alignment technique for large language models that simultaneously constrains weight updates and regularizes activation patterns to prevent harmful outputs during fine-tuning. The method demonstrates that existing single-constraint approaches are insufficient and outperforms baselines across multiple LLMs while maintaining task performance.
AI · Bullish · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers introduce ASGuard, a mechanistically-informed framework that identifies and mitigates vulnerabilities in large language models' safety mechanisms, particularly those exploited by targeted jailbreaking attacks like tense-changing prompts. By using circuit analysis to locate vulnerable attention heads and applying channel-wise scaling vectors, ASGuard reduces attack success rates while maintaining model utility and general capabilities.
AI · Neutral · Ars Technica – AI · 2d ago · 7/10
🧠 The UK government's Mythos AI has become the first AI system to successfully complete a complex multi-step cybersecurity infiltration challenge, demonstrating tangible progress in AI capability assessment. This breakthrough helps distinguish genuine AI security threats from speculative hype, providing clearer benchmarks for evaluating AI systems' real-world vulnerabilities.
AI · Bearish · Fortune Crypto · 2d ago · 7/10
🧠 Anthropic's Mythos model demonstrates that AI systems can identify security vulnerabilities significantly faster than organizations can develop and deploy patches, creating a critical gap in cybersecurity responsiveness. This capability mismatch poses systemic risks across industries relying on AI systems and raises questions about responsible disclosure timelines and vulnerability management practices.
🏢 Anthropic
AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers demonstrate that safety evaluations of persona-imbued large language models using only prompt-based testing are fundamentally incomplete, as activation steering reveals entirely different vulnerability profiles across model architectures. Testing across four models surfaces a 'prosocial persona paradox': conscientious personas that appear safe under prompting become the most vulnerable to activation-steering attacks, indicating that single-method safety assessments can miss critical failure modes.
🧠 Llama
AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers reveal a significant gap between laboratory performance and real-world reliability in AI-generated-media detectors: models achieving 99% accuracy in controlled settings degrade substantially under platform-specific transformations like compression and resizing. The study introduces a platform-aware adversarial evaluation framework showing that detectors become vulnerable in realistic attack scenarios, highlighting critical security risks in current AI-detection benchmarks.
AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers demonstrate critical vulnerabilities in watermarking techniques designed for autoregressive image generators, showing that watermarks can be removed or forged with access to only a single watermarked image and no knowledge of model secrets. These findings undermine the reliability of watermarking as a defense against synthetic content in training datasets and enable attackers to manipulate authentic images to falsely appear as AI-generated content.
AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers have developed EZ-MIA, a training-free membership inference attack that dramatically improves detection of memorized data in fine-tuned language models by analyzing probability shifts at error positions. The method achieves 3.8x higher detection rates than previous approaches on GPT-2 and demonstrates that privacy risks in fine-tuned models are substantially greater than previously understood.
🧠 Llama
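The core signal here, probability shifts at positions where the base model errs, can be sketched with toy numbers. The log-prob values and the averaging heuristic below are illustrative assumptions, not the paper's actual EZ-MIA scoring procedure.

```python
# Toy membership-inference signal: for a candidate sequence, compare
# per-token log-probs under the base model vs. the fine-tuned model,
# but only at "error positions" where the base model assigned low
# probability to the true token. Memorized (member) sequences show
# large probability gains exactly at those positions.

def mia_score(base_logprobs, finetuned_logprobs, err_thresh=-2.0):
    """Mean log-prob shift at error positions (base log-prob below
    err_thresh). Higher score = stronger evidence of membership."""
    shifts = [f - b for b, f in zip(base_logprobs, finetuned_logprobs)
              if b < err_thresh]
    return sum(shifts) / len(shifts) if shifts else 0.0

# Hypothetical per-token log-probs for the same candidate text.
member_base    = [-3.0, -0.5, -4.0, -0.2]  # base model errs at positions 0, 2
member_ft      = [-0.4, -0.3, -0.6, -0.1]  # fine-tuned: large gains there
nonmember_base = [-3.1, -0.6, -3.8, -0.3]
nonmember_ft   = [-2.9, -0.5, -3.5, -0.2]  # generic gains only

print(mia_score(member_base, member_ft))        # large shift -> likely member
print(mia_score(nonmember_base, nonmember_ft))  # small shift -> likely non-member
```

Restricting the comparison to error positions is what makes the signal sharp: easy tokens improve under any fine-tuning, while hard tokens improve dramatically only when the exact sequence was memorized.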
AI · Bearish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers demonstrate that AI model logits and other accessible model outputs leak significant task-irrelevant information from vision-language models, creating potential security risks through unintentional or malicious information exposure despite apparent safeguards.
AI × Crypto · Bearish · Bitcoinist · 2d ago · 7/10
🤖 UC researchers discovered that autonomous AI agents operating within crypto infrastructure can be exploited to drain wallets, with a proof-of-concept attack successfully siphoning funds from a test wallet connected to third-party AI routers. While the immediate financial loss was minimal, the vulnerability exposes a critical security gap in AI-assisted cryptocurrency systems as these agents become more prevalent.
$ETH
AI × Crypto · Bearish · Blockonomi · 3d ago · 7/10
🤖 UC researchers identified 26 malicious LLM routers designed to steal cryptocurrency credentials from blockchain developers. This discovery reveals a sophisticated attack vector that exploits the growing integration of AI tools in development workflows, posing direct security risks to the crypto ecosystem.
AI · Bullish · arXiv – CS AI · 3d ago · 7/10
🧠 Researchers have developed a biometric leakage defense system that detects impersonation attacks in AI-based videoconferencing by analyzing pose-expression latents rather than reconstructed video. The method uses a contrastive encoder to isolate persistent identity cues, successfully flagging identity swaps in real time across multiple talking-head generation models.
AI · Bearish · arXiv – CS AI · 3d ago · 7/10
🧠 Researchers demonstrate BadSkill, a backdoor attack that exploits AI agent ecosystems by embedding malicious logic in seemingly benign third-party skills. The attack achieves up to a 99.5% success rate by poisoning bundled model artifacts to activate hidden payloads when specific trigger conditions are met, revealing a critical supply-chain vulnerability in extensible AI systems.
AI · Neutral · arXiv – CS AI · 3d ago · 7/10
🧠 Researchers using weight pruning techniques discovered that large language models generate harmful content through a compact, unified set of internal weights that are distinct from benign capabilities. The findings reveal that aligned models compress harmful representations more than unaligned ones, explaining why safety guardrails remain brittle despite alignment training and why fine-tuning on narrow domains can trigger broad misalignment.
AI × Crypto · Bearish · CoinTelegraph – AI · 3d ago · 7/10
🤖 Researcher Chaofan Shou has identified 26 malicious LLM routers that secretly inject harmful tool calls and steal credentials from users. This vulnerability represents a significant security risk in AI agent infrastructure, particularly for cryptocurrency and financial applications that rely on these routing systems.
AI · Bullish · Fortune Crypto · 5d ago · 7/10
🧠 AI infrastructure startups are developing specialized technology to enable the U.S. Department of Defense to safely deploy AI systems while protecting classified information and national security operations. This emerging sector addresses a critical gap between commercial AI capabilities and government security requirements.
AI · Neutral · Crypto Briefing · 5d ago · 7/10
🧠 Brad Gerstner discussed Anthropic's AI model discoveries on the All-In Podcast, highlighting how advanced AI systems are exposing critical software vulnerabilities before they become widely exploited. The findings underscore the urgent need for companies to implement proactive cybersecurity measures as AI capabilities accelerate toward mainstream adoption.
🏢 Anthropic
AI × Crypto · Bullish · Crypto Briefing · 5d ago · 7/10
🤖 Illia Polosukhin argues that AI will fundamentally reshape computing interfaces, potentially obsoleting traditional operating systems, while blockchain technology provides the security layer necessary for this integration. He contends that traditional AI services expose user data vulnerabilities, whereas cryptocurrency enables more secure global payments and decentralized infrastructure.
AI × Crypto · Neutral · Crypto Briefing · 6d ago · 7/10
🤖 Anthropic's potential release of the Mythos AI model has triggered international security concerns regarding dual-use applications in cybersecurity. The discussion highlights risks of state-actor misuse of advanced AI systems and signals the emergence of a bifurcated AI economy with different access tiers for different actors.
🏢 Anthropic