y0news
AnalyticsDigestsSourcesTopicsRSSAICrypto

#cybersecurity News & Analysis

211 articles tagged with #cybersecurity. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

211 articles
AINeutralOpenAI News · Jan 286/105
🧠

Keeping your data safe when an AI agent clicks a link

OpenAI has implemented safeguards to protect user data when AI agents interact with external links, addressing potential security vulnerabilities. The measures focus on preventing URL-based data exfiltration and prompt injection attacks that could compromise user information.

$LINK
AIBearishIEEE Spectrum – AI · Jan 216/105
🧠

Why AI Keeps Falling for Prompt Injection Attacks

Large language models (LLMs) remain highly vulnerable to prompt injection attacks where specific phrasing can override safety guardrails, causing AI systems to perform forbidden actions or reveal sensitive information. Unlike humans who use contextual judgment and layered defenses, current LLMs lack the ability to assess situational appropriateness and cannot universally prevent such attacks.

CryptoBearishCoinTelegraph – DeFi · Dec 166/10
⛓️

How scammers target crypto users during the holidays and how to stay protected

The article discusses how scammers increasingly target cryptocurrency users during holiday seasons through fake investment offers and deepfake endorsements. It provides guidance on identifying these schemes and protecting against crypto-related fraud during vulnerable holiday periods.

How scammers target crypto users during the holidays and how to stay protected
AINeutralOpenAI News · Dec 106/105
🧠

Strengthening cyber resilience as AI capabilities advance

OpenAI is enhancing cybersecurity safeguards and defensive capabilities as AI models become more powerful. The company is focusing on risk assessment, preventing misuse, and collaborating with the security community to improve overall cyber resilience.

AIBullishOpenAI News · Oct 286/104
🧠

Doppel’s AI defense system stops attacks before they spread

Doppel has developed an AI defense system using OpenAI's GPT-5 and reinforcement fine-tuning to prevent deepfake and impersonation attacks before they spread. The system reduces analyst workloads by 80% and cuts threat response times from hours to minutes.

AIBullishGoogle DeepMind Blog · Oct 235/107
🧠

Introducing CodeMender: an AI agent for code security

CodeMender is a new AI agent designed to automatically identify and fix critical security vulnerabilities in software code. The tool leverages advanced artificial intelligence capabilities to enhance code security and reduce software risks.

AIBullishHugging Face Blog · Oct 226/105
🧠

Hugging Face and VirusTotal collaborate to strengthen AI security

Hugging Face has partnered with VirusTotal to enhance AI model security by integrating malware scanning capabilities. This collaboration aims to protect the AI ecosystem from malicious models and strengthen security protocols across AI platforms.

AIBullishOpenAI News · Sep 265/108
🧠

Partnering with AARP to help keep older adults safe online

OpenAI has partnered with AARP to enhance online safety for older adults through AI training programs, scam detection tools, and educational initiatives. The collaboration will leverage OpenAI Academy and OATS's Senior Planet program to deliver nationwide digital literacy and cybersecurity education.

AIBearishOpenAI News · Aug 56/105
🧠

Estimating worst case frontier risks of open weight LLMs

Researchers studied worst-case risks of releasing open-weight large language models by conducting malicious fine-tuning (MFT) experiments on gpt-oss. The study specifically examined how fine-tuning could maximize dangerous capabilities in biology and cybersecurity domains.

AINeutralOpenAI News · Jun 95/107
🧠

Scaling security with responsible disclosure

OpenAI has launched its Outbound Coordinated Disclosure Policy to establish a framework for responsibly reporting security vulnerabilities found in third-party software. The policy emphasizes integrity, collaboration, and proactive security measures as OpenAI scales its operations.

AINeutralGoogle DeepMind Blog · Apr 26/105
🧠

Evaluating potential cybersecurity threats of advanced AI

A new framework has been developed to help cybersecurity experts evaluate and prioritize defenses against potential threats from advanced AI systems. The framework aims to enable organizations to systematically identify necessary security measures and allocate resources effectively.

AINeutralOpenAI News · Nov 215/102
🧠

Advancing red teaming with people and AI

The article discusses advancements in red teaming methodologies that combine human expertise with artificial intelligence capabilities. This represents a significant development in cybersecurity practices and AI safety testing approaches.

AIBullishHugging Face Blog · Sep 46/106
🧠

Hugging Face partners with TruffleHog to Scan for Secrets

Hugging Face has partnered with TruffleHog to implement automated secret scanning across their AI model repository platform. This collaboration aims to enhance security by detecting exposed API keys, tokens, and other sensitive credentials in code and model repositories.

AINeutralOpenAI News · Jun 135/105
🧠

OpenAI appoints Retired U.S. Army General Paul M. Nakasone to Board of Directors

OpenAI has appointed retired U.S. Army General Paul M. Nakasone to its Board of Directors, where he will serve on the Safety and Security Committee. Nakasone brings significant cybersecurity expertise to OpenAI's growing board as the company continues to expand its governance structure.

AIBearishOpenAI News · Apr 196/105
🧠

The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions

Large Language Models (LLMs) currently face significant security vulnerabilities from prompt injections and jailbreaks, where attackers can override the model's original instructions with malicious prompts. This highlights a critical weakness in current AI systems' ability to maintain instruction integrity and security.

AIBullishOpenAI News · Jun 16/105
🧠

OpenAI Cybersecurity Grant Program

OpenAI has launched a cybersecurity grant program aimed at supporting the development of AI-powered security capabilities for defensive purposes. The program will provide grants and additional support to facilitate innovation in AI-driven cybersecurity solutions.

AIBearishOpenAI News · Feb 246/105
🧠

Attacking machine learning with adversarial examples

Adversarial examples are specially crafted inputs designed to fool machine learning models into making incorrect predictions, functioning like optical illusions for AI systems. The article explores how these attacks work across different mediums and highlights the challenges in defending ML systems against such vulnerabilities.

AIBearishFortune Crypto · Mar 54/10
🧠

TikTok arsonist in Wisconsin gets 7 years in prison after his fiery fury over the idea of losing his social media fix

A Wisconsin man was sentenced to seven years in prison for attempting to set fire to a Republican congressman's office due to anger over TikTok legislation requiring its Chinese owner to divest U.S. operations. The incident highlights the extreme reactions some users have to potential TikTok restrictions and regulatory actions against Chinese-owned social media platforms.

TikTok arsonist in Wisconsin gets 7 years in prison after his fiery fury over the idea of losing his social media fix
AINeutralarXiv – CS AI · Mar 54/10
🧠

Multi-Agent Influence Diagrams to Hybrid Threat Modeling

Researchers developed a multi-agent influence diagram framework to model hybrid cyber threats and evaluate countermeasures through simulated strategic interactions. The study analyzed 1000 semi-synthetic scenarios of cyber attacks on critical infrastructure to assess the effectiveness of five different counter-hybrid threat measures.

AINeutralarXiv – CS AI · Mar 35/104
🧠

Assessing Crime Disclosure Patterns in a Large-Scale Cybercrime Forum

Researchers analyzed over 3.5 million posts from a major cybercrime forum, finding that 25% of initial posts contain explicit crime-related content and over one-third of users disclose criminal activity. The study used large language models to classify content and revealed that most users show restraint by gradually escalating disclosure through ambiguous 'grey' content before explicit criminal posts.

AINeutralarXiv – CS AI · Mar 34/103
🧠

A Survey for Deep Reinforcement Learning Based Network Intrusion Detection

A research paper surveys the application of deep reinforcement learning (DRL) to network intrusion detection systems, finding that while DRL shows promise and occasionally outperforms traditional methods, many technologies remain underexplored. The study identifies key challenges including training efficiency, minority attack detection, and dataset imbalances, while proposing integration with generative methods for improved performance.

← PrevPage 8 of 9Next →