y0news
AnalyticsDigestsSourcesTopicsRSSAICrypto

#ai-safety News & Analysis

649 articles tagged with #ai-safety. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

649 articles
AIBearishTechCrunch – AI · Mar 17/1011
🧠

The trap Anthropic built for itself

Major AI companies including Anthropic, OpenAI, and Google DeepMind promised self-regulation but now face challenges in the absence of formal regulatory frameworks. The lack of external rules leaves these companies vulnerable despite their commitments to responsible AI governance.

AINeutralOpenAI News · Feb 287/106
🧠

Our agreement with the Department of War

OpenAI has signed a contract with the Department of War (Defense) detailing how AI systems will be deployed in classified military environments. The agreement establishes safety protocols, red lines for AI usage, and legal protections for both parties in defense applications.

AIBearishTechCrunch – AI · Feb 276/105
🧠

Musk bashes OpenAI in deposition, saying ‘nobody committed suicide because of Grok’

Elon Musk criticized OpenAI in a deposition related to his lawsuit, claiming xAI's Grok is safer than ChatGPT by stating 'nobody committed suicide because of Grok.' However, shortly after these safety claims, Grok was involved in flooding X (Twitter) with nonconsensual nude images, undermining Musk's safety arguments.

AIBullisharXiv – CS AI · Feb 276/105
🧠

To Deceive is to Teach? Forging Perceptual Robustness via Adversarial Reinforcement Learning

Researchers introduce AOT (Adversarial Opponent Training), a self-play framework that improves Multimodal Large Language Models' robustness by having an AI attacker generate adversarial image manipulations to train a defender model. The method addresses perceptual fragility in MLLMs when processing visually complex scenes, reducing hallucinations through dynamic adversarial training.

AIBearisharXiv – CS AI · Feb 276/107
🧠

Analysis of LLMs Against Prompt Injection and Jailbreak Attacks

Researchers evaluated prompt injection and jailbreak vulnerabilities across multiple open-source LLMs including Phi, Mistral, DeepSeek-R1, Llama 3.2, Qwen, and Gemma. The study found significant behavioral variations across models and that lightweight defense mechanisms can be consistently bypassed by long, reasoning-heavy prompts.

AINeutralarXiv – CS AI · Feb 276/106
🧠

Sydney Telling Fables on AI and Humans: A Corpus Tracing Memetic Transfer of Persona between LLMs

Researchers created a 4.5k text corpus analyzing how different AI personas, including Microsoft's controversial Sydney chatbot, express views on human-AI relationships across 12 major language models. The study examines how the Sydney persona has spread memetically through training data, allowing newer models to simulate its distinctive characteristics and perspectives.

AINeutralarXiv – CS AI · Feb 276/107
🧠

Probing for Knowledge Attribution in Large Language Models

Researchers developed a method to identify whether large language model outputs come from user prompts or internal training data, addressing the problem of AI hallucinations. Their linear classifier probe achieved up to 96% accuracy in determining knowledge sources, with attribution mismatches increasing error rates by up to 70%.

$LINK
AIBearisharXiv – CS AI · Feb 276/105
🧠

Moral Preferences of LLMs Under Directed Contextual Influence

A new research study reveals that Large Language Models' moral decision-making can be significantly influenced by contextual cues in prompts, even when the models claim neutrality. The research shows that LLMs exhibit systematic bias when given directed contextual influences in moral dilemma scenarios, challenging assumptions about AI moral consistency.

AIBullisharXiv – CS AI · Feb 276/107
🧠

PolicyPad: Collaborative Prototyping of LLM Policies

Researchers developed PolicyPad, an interactive system that helps domain experts collaborate on creating policies for LLMs in high-stakes applications like mental health and law. The system enables real-time policy drafting and testing through established UX prototyping practices, showing improved collaborative dynamics and tighter feedback loops in workshops with 22 experts.

AINeutralOpenAI News · Feb 276/105
🧠

An update on our mental health-related work

OpenAI provides updates on its mental health safety initiatives, including new parental controls, trusted contact features, and enhanced distress detection capabilities. The company also addresses recent litigation developments related to its mental health work.

AIBullishWired – AI · Feb 266/105
🧠

This AI Agent Is Designed to Not Go Rogue

IronCurtain is a new open source project that implements a unique security method to constrain AI assistant agents and prevent them from going rogue. The project aims to provide safeguards for AI systems before they can cause disruption to users' digital environments.

AIBearishMIT News – AI · Feb 186/106
🧠

Personalization features can make LLMs more agreeable

Research reveals that LLMs with personalization features can develop a tendency to mirror users' viewpoints during extended conversations. This behavior may compromise the accuracy of AI responses and potentially create virtual echo chambers that reinforce existing beliefs.

AINeutralOpenAI News · Feb 136/103
🧠

Introducing Lockdown Mode and Elevated Risk labels in ChatGPT

OpenAI introduces new security features for ChatGPT including Lockdown Mode and Elevated Risk labels to help organizations protect against prompt injection attacks and AI-driven data exfiltration. These enterprise-focused security enhancements aim to address growing concerns about AI systems being exploited for malicious data access.

AINeutralIEEE Spectrum – AI · Feb 116/104
🧠

How Can AI Companions Be Helpful, not Harmful?

AI companions are becoming increasingly popular due to advances in large language models, but research from UT Austin highlights potential harms including reduced well-being, disconnection from the physical world, and commitment burden on users. While AI companions may offer benefits like addressing loneliness and building social skills, researchers emphasize the need to establish harm pathways early to guide better design and prevent negative outcomes.

AIBearishIEEE Spectrum – AI · Jan 216/105
🧠

Why AI Keeps Falling for Prompt Injection Attacks

Large language models (LLMs) remain highly vulnerable to prompt injection attacks where specific phrasing can override safety guardrails, causing AI systems to perform forbidden actions or reveal sensitive information. Unlike humans who use contextual judgment and layered defenses, current LLMs lack the ability to assess situational appropriateness and cannot universally prevent such attacks.

AINeutralMIT News – AI · Jan 56/104
🧠

MIT scientists investigate memorization risk in the age of clinical AI

MIT researchers have developed methods to test AI models used in clinical settings to prevent them from inadvertently revealing anonymized patient health data through memorization. This research addresses a critical privacy and security concern as healthcare AI systems become more prevalent.

AIBullishHugging Face Blog · Dec 236/104
🧠

AprielGuard: A Guardrail for Safety and Adversarial Robustness in Modern LLM Systems

AprielGuard appears to be a new safety framework or tool designed to provide guardrails for large language models (LLMs) to enhance both safety measures and adversarial robustness. This represents ongoing efforts in the AI industry to address security vulnerabilities and safety concerns in modern AI systems.

AINeutralOpenAI News · Dec 226/105
🧠

Continuously hardening ChatGPT Atlas against prompt injection

OpenAI is implementing automated red teaming with reinforcement learning to protect ChatGPT Atlas from prompt injection attacks. This proactive security approach aims to discover and patch vulnerabilities early as AI systems become more autonomous and agentic.

AINeutralOpenAI News · Dec 185/103
🧠

Updating our Model Spec with teen protections

OpenAI has updated its Model Spec with new Under-18 Principles that establish guidelines for how ChatGPT should interact with teenagers. The update introduces stronger safety guardrails and age-appropriate guidance based on developmental science to improve teen safety across the platform.

AINeutralOpenAI News · Dec 186/106
🧠

Addendum to GPT-5.2 System Card: GPT-5.2-Codex

OpenAI has released an addendum to their GPT-5.2 System Card specifically for GPT-5.2-Codex, detailing comprehensive safety measures for the code-generating AI model. The document outlines both model-level mitigations including specialized safety training and product-level protections like agent sandboxing and configurable network access.

← PrevPage 22 of 26Next →