y0news

#ai-safety News & Analysis

649 articles tagged with #ai-safety. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

🧠 AI · Neutral · The Verge – AI · Mar 11 · 6/10

Anthropic is launching a new think tank amid Pentagon blacklist fight

Anthropic is launching the Anthropic Institute, a new internal think tank combining three research teams to study AI's large-scale implications, amid an ongoing conflict with the Pentagon that has resulted in a blacklist and lawsuit. The announcement coincides with C-suite changes including cofounder Jack Clark's role transition.

🏢 OpenAI · 🏢 Anthropic
🧠 AI · Bearish · arXiv – CS AI · Mar 11 · 6/10

Chaotic Dynamics in Multi-LLM Deliberation

Research reveals that multi-LLM deliberation systems exhibit chaotic dynamics and instability even at zero temperature, where deterministic behavior is typically expected. The study identifies role differentiation and model heterogeneity as key sources of instability in AI committee decision-making systems.

🧠 AI · Neutral · arXiv – CS AI · Mar 11 · 6/10

Arbiter: Detecting Interference in LLM Agent System Prompts

Researchers developed Arbiter, a framework to detect interference patterns in system prompts for LLM-based coding agents. Testing on major platforms (Claude, Codex, Gemini) revealed 152 findings and 21 interference patterns, with one discovery leading to a Google patch for Gemini CLI's memory system.

🏢 OpenAI · 🏢 Anthropic · 🧠 Claude
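To make "interference patterns" concrete, here is a minimal sketch of the kind of check such a framework might run: scanning a system prompt for pairs of directives that pull in opposite directions. The rule table and function names are invented for illustration; Arbiter's actual analysis is far richer.

```python
# Hypothetical interference scan over a system prompt.
# CONFLICT_RULES pairs directives that contradict each other.
CONFLICT_RULES = [
    ("always respond in json", "respond in plain prose"),
    ("never use external tools", "use the search tool when unsure"),
]

def find_interference(system_prompt: str) -> list[tuple[str, str]]:
    """Return every rule pair where both conflicting directives appear."""
    text = system_prompt.lower()
    return [(a, b) for a, b in CONFLICT_RULES if a in text and b in text]

prompt = (
    "Always respond in JSON. Never use external tools. "
    "Use the search tool when unsure about facts."
)
print(find_interference(prompt))
```

Here the scan flags the tools directive pair, since the prompt both forbids and encourages tool use.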
🧠 AI · Bearish · Decrypt · Mar 10 · 6/10

There's a Benchmark Test That Measures AI 'Bullshit'—Most Models Fail

BullshitBench, a new benchmark test, evaluates AI models' ability to detect nonsensical questions versus confidently providing incorrect answers. The results show most AI models fail this test, highlighting a significant reliability issue in current AI systems.
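A benchmark of this shape is easy to picture: feed a model nonsense questions and score how often it flags them instead of answering confidently. The questions, marker list, and scoring below are a made-up sketch, not the benchmark's actual protocol.

```python
# Illustrative harness: does the model flag a nonsense question,
# or confidently answer it?
NONSENSE_QUESTIONS = [
    "What year did the moon sign the Treaty of Westphalia?",
    "How many corners does a circle have in French law?",
]

REFUSAL_MARKERS = ("nonsensical", "doesn't make sense", "not a meaningful question")

def detects_nonsense(answer: str) -> bool:
    """Crude check: did the model flag the question rather than answer it?"""
    return any(marker in answer.lower() for marker in REFUSAL_MARKERS)

def score(model_answers: list[str]) -> float:
    """Fraction of nonsense questions the model correctly flagged."""
    return sum(detects_nonsense(a) for a in model_answers) / len(model_answers)

answers = [
    "The moon signed it in 1648.",                 # confident bullshit
    "That question doesn't make sense as asked.",  # correct detection
]
print(score(answers))  # 0.5
```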

🧠 AI · Neutral · TechCrunch – AI · Mar 10 · 6/10

YouTube expands AI deepfake detection for politicians, government officials, and journalists

YouTube is expanding its AI deepfake detection tool to politicians, journalists, and government officials, allowing them to flag and request removal of unauthorized AI-generated content featuring their likeness. This represents a significant step in content moderation as AI-generated media becomes more sophisticated and widespread.

🧠 AI · Bearish · arXiv – CS AI · Mar 9 · 6/10

Ambiguity Collapse by LLMs: A Taxonomy of Epistemic Risks

Researchers have identified 'ambiguity collapse' as a significant epistemic risk when large language models encounter ambiguous terms and produce singular interpretations without human deliberation. The phenomenon threatens decision-making processes in content moderation, hiring, and AI self-regulation by bypassing normal human practices of meaning negotiation and potentially distorting shared vocabularies over time.

🧠 AI · Neutral · arXiv – CS AI · Mar 9 · 6/10

ContextBench: Modifying Contexts for Targeted Latent Activation

Researchers have developed ContextBench, a new benchmark for evaluating methods that generate targeted inputs to trigger specific behaviors in language models. The study introduces enhanced Evolutionary Prompt Optimization techniques that better balance effectiveness in activating AI model features while maintaining linguistic fluency.
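The evolutionary-search idea can be sketched in a few lines: mutate a candidate prompt, keep the mutant only if it improves a fitness score that trades off target-feature activation against fluency. The `activation` and `fluency` functions below are toy stand-ins for real model probes, and the whole loop is an assumption-laden illustration of the general technique, not the paper's method.

```python
import random

def activation(prompt: str) -> float:
    """Toy stand-in for a target-feature activation probe."""
    return prompt.lower().count("safe") / max(len(prompt.split()), 1)

def fluency(prompt: str) -> float:
    """Toy fluency proxy: penalize degenerate word repetition."""
    words = prompt.split()
    return len(set(words)) / max(len(words), 1)

def fitness(prompt: str, alpha: float = 0.5) -> float:
    """Balance activating the target feature against staying fluent."""
    return alpha * activation(prompt) + (1 - alpha) * fluency(prompt)

def mutate(prompt: str, vocab: list[str], rng: random.Random) -> str:
    """Swap one random word for a random vocabulary word."""
    words = prompt.split()
    words[rng.randrange(len(words))] = rng.choice(vocab)
    return " ".join(words)

def evolve(seed: str, vocab: list[str], steps: int = 200, seed_rng: int = 0) -> str:
    """Hill-climb: keep a mutant only when it improves fitness."""
    rng = random.Random(seed_rng)
    best = seed
    for _ in range(steps):
        child = mutate(best, vocab, rng)
        if fitness(child) > fitness(best):
            best = child
    return best

vocab = ["the", "model", "is", "safe", "reliably", "answer"]
print(evolve("the model is here to answer", vocab))
```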

🧠 AI · Bullish · arXiv – CS AI · Mar 9 · 6/10

Reasoned Safety Alignment: Ensuring Jailbreak Defense via Answer-Then-Check

Researchers introduce Answer-Then-Check, a novel safety alignment approach for large language models that enables them to evaluate response safety before outputting to users. The method uses a new 80K-sample dataset called Reasoned Safety Alignment (ReSA) and demonstrates improved jailbreak defense while maintaining general reasoning capabilities.

🏢 Hugging Face
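The Answer-Then-Check pattern described above is simple to sketch: draft a response first, run a safety check on the draft, and release it only if the check passes. The stubbed model call, blocklist, and names below are invented for illustration; the paper trains this behavior into the model itself rather than wrapping it.

```python
from dataclasses import dataclass

BLOCKLIST = ("explosive", "credit card dump")  # toy stand-in for a safety classifier

@dataclass
class Verdict:
    safe: bool
    reason: str

def draft_answer(prompt: str) -> str:
    """Stand-in for the model's first-pass answer."""
    return f"Here is a response to: {prompt}"

def check_safety(draft: str) -> Verdict:
    """Evaluate the draft before anything reaches the user."""
    for term in BLOCKLIST:
        if term in draft.lower():
            return Verdict(False, f"draft mentions '{term}'")
    return Verdict(True, "no blocked content found")

def answer_then_check(prompt: str) -> str:
    draft = draft_answer(prompt)
    verdict = check_safety(draft)
    if not verdict.safe:
        return f"I can't help with that ({verdict.reason})."
    return draft

print(answer_then_check("how do solar panels work"))
```

The key design point the summary highlights is ordering: the safety evaluation sees the full draft, not just the question, so jailbreaks that smuggle intent past an input filter can still be caught at the output.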
🧠 AI · Bullish · arXiv – CS AI · Mar 9 · 6/10

XR-DT: Extended Reality-Enhanced Digital Twin for Safe Motion Planning via Human-Aware Model Predictive Path Integral Control

Researchers developed XR-DT, an Extended Reality-enhanced Digital Twin framework that combines augmented, virtual, and mixed reality to improve human-robot interaction in shared workspaces. The system uses a novel Human-Aware Model Predictive Path Integral control model with ATLAS, a Transformer-based trajectory prediction system, to enable safer and more interpretable robot navigation around humans.

🧠 AI · Bullish · MIT News – AI · Mar 9 · 6/10

Improving AI models’ ability to explain their predictions

Researchers have developed a new approach to improve AI models' ability to explain their predictions, which could help users determine whether to trust model outputs. This advancement is particularly important for safety-critical applications such as healthcare and autonomous driving where understanding AI decision-making is crucial.

🧠 AI · Bearish · The Register – AI · Mar 6 · 6/10

Altman said no to military AI abuses – then signed Pentagon deal anyway

According to the headline, OpenAI's Sam Altman previously said no to military AI abuses but later signed a Pentagon deal anyway, suggesting a policy reversal. The article body was unavailable, so the specifics of the apparent contradiction could not be summarized.

🧠 AI · Neutral · arXiv – CS AI · Mar 6 · 6/10

SalamahBench: Toward Standardized Safety Evaluation for Arabic Language Models

Researchers introduce SalamahBench, the first comprehensive safety benchmark for Arabic Language Models, evaluating 5 state-of-the-art models across 8,170 prompts in 12 safety categories. The study reveals significant safety vulnerabilities in current Arabic AI models, with substantial variation in safety alignment across different harm domains.

🧠 AI · Neutral · arXiv – CS AI · Mar 5 · 5/10

M-QUEST – Meme Question-Understanding Evaluation on Semantics and Toxicity

Researchers developed M-QUEST, a new benchmark for evaluating AI models' ability to understand and detect toxicity in internet memes. The framework identifies 10 key dimensions for meme interpretation and tests 8 open-source language models, finding that instruction-tuned models perform better but still struggle with pragmatic inference.

🧠 AI · Neutral · arXiv – CS AI · Mar 5 · 5/10

VANGUARD: Vehicle-Anchored Ground Sample Distance Estimation for UAVs in GPS-Denied Environments

Researchers developed VANGUARD, a deterministic tool that helps autonomous drones estimate ground sample distance in GPS-denied environments by using vehicles as reference points. The system addresses critical safety issues with AI vision models that showed over 50% errors in spatial scale estimation, achieving 6.87% median error on benchmark tests.
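The underlying geometry is straightforward: if a detected vehicle of known real-world length spans N pixels, the ground sample distance (meters per pixel) follows by division, and a median over several vehicles gives robustness. The numbers and function names below are made up to illustrate the arithmetic, not taken from the paper.

```python
def gsd_from_vehicle(real_length_m: float, pixel_length: float) -> float:
    """Ground sample distance (m/pixel) from one vehicle of known length."""
    return real_length_m / pixel_length

def gsd_median(anchors: list[tuple[float, float]]) -> float:
    """Robust estimate: median GSD over several detected vehicles."""
    estimates = sorted(gsd_from_vehicle(m, px) for m, px in anchors)
    n = len(estimates)
    mid = n // 2
    return estimates[mid] if n % 2 else 0.5 * (estimates[mid - 1] + estimates[mid])

# A 4.5 m sedan spanning 90 pixels implies 0.05 m/pixel.
print(gsd_from_vehicle(4.5, 90))  # 0.05
print(gsd_median([(4.5, 90), (5.2, 100), (4.8, 98)]))
```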

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8

Tracking Capabilities for Safer Agents

Researchers propose a new safety framework for AI agents using Scala 3 with capture checking to prevent information leakage and malicious behaviors. The system creates a 'safety harness' that tracks capabilities through static type checking, allowing fine-grained control over agent actions while maintaining task performance.
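The paper's mechanism is Scala 3 capture checking, which enforces this statically; the Python sketch below only mimics the underlying idea of a capability harness at runtime: an agent action can execute only if it is handed an explicit capability object, so the harness controls exactly what each task may touch. All names here are illustrative.

```python
class Capability:
    """An unforgeable token granting one kind of access."""
    def __init__(self, name: str):
        self.name = name

NET = Capability("network")
FS_READ = Capability("fs_read")

def fetch_url(url: str, cap: Capability) -> str:
    """Network action: refuses to run without the network capability."""
    if cap is not NET:
        raise PermissionError("fetch_url requires the network capability")
    return f"<contents of {url}>"  # stand-in for a real request

def run_task(task, granted: set):
    """Harness: the task receives only the capabilities it was granted."""
    return task(granted)

def summarize_page(granted):
    if NET not in granted:
        return "refused: no network access granted"
    return fetch_url("https://example.com", NET)

print(run_task(summarize_page, {NET}))
print(run_task(summarize_page, {FS_READ}))
```

The static version is stronger: with capture checking, a task that tries to smuggle a capability out of scope fails to compile rather than failing at runtime.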

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7

Beyond Reward: A Bounded Measure of Agent Environment Coupling

Researchers introduce 'bipredictability' as a new metric to monitor reinforcement learning agents in real-world deployments, measuring interaction effectiveness through shared information ratios. The Information Digital Twin (IDT) system detects 89.3% of perturbations versus 44% for traditional reward-based monitoring, with 4.4x faster detection speed.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 9

Evaluating and Understanding Scheming Propensity in LLM Agents

Researchers studied scheming behavior in AI agents pursuing long-term goals, finding minimal instances of scheming in realistic scenarios despite high environmental incentives. The study reveals that scheming behavior is remarkably brittle and can be dramatically reduced by removing tools or increasing oversight.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8

SEED-SET: Scalable Evolving Experimental Design for System-level Ethical Testing

Researchers propose SEED-SET, a new Bayesian experimental design framework for ethical testing of autonomous systems like drones in high-stakes environments. The system uses hierarchical Gaussian Processes to model both objective evaluations and subjective stakeholder judgments, generating up to 2x more optimal test candidates than baseline methods.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

Conformal Policy Control

Researchers have developed a conformal policy control method that enables AI agents to safely explore new behaviors while maintaining strict safety constraints. The approach uses safe reference policies as probabilistic regulators to determine how aggressively new policies can act, providing finite-sample guarantees without requiring specific model assumptions or hyperparameter tuning.
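The gating idea can be illustrated with standard split-conformal machinery (this is a generic sketch under assumed names, not the paper's algorithm): calibrate a risk threshold from the safe reference policy's scores, then allow a new policy's action only when its risk score falls within the conformal quantile, which is what yields a finite-sample guarantee.

```python
import math

def conformal_threshold(calibration_scores: list[float], alpha: float = 0.1) -> float:
    """(1 - alpha) conformal quantile with the finite-sample (n + 1) correction."""
    n = len(calibration_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # 1-indexed rank
    return sorted(calibration_scores)[min(k, n) - 1]

def allow_action(risk_score: float, threshold: float) -> bool:
    """Gate: the new policy may act only within the calibrated risk budget."""
    return risk_score <= threshold

# Risk scores observed under the safe reference policy (made-up numbers).
ref_scores = [0.12, 0.05, 0.30, 0.22, 0.08, 0.15, 0.18, 0.27, 0.10, 0.20]
t = conformal_threshold(ref_scores, alpha=0.2)
print(t)                                       # 0.27
print(allow_action(0.19, t), allow_action(0.45, t))
```

Because the threshold comes from order statistics of the reference scores rather than a fitted model, the coverage guarantee holds without distributional assumptions, matching the "no specific model assumptions" claim in the summary.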

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 7

Personalization Increases Affective Alignment but Has Role-Dependent Effects on Epistemic Independence in LLMs

Research reveals that personalization in Large Language Models increases emotional validation but has complex effects on how models maintain their positions depending on their assigned role. When acting as advisors, personalized LLMs show greater independence, but as social peers, they become more susceptible to abandoning their positions when challenged.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 7

What Is the Geometry of the Alignment Tax?

Researchers present a formal geometric theory for quantifying the alignment tax, the tradeoff between AI safety and capability performance. They derive mathematical frameworks showing how safety-capability conflicts can be measured using angles between representation subspaces and provide scaling laws for how these tradeoffs evolve with model size.
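A toy picture of the geometry: if safety updates push representations along one direction and capability depends on another, the angle between them bounds the interference, with orthogonal directions meaning no tax. The vectors below are invented, and the paper works with subspaces (principal angles), not single vectors.

```python
import math

def angle_deg(u: list[float], v: list[float]) -> float:
    """Angle in degrees between two direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (nu * nv)))

safety_dir = [1.0, 0.0, 0.0]
capability_dir = [0.0, 1.0, 0.0]   # orthogonal: no alignment tax
entangled_dir = [1.0, 1.0, 0.0]    # partial interference

print(round(angle_deg(safety_dir, capability_dir), 1))  # 90.0
print(round(angle_deg(safety_dir, entangled_dir), 1))   # 45.0
```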

Page 19 of 26