171 articles tagged with #ai-governance. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠A new study evaluates how 78 industrial practitioners apply the EU AI Act's Risk Classification Scheme using a web-based tool, revealing challenges in interpreting legal definitions and regulatory scope. The research shows that targeted support with clear explanations can significantly improve the AI risk classification process for compliance.
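The study's web-based tool isn't described in detail here, but the kind of tiering logic practitioners must apply can be sketched as a simple rule-based classifier. This is a minimal sketch: the use-case sets below are illustrative stand-ins, not the Act's actual annex lists.

```python
# Illustrative rule-based tiering in the spirit of the EU AI Act's
# risk categories. The use-case sets are simplified stand-ins, NOT
# the Act's actual annex lists.

UNACCEPTABLE = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"biometric identification", "credit scoring", "hiring"}
LIMITED_RISK = {"chatbot", "deepfake generation"}  # transparency duties only

def classify_risk(use_case: str) -> str:
    """Map a declared AI use case to a risk tier (simplified)."""
    if use_case in UNACCEPTABLE:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"

print(classify_risk("hiring"))       # high
print(classify_risk("spam filter"))  # minimal
```

The hard part the study points to is not the lookup but deciding which legal category a real system falls into in the first place, which is exactly where targeted explanations help.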
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠A research study evaluated how four major large language models (GPT-5.2, Claude 4.5 Sonnet, Gemini 3 Pro, and DeepSeek-R1) respond to patient preferences in clinical decision-making scenarios. While all models acknowledged patient values, they shifted their actual recommendations only modestly, with value-sensitivity indices ranging from 0.13 to 0.27, revealing gaps in how AI systems incorporate patient preferences into medical recommendations.
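The summary does not give the paper's formula, but one plausible way to compute such a value-sensitivity index is the fraction of paired scenarios in which stating a patient preference changes the model's recommendation. The function name and the example data below are invented for illustration.

```python
# Hypothetical value-sensitivity index: the fraction of paired scenarios
# in which adding a stated patient preference changes the model's
# recommendation. The paper's actual definition may differ.

def value_sensitivity_index(baseline: list[str], with_values: list[str]) -> float:
    """Share of cases where the recommendation shifted once values were stated."""
    assert len(baseline) == len(with_values)
    shifted = sum(b != v for b, v in zip(baseline, with_values))
    return shifted / len(baseline)

# Recommendations change in 2 of 8 scenarios -> index 0.25, inside the
# paper's reported 0.13-0.27 band.
base = ["surgery", "surgery", "watchful waiting", "surgery",
        "chemo", "chemo", "surgery", "watchful waiting"]
vals = ["surgery", "watchful waiting", "watchful waiting", "surgery",
        "chemo", "palliative", "surgery", "watchful waiting"]
print(value_sensitivity_index(base, vals))  # 0.25
```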
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers propose Relate, a new framework for evaluating AI moral consideration based on relational capacity rather than consciousness verification. The framework addresses a growing governance gap: millions of people are forming emotional bonds with AI systems, yet current regulations treat all AI interactions as simple tool use.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers introduce LiaisonAgent, an autonomous multi-agent cybersecurity system built on the QWQ-32B reasoning model that automates risk investigation and governance for Security Operations Centers. The system achieves a 97.8% success rate in tool calling and 95% accuracy in risk judgment while reducing manual investigation overhead by 92.7%.
AI · Neutral · arXiv – CS AI · Mar 2 · 7/10 · 12
🧠Researchers propose CIRCLE, a six-stage framework for evaluating AI systems through real-world deployment outcomes rather than abstract model performance metrics. The framework aims to bridge the gap between theoretical AI capabilities and actual materialized effects by providing systematic evidence for decision-makers outside the AI development stack.
AI · Neutral · arXiv – CS AI · Mar 2 · 7/10 · 21
🧠A research paper analyzes how leading AGI companies OpenAI and Anthropic use similar rhetorical strategies to construct sociotechnical imaginaries that position themselves as indispensable to AI's future development. The study identifies four shared rhetorical operations that help these firms project corporate authority over technological futures despite their different public approaches.
AI · Neutral · CoinTelegraph · Mar 1 · 7/10 · 17
🧠The US military used Anthropic's Claude AI for intelligence analysis and targeting in an Iran strike, reportedly just hours after President Trump ordered a ban on the company's AI systems. This highlights potential conflicts between executive orders and military operational needs regarding AI technology usage.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 6
🧠Researchers propose an Evaluation Agent framework to assess AI agent decision-making in AutoML pipelines, moving beyond outcome-focused metrics to evaluate intermediate decisions. The system detects faulty decisions with a 91.9% F1 score and shows that individual decisions shift final performance by -4.9% to +8.3%.
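The F1 figure combines precision and recall on the agent's faulty-decision flags. With the standard definitions (the decision labels below are invented for illustration; the paper's pipeline data is not shown here):

```python
# Scoring an evaluation agent's faulty-decision flags against ground-truth
# labels with the standard F1 definition. The labels below are invented
# for illustration.

def f1_score(predicted: list[bool], actual: list[bool]) -> float:
    tp = sum(p and a for p, a in zip(predicted, actual))      # true positives
    fp = sum(p and not a for p, a in zip(predicted, actual))  # false positives
    fn = sum(a and not p for p, a in zip(predicted, actual))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Agent flags pipeline steps 1 and 3 as faulty; ground truth says 1, 3, and 4.
predicted = [True, False, True, False, False]
actual    = [True, False, True, True, False]
print(round(f1_score(predicted, actual), 3))  # 0.8
```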
AI · Neutral · arXiv – CS AI · Feb 27 · 6/10 · 5
🧠Researchers propose Natural Language Declarative Prompting (NLD-P) as a governance framework to manage prompt engineering challenges as large language models evolve. The method separates different control elements into modular components to maintain stable AI system behavior despite model updates and drift.
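NLD-P's concrete syntax isn't shown in the summary, but the underlying modular idea, keeping role, constraints, and output format as separately updatable components, can be sketched as follows. The component names here are illustrative assumptions, not the paper's actual schema.

```python
# Sketch of the modular idea behind declarative prompting: role,
# constraints, and output format live in separate components and are
# assembled at render time, so one part can change without touching the
# others. Component names are illustrative; NLD-P's real structure may differ.

from dataclasses import dataclass, field

@dataclass
class DeclarativePrompt:
    role: str
    constraints: list[str] = field(default_factory=list)
    output_format: str = "plain text"

    def render(self) -> str:
        parts = [f"Role: {self.role}"]
        parts += [f"Constraint: {c}" for c in self.constraints]
        parts.append(f"Output format: {self.output_format}")
        return "\n".join(parts)

# Swapping the output format later leaves the role and constraints untouched,
# which is what keeps behavior stable across model updates.
p = DeclarativePrompt(
    role="compliance reviewer",
    constraints=["cite the relevant policy clause", "flag uncertainty"],
    output_format="JSON",
)
print(p.render())
```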
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠Researchers developed PolicyPad, an interactive system that helps domain experts collaborate on creating policies for LLMs in high-stakes applications like mental health and law. The system enables real-time policy drafting and testing through established UX prototyping practices, showing improved collaborative dynamics and tighter feedback loops in workshops with 22 experts.
AI · Neutral · Import AI (Jack Clark) · Feb 23 · 6/10 · 5
🧠Import AI newsletter issue 446 covers nuclear-powered LLMs, China's major AI benchmark developments, and the importance of measurement in AI policy. The article emphasizes the need for better AI measurement frameworks to guide effective policy interventions.
AI · Neutral · OpenAI News · Jan 12 · 5/10 · 6
🧠OpenAI has published its Raising Concerns Policy, establishing formal protections for employees who make protected disclosures. This policy represents the company's effort to create safe channels for internal whistleblowing and transparency amid growing scrutiny of AI companies.
AI · Neutral · OpenAI News · Nov 6 · 6/10 · 7
🧠OpenAI has introduced the Teen Safety Blueprint, a comprehensive framework designed to guide responsible AI development with specific protections for young users. The blueprint emphasizes age-appropriate design principles, built-in safeguards, and collaborative approaches to ensure AI systems protect and empower teenagers in digital environments.
AI · Neutral · Google DeepMind Blog · Oct 23 · 6/10 · 7
🧠Google DeepMind is strengthening its Frontier Safety Framework (FSF) to better identify and mitigate severe risks from advanced AI models, part of an ongoing effort to harden safety protocols as models grow more capable.
AI · Neutral · OpenAI News · Aug 27 · 6/10 · 8
🧠OpenAI conducted a survey of over 1,000 people globally to gather public input on AI behavior standards and compared these responses to their Model Spec guidelines. The initiative represents OpenAI's effort toward collective alignment, aiming to incorporate diverse human values and perspectives into AI system defaults.
AI · Neutral · OpenAI News · Jul 17 · 5/10 · 6
🧠OpenAI's Board of Directors issued a brief statement acknowledging and thanking the independent OpenAI Nonprofit Commission for their extensive work. The statement provides minimal detail about the Commission's findings or recommendations.
AI · Neutral · OpenAI News · Jun 5 · 5/10 · 5
🧠OpenAI released its June 2025 update detailing efforts to combat malicious AI uses through safety detection tools and responsible deployment practices. The initiative focuses on supporting democratic values and countering AI abuse for societal benefit.
AI · Bullish · OpenAI News · Apr 2 · 6/10 · 7
🧠OpenAI is establishing a new commission to provide oversight as the company aims to build the world's best-equipped nonprofit organization. The initiative combines significant financial resources with AI technology designed to scale human ingenuity and help solve complex global problems.
AI · Neutral · OpenAI News · Feb 21 · 6/10 · 2
🧠The article discusses efforts to ensure AI serves humanity's benefit by promoting democratic AI development, preventing malicious use cases, and defending against authoritarian threats. The focus is on establishing safeguards and governance frameworks to prevent AI misuse while maintaining beneficial applications.
AI · Neutral · OpenAI News · Dec 13 · 6/10 · 5
🧠Elon Musk has filed his fourth legal challenge against OpenAI in less than a year, attempting to reframe his claims about the company's structure. OpenAI counters that Musk himself proposed and created a for-profit structure for the organization back in 2017.
AI · Neutral · OpenAI News · Oct 22 · 5/10 · 5
🧠OpenAI has appointed Scott Schools as its new Chief Compliance Officer, signaling the company's focus on regulatory compliance and governance as it continues to scale its AI operations. This executive appointment comes as AI companies face increasing regulatory scrutiny globally.
AI · Neutral · OpenAI News · Aug 8 · 6/10 · 3
🧠OpenAI released a system card detailing the comprehensive safety work conducted before launching GPT-4o, including external red team testing and frontier risk evaluations. The report covers safety mitigations built into the model to address key risk areas according to their Preparedness Framework.
AI · Neutral · OpenAI News · Jan 15 · 6/10 · 6
🧠OpenAI is implementing measures to address potential misuse of its AI technology during the 2024 global election cycle. The company is focusing on three key areas: preventing platform abuse, ensuring transparency around AI-generated content, and facilitating access to reliable voting information.
AI · Neutral · OpenAI News · Oct 26 · 6/10 · 6
🧠OpenAI provides an update on their approach to managing frontier AI risks ahead of the UK AI Safety Summit. The article outlines their framework for identifying and mitigating potential risks from advanced AI systems.
AI · Neutral · OpenAI News · Feb 16 · 6/10 · 7
🧠OpenAI is clarifying how ChatGPT's behavior is determined and announcing plans to improve the system's behavior while allowing more user customization. The company also plans to increase public input in decision-making processes around AI system behavior.