y0news

#ai-governance News & Analysis

154 articles tagged with #ai-governance. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · OpenAI News · Oct 2 · 7/10

OpenAI announces strategic collaboration with Japan’s Digital Agency

OpenAI has announced a strategic partnership with Japan's Digital Agency to integrate generative AI into public services and support international AI governance frameworks. The collaboration aims to promote safe and trustworthy AI adoption globally while advancing AI implementation in government operations.

AI · Bullish · OpenAI News · Jul 11 · 7/10

The EU Code of Practice and future of AI in Europe

OpenAI has joined the EU Code of Practice for responsible AI development, marking a significant step in AI governance within Europe. The company is also partnering with European governments to foster innovation, develop infrastructure, and promote economic growth in the AI sector.

AI · Bullish · OpenAI News · May 7 · 7/10

Introducing OpenAI for Countries

OpenAI has launched a new initiative called 'OpenAI for Countries' aimed at supporting nations worldwide that want to develop AI infrastructure based on democratic principles. The program appears to focus on providing resources and guidance for countries seeking to build AI systems aligned with democratic values and governance structures.

AI · Neutral · OpenAI News · May 5 · 7/10

Evolving OpenAI’s structure

OpenAI's board announced plans to transition its for-profit entity to a Public Benefit Corporation structure. This change aims to maintain mission-driven operations under nonprofit oversight while enabling greater impact and long-term public benefit alignment.

AI · Neutral · OpenAI News · Apr 15 · 7/10

Our updated Preparedness Framework

OpenAI has released an updated Preparedness Framework designed to measure and protect against severe harm from frontier AI capabilities. The framework serves as a safety mechanism for addressing potential risks associated with advanced AI systems.

AI · Neutral · OpenAI News · Dec 27 · 7/10

Why OpenAI’s structure must evolve to advance our mission

OpenAI announces plans to evolve its organizational structure to better advance its mission. The company proposes strengthening its non-profit arm through support from its for-profit operations' success.

AI · Neutral · OpenAI News · Jul 30 · 7/10

A Primer on the EU AI Act: What It Means for AI Providers and Deployers

This article provides an overview of the EU AI Act, detailing upcoming compliance deadlines and requirements for AI providers and deployers. The analysis focuses particularly on prohibited AI applications and high-risk use cases that will face stringent regulatory oversight.

AI · Neutral · OpenAI News · Oct 26 · 7/10

Frontier risk and preparedness

OpenAI is developing its approach to catastrophic risk preparedness for highly capable AI systems. The company is building a dedicated Preparedness team and launching a challenge to address frontier AI safety risks.

AI · Bullish · OpenAI News · Oct 25 · 7/10

Frontier Model Forum updates

The Frontier Model Forum, comprising major tech companies including Anthropic, Google, and Microsoft, has announced a new Executive Director and established a $10 million AI Safety Fund. This initiative represents a significant collaborative effort among leading AI companies to address safety concerns in frontier AI model development.

AI · Bullish · OpenAI News · Jul 26 · 7/10

Frontier Model Forum

A new industry body called the Frontier Model Forum is being established to promote safe and responsible development of advanced AI systems. The organization will focus on advancing AI safety research, establishing best practices and standards, and facilitating communication between policymakers and industry stakeholders.

AI · Bullish · OpenAI News · Jul 21 · 7/10

Moving AI governance forward

OpenAI and other leading AI laboratories are strengthening AI governance through voluntary commitments focused on safety, security, and trustworthiness. This represents a proactive industry approach to self-regulation in AI development.

AI · Neutral · OpenAI News · Jun 12 · 7/10

Comment on NTIA AI Accountability Policy

The National Telecommunications and Information Administration (NTIA) has issued a request for comments on AI accountability policy. This represents a regulatory initiative to gather public input on how artificial intelligence systems should be governed and held accountable.

AI · Neutral · OpenAI News · May 25 · 7/10

Democratic inputs to AI

OpenAI Inc. is launching a grant program offering ten $100,000 awards to fund experiments in establishing democratic processes for determining AI system governance rules. The initiative aims to create frameworks for public input on AI regulation within existing legal boundaries.

AI · Neutral · OpenAI News · May 22 · 7/10

Governance of superintelligence

The article discusses the need to begin planning governance frameworks for superintelligence: AI systems that will surpass even Artificial General Intelligence (AGI) in capability. It emphasizes the importance of addressing governance challenges proactively rather than waiting for these advanced systems to emerge.

AI · Neutral · OpenAI News · May 3 · 7/10

Will Hurd joins OpenAI’s board of directors

Former Congressman Will Hurd has joined OpenAI's board of directors to bring public policy expertise to the company. OpenAI states this addition supports their mission to develop general-purpose artificial intelligence that benefits all humanity by combining technology and policy knowledge.

AI · Neutral · arXiv – CS AI · 16h ago · 6/10

Memory as Metabolism: A Design for Companion Knowledge Systems

A new research paper proposes a governance framework for personal AI memory systems designed to function as 'companion' knowledge wikis that mirror user knowledge while compensating for epistemic failures like entrenchment and evidence suppression. The work addresses an emerging 2026 landscape of memory architectures for large language models through five operational mechanisms (TRIAGE, DECAY, CONTEXTUALIZE, CONSOLIDATE, AUDIT) aimed at preventing user-coupled drift in single-user knowledge systems.

AI · Neutral · TechCrunch – AI · 1d ago · 6/10

Anthropic co-founder confirms the company briefed the Trump administration on Mythos

Anthropic co-founder confirmed the company briefed the Trump administration on Mythos, its latest AI model featuring advanced cybersecurity capabilities. This government engagement signals growing alignment between major AI developers and the new administration on AI policy and national security applications.

🏢 Anthropic
AI · Neutral · Wired – AI · 1d ago · 6/10

Silicon Valley Is Spending Millions to Stop One of Its Own

Alex Bores, a former Palantir employee who championed strict AI regulation legislation, is running for Congress and facing significant financial opposition from major Silicon Valley tech leaders. The funding disparity highlights a fundamental conflict between pro-regulation and anti-regulation factions within the tech industry.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

AI Integrity: A New Paradigm for Verifiable AI Governance

Researchers introduce AI Integrity, a new governance framework that verifies the reasoning processes of AI systems rather than just evaluating outcomes. The approach defines an Authority Stack—a four-layer model of values, epistemological standards, source preferences, and data criteria—and proposes the PRISM framework to measure integrity through six core metrics, addressing a critical gap in existing AI Ethics, Safety, and Alignment paradigms.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

PRISM Risk Signal Framework: Hierarchy-Based Red Lines for AI Behavioral Risk

Researchers introduce PRISM, a framework that detects AI behavioral risks by analyzing underlying reasoning hierarchies rather than individual harmful outputs. The system identifies 27 risk signals across value prioritization, evidence weighting, and information source trust, using forced-choice data from 7 AI models to distinguish between structurally dangerous, context-dependent, and balanced AI reasoning patterns.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Measuring the Authority Stack of AI Systems: Empirical Analysis of 366,120 Forced-Choice Responses Across 8 AI Models

Researchers conducted the first large-scale empirical analysis of AI decision-making across 366,120 responses from 8 major models, revealing measurable but inconsistent value hierarchies, evidence preferences, and source trust patterns. The study found significant framing sensitivity and domain-specific value shifts, with critical implications for deploying AI systems in professional contexts.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Inspectable AI for Science: A Research Object Approach to Generative AI Governance

Researchers propose AI as a Research Object (AI-RO), a governance framework that treats generative AI interactions as inspectable, documented components of scientific research rather than debating authorship. The framework combines interaction logs, metadata packaging, and provenance records to ensure accountability, particularly for security and privacy research where confidentiality and auditability are critical.

🏢 Meta
AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Explainability and Certification of AI-Generated Educational Assessments

Researchers propose a comprehensive framework for making AI-generated educational assessments transparent, explainable, and certifiable through self-rationalization, attribution analysis, and post-hoc verification. The framework introduces a metadata schema and traffic-light certification workflow designed to meet institutional accreditation standards, with proof-of-concept testing on 500 computer science questions demonstrating improved transparency and reduced instructor workload.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Assessing Model-Agnostic XAI Methods against EU AI Act Explainability Requirements

Researchers have developed a framework to assess how well existing explainable AI (XAI) methods comply with the EU AI Act's transparency requirements. The study bridges the gap between current XAI techniques and regulatory mandates by proposing a scoring system that translates expert qualitative assessments into quantitative compliance metrics, helping practitioners navigate AI regulation in European markets.

Page 3 of 7