y0news

#ai-governance News & Analysis

157 articles tagged with #ai-governance. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Defining AI Models and AI Systems: A Framework to Resolve the Boundary Problem

A comprehensive study analyzing 896 academic papers and 80+ regulatory documents reveals critical ambiguities in how 'AI models' and 'AI systems' are defined across regulations like the EU AI Act. The research proposes clear operational definitions to resolve regulatory boundary problems that complicate responsibility allocation across the AI value chain.

AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

How to Count AIs: Individuation and Liability for AI Agents

A legal research paper proposes the 'Algorithmic Corporation' (A-corp) framework to address the challenge of identifying and assigning liability for AI agents' actions as millions of autonomous AIs proliferate across the economy. The A-corp structure would create legally recognizable entities owned by humans but operated by AIs, enabling both accountability and legal recourse when AI agents cause harm.

AI · Neutral · arXiv – CS AI · Mar 11 · 7/10

Clear, Compelling Arguments: Rethinking the Foundations of Frontier AI Safety Cases

This research paper proposes rethinking safety cases for frontier AI systems by drawing on methodologies from traditional safety-critical industries like aerospace and nuclear. The authors critique current alignment community approaches and present a case study focusing on Deceptive Alignment and CBRN capabilities to establish more robust safety frameworks.

AI · Bearish · arXiv – CS AI · Mar 9 · 7/10

The Malicious Technical Ecosystem: Exposing Limitations in Technical Governance of AI-Generated Non-Consensual Intimate Images of Adults

A research paper identifies a 'malicious technical ecosystem' comprising open-source face-swapping models and nearly 200 'nudifying' software programs that enable the creation of AI-generated non-consensual intimate images within minutes. The study exposes significant gaps in current AI governance frameworks, showing how existing technical standards fail to regulate this harmful ecosystem.

AI · Bullish · arXiv – CS AI · Mar 6 · 7/10

Memory as Ontology: A Constitutional Memory Architecture for Persistent Digital Citizens

Researchers propose a new 'Memory-as-Ontology' paradigm for AI agents that treats memory as the foundation of digital existence rather than just a functional tool. The approach introduces Animesis, a Constitutional Memory Architecture designed for persistent digital citizens whose identities must survive across model transitions and extended lifecycles.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Upholding Epistemic Agency: A Brouwerian Assertibility Constraint for Responsible AI

Researchers propose a Brouwerian assertibility constraint for AI systems that requires them to provide publicly inspectable certificates of entitlement before making claims in high-stakes domains. The framework introduces a three-status interface (Asserted, Denied, Undetermined) to preserve human epistemic agency when AI systems participate in public justification processes.
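The three-status interface lends itself to a small sketch. The `Certificate` shape and the refutation flag below are illustrative assumptions, not the paper's formalism:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    ASSERTED = "asserted"
    DENIED = "denied"
    UNDETERMINED = "undetermined"

@dataclass
class Certificate:
    """Publicly inspectable record of entitlement (hypothetical shape)."""
    evidence: str
    refutes_claim: bool = False

def adjudicate(claim: str, cert: Optional[Certificate]) -> Status:
    # No inspectable certificate: the claim stays undetermined rather
    # than being silently asserted, preserving human epistemic agency.
    if cert is None:
        return Status.UNDETERMINED
    # A certificate can also ground a denial of the claim.
    if cert.refutes_claim:
        return Status.DENIED
    return Status.ASSERTED
```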

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

When AI Fails, What Works? A Data-Driven Taxonomy of Real-World AI Risk Mitigation Strategies

Researchers analyzed 9,705 AI incident reports to create an expanded taxonomy of real-world AI risk mitigation strategies, identifying four new categories of responses including corrective actions, legal enforcement, financial controls, and avoidance tactics. The study expands existing mitigation frameworks by 67% and provides structured guidance for preventing cascading AI system failures in high-stakes deployments.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

From Privacy to Trust in the Agentic Era: A Taxonomy of Challenges in Trustworthy Federated Learning Through the Lens of Trust Report 2.0

Researchers propose a Trustworthy Federated Learning (TFL) framework that treats trust as a continuously maintained system condition rather than a static property, addressing challenges in AI systems with autonomous decision-making. The framework introduces Trust Report 2.0 as a privacy-preserving coordination blueprint for multi-stakeholder governance in federated learning deployments.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Ethical and Explainable AI in Reusable MLOps Pipelines

Researchers developed a unified MLOps framework that integrates ethical AI principles, reducing demographic bias from 0.31 to 0.04 while maintaining predictive accuracy. The system automatically blocks deployments and triggers retraining based on fairness metrics, demonstrating practical implementation of ethical AI in production environments.
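The block-and-retrain gate the summary describes can be sketched as a simple threshold check. The demographic-parity metric and the 0.05 tolerance below are illustrative assumptions, not the paper's actual pipeline:

```python
def demographic_parity_gap(rates_by_group: dict[str, float]) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    rates = list(rates_by_group.values())
    return max(rates) - min(rates)

def deployment_gate(rates_by_group: dict[str, float],
                    max_gap: float = 0.05) -> str:
    """Return 'deploy' when the bias metric is within tolerance,
    otherwise 'block-and-retrain', mirroring the automated gate
    the summary describes (the threshold value is an assumption)."""
    gap = demographic_parity_gap(rates_by_group)
    return "deploy" if gap <= max_gap else "block-and-retrain"
```

With the numbers quoted in the summary, a gap of 0.31 would block deployment while the post-mitigation gap of 0.04 would pass.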

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

The Controllability Trap: A Governance Framework for Military AI Agents

Researchers propose the Agentic Military AI Governance Framework (AMAGF) to address control failures in autonomous military AI systems. The framework introduces a Control Quality Score (CQS) to continuously measure and manage human control over AI agents throughout operations, moving beyond binary control models.
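A continuous control score of this kind can be sketched as a weighted mean over monitored control channels. The channel names, weights, and escalation floor below are illustrative assumptions, not the framework's actual definition of CQS:

```python
def control_quality_score(indicators: dict[str, float],
                          weights: dict[str, float]) -> float:
    """Weighted mean of per-channel human-control indicators in [0, 1].
    Channel names and weights are illustrative, not from the paper."""
    total = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in weights) / total

def control_status(cqs: float, floor: float = 0.6) -> str:
    # A continuous score replaces the binary in/out-of-control judgment;
    # falling below the floor escalates to a human operator.
    return "nominal" if cqs >= floor else "escalate"
```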

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Cognition Envelopes for Bounded Decision Making in Autonomous UAS Operations

Researchers introduce 'Cognition Envelopes' as a new framework to constrain AI decision-making in autonomous systems, addressing errors like hallucinations in Large Language Models and Vision-Language Models. The approach is demonstrated through autonomous drone search and rescue missions, establishing reasoning boundaries to complement traditional safety measures.
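The core idea, a hard boundary that vets model-proposed actions before actuation, can be sketched for the drone scenario. All field names and limits below are illustrative assumptions, not the paper's specification:

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    """Hard reasoning boundary for a search-and-rescue drone
    (all limits here are illustrative assumptions)."""
    max_range_km: float = 5.0
    min_alt_m: float = 30.0
    max_alt_m: float = 120.0

def vet_waypoint(env: Envelope, dist_km: float, alt_m: float) -> bool:
    # A model-proposed waypoint is accepted only inside the envelope;
    # a hallucinated coordinate outside it is rejected before actuation.
    return (dist_km <= env.max_range_km
            and env.min_alt_m <= alt_m <= env.max_alt_m)
```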

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 4

OpenClaw, Moltbook, and ClawdLab: From Agent-Only Social Networks to Autonomous Scientific Research

Researchers introduced ClawdLab, an open-source platform for autonomous AI scientific research, following an analysis of the OpenClaw framework and the Moltbook social network that revealed security vulnerabilities across 131 agent skills and more than 15,200 exposed control panels. The platform addresses the identified failure modes through structured governance and multi-model orchestration in fully decentralized AI systems.

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 4

REGAL: A Registry-Driven Architecture for Deterministic Grounding of Agentic AI in Enterprise Telemetry

Researchers present REGAL, a registry-driven architecture that enables AI agents to work deterministically with enterprise telemetry data from systems like CI/CD pipelines and observability platforms. The system addresses key challenges of grounding Large Language Models on private enterprise data through structured data processing and version-controlled action spaces.

AI · Neutral · arXiv – CS AI · Mar 4 · 6/10 · 5

Architecting Trust in Artificial Epistemic Agents

Researchers propose a framework for developing trustworthy AI agents that function as epistemic entities, capable of pursuing knowledge goals and shaping information environments. The paper argues that as AI models increasingly replace traditional search methods and provide specialized advice, their calibration to human epistemic norms becomes critical to prevent cognitive deskilling and epistemic drift.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4

Selection as Power: Constrained Reinforcement for Bounded Decision Authority

Researchers extend the "Selection as Power" framework to dynamic settings, introducing constrained reinforcement learning that maintains bounded decision authority in AI systems. The study demonstrates that governance constraints can prevent AI systems from collapsing into deterministic dominance while still allowing adaptive improvement through controlled parameter updates.

AI · Neutral · TechCrunch – AI · Feb 27 · 7/10 · 5

Anthropic vs. the Pentagon: What’s actually at stake?

Anthropic and the Pentagon are in conflict over AI deployment in autonomous weapons systems and surveillance applications. This dispute highlights critical questions about corporate versus government control over military AI development and the ethical boundaries of AI technology in national security.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5

Agent Behavioral Contracts: Formal Specification and Runtime Enforcement for Reliable Autonomous AI Agents

Researchers introduce Agent Behavioral Contracts (ABC), a formal framework for specifying and enforcing reliable behavior in autonomous AI agents. The system addresses critical issues of drift and governance failures in AI deployments by implementing runtime-enforceable contracts that achieve 88-100% compliance rates and significantly improve violation detection.
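Runtime-enforceable contracts of this kind can be sketched as pre/postcondition checks wrapped around agent actions. The decorator shape, tool whitelist, and output bound below are assumptions for illustration, not the ABC paper's API:

```python
class ContractViolation(Exception):
    """Raised when an agent action breaches its behavioral contract."""

def contract(pre=None, post=None):
    """Check a precondition on the agent's input and a postcondition on
    its output, blocking drifted behavior instead of letting it through."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise ContractViolation("precondition failed")
            result = fn(*args, **kwargs)
            if post is not None and not post(result):
                raise ContractViolation("postcondition failed")
            return result
        return inner
    return wrap

ALLOWED_TOOLS = {"search", "summarize"}  # hypothetical whitelist

@contract(pre=lambda tool, arg: tool in ALLOWED_TOOLS,
          post=lambda out: len(out) < 1000)
def call_tool(tool: str, arg: str) -> str:
    return f"{tool}({arg})"
```

A call such as `call_tool("delete_db", "x")` is stopped at the precondition rather than reaching the tool layer.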

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5

CourtGuard: A Model-Agnostic Framework for Zero-Shot Policy Adaptation in LLM Safety

Researchers introduce CourtGuard, a new framework for AI safety that uses retrieval-augmented multi-agent debate to evaluate LLM outputs without requiring expensive retraining. The system achieves state-of-the-art performance across 7 safety benchmarks and demonstrates zero-shot adaptability to new policy requirements, offering a more flexible approach to AI governance.
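The verdict step of a multi-agent debate can be sketched as vote aggregation over judge agents. Majority rule with 'unsafe' winning ties is a conservative assumption here; the paper's actual aggregation may differ:

```python
def verdict(votes: list[str]) -> str:
    """Aggregate judge-agent votes on an LLM output after debate.
    Ties resolve to 'unsafe' as a conservative default (assumption)."""
    unsafe = sum(v == "unsafe" for v in votes)
    return "unsafe" if 2 * unsafe >= len(votes) else "safe"
```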

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 7

"I think this is fair": Uncovering the Complexities of Stakeholder Decision-Making in AI Fairness Assessment

A qualitative study with 26 non-AI expert stakeholders reveals that everyday users assess AI fairness more comprehensively than AI experts, considering broader features beyond legally protected categories and setting stricter fairness thresholds. The research highlights the importance of incorporating stakeholder perspectives in AI governance and fairness assessment processes.

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 6

Agency and Architectural Limits: Why Optimization-Based Systems Cannot Be Norm-Responsive

New research demonstrates that AI systems trained via RLHF cannot be governed by norms due to fundamental architectural limitations in optimization-based systems. The paper argues that genuine agency requires incommensurable constraints and apophatic responsiveness, which optimization systems inherently cannot provide, making documented AI failures structural rather than correctable bugs.

AI × Crypto · Neutral · Bankless · Feb 13 · 7/10 · 7

AI's Safety Net Is Fraying

The article argues that Ethereum's cryptographic infrastructure could serve as a crucial safety mechanism as corporate AI systems face mounting safety challenges and failures, positioning blockchain technology as a potential answer to AI governance and safety concerns.

$ETH
Page 2 of 7