y0news

#safety-critical News & Analysis

10 articles tagged with #safety-critical. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

10 articles
AI × Crypto · Neutral · arXiv – CS AI · Apr 7 · 7/10
🤖

Governance-Constrained Agentic AI: Blockchain-Enforced Human Oversight for Safety-Critical Wildfire Monitoring

Researchers propose a blockchain-based AI system for wildfire monitoring that requires mandatory human authorization before issuing alerts. The system uses smart contracts to enforce governance constraints on autonomous AI agents, combining UAV monitoring with cryptographic verification to prevent false alarms and ensure accountability.
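The paper's on-chain logic is not detailed in this summary, but the core governance constraint — an agent may propose an alert, yet nothing is issued until a registered human signs off — can be sketched as a toy contract in Python (class and field names here are illustrative assumptions, not the authors' API):

```python
import hashlib
import time

class OversightContract:
    """Toy model of blockchain-enforced oversight: an AI agent proposes
    a wildfire alert, but it is only issued after a registered human
    operator authorizes it. Illustrative sketch only."""

    def __init__(self, operators):
        self.operators = set(operators)   # registered human operator keys
        self.pending = {}                 # alert_id -> proposed alert
        self.issued = []                  # append-only audit log

    def propose_alert(self, agent_id, region, confidence):
        payload = {"agent": agent_id, "region": region,
                   "confidence": confidence, "t": time.time()}
        alert_id = hashlib.sha256(repr(payload).encode()).hexdigest()[:16]
        self.pending[alert_id] = payload
        return alert_id

    def authorize(self, operator, alert_id):
        # Governance constraint: only a known human can release an alert.
        if operator not in self.operators:
            raise PermissionError("unknown operator")
        alert = self.pending.pop(alert_id)
        alert["authorized_by"] = operator
        self.issued.append(alert)
        return alert

contract = OversightContract(operators={"ranger-7"})
aid = contract.propose_alert("uav-3", region="sector-12", confidence=0.94)
assert contract.issued == []              # nothing goes out autonomously
contract.authorize("ranger-7", aid)
assert contract.issued[0]["region"] == "sector-12"
```

The audit log plays the role of the paper's cryptographic accountability trail: every issued alert carries the identity of the human who released it.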

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Explainable Planning for Hybrid Systems

A new thesis examines explainable AI planning (XAIP) for hybrid systems, addressing the critical challenge of making autonomous planning decisions interpretable in safety-critical applications. As AI automation expands into domains like autonomous vehicles, energy grids, and healthcare, the ability to explain system reasoning becomes essential for trust and regulatory compliance.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠

Compliance-by-Construction Argument Graphs: Using Generative AI to Produce Evidence-Linked Formal Arguments for Certification-Grade Accountability

Researchers propose a compliance-by-construction architecture that integrates Generative AI with structured formal argument representations to ensure accountability in high-stakes decision systems. The approach uses typed Argument Graphs, retrieval-augmented generation, validation constraints, and provenance ledgers to prevent AI hallucinations while maintaining traceability for regulatory compliance.
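The summary does not give the paper's actual schema, but the key idea — claims are nodes that must link to provenance-carrying evidence, and a validation pass rejects unsupported claims — can be sketched as follows (all names are assumed for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str          # provenance: where this evidence came from
    content: str

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)  # links into a ledger

def validate(claims):
    """Validation constraint: flag any claim with no linked evidence,
    so a generated argument cannot contain unsupported (hallucinated)
    steps. Returns the texts of the offending claims."""
    return [c.text for c in claims if not c.evidence]

doc = Evidence(source="test-report-v2.pdf", content="All unit tests pass.")
good = Claim("The module meets its test obligations.", evidence=[doc])
bad = Claim("The system is certified safe.")          # no evidence link
assert validate([good, bad]) == ["The system is certified safe."]
```

A certification auditor can then walk from any claim back through its evidence links to the original sources, which is the traceability property the architecture targets.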

AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠

Safe Reinforcement Learning with Preference-based Constraint Inference

Researchers propose Preference-based Constrained Reinforcement Learning (PbCRL), a new approach for safe AI decision-making that learns safety constraints from human preferences rather than requiring extensive expert demonstrations. The method addresses limitations in existing Bradley-Terry models by introducing a dead zone mechanism and Signal-to-Noise Ratio loss to better capture asymmetric safety costs and improve constraint alignment.
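A minimal sketch of the dead-zone idea on top of a Bradley-Terry preference model (the threshold `delta` and function names are assumed hyperparameters, not the paper's): preference pairs whose inferred safety-cost gap falls inside the dead zone contribute no loss, so near-ties between trajectories do not distort the learned constraint.

```python
import math

def bt_prob(cost_a, cost_b):
    """Bradley-Terry: probability the human prefers trajectory a
    (lower inferred safety cost) over trajectory b."""
    return 1.0 / (1.0 + math.exp(cost_a - cost_b))

def dead_zone_loss(cost_pref, cost_other, delta=0.5):
    """Dead-zone preference loss sketch: if the cost gap between the
    preferred and non-preferred trajectory is within the dead zone,
    the pair is treated as uninformative and contributes zero loss."""
    gap = cost_other - cost_pref
    if abs(gap) < delta:
        return 0.0
    return -math.log(bt_prob(cost_pref, cost_other))

assert dead_zone_loss(1.0, 1.2) == 0.0   # near-tie: inside dead zone
# a clearer preference (larger cost gap) yields a smaller loss:
assert dead_zone_loss(0.0, 3.0) < dead_zone_loss(0.0, 1.0)
```

The asymmetry the paper targets is that under-estimating a safety cost is worse than over-estimating it; the dead zone keeps ambiguous comparisons from pulling the constraint in either direction.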

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

Monotropic Artificial Intelligence: Toward a Cognitive Taxonomy of Domain-Specialized Language Models

Researchers introduce 'Monotropic Artificial Intelligence,' a new paradigm that deliberately creates highly specialized AI models with extraordinary precision in narrow domains rather than pursuing general-purpose capabilities. The concept challenges the current trend of scaling AI models broadly, proposing instead that domain-specialized models could offer advantages for safety-critical applications.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠

VISTA: Knowledge-Driven Vessel Trajectory Imputation with Repair Provenance

Researchers introduce VISTA, a framework for vessel trajectory imputation that uses knowledge-driven LLM reasoning to repair incomplete maritime tracking data. The system provides 'repair provenance' - documented reasoning behind data repairs - achieving 5-91% accuracy improvements over existing methods while reducing inference time by 51-93%.
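The "repair provenance" idea — every imputed point carries a record of the reasoning behind it — can be sketched with a trivial gap-filler standing in for VISTA's knowledge-driven LLM step (all names here are illustrative, not the framework's API):

```python
from dataclasses import dataclass

@dataclass
class Repair:
    index: int
    value: tuple
    rationale: str   # repair provenance: why this value was chosen

def impute_with_provenance(track):
    """Fill interior gaps (None) in a vessel track and record the
    reasoning for each repair. Linear interpolation stands in for the
    knowledge-driven reasoning used by the actual system."""
    repairs = []
    for i, p in enumerate(track):
        if p is None:
            prev, nxt = track[i - 1], track[i + 1]
            filled = ((prev[0] + nxt[0]) / 2, (prev[1] + nxt[1]) / 2)
            track[i] = filled
            repairs.append(Repair(i, filled,
                "midpoint of neighboring fixes (assumed constant speed)"))
    return track, repairs

track = [(0.0, 0.0), None, (2.0, 2.0)]
track, repairs = impute_with_provenance(track)
assert track[1] == (1.0, 1.0)
assert repairs[0].rationale.startswith("midpoint")
```

The provenance record is what makes a repaired track auditable: a downstream user can accept or reject each imputed fix based on the stated rationale.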

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10
🧠

Provably Safe Generative Sampling with Constricting Barrier Functions

Researchers have developed a safety filtering framework that ensures AI generative models like diffusion models produce outputs that satisfy hard constraints without requiring model retraining. The approach uses Control Barrier Functions to create a 'constricting safety tube' that progressively tightens constraints during the generation process, achieving 100% constraint satisfaction across image generation, trajectory sampling, and robotic manipulation tasks.
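The constricting-tube mechanism can be sketched in one dimension: the allowed region starts wide and shrinks toward the hard constraint as generation proceeds, so intermediate samples are never over-constrained but the final one satisfies the bound exactly. (The Gaussian update below is a stand-in for a real diffusion step, and the linear tightening schedule is an assumption, not the paper's barrier-function construction.)

```python
import random

def project(x, lo, hi):
    """Clip x into [lo, hi] — a trivial safety filter for a box constraint."""
    return max(lo, min(hi, x))

def constricting_tube_sample(steps=10, limit=1.0):
    """One sample drawn through a constricting safety tube: the tube
    radius shrinks from 2*limit down to limit across the generation
    steps, and the final projection enforces the hard bound |x| <= limit."""
    x = random.gauss(0.0, 5.0)
    for t in range(steps, 0, -1):
        x += random.gauss(0.0, 0.5)          # stand-in generative update
        margin = limit * (1.0 + t / steps)   # tube radius at this step
        x = project(x, -margin, margin)      # per-step safety filter
    return project(x, -limit, limit)         # final hard constraint

random.seed(0)
xs = [constricting_tube_sample() for _ in range(100)]
assert all(abs(x) <= 1.0 for x in xs)        # 100% constraint satisfaction
```

Because the filter only projects samples, the underlying generative model needs no retraining — the property the paper emphasizes.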

AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠

Safe Flow Q-Learning: Offline Safe Reinforcement Learning with Reachability-Based Flow Policies

Researchers introduce Safe Flow Q-Learning (SafeFQL), a new offline safe reinforcement learning method that combines Hamilton-Jacobi reachability with flow policies for safety-critical real-time control. The method achieves better safety performance with lower inference latency compared to existing diffusion-based approaches, making it more suitable for real-time deployment.
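A reachability-based safety filter of the kind SafeFQL builds on can be sketched in one dimension (the value function, dynamics, and fallback action below are assumptions for illustration): the policy's proposed action is executed only if the successor state keeps the Hamilton-Jacobi value positive, i.e. the unsafe set remains avoidable.

```python
def hj_value(s):
    """Stand-in Hamilton-Jacobi value function on a 1-D state:
    V(s) > 0 means the unsafe set |s| >= 2 is still avoidable."""
    return 2.0 - abs(s)

def safe_filter(state, proposed_action, fallback=-0.5):
    """Execute the policy's action only if the next state stays in the
    safe region of the value function; otherwise substitute a safe
    fallback action. One value lookup per step, which is why this kind
    of filter is cheap enough for real-time control."""
    nxt = state + proposed_action            # trivial dynamics: s' = s + a
    if hj_value(nxt) > 0:
        return proposed_action
    return fallback

assert safe_filter(0.0, 1.0) == 1.0          # safe: V(1.0) = 1 > 0
assert safe_filter(1.8, 0.5) == -0.5         # would reach |s| = 2.3: blocked
```

The low inference latency the summary highlights comes from exactly this structure: the expensive reachability computation happens offline, leaving only cheap lookups at deployment time.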

AI · Neutral · arXiv – CS AI · Mar 4 · 4/10
🧠

AnchorDrive: LLM Scenario Rollout with Anchor-Guided Diffusion Regeneration for Safety-Critical Scenario Generation

Researchers have developed AnchorDrive, a two-stage AI framework that combines large language models with diffusion models to generate realistic safety-critical scenarios for autonomous driving systems. The system uses LLMs for controllable scenario generation based on natural language instructions, then employs diffusion models to create realistic driving trajectories.