10 articles tagged with #safety-critical. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI × Crypto · Neutral · arXiv – CS AI · Apr 7 · 7/10
🤖 Researchers propose a blockchain-based AI system for wildfire monitoring that requires mandatory human authorization before issuing alerts. The system uses smart contracts to enforce governance constraints on autonomous AI agents, combining UAV monitoring with cryptographic verification to prevent false alarms and ensure accountability.
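The governance pattern described above, where an agent may propose but only an authorized human may release an alert, can be sketched as a minimal Python toy. This is an illustration of the human-in-the-loop gate, not the paper's smart-contract code; all names (`AlertGate`, `propose`, `approve`) are hypothetical.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class AlertGate:
    authorized_humans: set
    pending: dict = field(default_factory=dict)
    issued: list = field(default_factory=list)

    def propose(self, agent_id: str, payload: str) -> str:
        # Content-address the proposal so approval binds to this exact payload.
        alert_id = hashlib.sha256(f"{agent_id}:{payload}".encode()).hexdigest()[:12]
        self.pending[alert_id] = (agent_id, payload)
        return alert_id

    def approve(self, human_id: str, alert_id: str) -> bool:
        # Governance constraint: only an authorized human can release an alert.
        if human_id not in self.authorized_humans or alert_id not in self.pending:
            return False
        self.issued.append(self.pending.pop(alert_id))
        return True

gate = AlertGate(authorized_humans={"ranger-7"})
aid = gate.propose("uav-3", "smoke detected at grid 42N")
assert not gate.approve("uav-3", aid)   # the agent cannot self-approve
assert gate.approve("ranger-7", aid)    # human authorization releases the alert
```

On-chain, the same check would run inside a smart contract so that no single party can bypass it; here a Python object stands in for the ledger.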
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10
🧠 Researchers propose Feature Mixing, a novel method for multimodal out-of-distribution detection that achieves 10x to 370x speedup over existing approaches. The technique addresses safety-critical applications like autonomous driving by better detecting anomalous data across multiple sensor modalities.
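To make the general idea concrete (this is not the paper's Feature Mixing algorithm, just a hedged sketch of scoring out-of-distribution inputs by their sensitivity to feature perturbation; `mixing_ood_score` and the linear head are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def mixing_ood_score(feats, head, alpha=0.3, n_mix=8):
    # For each sample, convexly mix its feature vector with other samples'
    # features and measure how much the classifier's confidence shifts.
    # Larger average shifts suggest the input is out-of-distribution.
    base = softmax(head(feats)).max(-1)
    shifts = np.zeros(len(feats))
    for _ in range(n_mix):
        partners = feats[rng.permutation(len(feats))]
        mixed = (1 - alpha) * feats + alpha * partners
        shifts += np.abs(base - softmax(head(mixed)).max(-1))
    return shifts / n_mix

feats = rng.normal(size=(16, 4))          # dummy per-sample features
W = rng.normal(size=(4, 3))               # dummy linear classifier head
scores = mixing_ood_score(feats, lambda f: f @ W)
```

The claimed speedup comes from operating on features rather than re-running heavy sensor-fusion pipelines, which this feature-space sketch reflects.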
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠 A new thesis examines explainable AI planning (XAIP) for hybrid systems, addressing the critical challenge of making autonomous planning decisions interpretable in safety-critical applications. As AI automation expands into domains like autonomous vehicles, energy grids, and healthcare, the ability to explain system reasoning becomes essential for trust and regulatory compliance.
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠 Researchers propose a compliance-by-construction architecture that integrates generative AI with structured formal argument representations to ensure accountability in high-stakes decision systems. The approach uses typed argument graphs, retrieval-augmented generation, validation constraints, and provenance ledgers to prevent AI hallucinations while maintaining traceability for regulatory compliance.
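The validation-plus-provenance idea can be sketched as follows. This is an illustrative toy, not the paper's schema: every generated claim must carry provenance pointing at a source already registered in the ledger, or the graph rejects it as unsupported.

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentGraph:
    ledger: set = field(default_factory=set)    # registered source IDs
    claims: dict = field(default_factory=dict)  # claim text -> source ID

    def register_source(self, source_id: str):
        self.ledger.add(source_id)

    def add_claim(self, text: str, source_id) -> bool:
        # Validation constraint: a claim without a ledgered source is
        # treated as a potential hallucination and refused.
        if source_id not in self.ledger:
            return False
        self.claims[text] = source_id
        return True

g = ArgumentGraph()
g.register_source("doc-17")
assert g.add_claim("Sensor redundancy requirement is met", "doc-17")
assert not g.add_claim("System is certified for avionics", None)
```

A real deployment would add typed edge relations between claims (support, attack) and cryptographic hashes in the ledger; this sketch shows only the accept/reject gate.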
AI · Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers propose Preference-based Constrained Reinforcement Learning (PbCRL), a new approach for safe AI decision-making that learns safety constraints from human preferences rather than requiring extensive expert demonstrations. The method addresses limitations in existing Bradley-Terry models by introducing a dead zone mechanism and a Signal-to-Noise Ratio loss to better capture asymmetric safety costs and improve constraint alignment.
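A dead-zone variant of the Bradley-Terry preference loss can be sketched in a few lines. This is a simplified illustration of the mechanism, not the paper's exact formulation, and the Signal-to-Noise Ratio term is omitted; `deadzone_bt_loss` and the margin value are hypothetical.

```python
import numpy as np

def deadzone_bt_loss(cost_a, cost_b, pref_a, margin=0.1):
    # Bradley-Terry on predicted safety costs: P(a preferred) = sigmoid(c_b - c_a),
    # i.e., the lower-cost trajectory should be preferred.
    diff = cost_b - cost_a
    p = 1.0 / (1.0 + np.exp(-diff))
    nll = -(pref_a * np.log(p + 1e-9) + (1 - pref_a) * np.log(1 - p + 1e-9))
    # Dead zone: pairs whose predicted costs are within `margin` are treated
    # as uninformative near-ties and contribute zero loss, so the model is
    # not forced to fabricate a safety ordering from noise.
    return np.where(np.abs(diff) < margin, 0.0, nll)

c_a = np.array([0.2, 0.50])           # predicted cost of trajectory a
c_b = np.array([0.9, 0.55])           # predicted cost of trajectory b
pref = np.array([1.0, 1.0])           # human judged a safer in both pairs
loss = deadzone_bt_loss(c_a, c_b, pref)
```

In the second pair the cost gap (0.05) falls inside the dead zone, so its loss is zeroed; the first pair trains normally.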
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers introduce "Monotropic Artificial Intelligence," a new paradigm that deliberately creates highly specialized AI models with extraordinary precision in narrow domains rather than pursuing general-purpose capabilities. The concept challenges the current trend of scaling AI models broadly, proposing instead that domain-specialized models could offer advantages for safety-critical applications.
AI · Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠 Researchers introduce VISTA, a framework for vessel trajectory imputation that uses knowledge-driven LLM reasoning to repair incomplete maritime tracking data. The system provides "repair provenance" — documented reasoning behind data repairs — achieving 5-91% accuracy improvements over existing methods while reducing inference time by 51-93%.
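The "repair provenance" idea, attaching a documented rationale to every imputed point, can be shown with a toy imputer. This stand-in uses plain linear interpolation instead of VISTA's LLM reasoning, assumes gaps are interior to the track, and all names are illustrative.

```python
def impute_with_provenance(track):
    # track: list of (timestamp, position-or-None). Fill each gap by linear
    # interpolation between the nearest known fixes, and log a rationale
    # for every repair -- a toy stand-in for LLM-derived repair provenance.
    times = [t for t, _ in track]
    vals = [v for _, v in track]
    known = [i for i, v in enumerate(vals) if v is not None]
    provenance = []
    for i, v in enumerate(vals):
        if v is None:
            lo = max(j for j in known if j < i)
            hi = min(j for j in known if j > i)
            w = (times[i] - times[lo]) / (times[hi] - times[lo])
            vals[i] = vals[lo] + w * (vals[hi] - vals[lo])
            provenance.append(
                (times[i], f"interpolated between t={times[lo]} and t={times[hi]}")
            )
    return vals, provenance

vals, prov = impute_with_provenance([(0, 10.0), (1, None), (2, 14.0)])
```

The point of the provenance list is auditability: a downstream analyst can see which positions are measured and which are reconstructed, and why.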
AI · Bullish · arXiv – CS AI · Mar 2 · 7/10
🧠 Researchers have developed a safety filtering framework that ensures generative AI models like diffusion models produce outputs satisfying hard constraints without requiring model retraining. The approach uses Control Barrier Functions to create a "constricting safety tube" that progressively tightens constraints during the generation process, achieving 100% constraint satisfaction across image generation, trajectory sampling, and robotic manipulation tasks.
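A toy version of a constricting tube makes the "satisfaction by construction" property visible: after each denoising update, project the iterate into a set whose radius shrinks until, at the final step, it equals the hard constraint. This sketch replaces the paper's Control Barrier Function machinery with a simple ball projection, and `denoise_step` is a dummy stand-in for a real diffusion reverse step.

```python
import numpy as np

def project_to_ball(x, center, radius):
    # Euclidean projection onto a ball: a crude proxy for a CBF-based filter.
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + d * (radius / n)

def sample_in_tube(x, denoise_step, center, r_init, r_final, T=50):
    # The tube radius shrinks linearly from r_init to r_final; at the last
    # step it equals the hard-constraint radius, so the returned sample
    # satisfies the constraint regardless of what the model generates.
    for t in range(T, 0, -1):
        x = denoise_step(x, t)
        r = r_final + (r_init - r_final) * (t - 1) / max(T - 1, 1)
        x = project_to_ball(x, center, r)
    return x

rng = np.random.default_rng(1)
out = sample_in_tube(
    rng.normal(size=3) * 5.0,
    lambda x, t: x + 0.1 * rng.normal(size=3),  # dummy reverse step
    center=np.zeros(3), r_init=5.0, r_final=1.0,
)
```

Because the projection runs outside the model, no retraining is needed, which matches the framework's training-free claim.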
AI · Neutral · arXiv – CS AI · Mar 17 · 4/10
🧠 Researchers introduce Safe Flow Q-Learning (SafeFQL), a new offline safe reinforcement learning method that combines Hamilton-Jacobi reachability with flow policies for safety-critical real-time control. The method achieves better safety performance with lower inference latency compared to existing diffusion-based approaches, making it more suitable for real-time deployment.
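The standard way Hamilton-Jacobi reachability is used as a least-restrictive filter can be sketched in a few lines; this illustrates the general pattern, not SafeFQL's specific architecture, and the 1-D dynamics and value function below are toy assumptions.

```python
def hj_safety_filter(state, nominal_action, safe_action, value_fn, step):
    # Least-restrictive filter: apply the fast (e.g., flow) policy's action
    # unless the HJ value of the successor state is non-positive, meaning
    # safety can no longer be guaranteed; then fall back to the safe action.
    if value_fn(step(state, nominal_action)) > 0.0:
        return nominal_action
    return safe_action

# 1-D toy: state = distance to an obstacle; V(s) = s - 1 (safe while > 1 m away).
step = lambda s, a: s + a
value_fn = lambda s: s - 1.0
assert hj_safety_filter(3.0, -0.5, +0.5, value_fn, step) == -0.5  # nominal kept
assert hj_safety_filter(1.2, -0.5, +0.5, value_fn, step) == +0.5  # override
```

The latency advantage claimed over diffusion-based methods comes from the nominal policy being a single-pass flow model; the filter itself is just one value-function evaluation per step.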
AI · Neutral · arXiv – CS AI · Mar 4 · 4/10
🧠 Researchers have developed AnchorDrive, a two-stage AI framework that combines large language models with diffusion models to generate realistic safety-critical scenarios for autonomous driving systems. The system uses LLMs for controllable scenario generation from natural language instructions, then employs diffusion models to create realistic driving trajectories.