y0news

#autonomous-systems News & Analysis

71 articles tagged with #autonomous-systems. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · Ars Technica – AI · 1d ago · 🔥 8/10

Ukraine’s military robot surge aims to offset drone risks to humans

Ukraine is accelerating its deployment of military robots on the battlefield to reduce human casualties and mitigate risks from drone warfare. This shift reflects broader geopolitical trends where autonomous systems are becoming critical force multipliers in modern conflict zones.

AI · Bullish · Ars Technica – AI · 22h ago · 7/10

Robot dogs now read gauges and thermometers using Google Gemini

Google has integrated its Gemini AI model into robotic systems that can autonomously read industrial gauges and thermometers during facility inspections. This advancement combines computer vision with large language models to enable robots to interpret analog instruments, improving automation capabilities in industrial monitoring and maintenance operations.

🧠 Gemini
AI · Bullish · TechCrunch – AI · 23h ago · 7/10

OpenAI updates its Agents SDK to help enterprises build safer, more capable agents

OpenAI has enhanced its Agents SDK to enable enterprises to build AI agents with improved safety and capabilities. The update reflects the growing adoption of agentic AI systems in enterprise environments and OpenAI's commitment to providing developers with robust tools for deploying autonomous AI systems.

🏢 OpenAI
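The core pattern such agent SDKs expose is a guardrail layer that vets every tool call before it executes. A minimal sketch of that pattern follows; the class and function names are illustrative, not the actual OpenAI Agents SDK API:

```python
# Hypothetical guardrail layer for an enterprise agent: every tool call passes
# a set of safety checks before execution. Names are invented for illustration.

def block_shell_commands(tool_name, args):
    """Guardrail: veto any tool call that would run arbitrary shell commands."""
    return tool_name != "run_shell"

class GuardedAgent:
    def __init__(self, tools, guardrails):
        self.tools = tools            # name -> callable
        self.guardrails = guardrails  # list of (tool_name, args) -> bool

    def call_tool(self, name, **args):
        if not all(g(name, args) for g in self.guardrails):
            return {"status": "blocked", "tool": name}
        return {"status": "ok", "result": self.tools[name](**args)}

agent = GuardedAgent(
    tools={"add": lambda a, b: a + b, "run_shell": lambda cmd: None},
    guardrails=[block_shell_commands],
)
print(agent.call_tool("add", a=2, b=3))        # {'status': 'ok', 'result': 5}
print(agent.call_tool("run_shell", cmd="rm"))  # blocked by the guardrail
```

In a real SDK the guardrails would also cover model inputs and outputs, not just tool calls.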
AI · Neutral · arXiv – CS AI · 1d ago · 7/10

Policy-Invisible Violations in LLM-Based Agents

Researchers identified a critical failure mode in LLM-based agents called policy-invisible violations, where agents execute actions that appear compliant but breach organizational policies due to missing contextual information. They introduced PhantomPolicy, a benchmark with 600 test cases, and Sentinel, an enforcement framework using counterfactual graph simulation that achieved 93% accuracy in detecting violations compared to 68.8% for baseline approaches.
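The counterfactual idea can be sketched in a few lines: re-evaluate the proposed action against the full organizational context, not just the context the agent saw. The policy, contexts, and helper names below are hypothetical stand-ins, not the paper's implementation:

```python
# Toy sketch of counterfactual policy checking in the spirit of Sentinel.

POLICIES = {
    # Sharing a file is compliant only if the recipient is internal.
    "share_file": lambda ctx: ctx["recipient_internal"],
}

def agent_view(action, visible_ctx):
    """The agent's own compliance check over the context it can see."""
    rule = POLICIES.get(action)
    if rule is None:
        return True
    try:
        return bool(rule(visible_ctx))
    except KeyError:
        return True  # missing context -> the action *looks* compliant

def sentinel_check(action, visible_ctx, full_ctx):
    """Flag a policy-invisible violation: compliant in the agent's view but
    non-compliant once the hidden organizational context is filled in."""
    rule = POLICIES.get(action)
    if rule is None:
        return False
    return agent_view(action, visible_ctx) and not rule(full_ctx)

# The agent never saw who the recipient was; in reality they are external.
print(sentinel_check("share_file", {}, {"recipient_internal": False}))  # True
```

The paper's counterfactual graph simulation generalizes this single check to chains of actions.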

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Escaping the Context Bottleneck: Active Context Curation for LLM Agents via Reinforcement Learning

Researchers introduce ContextCurator, a reinforcement learning-based framework that decouples context management from task execution in LLM agents, addressing the context bottleneck problem. The approach pairs a lightweight specialized policy model with a frozen foundation model, achieving significant improvements in success rates and token efficiency across benchmark tasks.
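The curation step can be pictured as packing the highest-value memory items into a fixed token budget before each call to the frozen model. A minimal sketch, with invented items and hand-written relevance scores standing in for the learned policy:

```python
# Minimal sketch of active context curation (scores and items are invented).

def curate_context(items, budget):
    """items: list of (text, relevance, token_cost) tuples. Greedy packing by
    relevance density; a learned RL policy would replace these fixed scores."""
    ranked = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    chosen, used = [], 0
    for text, relevance, cost in ranked:
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return chosen, used

memory = [
    ("user goal: book a flight", 0.9, 10),
    ("yesterday's weather chat", 0.1, 40),
    ("saved passenger details", 0.8, 20),
]
ctx, tokens = curate_context(memory, budget=35)
print(ctx, tokens)  # keeps the two high-value items within 30 tokens
```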

🧠 GPT-4 · 🧠 Gemini
AI · Neutral · Import AI (Jack Clark) · 3d ago · 7/10

Import AI 453: Breaking AI agents; MirrorCode; and ten views on gradual disempowerment

Import AI 453 examines three developments in artificial intelligence: research on breaking AI agents that reverse-engineer complex software, the emergence of MirrorCode, and ten perspectives on gradual disempowerment. The newsletter analyzes the implications for AI safety, capabilities, and governance as autonomous systems grow more sophisticated.

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

SafeAdapt: Provably Safe Policy Updates in Deep Reinforcement Learning

Researchers introduce SafeAdapt, a novel framework for updating reinforcement learning policies while maintaining provable safety guarantees across changing environments. The approach uses a 'Rashomon set' to identify safe parameter regions and projects policy updates onto this certified space, addressing the critical challenge of deploying RL agents in safety-critical applications where dynamics and objectives evolve over time.
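The projection idea can be illustrated with a gradient step followed by a projection back onto a safe parameter region. In this sketch a simple axis-aligned box stands in for the paper's certified Rashomon-set region; numbers are illustrative:

```python
import numpy as np

def project_to_safe_set(theta, lower, upper):
    """Project parameters onto an axis-aligned safe box (stand-in for the
    certified region)."""
    return np.clip(theta, lower, upper)

def safe_update(theta, grad, lr, lower, upper):
    # Ordinary gradient step followed by projection into the safe set.
    return project_to_safe_set(theta - lr * grad, lower, upper)

lower, upper = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
theta = np.array([0.5, -0.2])
grad = np.array([-4.0, 1.0])  # the raw step would leave the safe box
print(safe_update(theta, grad, 0.2, lower, upper))  # clipped to [ 1.  -0.4]
```

The point of the certification is that any parameters inside the region, including the projected ones, retain the safety guarantee.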

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

Towards provable probabilistic safety for scalable embodied AI systems

Researchers propose a shift from deterministic to probabilistic safety verification for embodied AI systems, arguing that provable probabilistic guarantees offer a more practical path to large-scale deployment in safety-critical applications like autonomous vehicles and robotics than the infeasible goal of absolute safety across all scenarios.
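One standard way to turn simulated trials into a probabilistic guarantee (not necessarily the paper's method) is a concentration bound on the observed failure rate:

```python
import math

def failure_bound(failures, trials, delta=0.05):
    """One-sided Hoeffding upper bound: with probability >= 1 - delta, the
    true failure probability is at most p_hat + sqrt(ln(1/delta)/(2*trials))."""
    p_hat = failures / trials
    return p_hat + math.sqrt(math.log(1 / delta) / (2 * trials))

# 2 failures observed across 10,000 simulated scenarios
print(round(failure_bound(2, 10_000), 4))  # 0.0124 at 95% confidence
```

The bound shrinks with more trials, which is what makes "provable probabilistic safety" a scalable target where absolute safety is not.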

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Springdrift: An Auditable Persistent Runtime for LLM Agents with Case-Based Memory, Normative Safety, and Ambient Self-Perception

Researchers have developed Springdrift, a persistent runtime system for long-lived AI agents that maintains memory across sessions and provides auditable decision-making capabilities. The system was successfully deployed for 23 days, during which the AI agent autonomously diagnosed infrastructure problems and maintained context across multiple communication channels without explicit instructions.

AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

AI-Supervisor: Autonomous AI Research Supervision via a Persistent Research World Model

Researchers have developed AI-Supervisor, a multi-agent framework that maintains a persistent Research World Model to autonomously conduct end-to-end AI research supervision. Unlike traditional linear pipelines, the system uses specialized agents with structured gap discovery, self-correcting loops, and consensus mechanisms to continuously evolve research understanding.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Position: Agentic Evolution is the Path to Evolving LLMs

Researchers propose 'agentic evolution' as a new paradigm for adapting Large Language Models in real-world deployment environments. The A-Evolve framework treats adaptation as an autonomous, goal-directed optimization process that can continuously improve LLMs beyond static training limitations.

AI · Bullish · arXiv – CS AI · Mar 16 · 7/10

Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents

Researchers introduce the Darwin Gödel Machine (DGM), a self-improving AI system that can iteratively modify its own code and validate changes through benchmarks. The system demonstrated significant performance improvements, increasing coding capabilities from 20.0% to 50.0% on SWE-bench and from 14.2% to 30.7% on Polyglot benchmarks.
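The underlying loop is evaluate, modify, and keep only validated improvements. A toy version, where a one-parameter "benchmark" and a random mutation stand in for code modification and SWE-bench-style evaluation:

```python
import random

def benchmark(params):
    # Stand-in benchmark: score peaks (at 1.0) when params == 0.8.
    return 1.0 - abs(params - 0.8)

def self_improve(params, steps=200, seed=0):
    rng = random.Random(seed)
    best = benchmark(params)
    for _ in range(steps):
        candidate = params + rng.uniform(-0.1, 0.1)  # propose a modification
        score = benchmark(candidate)
        if score > best:                             # keep only validated wins
            params, best = candidate, score
    return params, best

final_params, final_score = self_improve(0.2)
print(final_score > benchmark(0.2))  # True: the loop improved on the start
```

DGM applies the same keep-if-better discipline to edits of its own agent code, which is what makes the gains compound.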

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Cognition Envelopes for Bounded Decision Making in Autonomous UAS Operations

Researchers introduce 'Cognition Envelopes' as a new framework to constrain AI decision-making in autonomous systems, addressing errors like hallucinations in Large Language Models and Vision-Language Models. The approach is demonstrated through autonomous drone search and rescue missions, establishing reasoning boundaries to complement traditional safety measures.
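An envelope of this kind amounts to hard boundaries that veto model-proposed actions before execution. A minimal sketch for a search-and-rescue drone; the limits and action set are hypothetical:

```python
# Minimal sketch of a cognition envelope: LLM-proposed flight actions are
# checked against hard reasoning boundaries before anything executes.

ENVELOPE = {
    "max_altitude_m": 120,
    "allowed_actions": {"hover", "ascend", "descend", "return_home"},
}

def within_envelope(action, target_altitude_m):
    """Return (allowed, reason); the proposed action runs only if allowed."""
    if action not in ENVELOPE["allowed_actions"]:
        return False, "action outside envelope"
    if target_altitude_m > ENVELOPE["max_altitude_m"]:
        return False, "altitude limit exceeded"
    return True, "ok"

print(within_envelope("ascend", 100))       # (True, 'ok')
print(within_envelope("ascend", 300))       # (False, 'altitude limit exceeded')
print(within_envelope("land_on_road", 10))  # (False, 'action outside envelope')
```

A hallucinated action or altitude simply fails the check, complementing lower-level safety systems rather than replacing them.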

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Adaptive Quantized Planetary Crater Detection System for Autonomous Space Exploration

Researchers propose an Adaptive Quantized Planetary Crater Detection System (AQ-PCDSys) that uses quantized neural networks and multi-sensor fusion to enable real-time AI-powered crater detection on resource-constrained space exploration hardware. The system addresses the critical bottleneck of deploying sophisticated deep learning models on power-limited, radiation-hardened space computers.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

The Controllability Trap: A Governance Framework for Military AI Agents

Researchers propose the Agentic Military AI Governance Framework (AMAGF) to address control failures in autonomous military AI systems. The framework introduces a Control Quality Score (CQS) to continuously measure and manage human control over AI agents throughout operations, moving beyond binary control models.
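A continuous control score can be pictured as a weighted average over several control dimensions, updated as an operation unfolds. The dimensions and weights below are invented for illustration, not the paper's CQS definition:

```python
# Hypothetical sketch of a Control Quality Score: human control measured as a
# weighted average over control dimensions, not a binary in/out-of-the-loop
# flag.

def control_quality_score(signals, weights):
    """signals: dict of control-dimension readings in [0, 1]; weights: dict of
    per-dimension importance. Returns a normalized score in [0, 1]."""
    total = sum(weights.values())
    return sum(signals[k] * weights[k] for k in weights) / total

signals = {"observability": 0.9, "interruptibility": 0.5, "predictability": 0.7}
weights = {"observability": 1.0, "interruptibility": 2.0, "predictability": 1.0}
print(round(control_quality_score(signals, weights), 2))  # 0.65
```

A governance framework would then act on threshold crossings, e.g. pausing the agent when the score drops below a floor.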

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Robustness of Agentic AI Systems via Adversarially-Aligned Jacobian Regularization

Researchers introduce Adversarially-Aligned Jacobian Regularization (AAJR), a new method to improve the robustness of autonomous AI agent systems by controlling sensitivity along adversarial directions rather than globally. This approach maintains better performance while ensuring stability in multi-agent AI ecosystems compared to existing methods.
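The core idea is to penalize the model's sensitivity only along an adversarial direction instead of the full Jacobian norm. A toy sketch with an illustrative model, data point, and direction:

```python
import numpy as np

def model(w, x):
    return np.tanh(w @ x)

def directional_sensitivity(w, x, v, eps=1e-4):
    # Central-difference estimate of the output's derivative along v.
    return abs(model(w, x + eps * v) - model(w, x - eps * v)) / (2 * eps)

def regularized_loss(w, x, y, v, lam=0.1):
    base = (model(w, x) - y) ** 2
    # Penalize sensitivity only along v, not the whole Jacobian.
    return base + lam * directional_sensitivity(w, x, v) ** 2

w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
v = np.array([1.0, 0.0])  # adversarial direction, assumed found separately
print(round(regularized_loss(w, x, y=0.0, v=v), 4))  # 0.0344
```

Because directions orthogonal to v are left unpenalized, useful sensitivity is preserved, which is the claimed advantage over global Jacobian regularization.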

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Self-Improving Loops for Visual Robotic Planning

Researchers developed SILVR, a self-improving system for visual robotic planning that uses video generative models to continuously enhance robot performance through self-collected data. The system demonstrates improved task performance across MetaWorld simulations and real robot manipulations without requiring human-provided rewards or expert demonstrations.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Learning Contextual Runtime Monitors for Safe AI-Based Autonomy

Researchers introduce a novel framework for learning context-aware runtime monitors for AI-based control systems in autonomous vehicles. The approach uses contextual multi-armed bandits to select the best controller for current conditions rather than averaging outputs, providing theoretical safety guarantees and improved performance in simulated driving scenarios.
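The select-don't-average idea can be sketched with a per-context value table; a simplified epsilon-greedy rule stands in for the paper's bandit algorithm, and the controllers and contexts are invented:

```python
import random

class ContextualSelector:
    """Pick the single best controller for the current context instead of
    averaging controller outputs."""

    def __init__(self, controllers, contexts, eps=0.1, seed=0):
        self.values = {c: {a: 0.0 for a in controllers} for c in contexts}
        self.counts = {c: {a: 0 for a in controllers} for c in contexts}
        self.eps, self.rng = eps, random.Random(seed)

    def select(self, context):
        if self.rng.random() < self.eps:  # occasional exploration
            return self.rng.choice(list(self.values[context]))
        return max(self.values[context], key=self.values[context].get)

    def update(self, context, controller, reward):
        # Incremental mean of observed safety/performance reward.
        self.counts[context][controller] += 1
        n = self.counts[context][controller]
        v = self.values[context][controller]
        self.values[context][controller] = v + (reward - v) / n

sel = ContextualSelector(["vision", "lidar"], ["clear", "fog"], eps=0.0)
sel.update("fog", "lidar", 1.0)   # lidar controller performs well in fog
sel.update("fog", "vision", 0.2)  # vision controller does not
print(sel.select("fog"))          # lidar
```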

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

CoFL: Continuous Flow Fields for Language-Conditioned Navigation

Researchers present CoFL, a new AI navigation system that uses continuous flow fields to enable robots to navigate based on language commands. The system outperforms existing modular approaches by directly mapping bird's-eye view observations and instructions to smooth navigation trajectories, demonstrating successful zero-shot deployment in real-world experiments.

AI × Crypto · Bullish · The Block · Mar 4 · 7/10

What is Coinbase’s x402 protocol?

Coinbase has developed the x402 protocol to address payment challenges faced by AI agents in financial operations. The protocol aims to provide autonomous bots with access to fast, cheap, high-volume transactions that traditional payment systems cannot offer, eliminating the need for human intervention in setting up payment methods.
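The interaction pattern, as publicly described, builds on the HTTP 402 Payment Required status: the server quotes its terms, and the client retries with a payment attached. The mock below illustrates the flow; the handler, header name, and payloads are simplified stand-ins, not Coinbase's actual implementation:

```python
def mock_server(request):
    # No payment attached -> quote the price via 402 Payment Required.
    if "X-PAYMENT" not in request.get("headers", {}):
        return {"status": 402, "accepts": {"amount": "0.001", "asset": "USDC"}}
    return {"status": 200, "body": "premium data"}

def agent_fetch(make_payment):
    """An autonomous agent's retry loop: pay and re-request on HTTP 402."""
    resp = mock_server({"headers": {}})
    if resp["status"] == 402:
        payment = make_payment(resp["accepts"])  # sign/settle, e.g. on-chain
        resp = mock_server({"headers": {"X-PAYMENT": payment}})
    return resp

result = agent_fetch(lambda terms: f"paid {terms['amount']} {terms['asset']}")
print(result)  # {'status': 200, 'body': 'premium data'}
```

Because the whole exchange happens inside one request/retry cycle, no human needs to pre-register a payment method for the bot.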
