A new thesis examines explainable AI planning (XAIP) for hybrid systems, addressing the critical challenge of making autonomous planning decisions interpretable in safety-critical applications. As AI automation expands into domains like autonomous vehicles, energy grids, and healthcare, the ability to explain system reasoning becomes essential for trust and regulatory compliance.
The advancement of automated planning systems has accelerated deployment across high-stakes domains where understanding decision-making processes is non-negotiable. This research tackles a fundamental gap in the AI planning community: while planning algorithms have become increasingly sophisticated, their decision-making remains largely opaque to users, regulators, and stakeholders who need transparency. The proliferation of autonomous systems in critical infrastructure—from smart grids managing power distribution to autonomous vehicles making life-or-death navigation choices—has exposed the inadequacy of black-box planning approaches.

Hybrid systems, which integrate discrete symbolic planning with continuous control mechanisms, represent a closer approximation of real-world complexity than traditional planning models. This thesis addresses why explainability matters: when autonomous systems fail or make controversial decisions, stakeholders require comprehensible explanations to assign responsibility, improve systems, and maintain public trust.

For developers and deploying organizations, explainable planning directly impacts adoption timelines and regulatory approval pathways. Healthcare systems, autonomous transportation providers, and grid operators all face scrutiny regarding AI decision transparency. The research contributes methodologically to bridging the gap between what planning systems do and what humans can understand about why they do it. Looking ahead, regulatory frameworks will likely mandate explainability requirements, making this research timely for organizations investing in autonomous solutions. The ability to generate human-understandable explanations for complex planning decisions will become a competitive differentiator and compliance necessity.
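The core ideas above—discrete action choices whose effects unfold through continuous dynamics, paired with human-readable justifications for each choice—can be illustrated with a minimal sketch. Everything here (the toy battery domain, the greedy search, the explanation strings) is illustrative and not taken from the thesis:

```python
from dataclasses import dataclass

# Toy hybrid domain (hypothetical): a device must reach a target battery
# charge. Discrete actions select an operating mode; each mode has a
# continuous charge rate integrated over a fixed time step (Euler step).

@dataclass
class Action:
    name: str
    rate: float  # charge change per time unit while this mode is active

ACTIONS = [
    Action("charge", rate=+2.0),
    Action("idle", rate=-0.1),
    Action("run_task", rate=-1.5),
]

DT = 1.0  # duration of each discrete planning step


def plan(charge: float, target: float, max_steps: int = 20):
    """Greedy forward search: at each step, pick the action whose continuous
    effect moves charge closest to the target, and record a short textual
    explanation of why that action was chosen."""
    steps, explanations = [], []
    for _ in range(max_steps):
        if abs(charge - target) < 0.25:
            break
        best = min(ACTIONS, key=lambda a: abs((charge + a.rate * DT) - target))
        charge += best.rate * DT
        steps.append(best.name)
        explanations.append(
            f"chose '{best.name}' because its rate {best.rate:+.1f} "
            f"moves charge to {charge:.1f} (target {target:.1f})"
        )
    return steps, explanations, charge


steps, why, final = plan(charge=3.0, target=9.0)
# steps → ['charge', 'charge', 'charge'], final → 9.0
```

The explanation log is the point: each discrete decision carries a trace of the continuous reasoning behind it, which is the kind of per-step justification that explainable planning systems aim to produce at far greater scale and fidelity.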
- Explainable AI planning addresses critical transparency needs in autonomous systems across safety-critical domains.
- Hybrid systems better represent real-world complexity by combining discrete planning with continuous control mechanisms.
- Growing regulatory and stakeholder pressure requires AI systems to justify their decisions transparently.
- Lack of explainability currently limits deployment of advanced planning systems in healthcare, transportation, and infrastructure.
- Research in XAIP will likely become foundational for future autonomous system standards and regulations.