Building Intelligent and Intelligible AI: A Framework for Human-like Autonomy and Explainability in Critical Infrastructure
Abstract: This paper presents a novel AI system architecture for critical infrastructure that emphasizes the role of human-robot interaction (HRI) in improving the explainability of AI decision-making. The proposed approach combines Case-Based Reasoning (CBR) with an ontology to create a dynamic AI framework that ensures transparency and trustworthiness in autonomous operations. The CBR component employs prototype and exemplar methods from cognitive psychology to mimic human decision-making, while the ontology component organizes the knowledge base, enhancing clarity and comprehension. The architecture consists of three interconnected components that together improve the AI's explainability and adaptability in dynamic environments: 1) a CBR-powered decision-making module that enables the AI to learn from past experiences and justify its actions in a human-understandable format; 2) an ontology-guided knowledge framework that provides a structured, semantic representation of information to guide AI operations; and 3) an HRI mechanism that facilitates effective collaboration between humans and the AI system, ensuring that autonomous decisions remain transparent and subject to human oversight. We evaluate the architecture in a simulated environment, demonstrating improved explainability and reliability. Our findings show that integrating CBR with an ontology links decision-making to transparency, highlighting the role of HRI in accountable AI for critical infrastructure. The paper discusses the design and outcomes of the study, paving the way for more transparent and trustworthy AI systems for critical infrastructure.
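The exemplar-style CBR retrieval described in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: the `Case` fields, the feature-distance similarity measure, and the example scenarios are all assumptions introduced for illustration. It shows the core idea of reusing the action of the most similar past case while exposing a human-readable rationale for explainability.

```python
from dataclasses import dataclass

@dataclass
class Case:
    features: dict   # situation descriptors, e.g. {"load": 0.8} (hypothetical)
    action: str      # decision taken in that situation
    rationale: str   # human-readable justification, exposed for explainability

class CaseBase:
    """Exemplar-style CBR: retrieve the most similar stored case
    and reuse its action together with its rationale."""

    def __init__(self):
        self.cases = []

    def add(self, case):
        self.cases.append(case)

    def similarity(self, a, b):
        # Mean absolute distance over shared numeric features, inverted
        # so that 1.0 means identical situations.
        keys = set(a) & set(b)
        if not keys:
            return 0.0
        return 1.0 - sum(abs(a[k] - b[k]) for k in keys) / len(keys)

    def retrieve(self, query):
        # Return the nearest exemplar to the query situation.
        return max(self.cases, key=lambda c: self.similarity(c.features, query))

# Usage: two illustrative past cases from a power-grid-like scenario.
cb = CaseBase()
cb.add(Case({"load": 0.9, "temp": 0.7}, "shed_load",
            "High load and temperature: shed non-critical load"))
cb.add(Case({"load": 0.3, "temp": 0.4}, "normal_ops",
            "Nominal conditions: continue normal operation"))
best = cb.retrieve({"load": 0.85, "temp": 0.65})
print(best.action, "-", best.rationale)  # → shed_load - High load and temperature: shed non-critical load
```

The retrieved case carries its own justification, so the decision can be explained to a human operator in terms of the precedent it was based on, which is the transparency property the architecture targets.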