Abstract: With rapid advancements in Artificial Intelligence (AI), autonomous agents are increasingly expected to manage complex situations in which learning-enabled algorithms are vital. However, integrating these advanced algorithms poses significant challenges, especially concerning safety and reliability. This research emphasizes the importance of incorporating human-machine collaboration into the systems engineering process when designing learning-enabled increasingly autonomous systems (LEIAS). Our proposed LEIAS architecture prioritizes communication representation and pilot preference learning to improve operational safety. Leveraging the Soar cognitive architecture, the system merges symbolic decision logic with numeric decision preferences refined through reinforcement learning. A core aspect of this approach is transparency: the LEIAS provides pilots with a comprehensive, interpretable view of the system's state, including detailed assessments of sensor reliability across GPS, IMU, and LIDAR data. This multi-sensor assessment is critical for diagnosing discrepancies and maintaining trust. Additionally, the system learns and adapts to pilot preferences, enabling responsive, context-driven decision-making. Autonomy is escalated incrementally based on necessity, ensuring that pilots retain control in standard scenarios and receive assistance only when required. Simulation studies conducted in the X-Plane simulation environment validate the architecture's efficacy, demonstrating its performance in managing sensor anomalies and enhancing human-machine collaboration, ultimately advancing safety in complex operational environments.
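To make the symbolic-plus-numeric decision scheme described above concrete, the following is a minimal illustrative sketch in Python, not the paper's Soar implementation: rule-based logic proposes candidate actions from per-sensor reliability scores, reinforcement-learning-style numeric preferences learned from pilot feedback select among them, and autonomy escalates only when sensor reliability degrades. All names, thresholds, and the action set (SensorReport, Agent, alert_pilot, suggest_reroute, assume_control) are hypothetical placeholders chosen for illustration.

```python
# Illustrative sketch only; hypothetical names, not the LEIAS implementation.
from dataclasses import dataclass, field
import random


@dataclass
class SensorReport:
    name: str           # e.g. "GPS", "IMU", "LIDAR"
    reliability: float  # 0.0 (untrusted) .. 1.0 (fully trusted)


@dataclass
class Agent:
    # Numeric preference per action, adapted from pilot feedback (reward),
    # loosely analogous to numeric preferences tuned by reinforcement learning.
    q: dict = field(default_factory=lambda: {"alert_pilot": 0.0,
                                             "suggest_reroute": 0.0,
                                             "assume_control": 0.0})
    alpha: float = 0.1    # learning rate
    epsilon: float = 0.1  # exploration rate

    def propose(self, sensors: list[SensorReport]) -> list[str]:
        """Symbolic layer: rule-based candidate actions from sensor state."""
        degraded = [s for s in sensors if s.reliability < 0.5]
        if not degraded:
            return ["alert_pilot"]                        # pilot keeps control
        if len(degraded) == 1:
            return ["alert_pilot", "suggest_reroute"]
        return ["suggest_reroute", "assume_control"]      # escalate only if needed

    def select(self, candidates: list[str]) -> str:
        """Numeric layer: learned preferences break ties among candidates."""
        if random.random() < self.epsilon:
            return random.choice(candidates)
        return max(candidates, key=lambda a: self.q[a])

    def update(self, action: str, reward: float) -> None:
        """Adapt numeric preferences from pilot feedback (e.g. accept/override)."""
        self.q[action] += self.alpha * (reward - self.q[action])


# Example: a degraded GPS report biases the agent toward assisting the pilot.
agent = Agent()
sensors = [SensorReport("GPS", 0.3), SensorReport("IMU", 0.9), SensorReport("LIDAR", 0.8)]
action = agent.select(agent.propose(sensors))
agent.update(action, reward=1.0 if action == "alert_pilot" else -0.5)
```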