Abstract: In the field of Advanced Driver Assistance Systems (ADAS), car navigation systems have become an essential part of modern driving. However, the guidance provided by existing car navigation systems is often difficult for drivers to understand through voice instructions alone. This challenge has led to growing interest in Human-like Guidance (HLG), a task focused on delivering intuitive navigation instructions that mimic the way a passenger would guide a driver. However, previous studies have relied on rule-based systems to generate HLG datasets, which are inflexible and of low quality due to their limited textual representation, whereas high-quality datasets are crucial for improving model performance. In this study, we propose a method to automatically generate high-quality navigation sentences from image data using a Large Language Model with a novel prompting approach. Additionally, we introduce a Mixture of Experts (MoE) framework for d