Optimizing Visual Question Answering Models for Driving: Bridging the Gap Between Human and Machine Attention Patterns

Published: 22 Apr 2024 · Last Modified: 30 Apr 2024 · VLADR 2024 Poster · CC BY 4.0
Keywords: VQA models, autonomous driving, filter integration, object detection
TL;DR: We propose an approach integrating filters to optimize a VQA model’s attention mechanisms, prioritizing relevant objects in a driving context.
Abstract: Visual Question Answering (VQA) models play a critical role in enhancing the perception capabilities of autonomous driving systems: by allowing vehicles to analyze visual inputs alongside textual queries, they foster natural interaction and trust between the vehicle and its occupants or other road users. This study compares the attention patterns of humans and a VQA model when answering driving-related questions, revealing disparities in the objects they attend to. We propose an approach that integrates filters to optimize the model's attention mechanisms, prioritizing relevant objects and improving accuracy. Using the LXMERT model as a case study, we compare the attention patterns of the pre-trained and filter-integrated models against human answers on images from the nuImages dataset, gaining insights into feature prioritization. We evaluated the models using a subjective scoring framework, which shows that integrating the feature encoder filter enhances the performance of the VQA model by refining its attention mechanisms.
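For a concrete picture of the idea, the following is a minimal sketch of one way a feature-level filter could be wired in front of LXMERT's cross-modality encoder using the HuggingFace `transformers` API. It is an illustration under stated assumptions, not the authors' implementation: the down-weighting scheme, the `RELEVANT_CLASSES` set, the `filter_visual_feats` helper, and the dummy region features are all hypothetical stand-ins for the paper's feature encoder filter and detector outputs.

```python
# Illustrative sketch (NOT the authors' released code): assumes the filter
# re-weights detected-object region features by a driving-relevance score
# before they enter LXMERT's cross-modality encoder.
import torch
from transformers import LxmertModel, LxmertTokenizer

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

# Hypothetical set of driving-relevant detector classes
# (e.g., drawn from nuImages annotation labels).
RELEVANT_CLASSES = {"car", "pedestrian", "traffic light", "truck", "bicycle"}

def filter_visual_feats(visual_feats, visual_pos, class_labels):
    """Down-weight region features whose detected class is not driving-relevant.

    visual_feats: (num_objects, 2048) Faster R-CNN region features
    visual_pos:   (num_objects, 4) normalized bounding boxes
    class_labels: list of num_objects detector class names
    """
    weights = torch.tensor(
        [1.0 if c in RELEVANT_CLASSES else 0.1 for c in class_labels]
    ).unsqueeze(-1)                          # (num_objects, 1)
    return visual_feats * weights, visual_pos

# Forward pass with pre-extracted (here: dummy) region features.
question = "Is it safe to change lanes to the left?"
inputs = tokenizer(question, return_tensors="pt")

num_objects = 36
feats = torch.randn(num_objects, 2048)       # stand-in for detector features
boxes = torch.rand(num_objects, 4)           # stand-in for normalized boxes
labels = ["car"] * 10 + ["tree"] * 26        # stand-in detector class labels

feats, boxes = filter_visual_feats(feats, boxes, labels)
outputs = model(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    visual_feats=feats.unsqueeze(0),         # (1, num_objects, 2048)
    visual_pos=boxes.unsqueeze(0),           # (1, num_objects, 4)
)
print(outputs.pooled_output.shape)           # (1, 768) cross-modal representation
```

Because the filtering happens on the visual features before the cross-modality layers, the model's attention is steered toward relevant objects without modifying LXMERT's weights; any soft re-weighting or hard masking scheme could be substituted at the same point.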
Supplementary Material: zip
Submission Number: 6