Abstract: To advance research on AI-assisted, efficient damage assessment during natural disasters, we present in this study a large-scale visual question answering (VQA) dataset on remote sensing images, namely RescueNet-VQA. Visual question answering is the task of extracting query-based scene information from images. The main advantage of this approach is that it can provide high-level scene information while interacting with users. Owing to this merit, VQA has the potential to support decision-making for rapid response and recovery during a disaster. To enable substantial research in this context, we present a novel VQA dataset for damage assessment on remote sensing imagery. The images in our dataset were collected after Hurricane Michael. We have generated 103,192 image-question-answer triplets from 4,375 images. This is the only large-scale visual question answering dataset based on remote sensing imagery for damage assessment. We describe the image collection and question generation procedures along with dataset statistics in this work.