Abstract: Automated image captioning has the potential to
be a useful tool for people with vision impairments. Images taken by this user group are often noisy, which leads to incorrect and even unsafe model predictions. In this paper, we propose
a quality-agnostic framework to improve the performance and robustness of image captioning models for visually impaired people. We address this
problem from three angles: data, model, and evaluation. First, we show how data augmentation
techniques for generating synthetic noise can address data sparsity in this domain. Second, we enhance the robustness of the model by expanding a
state-of-the-art model to a dual network architecture, using the augmented data and leveraging different consistency losses. Our results demonstrate
increased performance, e.g. an absolute improvement of 2.15 on CIDEr, compared to state-of-the-art image captioning networks, as well as increased
robustness to noise, with up to a 3-point improvement on CIDEr in noisier settings. Finally,
we evaluate the prediction reliability using confidence calibration on images with different difficulty/noise levels, showing that our models perform more reliably in safety-critical situations. The
improved model is part of an assisted living application, which we develop in partnership with the
Royal National Institute of Blind People.