CAESAR: An Embodied Simulator for Generating Multimodal Referring Expression Datasets

Published: 17 Sept 2022, Last Modified: 23 May 2023
Venue: NeurIPS 2022 Datasets and Benchmarks
Readers: Everyone
Keywords: Embodied Simulator, Referring Expression, Multimodal Spatial Relation Grounding
Abstract: Humans naturally use verbal utterances and nonverbal gestures to refer to various objects (known as $\textit{referring expressions}$) in different interactional scenarios. As collecting real human interaction datasets is costly and laborious, synthetic datasets are often used to train models to unambiguously detect relationships among objects. However, existing synthetic data generation tools that provide referring expressions generally neglect nonverbal gestures. Additionally, while a few small-scale datasets contain multimodal cues (verbal and nonverbal), these datasets only capture the nonverbal gestures from an exocentric (observer) perspective. As models can use complementary information from multimodal cues to recognize referring expressions, generating multimodal data from multiple views can help develop robust models. To address these critical issues, in this paper, we present a novel embodied simulator, CAESAR, to generate multimodal referring expressions containing both verbal utterances and nonverbal cues captured from multiple views. Using our simulator, we have generated two large-scale embodied referring expression datasets, which we have released publicly. We have conducted experimental analyses on embodied spatial relation grounding using various state-of-the-art baseline models. Our experimental results suggest that visual perspective affects model performance and that nonverbal cues improve spatial relation grounding accuracy. Finally, we will release the simulator publicly to allow researchers to generate new embodied interaction datasets.
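To make the structure of a multimodal referring expression concrete, the sketch below shows what one such sample might look like. This is a hypothetical illustration only: the class name `ReferringExpressionSample` and the fields (`utterance`, `gesture`, `views`, `target_object`, `relation`) are assumptions made for exposition, not the actual CAESAR dataset schema (see https://caesar-simulator.github.io for the released format).

```python
# Hypothetical sketch of one multimodal referring-expression sample.
# All field names and values are illustrative assumptions, NOT the
# released CAESAR schema (see https://caesar-simulator.github.io).
from dataclasses import dataclass
from typing import List


@dataclass
class ReferringExpressionSample:
    utterance: str          # verbal cue, e.g. "the mug left of the book"
    gesture: str            # nonverbal cue, e.g. "point" or "gaze"
    views: List[str]        # image paths from multiple cameras (ego/exo)
    target_object: str      # the object the expression refers to
    relation: str           # spatial relation label, e.g. "left_of"


# Example instance combining a verbal utterance, a pointing gesture,
# and frames captured from both an ego-centric and an exo-centric view.
sample = ReferringExpressionSample(
    utterance="the red cup to the left of the laptop",
    gesture="point",
    views=["ego_view_0001.png", "exo_view_0001.png"],
    target_object="red_cup",
    relation="left_of",
)
print(sample.utterance, "|", sample.gesture, "|", len(sample.views), "views")
```

Pairing each utterance with a gesture label and per-view frames is what lets a grounding model exploit complementary verbal and nonverbal cues, the property the abstract's experiments evaluate.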
Dataset Embargo: We have released the simulator, datasets, and baseline source code: https://caesar-simulator.github.io
Author Statement: Yes
TL;DR: A novel embodied simulator to generate multimodal referring expressions containing both verbal utterances and nonverbal gestures captured from multiple views.
Supplementary Material: zip
License: Our datasets are released under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/). Moreover, our simulator source code will be released under the BSD 3-Clause license (https://opensource.org/licenses/BSD-3-Clause).
URL: https://caesar-simulator.github.io
Dataset Url: The simulator, datasets, and baseline source code can be accessed here: https://caesar-simulator.github.io
Contribution Process Agreement: Yes
In Person Attendance: Yes