EAT: The ICMI 2018 Eating Analysis and Tracking Challenge

Published: 01 Jan 2018 · Last Modified: 27 Sep 2024 · ICMI 2018 · License: CC BY-SA 4.0
Abstract: The multimodal recognition of eating condition (whether a person is eating or not, and, if so, which food type) is a new research domain in speech and video processing with many promising applications for future multimodal interfaces, such as adapting speech recognition or lip-reading systems to different eating conditions. We herein describe the ICMI 2018 Eating Analysis and Tracking (EAT) Challenge, which addresses, for the first time in a research competition under well-defined conditions, new classification tasks in the area of user data analysis, namely the audio-visual classification of user eating conditions. We define three Sub-Challenges based on classification tasks in which participants are encouraged to use the speech and/or video recordings of the audio-visual iHEARu-EAT database. In this paper, we describe the dataset, the Sub-Challenges and their conditions, and the baseline feature extraction and performance measures provided to the participants.
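The abstract refers to official performance measures without naming them here; challenges in this series commonly score submissions with unweighted average recall (UAR), which averages per-class recalls so that each class counts equally regardless of how many samples it has. The following is a minimal illustrative sketch under that assumption, not the Challenge's official definition:

```python
import numpy as np

def unweighted_average_recall(y_true, y_pred):
    """UAR: the mean of per-class recalls, so every class is
    weighted equally regardless of its frequency in the data."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = []
    for c in np.unique(y_true):
        mask = (y_true == c)                      # samples whose true label is c
        recalls.append(np.mean(y_pred[mask] == c))  # recall for class c
    return float(np.mean(recalls))

# Toy example with three imbalanced classes (e.g. eating conditions):
y_true = [0, 0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 0, 1, 1, 0]
print(unweighted_average_recall(y_true, y_pred))  # (3/4 + 2/2 + 0/1) / 3 ≈ 0.583
```

Because UAR is class-balanced, a classifier that always predicts the majority class scores only 1/K for K classes, which makes it a stricter metric than plain accuracy on imbalanced data.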