GRU-M: A Joint Impute and Learn Approach for Sequential Prediction under Missing Data

Published: 05 Sept 2024, Last Modified: 16 Oct 2024 · ACML 2024 Conference Track · CC BY 4.0
Keywords: Sequential Prediction, Missing Data, RNN, GRU, Time-series
TL;DR: The paper introduces an RNN unit that extends the GRU to intelligently handle missing data during sequence (time-series) prediction.
Abstract: Sequential prediction in the presence of missing data is a long-standing research problem. Classically, researchers have tackled it by first imputing the data and then building predictive models. This two-stage process is prone to error propagation, and to circumvent it a variety of techniques have been proposed that jointly impute and learn before prediction. Among these, Recurrent Neural Networks (RNNs) have been especially popular, given their natural ability to handle sequential data efficiently. Existing state-of-the-art approaches either (i) do not impute, (ii) do not fully exploit the information available around a gap, or (iii) ignore position information within a gap. Our approach addresses these shortcomings with a novel architecture that jointly imputes and learns by taking into account (i) information from either end of the gap, (ii) proximity to the left/right end of the gap, and (iii) the length of the gap. In the context of this work, prediction means either sequence classification or forecasting. In this paper, we demonstrate the utility of the proposed architecture on forecasting tasks, benchmarking against a range of state-of-the-art baselines in scenarios where data is either (a) naturally missing or (b) synthetically masked.
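The abstract names three ingredients for handling a gap: the values at either end of the gap, each position's proximity to those ends, and the gap's length. The paper's actual GRU-M cell equations are not reproduced on this page, so the following is only an illustrative NumPy sketch of the imputation idea: each missing value inside a gap is filled with a blend of the gap's two endpoint values, weighted by proximity (for an interior gap this reduces to linear interpolation). The function name and all implementation details are hypothetical, not the authors' method.

```python
import numpy as np

def bidirectional_gap_impute(x):
    """Fill NaN gaps in a 1-D series using the observed values at either
    end of each gap, weighted by proximity to each end -- a sketch of the
    ideas stated in the abstract, not the paper's actual equations."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    t = 0
    while t < n:
        if np.isnan(x[t]):
            l = t - 1                          # last observed index (may be -1)
            r = t
            while r < n and np.isnan(x[r]):
                r += 1                         # first observed index after the gap
            for k in range(l + 1, r):
                if l >= 0 and r < n:
                    alpha = (r - k) / (r - l)  # weight from proximity to left end
                    x[k] = alpha * x[l] + (1 - alpha) * x[r]
                elif l >= 0:                   # gap runs to the end: carry forward
                    x[k] = x[l]
                else:                          # gap at the start: carry backward
                    x[k] = x[r]
            t = r
        else:
            t += 1
    return x
```

For example, `bidirectional_gap_impute([1.0, np.nan, np.nan, 4.0])` yields `[1.0, 2.0, 3.0, 4.0]`. In a joint impute-and-learn architecture such weights would be learned alongside the recurrent parameters (and conditioned on gap length) rather than fixed, which is the distinction the abstract draws against two-stage impute-then-predict pipelines.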
Primary Area: Deep Learning (architectures, deep reinforcement learning, generative models, deep learning theory, etc.)
Student Author: No
Submission Number: 243