[Re] Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention

Published: 01 Apr 2021, Last Modified: 05 May 2023 (RC2020)
Abstract: The presented study evaluates “Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention” by Garnot et al. (2020) within the scope of the ML Reproducibility Challenge 2020. Our work focuses on both aspects constituting the paper: the method itself and the validity of the stated results. We show that, despite some unforeseen design choices, the investigated method is coherent in itself and performs as expected.

Scope of Reproducibility: The evaluated paper presents a method to classify crop types from multispectral satellite image time series with a newly developed pixel-set encoder and an adaptation of the Transformer (Vaswani et al., 2017) called the temporal attention encoder.

Methodology: To assess both the architecture and the performance of the approach, we first attempted to implement the method from scratch, then studied the authors’ openly provided code. Additionally, we compiled an alternative dataset similar to the one presented in the paper and evaluated the methodology on it.

Results: We were initially unable to reproduce the method because we misinterpreted the authors’ adaptation of the Transformer (Vaswani et al., 2017). However, the publicly available implementation answered our questions and proved its validity in our experiments on different datasets. Additionally, we compared the paper’s temporal attention encoder to our own adaptation of it, which emerged while we were trying to reimplement and grasp the authors’ ideas.

What was easy: Running the provided code and obtaining the presented dataset was straightforward. Even adapting the method to our own ideas caused no issues, thanks to a well-documented and clear implementation.

What was difficult: Reimplementing the approach from scratch turned out to be harder than expected, especially because we had a certain type of architecture in mind that did not fit the dimensions of the layers mentioned in the paper. Knowing exactly how the dataset was assembled would also have helped us, as we tried to retrace these steps so that the results on our dataset would be easier to compare to those in the paper.

Communication with original authors: While working on the challenge, we were in e-mail contact with the first and second authors, had two online meetings, and received feedback on our implementation on GitHub. Additionally, one of the authors of the Transformer paper (Vaswani et al., 2017) provided us with further answers regarding their model’s architecture.
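To make the two ideas named above concrete, the following is a minimal NumPy sketch, not the authors’ implementation: a permutation-invariant pixel-set encoder (a shared per-pixel map followed by statistics pooling) and a single head of scaled dot-product temporal self-attention over the per-date embeddings. All dimensions, weight initializations, and the final mean-pooling over time are illustrative assumptions and do not reflect the paper’s actual layer sizes or its master-query design.

```python
import numpy as np

# Toy dimensions (assumed, not from the paper): C spectral channels,
# S pixels sampled per parcel, T acquisition dates, H hidden width.
C, S, T, H = 10, 32, 24, 8
d = 2 * H  # embedding size after mean/std pooling

rng = np.random.default_rng(0)
W1 = 0.1 * rng.normal(size=(C, H))   # shared per-pixel weights
Wq = 0.1 * rng.normal(size=(d, d))   # query projection
Wk = 0.1 * rng.normal(size=(d, d))   # key projection
Wv = 0.1 * rng.normal(size=(d, d))   # value projection

def pixel_set_encode(pixels):
    """pixels: (S, C) -> (d,) embedding, invariant to pixel order."""
    h = np.tanh(pixels @ W1)                       # shared map per pixel
    return np.concatenate([h.mean(0), h.std(0)])   # statistics pooling

def temporal_attention(E):
    """E: (T, d) -> (d,) via scaled dot-product self-attention."""
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    scores = Q @ K.T / np.sqrt(d)                  # (T, T) attention logits
    scores -= scores.max(axis=1, keepdims=True)    # numerically stable softmax
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)
    return (A @ V).mean(axis=0)                    # collapse time (assumption)

series = rng.normal(size=(T, S, C))                # one parcel's time series
E = np.stack([pixel_set_encode(x) for x in series])  # (T, d) date embeddings
feat = temporal_attention(E)                       # (d,) descriptor
print(feat.shape)
```

A classifier head (e.g. a small MLP over `feat`) would complete the pipeline; positional encodings for the acquisition dates, which the temporal attention encoder relies on, are omitted here for brevity.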
Paper Url: https://openreview.net/forum?id=p2EWH_x7QIc&noteId=yZYq-yavLgy
