Learning Representations with Seq2Seq Models for Damage Detection

Anonymous

10 Mar 2022 (modified: 05 May 2023) · Submitted to ICLR 2022 DGM4HSD workshop
Keywords: Representation learning, Seq2Seq model, Damage detection
Abstract: Natural hazards cause damage to buildings and economic losses worldwide. Post-hazard response requires accurate and fast damage detection and assessment. Data-driven damage detection has emerged as an alternative to conventional visual inspection by humans. We use a Seq2Seq model to learn damage representations by training only on undamaged signals. We validate our Seq2Seq model on a signal dataset collected from a two-story timber building. Results show that our Seq2Seq model can clearly distinguish damage representations across different damage states. Our code is available at the repository: \url{https://github.com/qryang/Damage-representation}.
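The sketch below illustrates the general idea described in the abstract: a Seq2Seq (encoder-decoder) model trained to reconstruct only undamaged signals, whose latent representation and reconstruction error can then be compared against the undamaged baseline to flag damage. It is a minimal illustration, not the authors' implementation (see the linked repository for the official code); the architecture, hyperparameters, and synthetic data are assumptions.

```python
# Minimal sketch of a Seq2Seq autoencoder trained only on undamaged signals.
# Not the authors' implementation; all settings below are illustrative.
import torch
import torch.nn as nn


class Seq2SeqAutoencoder(nn.Module):
    def __init__(self, n_features=1, hidden_size=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, n_features)

    def forward(self, x):
        # Encode the whole sequence; the final hidden state is the learned representation.
        _, (h, _) = self.encoder(x)
        # Repeat the representation at every time step and decode it back into a signal.
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        out, _ = self.decoder(z)
        return self.output(out), h[-1]


model = Seq2SeqAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# Illustrative stand-in for undamaged response signals: (batch, time steps, features).
undamaged = torch.randn(32, 200, 1)

for epoch in range(10):
    optimizer.zero_grad()
    reconstruction, _ = model(undamaged)
    loss = criterion(reconstruction, undamaged)  # reconstruct undamaged signals only
    loss.backward()
    optimizer.step()

# At test time, a new signal's latent representation and reconstruction error can be
# compared against those of undamaged signals; large deviations suggest damage.
test_signal = torch.randn(1, 200, 1)
with torch.no_grad():
    recon, representation = model(test_signal)
    damage_score = criterion(recon, test_signal).item()
```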