Contrastive Representation Learning for Multi-scale Spatial Scenes

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: Spatial scenes, which are composed of spatial objects and their spatial relations, are the basis of geographic information retrieval, spatial cognition, and spatial search. Despite the wide use of spatial scenes, representation learning on spatial scenes that contain complex compositions of spatial objects remains a challenge, since the spatial data types of geographic objects (e.g., points, polylines, and polygons) and the geographical scales vary across different spatial scenes. Inspired by recently proposed multi-scale location encoding models such as Space2Vec, we propose a multi-scale spatial scene encoding model called Scene2Vec to address these representational challenges. In Scene2Vec, a location encoder models the spatial relationships among spatial objects, and a feature encoder encodes the objects' semantic features. A scene encoder is developed to integrate the representations of spatial objects into a single scene embedding. Moreover, we propose a spatial scene augmentation method that samples additional points based on the shapes of polyline- and polygon-based spatial objects at all scales of spatial scenes. The whole model is trained in a self-supervised manner with a contrastive loss. We conduct experiments on real-world datasets for two spatial scene retrieval tasks: 1) retrieval purely based on points, e.g., points of interest (POIs), and 2) retrieval based on multi-structured spatial objects. Results show that Scene2Vec outperforms well-established encoding methods such as Space2Vec and multi-layer perceptrons, owing to the integrated multi-scale representations and the proposed spatial scene augmentation method. Moreover, detailed analysis shows that Scene2Vec can generate multi-scale representations of all three types of spatial objects.
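Since the paper's code is not available here, the following is a minimal illustrative sketch (in Python/NumPy) of two of the ideas the abstract names: a Space2Vec-style multi-scale sinusoidal location encoding and shape-based point sampling of the kind the spatial scene augmentation describes. All function names, the geometric scale schedule, and the default radii are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def multiscale_location_encoding(xy, num_scales=16, min_radius=1.0, max_radius=1e5):
    """Space2Vec-style encoding: project a 2D location onto sinusoids at
    geometrically spaced wavelengths (illustrative sketch, not the paper's code)."""
    xy = np.asarray(xy, dtype=float)  # shape (2,)
    # Geometric progression of wavelengths from min_radius to max_radius
    # (assumed schedule; the paper may use different scales).
    scales = min_radius * (max_radius / min_radius) ** (
        np.arange(num_scales) / max(num_scales - 1, 1))
    feats = []
    for lam in scales:
        feats.append(np.sin(xy / lam))
        feats.append(np.cos(xy / lam))
    return np.concatenate(feats)  # shape (4 * num_scales,)

def sample_points_on_polyline(vertices, num_samples=32):
    """Augmentation sketch: sample extra points along a polyline (or a
    polygon boundary) at equal arc-length spacing."""
    v = np.asarray(vertices, dtype=float)       # shape (n, 2)
    seg = np.diff(v, axis=0)                    # segment vectors, (n-1, 2)
    seg_len = np.linalg.norm(seg, axis=1)       # segment lengths, (n-1,)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    targets = np.linspace(0.0, cum[-1], num_samples)
    # Locate the segment containing each target arc length, then interpolate.
    idx = np.searchsorted(cum, targets, side="right").clip(1, len(v) - 1)
    t = (targets - cum[idx - 1]) / np.maximum(seg_len[idx - 1], 1e-12)
    return v[idx - 1] + t[:, None] * seg[idx - 1]

# Example: encode one POI location and augment a two-segment polyline.
print(multiscale_location_encoding([121.47, 31.23]).shape)   # (64,)
print(sample_points_on_polyline([[0, 0], [1, 0], [1, 1]], 5))
```

In such a design, the sampled points would be passed through the same location encoder as point objects, so that polylines and polygons contribute multi-scale positional signals to the scene embedding.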
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning