GeoVeX: Geospatial Vectors with Hexagonal Convolutional Autoencoders

Published: 01 Feb 2023, Last Modified: 13 Feb 2023
Submitted to ICLR 2023
Readers: Everyone
Keywords: Representation learning, Geospatial Embedding, Convolutional Autoencoders on hexagonal grids, OpenStreetMap, H3 hexagons
TL;DR: We introduce a new geospatial representation model called GeoVeX to learn global vectors for all geographical locations across Earth's land cover (200+ million embeddings).
Abstract: We introduce a new geospatial representation model called GeoVeX to learn global vectors for all geographical locations across Earth's land cover (200+ million embeddings). GeoVeX is built on a novel model architecture named Hexagonal Convolutional Autoencoders (HCAE) combined with a Zero-Inflated Poisson (ZIP) reconstruction layer, applied to a grid of Uber's H3 hexagons, each described by a histogram of OpenStreetMap (OSM) geographical tag occurrences. GeoVeX is novel in three respects: 1) it produces the first geospatial vectors trained on worldwide open data, enabling wide adoption on any downstream task that may benefit from enriched geographical information, requiring only location coordinates; 2) it represents the first use of hexagonal convolutions within autoencoder architectures, to learn latent representations of a hexagonal grid; and 3) it introduces a spatial-contextual Poisson reconstruction loss function for autoencoder architectures suitable for training on sparse geographical count data. Experiments demonstrate that GeoVeX embeddings can improve upon state-of-the-art geospatial location representations on two different downstream tasks: price prediction in the travel industry and hyperlocal interpolation of climate data from weather stations.
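The abstract's Zero-Inflated Poisson reconstruction layer targets sparse count data: most OSM tag counts in a hexagon are zero, so the likelihood mixes a point mass at zero with a Poisson component. A minimal sketch of the corresponding negative log-likelihood is below; this is an illustration of the standard ZIP formulation, not the authors' implementation, and the function name `zip_nll` is our own.

```python
import math

def zip_nll(counts, lam, pi):
    """Mean negative log-likelihood of a Zero-Inflated Poisson.

    counts: observed tag counts (non-negative integers)
    lam:    Poisson rate parameter (> 0)
    pi:     zero-inflation probability (0 <= pi < 1)

    The ZIP model mixes a point mass at zero with a Poisson:
      P(0) = pi + (1 - pi) * exp(-lam)
      P(k) = (1 - pi) * lam**k * exp(-lam) / k!   for k > 0
    """
    nll = 0.0
    for k in counts:
        if k == 0:
            # Zero can come from the inflation component or the Poisson.
            nll -= math.log(pi + (1.0 - pi) * math.exp(-lam))
        else:
            # Positive counts can only come from the Poisson component;
            # lgamma(k + 1) = log(k!).
            nll -= (math.log(1.0 - pi) - lam
                    + k * math.log(lam) - math.lgamma(k + 1.0))
    return nll / len(counts)
```

On an all-zero histogram, a larger `pi` yields a lower loss, which is what lets the model reconstruct sparse hexagons without inflating the Poisson rate.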
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip