Abstract: Pre-trained Foundation Models (PFMs) have ushered in a paradigm shift in AI, owing to their ability to learn general-purpose representations that can be readily employed in downstream tasks. While PFMs have been successfully adopted in fields such as NLP and Computer Vision, their ability to handle geospatial data remains limited. This can be attributed to the intrinsic heterogeneity of such data, which encompasses different types, including points, segments, and regions, as well as multiple information modalities. The proliferation of Volunteered Geographic Information initiatives, such as OpenStreetMap (OSM), presents a promising opportunity to bridge this gap. In this paper, we present CityFM, a self-supervised framework for training a foundation model within a selected geographical area. CityFM relies solely on open data from OSM and produces multimodal representations that incorporate spatial, visual, and textual information. We analyse the entity representations generated by our foundation models from a qualitative perspective, and conduct experiments on road-, building-, and region-level downstream tasks. In all the experiments, CityFM achieves performance superior to, or on par with, application-specific algorithms.
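The abstract mentions that CityFM fuses spatial, visual, and textual information into a single multimodal entity representation. As a purely illustrative sketch (not the paper's actual architecture, and all dimensions and module names below are assumptions), one simple way to fuse pre-computed per-modality embeddings of an OSM entity is to project each modality to a shared space and pass the concatenation through a small MLP:

```python
# Hypothetical sketch only: one possible way to fuse spatial, visual, and
# textual embeddings of an OSM entity; not CityFM's published architecture.
import torch
import torch.nn as nn


class MultimodalEntityEncoder(nn.Module):
    """Fuses per-modality embeddings into one entity vector (illustrative)."""

    def __init__(self, spatial_dim=64, visual_dim=512, text_dim=768, out_dim=256):
        super().__init__()
        # Project each modality into a common dimensionality before fusion.
        self.spatial_proj = nn.Linear(spatial_dim, out_dim)
        self.visual_proj = nn.Linear(visual_dim, out_dim)
        self.text_proj = nn.Linear(text_dim, out_dim)
        # Small MLP over the concatenated projections produces the final vector.
        self.fusion = nn.Sequential(
            nn.Linear(3 * out_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, spatial_emb, visual_emb, text_emb):
        fused = torch.cat(
            [
                self.spatial_proj(spatial_emb),
                self.visual_proj(visual_emb),
                self.text_proj(text_emb),
            ],
            dim=-1,
        )
        return self.fusion(fused)


# Toy usage: a batch of 4 entities with randomly generated modality embeddings.
encoder = MultimodalEntityEncoder()
entity_repr = encoder(torch.randn(4, 64), torch.randn(4, 512), torch.randn(4, 768))
print(entity_repr.shape)  # torch.Size([4, 256])
```

Such representations could then be fed to downstream road-, building-, or region-level predictors, which is the kind of evaluation the abstract describes.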