Marvin Mc Cutchan, Alexis J. Comber, Ioannis Giannopoulos, and Manuela Canestrini
Land use and land cover (LULC) classification traditionally relies on remote sensing imagery. These approaches model land cover classes based on their electromagnetic reflectance, aggregated into pixels. Such classification depends on the definition of land cover classes and only indirectly infers information on land use. In this paper, we introduce a new method that incorporates information on the types of geographic objects into LULC classification. We show how geospatial semantics (which describe the land use) can be fused with imagery (which describes the land cover) to improve LULC classification.
For the analysis, we use the following datasets covering the area of Austria: remote sensing imagery from Sentinel-2, CORINE land cover (level 2) data (with 100 m × 100 m resolution), and geospatial semantic vector data from the LinkedGeoData platform. LinkedGeoData provides OpenStreetMap data in a linked format using classes defined in the Web Ontology Language (OWL). From these semantic data, a Geospatial Configuration Matrix (GSCM) is computed, with a feature vector for each grid cell of the CORINE dataset. Sentinel-2 image information within each CORINE grid cell is then appended to the GSCM. We perform LULC classification using a multilayer perceptron model with CORINE data as ground truth. The results show that LULC classification can be performed using geospatial semantics only and that fusing geospatial semantics with remote sensing imagery increases classification accuracy.
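The fusion step described above, appending per-cell imagery features to the per-cell semantic feature vectors of the GSCM before classification, can be sketched as follows. This is a minimal illustration with synthetic data; the array sizes, the use of plain feature concatenation, and all variable names are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the feature-fusion step: each CORINE grid cell
# gets a semantic feature vector (one GSCM row, e.g. counts of OWL-typed
# geographic objects) plus aggregated Sentinel-2 band values; the two
# are concatenated into one fused feature matrix for the classifier.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 200      # number of CORINE grid cells (100 m x 100 m each)
n_semantic = 50    # assumed number of OWL object classes in the GSCM
n_bands = 12       # assumed number of aggregated Sentinel-2 band values

# GSCM: one feature vector per grid cell, here synthetic object counts.
gscm = rng.poisson(1.0, size=(n_cells, n_semantic)).astype(float)

# Imagery features: synthetic per-cell aggregates of Sentinel-2 bands.
imagery = rng.random((n_cells, n_bands))

# Fusion: append imagery features to the semantic features column-wise.
X = np.hstack([gscm, imagery])

# CORINE level-2 ground-truth labels (synthetic stand-ins here).
y = rng.integers(0, 15, size=n_cells)

print(X.shape)  # fused feature matrix: (200, 62)
```

The fused matrix `X` and labels `y` would then be passed to a multilayer perceptron classifier (e.g., any standard MLP implementation); training a model on semantic features alone corresponds to using only the `gscm` columns.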
Remote Sensing, 13(16), 3197, 2021-08-12.