Standardized benchmark sets

A standardized data set for benchmarking deep generative geographical process models

A challenging question remains unanswered: given the multitude and diversity of real-world geographical processes and the corresponding sensor data, how can such models be benchmarked efficiently? For this, we advocate the need for standardized training data sets for which the ground truth is known (comparable to MNIST, CIFAR-100, or ImageNet for image processing tasks), and to which different degrees of stochastic deviation or noise can be added to provide additional challenges and levels of realism. We can base these on Conway’s popular Game of Life (GoL), a cellular automaton. Despite its seemingly simple underlying rules, it allows powerful abstractions directly relevant to geographical processes, such as the conceptualization of objects, states, processes and events; properties related to process dynamics such as initiation, cessation and constancy; or systemic attributes such as location, topology, spatial interaction, and emergence. By altering one or more of the underlying specifications, such as the spatial neighborhood definition, the Markov property, or the local transition rules, gradually more complex versions of the game can be designed, corresponding to more elaborate real-world geographical processes.
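As a rough illustration of how such a benchmark generator could look, the sketch below implements one synchronous Game-of-Life update on a toroidal grid and exposes the local transition rules (birth/survival counts) and a noise level as parameters, so that harder, stochastic variants can be derived. The function and parameter names (`gol_step`, `birth`, `survive`, `noise`) are our own illustrative choices and are not taken from the original work.

```python
# Minimal sketch of a Game-of-Life-style benchmark generator (assumed names, not from the post).
import numpy as np

def gol_step(grid, birth={3}, survive={2, 3}, noise=0.0, rng=None):
    """One synchronous update of a binary grid with toroidal boundaries.

    `birth`/`survive` expose the local transition rules so more complex
    benchmark variants can be designed; `noise` flips each output cell
    with the given probability to add stochastic deviations.
    """
    # Count live neighbors in the 8-cell Moore neighborhood (wrap-around edges).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (grid == 0) & np.isin(neighbors, list(birth))
    kept = (grid == 1) & np.isin(neighbors, list(survive))
    new_grid = (born | kept).astype(np.uint8)

    if noise > 0.0:  # optional stochastic corruption of the ground truth
        rng = rng or np.random.default_rng()
        flips = rng.random(new_grid.shape) < noise
        new_grid = np.where(flips, 1 - new_grid, new_grid)
    return new_grid

# Example: generate a short training sequence with known ground-truth dynamics.
rng = np.random.default_rng(0)
state = (rng.random((64, 64)) < 0.3).astype(np.uint8)
sequence = [state]
for _ in range(16):
    state = gol_step(state, noise=0.01, rng=rng)
    sequence.append(state)
```

Swapping the neighborhood offsets, the rule sets, or adding dependence on earlier states (relaxing the Markov property) are the kinds of controlled modifications that would yield progressively harder benchmark variants.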

Fig. 2: Game of Life

We have recently showcased the value of this approach for assessing models of geographical processes learned by Generative Adversarial Networks (GANs).
