\maketitle
This work investigates the simulation of spatial random fields with deep generative adversarial networks (GANs). GANs, well known for their ability to reproduce images, are parametrized functions whose parameters are found by minimizing a statistical distance such as the Wasserstein-1 distance. This leads to the Wasserstein GANs (WGANs), which are the ones used here.
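As a minimal illustration of the statistical distance involved (not the WGAN training procedure itself, which estimates this distance with a critic network), the Wasserstein-1 distance between two equal-size one-dimensional empirical distributions reduces to the mean absolute difference of the sorted samples:

```python
import numpy as np

def wasserstein1_1d(x, y):
    # For two equal-size empirical samples in 1-D, the Wasserstein-1
    # distance is the mean absolute difference of the sorted samples.
    return np.mean(np.abs(np.sort(x) - np.sort(y)))
```

In the WGAN setting this quantity is not computed in closed form; it is approximated by a trained critic, and the generator's parameters are adjusted to minimize it.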
One suitable GAN architecture is the fully convolutional architecture. Convolutional layers are spatially stationary functions, so a fully convolutional architecture allows the GAN to simulate stationary spatial phenomena. To be effectively stationary, the convolutional layers are required to be zero-padded.
To generate a spatial phenomenon of arbitrary size, an efficient approach is patch-by-patch generation: neighboring patches of a latent grid are generated independently with the GAN and then merged to constitute the full spatial phenomenon.
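The patch-by-patch idea can be sketched with a toy fully convolutional "generator" (here a single valid convolution, an illustrative assumption rather than the actual networks): because each output sample depends only on a local latent neighborhood, latent patches that overlap by the receptive field minus one produce output patches that tile the full simulation exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, kernel):
    # Toy fully convolutional "generator": one valid convolution, so each
    # output sample depends only on 5 neighboring latent samples.
    return np.convolve(z, kernel, mode="valid")

kernel = rng.normal(size=5)      # receptive field of 5 latent samples
z = rng.normal(size=104)         # one large latent grid
full = generator(z, kernel)      # full simulation, length 100

# Patch-by-patch: latent patches overlap by (receptive field - 1) samples;
# the independently generated output patches tile the full simulation.
patches = [generator(z[i:i + 29], kernel) for i in range(0, 100, 25)]
merged = np.concatenate(patches)
```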
We propose to train these GANs on discretized random fields.
They are trained on three datasets, in three experiments.
In a first experiment, they are trained on a dataset of Gaussian random fields (GRFs) sharing a single covariance function. This experiment validates the ability of GANs to reproduce GRFs.
In a second experiment, a dataset of GRFs with four different covariance functions is defined. To give the GAN long-range consistency, i.e., to ensure that a single covariance function describes the spatial properties of an entire simulation, a global noise is added to the latent grid. This global noise has the same value at every location. This experiment highlights the ability of a single GAN to simulate different spatial properties depending on the context.
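The global-noise construction can be sketched as follows (shapes and channel layout are illustrative assumptions): a spatially constant channel is appended to the local latent noise, so every location sees the same global value and the generator can select one covariance "context" for the whole simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

h = w = 16                          # latent grid size (illustrative)
local = rng.normal(size=(1, h, w))  # local latent noise, varies per location
g = rng.normal()                    # global noise: one scalar per simulation
global_channel = np.full((1, h, w), g)  # same value at every location

# Latent input with long-range consistency: the constant channel lets the
# generator apply a single covariance function across the simulation.
latent = np.concatenate([local, global_channel], axis=0)  # shape (2, h, w)
```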
Finally, in a last experiment, a dataset of non-Gaussian random fields is used to train a GAN. These random fields are the derivative of a scalar field that represents a structural geological model. This last experiment demonstrates the ability of the GAN to reproduce spatial phenomena that are known only through examples, that are not Gaussian, and whose spatial properties vary depending on the context.
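A toy version of this construction (the actual geological scalar field is not reproduced here; the field below is a hypothetical stand-in) takes finite-difference derivatives of a scalar field on a regular grid:

```python
import numpy as np

# Illustrative scalar field on a 64 x 64 grid, standing in for a
# structural geological model (assumed, not the real dataset).
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
scalar = np.sin(3 * x) * np.cos(2 * y)

# np.gradient returns derivatives along axis 0 (rows, y) then axis 1
# (columns, x); these derivative fields are the non-Gaussian training data.
dz_dy, dz_dx = np.gradient(scalar)
```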
Simulations produced by GANs are unconditional. To obtain conditional simulations, the GAN distribution is used as a prior distribution in a Bayesian framework. The data are integrated through a likelihood that accounts for their uncertainties. The prior and the likelihood together define the posterior distribution. To sample the posterior distribution, the Metropolis-adjusted Langevin algorithm (MALA), a Markov chain Monte Carlo (MCMC) method, is used.
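A generic MALA step can be sketched as below; the target here is a standard normal, a deliberately simple stand-in for the GAN-prior-times-likelihood posterior of the text (function names and step size are illustrative assumptions). Each iteration proposes a Langevin step driven by the gradient of the log-posterior, then applies a Metropolis-Hastings correction.

```python
import numpy as np

def mala(grad_log_post, log_post, x0, tau, n_steps, rng):
    # Metropolis-adjusted Langevin algorithm: a Langevin proposal plus a
    # Metropolis-Hastings accept/reject, sampling the posterior.
    def log_q(a, b):
        # log density (up to a constant) of proposing a from b
        return -np.sum((a - b - tau * grad_log_post(b)) ** 2) / (4 * tau)

    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.normal(size=x.shape)
        prop = x + tau * grad_log_post(x) + np.sqrt(2 * tau) * noise
        log_alpha = (log_post(prop) + log_q(x, prop)
                     - log_post(x) - log_q(prop, x))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# Toy posterior: standard normal (stand-in for the actual posterior)
rng = np.random.default_rng(2)
samples = mala(lambda x: -x, lambda x: -0.5 * np.sum(x ** 2),
               np.zeros(1), tau=0.5, n_steps=5000, rng=rng)
```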