
Fig. 1 - Schematic representation of the GB-MCMC inversion procedure. Green and blue rectangles refer to steps performed in the reduced and full spaces.

If m_{k+1} is accepted, m_k = m_{k+1}; otherwise, m_k is repeated in the chain and another state is generated as a random deviate from m_k. Usually, several Markov chains are used, so that multiple random walks are performed starting from different regions of the model space; this increases the exploration of the parameter space and prevents the sampling from being trapped in local maxima of the PPD. The ensemble of states sampled after the burn-in period (the first iterations, during which the algorithm moves towards a promising portion of the search space, are discarded from the computation of the PPD) is used to numerically compute the statistical properties (e.g. mean, mode, standard deviations) of the target posterior probability.

We can formulate the Bayesian inversion framework in terms of E(m), H and g, under Gaussian assumptions for the data, noise and model parameter distributions (Eqs. 7 and 8). From these quantities we obtain the approximation of the posterior around m_k (Eq. 9). After constructing this local Gaussian approximation of the posterior density, we can define a sampling method that uses the proposal density of Eq. 10. Each proposed model is accepted according to the Metropolis-Hastings rule explained above. The α and β² values are tunable parameters that determine how far to move along the negative gradient direction and how much the model is randomly perturbed.
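As an illustrative sketch only, a formulation consistent with the stated Gaussian assumptions can be written in the standard least-squares form below; the forward operator G, observed data d, data covariance C_d, prior mean m_prior and prior covariance C_m are introduced here merely to fix notation and are not reproduced from Eqs. 7-10.

```latex
% Illustrative sketch under Gaussian assumptions; not the exact Eqs. (7)-(10).
\begin{align}
p(\mathbf{m}\mid\mathbf{d}) &\propto \exp\!\big(-E(\mathbf{m})\big),\\
E(\mathbf{m}) &= \tfrac{1}{2}\big(G(\mathbf{m})-\mathbf{d}\big)^{T}\mathbf{C}_d^{-1}\big(G(\mathbf{m})-\mathbf{d}\big)
 + \tfrac{1}{2}\big(\mathbf{m}-\mathbf{m}_{\mathrm{prior}}\big)^{T}\mathbf{C}_m^{-1}\big(\mathbf{m}-\mathbf{m}_{\mathrm{prior}}\big),\\
\mathbf{g} &= \nabla E(\mathbf{m}), \qquad \mathbf{H} = \nabla\nabla E(\mathbf{m}),\\
E(\mathbf{m}) &\approx E(\mathbf{m}_k) + \mathbf{g}_k^{T}(\mathbf{m}-\mathbf{m}_k)
 + \tfrac{1}{2}(\mathbf{m}-\mathbf{m}_k)^{T}\mathbf{H}_k(\mathbf{m}-\mathbf{m}_k),\\
q(\mathbf{m}_{k+1}\mid\mathbf{m}_k) &= \mathcal{N}\!\big(\mathbf{m}_k - \alpha\,\mathbf{g}_k,\; \beta^{2}\boldsymbol{\Sigma}\big).
\end{align}
```

The quadratic expansion around m_k turns the posterior into a local Gaussian, and the last line is one possible gradient-informed proposal matching the role of α and β² described above; choosing Σ = H_k^{-1} would additionally precondition the step with the local Hessian.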
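The following is a minimal sketch of such a gradient-based Metropolis-Hastings sampler on a toy quadratic E(m). The error function, the α and β² values, the number of chains and the burn-in length are illustrative assumptions, not the settings used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian posterior: E(m) = 0.5 * (m - mu)^T H (m - mu)  (illustrative only)
mu = np.array([1.0, -2.0])
H = np.array([[2.0, 0.3], [0.3, 1.0]])          # Hessian of E (constant here)

def E(m):
    r = m - mu
    return 0.5 * r @ H @ r

def grad_E(m):
    return H @ (m - mu)

def log_q(m_to, m_from, alpha, beta2):
    """Log (up to a constant) of the Gaussian proposal q(m_to | m_from)."""
    mean = m_from - alpha * grad_E(m_from)       # step along the negative gradient
    diff = m_to - mean
    return -0.5 * diff @ diff / beta2            # isotropic covariance beta2 * I

def run_chain(m0, n_iter, alpha=0.1, beta2=0.05):
    chain = [m0]
    m_k = m0
    for _ in range(n_iter):
        # Propose: deterministic gradient step plus random perturbation
        m_prop = m_k - alpha * grad_E(m_k) + np.sqrt(beta2) * rng.standard_normal(m_k.size)
        # Metropolis-Hastings acceptance (asymmetric proposal, so q terms are needed)
        log_a = (E(m_k) - E(m_prop)
                 + log_q(m_k, m_prop, alpha, beta2)
                 - log_q(m_prop, m_k, alpha, beta2))
        if np.log(rng.uniform()) < log_a:
            m_k = m_prop                          # accept: m_k = m_{k+1}
        # otherwise m_k is repeated in the chain
        chain.append(m_k)
    return np.array(chain)

# Several chains started from different regions of the model space
starts = [rng.normal(0.0, 5.0, size=2) for _ in range(4)]
chains = [run_chain(m0, n_iter=5000) for m0 in starts]

# Discard the burn-in period before computing posterior statistics
burn_in = 1000
samples = np.vstack([c[burn_in:] for c in chains])
print("posterior mean ~", samples.mean(axis=0))
print("posterior std  ~", samples.std(axis=0))
```

Rejected proposals repeat the current state in the chain, and the statistics of the PPD (mean, standard deviations) are computed from the pooled post-burn-in samples of all chains, as described above.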
