
If H represents parameterizations that differ only in the number of unknowns n, Bayes' theorem can be written as:

$p(\mathbf{m}, n \mid \mathbf{d}) = \frac{p(\mathbf{d} \mid \mathbf{m}, n)\, p(\mathbf{m} \mid n)\, p(n)}{p(\mathbf{d})}$ (3)

The left-hand side term of equation 3 is our target posterior probability density (PPD), which can be estimated through a Markov chain Monte Carlo (MCMC) sampling procedure. The main advantage of MCMC methods is that they correctly sample the target PPD even if the a-priori distribution is not defined in a closed form or if the inverse problem is non-linear. Roughly speaking, an MCMC algorithm performs a random walk in the model space by applying a simple two-step procedure: in the first step a candidate model is drawn from the prior distribution, while in the second step this model is accepted or rejected with a probability that depends on the prior information and on its fit with the observed data. The ensemble of accepted models is the final output of the algorithm and can be used to numerically compute the final PPD.

In a transdimensional MCMC, the acceptance probability α, that is the probability of moving from a model m to a model m′ at a given step of the chain, can be written as:

$\alpha(\mathbf{m}' \mid \mathbf{m}) = \min\left[1,\ \frac{p(\mathbf{m}')}{p(\mathbf{m})} \cdot \frac{p(\mathbf{d} \mid \mathbf{m}')}{p(\mathbf{d} \mid \mathbf{m})} \cdot \frac{q(\mathbf{m} \mid \mathbf{m}')}{q(\mathbf{m}' \mid \mathbf{m})} \cdot |\mathbf{J}|\right]$ (4)

where J is the Jacobian of the transformation from m to m′, needed to account for the scale changes involved when the transformation implies a jump between dimensions, whereas q(m′ | m) is the proposal distribution, from which the new model m′ is drawn as a random deviate conditional only on the current model m. The proposal ratio term in equation 4 reduces to one for symmetric proposals (for example, a Gaussian proposal) and for fixed-dimensional model spaces (i.e. when m and m′ have the same dimension). For fixed-dimensional model spaces, or for particular formulations of transdimensional algorithms, the determinant of the Jacobian is also equal to 1 and can be neglected. In our application we follow the rjMCMC recipe discussed in Bodin et al. (2012), in which the Jacobian term is equal to 1 and can be conveniently ignored.

In the following we set:

$p(n) = \begin{cases} 1/\Delta n & \text{if } n_{\min} \le n \le n_{\max} \\ 0 & \text{otherwise} \end{cases}$ (5)

where $\Delta n = n_{\max} - n_{\min} + 1$; that is, a uniform prior between $n_{\min}$ and $n_{\max}$ is considered to set p(n). The prior for the vertical locations of the n − 1 interfaces is given by:

$p(\mathbf{Z} \mid n) = \frac{(n-1)!\,(N-n+1)!}{N!}$ (6)

where Z represents the vertical locations of the layer interfaces and N is the number of possible interface locations (in our applications, the time samples). Since we consider uniform distributions for the subsurface parameters, we can write:

$p(e) = \begin{cases} 1/\Delta e & \text{if } e_{\min} \le e \le e_{\max} \\ 0 & \text{otherwise} \end{cases}$ (7)

where e is a given subsurface property (e.g. Ip, Vp, Vs, or density), and $\Delta e = e_{\max} - e_{\min}$. In this context the full prior probability can be written as:

$p(\mathbf{m}, n) = p(n)\, p(\mathbf{Z} \mid n) \prod_{i=1}^{n} p(e_i)$ (8)

where the product runs over the property values of the n layers.
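To make the accept/reject step concrete, the following Python sketch evaluates the acceptance probability of equation 4 in log space. It is an illustration, not the authors' code: the function names (log_acceptance, accept_move) and the convention of passing precomputed log-densities are our own, and the default arguments reflect the setting adopted in the text (symmetric proposals and the Bodin et al., 2012, recipe with |J| = 1).

```python
import numpy as np

def log_acceptance(log_prior_new, log_prior_cur,
                   log_like_new, log_like_cur,
                   log_q_reverse=0.0, log_q_forward=0.0,
                   log_det_jacobian=0.0):
    """Log of alpha in equation 4: prior ratio x likelihood ratio x
    proposal ratio x |J|, computed as a sum of log terms and capped at 1.

    The defaults (0.0) correspond to a symmetric proposal and to the
    rjMCMC recipe of Bodin et al. (2012), where |J| = 1.
    """
    return min(0.0,
               (log_prior_new - log_prior_cur)
               + (log_like_new - log_like_cur)
               + (log_q_reverse - log_q_forward)
               + log_det_jacobian)

def accept_move(rng, **log_terms):
    """Second step of the random walk: accept the candidate model
    with probability alpha (Metropolis test in log space)."""
    return np.log(rng.random()) < log_acceptance(**log_terms)
```

Within the two-step procedure described above, a sampler would draw a candidate m′, evaluate its prior and likelihood, and call accept_move once per iteration; when the move is rejected, the chain simply stays at the current model m.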

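The priors in equations 5–7, and their product in equation 8, can likewise be evaluated in log form. The sketch below is a minimal illustration under the assumptions stated in the text (uniform bounds shared by all layers, n − 1 interfaces placed on N candidate time samples); the function name log_full_prior and its argument names are hypothetical.

```python
from math import lgamma, log

def log_full_prior(n, e_values, n_min, n_max, N, e_min, e_max):
    """Log of p(n) * p(Z | n) * prod_i p(e_i), i.e. equations 5-8.

    n        : number of layers (hence n - 1 interfaces)
    e_values : subsurface property value (e.g. Ip) of each layer
    N        : number of possible interface locations (time samples)
    """
    # Equation 5: uniform prior on the number of layers.
    if not (n_min <= n <= n_max):
        return float("-inf")
    lp = -log(n_max - n_min + 1)  # log(1 / Delta_n)

    # Equation 6: n - 1 interfaces chosen among N candidate locations,
    # p(Z | n) = (n - 1)! (N - n + 1)! / N!, via log-gamma for stability.
    k = n - 1
    lp += lgamma(k + 1) + lgamma(N - k + 1) - lgamma(N + 1)

    # Equation 7: uniform prior on each layer property.
    for e in e_values:
        if not (e_min <= e <= e_max):
            return float("-inf")
        lp -= log(e_max - e_min)  # log(1 / Delta_e)

    return lp
```

Because the acceptance probability of equation 4 involves only prior ratios, differences of these log values are all a sampler needs; working with lgamma avoids overflowing the factorials in equation 6 when N is large.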