GNGTS 2022 - Atti del 40° Convegno Nazionale

GNGTS 2022 Sessione 3.3

to reduce the number of data points and model parameters, and hence the dimensions of H_a and g. In this work, we propose a GB-MCMC FWI method combined with a compression of the data and model spaces through the discrete cosine transform (DCT), and we apply this strategy to a small portion of the 2D Marmousi Vp model.

Method. Gradient-based deterministic inversions aim to minimize a previously defined misfit function, which is usually a linear combination of a data-error term and a model-regularization term. For Gaussian-distributed noise and model parameters, the error function can be written as follows:

E(m) = 1/2 [G(m) - d]^T C_d^{-1} [G(m) - d] + 1/2 (m - m_prior)^T C_m^{-1} (m - m_prior),  (1)

where the vectors m and d identify the model parameters and the observed data; C_d and C_m are the data and prior model covariance matrices; m_prior is the prior model vector; and G is the forward modelling operator that maps the model into the corresponding data. The minimum of E(m) can be iteratively approached through a local quadratic approximation of the error function around the current model m_k:

E(m_k + Δm) ≈ E(m_k) + g^T Δm + 1/2 Δm^T H_a Δm,  (2)

where Δm = m - m_k. In particular, it results that

g = J^T C_d^{-1} Δd(m_k) + C_m^{-1} (m_k - m_prior)  (3)

and

H_a = J^T C_d^{-1} J + C_m^{-1},  (4)

where Δd(m_k) = G(m_k) - d is the data misfit vector and J denotes the Jacobian matrix expressing the partial derivatives of the data with respect to the model parameters.

A Bayesian inversion aims to estimate the full posterior distribution in the model space, given by:

p(m|d) = p(d|m) p(m) / p(d),  (5)

where p(m|d) is the posterior probability density (PPD), p(d|m) is the so-called data likelihood function that measures the fit between observed and predicted data, whereas p(m) and p(d) are the a priori distributions of the model and data parameters, respectively. For problems in which p(m|d) cannot be expressed in closed form, a Markov chain Monte Carlo (MCMC) algorithm can be used for a numerical assessment of the posterior model (Fig. 1).
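As a concrete illustration of the quantities in Eqs. (3) and (4), the following sketch assembles the gradient g and the approximate Hessian H_a for a toy linear forward operator and takes one Newton step on the local quadratic (2). The matrix sizes, the covariance choices, and the linear operator are illustrative assumptions, not the seismic modelling engine used in this work:

```python
import numpy as np

# Toy linear forward model G(m) = A m, so the Jacobian J is simply A
# (assumption: the real FWI operator is nonlinear seismic modelling).
rng = np.random.default_rng(1)
n_d, n_m = 20, 5
A = rng.normal(size=(n_d, n_m))          # stands in for the Jacobian J
m_true = rng.normal(size=n_m)
d_obs = A @ m_true                       # noise-free observed data

Cd_inv = np.eye(n_d)                     # inverse data covariance (assumed identity)
Cm_inv = 1e-3 * np.eye(n_m)              # weak prior term (assumed)
m_prior = np.zeros(n_m)

m_k = np.zeros(n_m)                      # current model
dd = A @ m_k - d_obs                     # data misfit vector Δd(m_k)
g = A.T @ Cd_inv @ dd + Cm_inv @ (m_k - m_prior)  # gradient, Eq. (3)
Ha = A.T @ Cd_inv @ A + Cm_inv                    # approximate Hessian, Eq. (4)
m_next = m_k - np.linalg.solve(Ha, g)    # minimiser of the quadratic in Eq. (2)

print(np.linalg.norm(A @ m_next - d_obs))  # residual after one step
```

For a linear operator a single step lands (up to the weak prior term) on the minimiser; the nonlinear seismic case repeats this update, relinearizing J at each new m_k.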
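The model-space reduction via the DCT can be sketched as follows: transform the velocity grid, retain only a low-order block of coefficients, and let the inversion operate on those few coefficients. The grid size, the retained-coefficient counts k1 and k2, and the synthetic Vp model below are illustrative assumptions, not the truncation actually used in this work:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical smooth-ish Vp grid (m/s): a constant background plus a
# vertical random walk, standing in for a portion of the Marmousi model.
rng = np.random.default_rng(0)
vp = 1500.0 + np.cumsum(rng.normal(0.0, 10.0, (60, 80)), axis=0)

coeffs = dctn(vp, norm="ortho")           # full 2D DCT of the model
k1, k2 = 10, 12                           # retained coefficients per axis (assumed)
compressed = np.zeros_like(coeffs)
compressed[:k1, :k2] = coeffs[:k1, :k2]   # keep only the low-frequency block
vp_rec = idctn(compressed, norm="ortho")  # back-transform to the spatial domain

# The inversion then works on k1*k2 coefficients instead of 60*80 cells.
rel_err = np.linalg.norm(vp - vp_rec) / np.linalg.norm(vp)
print(f"parameters: {vp.size} -> {k1 * k2}, relative error: {rel_err:.3f}")
```

The same truncation applied to the data volume shrinks the other dimension of the problem, which is what reduces the size of H_a and g.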
In this context, the probability of moving from the current state of the chain m_k to the next proposed state m_{k+1} is determined according to the Metropolis-Hastings (M-H) rule:

α = min{1, [p(m_{k+1}|d) q(m_k|m_{k+1})] / [p(m_k|d) q(m_{k+1}|m_k)]},  (6)

where q(·) is the proposal distribution that defines the new state m_{k+1} as a random deviate from a probability distribution q(m_{k+1}|m_k) conditioned only on the current state m_k (Hastings, 1970).
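A minimal sketch of the M-H rule (6) on a one-dimensional toy posterior: with the symmetric Gaussian proposal assumed here, the q-ratio in (6) cancels and acceptance reduces to the posterior ratio. The target density, step size, and chain length are illustrative assumptions, not the FWI setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(m):
    # Unnormalised log p(m|d): Gaussian with mean 3 and unit standard
    # deviation, standing in for the (intractable) FWI posterior.
    return -0.5 * (m - 3.0) ** 2

m_k, step, chain = 0.0, 1.0, []
for _ in range(20000):
    m_prop = m_k + step * rng.normal()     # draw from q(m'|m_k), symmetric
    log_alpha = log_post(m_prop) - log_post(m_k)
    if np.log(rng.uniform()) < log_alpha:  # accept with probability alpha
        m_k = m_prop
    chain.append(m_k)                      # rejected moves repeat the state

samples = np.array(chain[5000:])           # discard burn-in
print(samples.mean(), samples.std())       # should approach 3.0 and 1.0
```

The chain's histogram approximates the PPD, so posterior mean and uncertainty estimates fall out directly from the collected samples.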
