GNGTS 2019 - Atti del 38° Convegno Nazionale

An MCMC recipe calibrated for the problem at hand is crucial to ensure the convergence of the probabilistic sampling toward a stable posterior distribution.

References
Sambridge, M., and Mosegaard, K. (2002). Monte Carlo methods in geophysical inverse problems. Reviews of Geophysics, 40(3), 3-1.
Vrugt, J. A. (2016). Markov chain Monte Carlo simulation using the DREAM software package: Theory, concepts, and MATLAB implementation. Environmental Modelling & Software, 75, 273-316.

SYNTHETIC SEISMIC DATA GENERATION USING DEEP LEARNING
G. Roncoroni, L. Bortolussi, M. Pipan
Università di Trieste, Dipartimento di Matematica e Geoscienze, Trieste, Italy

We study the applicability of deep learning [3] methods to the solution of complex physical problems, in particular to the generation of 1-D seismic data in acoustic media. The main task is to compute synthetic seismograms without simulating wave propagation, that is, without directly solving the wave equation. We do this with deep learning techniques, with the purpose of learning to predict a multi-offset seismogram from a 1-D velocity model.

In the 1-D case the wave equation is a partial differential equation, non-linear with respect to velocity, that describes the time evolution of the displacement field. The classical solution, obtained with Finite Differences (FD) [2], is computationally expensive, sensitive to grid spacing and affected by boundary reflections.

The methodology proposed in this work starts from the idea of treating seismograms as time series (as in speech generation) rather than as images, i.e. as matrices of pixels. This allows a more accurate reproduction of both signals (primaries) and interferences (multiples). After extensive tests of geometries and approaches, we selected Recurrent Neural Networks (RNN) [4]. Given the lack of literature or previous examples, we addressed the problem by starting from the simplest possible case and by extending the implementation to cases of increasing complexity. The final custom net consists of two Long Short Term Memory (LSTM) [1] layers, a convolutional layer and a final LSTM layer. All of the LSTM layers have a number of neurons equal to the number of output offsets. The convolutional layer has a kernel of size [64,4] and 4 filters, which gives the net the flexibility to read values in the window [t - 4 : t] and to store them in 4 different filters.

We generate a synthetic dataset of 10,000 samples through FD simulation and split it into a training and a validation dataset (80% and 20%, respectively). The net is then trained to adjust its weights on the training dataset; after each training epoch, the loss between the true output and the prediction is computed for both the training and the validation dataset.

After training, we test the net on velocity profiles not seen during training. Fig. 1 shows a prediction on a blind velocity profile with all 257 offsets. Primary reflections are highlighted over both the real and the predicted data, so that non-primary reflections can be easily identified. The primary reflections are well predicted, and the net also predicts a multiple reflection at 600 ms.
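As an illustration of the FD modelling used to generate the dataset, the following is a minimal sketch of a second-order finite-difference solver for the 1-D acoustic wave equation; the grid spacing, time step, Ricker source and rigid boundaries are illustrative assumptions, not the authors' settings.

    import numpy as np

    def fd_trace(velocity, dz=1.0, dt=2e-4, nt=3000):
        """Sketch of a second-order FD solution of u_tt = c(z)^2 u_zz,
        recording a single trace at the surface."""
        nz = velocity.size
        t = np.arange(nt) * dt
        f0 = 25.0                             # assumed dominant frequency (Hz)
        a = (np.pi * f0 * (t - 1.0 / f0)) ** 2
        src = (1.0 - 2.0 * a) * np.exp(-a)    # Ricker wavelet
        r2 = (velocity * dt / dz) ** 2        # squared Courant numbers (CFL <= 1)
        u_prev = np.zeros(nz)
        u_curr = np.zeros(nz)
        trace = np.zeros(nt)
        for it in range(nt):
            u_next = (2.0 * u_curr - u_prev
                      + r2 * (np.roll(u_curr, -1) - 2.0 * u_curr
                              + np.roll(u_curr, 1)))
            u_next[0] = u_next[-1] = 0.0      # rigid ends: the source of the
                                              # boundary reflections noted above
            u_next[1] += src[it] * dt ** 2    # inject the source near the surface
            u_prev, u_curr = u_curr, u_next
            trace[it] = u_curr[1]             # record just below the surface
        return trace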
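A minimal Keras-style sketch of an architecture of the kind described above (two LSTM layers, a convolutional layer, a final LSTM layer, each LSTM with as many units as output offsets). The causal Conv1D below only approximates the [64,4] kernel of the text, and the trace length and input layout are assumptions.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    N_OFFSETS = 257   # number of output offsets (from the text)
    N_STEPS = 1000    # samples per trace: assumed, not stated in the text

    def build_net(n_steps=N_STEPS, n_offsets=N_OFFSETS):
        # Velocity model assumed resampled to the same length as the traces.
        model = models.Sequential([
            layers.Input(shape=(n_steps, 1)),
            layers.LSTM(n_offsets, return_sequences=True),
            layers.LSTM(n_offsets, return_sequences=True),
            layers.Conv1D(filters=4, kernel_size=4, padding="causal"),
            layers.LSTM(n_offsets, return_sequences=True),  # one sample per
                                                            # offset per step
        ])
        model.compile(optimizer="adam", loss="mse")
        return model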
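The training procedure (80/20 split, per-epoch training and validation loss) can be expressed with a standard Keras fit call; the file names, epoch count and batch size below are hypothetical.

    import numpy as np

    X = np.load("velocity_models.npy")   # (10000, N_STEPS, 1), hypothetical file
    Y = np.load("fd_seismograms.npy")    # (10000, N_STEPS, N_OFFSETS)

    model = build_net()
    history = model.fit(X, Y,
                        epochs=100,             # assumed
                        batch_size=32,          # assumed
                        validation_split=0.2)   # 80% training / 20% validation
    # history.history["loss"] and ["val_loss"] hold the per-epoch training
    # and validation losses mentioned in the text.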
