GNGTS 2021 - Atti del 39° Convegno Nazionale

GNGTS 2021 - Sessione 3.3

EFFICIENT AUTOMATIC EXTRACTION OF SEISMIC HORIZONS WITH DEEP LEARNING

G. Roncoroni¹, E. Forte¹, L. Bortolussi¹, M. Pipan¹
¹ Università di Trieste, Dipartimento di Matematica e Geoscienze

Introduction

We propose a procedure for the interpretation of horizons in seismic reflection data based on a Neural Network (NN) approach. The implemented algorithm is fully 1-D and does not require any input besides the seismic data, because the necessary thresholds are estimated automatically. An added benefit is that each prediction has an associated probability, which quantifies the reliability of the results. We tested the proposed procedure on 2-D and 3-D synthetic and field seismic datasets, and we also successfully applied it to Ground Penetrating Radar data, verifying its versatility and potential.

Methods

We implemented an algorithm based on a Long Short-Term Memory (LSTM) architecture, because we want to preserve the causality of the data and because long-term memory better fits the physics behind wave propagation (Hughes et al., 2019). The Bi-Directional LSTM is a strategy that improves the accuracy of NN classification (Guo et al., 2019); in the present case it helps the NN find the correct shape of the wavelet by working on both sides of it. The output is driven by a Dense layer with a SoftMax activation function that outputs a probability value equal to 1 at the maximum phase of a reflection. We used CuDNNLSTM, a fast approximation of the LSTM (Hochreiter and Schmidhuber, 1997) that runs on Nvidia CUDA (Chetlur et al., 2014). As optimizer we used AdaMax (Kingma and Ba, 2014), a variant of Adam based on the infinity norm, and as loss function we used Categorical Crossentropy (Mannor et al., 2005).
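The per-sample SoftMax output described above can be illustrated with a minimal NumPy sketch (not the authors' implementation; the logit values and the two-class layout are assumptions for illustration): each time sample of a trace gets a probability of being the maximum phase of a reflection, and the horizon pick is the sample where that probability peaks.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable SoftMax along the given axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical network logits for a 10-sample trace:
# column 0 = "no horizon", column 1 = "horizon" (maximum phase of a reflection).
logits = np.zeros((10, 2))
logits[4, 1] = 6.0  # the (imaginary) network is confident sample 4 is a horizon

prob = softmax(logits, axis=1)        # shape (10, 2); each row sums to 1
horizon_prob = prob[:, 1]             # per-sample probability of "horizon"
pick = int(np.argmax(horizon_prob))   # picked sample; horizon_prob[pick] is its reliability
```

The probability attached to each pick is what the text refers to as the automatic quantification of reliability.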
The training is fully performed on synthetic data obtained from a convolutional model-based scheme, while the subsequent horizon-extraction step can be applied to any type of field seismic dataset. The training dataset is often a crucial issue for the performance of the algorithm on field datasets: we train the NN on synthetic data to avoid any link to a specific field dataset and to have complete control over the NN performance through knowledge of the subsurface model that generates the training data.

The training phase was split in two steps (Kavzoglu, 2009), namely: an initial training on noiseless data and a subsequent training on a noisy dataset. This choice was due to the unbalanced output solutions: a direct training on the noisy traces would lead the NN into a huge local minimum where it outputs only 0s. After the training on the noiseless dataset, we found that the best way to simulate field data for the NN is to apply both a pure random noise n1 added to the convolved trace and a random noise n2 added to the reflection coefficient series r before the convolution with the wavelet w:

f(t) = (w(t) ∗ (n2(t) + r(t))) + n1(t)

In order to reduce the prediction uncertainty, we use the ensemble learning technique, which combines multiple learning algorithms to obtain better predictive inferences. In detail, we tested two solutions, namely: prediction with different NNs trained on datasets with the same characteristics, and prediction with the same NN on the single trace and on its time-inverted version. The two approaches produced similar results and we thus decided to use a single NN to reduce the required training effort. Through ensemble learning, we therefore get two different predictions: one on the single trace and one on its time-inverted version.
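The noise model f(t) = (w(t) ∗ (n2(t) + r(t))) + n1(t) used to simulate field data can be sketched in NumPy as follows; the Ricker wavelet, reflector positions, and noise amplitudes are illustrative assumptions, not the authors' actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def ricker(f=25.0, dt=0.002, length=0.128):
    """Ricker wavelet w(t) with peak frequency f (Hz) - an assumed source wavelet."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

n_samples = 500
r = np.zeros(n_samples)                     # reflection coefficient series r(t)
r[[100, 220, 350]] = [0.8, -0.5, 0.6]       # three hypothetical reflectors

w = ricker()
n2 = 0.02 * rng.standard_normal(n_samples)  # noise added BEFORE convolution
n1 = 0.05 * rng.standard_normal(n_samples)  # noise added AFTER convolution

# f(t) = (w(t) * (n2(t) + r(t))) + n1(t)
f_t = np.convolve(w, n2 + r, mode="same") + n1
```

Injecting noise both before and after the convolution mimics, respectively, geological variability in the reflectivity and recording noise on the trace.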

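The second ensembling strategy (the same NN applied to a trace and to its time-inverted copy) amounts to averaging the forward prediction with the time-reversed prediction of the reversed trace. A minimal sketch with a stand-in predictor (the toy `predict` function is a placeholder for the trained NN, not the authors' model):

```python
import numpy as np

def predict(trace):
    """Stand-in for the trained NN: returns a per-sample horizon probability.
    A toy amplitude-based proxy is used here purely for illustration."""
    p = np.abs(trace)
    return p / (p.max() + 1e-12)

def ensemble_predict(trace):
    """Average the forward prediction with the time-reversed prediction
    of the time-inverted trace (the second ensembling strategy in the text)."""
    forward = predict(trace)
    backward = predict(trace[::-1])[::-1]  # undo the reversal on the output
    return 0.5 * (forward + backward)

trace = np.zeros(100)
trace[40] = 1.0                  # single idealized reflection
prob = ensemble_predict(trace)
pick = int(np.argmax(prob))      # picked horizon sample
```

Averaging the two passes leaves a time-symmetric predictor unchanged, but for a real (asymmetric) NN it damps direction-dependent errors without training a second network.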