3. Rank reduction methods recast interpolation (and denoising) problems as rank reduction or matrix/tensor completion (Trickett and Burroughs, 2009; Adamo et al., 2015): due to their repetitive features, seismic data are low rank; conversely, noise and missing traces increase the rank.
4. Transform-based methods exploit a transform domain where the clean signal can be represented by only a few non-zero coefficients, so that clean data and the artifacts due to noise and missing traces are more easily separable.

Referring to this last family, several fixed-basis sparsity-promoting transforms have been used: the Fourier transform, the Hilbert-Huang transform and different curvelet-like transforms. These methods implicitly describe data as a linear combination of atoms taken from a fixed dictionary. However, fixed dictionaries define only a subset of transform methods. Alternatively, sparse dictionaries can be learned directly from the data and usually better match their complex characteristics; for instance, denoising results outperforming standard transforms are reported in Zhu et al. (2015).

Recently, the outstanding advances brought by deep learning and Convolutional Neural Networks (CNNs) have greatly impacted the signal processing community, and innovative strategies based on CNNs have also been explored by the geophysical community. Here we leverage a CNN both to interpolate and to denoise irregular shot gathers: in particular, we exploit a properly trained U-net (Ronneberger et al., 2015). Examples on synthetic and field data show promising performance on denoising, interpolation, and joint denoising/interpolation problems, outperforming some recent techniques taken as references.

Problem statement and background on autoencoders. We define Ī as the corrupted version of a clean and densely sampled shot gather I. Our goal is to estimate, from Ī, a dense gather Î as similar as possible to the original gather I. We use a Convolutional Autoencoder (CA), motivated by its capability of learning compact representations of the data. In particular, we exploit a U-net. Originally designed for image segmentation problems, the U-net, named after its shape, shares a large part of its architecture with a standard CA, but it adds direct connections through which the representations obtained at the different encoder layers are concatenated to the corresponding decoder layers. By analogy with transform-based methods, we can think of the trained U-net as a tool implicitly providing a multi-scale/multi-resolution compact representation, able to describe the complex features of clean seismic data, in which noise and missing data are not modelled.

We work by dividing each shot gather into patches of size N×N. The overall architecture scales with the patch dimension and can be described as follows (a code sketch is given after the list):
1. The encoder part contains a number of layers where a 2D convolution, followed by Batch Normalization (BN) and a leaky ReLU, is performed. These stages lead to the hidden representation. The number of filters increases as we go deeper into the network.
2. The decoder part contains the same number of layers as the encoder, where a ReLU, a 2D transposed convolution and a 2D cropping, possibly followed by BN and Dropout, are performed. The number of filters is gradually reduced as we go up the decoding path. The output of the last stage has the same size as the input patch.
3. In each decoding stage, the result of the corresponding encoding stage is concatenated.
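The sketch below illustrates one possible realization of such a patch-based U-net with the Keras functional API. The depth (four stages), the filter counts (32 to 256), the kernel size, the dropout rate and the patch size N = 128 are illustrative assumptions, not values reported in the paper; with a power-of-two patch size and "same" padding, the 2D cropping step described above becomes unnecessary.

```python
# Minimal U-net sketch, assuming patch size N = 128 and four encoder/decoder
# stages; all hyper-parameters below are placeholders, not the authors' values.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_unet(patch_size=128, base_filters=32, depth=4, dropout=0.5):
    inputs = layers.Input(shape=(patch_size, patch_size, 1))

    # Encoder: 2D convolution -> Batch Normalization -> leaky ReLU,
    # with the number of filters doubling at each stage.
    skips = []
    x = inputs
    for d in range(depth):
        x = layers.Conv2D(base_filters * 2**d, kernel_size=3,
                          strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(0.2)(x)
        skips.append(x)                      # kept for the skip connections

    # Decoder: ReLU -> 2D transposed convolution (+ BN, Dropout),
    # with the number of filters halving at each stage.
    for d in reversed(range(depth)):
        x = layers.ReLU()(x)
        x = layers.Conv2DTranspose(base_filters * 2**d, kernel_size=3,
                                   strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        if d >= depth - 2:                   # dropout only in the deepest stages (assumption)
            x = layers.Dropout(dropout)(x)
        if d > 0:                            # concatenate the matching encoder output
            x = layers.Concatenate()([x, skips[d - 1]])

    # Final projection back to a single-channel patch of the input size.
    outputs = layers.Conv2D(1, kernel_size=3, padding="same")(x)
    return Model(inputs, outputs, name="unet_interp_denoise")

model = build_unet()
model.compile(optimizer="adam", loss="mse")  # plain MSE, as used for denoising / joint tasks
```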
As in any supervised learning problem, we need training and validation datasets, each composed of pairs of corrupted/uncorrupted gathers (Ī, I). For denoising only, and for joint denoising/interpolation, the model weights are estimated by minimizing a loss defined as the mean squared error (MSE) over all patches belonging to gathers in the training set. For the interpolation-only task, since only the missing samples need to be reconstructed by the network, the MSE is evaluated only on the reconstructed samples (a sketch of such a masked loss is given below). After training, any new corrupted gather Ī is split into a set of patches, each patch is processed by the U-net, and finally all the estimated patches are reassembled into the full gather.
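One way such a masked MSE could be implemented is sketched below (TensorFlow assumed); the names masked_mse and train_step, the mask convention (1 = missing trace, 0 = recorded trace) and the learning rate are placeholders for illustration, not the authors' code.

```python
import tensorflow as tf

def masked_mse(y_true, y_pred, mask):
    # MSE restricted to the samples flagged by `mask` (1 = missing, 0 = recorded),
    # so that only reconstructed samples contribute to the interpolation-only loss.
    diff = mask * (y_true - y_pred) ** 2
    return tf.reduce_sum(diff) / (tf.reduce_sum(mask) + 1e-8)

optimizer = tf.keras.optimizers.Adam(1e-4)   # learning rate is an assumption

@tf.function
def train_step(model, patches, targets, masks):
    # One hand-written training step over a batch of corrupted patches.
    with tf.GradientTape() as tape:
        preds = model(patches, training=True)
        loss = masked_mse(targets, preds, masks)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```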
