
Any acquisition geometry G(x_S, x_R) is defined by an N×N binary matrix M(x_S, x_R) (i.e., a mask) as follows:

\[
M(x_S, x_R) = \begin{cases} 1 & \text{if } (x_S, x_R) \in G \\ 0 & \text{otherwise} \end{cases}
\]

Therefore, the dataset X_0 acquired with the actual acquisition geometry G can be modeled as the application of the mask M to the ideal survey X for each time slice, i.e., X_0 = M ⊙ X. We consider the case in which the actual number of sources is smaller than the number of sources in the ideal dense dataset.

To recover X starting from X_0, we can invert for X the equation X_0 = M ⊙ X, which is an ill-posed inverse problem. We solve this problem through the deep image prior paradigm (Ulyanov et al., 2018), where the CNN architecture itself can capture enough low-level features (i.e., our prior) from a single corrupted seismic data volume, mainly exploiting the self-similarities in the data. Through this paradigm, the CNN is modeled as a parametric function X̂ = F_θ(Z) mapping a noise realization Z to the data space. The weights θ are randomly initialized and updated by minimizing a loss function on the known traces, in this case the distance between the generated data X̂ and the original data X_0:

\[
\hat{\theta} = \operatorname*{argmin}_{\theta} \, \lVert M \odot F_\theta(Z) - X_0 \rVert_1
\]

While the loss is computed only on the actual acquisition geometry, the output of the CNN ranges over the whole ideal survey: the network learns how to transform a fixed noise volume Z into the decimated data from the decimated data itself; the missing traces are restored at the same time.
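In practice, this optimization reduces to a short loop. The following PyTorch sketch assumes a generic network `net` playing the role of F_θ; the function name and the iteration count are illustrative, while the masked L1 loss, the Adam optimizer with learning rate 0.001, and the 0.03 perturbation of Z reflect the settings described later in this abstract.

```python
import torch

def dip_interpolate(net, x0, mask, n_iter=3000, lr=0.001, sigma=0.03):
    # x0: decimated data X_0 (zeros on missing traces); mask: binary M.
    # `net` is any nn.Module mapping the noise tensor to the data space.
    z = torch.randn_like(x0)                          # fixed noise input Z
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        # noise-based regularization: perturb Z at each iteration
        x_hat = net(z + sigma * torch.randn_like(z))
        # L1 loss restricted to the known traces: ||M ⊙ F_θ(Z) − X_0||_1
        loss = torch.sum(torch.abs(mask * x_hat - x0))
        loss.backward()
        opt.step()
    with torch.no_grad():
        return net(z)                                 # restored ideal survey
```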
MULTI-RESOLUTION U-NET WITH SYMMETRIC OUTPUT

The U-Net architecture (Ronneberger et al., 2015), which was designed for medical image processing, has been successfully used as a deep prior for seismic shot gather interpolation (Kong et al., 2020; Liu et al., 2019). Here, we use a modified architecture based on the rationale that seismic data exhibit self-similarity at different scales and that the output of the network (i.e., the reconstructed time slice) should be symmetric.

The main points of the modified architecture are the following (a sketch of the first point is given after the list):
• in order to capture the features of seismic data at different scales, we use the multi-resolution block: a sequence of 3×3 convolutional layers extracts features at different scales, and their outputs are concatenated together; moreover, a residual connection with 1×1 convolution, which proved to be effective for interpolation problems, is added;
• the number of filters in the sequence is gradually increased, in order to efficiently manage memory;
• the direct skip-connection, a distinctive feature of the standard U-Net, is replaced by the path residual block, which includes 3×3 and 1×1 convolutions; its non-linear transformations on encoder features balance the semantic gap introduced by deeper decoder stages;
• downsampling is performed by 3×3 convolutions with stride 2×2, while upsampling is performed by bilinear interpolation instead of deconvolutions, which improved the results;
• all convolutional filters but the last one, which is responsible for the generation of the fully sampled time slices, are followed by a non-linear activation.
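As an illustration of the first bullet, one plausible wiring of the multi-resolution block is sketched below. The class name, channel counts and activation slope are assumptions, since the abstract does not give these hyper-parameters; only the chain of 3×3 convolutions with concatenated outputs, the gradually increasing filter counts, and the 1×1 residual connection come from the text.

```python
import torch
import torch.nn as nn

class MultiResBlock(nn.Module):
    # Chain of 3x3 convolutions whose outputs are concatenated, plus a
    # 1x1 residual connection. Channel counts and the LeakyReLU slope
    # are illustrative guesses, not values from the abstract.
    def __init__(self, in_ch, out_chs=(16, 32, 64)):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for oc in out_chs:                       # gradually increasing filters
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, oc, kernel_size=3, padding=1),
                nn.LeakyReLU(0.1)))
            ch = oc
        # 1x1 residual connection onto the concatenated features
        self.residual = nn.Conv2d(in_ch, sum(out_chs), kernel_size=1)

    def forward(self, x):
        feats, h = [], x
        for conv in self.convs:                  # deeper convs see wider context
            h = conv(h)
            feats.append(h)
        return torch.cat(feats, dim=1) + self.residual(x)
```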
The input tensor Z is a Gaussian noise realization whose size is the same as the desired output data. We used the Adam optimizer with a learning rate of 0.001 in each experiment. Noise-based regularization is applied by perturbing Z with additive Gaussian noise with zero mean and a standard deviation of 0.03. Moreover, we exploit source-receiver reciprocity for data augmentation by adding to the actual input the missing traces of the common receiver gathers corresponding to actual sources. Finally, to limit aliasing issues, we decrease the input regularity by randomly erasing a fixed rate of input samples. Therefore, the proposed workflow for dataset regularization can be summarized as follows (a sketch of these steps is given after the list):
1. The coarse dataset X_0 is augmented exploiting source-receiver reciprocity; the mask M is updated consequently.
2. An additional random mask R is created to reduce the data regularity, limiting the output aliasing.
3. The optimization scheme is fed with the randomized mask and the randomized augmented data.
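These three steps map directly to array operations. Below is a NumPy sketch for a single (source × receiver) time slice, assuming co-located sources and receivers so that reciprocity amounts to matrix symmetry (as in the numerical example that follows); the function name and the 0.3 erase rate are illustrative stand-ins for the "fixed rate" mentioned above.

```python
import numpy as np

def regularization_inputs(x0, mask, erase_rate=0.3, rng=None):
    # x0: one (n_src, n_rec) time slice with zeros on missing traces;
    # mask: the corresponding binary acquisition mask M.
    rng = rng or np.random.default_rng()
    # 1. reciprocity augmentation: copy known traces into their
    #    symmetric (receiver, source) positions; update the mask
    x_aug = np.where(mask.astype(bool), x0, x0.T)
    m_aug = np.maximum(mask, mask.T)
    # 2. random mask R erasing part of the known samples, which breaks
    #    the regular decimation pattern and limits output aliasing
    r = (rng.random(mask.shape) > erase_rate).astype(mask.dtype)
    # 3. feed the optimization with randomized mask and data
    return x_aug * r, m_aug * r
```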
NUMERICAL EXAMPLES

In this section, we show a synthetic example built upon the central portion of the Marmousi model, sampled every 20 m. The acquisition geometry is made of 256 sources placed every 20 m; in the very same positions we placed 256 receivers, whose signals are modeled through a 2D FD propagation of a Ricker wavelet centered at 15 Hz; the recording time is 9.6 s, sampled every 8 ms. Finally, the direct arrivals have been removed from the data. The coarse acquisition is obtained by regularly taking 1 every 4 shots, to simulate a more realistic source spacing of 80 m. We first restored the 3D data time slice by time slice; then, the 3D convolutional filters were introduced. Finally, we added the source-receiver symmetry constraint to the output of the network.

Time slice interpolation

Figure 1 shows the application of the 2D multi-res U-Net as deep prior to solve the regularization problem. In particular, Figure 1a displays the regularized time slice, Figure 1c displays a reconstructed common receiver gather, and Figure 1d the corresponding synthesized shot gather. Whilst the result can be considered quite acceptable, it is possible to notice some aliasing in the reconstruction and some leakage in the residuals shown in Figure 1b. The achieved SNR is 11.5 dB.

Employing 3D convolution kernels

The introduction of 3D convolutions in the multi-res U-Net allows exploiting the self-similarity and the correlations between different time slices, making the network more capable of capturing useful features in the data.
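For completeness, the decimation mask of this example and a standard way to compute the reported SNR values can be sketched as follows; the energy-ratio definition is an assumption, as the abstract does not state the exact formula behind its figures.

```python
import numpy as np

# Regular decimation used in the example: keep 1 shot every 4 of the
# 256 available, turning the 20 m source spacing into 80 m
mask = np.zeros((256, 256))
mask[::4, :] = 1.0

def snr_db(reference, reconstruction):
    # Energy-ratio SNR in dB (an assumed, standard definition)
    residual = reference - reconstruction
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(residual**2))
```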

RkJQdWJsaXNoZXIy MjQ4NzI=