GNGTS 2023 - Atti del 41° Convegno Nazionale
Session 3.3

Equivariant imaging for self-supervised regular seismic data interpolation

W. Xu, V. Lipari, P. Bestagini, S. Tubaro
Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Italy

SUMMARY

Due to complex field conditions and economic constraints, seismic data are usually subsampled in the spatial domain and must be interpolated to meet the requirements of subsequent seismic data processing steps, such as seismic imaging. To address this problem, we present a method for interpolating seismic data using a self-supervised deep learning framework. Specifically, a CNN is trained using only the observed subsampled seismic data, without any fully sampled ground truth. Furthermore, exploiting the equivariance of seismic data with respect to displacement and subsampling, a training strategy is adopted that enforces both measurement consistency and equivariance. Experiments on the interpolation of regularly subsampled field data demonstrate the effectiveness of the presented method.

INTRODUCTION

While ideal high-quality seismic data should be acquired with dense and regular spatial sampling of sources and receivers, in real scenarios, due to the limitations of various subjective and objective conditions, it is inevitable to acquire spatially insufficiently sampled seismic data, with negative consequences for subsequent processing and seismic imaging. Therefore, it is important to develop effective interpolation techniques.

In the last few years, many interpolation methods have been developed. The main traditional interpolation methods can be broadly classified into four families: wave-equation methods (Fomel, 2003), prediction filtering methods (Naghizadeh and Sacchi, 2008), sparse representation methods (Hennenfent and Herrmann, 2008; Yu et al., 2015), and rank reduction methods (Chen et al., 2019).

Recently, many deep learning-based interpolation methods have been proposed, showing excellent performance. Different network architectures with effective loss functions, such as a convolutional autoencoder with a mean squared error (MSE) loss (Mandelli et al., 2018) and a U-net with a texture loss (Fang et al., 2021), have been applied in a supervised fashion by collecting or generating suitable datasets. Besides the network architecture and the loss function, the performance of supervised deep learning techniques is largely determined by the training dataset, usually made of pairs of original and corrupted gathers. However, it is not easy to collect an appropriate dataset or simulate synthetic
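As summarized above, the self-supervised training enforces two terms: measurement consistency on the acquired traces and equivariance under a displacement of the reconstructed gather. The following is a minimal PyTorch-style sketch of such a loss, assuming the subsampling operator is a binary trace mask applied to a zero-filled gather and using a circular shift along the trace axis as a simple stand-in for the displacement transform; the names (ei_loss, net, mask, alpha) are illustrative and do not come from the authors' implementation.

```python
import torch

def ei_loss(net, y, mask, alpha=1.0):
    """Sketch of a self-supervised equivariant-imaging loss.

    net  : CNN mapping a zero-filled gather to an interpolated gather
    y    : observed regularly subsampled gather, zero-filled to full size,
           shape (batch, 1, time, traces)
    mask : binary mask of acquired traces, same shape as y
    alpha: weight of the equivariance term
    """
    # 1) Reconstruct the full gather from the observed (zero-filled) data.
    x1 = net(y)

    # 2) Measurement consistency: re-subsampling the reconstruction must
    #    reproduce the acquired traces.
    loss_mc = torch.mean((mask * x1 - y) ** 2)

    # 3) Equivariance: displace the reconstruction along the trace axis
    #    (circular shift used here as a simple group action), re-subsample,
    #    reconstruct again, and require the result to match the displaced gather.
    shift = int(torch.randint(1, x1.shape[-1], (1,)))
    x1_t = torch.roll(x1, shifts=shift, dims=-1)
    x2 = net(mask * x1_t)
    loss_eq = torch.mean((x2 - x1_t) ** 2)

    return loss_mc + alpha * loss_eq
```

In a training loop, y would be the zero-filled subsampled gather and mask its sampling pattern; minimizing both terms jointly lets the network learn the interpolation from the observed data alone, without any densely sampled target.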