GNGTS 2019 - Atti del 38° Convegno Nazionale

space. The strategy based on interpolation consists of selecting a series of reference snapshots by subsampling the wavefield in the temporal dimension. We leverage a U-net CNN to interpolate the reference snapshots in time and estimate the decimated ones. The preliminary results obtained on 2D propagation suggest that CNNs are worth exploring further for seismic wavefield compression in FWI and RTM.

Wavefield compression via convolutional neural networks.
In adjoint-state methods, the gradient of the objective function is computed by first performing forward propagation and storing the propagating wavefield. The resulting synthetic data are combined with the acquired data to build an adjoint source, which is back-propagated. At each time step, the back-propagating snapshot is cross-correlated with the corresponding saved snapshot of the forward wavefield. Ideally, none of these operations should require access to mass storage: the compressed forward wavefield and the decompressed snapshots should fit into memory. This means that a compression scheme suitable for this task should allow decompression of a single snapshot, or at least of a limited group of snapshots. In particular, let u(x,y,t) be the wavefield to compress, where t indicates the temporal dimension and x and y are the spatial coordinates; we can describe the wavefield as a set of snapshots {u_t}, with t ∈ {0, ..., N}. The strategies studied in this paper are snapshot compression via a convolutional autoencoder, and snapshot decimation and interpolation via a CNN.

Snapshot compression via convolutional autoencoders.
Autoencoders are neural networks that attempt to copy their inputs to their outputs through learning (Goodfellow et al., 2016). They can be logically split into the following components:
1. Encoder: a CNN that maps the input u_t into the hidden representation θ_t = f(u_t). This is the component that performs data compression.
2.
Bottleneck: the central layer of the autoencoder, which contains the hidden representation, i.e. the encoded version θ_t of the input.
3. Decoder: a CNN that reconstructs the input from the hidden representation, i.e. û_t = g(θ_t). This component performs the decoding.
Both the encoder and the decoder are composed of a series of linear filters (i.e., convolutions), generally followed by non-linear activation functions (e.g., sigmoid, hyperbolic tangent) and, optionally, other linear and non-linear layers. We divide each snapshot into patches: this allows compression of snapshots of any dimension and enlarges the training dataset. In particular, in our preliminary experiments we set a patch size of 128×128 samples for the autoencoder studied in this work. The first and last layers consist of 2D convolutions with filter size 6×6 and stride 1×1. The remaining encoding and decoding layers consist of four 2D convolution and four 2D deconvolution layers with decreasing filter size and stride 2×2. To train the autoencoder, we use as loss function a weighted average between the Mean Squared Error (MSE), which forces similarity between input and output, and the L1 norm of the encoded snapshot, which increases the sparsity of the hidden representation:

L(u_t) = α MSE(u_t, û_t) + (1 − α) ||θ_t||_1,

where α weights reconstruction fidelity against sparsity. By using this architecture, we achieve a compression ratio of 32 considering only the dimensionality reduction from input to hidden representation. Moreover, we introduce a threshold Γ to cut off small hidden-representation values. Finally, we further increase the compression ratio via the lossless Lempel–Ziv–Markov chain algorithm (LZMA).

Snapshot interpolation via U-net.
The U-net is similar to a standard convolutional autoencoder, but adds shortcuts between each layer of the encoder and the corresponding layer of the decoder. This architecture, originally designed for image segmentation, also performs well for image denoising and interpolation.
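The thresholding and entropy-coding steps described above can be sketched as follows. The 8×8×8 hidden-code shape (a 32× reduction from a 128×128 patch, consistent with the stated ratio), the function names, and the use of Python's standard lzma module are illustrative assumptions, not the authors' implementation; the text only specifies the 32× dimensionality reduction, the threshold Γ, and LZMA coding.

```python
import lzma

import numpy as np


def compress_hidden(theta, gamma):
    # Zero out hidden-code values below the threshold gamma, then apply
    # lossless LZMA coding to the resulting (sparse) byte stream.
    sparse = np.where(np.abs(theta) < gamma, 0.0, theta).astype(np.float32)
    return lzma.compress(sparse.tobytes())


def decompress_hidden(blob, shape):
    # Recover the thresholded hidden code for a single snapshot patch.
    return np.frombuffer(lzma.decompress(blob), dtype=np.float32).reshape(shape)


# Example: a hidden code with mostly small values and a few significant ones.
rng = np.random.default_rng(0)
theta = rng.normal(scale=0.01, size=(8, 8, 8)).astype(np.float32)
theta[0, 0, :4] = 1.0  # a few coefficients that survive the threshold
blob = compress_hidden(theta, gamma=0.05)
restored = decompress_hidden(blob, theta.shape)
ratio = theta.nbytes / len(blob)  # additional gain on top of the 32x reduction
```

Because thresholding zeroes most coefficients, the byte stream becomes highly redundant and LZMA shrinks it well beyond the purely architectural 32× ratio; this illustrates why sparsity of the hidden representation is encouraged during training.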
Here again we divide the input snapshots into patches of size 128×128. We feed the CNN
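As a stand-in for the U-net predictor, the decimation-and-interpolation strategy can be sketched with plain linear interpolation between the stored reference snapshots. The function name, the decimation step, and the toy wavefield below are assumptions for illustration; the interpolator described in the text is a trained U-net operating on 128×128 patches, not a linear blend.

```python
import numpy as np


def decimate_and_interpolate(wavefield, step):
    # Keep every `step`-th snapshot as a reference and rebuild the decimated
    # ones from the two bracketing references (linear weights stand in for
    # the learned U-net mapping).
    n = wavefield.shape[0]
    refs = np.arange(0, n, step)  # indices of the stored reference snapshots
    rebuilt = np.empty_like(wavefield)
    for t in range(n):
        lo = refs[refs <= t].max()
        hi = refs[refs >= t].min() if (refs >= t).any() else lo
        w = 0.0 if hi == lo else (t - lo) / (hi - lo)
        rebuilt[t] = (1 - w) * wavefield[lo] + w * wavefield[hi]
    return rebuilt


# Example: a smooth synthetic "wavefield" of 9 snapshots, keeping every 4th.
times = np.linspace(0, 1, 9)
field = np.sin(2 * np.pi * times)[:, None, None] * np.ones((1, 4, 4))
approx = decimate_and_interpolate(field, step=4)
```

A learned interpolator replaces the fixed linear weights with a CNN mapping from the bracketing reference patches to the missing snapshot, which can model wave propagation between time steps rather than simple blending.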
