
with patches extracted, at the same location, from a pair of triplets of snapshots, {u_{nk}, u_{nk+1}, u_{nk+2}} and {u_{(n+1)k}, u_{(n+1)k+1}, u_{(n+1)k+2}}. Each patch is normalized between 0 and 1 before being fed into the network. The input layer takes the two triplets of patches. The encoding path consists of seven stages in which 2D convolutions with filter size 4×4 and stride 2×2, followed by batch normalization and Leaky ReLU, are performed. The decoding path consists of seven stages in which a ReLU, an upsampling with factor 2, and a 2D convolution with filter size 4×4 and stride 1×1 followed by batch normalization are performed. In each stage we concatenate the result coming from the corresponding encoding stage. Going up the decoding path of the network, the number of filters is gradually reduced from 512 to 64. The output layer comprises k−3 reconstructed patches. The network is trained on the original k−3 snapshots: the model weights are estimated by minimizing the MSE between reconstructed and original snapshots. The rationale behind the input choice is that the ideal interpolation should simulate the behaviour of the wave equation, which is driven by spatial and temporal 2nd derivatives: feeding the U-net with triplets of snapshots should implicitly allow the CNN to learn from 2nd order time derivatives.

Joint compression/interpolation. The aforementioned CNN-based procedures outline a potential workflow that allows both techniques to be jointly exploited for wavefield compression:

Compression:
1. The wavefield is decimated, keeping three neighbouring snapshots (u_{nk}, u_{nk+1}, u_{nk+2}) every k.
2. The resulting triplets {u_{nk}, u_{nk+1}, u_{nk+2}} are compressed through the encoding branch of the autoencoder, obtaining θ_{nk}, θ_{nk+1}, θ_{nk+2}.
3. Thresholding and lossless compression are then applied.

Decompression:
1. Lossless decompression is applied to output the encoded snapshots.
2. The encoded snapshots θ_{nk}, θ_{nk+1}, θ_{nk+2} are decompressed through the decoding branch of the autoencoder, obtaining the corresponding decompressed snapshots û_{nk}, û_{nk+1}, û_{nk+2}.
3. By feeding the trained U-net with a couple of adjacent triplets of decompressed snapshots ({û_{nk}, û_{nk+1}, û_{nk+2}} and {û_{(n+1)k}, û_{(n+1)k+1}, û_{(n+1)k+2}}), the k−3 snapshots between them are reconstructed.

The rationale behind the outlined strategy is that a properly designed autoencoder allows effective compression of the snapshots, whereas the U-net guarantees an additional k/3 compression ratio at the cost of a minimum decompressed atom of k+3 snapshots.
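As a concrete reference, the following Python sketch mirrors the compression and decompression steps listed above. It is only a sketch under several assumptions: ae_encode and ae_decode stand for the two branches of a trained autoencoder, unet_interpolate for the trained U-net, and the threshold value and the use of zlib as the lossless codec are illustrative choices, not prescribed by the text.

```python
import zlib
import numpy as np

def compress(wavefield, k, ae_encode, threshold=1e-3):
    """wavefield: array of shape (nt, nx, nz). Keep three neighbouring snapshots every k,
    encode them with the autoencoder, threshold the codes and apply lossless compression."""
    nt = wavefield.shape[0]
    triplets = [wavefield[n:n + 3] for n in range(0, nt - 2, k)]   # u_nk, u_nk+1, u_nk+2
    codes = np.stack([ae_encode(t) for t in triplets])             # θ_nk, θ_nk+1, θ_nk+2
    codes[np.abs(codes) < threshold] = 0.0                         # thresholding
    blob = zlib.compress(codes.astype(np.float32).tobytes())       # lossless stage
    return blob, codes.shape

def decompress(blob, shape, ae_decode, unet_interpolate):
    """Invert the steps above and fill the k-3 missing snapshots with the U-net."""
    codes = np.frombuffer(zlib.decompress(blob), dtype=np.float32).reshape(shape)
    triplets = [ae_decode(c) for c in codes]                       # û_nk, û_nk+1, û_nk+2
    out = []
    for left, right in zip(triplets[:-1], triplets[1:]):
        out.append(left)
        out.append(unet_interpolate(left, right))                  # the k-3 snapshots in between
    out.append(triplets[-1])
    return np.concatenate(out, axis=0)
```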
Experimental results. In this initial study we investigated the two proposed compression procedures separately. In order to build the dataset used to test both strategies we proceeded as follows:
1. We extracted 7 velocity models of 1024×1024 samples from the well-known BP-2004 and Marmousi models, with spatial sampling rates ∆x=10 m and ∆z=5 m. The models were selected to mimic different scenarios (smooth models, layering, faults, salt bodies, etc.).
2. For each model we simulated 5 equispaced sources at depth z=0 m and 5 equispaced sources at depth z=2560 m (at the centre of the velocity model in depth).
3. For each source we computed 4 seconds of propagation through a finite-difference code based on Devito (Luporini et al., 2018) and we stored a snapshot every dt=0.01 s.
4. In this way we produced a dataset made of 70 volumes of propagating wavefield, with 400 snapshots each, for a total of 28000 snapshots.

To train the U-net for wavefield interpolation we selected 15 volumes of propagating wavefield, which we further split into a 75% training dataset and a 25% validation dataset. The training was performed through 50 epochs of the Adam optimization algorithm on batches of 64.
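For illustration, a minimal Keras sketch of a U-net interpolator along the lines described above could look as follows. The patch size, the encoder filter progression, the Leaky ReLU slope and the choice k=19 (so that the output holds k−3=16 snapshots) are assumptions; the layer types, kernel sizes, strides, the 512→64 decoder progression and the training setup (MSE loss, Adam, 50 epochs, batches of 64) follow the description in the text.

```python
from tensorflow.keras import layers, Model

PATCH = 128                                       # assumed patch size (2**7, so seven stride-2 stages reach 1x1)
K = 19                                            # assumed decimation factor, so the net outputs K - 3 = 16 snapshots
ENC_FILTERS = [64, 128, 256, 512, 512, 512, 512]  # assumed encoder progression
DEC_FILTERS = [512, 512, 512, 256, 128, 128, 64]  # decoder filters decreasing from 512 to 64, as in the text

inp = layers.Input(shape=(PATCH, PATCH, 6))       # two triplets of snapshot patches stacked on the channel axis

# Encoding path: seven stages of 4x4 convolution with stride 2, batch normalization, Leaky ReLU
skips = []
x = inp
for f in ENC_FILTERS:
    x = layers.Conv2D(f, 4, strides=2, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU(0.2)(x)                  # slope 0.2 is an assumption
    skips.append(x)

# Decoding path: seven stages of ReLU, x2 upsampling, 4x4 convolution with stride 1,
# batch normalization, then concatenation with the corresponding encoding stage
for i, f in enumerate(DEC_FILTERS):
    x = layers.ReLU()(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(f, 4, strides=1, padding="same")(x)
    x = layers.BatchNormalization()(x)
    if i < len(DEC_FILTERS) - 1:                  # last stage is back at full resolution: no skip here (assumed)
        x = layers.Concatenate()([x, skips[-(i + 2)]])

out = layers.Conv2D(K - 3, 4, strides=1, padding="same")(x)   # K - 3 reconstructed patches
unet = Model(inp, out)

# Training as described: MSE between reconstructed and original snapshots, Adam, 50 epochs, batches of 64
unet.compile(optimizer="adam", loss="mse")
# unet.fit(train_inputs, train_targets, validation_data=(val_inputs, val_targets),
#          epochs=50, batch_size=64)
```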

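The patch preparation implied by the text (patches cut at the same location from two adjacent triplets and from the k−3 snapshots in between, each normalized between 0 and 1 before entering the network) could be sketched as follows; patch size, stride and the per-patch min-max normalization are assumptions.

```python
import numpy as np

def extract_pairs(wavefield, k, patch=128, stride=128):
    """wavefield: (nt, nx, nz). Returns U-net inputs (two triplets of patches stacked on the
    channel axis) and targets (the k-3 snapshots in between), each patch normalized to [0, 1]."""
    def norm(p):                                   # per-patch min-max normalization (assumed)
        lo, hi = p.min(), p.max()
        return (p - lo) / (hi - lo + 1e-12)

    nt, nx, nz = wavefield.shape
    inputs, targets = [], []
    for n in range(0, nt - k - 2, k):
        left = wavefield[n:n + 3]                  # u_nk, u_nk+1, u_nk+2
        right = wavefield[n + k:n + k + 3]         # u_(n+1)k, u_(n+1)k+1, u_(n+1)k+2
        between = wavefield[n + 3:n + k]           # the k-3 snapshots to be reconstructed
        for i in range(0, nx - patch + 1, stride):
            for j in range(0, nz - patch + 1, stride):
                sl = (slice(None), slice(i, i + patch), slice(j, j + patch))
                x = np.concatenate([left[sl], right[sl]], axis=0)     # six input channels
                inputs.append(norm(np.moveaxis(x, 0, -1)))            # HWC layout for Keras
                targets.append(norm(np.moveaxis(between[sl], 0, -1)))
    return np.stack(inputs), np.stack(targets)
```

With such a routine, training data for the sketch above could be obtained, for instance, as X, Y = extract_pairs(volume, k=19) and passed to unet.fit(X, Y, epochs=50, batch_size=64).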