
InterPACIFIC data example: Figure 3 presents the network's prediction on a real dataset. To align the data with the training simulations, a low-pass filter with a cutoff frequency of 30 Hz was applied (a minimal sketch of this step is given after the Conclusions). The predicted data were computed using a wavelet estimated from the observed data after filtering. In Figure 3a, the predicted model is shown alongside a mean 1D Vs-model profile (black curve) derived from multiple borehole measurements available from the InterPACIFIC project. Note that a significant high-low-high velocity contrast is observed in the borehole between 15 and 18 m depth, aligning well with the corresponding feature of the predicted model at that position. The network proposes a layered model with a velocity range consistent with that obtained from the borehole measurements. Figure 3b shows the comparison between the observed and predicted data. The agreement is excellent, except for the traces located between 30 and 40 m. Even there, the data do not exhibit cycle-skipping, meaning that the predicted model is a very good starting-model proposal for an FWI framework.

Fig. 3 – Field data example: (a) S-velocity model predicted using the NN at epoch 1300. (b) Observed and predicted data comparison.

Conclusions

We introduced a time-efficient neural network training in the DCT domain. The construction of the training and validation datasets was completed in parallel within 3.8 hours. The training process to reach epoch 1300 took 7.6 hours, and the time required to propose a model with the trained network and compute the inverse DCT is 0.2 seconds. All these algorithms were run on a computer powered by a 12th Gen Intel(R) Core(TM) i9-12900KF processor with an NVIDIA GeForce RTX 3080 Ti graphics card, running CUDA version 11.8. The use of DCT compression is an optimal strategy in neural network training, offering significant advantages. This approach notably reduces the memory requirement from 21 to 1.2 gigabytes, a 94% reduction in memory usage (a compression sketch is given below). Moreover, the computational cost during training is decreased by 74% with respect to full-domain training. Finally, the DCT compression enables
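As a reference for the data-alignment step described in the field data example, the following is a minimal sketch of a 30 Hz low-pass filter applied trace by trace. The array shape, sampling interval, filter type (Butterworth) and order are illustrative assumptions; the text only specifies the 30 Hz cutoff.

```python
# Minimal sketch of the 30 Hz low-pass step; shapes and parameters are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_30hz(data, dt, cutoff_hz=30.0, order=4):
    """Zero-phase low-pass filter applied trace by trace.

    data : 2D array (n_traces, n_samples), hypothetical field gather
    dt   : sampling interval in seconds
    """
    nyquist = 0.5 / dt
    b, a = butter(order, cutoff_hz / nyquist, btype="low")
    # filtfilt avoids phase distortion, keeping the filtered gather
    # time-aligned with the unfiltered one
    return filtfilt(b, a, data, axis=-1)

# Hypothetical usage: 48 traces of 1 s recorded at 1 ms sampling
gather = np.random.randn(48, 1000)
filtered = lowpass_30hz(gather, dt=0.001)
```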
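The memory and cost savings quoted in the Conclusions come from retaining only a small set of low-order DCT coefficients and inverting the transform when a model is proposed. The sketch below illustrates this idea on a gridded Vs model; the grid size and the number of retained coefficients are illustrative assumptions, not the values used in this work.

```python
# Minimal sketch of DCT-domain compression; sizes are illustrative assumptions.
import numpy as np
from scipy.fft import dctn, idctn

def compress_dct(model, n_keep=(20, 20)):
    """Keep only the low-order 2D DCT coefficients of a gridded model."""
    coeffs = dctn(model, norm="ortho")
    return coeffs[:n_keep[0], :n_keep[1]]

def decompress_dct(coeffs, full_shape):
    """Zero-pad the retained coefficients and apply the inverse DCT."""
    padded = np.zeros(full_shape)
    padded[:coeffs.shape[0], :coeffs.shape[1]] = coeffs
    return idctn(padded, norm="ortho")

# Hypothetical 200 x 300 Vs model: 60,000 values reduced to 400 coefficients
vs_model = np.random.rand(200, 300) * 400.0 + 200.0
small = compress_dct(vs_model)
vs_reconstructed = decompress_dct(small, vs_model.shape)
print(small.size / vs_model.size)  # ~0.7% of the original storage
```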
