GNGTS 2023 - Atti del 41° Convegno Nazionale
Session 3.1

Figure 2: Synthetic image logs making up the training set. For each pair, the input is on the left and the desired output mask on the right. Note that in a) each training example contains a single interface, whereas in b) examples may contain multiple interfaces; the number of edges is one of the factors through which we controlled the complexity of the training instances.

Results

The results obtained cannot be considered independent of the nature of the synthetic data generated for training. Consistent with Bengio's work, we can state that in our case as well convergence towards a local minimum occurs faster in the pre-trained model. The pre-training on a large number of relatively simple examples also explains the lower initial value of the loss function in the CL case. In absolute terms, however, the single-step training converges slightly more slowly towards its local minimum than the CL training, but reaches a lower final loss. In this regard, we compared the quality of the predictions of the two models on poor-quality real data (degraded by disturbances and artefacts related to the LWD acquisition) acquired in a difficult geological setting, a continental margin environment, where various depositional and tectonic phenomena can occur and mutually interfere, to the point that very complex structural and sedimentary configurations can take place, as in the case of Mass Transport Deposits (MTDs). The execution time of the models on real data was recorded both using CPU only and GPU only. For both models, whether trained with the single-step strategy or with CL, these times were 0.527 s (CPU) and 0.105 s (GPU) for a full image log of 624 m. Prediction results are shown in Fig. 3, which displays the segmentation of real data obtained by scrolling a moving window of 1 m depth over the log.
The moving window advances sample by sample in depth, so that consecutive windows overlap by 95%. This produces redundant information in the output, which must be collapsed appropriately before the results are displayed. To do this, the original size of the data was restored by iteratively summing the predictions contained in the overlapping portions of sequential windows, so that pixels activated in multiple sequential windows are assigned a higher prediction value.
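The overlap-add reconstruction described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the per-window prediction array, and the single-column (1-D) depth profile are simplifying assumptions; in practice each window would be a 2-D image-log patch, but the summing logic along depth is the same.

```python
import numpy as np

def aggregate_overlapping_predictions(predictions, window_len, step=1):
    """Restore a full-depth prediction by summing overlapping window outputs.

    predictions : array of shape (n_windows, window_len); window i is the
    model output for the patch starting at depth sample i * step.
    Summing the overlapping portions means a pixel activated in many
    sequential windows accumulates a higher value, as described in the text.
    """
    n_windows = predictions.shape[0]
    full_len = (n_windows - 1) * step + window_len  # original depth extent
    summed = np.zeros(full_len)
    for i, win in enumerate(predictions):
        start = i * step
        summed[start:start + window_len] += win
    return summed
```

With a step of one sample, interior samples are covered by `window_len` windows, so a detection that persists across sequential windows dominates spurious single-window activations; the accumulated values can then be thresholded or normalised before display.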