GNGTS 2023 - Atti del 41° Convegno Nazionale

Session 3.3

Deep Attributes 2.0: Extraction of Seismic Features with Deep Learning

G. Roncoroni, E. Forte, M. Pipan
Università di Trieste, Dipartimento di Matematica e Geoscienze

Introduction

We propose a new way of using Hidden Layer (HL) predictions for the extraction of features, i.e. attributes, from active seismic data, using Long Short-Term Memory (LSTM) neurons (Hochreiter and Schmidhuber, 1997). The main idea is based on the inference process of an LSTM-based Neural Network: in hidden layers the Neural Network (NN) extracts information related to attributes of the input data, such as amplitude, phase and frequency, and uses them to produce the required inference. Deep Learning applications typically ignore the information from these intermediate steps because the main interest lies in the final results. We instead exploit the HL predictions, which are usually the "transparent" steps of any NN lying between the data input and the expected output, to characterize features and signatures embedded in seismic data. Furthermore, deep attributes can be used to classify and correlate features even within large and complex datasets. Following the most popular definitions of seismic attributes (see e.g., Chopra and Marfurt, 2005 and Anstey et al., 2007), we define the new extracted features as "deep attributes" and test their effectiveness on synthetic and field seismic data.

Methods

In this paper we focus on LSTM-based NNs; however, the proposed approach could in principle be applied to any NN geometry. The choice of LSTM is motivated by the causal nature of the Recurrent Neural Network (RNN), which can provide a more reliable representation of the different signal components embedded in the seismic data.
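As an illustration of the idea, a minimal sketch of reading out hidden states from an LSTM layer is given below. The gate equations follow Hochreiter and Schmidhuber (1997); the weights here are random toy values, not the trained network of the paper, and the function name `lstm_hidden_states` is ours. Each hidden state h_t becomes one vector of "deep attributes" per time sample:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_hidden_states(x, Wx, Wh, b):
    """Run one LSTM layer over a 1-D sequence (e.g. a seismic trace)
    and return every hidden state h_t instead of only the final output.

    x  : (T,) input sequence
    Wx : (4*H, 1) input weights, Wh : (4*H, H) recurrent weights,
    b  : (4*H,) bias; gates stacked in the order [i, f, g, o].
    """
    H = Wh.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    hs = []
    for x_t in x:
        z = Wx @ np.array([x_t]) + Wh @ h + b
        i = sigmoid(z[0:H])          # input gate
        f = sigmoid(z[H:2 * H])      # forget gate
        g = np.tanh(z[2 * H:3 * H])  # candidate cell update
        o = sigmoid(z[3 * H:4 * H])  # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        hs.append(h.copy())
    return np.stack(hs)  # shape (T, H): H attribute channels per sample

# Toy example: 4 deep-attribute channels extracted from a synthetic trace
rng = np.random.default_rng(0)
H = 4
trace = np.sin(np.linspace(0, 8 * np.pi, 64))
Wx = rng.normal(scale=0.5, size=(4 * H, 1))
Wh = rng.normal(scale=0.5, size=(4 * H, H))
b = np.zeros(4 * H)
attrs = lstm_hidden_states(trace, Wx, Wh, b)
print(attrs.shape)  # (64, 4)
```

In a framework such as Keras or PyTorch the same readout is obtained by requesting the full hidden-state sequence of the layer (e.g. `return_sequences=True`) rather than only the network's final prediction.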
To fully exploit the methodology, a crucial point is the NN geometry: we adopt an encoder-decoder structure directly inspired by the typical Encoder-Decoder Convolutional NN (ED-CNN). A classical ED-CNN geometry is built as a chain of paired convolutional layers, linked by pooling layers in the encoder and by up-sampling layers in the decoder. A pooling layer takes the values in an interval (defined by the kernel) and outputs a single value, i.e. the maximum value in this case; in this way the trace length is reduced by a factor equal to the kernel size. An up-sampling layer does exactly the opposite: it takes a single value and replicates it kernel-size times. The mix
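The pooling and up-sampling steps described above can be sketched for a 1-D trace as follows; this is an illustrative NumPy implementation with hypothetical function names, not the authors' code:

```python
import numpy as np

def max_pool_1d(trace, kernel):
    """Encoder step: replace each non-overlapping window of length
    `kernel` with its maximum, shortening the trace by that factor.
    The trace length must be divisible by the kernel size."""
    assert len(trace) % kernel == 0
    return trace.reshape(-1, kernel).max(axis=1)

def upsample_1d(trace, kernel):
    """Decoder step: replicate each value `kernel` times, restoring
    the length lost in the corresponding pooling step."""
    return np.repeat(trace, kernel)

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.0])
p = max_pool_1d(x, 2)   # -> [3., 5., 4.]
u = upsample_1d(p, 2)   # -> [3., 3., 5., 5., 4., 4.]
print(p, u)
```

Chaining these two operations is what lets the encoder compress the trace into a coarser representation and the decoder expand it back to the original length.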
