GNGTS 2022 - Atti del 40° Convegno Nazionale

DEEP ATTRIBUTES: EXTRACTION OF GPR STATISTICAL FEATURES WITH DEEP LEARNING

G. Roncoroni, E. Forte, M. Pipan
Università di Trieste, Dipartimento di Matematica e Geoscienze, Italy

Introduction. We propose a new way of using Hidden Layer (HL) predictions for the extraction of features from Ground Penetrating Radar (GPR) data, using Long Short-Term Memory (LSTM) neurons. The main idea is based on the inference process of an LSTM-based Neural Network (NN): in its hidden layers the NN extracts information related to attributes of the input data, such as amplitude, phase and frequency, and uses it to produce the required inference. Deep Learning applications typically ignore the information from these intermediate steps because the main interest lies in the final results. In this work we study the possible exploitation of intermediate prediction steps to characterize features and signatures embedded in GPR data. Following the most popular definitions of seismic attributes (see e.g. Chopra and Marfurt, 2005; Anstey et al., 2007), we define the newly extracted features as “deep attributes” and test their effectiveness on GPR datasets. We find that deep attributes help to obtain an improved and better constrained data interpretation, highlighting zones with the same signature that are not apparent on conventional reflection amplitude profiles. Furthermore, deep attributes can be used to classify and correlate features even within large and complex datasets.

Methods. In this paper we focus on LSTM-based NNs, but the proposed approach can in principle be applied to any NN geometry. The choice of LSTM is motivated by the causal nature of the Recurrent Neural Network (RNN), which can provide a more reliable representation of the different signal components embedded in the GPR data. The simplest RNN is made up of a single neuron that receives an input, produces an output, and sends that output back to itself as well as to the output vector. At each time step t (often called a frame), the recurrent neuron receives the input x(t) as well as its own output from the previous time step, y(t-1). Each recurrent neuron therefore has two sets of weights: one for the input x(t) and the other for the output of the previous time step, y(t-1). If we consider the whole recurrent layer, instead of just one recurrent neuron, we can collect all the weight vectors in two weight matrices, Wx and Wy, while φ is an activation function. Therefore:

$\mathbf{y}_{(t)} = \phi\left(\mathbf{W}_x^{\top}\,\mathbf{x}_{(t)} + \mathbf{W}_y^{\top}\,\mathbf{y}_{(t-1)} + \mathbf{b}\right)$,

where $\mathbf{b}$ is the bias vector.
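A minimal NumPy sketch of this recurrence follows; the shapes, the random values and the choice of tanh for φ are illustrative assumptions, not tied to any specific GPR configuration:

```python
import numpy as np

def rnn_layer_step(x_t, y_prev, Wx, Wy, b):
    # One time step of a simple recurrent layer:
    # y(t) = phi(Wx^T x(t) + Wy^T y(t-1) + b), here with phi = tanh.
    return np.tanh(x_t @ Wx + y_prev @ Wy + b)

rng = np.random.default_rng(0)
Wx = rng.normal(size=(1, 8))        # input-to-hidden weights (1 feature, 8 units)
Wy = rng.normal(size=(8, 8))        # hidden-to-hidden (recurrent) weights
b = np.zeros(8)                     # bias vector

trace = rng.normal(size=(256, 1))   # a synthetic trace of 256 time samples
y = np.zeros(8)                     # initial state
states = []
for x_t in trace:                   # unroll the recurrence along the trace
    y = rnn_layer_step(x_t, y, Wx, Wy, b)
    states.append(y)
states = np.array(states)           # (256, 8): one hidden activation per sample
```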
One of the problems of RNNs is handling long-term dependencies. To mitigate it, special neurons have been introduced; one of the most widely used is the Long Short-Term Memory (LSTM) cell, introduced by Hochreiter and Schmidhuber (1997). LSTMs are explicitly designed to avoid the long-term dependency problem: they have a chain-like structure, as RNNs do, but the repeating module has a different internal structure (see Fig. 1), which is able to retain information even from time steps earlier than t-1. The ability to capture long-term dependencies in a signal is the reason for choosing this type of layer in the proposed extraction method. The final aim of the procedure is to automatically extract (i.e. interpret, Roncoroni et al., 2021) horizons, and we train the NN to perform this task (Fig. 2). The proposed workflow for this application is (see the sketches after the list):
I. Create an LSTM-based NN that fits a specific problem.
II. Train the NN, for a specific task, on a synthetic dataset (see e.g. Roncoroni et al., 2021).
III. Apply the trained NN to real data.
IV. Use the HL predictions as a set of additional information (i.e. deep attributes) for improved GPR interpretation.
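A minimal Keras sketch of steps I and II is given below. The architecture (two stacked LSTM layers named "hl1" and "hl2", a per-sample sigmoid horizon probability), the layer sizes and the training settings are illustrative assumptions, not the configuration used by the authors, and the random arrays stand in for a real synthetic GPR dataset:

```python
import numpy as np
import tensorflow as tf

# Step I: an LSTM-based NN (functional API, so that intermediate tensors
# stay accessible). Both LSTM layers return full sequences, so every time
# sample of a trace receives a prediction.
inputs = tf.keras.Input(shape=(None, 1))                  # one trace, any length
hl1 = tf.keras.layers.LSTM(32, return_sequences=True, name="hl1")(inputs)
hl2 = tf.keras.layers.LSTM(32, return_sequences=True, name="hl2")(hl1)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(hl2)   # horizon probability
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Step II: train on a synthetic dataset (random placeholders here; in
# practice, modelled GPR traces with known horizon positions).
x_train = np.random.randn(100, 256, 1).astype("float32")
y_train = (np.random.rand(100, 256, 1) > 0.95).astype("float32")
model.fit(x_train, y_train, epochs=2, batch_size=16, verbose=0)
```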
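For steps III and IV, one straightforward way to read out the HL predictions is to wrap the trained network in a second model that exposes the intermediate outputs alongside the final prediction; this sketch reuses the hypothetical tensors and layer names from the previous block:

```python
# Steps III-IV: apply the trained NN to (real) data and collect the hidden
# layer activations as deep attributes. The wrapper model reuses the trained
# weights; only the exposed outputs differ.
attribute_model = tf.keras.Model(inputs, [hl1, hl2, outputs])

x_real = np.random.randn(1, 256, 1).astype("float32")    # placeholder for a real trace
a1, a2, horizons = attribute_model.predict(x_real, verbose=0)
# a1 and a2 have shape (1, 256, 32): one attribute trace per LSTM unit,
# which can be displayed alongside the reflection amplitude profile.
```

Each unit's activation section can then be mapped, classified or correlated across the profile in the same way as a conventional attribute.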
