The functional encoding and decoding are performed using rational quadratic kernels based on the great-circle distances between the input and output points:

$k_{rq}(d_{ij}) = \left(1 + \dfrac{d_{ij}^{2}}{2\alpha\lambda^{2}}\right)^{-\alpha}, \qquad \lambda > 0,\ \alpha > 0$

with $\lambda$ and $\alpha$ being two learnable parameters called the length scale and the scale mixture, respectively. The backbone neural network has a custom architecture, shown in Fig. 2, consisting of a common sequential network that leads into two different branches for the evaluation of the mean IM values and of the associated standard deviations, respectively.

Fig. 2 – Schematic diagram of the ConvCNP backbone neural network architecture.

Model training

The model has been trained with a combination of synthetic and recorded data. Numerical simulations provide a cost-effective way to acquire additional data for training neural networks: they allow building datasets whose size meets the actual requirements for training and in which the distribution of events can be balanced, generating more scenarios for rare, high-magnitude events (generally under-represented in recorded data). Furthermore, different scenarios can be generated with different noise levels in the input data, leading to models that are more robust to input noise.

A synthetic dataset has been created by simulating multiple events over the Italian territory: the source characteristics have been taken within the ranges provided by the Database of Individual and Composite Seismogenic Sources, considering for each source multiple scenarios at different magnitudes. The ShakeMap® INGV catalogue has been used as the source of recorded data: specifically, the database considered contains 4925 events whose magnitudes range between M3.0 and M6.5.

The model is trained to learn a conditional log-normal distribution over the expected GMPE output in two stages: first, a new model has been pre-trained on the synthetic dataset; then, the pre-trained model has been fine-tuned using the real data. For each event (either synthetic or recorded), a variable number of context points (i.e., the IM values at the stations) and a fixed number of target points have been considered: the context points are corrected for site effects using the amplification factors by Falcone et al. (2020) evaluated at the station locations.
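As a minimal sketch of the rational quadratic kernel defined above, the snippet below evaluates $k_{rq}$ on a tensor of great-circle distances. The softplus re-parameterisation used to keep the learnable parameters positive, and the function and variable names, are assumptions for illustration only.

```python
import torch

def rq_kernel(d, length_scale, alpha):
    """Rational quadratic kernel k_rq(d) = (1 + d^2 / (2*alpha*lambda^2))^(-alpha),
    evaluated element-wise on a tensor of great-circle distances d."""
    return (1.0 + d ** 2 / (2.0 * alpha * length_scale ** 2)) ** (-alpha)

# lambda (length scale) and alpha (scale mixture) are learnable; a softplus
# re-parameterisation (an assumption here) keeps them strictly positive.
raw_lambda = torch.nn.Parameter(torch.zeros(()))
raw_alpha = torch.nn.Parameter(torch.zeros(()))
length_scale = torch.nn.functional.softplus(raw_lambda) + 1e-6
alpha = torch.nn.functional.softplus(raw_alpha) + 1e-6

distances = torch.tensor([[0.0, 12.3], [12.3, 0.0]])  # example distance matrix (km)
weights = rq_kernel(distances, length_scale, alpha)
```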
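The two-branch backbone of Fig. 2 can be sketched as a shared trunk followed by separate heads for the mean IM values and the standard deviations. The layer types, widths, and kernel sizes below are illustrative assumptions, not the actual architecture.

```python
import torch
import torch.nn as nn

class TwoBranchBackbone(nn.Module):
    """Shared convolutional trunk followed by two branches: one for the mean
    IM values and one for the associated standard deviations (illustrative)."""

    def __init__(self, in_channels: int = 8, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.mean_branch = nn.Conv2d(hidden, 1, kernel_size=1)
        self.std_branch = nn.Conv2d(hidden, 1, kernel_size=1)

    def forward(self, x):
        h = self.trunk(x)
        mean = self.mean_branch(h)
        # softplus keeps the predicted standard deviation strictly positive
        std = nn.functional.softplus(self.std_branch(h)) + 1e-6
        return mean, std
```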
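Finally, the conditional log-normal objective and the two-stage training schedule (pre-training on synthetic events, fine-tuning on recorded ones) could look like the following sketch. The loader names, optimiser settings, and epoch counts are hypothetical and only indicate the overall flow.

```python
import torch

def lognormal_nll(mean_log, std_log, observed_im):
    """Negative log-likelihood of observed (positive) IM values under a
    conditional log-normal distribution parameterised in log space."""
    dist = torch.distributions.LogNormal(mean_log, std_log)
    return -dist.log_prob(observed_im).mean()

def run_stage(model, loader, epochs, lr):
    """One training stage; the same routine is reused for pre-training on
    synthetic events and for fine-tuning on recorded events."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for context, target_im in loader:       # context: encoded station IMs
            mean_log, std_log = model(context)  # model: e.g. the backbone above
            loss = lognormal_nll(mean_log, std_log, target_im)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Stage 1: pre-train on the synthetic catalogue; Stage 2: fine-tune on recorded data.
# run_stage(model, synthetic_loader, epochs=50, lr=1e-3)
# run_stage(model, recorded_loader, epochs=10, lr=1e-4)
```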