GNGTS 2015 - Atti del 34° Convegno Nazionale

Fortunately, in many practical situations we have a priori information about the velocity model that complements the information contained in the seismic data. The combination of these two sets of information (the information contained in the data and the a priori one) is often sufficient to constrain the problem adequately.

From a historical point of view, the conventional velocity analysis for each common-mid-point (CMP) section is defined by the following steps (Yilmaz, 2001): 1) semblance analysis is performed in the time and root-mean-square (RMS) velocity domain by computing a normalized coherency coefficient of the data along each hyperbolic time trajectory; this produces a map S(t0, V_RMS); 2) manual picking is performed on the semblance map for each specific stacking time t0; 3) interval velocities (V_i) are computed from the picked V_RMS to construct the earth velocity model in time; 4) time-to-depth conversion is performed to obtain the earth velocity model in depth.

In more recent years, the main effort in velocity model building techniques has been directed towards the improvement of depth-migrated images. In particular, we can affirm that tomographic inversion forms the basis of all contemporary methods of updating velocity models for depth imaging. Over the past 15 years, tomographic inversion has evolved from ray-based travel-time tomography, in the data domain and in the image (depth) domain, to full waveform techniques (a fairly complete reference can be found, for example, in Jones, 2010), supporting the production of reasonably reliable images for data from complex environments. This evolution has taken advantage of both the increased quality and dimensionality of seismic data (from millions of traces to currently billions of traces) and the increased computing power dedicated to seismic processing. Fewer novelties have appeared in the "time processing world", especially for processing steps related to time-domain imaging.
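Step 1 above can be sketched in code. The following is a minimal, single-sample illustration of the semblance map S(t0, V_RMS); the function name and interface are hypothetical, and production codes additionally smooth the coherency over a small time window around each t0:

```python
import numpy as np

def semblance(gather, offsets, dt, t0_axis, v_axis):
    """Sketch of semblance analysis: S(t0, V_RMS) for a CMP gather.

    gather  : (n_samples, n_traces) array, one column per trace
    offsets : (n_traces,) source-receiver offsets [m]
    dt      : sample interval [s]
    Hypothetical single-sample version for illustration only.
    """
    n_samples, n_traces = gather.shape
    S = np.zeros((len(t0_axis), len(v_axis)))
    for i, t0 in enumerate(t0_axis):
        for j, v in enumerate(v_axis):
            # hyperbolic time trajectory t(x) = sqrt(t0^2 + (x / v)^2)
            t = np.sqrt(t0**2 + (offsets / v) ** 2)
            idx = np.round(t / dt).astype(int)
            valid = idx < n_samples           # discard samples beyond the record
            m = valid.sum()
            if m < 2:
                continue
            a = gather[idx[valid], np.nonzero(valid)[0]]
            den = m * np.sum(a**2)
            # normalized coherency: 1 when all traces agree along the hyperbola
            S[i, j] = a.sum() ** 2 / den if den > 0 else 0.0
    return S
```

For a synthetic gather containing a single hyperbolic event, the map peaks at the event's stacking velocity, which is exactly what manual (or automatic) picking then exploits.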
Nonetheless, data images obtained in the time domain (from the simplest volume stack to a more accurate pre-stack time migration) are still the most powerful QC (quality check) tools that any geophysicist can apply to evaluate the results of any time processing step (e.g., denoising, multiple subtraction, ...). Indeed, as the current magnitude of seismic datasets prevents anyone from a direct and complete inspection of pre-stack gathers, time-domain images are a truly unique opportunity for data QC. Nonetheless, the "resolving power" of such time-domain processes demands a high-quality velocity model, one that should be available from the very early processing steps and that should not require any intensive human interaction to be built.

Since its introduction by Taner and Koehler (1969), the semblance measure has been an indispensable tool for velocity analysis of seismic records. The drawback of this methodology is that a visual interpretation of the semblance map is necessary for all (or many of) the common-mid-point sections. Moreover, a localized mispick along a velocity trend can yield anomalous interval velocities. In particular, picks at too close time intervals can yield physically implausible interval velocity values; therefore, they should not be taken into account. The problem of automatic velocity picking has been studied by many authors. Toldi (1989) and Viera et al. (2011) built an a-priori constrained interval-velocity model such that the stacking velocities calculated from that model give the most powerful stack. In Fomel (2009) an algorithm based on the AB semblance proposed by Sarkar et al. (2001, 2002) is derived, while Li and Biondi (2009) developed an automatic velocity picking technique based on the simulated annealing algorithm. The present study proposes a total or partial elimination of the manual picking step, by setting up a data-driven V_RMS model building strategy.
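The instability caused by picks at too close time intervals can be made concrete with the standard Dix (1955) conversion from picked RMS velocities to interval velocities. The helper below is an illustrative sketch, not part of the authors' method:

```python
import numpy as np

def dix_interval_velocity(t0, v_rms):
    """Dix conversion of picked RMS velocities to interval velocities.

    For layer k:  v_int_k^2 = (v_rms_k^2 * t_k - v_rms_{k-1}^2 * t_{k-1})
                              / (t_k - t_{k-1})

    A small time separation t_k - t_{k-1} amplifies pick errors and can
    drive the numerator negative, i.e. a physically implausible
    (imaginary) interval velocity -- the instability noted in the text.
    """
    t0 = np.asarray(t0, dtype=float)
    v_rms = np.asarray(v_rms, dtype=float)
    num = np.diff(v_rms**2 * t0)
    den = np.diff(t0)
    v2 = num / den
    if np.any(v2 <= 0):
        raise ValueError("non-physical interval velocity: picks too close "
                         "or inconsistent")
    return np.sqrt(v2)
```

With well-separated, consistent picks the conversion is stable; with two picks only 20 ms apart and a slightly decreasing RMS trend, the squared interval velocity goes negative and the pick pair must be rejected.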
The developed model building technique uses the time migration and/or normal moveout outputs with an additional parameter scan, trading currently available computing power for intensive human interaction, while taking advantage of the compressive sensing framework (e.g., Candès et al., 2006). Indeed, a non-linear inversion, constrained by interval velocity bounds, provides a sparse and robust solution.
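The abstract does not spell out the inversion algorithm itself. As a generic illustration of a sparsity-promoting, bound-constrained solver in the compressive-sensing spirit, a projected iterative soft-thresholding (ISTA) sketch might look like the following; the function and its parameters are hypothetical and not the authors' specific scheme:

```python
import numpy as np

def projected_ista(A, d, lam, lb, ub, n_iter=200):
    """Illustrative sparse inversion with box (interval-velocity-like) bounds.

    Approximately solves  min ||A m - d||^2 + lam * ||m||_1
    subject to lb <= m <= ub, via gradient steps, soft thresholding
    (the sparsity-promoting step), and projection onto the bounds.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the misfit gradient
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ m - d)            # gradient of the data misfit
        z = m - g / L                    # gradient descent step
        m = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        m = np.clip(m, lb, ub)           # enforce model bounds
    return m
```

The soft-thresholding step drives small, poorly constrained components to exactly zero, which is what makes the recovered model sparse and robust to noise, while the clipping step keeps every component inside the admissible interval.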
