
SCHOOL OF INFORMATION TECHNOLOGY
TECHNICAL REPORTS, COMPUTING SERIES

A Comparative Study of Open Seismic Data Processing Packages

Izzatdin A. Aziz, Andrzej M. Goscinski and Michael Hobbs
{ia, ang, mick}@deakin.edu.au

TR C11/2, May 2011
GEELONG, VIC 3220, AUSTRALIA


Deakin University, Geelong

Abstract

New seismic computational functions are actively being developed by geophysicists and computer experts for open seismic data processing packages, or open SDP packages for short. However, the volume of contributed functions has produced redundancy among open SDP packages in solving common seismic problems, leaving users uncertain about which function to apply to a specific problem. A classification of the seismic computational functions in open SDP packages is therefore needed to guide the development of new seismic functions. In response, this paper introduces a taxonomy that classifies seismic computational functions into three distinct groups: Data Manipulation, Reflection Seismology and Visualization. Each group consists of computational functions selected according to the characteristics of the seismic problems they solve. The taxonomy covers seismic computational functions from three open SDP packages: Seismic UNIX (SU), Madagascar and OpenDtect. To date, we have not seen any comparative study of the functionalities of these three open SDP packages, so we performed functionality tests comparing each package's execution of a series of seismic data processes, using a historical SEGY dataset of 122 GBytes in size. The execution was conducted on a high-performance cluster. The analysis of the tests is presented from the viewpoint of system analysis; structural geology, such as identifying subsurface faults and hydrocarbon reservoirs, is not presented. The results of the tests are significant: we discovered that data format conversion is possible between the open SDP packages. The original SEGY dataset was reduced in size when converted to the SU format, due to the elimination of the file header, which SU does not require. The original file was also reduced, to 115 GBytes, when converted to Madagascar's format, because Madagascar's format uses a contemporary memory arrangement approach. CPU execution times for each open SDP package to complete the functionality tests show that Madagascar was faster than the other packages by approximately 32 hours.

1 Introduction

The hydrocarbon industry relies heavily on seismic data processing packages to facilitate the discovery of new oil and gas traps. The richness and robustness of the computational functions that implement reflection seismological methods determine the practicality of a software package. Reflection seismology is the method of Exploration Geophysics that uses the principle of signal wave propagation through the Earth's subsurface to estimate the properties of the Earth [1]. It is often used during geological mapping to map structures beneath the Earth's surface. Geological mapping uses acoustic signals, or echoes, that propagate deep beneath the surface. In a typical geological mapping operation, artificial signals are produced by vibrating devices, called vibroseis, such as ground hammers and water guns. Signals generated by vibroseis travel into the earth and bounce back to the surface, where they are gathered by receivers. The series of signal reflection times produced by this process is recorded to construct an image of the Earth's subsurface. This visual representation of the subsurface geological structure is commonly known in Exploration Geophysics as a seismic time image.
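The basic relationship that turns these recorded reflection times into an image of subsurface depth can be sketched in a few lines; the velocity figure and function name below are illustrative assumptions, not values from this report:

```python
# Hedged sketch: a reflection recorded at two-way travel time t, through a
# medium with average velocity v, corresponds to a reflector at depth
# d = v * t / 2 (the signal travels down and back up).

def reflector_depth(two_way_time_s, velocity_m_per_s):
    """Depth of a reflector from its recorded two-way travel time."""
    return velocity_m_per_s * two_way_time_s / 2.0

# e.g. a 2.0 s two-way time through rock averaging 3000 m/s places the
# reflector at about 3000 m depth.
depth = reflector_depth(2.0, 3000.0)
```

Real packages apply this idea per sample with depth-varying velocity models, but the halving of the two-way time is the common core.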

The goal of many seismic data processing packages is to produce clear seismic time images. To do so, they perform a series of analyses and data manipulations, such as noise filtering and the validation and rectification of signal reflection points. The image represents Earth properties, such as seismic lines or faults, contours and colour intensities, which assist geophysical experts in predicting the possible existence of hydrocarbon traps. However, processing large seismic datasets to produce seismic images is computationally exhaustive. Even a small-scale seismic dataset will usually contain no fewer than 24 million signal reflections, and intensive mathematical calculation must be performed on each one to validate the correctness of its reflection point. The initial geological exploration process, followed by seismic data processing and interpretation, can take up to almost a year before the actual drilling of an oil and gas well even starts [2]; the computational part, processing the seismic data, takes an average of 6 months. Our literature study [2-6] shows that open seismic data processing packages were initially designed to process seismic data sequentially. A natural way to expedite the computation is therefore to use distributed and parallel systems.
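A back-of-envelope calculation suggests why this scale of data forces the move to clusters. All figures other than the 24 million traces quoted above are assumptions chosen for illustration:

```python
# Rough data-volume estimate for a "small" survey. Only TRACES comes from
# the text; sample count, sample width and header size are assumptions
# based on common SEGY conventions.

TRACES = 24_000_000          # lower bound quoted in the text
SAMPLES_PER_TRACE = 1500     # assumption: e.g. a 6 s record at 4 ms sampling
BYTES_PER_SAMPLE = 4         # assumption: 4-byte samples
TRACE_HEADER_BYTES = 240     # SEGY trace header size

total_bytes = TRACES * (SAMPLES_PER_TRACE * BYTES_PER_SAMPLE + TRACE_HEADER_BYTES)
total_gb = total_bytes / 1e9
```

Under these assumptions the survey already occupies roughly 150 GB, which is comparable to the 122 GByte dataset used in the tests reported here.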

The goal of this research project is to help geophysical experts process large volumes of seismic data in an accelerated manner through high-performance computing facilities, such as cluster computing and cloud computing. The subjects of our study are three seismic data processing packages: Seismic UNIX (SU), Madagascar and OpenDtect. The three packages are popular among geophysicists for industrial-scale seismic data manipulation and analysis [4-6].

1.1 Seismic UNIX

According to [3], SU is an open seismic processing package that has been widely used by academics and researchers as a learning aid and laboratory tool [4-6]. It is also used by the hydrocarbon industry for actual field and onsite data processing and analysis [7]. The package has won several scientific awards over the years and is accredited by the Society of Exploration Geophysicists (SEG). SU was initially developed at the Center for Wave Phenomena at the Colorado School of Mines [7]. SU functions are executed by the user as command lines via a command prompt console. SU was initially written for sequential execution; although there have been efforts to modify it to execute on a Parallel Virtual Machine (PVM) [8], that work has seen few enhancements over the years. SU was released in 1992, and since then many computational functions have been contributed to its enhancement. The existence of open seismic data processing packages more recent than SU has shifted the interest of much of the geophysical community toward newer packages such as Madagascar.

1.2 Madagascar

Madagascar is a more recent open SDP package with active community support. The package was released in 2006 under the General Public License (GPL) open source license, which makes it freely distributable and imposes no restriction on the usage and alteration of its source code [11]. Computer experts, as well as geophysicists, contribute actively to the development and enhancement of this package. Madagascar comprises a collection of computational functions, which represent important reflection seismological models, and uses its own .rsf format. These functions are standalone programs for geophysical data processing and imaging. Many functionalities and concepts in Madagascar are apparently reimplementations from previous seismic data processing packages such as SU and SEPlib [12].

As a modern seismic data processing package, Madagascar has been written with flexibility for enhancement. Although its computational functions were initially designed for sequential execution, recent work has enabled the package to run on high-performance computing clusters using SCONS [11]. SCONS is a software construction tool that uses the Python programming language and scripts; it is a high-level wrapper behaving similarly to a makefile in UNIX systems. Technically, there are two ways to execute Madagascar functions: the user can either use the classic command line via a common shell prompt in LINUX, or use a software construction tool such as SCONS. Executing Madagascar commands via SCONS is recommended, as it gives the flexibility to design a sequence of seismic operations on a dataset in a single program construct.
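A Madagascar SConstruct script of the kind described above might look like the following sketch. The file names, parameter values and exact command strings are illustrative assumptions, not taken from this report; only the Flow/Result/End structure follows Madagascar's documented convention:

```python
# Sketch of a Madagascar SConstruct build script (a SCONS Python script).
# Each Flow() declares one processing stage; SCONS resolves the dependency
# chain, like a makefile, and re-runs only stages whose inputs changed.
from rsf.proj import *

# Convert an assumed SEGY input file into Madagascar's .rsf format
Flow('shots', 'survey.segy', 'segyread tape=$SOURCE', stdin=0)

# Bandpass-filter the converted data (assumed corner frequencies)
Flow('filtered', 'shots', 'bandpass flo=10 fhi=60')

# Declare a displayable result image
Result('filtered', 'grey title="Filtered shots"')

End()
```

The single-program-construct benefit the text mentions is visible here: the whole sequence of operations on the dataset is declared in one script rather than typed as separate shell commands.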

1.3 OpenDtect

OpenDtect was not initially designed as a seismic data processing package; it is rather a seismic visualization and interpretation package. To date, four versions of OpenDtect have been released. OpenDtect is supported by open and proprietary modules, which provide seismic visualization and interpretation functions and are contributed by geophysics computing communities and commercial seismic data processing companies. OpenDtect version four is released under three licensing schemes: a GPL license, a Commercial license and an Academic license.

The GPL license comes with source code written in the C++ programming language. The source code is not protected by any license manager and can be used by anyone, even for the development of commercial products. However, the GPL licensing scheme does not provide access to the proprietary visualization and interpretation modules. The Commercial license includes both open and proprietary seismic interpretation and visualization modules; under this scheme, the user can either lease or purchase the proprietary modules. The Academic license is intended for research and teaching institutions. Under this licensing scheme, researchers and academics have access to both types of seismic visualization and interpretation modules, but usage is restricted to research and teaching purposes only, not commercial development. OpenDtect has managed to run on multiprocessor systems through multiple-machine batch processing [14]. Thus far, OpenDtect has been tested on a high-performance cluster processing system using SLURM (the Simple Linux Utility for Resource Management), an open package designed for LINUX clusters that handles job prioritization and manages resource limits.

2 Taxonomy of Seismic Computational Functions

Seismic data processing functions and methods often overlap in usage and purpose. According to [15-17], various taxonomies exist to classify seismic data processes. The existing taxonomies, emphasized by geophysical experts, often refer to the core physics aspects of geology. The most common take the form of linear listings of the essential reflection seismological models for processing seismic datasets; other taxonomies have a broader scope that includes interpretation and prediction functions. The taxonomy of seismic data processing functions that we present, in contrast, is constructed from the perspective of open software packages. Common seismic functions exist across seismic data processing packages, yet none of the existing taxonomies classify seismic computational functions from the viewpoint of software packages. New seismic data processing functions are actively being developed by geophysics and computer experts. The significance of a taxonomy of seismic data processing functions is that it allows users to identify and recognize the seismic function groups available in the seismic data processing packages.

In this section, we categorize the common functions that exist in SU, Madagascar and OpenDtect. All three packages consist of collections of computational functions, which we categorize into three distinct function groups: Data Manipulation, Reflection Seismology and Visualization. The Data Manipulation function group deals with editing datasets and performing mathematical operations. The Reflection Seismology function group conducts core geophysical seismic processing and analysis. The Visualization function group is a collection of programs for constructing seismic images.

[Figure 1: taxonomy of seismic data processes; legible fragments of the diagram include the nodes "Conversion", "Gain Control" and "Wiggle".]

In figure 1, the Data Manipulation function group consists of functions to perform Data Editing, Data Format Conversion between Madagascar, SU and OpenDtect, Noise Filtering and Transformation. The Reflection Seismology function group incorporates all of the important seismic data processing functions; those categorized under this group involve complex mathematical and geometrical calculations on seismic traces, amplitude, phase and energy. The Visualization function group exists foremost to display the results of processed seismic data. Displaying seismic data is crucial for expert users in order to learn the geological structure of an area, and to interpret and predict the existence of hydrocarbon traps in a geologically mapped area.

In sections 3, 4 and 5, we present the similarities and differences of the functions across all three packages. To ease readers' understanding, and to remove bias and sensitivities when comparing the packages' functionality, we have assigned each package an alphanumeric label: SU as P1, Madagascar as P2 and OpenDtect as P3.

3 Data Manipulation Functions

In this section, we list and discuss the groups of functions available in the Data Manipulation function group with respect to SU, Madagascar and OpenDtect. In subsection 3.1 we discuss the Data Editing functions. Subsection 3.2 explains the data format conversions between packages P1, P2 and P3. In subsection 3.3, we discuss the wavelet transform functions applied prior to performing noise filtering on seismic datasets. Subsections 3.4 and 3.5 discuss the Data Smoothing and Gain Control functions for all three packages, respectively.

3.1 Data Editing

According to [9], seismic datasets are often manipulated to comply with individual package formats, such as the Seismic UNIX format (.su) and the Madagascar format (.rsf). Table 1 shows the Data Editing functions and descriptions available for packages P1, P2 and P3.

P1 (SU):
  suabshw      - Replace a header keyword by its absolute value
  suchw        - Change a header value using one or two header fields
  susort       - Sort SEGY header keywords
  susorty      - Show geometrical values to visualize data
  suedit       - Examine SEGY disk files and edit headers
  sushw        - Input header word values from a file
  suhtmath     - Unary arithmetic operation on SEGY traces with header values
  suvcat       - Append one data set to another, with or without an overlapping region
  subset       - Select a subset of the samples from a 3D file
  sukill       - Zero out traces
  suzero       - Zero out data within a time window
  sunull       - Create null (all-zero) traces
  segyclean    - Zero out unassigned portions of the header
  supickamp    - Pick amplitudes within a user-defined sampled window for non-zero header entries
  sustrip      - Remove the SEGY headers from the traces
  suflip       - Flip a data set in various ways
  suhrot       - Horizontal rotation of data
  suwind       - Window traces by keyword
  suspike      - Make small spike data
  suplane      - Create simple data
  suimp2d      - Generate shot records for a line scatterer embedded in 3D using the Born integral equation
  susynlv      - Synthetic seismograms for a linear velocity function
  susynvxz     - Synthetic seismograms of common offset V(X,Z) via the Kirchhoff model
  susynlvcw    - Synthetic seismograms for a linear velocity function for converted waves
  supack1      - Pack SEGY trace data into chars
  supack2      - Pack SEGY trace data into 2-byte shorts

P2 (Madagascar):
  sfsegyheader - Create a trace header
  sfheadersort - Sort the data header
  sfget        - Output parameters from the header
  sfin         - Display basic information
  sfput        - Input a parameter into the header file
  sfadd        - Add, multiply or divide datasets
  sfcat        - Concatenate datasets
  sfimag       - Extract parts of a dataset
  sfcut        - Zero a portion of data
  sfmax        - Find the maximum value in a stack
  sfmin        - Find the minimum value in a stack
  sfrm         - Delete data
  sfrotate     - Rotate a portion of data
  sfwindow     - Window a portion of a dataset
  spike        - Create simple data
  sfcmplx      - Create complex data

P3 (OpenDtect):
  Load-Seismic             - Edit bytes in the header
  Dump-2D-Geometry         - Create an ASCII file with the geometry of one or all 2D lines
  Load-Well                - Define a seismic header value
  SEGY-examiner            - Examine the textual, line and first trace headers
  Mathematic-Attribute     - User-defined expression
  Merge-File-Window        - Merge seismic datasets
  Put-Sample-In-File       - Display 2D/3D sampling information
  Statistical-Wavelet-Extraction - Mute or zero out trace and frequency
  Stratal-amplitude        - Show maximum, minimum or average
  Edit-Cube-Flip           - Flip inline and crossline in various ways
  Prestack-seismic-manager - Create pre-stack data
  Velocity-Conversion      - Create velocity model data
  Create-Wavelet           - Create a synthetic wavelet

Tab 1. Groups of functions to perform data editing for packages P1, P2 and P3

Seismic datasets consist of two major segments: the header and the binary data stream. The header contains information about the seismic data itself, such as data size, data types, data formats and dimensions. The binary data stream segment is the actual seismic traces. Seismic traces are recordings of acoustic signal travel times [3] captured in a geological mapping operation, as discussed in section 1. The main operations in the Data Manipulation function group are seismic data header manipulation, data sorting, and removing or appending the binary data stream. Operations performed on the binary data stream segment include the creation of synthetic simple and complex seismic traces, rotation of data dimensions and rescaling of data along the time or depth axis. All three packages P1, P2 and P3 provide similar functions to perform data editing operations.
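The header manipulation performed by functions such as sushw or suchw amounts to overwriting fixed byte ranges in each 240-byte trace header. A minimal sketch, using the standard SEGY trace-header layout (the cdp keyword occupies bytes 21-24 as a 4-byte integer); the helper name, byte order and sample values are illustrative assumptions:

```python
# Hedged sketch of header-keyword editing on a 240-byte trace header.
# CDP_OFFSET follows the SEGY trace header layout; big-endian packing is
# assumed here for illustration (SU itself uses native byte order).
import struct

CDP_OFFSET = 20  # zero-based byte offset of the 'cdp' keyword (bytes 21-24)

def set_cdp(trace_header: bytes, cdp: int) -> bytes:
    """Return a copy of a 240-byte trace header with the cdp keyword set."""
    return (trace_header[:CDP_OFFSET]
            + struct.pack('>i', cdp)
            + trace_header[CDP_OFFSET + 4:])

header = bytes(240)            # blank trace header
edited = set_cdp(header, 1201) # assign an assumed CDP number
```

The real editing functions do this for every trace in the dataset, driven by keyword names rather than raw byte offsets.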

3.2 Data Format Conversion

In [9], we explained that there is no standard convention for seismic data formats: each seismic data processing package uses its own specific format. Exchanging data between different packages is therefore made possible by the data conversion functions shown in table 2.

P1 (SU):
  dt1tosu     - Convert ground-penetrating radar (GPR) data to SU format
  segyread    - Convert SEGY to SU format
  segywrite   - Convert SU to SEGY format
  segyhdrs    - Make SEG-Y ASCII and binary headers for segywrite

P2 (Madagascar):
  sfsegyread  - Convert SEGY or SU to RSF format
  sfsuread    - Convert SU to RSF format
  sfsu2rsf    - Convert SU to RSF format
  sfsegy2rsf  - Convert SEGY to RSF format
  sfsegywrite - Convert RSF to SU or SEGY format
  sfsuwrite   - Convert RSF to SU format
  sfdd        - ASCII and RSF format conversion

P3 (OpenDtect):
  Import-export-GPR           - Import GPR data in the 'DZT' format
  Import-export-Seismic-SEGY  - Convert SEG-Y format to an OpenDtect format and vice versa
  Import-Seismic-ASCII-Binary - Import an ASCII or binary file, with or without a header
  Import-export-3D-format     - Import or export 3D ASCII and binary data
  Import-export-Faults        - Import or export 2D, 3D and ASCII faults
  Madagascar-Processing       - Convert from RSF to OpenDtect format
  Import-export-well-Data     - Import or export well depths in OpenDtect

Tab 2. Groups of functions to perform format conversion for packages P1, P2 and P3

Table 2 shows that the .su format of package P1 can be converted into the .rsf format of package P2, allowing seismic data to be exchanged between the two packages. The package P2 functions sfsegyread, sfsuread and sfsu2rsf allow data format conversion from P1 to P2; however, package P1 does not provide any function to convert from P2 back to P1. Package P3 supports the P2 format through its Madagascar-Processing module, but does not provide any data format conversion function for package P1. It is nevertheless possible to move data from P1 to P3 by converting the P1 format into P2 and processing it in P3.
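The size reduction reported in the abstract for the SEGY-to-SU conversion follows from the formats' layouts: SU keeps each trace header and its samples but drops the 3600-byte SEGY file header (3200-byte textual header plus 400-byte binary header). A minimal sketch of that stripping; the toy trace dimensions are assumptions:

```python
# Hedged sketch of why SEGY -> SU conversion shrinks a file: the SU format
# omits the 3600-byte SEGY file header, keeping only trace headers + samples.
import struct

SEGY_FILE_HEADER = 3600   # 3200-byte textual + 400-byte binary header

def segy_to_su(segy_bytes: bytes) -> bytes:
    """Strip the SEGY file header, keeping trace headers and samples."""
    return segy_bytes[SEGY_FILE_HEADER:]

# Tiny fake SEGY file: file header + one trace with a 240-byte trace header
# and 100 assumed 4-byte float samples.
trace = bytes(240) + struct.pack('>100f', *([0.0] * 100))
fake_segy = bytes(SEGY_FILE_HEADER) + trace

su = segy_to_su(fake_segy)
```

On a real multi-gigabyte survey the 3600 bytes saved per file are negligible; real converters also rewrite sample encodings, which is where larger savings such as Madagascar's can come from.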

3.3 Noise Filtering and Wavelet Transformation

Signal reflections are used in geophysics to approximate the properties of the Earth's subsurface [3]. Artificial acoustic signals are propagated downwards to reflect off the layers of media beneath the Earth's surface. The signal reflections gathered by receivers at the surface yield valuable information, such as signal reflection points, varying signal velocity values and reflection times, which is used as input to the seismic data processing package. However, during this process, true signal reflections are often distorted by unwanted noise or interference. The two types of noise that manifest in seismic signals are coherent noise and random noise. Coherent noise in reflection seismology originates from signal reflection reverberations [18], whereas random noise is produced by the scattering and diffraction of signals due to near-surface irregularities [19].

Noise overlaps with the true signal reflection values when represented in the time and space domain. A common technique for obtaining clean signal reflection values is the frequency domain filter. With this technique, the signal must be transformed into the frequency domain so that noise can be clearly distinguished from the true signal reflection values. It is therefore imperative for seismic data processing packages to be equipped with a signal transformation function, applied prior to noise reduction or removal. Table 3 lists and describes the available noise filtering and wave transformation functions for packages P1, P2 and P3.

P1 (SU):
  subfilt      - Butterworth bandpass filter
  sutvband     - Time-variant bandpass filter
  suband       - Trapezoid-like sine-squared tapered bandpass filter
  sudipfilter  - 2D dip or slope filter
  suxcor       - Correlation with a user-supplied filter
  suconv       - Convolution with a user-supplied filter
  sufilter     - Zero-phase, sine-squared tapered filter
  supef        - Wiener predictive error filtering
  supofilt     - Polarization filter for three-component data
  sukfilter    - Radially symmetric K-domain, sine^2-tapered, polygonal filter
  sugabor      - Time-frequency filtering via the Gabor transform
  sufft        - FFT real time traces to complex frequency traces
  suifft       - FFT complex frequency traces to real time traces
  suspecfx     - Fourier spectrum (T to F) of traces
  suspecfk     - F-K Fourier spectrum
  suspeck1k2   - 2D (K1,K2) Fourier spectrum of (x1,x2)
  suradon      - Compute the forward or reverse Radon transform, or remove multiples by using the parabolic Radon transform to estimate and subtract them
  sutaup       - Forward and inverse T-X and F-K global slant stacks
  suharlan     - Signal-noise separation by invertible linear transformation using the Harlan method
  suhilb       - Hilbert transform
  sushape      - Wiener shaping filter
  suinterp     - Interpolate traces using automatic event picking
  intsinc8     - Interpolate uniformly sampled data
  intcub       - Piecewise cubic interpolation
  intlin       - Evaluate y(x) via linear interpolation
  inttable8    - Interpolation of a uniformly sampled complex function y(x)
  mksinc       - Least-squares optimal sinc interpolation
  intl2b       - Bilinear interpolation of a 2D array of bytes
  suresamp     - Resample data via interpolation
  mrafxzwt     - Multi-resolution analysis of a function F(X,Z) by wavelet transform
  suamp        - Output amplitude, phase, real or imaginary traces from domain data
  suattributes - Trace attributes: instantaneous amplitude, phase or frequency
  sureduce     - Convert traces to display in reduced time
  entropy      - Compute the entropy of a signal
  wpccompress  - Compress a 2D section using wavelet packets
  wpc1comp1    - Compress a 2D seismic section trace-by-trace using wavelet packets
  dctcomp      - Compression by Discrete Cosine Transform (DCT)
  dctuncomp    - DCT uncompress

P2 (Madagascar):
  Bandpass     - Filter frequency within a range
  sfdipfilter  - 2D and 3D dip or slope filter
  sftrapez     - Trapezoidal filter
  sfintshow    - Interpolation filter
  sfdwt        - 1D wavelet transform
  sffft1       - Fast Fourier Transform
  sffft3       - FFT transform on an extra axis
  sfveltran    - Hyperbolic Radon transform
  sftaupfit    - Fitting tau-p approximations
  sftri2reg    - Interpolate triangulated shot record triplets into a grid
  sflapfill    - Missing data interpolation
  sfshapebin1  - 1D inverse interpolation
  sfspline     - 1D cubic interpolation
  sfinttest1   - 1D interpolation
  sfremap1     - 1D essentially non-oscillatory interpolation
  sfenoint2    - 2D essentially non-oscillatory interpolation
  sfextract    - 2D forward interpolation
  sfshapebin   - 2D inverse interpolation
  sfinttest2   - 2D interpolation
  sfcostft     - Cosine transform

P3 (OpenDtect):
  Frequency-Filter        - Bandpass filter using FFT or Butterworth filter
  Velocity-Fan-Filter     - Filter energy in velocity dip within a specified frequency range
  GapDecon                - Attenuate repetitions or multiples
  Convolve                - 3D user-specified filter using Laplacian and Prewitt techniques
  Azimuth-Filter          - Allow slope energy to pass in an azimuth direction
  Spectral-Decomposition  - FFT frequency resolution and Continuous Wavelet Transform
  Hilbert-Quad-Amp        - Hilbert transform
  Between-Horizons        - Surface interpolation
  Derive-2D-or-3D-Horizon - Inverse distance interpolation and triangulation
  Gridding                - Interpolate sparse and coarse datasets
  Instantaneous           - Instantaneous amplitude, phase and frequency determined from the complex trace
  Cosine                  - Normalize amplitude
  Cosine-Phase-Spectral-Decomposition - Unravel a seismic signal into its constituent frequencies

Tab 3. Groups of functions to perform noise filtering and wavelet transform techniques for packages P1, P2 and P3

Package P1 provides several signal transformation functions: Fourier transformation can be achieved through sufft, suifft, suspecfx and suspecfk. Package P2 also provides wavelet and Fourier transforms via the functions sffft1 and sffft3. Package P3 uses its Spectral-Decomposition function to perform a signal transformation method similar to that of packages P1 and P2. All three packages are equipped with a bandpass filter to shape seismic signal frequency within a desired user-defined range. Filtering noise in a dipping¹ Earth subsurface is also provided by packages P1, P2 and P3 through the functions sudipfilter, sfdipfilter and Velocity-Fan-Filter, respectively.
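The frequency-domain filtering described above can be sketched end to end in pure Python: transform the trace, zero the coefficients outside the pass band, and transform back. Real packages use optimized FFTs and tapered (rather than hard) cutoffs; the naive DFT, bin ranges and test signal here are assumptions for illustration:

```python
# Conceptual sketch of frequency-domain bandpass filtering (naive DFT).
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def bandpass(trace, lo_bin, hi_bin):
    """Zero DFT bins outside [lo_bin, hi_bin] (and their mirror images)."""
    X = dft(trace)
    n = len(X)
    for k in range(n):
        kk = min(k, n - k)           # fold negative frequencies
        if not (lo_bin <= kk <= hi_bin):
            X[k] = 0
    return idft(X)

# A signal with a slow component (bin 2, "reflection") plus a fast one
# (bin 20, "noise"); keeping bins 1-5 removes the fast oscillation.
n = 64
sig = [math.sin(2 * math.pi * 2 * t / n) + math.sin(2 * math.pi * 20 * t / n)
       for t in range(n)]
clean = bandpass(sig, 1, 5)
```

After filtering, only the slow component survives, which is exactly the separation of noise from true reflections that the text describes.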

3.4 Data Smoothing

Data smoothing is a form of low-frequency pass filtering: it suppresses high-frequency signals in order to accentuate low-frequency signals [20]. Low-frequency signals travel longer distances and propagate deeper into the Earth's subsurface. In seismic data processing packages, high-frequency signals are often represented by short wiggles and can be reduced through the Data Smoothing functions shown in table 4.

¹ Dipping in Exploration Geophysics refers to slope-based formations within the Earth subsurface.

P1 (SU):
  unisam2    - Uniformly sample a 2D function f(x1,x2)
  smooth2    - Smooth a uniformly sampled 2D function with a user-defined window, using the least squares technique
  smooth3d   - 3D velocity smoothing by the least squares technique
  smoothint2 - Smooth non-uniformly sampled interfaces using the least squares technique
  unisam     - Uniformly sample a function y(x) specified as x and y pairs

P2 (Madagascar):
  sfgrad3     - 3D smooth gradient
  sfsmoothder - Smooth the first derivative of data along the first axis
  sfboxsmooth - Multidimensional smoothing
  sfsmooth    - Multidimensional triangle smoothing

P3 (OpenDtect):
  smoother         - 3D smoothing method
  Lateral-smoother - 2D smoothing method

Tab 4. Data Smoothing functions for packages P1, P2 and P3

All three packages provide smoothing functions for 2- and 3-dimensional data. The smooth3d function in package P1 performs velocity data smoothing using the least squares² technique. Similar smoothing functions are provided by packages P2 and P3 through sfgrad3 and smoother, respectively. Package P2 is also equipped with smoothing functionality for 3D and higher-dimensional data via sfboxsmooth; the Lateral-smoother function in package P3 performs a similar task.
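The triangle smoothing idea behind functions such as sfsmooth can be sketched compactly: each output sample is a weighted average of its neighbours with linearly decaying (triangular) weights. The window size and sample data below are assumptions:

```python
# Hedged sketch of 1D triangle smoothing (real functions work on
# multidimensional grids and handle edges in package-specific ways).

def triangle_smooth(data, half_width):
    """Smooth a 1D series with a triangle window of the given half width."""
    n = len(data)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(-half_width, half_width + 1):
            if 0 <= i + j < n:
                w = half_width + 1 - abs(j)   # triangular weight, peak at j=0
                num += w * data[i + j]
                den += w
        out.append(num / den)
    return out

# A spike (a "short wiggle") is spread out and attenuated, while a
# constant series passes through unchanged.
smoothed = triangle_smooth([0, 0, 10, 0, 0], 1)
flat = triangle_smooth([3.0] * 5, 2)
```

This shows the low-pass behaviour the text describes: the high-frequency spike is suppressed, the zero-frequency constant is preserved.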

3.5 Gain Control

Signal frequencies often suffer attenuation when propagating through the various media of the Earth's subsurface. Frequency attenuation happens due to the effect of absorption, especially when the signal is obstructed by material containing a large volume of water [22]. Gain control (GC) is a method of amplifying signal amplitude and increasing signal reflection energy.

P1 (SU):
  suagc - Perform AGC on an SU dataset

P2 (Madagascar):
  sfagc - Perform AGC on an RSF dataset

P3 (OpenDtect):
  Automatic-gain-control - Adjust signal power and amplitude level using a user-defined window size

Tab 5. Gain Control functions for packages P1, P2 and P3

Table 5 shows the gain control function for all three packages; each provides a similar method of managing signal amplitude. GC is commonly used in seismic processing to improve the visibility of late-arriving signal events, which suffer the most amplitude decay. GC is applied in the early steps of processing to prepare seismic data prior to performing core reflection seismology methods.
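A common form of automatic gain control, and a plausible reading of what suagc and sfagc do, rescales each sample by the RMS amplitude of a sliding window around it, which boosts the weak late arrivals mentioned above. The window length and trace values below are assumptions:

```python
# Hedged sketch of sliding-window AGC: divide each sample by the local RMS.
import math

def agc(trace, half_window):
    """Balance a trace by normalizing each sample to its local RMS level."""
    n = len(trace)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = trace[lo:hi]
        rms = math.sqrt(sum(s * s for s in window) / len(window))
        out.append(trace[i] / rms if rms > 0 else 0.0)
    return out

# A trace whose amplitude decays by half each sample comes out with
# roughly uniform amplitude after AGC.
balanced = agc([8.0, -4.0, 2.0, -1.0, 0.5], 1)
```

The original amplitudes span a 16:1 range; after AGC they sit within roughly a 2:1 range, which is the "improved visibility of late arrivals" effect.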

² Least squares is a statistical method used to find the solution that most closely approximates a set of data. It is based on minimizing the difference between two or more signal travel time readings by adjusting depth and velocity values [21].

4 Reflection Seismology Functions

In this section we identify and group the Reflection Seismology functions commonly made available by seismic data processing packages P1, P2 and P3. In subsection 4.1 we describe the Velocity Analysis functions. Subsection 4.2 discusses the moveout methods, which include normal moveout and dip moveout. In subsection 4.3, we explain the seismic trace stacking methods used to improve the signal-to-noise ratio, and the available functions. Subsection 4.4 explains the time and depth stretch conversions available across all three packages. Subsequently, in subsection 4.5, we discuss the migration methods used to geometrically correct signal reflection points.

4.1 Velocity Analysis

According to [9], velocity in geophysics is defined as the rate at which a wave or signal travels through a medium; it is commonly symbolized by 'v'. In seismic data processing, the velocity value obtained when analyzing signal reflection points, or shot records, is called the stacking velocity. During a geological mapping operation, each seismic trace is recorded in the form of signal travel time. The constant movement of the signal transmission source and its receivers often results in seismic traces being stacked on top of each other when the signal reflection points are recorded [9].

Performing velocity analysis via seismic data computing functions yields valuable

information to understand the material and the composition of the Earth subsurface. In

physics, acoustic signal velocity varies when travelling through media with contrasting

impedance level. Each medium that lies underneath the Earth surface is associated with a

unique velocity reading [23], which assists experts in predicting the types of media and gives

rough estimates on the geological formations underneath the Earth surface.

P1 P2 P3

Function Description Function Description Function Description

sffourvc Prestack velocity

continuation

sffourvc0 Velocity continuation

after NMO

suvelan Compute stacking velocity sffourvc2 Velocity continuation Velocity- Normal Moveout

semblance for CDP with semblance Correction Correction based on

gathers computation velocity volume

sfdsr Prestack 2-D v(z)

modelling and

migration by DSR

sfdsr2 2-D prestack modelling

and migration with split-

step DSR

sfpveltran Slope-based velocity

transforms

sfpveltran3 Slope-based tau-p 3D Loading- Gridding 3D scenes

velocity transform for anisotropy

elliptical anisotropy

sfvelmod Velocity transforms Velocity- Convert input velocity

Conversion RMS

sunmo NMO for an arbitrary sfvoft Analysis of V(t) function

velocity function of time for a linear

and CDP stack V(z) profile

Generate 2D sample sfvofz Analytical travelling

unif2 velocity profile from layered time in a linear V(z)

model model.

makevel Make velocity function sfvscan Velocity analysis

v(x,y,z) inverse of transform

Tab 6. Velocity Analysis functions for packages P1, P2 and P3


In table 6, packages P1, P2 and P3 are equipped with velocity stacking and semblance3

functions; suvelan, sffourvc2 and Velocity-Correction, respectively. Velocity

semblance acts as a guide to velocity picking4. In reality, seismic traces do not correspond

exactly to the energy level of the seismic traces, therefore velocity picking is necessary. The

seismic traces, which are signal reflection travelling time recordings, and signal velocity

energy value needs to be matched. In order to match the seismic traces with its corresponding

signal velocity energy value, the velocity with high energy value will need to be firstly

identified. The semblance of the signal velocity energy value will act as a guide to pick the

matching seismic trace. Seismic traces are matched with the signal velocity energy according

to heuristics. There is however, no exact seismological method to accurately pick the velocity

energy value to match the seismic traces thus far [2][24]. Package P1 performs velocity

picking through functions, unif2 and makevel. Package P2 and P3 are capable of

performing velocity picking and semblance for 3D seismic datasets through functions;

sfpveltran3 and Loading-anisotropy, respectively.
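To make the semblance measure concrete, the following minimal Python sketch (our own illustration; the function name and data layout are ours, not taken from any of the packages) computes the ratio of stacked energy to total energy across a set of traces:

```python
# Semblance: ratio of coherent (stacked) energy to total energy across
# N traces. Values near 1 indicate that the traces line up, i.e. the
# trial stacking velocity is a good pick.
def semblance(traces):
    """traces: list of equal-length lists of amplitudes (one per trace)."""
    n = len(traces)
    nt = len(traces[0])
    stacked_energy = sum(sum(tr[t] for tr in traces) ** 2 for t in range(nt))
    total_energy = sum(tr[t] ** 2 for tr in traces for t in range(nt))
    if total_energy == 0.0:
        return 0.0
    return stacked_energy / (n * total_energy)

# Perfectly aligned traces give semblance 1.0; opposing traces give 0.0.
aligned = [[0.0, 1.0, 0.5, 0.0]] * 3
opposed = [[1.0, -1.0], [-1.0, 1.0]]
print(semblance(aligned))  # -> 1.0
print(semblance(opposed))  # -> 0.0
```

A velocity scan tries many trial velocities and keeps the one with the highest semblance, which is exactly the guidance role described above.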

4.2 Moveout

In [9], we have explained in detail how Moveout occurs. Moveout is defined as the effect of

separation between the source of the signal transmission and the receiver [26]. There are two

types of Moveout; Normal Moveout, abbreviated as NMO and Dip Moveout or DMO. NMO

deals with signal reflection on flat geologically horizontal surfaces, while DMO refers to

signal reflection on dip or slope based geological formations.
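The textbook travel-time relation that underlies NMO correction can be sketched as follows (a constant-velocity illustration of the standard hyperbola, not code taken from P1, P2 or P3):

```python
import math

# Standard NMO relation: a reflection recorded at source-receiver offset x
# arrives at t(x) = sqrt(t0^2 + x^2 / v^2), where t0 is the zero-offset
# two-way time and v the stacking velocity. The NMO correction removes
# the difference t(x) - t0 so that traces can be stacked.
def nmo_time(t0, offset, velocity):
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)

t0, v = 1.0, 2000.0  # 1 s zero-offset time, 2000 m/s stacking velocity
for x in (0.0, 500.0, 1000.0):
    print(x, round(nmo_time(t0, x, v), 4))
```

At zero offset the arrival time equals t0; as offset grows, the arrival time traces out the hyperbola that the moveout functions in the tables flatten.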

The signal reflection times recorded during a seismic mapping operation are influenced by

the movement of the signal transmitter and receivers. Exploration vessels are used for marine-based surveys and hammering trucks for land-based surveys. Both marine and land based surveys carry signal-generating source devices known as vibroseis [3] and at the same time tow a stream of receivers to gather the reflected signals. NMO or DMO is

caused by the constant horizontal movement of these vehicles during the geological mapping

activity. The movement of the transmitter source and receivers produces a significant

displacement for each common midpoint, or CMP. A series of signal reflections are obtained and stacked for a common midpoint, known as the CMP stack. Table 7,

shows the NMO, DMO and CMP stack functions and their corresponding descriptions in the

Reflection Seismology Process class for packages P1, P2 and P3.

P1 P2 P3

Function Description Function Description Function Description

sunmo Moveout for signal velocity sfimospray Inversion of constant

and time. velocity nearest

neighbour inverse

NMO.

sfinmo Inverse Normal

Moveout.

sfinmo3 3-D Inverse normal

Moveout.

sfitaupmo Inverse normal

Moveout in tau-p

domain.

sfitaupmo2 Inverse normal

Moveout in tau-p-x

domain.

sfitaupmo3 3-D Inverse Tau-p

normal Moveout.

3 Semblance is often referred to as signal velocity energy level.
4 Velocity picking is defined as the picking of velocity and time pairs based on the coherency between multiple seismic signals [25].


sudmovz DMO for V(Z) media sfpnmo Slope-based normal

Moveout

sfpnmo3d Slope-based normal

Moveout for 3-D CMP

geometry

sfnmo Normal Moveout

sfnmostretch Stretch of the time axis

suazimuth Compute trace Azimuth sffkamo Azimuth Moveout by Steered- Pre-Calculate slope/dip

log-stretch F-K Attribute azimuth

operator

sfptaupmo Slope-based tau-p

Moveout

sfptaupmo3 Slope-based tau-p 3D

Moveout

sudmofk DMO via FK log stretch sffkdmo Offset continuation by

log-stretch F-K

operator

sftaupmo Normal Moveout in tau-

p domain

sudmotx DMO via T-X domain sfdmo Kirchhoff DMO anti-

using Kirchhoff method for aliasing by re-

common offset gathers parameterization.

sfcmp2shot Convert CMP to shots

for regular 2D

geometry

sfpp2psang Transform PP angle Prestack Extract statistic on angle

gathers to PS angle gathers amplitude and

gathers AVO

sfpp2psang2 Transform PP angle

gathers to PS angle

gathers

sfpp2pstsic Compute angle gathers

for time-shift imaging

condition

sfshot2cmp Convert shot to CMP

for regular 2D

geometry

sftshift Compute angle gathers

for time-shift imaging

condition

sfshotholes Remove random shot

gathers from a 2-D

dataset

sfaastack Stack with anti-aliasing

sffinstack DMO and stack by

finite difference offset

continuation

Tab 7. Moveout and CMP stack functions for packages P1, P2 and P3

Table 7 shows that both packages P1 and P2 provide the ability to perform NMO and DMO

on seismic traces. Notably, package P2 is equipped with various modifications of

Moveout functions which include antialiasing5 capability through functions sfaastack

and sfdmo. In Reflection Seismology, antialiasing is used to remove signal components

with higher frequency. This removal is conducted by sampling at a lower resolution to give a

clearer CMP stack image. All three packages, P1, P2 and P3, have the ability to calculate the

seismic stretch azimuth6 through functions; suazimuth, sffkamo and Steered-

Attribute, respectively.

4.3 Seismic Trace Stacking

In subsection 4.2, we explained that the signal reflection point is influenced by the constant

displacement of the signal transmission source and receivers. The movement of the signal

5 Antialiasing is a method of blurring the edges of a jagged image to give a smooth appearance.
6 Azimuth in seismology refers to the best fit plane (3D) between immediate neighbouring seismic traces on a horizon, and outputs the direction of maximum slope (dip direction) measured in degrees, clockwise from north [27].


source naturally induces latency in the signal reflection time to the receivers. The delay of

signal reflection time at a vertical depth, combined with the delay caused by the movement of the receivers along the horizontal sea surface, will eventually result in a stacking of a signal

reflection point recording. Seismic stack is an important process to remove the offset7

dependence for each signal travelling time record. The result of removing the offset

dependence for each signal reflection point in a stack produces seismic traces with zero offset

dependence or value. Seismic stack with zero offset value will, in turn, produce a credible

construction of a seismic time image. Table 8 summarizes the available seismic trace

stacking computational functions for packages P1, P2 and P3.

P1 P2 P3

Function Description Function Description Function Description

sustack Stack adjacent traces sftristack Re-sampling with

triangle weights

sfsmstack Stack a dataset over

the second dimensions

by smart stacking

sfsnrstack Stack a dataset over Vertical- Stack trace to increase

the second dimensions Stack SNR

by SNR weighted

method

sfbilstack Bilateral stacking

sfstack Stack a dataset over

one of the dimensions

surecip Sum opposing offsets

sudivstack Diversity Stacking using

either average power or

peak power

Tab 8. Seismic Stack functions for packages P1, P2 and P3

Package P3 provides one function to stack traces to increase the signal to noise (SNR) ratio, which is the Vertical-Stack. The stacking of all signal travel time records

in a spatially coherent line increases the energy of the reflected signals because the reflected

waves are spatially consistent between each of the signal reflection point. Package P2 is

equipped with a function sfstack to perform stacking of traces for 3D and higher

dimension seismic datasets.
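The SNR improvement that motivates stacking can be illustrated with a small sketch (our own Python illustration; the signal, noise level and trace count are made-up values). Stacking N traces leaves the coherent reflection intact while suppressing incoherent noise by roughly the square root of N:

```python
import math
import random

# Stack N noisy recordings of the same reflection: the coherent spike adds
# linearly while random noise adds as sqrt(N), so the stack's SNR improves
# by roughly sqrt(N).
random.seed(42)
signal = [0.0] * 50
signal[25] = 1.0  # one coherent reflection

def noisy_copy(sig, sigma=0.5):
    return [s + random.gauss(0.0, sigma) for s in sig]

traces = [noisy_copy(signal) for _ in range(64)]
stack = [sum(tr[t] for tr in traces) / len(traces) for t in range(len(signal))]

def noise_rms(tr):
    # RMS over samples away from the reflection at index 25
    vals = [a for i, a in enumerate(tr) if i != 25]
    return math.sqrt(sum(a * a for a in vals) / len(vals))

print(round(noise_rms(traces[0]), 3))  # noise level of a single trace
print(round(noise_rms(stack), 3))      # roughly sqrt(64) = 8x smaller
```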

4.4 Time and Depth Stretch Conversion

Temporal depth and signal velocity are important elements in seismic data processing. They give geologists the estimated depth of a medium or earth layer based on a signal’s velocity

and travelling time. Representing depth in time domain however, has been a classical

problem among geophysicists and drilling engineers [28]. For instance, targeting how deep

to drill a hydrocarbon trap by scaling the depth in time domain reduces the accuracy of a

drilling operation. Therefore depth conversion, or, as some call it, depth migration, is vital

to give an accurate prediction of a hydrocarbon trap.

P1 P2 P3

Function Description Function Description Function Description

sutsq Time axis stretch of sfdatstretch Stretch of the time Edit-Well- Stretching depth in Z-

seismic traces axis. Track Scale

sulog Log-stretch of seismic sflmostretch

traces

suilog Inverse log-stretch of sflogstretch

seismic traces

sft2chebstretch

sfstretch

7 Offset in seismology refers to the displacement of the signal source and receivers. The constant displacement of both source and receiver causes a signal reflection point reading to overlap with its previous recording [3].


sft2stretch

sfdepth2time Conversion from Well-Track Conversion from depth

depth to time in a to time

velocity V(z) medium

suttoz Re-sample seismic trace sftime2depth Time to depth Well-Track Conversion from time to

from time to depth conversion in velocity depth

Re-sample variable V(z).

suvlength length traces to common

length

Tab 9. Time and Depth Conversion functions for packages P1, P2 and P3

Table 9, shows the available depth to time and time to depth conversions for packages P1, P2

and P3. Functions suttoz and sutsq from package P1 are used to perform time to depth

conversion. Functions sftime2depth from package P2 and Well-Track from package

P3 performs similar time to depth conversion as in P1. Further explanation on the

technicalities behind how these functions perform time to depth and depth to time conversion

is as follows.

According to [9], the temporal depth is not the actual depth (z) in the vertical plane. It is an

area measurement of depth estimated in time (t) domain or vertical travel time [29].

Consequently, depth in seismic signal reflection is not measured in kilometres or metres but

instead in seconds (t). Therefore, equation 1 shows the approximation of temporal depth in

time domain in relation to the actual depth or distance in kilometre or metre.

t≈z (1)

where,

t is temporal depth in seconds (s)

z is distance in metres (m)

Modelling temporal depth (t) to the actual depth or distance (z) requires vertical amplification

as the signal travel time is measured in a two-way direction. In order to relate temporal depth

(t) to the effect of the two-way signal travelling time, equation 1 is further refined producing

equation 2.

t ≈ z1 + z2

t ≈ 2z (2)

where,

t is temporal depth in seconds (s)

z1 is distance of signal transmission from source to reflection point in metres (m)

z2 is distance of signal transmission from reflection point to receiver in metres (m)

In Equation 2, the transmitted signal propagates downwards having distance denoted as z1.

The signal then reflects upwards to the surface, having the distance denoted as z2. The distances z1 and z2 are effectively equal, as both signal paths pass through the same earth media with similar velocity. The sum of z1 and z2 is 2z, as shown in equation 2.

Subsequently, equation 3 shows that both signals distance z1 and z2 share a common velocity

(v), which is due to the fact that both signals travel through the same earth media while

propagating downwards then upwards to the surface.

t ≈ 2z / v (3)

where,

t is temporal depth in seconds (s)


2z is the sum of the two-way signal travelling distances in metres (m)

v is the signal velocity (ms-1) [29]

Therefore, temporal depth is estimated to be 2 times the actual depth divided by the velocity

of the signal, as shown in equation 3. Depth migration or depth conversion is essentially

important to estimate the true measurement of depth in metric scale.
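Following equation (3), the conversion between two-way travel time and true depth reduces to a pair of one-line formulas. The sketch below is our own illustration, not code from any of the packages:

```python
# From equation (3), t = 2z / v, so the true depth recovered from a
# two-way travel time t and a signal velocity v is z = v * t / 2,
# and the reverse conversion is t = 2z / v.
def time_to_depth(t_seconds, velocity_ms):
    return velocity_ms * t_seconds / 2.0

def depth_to_time(z_metres, velocity_ms):
    return 2.0 * z_metres / velocity_ms

# A reflection at 2.0 s two-way time through 3000 m/s rock lies at 3000 m.
print(time_to_depth(2.0, 3000.0))     # -> 3000.0
print(depth_to_time(3000.0, 3000.0))  # -> 2.0
```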

4.5 Migration

Seismic signals reflect from the various layers and material underneath the Earth surface, as explained in section 1. However,

reflecting signals tend to diffract and bend due to the fact that each Earth layer is made from

composites of different densities and thickness. The migration process is a method of

geometrically correcting the signal reflection point into its true reflection point value.

Migration process was previously defined and discussed in [3]. Table 10, lists and describes

the computational functions to perform migration methods available for packages P1, P2 and

P3. There are two migration methods available when dealing with seismic data processing.

Time Migration, abbreviated as TM, and Depth Migration, in short DM. TM uses the average

signal velocity values and is computationally less complicated as compared to DM. TM is

normally used when dealing with less complex geological formation and far quicker when it

performs computation. However, TM has one disadvantage; it produces less accurate

migration results.

P1 P2 P3

Function Description Function Description Function Description

sfagmig Angle gather constant

velocity time migration

sfcascade Velocity partitioning for

cascaded migrations

sumigffd Fourier finite difference sfconstfdmig2 2D implicit finite-

migration for zero-offset difference migration in

sumigfd 45 and 60 degree Finite constant velocity

difference migration

zero-offset

sugazmig Gazdag's phase-shift Post-stack 2D v(z) time VMB- Velocity model building

migration for zero-offset sfgazdag modelling and migration module with velocity picking

data with Gazdag phase-shift

sumigps Migration by Phase Shift technique

with turning rays

sumigpspi Gazdag’s phase-shift

interpolation migration

for zero-offset data,

handle lateral velocity

variation.

sukdmig2d Kirchhoff Depth sfkirmig 2D Prestack Kirchhoff PSDM- Kirchhoff migration

Migration of 2D post- depth migration Kirchhoff

stack and prestack data

sumigtopo2d Kirchhoff Depth sfkirmig0 2- Post-stack Kirchhoff PSDM- Tomography prestack

Migration 2D post-stack depth migration Tomograph depth migration

and prestack y

sfkirchinv Kirchhoff 2D post-stack

least-squares time

migration with anti-

aliasing

sfkirchnew Kirchhoff 2D post-stack

time migration and

modelling with anti-

aliasing

sfmigsteep3 3D Kirchhoff time

migration for anti-aliased

steep dips

sudatumk2ds Kirchhoff datuming 2D sfpreconstkirch Prestack Kirchhoff

prestack for seismic modelling and migration

gathers constant velocity


sudatumk2dr Kirchhoff datuming of sfshotconstkirch Prestack shot-profile

receivers for 2D prestack Kirchhoff migration in

for shot gathers as the constant velocity

input

sumigtk Migration via T-K domain

for CMP stacked data

sfmig45 Migration for 15 and 45-

degree approximation

sfrwesrmig Riemannian Wave field

extrapolation of shot-

record migration

sustolt Stolt migration for sfprestolt Prestack Stolt modelling

stacked data or and migration

common-offset gathers sfstolt Post-stack Stolt

modelling migration over

lateral axis

sfstolt2 Post-stack Stolt

modelling and migration

sumigsplit Split-step depth sfzomig 3-D zero-offset

migration for zero-offset modelling and migration

data with extended split-step

sfsstep2 3-D post-stack modelling

and migration with

extended split step

Tab 10. Time and Depth Migration functions for packages P1, P2 and P3

DM on the other hand, is used when processing complex geological formations. DM uses the

full scale signal velocity model which makes computation more complicated and exhaustive.

Signal migration results produced from DM are far more accurate and much more reliable

[2]. Several Migration methods are available in table 10. Geometrical correction techniques

such as Stolt8, Gazdag9 and Finite Difference10 are forms of TM methods, whereas

Kirchhoff11 and Gaussian Beam12 are forms of DM methods. All packages provide functions

to perform both TM and DM methods. Package P2 is further equipped with functions to

perform migration on 3D seismic datasets.
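The idea behind Kirchhoff migration, summing recorded amplitudes along diffraction hyperbolas, can be sketched for the simplest constant-velocity, zero-offset case (our own simplified illustration; real Kirchhoff operators also apply obliquity and amplitude weighting, which we omit):

```python
import math

# Minimal constant-velocity, zero-offset Kirchhoff migration: every image
# point (x0, z0) is formed by summing recorded amplitudes along its
# diffraction hyperbola t(x) = 2 * sqrt(z0^2 + (x - x0)^2) / v.
def kirchhoff_migrate(data, xs, dt, v, zs):
    nt = len(data[0])
    image = []
    for x0 in xs:
        col = []
        for z0 in zs:
            acc = 0.0
            for ix, x in enumerate(xs):
                t = 2.0 * math.sqrt(z0 ** 2 + (x - x0) ** 2) / v
                it = int(round(t / dt))
                if it < nt:
                    acc += data[ix][it]
            col.append(acc)
        image.append(col)
    return image

# Synthesize the response of a single point diffractor, then migrate it.
v, dt = 2000.0, 0.004
xs = [i * 50.0 for i in range(21)]     # receiver positions, 0..1000 m
zs = [j * 50.0 for j in range(1, 21)]  # image depths, 50..1000 m
xd, zd = 500.0, 400.0                  # the diffractor position
data = [[0.0] * 500 for _ in xs]
for ix, x in enumerate(xs):
    t = 2.0 * math.sqrt(zd ** 2 + (x - xd) ** 2) / v
    data[ix][int(round(t / dt))] = 1.0

image = kirchhoff_migrate(data, xs, dt, v, zs)
best = max(((ix, iz) for ix in range(len(xs)) for iz in range(len(zs))),
           key=lambda p: image[p[0]][p[1]])
print(xs[best[0]], zs[best[1]])  # the energy focuses back at (500.0, 400.0)
```

The smeared hyperbola in the recorded data collapses back to the diffractor position, which is the geometrical correction the migration functions in table 10 perform.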

5 Visualization Function

Visualization is an important aspect of seismic data processing, prediction and interpretation.

The ultimate goal of all geophysical methods is to construct a clear and accurate seismic

image. The Visualization functions are aimed to present seismic images in various graphic

formats and display environments. Table 11 shows the functions for packages P1, P2 and P3

to visualize and plot seismic images.

P1 P2 P3

Function Description Function Description Function Description

suxmovie Xwindow frames plot for sfstdplot Setting up frames for Cross-Plot Plot 2D and 3D well

seismic data generic plot data

sfcubeplot Generate 3D cube plot. Generic-Mapping Cube plot

Tools

8 Stolt R.H developed the frequency wave-number migration or F-K. Today, F-K migration is still regarded as the most efficient migration method for simple velocity models [30].
9 Gazdag introduced seismic migration for vertically varying signal velocity and constant signal velocity by phase shift in the F-K domain [31].
10 The Finite Difference method is efficient when dealing with signal reflection with lateral velocity variations with great accuracy [30].
11 Gustav Robert Kirchhoff was a German physicist who developed the method for implementing seismic modelling and depth migration, which can handle velocity variation.
12 The Gaussian-beam migration method has advantages for imaging complex structures. It is especially compatible with lateral variations in velocity. Gaussian beam migration can image steep dip or slope and will not produce unwanted reflections from structure in the velocity model [32].


sfplotrays Plot rays

sfthplot Hidden-line surface plot

suxcontour Xwindow seismic contour sfcontour Contour plot Generic-Mapping Create contour Map

plot Tools

supscontour PostScript contour plot sfcontour Generate 3D contour plot

suxgraph Xwindow graph plot SU data sfgraph Graph plot

supsgraph PostScript graph plot

supscube PostScript cube plot sfgraph3 Generate 3D cube plot for

seismic surfaces

sfgrey3 Generate 3D cube image

plot

sfplas Convert ascii to vplot

sfpldb Convert vplot to ascii

suxwigb Xwindow Wiggle-seismic sfwiggle Plot data with wiggly traces. Generic-Mapping Create postscript plot

trace plot via Bitmap Tools

supswigb PostScript Bit-mapped

wiggle

supswigp PostScript Polygon-filled sfpspen Vplot filter for Postscript. Polygon- Point data that can be

wiggle plot Pickset used for drawing

contour and faults

suximage Xwindow image plot of SU sfplsurf Generate a surface plot. Generic-Mapping Create postscript

dataset Tools image plot

supsimage PostScript image plot sfgrey Generate raster plot.

Tab 11. Visualization and Plotting functions for packages P1, P2 and P3

It has been identified in table 11 that visualization and plotting functions for all three

packages are set to support common purposes, which are to display seismic contours13,

seismic images in greyscale14 and seismic traces or wiggles15. Package P1 has an extension to

support seismic images in the Xwindow16 environment through functions; suxmovie,

suxcontour, suxgraph, suxwigb and suximage. P1 also supports seismic time

image construction in postscript and bitmap format via functions; supscontour,

supsgraph, supscube, supswigb, supswigp and supsimage.

Packages P2 and P3 are suited to constructing seismic images in three-dimensional form.

Functions; sfcubeplot, sfcontour, sfgraph3 and sfgrey3 from package P2 are

able to plot seismic contour and greyscale images in three dimensions or higher. The

ability to display seismic images in greyscale and contour plot has become a requirement in

any seismic data processing packages. In section 4, we have explained how signal frequency

plays an important role to determine signal reflection points. In relation to signal frequency

and reflections; greyscale images in Reflection Seismology refer to the measuring of signal

frequency intensity. A white spot in a greyscale seismic time image reflects the high signal

frequency reading, while a dark spot indicates low signal frequency reading.
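The amplitude-to-greyscale mapping described above can be sketched in a few lines (our own illustration; real packages apply more elaborate scaling and clipping):

```python
# Map signal readings to 8-bit greyscale pixels: the strongest reading
# becomes white (255) and the weakest black (0), matching the white-spot /
# dark-spot convention described above.
def to_greyscale(samples):
    lo, hi = min(samples), max(samples)
    if hi == lo:
        return [0] * len(samples)
    return [round(255 * (s - lo) / (hi - lo)) for s in samples]

print(to_greyscale([-1.0, 0.0, 1.0]))  # -> [0, 128, 255]
```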

Package P3 has the advantage of displaying seismic images in the Windows operating system

platform. Package P3 is the most advanced package when dealing with seismic visualization

and graphical image generation. The advancement in graphics display by P3 is achieved

13 Contours are commonly drawn on maps to portray the structural configuration of the Earth's surface or formations in the subsurface. For example, structure maps contain contours of constant elevation with respect to a datum such as sea level [33].
14 Grayscale or greyscale is an image in which the value of each pixel is a single sample, that is, it carries only intensity information. Images of this sort, also known as black-and-white, are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest [34].
15 In reflection seismology, each time a signal reflection occurs, a wiggle is recorded based on the two-way travel time taken for the signal to be collected back at the receiver. The collection of wiggles resembles an estimation of earth structures based on signal reflection time at a geologically mapped area [3].
16 Xwindow is a computer software system and network protocol that provides a basis for graphical user interfaces for networked computers via command execution line [35].


through the Generic Mapping Tool abbreviated as GMT. GMT is an open-source

collection of computer software tools for processing and displaying pixels in an XYZ plane

coordinate system. Package P3 uses GMT to perform image rastering, filtering and other

image processing operations, including various kinds of map projections [36]. Package P3 is

able to perform all visual and plotting functions by packages P1 and P2.

The seismic data processes were carried out on both P1 and P2 for a common seismic dataset.

We refer to the processes as a workflow, WF, which describes nine main steps, s, of seismic

data functions and operations that we have carried out on a historical seismic dataset. The

seismic data processing workflow is presented in table 12. Package P3 was not included in

the overall test due to the fact that it is a visualization and interpretation tool. Many of its

specific Reflection Seismology functions are proprietary, hence limiting our capability

to perform tests on the package.

s1 Automatic Gain Control (AGC)

s2 Muting

s3 Noise Filtering

s4 Static Correction

s5 Velocity Filter

s6 Normal Moveout (NMO) Correction

s7 Velocity Analysis

s8 Seismic Trace Stacking

s9 Post-Stack Depth Migration

Tab 12. Seismic Data Processing Workflow

The historical seismic dataset that we have obtained is 11 Gigabytes in size and consists of 27

million signal reflection points. The initial data format is in SEGY. Format conversions were

carried out to accommodate both P1 and P2 data formats prior to performing the seismic data

processing workflow. The technical detail of each seismic data processing step in the

workflow is described as follows.

AGC

The seismic signal’s energy and amplitude are often the strongest when they are near to

the source of transmission. The far offset signal reflection usually shows low or weak

energy readings. In this evaluation we apply AGC to our dataset to increase weak signal

reflection reading.
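A minimal sliding-window AGC can be sketched as follows (our own simplified illustration of the idea, not the actual implementation used by P1 or P2; the window size and trace values are made-up):

```python
import math

# Sliding-window automatic gain control: each sample is divided by the RMS
# amplitude of the samples around it, so weak far-offset arrivals are
# boosted to roughly the same level as strong near-offset ones.
def agc(trace, half_window=2, eps=1e-12):
    out = []
    for i in range(len(trace)):
        lo, hi = max(0, i - half_window), min(len(trace), i + half_window + 1)
        window = trace[lo:hi]
        rms = math.sqrt(sum(a * a for a in window) / len(window))
        out.append(trace[i] / (rms + eps))
    return out

# A geometrically decaying trace: after AGC the late, weak samples are
# comparable in size to the early, strong ones.
decaying = [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
balanced = agc(decaying)
print([round(a, 2) for a in balanced])
```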

Muting

Signal reflection or trace muting is used to eliminate extraordinary signal events that do

not match our primary signal reflections. An example of such an event is amplitude reverberation near the surface, which regularly arrives earlier than the true signal reflection.
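A simple top mute can be sketched as follows (our own illustration; the mute time and sampling interval are made-up values):

```python
# Top mute: zero every sample that arrives before a chosen mute time,
# removing early events (e.g. near-surface reverberation) that precede
# the true reflections.
def top_mute(trace, dt, mute_time):
    cut = int(round(mute_time / dt))
    return [0.0 if i < cut else a for i, a in enumerate(trace)]

trace = [0.3, 0.8, 0.1, 0.9, 0.4]
print(top_mute(trace, dt=0.004, mute_time=0.008))  # -> [0.0, 0.0, 0.1, 0.9, 0.4]
```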

Noise Filtering

In this evaluation we applied the most common type of noise filtering, which is the

Bandpass filter. We have set the filter to remove low pass frequency noise which is

commonly caused by surface waves, such as the air coupling effect and mechanical noise.
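A crude band-pass can be sketched by combining a first difference (high-pass) with a short moving average (low-pass). This is our own simplified illustration; production filters such as a Butterworth band-pass are sharper, but the principle is the same:

```python
def highpass(trace):
    # First difference removes DC and slow, low-frequency drift.
    return [trace[i] - trace[i - 1] for i in range(1, len(trace))]

def lowpass(trace, width=3):
    # Short moving average suppresses rapid, high-frequency oscillation.
    half = width // 2
    out = []
    for i in range(len(trace)):
        lo, hi = max(0, i - half), min(len(trace), i + half + 1)
        out.append(sum(trace[lo:hi]) / (hi - lo))
    return out

def bandpass(trace, width=3):
    return lowpass(highpass(trace), width)

# A trace carrying a constant (zero-frequency) bias of 5.0: the band-pass
# removes the bias, leaving a near-zero-mean output.
biased = [5.0 + a for a in (0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0)]
filtered = bandpass(biased)
print(round(sum(filtered) / len(filtered), 6))
```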


Static Correction

Signal reflections are subject to two types of delays. The vertical depth from the signal

transmitter to the receivers is known to cause significant delay on the signal arrival time.

The consistent horizontal movement of the survey vessel causes displacement between

the transmitter and the receivers resulting in a signal reflection delay. The static problem

is commonly caused by the combination of both types of delays. The static correction is

applied to rectify the geometry of signal reflection points to give true signal reflectivity

when gathered by the receiver at the surface.

Velocity Filter

Velocity filter is applied to remove near surface noise. The velocity of the noise can be

distinguished from the apparently deep signal reflection velocity. The F-K filter, known

as the Frequency Wave-number filter, is a common approach when dealing with near

surface noise. Transformation from time and spatial domain to the frequency domain is

necessary to distinguish the true signal reflections from noise that masks the signal frequency. Subsequently, near surface noise is removed.

NMO Correction

NMO correction is applied in our evaluation to stack all seismic shots in a single

horizontal line for a Common Midpoint Signal Reflection or in short, CMP stack. NMO

correction removes the offset dependence for each signal travelling time recorded in the

CMP stack. The result of removing the offset dependence for each signal travel time

record in the CMP stack produces seismic traces with a zero offset value.

Velocity Analysis

The seismic traces, which are the signal travelling time recordings and the signal velocity

energy values, need to be matched. In order to match the seismic traces and its

corresponding signal velocity energy value, the velocity with high energy value is

identified. The semblance of the signal velocity energy value acted as a guide to pick the

matching seismic trace. The process of matching the velocity energy value with the

seismic traces produces a velocity model that we have built. The same velocity model

will be used when dealing with other seismic traces within this dataset.

Seismic Trace Stacking

The stacking of all signal travel time with zero offset values increases the energy of the

reflected signals, because the reflected waves are spatially coherent or spatially consistent

between each of the signal travel time records. The spatial coherency of all the signal

travel time records in the same stack increases the signal-to-noise ratio (SNR) and

decreases the energy of the noise.

Post-Stack Depth Migration

Post-Stack Depth Migration is the most time consuming and CPU exhaustive operation of

all the seismic data processes listed in the workflow. However, it is an important step to

obtain an accurate signal reflection point in a highly complex geological formation.

Signal reflections are subject to diffraction and scattering when propagating via the Earth

subsurface. In this evaluation and testing we apply Post-Stack Depth Migration to

geometrically correct the signal reflection point to give an accurate depiction of a seismic

time image.

6.1 Experimental Testbed

The seismic data processing workflow has been executed on the Deakin University Computer

Cluster, which is physically located in the School of Information Technology. The cluster

consists of 20 physical nodes with each node consisting of an Intel based dual 1.6 Gigahertz


CPU. Each CPU in a node is made of quad core processors, which makes a total of 8

processors per node. Each node is allocated 8 Gigabytes of memory.

These nodes are interconnected with a 10 Gigabit InfiniBand network. The computer cluster runs on the CentOS Linux operating system and uses Sun Grid Engine version 6.1 to perform

job queuing and management. The computer cluster is designed as such that 10 physical

nodes are used to support 20 virtual nodes and the remaining 10 act as normal physical nodes.

The fraction of a historical data that we have obtained from an oil and gas company17 is 122

Gbytes in size, containing 28 billion elements of seismic shot records. These shot records or

seismic traces are time recordings of acoustic signal reflections obtained during a geological

mapping operation.

The original historical dataset that we have acquired is in the form of SEGY format which is

the most dominant seismic data format [37]. Released by the Society of Exploration

Geophysicists (SEG) in 1975, hence the name SEGY; it is an open format controlled by a

technical committee. SEG-Y format allows storing of seismic digital data on magnetic tapes.

However, packages P1 and P2 each use their own specific seismic data format and do not comply directly

with the SEGY format. Therefore, format conversion from SEGY to .su18 is necessary for

the seismic data to be processed with package P1. Subsequently, format conversion from

SEGY to .rsf19 is also necessary prior to executing the seismic data on package P2. The

format conversion executions were made on the experimental testbed as indicated in 6.1. The

CPU execution times and seismic data size after each format conversion were recorded as

shown in table 13.

Column, c  Conversion type   CPU execution time (minutes)     Data size after
                             t1    t2    t3    ttotal   tave  conversion (GBytes)
1          .su  -> .segy     67    72    77    216      72    122
2          .su  -> .rsf      63    70    56    189      63    115
3          .rsf -> .su       69    71    67    207      69    122
4          .rsf -> .segy     74    81    79    234      78    122

Tab 13. Execution of seismic data conversions for format of packages P1 and P2

Table 13 shows a cross conversion from different seismic data formats. The purpose of this

test was to analyse the CPU execution time and changes in data size for each format

conversion operation. Each of the seismic data format conversion operations is represented

by columns cn, from c1 up to c4.

17 The name of this company is not disclosed due to privacy concerns and because the hydrocarbon basin is still subject to a revisit in the future.
18 The .su seismic data format for package P1 was thoroughly discussed in our 1st technical report: Izzatdin A.A, Goscinski A. (2010). The Study of Seismic UNIX in Relation to Reflection Seismology Models. School of Information Technology Technical Report TR C10/2. Deakin University Australia.
19 The .rsf seismic data format for package P2 was earlier discussed in the 2nd technical report: Izzatdin A.A, Goscinski, A. (2010). The Study of Madagascar Seismic Data Processing Package in Relation to Reflection Seismology Models. School of Information Technology Technical Report TR C10/5. Deakin University Australia.


From table 13, we have formulated a mathematical model to obtain the total CPU execution

time for each column, c derived in equation (4):-

ttotal = t1 + t2 + t3 (4)

Each conversion type, or column c, is executed three times and the run times are summed to obtain the total CPU execution time denoted as ttotal. Based on equation (4), we

derived the average time taken for each column, cn as shown in equation (5):-

3 (5)

The total CPU execution time t_total for each column cn is then divided by the number of execution repetitions to obtain the average cn t_ave for each format conversion operation. We emphasize again that the initial data size for the testing we have conducted is 122 GBytes.
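Equations (4) and (5) can be checked directly against the recorded runs in table 13. The sketch below is ours, not part of any package; it simply recomputes t_total and t_ave for each conversion column:

```python
# Recomputing t_total (equation 4) and t_ave (equation 5) from the
# three recorded runs of each conversion in table 13.
REPETITIONS = 3

# conversion column -> [t1, t2, t3] in minutes, as recorded in table 13
conversions = {
    "c1 (.su -> .segy)":  [67, 72, 77],
    "c2 (.su -> .rsf)":   [63, 70, 56],
    "c3 (.rsf -> .su)":   [69, 71, 67],
    "c4 (.rsf -> .segy)": [74, 81, 79],
}

for name, times in conversions.items():
    t_total = sum(times)              # equation (4)
    t_ave = t_total // REPETITIONS    # equation (5); table values are whole minutes
    print(f"{name}: t_total={t_total} min, t_ave={t_ave} min")
```

The computed values match the t_total and t_ave columns of table 13 (e.g. c1: 216 and 72 minutes).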

From table 13, we have identified from c1 t_ave that package P1 (.su) took an average of 72 minutes to convert to the SEGY (.segy) format. The data size after conversion from the package P1 format into the SEGY format is 122 GBytes, essentially the same as the initial data size. There is, however, a small difference between the package P1 data size and the SEGY data size: the SEGY data is 3.6 MBytes larger. This is because the SEGY data structure consists of two segments, the header segment and the seismic trace segment. The header segment is approximately 3600 bytes in size and contains the description20 of the SEGY data, whereas the package P1 (.su) data structure does not have this header segment. This explains the minor size difference between the .su and .segy formats.
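The header overhead can be expressed as a simple size model. The sketch below is ours; it assumes the standard SEGY rev 0 layout (a 3600-byte file header, i.e. a 3200-byte textual header plus a 400-byte binary header) and 4-byte samples:

```python
# Back-of-envelope size model for the .su vs .segy difference.
# SEGY (rev 0) prepends a 3600-byte file header; in both formats each
# trace is a 240-byte trace header followed by its samples.
FILE_HEADER = 3600    # bytes, SEGY only
TRACE_HEADER = 240    # bytes per trace, both .su and .segy
SAMPLE_SIZE = 4       # bytes per sample (4-byte samples assumed)

def su_size(n_traces, n_samples):
    # .su: trace records only, no file header
    return n_traces * (TRACE_HEADER + n_samples * SAMPLE_SIZE)

def segy_size(n_traces, n_samples):
    # .segy: the same trace records plus the fixed file header
    return FILE_HEADER + su_size(n_traces, n_samples)

# The overhead is constant per file, regardless of trace count:
print(segy_size(10_000, 2_000) - su_size(10_000, 2_000))  # 3600 bytes
```

A single header accounts for only 3.6 KBytes per file; an aggregate difference of 3.6 MBytes would correspond to roughly a thousand such headers, which would be consistent with the dataset being stored as many SEGY files, although the report does not state the file layout.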

Package P1 (.su) took an average of 63 minutes to convert to the package P2 (.rsf) format, denoted by c2 t_ave. The data size after conversion from the package P1 (.su) format into the package P2 (.rsf) format is 115 GBytes, slightly smaller than the initial data size of 122 GBytes.

Package P2 (.rsf) took an average of 69 minutes to convert to the package P1 (.su) format, indicated by c3 t_ave. The data size after conversion from the package P2 (.rsf) format into the package P1 (.su) format is 122 GBytes.

Package P2 (.rsf) took an average of 78 minutes to convert to the SEGY (.segy) format, indicated by c4 t_ave. The data size after conversion from the package P2 (.rsf) format into the SEGY (.segy) format is 122 GBytes, which is consistent with the data size previously recorded for c1 t_ave.

Following the data format conversion, we conducted an evaluation and testing of packages P1 and P2 according to the workflow WF shown in table 12.

20 The description of the SEGY data format was discussed thoroughly in our 1st Technical Report, "The Study of Seismic UNIX in Relation to Reflection Seismology Models" [3].


6.3 Evaluation and Testing

Table 14 shows the CPU execution time for sequential execution of both packages P1 and P2. Package P3 was not included in the table because it is a seismic visualization and interpretation package. The seismic data processing programs in package P1 were initially written to support only sequential execution; thus a comparison of sequential execution between packages P1 and P2 was carried out.

                                              Processing time (minutes)
Workflow step, s                              P1                                P2
                                              Exec1  Exec2  Exec3  ExecAve     Exec1  Exec2  Exec3  ExecAve

s1  Automatic Gain Control (AGC)                23     19     21     21          27     19     29     25
s2  Muting                                      37     28     31     32          26     21     25     24
s3  Noise Filtering                            182    165    190    179         194    179    173    182
s4  Static Correction                           67     59     60     62         107     95     83     95
s5  Velocity Filter to remove
    near-surface noise                         503    479    461    481         382    368    384    378
s6  Normal Moveout (NMO) Correction            535    531    557    541         230    206    221    219
s7  Velocity Analysis                          329    361    342    344         285    292    293    290
s8  Seismic Trace Stacking to remove
    coherent and random noise                   17     12     19     16          25     31     28     28
s9  Post-Stack Depth Migration                8931   9521   9484   9312        7950   8012   7513   7825

Total CPU execution time                                          10988                             9066

Tab 14. Sequential execution of Seismic Data Processing Workflow for packages P1 and P2

Each seismic data processing step s was executed sequentially three times, indicated as Exec1, Exec2 and Exec3. The purpose of repeating each step three times is to obtain the average reading ExecAve recorded in table 14. The overall seismic data processing execution for package P1 took 10988 minutes, approximately 7 days. Package P2 consumed 9066 minutes, approximately 6 days, to complete the overall tasks. An obvious similarity between packages P1 and P2 is that both took by far the longest CPU execution time to complete WFs9. As expected from the literature [2-3], both packages consumed a lengthy duration performing the Post-Stack Depth Migration. Package P2, however, performed the Post-Stack Depth Migration computation considerably faster than package P1.
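The totals in table 14 follow directly from the ExecAve column for each package. A small verification sketch (ours, not part of the packages):

```python
# Verifying the workflow totals in table 14: the ExecAve column for
# each package sums to the reported total CPU execution time.
exec_ave = {  # step -> (P1, P2) average execution time in minutes
    "s1 AGC":                         (21, 25),
    "s2 Muting":                      (32, 24),
    "s3 Noise Filtering":             (179, 182),
    "s4 Static Correction":           (62, 95),
    "s5 Velocity Filter":             (481, 378),
    "s6 NMO Correction":              (541, 219),
    "s7 Velocity Analysis":           (344, 290),
    "s8 Trace Stacking":              (16, 28),
    "s9 Post-Stack Depth Migration":  (9312, 7825),
}

p1_total = sum(p1 for p1, _ in exec_ave.values())  # 10988 min (~7.6 days)
p2_total = sum(p2 for _, p2 in exec_ave.values())  # 9066 min (~6.3 days)
print(p1_total, p2_total, p1_total - p2_total)     # 10988 9066 1922
```

The 1922-minute difference corresponds to approximately 32 hours, and almost all of it comes from step s9.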

Conclusion

In this report, we have presented the results of executing a seismic data processing workflow on packages P1 and P2 using a historical seismic dataset. Package P3 acts as a seismic data visualization and interpretation tool and was not included in the evaluation and testing. From the overall execution of our workflow WF, package P2 completed earlier than P1. However, the individual CPU execution times of the WF steps for P1 and P2 differ considerably. For instance, P1WFs1 completed 4 minutes earlier than P2WFs1, and P1WFs3 finished 3 minutes earlier than P2WFs3. The most significant difference in CPU execution time occurred on WFs9: the 7825-minute average completion time for P2WFs9 is approximately 25 hours less than the 9312 minutes recorded for P1WFs9, and accounts for almost all of the roughly 32-hour difference in total workflow time. The reason for the faster P2 CPU execution time lies in its .rsf data structure and format. We have learned from [9], and from the series of CPU executions of WF for both packages P1 and P2, that the P2 data format, .rsf, is structured with less complexity and uses a contemporary memory arrangement approach.

The P2 data format consists of two segments: the meta-information and the data sequence. The meta-information describes the basic information about the dataset. The data sequence that follows contains the primary content of the input: the actual seismic traces, or shot records, in binary form, i.e. time recordings of acoustic signal reflections sampled during a typical seismic mapping. Each signal reflection, called a shot record, is represented as an element in a multi-dimensional array. This data format design eased access to the binary data during complex computation, which resulted in P2WFs9 completing much earlier than P1WFs9.
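The two-segment layout described above can be illustrated with a minimal pack/unpack sketch. This is a deliberately simplified, hypothetical single-file framing written for illustration only; Madagascar's actual .rsf format keeps the meta-information (key=value pairs such as n1 and n2, whose names we borrow here) in a text header that points to a separate binary file.

```python
# Illustrative two-segment layout: a meta-information line naming the
# array dimensions, followed by the shot records as a flat binary
# sequence of little-endian 32-bit floats. NOT Madagascar's parser.
import struct

def pack_dataset(traces):
    """traces: list of equal-length lists of float samples (shot records)."""
    n2, n1 = len(traces), len(traces[0])
    meta = f"n1={n1} n2={n2} data_format=float32\n".encode()
    data = b"".join(struct.pack(f"<{n1}f", *t) for t in traces)
    return meta + data

def unpack_dataset(blob):
    meta, _, data = blob.partition(b"\n")
    fields = dict(kv.split("=") for kv in meta.decode().split())
    n1, n2 = int(fields["n1"]), int(fields["n2"])
    flat = struct.unpack(f"<{n1 * n2}f", data)
    return [list(flat[i * n1:(i + 1) * n1]) for i in range(n2)]

# Round trip: two shot records of three samples each (values chosen to
# be exactly representable as 32-bit floats).
shots = [[0.5, 0.25, -1.0], [2.0, 0.0, 1.5]]
assert unpack_dataset(pack_dataset(shots)) == shots
```

Keeping the meta-information small and the samples in one contiguous binary sequence is what makes whole-array access cheap, which is the property the report credits for P2's faster migration step.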

In this report, we have also introduced a taxonomy of seismic data processing functions. Although seismic data processing functions have been classified by many geophysical experts, our literature studies [1-7] show that very few have described and classified seismic data processing functions from the viewpoint of open seismic data processing packages.

Our taxonomy describes seismic data processing methods with respect to computing and seismic data processing packages. Instead of representing seismic data processes in a linear listing, we classify the processes into three function groups: Data Manipulation, Reflection Seismology Process and Visualization. Broadly, the classes reflect the standard processes available in numerous seismic data processing software packages.
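The three-group classification can be captured as a simple lookup structure. The group names come from the report; the dict layout and the condensed function labels are our illustrative choices:

```python
# The report's taxonomy as a lookup table, so a seismic function can be
# routed to its group. Function labels condensed from the report's text.
TAXONOMY = {
    "Data Manipulation": [
        "Data Editing", "Format Conversion", "File Information (sfin)",
    ],
    "Reflection Seismology Process": [
        "Velocity Analysis", "Moveout Correction", "Seismic Trace Stacking",
        "Time and Depth Conversion", "Migration",
    ],
    "Visualization": [
        "Bitmap/Postscript Output", "X Window Display",
        "3D and Higher-Dimensional Image Construction",
    ],
}

def group_of(function_name):
    """Return the taxonomy group containing function_name, or None."""
    for group, functions in TAXONOMY.items():
        if function_name in functions:
            return group
    return None

print(group_of("Migration"))  # -> Reflection Seismology Process
```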

The Data Manipulation function group deals with the modification of fundamental system commands to better suit the administrative purposes of a software package. For example, the fin function in UNIX displays general information about a file. Package P2, however, modifies it to sfin, used by the package when displaying information for the package P2 file format. This function group also inherits the Pre-processing methods described in the existing categorization of seismic methods, such as seismic Data Editing and Format Conversions.

The Reflection Seismology Process function group consists of methods to manipulate seismic datasets and to perform analysis. Velocity Analysis, Moveout Corrections, Seismic Trace Stacking, Time and Depth Conversions and Migration techniques are grouped in this class. The methods grouped under this class reflect the existing category of seismic Processing methods. However, the seismic Image Construction method is removed from this group because it is closely related to the field of seismic interpretation and prediction rather than visualization.

The Visualization function group is essential in constructing credible seismic images for the purpose of interpreting and predicting possible hydrocarbon traps. Seismic image construction is the ultimate goal of seismic data processing. In this function group, we identified common and unique features possessed by each of the packages P1, P2 and P3. Package P1 supports a variety of graphic formats, including bitmap, postscript and display in the X Window environment. Package P2, on the other hand, supports image construction for three-dimensional and higher-dimensional data representations. Package P3 supports virtually all of the graphical functionality of P1 and P2. Package P3 is a graphical user interface (GUI) based package and runs on UNIX distributions as well as Windows operating systems.

Our future work includes the development of a distributed system for packages P1, P2 and P3. By having these packages execute in a distributed processing environment, such as the cloud or a computer cluster, we can then measure the function execution performance and speed-up for all three packages. However, from our preliminary studies, several issues will need to be addressed:

- Security – Seismic data is highly valuable to the oil and gas industry as it indicates important information such as the oil and gas well locations, as well as hydrocarbon drilling points. Hence, it is vital to secure the seismic data when it is transferred over a vast network.
- Network Latency and Reliability – Streams of massive seismic datasets need to be transferred in a reliable and accelerated manner to the cloud or computer cluster in order to realize real-time processing. Transmission of large volumes of seismic data via the IP network through best-effort delivery could cause expensive data to be lost.
- Storage – The high volume of raw seismic data obtained from a geological survey needs to be amassed in a suitable location with reasonable storage cost and capacity.

Further testing of the open seismic data processing packages' functionalities and their ability to execute in a distributed processing environment is necessary, and shall be discussed in an upcoming report.

References

[1] Sheriff R. E, Geldart L. P. (1995). Exploration Seismology. Second Edition,

Cambridge University Press.

[2] Research Sdn Bhd.

[3] Izzatdin A.A, Goscinski A. (2010). The Study of Seismic UNIX in Relation to

Reflection Seismology Models. School of Information Technology Technical Report

TR C10/2. Deakin University Australia.

[4] Murillo A. E, Bell J. (2000). Distributed Seismic Unix: a tool for seismic data

processing. Applications of Distributed Computing Environments. Vol 11 Issue 4

pages 169-187.

[5] Seismic Unix. Wikipedia. http://en.wikipedia.org/wiki/Seismic_Unix.


[6] Stockwell J. (2009). Geophysical Image Processing with Seismic UNIX. Center for

Wave Phenomenon.

[7] Stockwell J.W. Jr and Cohen J.K. (2002). The New SU User's Manual. Colorado School of Mines Center for Wave Phenomena. The Society of Exploration Geophysicists. Version 3.2, Issue 107.

[8] Murillo A.E, Bell J.(1999). Distributed Seismic UNIX: a tool for seismic data

processing. Concurrency: Practice and Experience. DOI: 10.1002/(SICI)1096-

9128(19990410) Volume 11, Issue 4, pages 169–187, 10 April 1999.

[9] Izzatdin A.A, Goscinski, A. (2010). The Study of Madagascar Seismic Data

Processing Package in Relation to Reflection Seismology Models. School of

Information Technology Technical Report TR C10/5. Deakin University Australia.

[10] Geophysics. RSF School and Workshop, Vancouver.

[11] Bustos H.I.A, Silva M.P, Bandeira C.L.L. (2009). A MAP algorithm for AVO

seismic inversion based on the mixed (L2, non-L2) norms to separate primary and

multiple signals in slowness space. IEEE Applications of Computer Vision. ISSN:

1550-5790. ISBN: 978-1-4244-5497-6

[12] Monitoring of water infiltration using GPR data. 10th European Meeting of Environmental and Engineering Geophysics.

[13] Izzatdin A.A, Goscinski A. Hobbs M.(2010). The Study of OpenDtect Seismic Data

Interpretation and Visualization Package in Relation to Seismic Interpretation and

Visualization Models. School of Information Technology Technical. Deakin

University Australia.

[14] Beheer B.V. (2010). Madagascar Batch Processing. OpendTect User Documentation

version 4.0 Chapter 7, dGB Earth Sciences.

[15] Scales, J. A. (1997). Theory of Seismic Imaging. Golden Colorado, Samizdat Press

Volume 1 and 2. ISBN 1560800941.

[17] Telford W.M, Geldart L.P, Sheriff R.E, Keys D.A. (1990). Applied Geophysics. Cambridge University Press.

[18] Stein J.A, Langston T. (2007). A Review of Some Powerful Noise Elimination for

Land Processing. European Association of Geoscientist and Engineers EAGE. 69th

Conference.

[19] Scales J.A, Snieder R. (1998). What is Noise?. Journal of Geophysics. Volume 63,

Issue 4. Pages: 1122–1124.


[20] Brignell J. (2006). Smoothing of Data. Department of Electronics & Computer Science, University of Southampton. Brignell Associates, Mere Warminster.

[21] Bjorck A. (1996). Numerical Methods for Least Squares Problems. Society of

Industrial and Applied Mathematics. SIAM. ISBN-13: 978-0-898713-60-2

[22] Prentice Hall. ISBN-10: 0131918354.

[23] Han, D.-h. and M. Batzle (2000). Velocity, Density and Modulus of Hydrocarbon Fluids -- Data Measurement. Society of Exploration Geophysicists (SEG) Annual Meeting 2001. University of Houston Department of Earth and Environmental Science.

[24] heavy oil sand reservoir: Manitou Lake, Saskatchewan. Canadian Society of Exploration Geophysics, Consortium for Research in Elastic Wave Exploration Seismology.

[25] interpretation through automated velocity picking in semblance velocity images. Special issue IEEE WACV, Machine Vision and Applications, Volume 13, Number 3, pages 141-148, June 2001. SpringerLink.

Society of Exploration Geophysicists. ISBN 1560801182.

[28] Zimmerman J.J. (1996). It’s All a Matter of Space and Time. The Geophysical

Corner. Geophysical Integration Committee. American Association of Petroleum

Geologists.

[29] Claerbout J. (2010). Basic Earth Imaging (BEI). Stanford Exploration Project.

Computer Based Learning Unit, University of Leeds. Retrieved 2 December 2010,

from http://sepwww.stanford.edu/sep/prof/bei/toc_html/node1.html.

[30] Burnett W.A, Ferguson R.J. (2008). Reversible Stolt migration. Research Report.

Consortium for Research in Elastic Wave Exploration Seismology. Department of

Geosciences, University of Calgary.

[31] Hardy R.J.J. (2010). Migration in Practice. Excess Geophysics Consultancy and

Course Material. Tonnta Energy Dublin.

[32] Hill N.R.(1990).Gaussian Beam Migration. Journal of Geophysics. Volume 55. Issue

11. Society of Exploration Geophysics.


[34] Stephen Johnson (2006). Stephen Johnson on Digital Photography. O'Reilly.

ISBN 059652370.

[35] Bellevue Linux Users Group (2006). The X Window System:A Brief Introduction.

The Linux Information Project

[36] Wessel P, Walter H. F.(2010). Generic Mapping Tools (GMT). Wessel, Smith and

Volunteers.

[37] Barry K. M, Cavers D.A, et al. (1975). "Recommended standards for digital tape

formats." Journal of Geophysics 40(2): 344–352.
