 Open Access
 Total Downloads : 285
 Authors : Mbang, Uba Bassey, Falaki, S. O, Alese, B. K, Enikanselu, P. A
 Paper ID : IJERTV3IS040942
 Volume & Issue : Volume 03, Issue 04 (April 2014)
Published (First Online): 06-05-2014
ISSN (Online) : 2278-0181
 Publisher Name : IJERT
 License: This work is licensed under a Creative Commons Attribution 4.0 International License
Implementation of a Composite Hybrid LMS/RLS Adaptive Deconvolution System for Seismic Oil Prospecting (With Matlab)
Mbang, U. B.1
Federal Inland Revenue Service (FIRS), Government Business Tax Office (GBTO), Plot 7, IBB Way, Calabar, Cross River State, Nigeria.

Falaki, S. O.2, Alese, B. K.2
Department of Computer Science, Federal University of Technology, Akure, Ondo State, Nigeria.

Enikanselu, P. A.3
Department of Geophysics, Federal University of Technology, Akure, Ondo State, Nigeria.
Abstract: A composite adaptive deconvolution system is proposed that integrates a proposed hybrid Least Mean Square (LMS) / Recursive Least Squares (RLS) adaptive filtering algorithm with existing LMS, RLS, Normalised LMS, and related algorithms for the deconvolution of seismic sequences. The composite model accepts as input reflections detected from an oil well. The system then removes echoes and reverberations using systems identification principles before subjecting the emergent sequence (primary and secondary reflections) to adaptive deconvolution using an algorithm chosen from among the multiple algorithms stacked for that purpose. The output sequence (the estimated primary reflections), the error sequence, as well as the filter coefficient numbers/values, are then graphically displayed for visual appraisal. The proposed system is implemented in MATLAB and has a graphical user interface that shifts the choice of deconvolution algorithm to the user. Convergence is tested by comparing the output of each adaptive deconvolution algorithm with the standardized Wiener signal deconvolution output. Results obtained by testing the system with data sourced from The MathWorks Inc. show that the hybrid LMS/RLS algorithm converges faster to the Wiener coefficients at lower offset and higher iteration values compared with the other algorithms.
Keywords: seismic, reflection, deconvolution, algorithm, exploration, prospecting, least-squares, adaptive, filtering
1. INTRODUCTION
Oil prospecting or exploration can be achieved by various methods, ranging from the prehistoric use of hunches or heuristics (rules of thumb) to the conventional use of core samples (coring), the magnetometer (Magnetic Method), the gravimeter (Gravity Method), soil chemical analysis (Chemical Method), natural and induced electrical currents (Electrical Method), radioactivity (Radioactive Method), well logging, and the use of seismographs or seismometers (Seismic Method). Of all these oil exploration methods, the seismic method, which uses seismographs, geophones (for onshore exploration) and hydrophones (for offshore exploration), is the method most often used for exploration in developed and developing countries [2][5].
Oil prospecting, in both onshore and offshore environments, comprises very complex processes, some of which involve heavy instrumentation, microscopic and visible organic and inorganic matter evaluation, sound/shock wave generation and detection of reflected signals, etc. To a geologist, geophysicist, or seismologist, the sound made by a particular substratum (an area under survey for oil deposits) is directly or indirectly related to the properties of that substratum, viz. the chemical composition of the underlying rocks and the geophysical processes that characterize the area in terms of denudation, rock formation, weathering, solidification, volcanicity, etc. [15].
It is therefore common practice to study the kind of sound or vibration that the layers of the earth will give when an acoustic signal generator is used to generate a wave that propagates down through the layers of the earth's crust. Hence dynamite or other modern signal generators are used to send a train of pulses into the earth or water, and geophones (seismic wave detectors) or other signal detectors are planted at remote places on the same plane to detect the vibration, reverberation, travel speed, soil properties, etc. that emerge from the excitation sequence.
In this research, we formulate statistical procedures for modeling the response of the earth's crust to an excitation sequence (signal), both on bare ground or marshy fields/shallow waters (onshore) and in the sea or deep water (offshore). The modeled procedures are then implemented in Matlab for seismic sequence enhancement by least squares error (LSE), least mean square (LMS), and hybrid LSE/LMS methodologies.

2. ENVIRONMENTAL GEOLOGY OF AN OIL FIELD (ONSHORE/OFFSHORE)

A Schematic of an Oil Field
An open field or ground which has little or no surface water is said to be an onshore environment [10]. Such an environment can be geographically stratified into different layers as shown in fig.2.1, below.
The computer uses inbuilt digital filters to process the geophones' raw data and convert it to seismic lines [11].

3. MODELS/ALGORITHM FORMULATION

Simple mathematical modeling reveals that y(n), the received signal, can be modeled in terms of s(n), the excitation signal, and the boundary delays d_i as

y(n) = Σ_{i=1}^{N} a_i s(n - d_i)    (3.1)

where {a_i} are the coefficients of reflection at the interfaces between the various layers of the earth and {d_i} denotes the corresponding set of propagation delays. Moreover, N is a finite integer and refers to the total number of coefficients counting from 1, i.e. i = 1, 2, 3, …, N [2][3].
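The layered-earth model (3.1) can be sketched numerically. The block below is illustrative only (the paper's implementation is in MATLAB; the wavelet, coefficients a_i and delays d_i here are made-up example values): each interface returns a scaled, delayed copy of the excitation.

```python
import numpy as np

def layered_response(s, a, d):
    """Model y(n) = sum_i a_i * s(n - d_i): each interface i returns a copy
    of the excitation s, scaled by its reflection coefficient a_i and
    shifted by its propagation delay d_i (in samples)."""
    y = np.zeros(len(s) + max(d))
    for ai, di in zip(a, d):
        y[di:di + len(s)] += ai * s
    return y

# Example: a 3-sample excitation wavelet and N = 2 reflecting interfaces
s = np.array([1.0, 0.5, 0.25])
a = [0.8, 0.3]   # hypothetical reflection coefficients {a_i}
d = [2, 5]       # hypothetical propagation delays {d_i}, in samples
y = layered_response(s, a, d)
```

With these values the received trace contains the wavelet starting at samples 2 and 5, scaled by 0.8 and 0.3 respectively.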
Fig. 2.1 A cross section of the earth surface showing the layers of the earth and an oil reservoir in an onshore scenario (Adapted from [11]).
In order to collect seismic data, shock waves are sent into the ground and signal detecting devices are used to measure how long it takes for the subsurface rocks to reflect these waves back to the surface [11]. The shock waves used today are generated by pounding the earth surface with giant vibrator trucks (see fig 3.2). This is preferred to the erstwhile use of explosives and dynamites which may cause other environmental hazards. When these shock waves travel into the earth, boundaries between the rocks reflect part of these waves back while some percentage of the wave energy goes downward. The reflected waves and their arrival times are then detected and recorded by listening devices known as geophones.
It must be noted that the propagation delay is a function of the time taken for the excitation wave s(n) to travel to the reflector, get reflected, and then be received at the geophone as y(n). The ideal two-way travel time is modeled as

t = 2D/v    (3.2)

where D is the depth of the medium (distance from the top to the reflector) and v is the signal velocity in the rocks. Hence d_i, defined as

d_i = τ    (3.3)

is the propagation delay: the actual time taken for a signal to travel from the source to the reflector and then back to the geophone, while t is the ideal time that a signal with wave velocity v would take to travel to and from the depth D in the absence of propagation delays between rock boundaries. Moreover, the delays are used to estimate the reflectivity coefficients, and these reflectivity coefficients are of great importance in the deconvolution of the received signal [7].
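A worked instance of the two-way time formula (3.2), with a hypothetical depth and rock velocity (not values from the paper):

```python
def two_way_time(depth_m, velocity_m_s):
    """Ideal two-way travel time t = 2D/v from (3.2)."""
    return 2.0 * depth_m / velocity_m_s

# A reflector 1500 m deep under rock with v = 3000 m/s
t = two_way_time(1500.0, 3000.0)   # 1.0 second
```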
In practice, the number N of reflection coefficients is usually large; hence the quality and wave content of y(n) depend largely on the properties of the layers of rock that reflect s(n). Moreover, seismic analysis and evaluation over time reveal that y(n) is a convolution (a complex mixture) of the excitation signal s(n) and the sequence u(n) which characterizes the medium or layer of the earth [2][3][4].
This u(n) is modeled as

u(n) = Σ_{i=1}^{L} a_i δ(n - d_i)    (3.4)

where {a_i} and {d_i} are as defined above and i = 1, 2, …, L, for a finite integer L; this delayed transient is the main factor that the geophysicist sets out to analyze.
To achieve this, we try to isolate the component u(n) from the received signal y(n) by means of deconvolution (the inverse operation that separates convolved signals) of the convolved sequence(s) below:
y(n) = s(n) * u(n)    (3.5)
Fig. 2.2: Shock-wave propagation in an onshore environment.

The geophysicist or geologist then collects the data recorded by the geophones for computer processing.

In the offshore scenario, a third sequence r(n) is convolved with u(n) such that

y(n) = s(n) * u(n) * r(n)    (3.6)
Hence the basic onshore model for least-squares error treatment is (3.5), or

y(n) = s(n) * Σ_{i=1}^{L} a_i δ(n - d_i)    (3.7)
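The convolutional models (3.5) and (3.6) can be checked numerically: convolving the wavelet s(n) with the reflectivity spike train u(n) of (3.4) reproduces the delayed-sum model of (3.1). An illustrative NumPy sketch (not the paper's MATLAB code; the water-layer response r(n) is a made-up example):

```python
import numpy as np

s = np.array([1.0, 0.5, 0.25])           # excitation wavelet s(n)
u = np.zeros(8)
u[2], u[5] = 0.8, 0.3                    # u(n) = sum_i a_i * delta(n - d_i)

y_onshore = np.convolve(s, u)            # (3.5): y = s * u

r = np.array([1.0, -0.5])                # hypothetical water-layer response r(n)
y_offshore = np.convolve(y_onshore, r)   # (3.6): y = s * u * r
```

The onshore trace is exactly the wavelet repeated at delays 2 and 5 with amplitudes 0.8 and 0.3, matching (3.1).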
Note: ALG shall be used as an acronym for Algorithm in the formulations below.

Algorithm Formulation for Model Optimization
ALG.1: Onshore Model Optimization Procedure by Conventional Least Squares
Given the model in (3.5) or (3.7) above, we will adopt the least squares optimization criterion in designing a least squares error inverse filter for deconvolving s(n) from u(n) so that u(n) can be studied in isolation. To do this, the following statistical assumptions are invaluable [3][4][5].

Assumptions

We assume that the sequence u(n) that characterizes the medium is made up of a collection of uncorrelated reflections. Hence u(n), just like white noise, will have an autocorrelation sequence given by

γ_uu(l) = C_u for l = 0;  0 for l ≠ 0    (3.8)

where C_u is a constant equal to the expectation E[u²].
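The white-noise-like autocorrelation assumption (3.8) can be checked empirically. A NumPy sketch (illustrative only; unit-variance Gaussian samples stand in for the uncorrelated reflectivity):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(100_000)   # uncorrelated reflectivity-like sequence

def autocorr(x, lag):
    """Sample autocorrelation r(l) = E[x(n) x(n - l)]."""
    if lag == 0:
        return float(np.mean(x * x))
    return float(np.mean(x[lag:] * x[:-lag]))

r0 = autocorr(u, 0)   # approximates C_u = E[u^2] (close to 1 here)
r5 = autocorr(u, 5)   # approximately 0 for any lag l != 0
```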

Assume also that the sequence s(n), the input train of pulses, is made up of highly correlated impulses (waveforms), such that successive samples of s(n) do not vary much from one another. This means that s(n) can be estimated from past samples of s(n), viz. s(n-1), s(n-2), s(n-3), …. Hence we can comfortably form a weighted linear combination of the past L samples of s(n), a process called linear prediction.

However, since geophysical evidence over time proves that the excitation sequence s(n) (which is unmeasured a priori) is the dominant component of the convolution in 3.5 [4][13][8], it becomes very reasonable to predict s(n) based on past samples of y(n) (which were actually received at the geophone), viz. y(n-1), y(n-2), y(n-3), …, y(n-L), i.e.

ŝ(n) = Σ_{i=1}^{L} h_i y(n - i)    (3.9)

where the h_i are filter coefficients.

The error due to the estimation of s(n) with ŝ(n) is denoted by e(n) and is given by

e(n) = s(n) - ŝ(n)    (3.10)

which we seek to minimize by least-squares means. This procedure is captured in the block diagram of fig. 3.3, below, where y(n) is as defined in 3.5 above.

Fig. 3.3: Finite Impulse Response (FIR) inverse filter model for isolation of the unwanted component s(n) from u(n), where H(z) is the ideal impulse response of the desired filter and Ĥ(z) is the estimated impulse response of the designed digital filter.

To continue the minimization process, let

ℰ = Σ_{n=0}^{∞} e²(n)    (3.11)

denote the sum of squared errors. Then

ℰ = Σ_{n=0}^{∞} [ s(n) - Σ_{i=0}^{L} h_i y(n - i) ]²    (3.12)

where the substitutions in 3.9 and 3.10 were used and the h_i are the filter coefficients.

Now, differentiating ℰ partially with respect to each of the filter coefficients h_m and equating the result to zero (for orthogonality),

∂ℰ/∂h_m = -2 Σ_{n=0}^{∞} [ s(n) - Σ_{i=0}^{L} h_i y(n - i) ] y(n - m) = 0

so that

Σ_{i=0}^{L} h_i Σ_{n=0}^{∞} y(n - i) y(n - m) = Σ_{n=0}^{∞} s(n) y(n - m)    (3.13)

i.e.

Σ_{i=0}^{L} h_i γ_yy(m - i) = r_sy(m);  m = 0, 1, 2, 3, 4, …, L    (3.14)

where γ_yy(m) is the autocorrelation of the sequence y(n), defined as

γ_yy(m) = Σ_{n=0}^{∞} y(n) y(n - m)    (3.15)

and r_sy(m) is the cross-correlation between the desired output sequence s(n) and the input sequence y(n), defined as

r_sy(m) = Σ_{n=0}^{∞} s(n) y(n - m)    (3.16)

The convolution sum 3.14 is the set of Yule-Walker equations, also called normal equations [2], which were solved some decades ago [8][5] with varying degrees of complexity. Expressing the set 3.14 in matrix form, we have

[ γ_yy(0)   γ_yy(1)    …  γ_yy(L)   ] [ h_0 ]   [ r_sy(0) ]
[ γ_yy(1)   γ_yy(0)    …  γ_yy(L-1) ] [ h_1 ] = [ r_sy(1) ]
[   ⋮          ⋮               ⋮    ] [  ⋮  ]   [    ⋮    ]
[ γ_yy(L)   γ_yy(L-1)  …  γ_yy(0)   ] [ h_L ]   [ r_sy(L) ]    (3.17)

or in vector form as

Γ_yy h = r_sy    (3.18)

Notice that the formulation 3.18 is still the same as 3.14, the familiar Yule-Walker or normal equations, whose solution yields the least-squares optimized filter coefficients h_i.

Moreover, if the optimized least-squares filter Ĥ(z) is to be the approximate inverse filter needed, then the desired response must be

ŝ(n) = s(n)    (3.19)

Hence the cross-correlation between s(n) and y(n) reduces to

r_sy(m) = y(0) for m = 0;  0 otherwise.    (3.20)

Thus equation 3.17 reduces to

[ γ_yy(0)   γ_yy(1)    …  γ_yy(L)   ] [ h_0 ]   [ y(0) ]
[ γ_yy(1)   γ_yy(0)    …  γ_yy(L-1) ] [ h_1 ] = [  0   ]
[   ⋮          ⋮               ⋮    ] [  ⋮  ]   [  ⋮   ]
[ γ_yy(L)   γ_yy(L-1)  …  γ_yy(0)   ] [ h_L ]   [  0   ]    (3.21)

or in vector form as

Γ_yy h = C    (3.22)

where h is the vector of filter coefficients and C = [y(0), 0, …, 0]^T.

Notice that Γ_yy is still Toeplitz, as it is both symmetric and has equal elements along each diagonal, making it readily invertible. Moreover, 3.22 is the product of the Toeplitz matrix Γ_yy and the column vector h. Since such Toeplitz autocorrelation matrices are invertible, 3.21 can readily be solved by Gaussian elimination, by the Levinson and Durbin algorithms, as well as by computer programming means.

In the Gaussian method, our target is to invert the matrix Γ_yy such that

Γ_yy⁻¹ Γ_yy h = Γ_yy⁻¹ C    (3.23)

i.e.

I h = Γ_yy⁻¹ C    (3.24)

or

h = Γ_yy⁻¹ C    (3.25)

where I is the identity matrix.

Computational complexity

The use of Gaussian elimination to solve a system of L equations for L unknowns requires L(L+1)/2 divisions, (2L³ + 3L² - 5L)/6 multiplications, and (2L³ + 3L² - 5L)/6 subtractions, for a total of approximately 2L³/3 operations. This means that it has a complexity of order L³, or O(L³) [9].

This algorithm can be used on a computer for systems with thousands of equations and unknowns. However, the cost becomes prohibitive for systems with millions of equations; these large systems are generally solved using iterative methods, and specific methods exist for systems whose coefficients follow a regular pattern [16]. Both the Levinson and Durbin algorithms exploit recursion and iteration to solve the Yule-Walker equations, with the key advantage that the computational complexity is reduced to order L² [9].

In this research, however, our objective is to achieve a further reduction in computational complexity, irrespective of the size of L, by using computer programming logic (Matlab) to write a program that:

- hides the computational complexity occasioned by the numerous equations encountered in this model development, or at least reduces the order further; and
- attempts to capture the entire offshore and onshore modeling processes in software for petroleum exploration.
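For concreteness, the inverse-filter normal equations (3.21) can be solved numerically. The sketch below is illustrative only (the paper's implementation is in MATLAB; the wavelet and filter length are made-up values): it builds the Toeplitz matrix Γ_yy from the autocorrelation (3.15), solves Γ_yy h = C with a generic O(L³) solver, and checks that h approximately inverts the wavelet. A Levinson-Durbin routine would exploit the Toeplitz structure to do the same in O(L²).

```python
import numpy as np

y = np.array([1.0, 0.6, 0.3, 0.1])   # hypothetical received wavelet y(n)

def gamma(m):
    """Autocorrelation gamma_yy(m) of the finite wavelet, as in (3.15)."""
    return float(np.sum(y[m:] * y[:len(y) - m])) if m < len(y) else 0.0

taps = 8                              # inverse-filter length (a made-up choice)
# Toeplitz matrix Gamma_yy: entry (i, j) depends only on |i - j|
G = np.array([[gamma(abs(i - j)) for j in range(taps)] for i in range(taps)])
c = np.zeros(taps)
c[0] = y[0]                           # right-hand side C = [y(0), 0, ..., 0]^T of (3.21)

h = np.linalg.solve(G, c)             # generic O(taps^3) solve, as in (3.25)
spike = np.convolve(y, h)             # should approximate a unit spike delta(n)
```

The convolution of the wavelet with the solved filter is close to a unit impulse, which is exactly the desired-response condition behind (3.19) and (3.20).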
ALG. 2: The Proposed Adaptive Least Squares Recursive Filter (ALSRDF)

In order to effectively handle the problem of estimating the least-squares coefficients as in 3.14 and 3.18, the following algorithm is formulated. Generally, the least-squares solution gave rise to a formulation of the form [8]:

Γ_yy h = r_sy    (3.27)

This research seeks ways to reduce the computation time by exploiting recursion as follows. Consider the input sequence s(n), the desired sequence u(n), and the digital filter coefficients to be updated, ω(n) = [ω_0(n), ω_1(n), …, ω_{L-1}(n)]^T, where L = filter length, configured into an adaptive filter. The estimate of the desired signal can then be modeled as the output of the filter:

û(n) = ω^T(n) s(n)    (3.28)

where û(n) is an estimate of the desired signal u(n) (the signal that characterizes the earth content). See fig. 3.4, below, for a typical transversal filter flow diagram.

Fig. 3.4: Configuration of an RLS adaptive filter, where û(n) is an estimate of the desired signal u(n).

But the filter error is given by

e(n) = u(n) - û(n) = u(n) - ω^T(n) s(n)    (3.29)

Minimization of the mean squared error means taking the expectation of the squared errors. That is,

E(e²(n)) = E{[u(n) - ω^T(n) s(n)]²} = σ_u² - 2 ω^T(n) r_us + ω^T(n) R_uu ω(n)    (3.30)

But we prefer the minimization of the least-squares error, which means

∂ Σ_n e²(n) / ∂ω = 0 for orthogonality    (3.31)

Recall that

ω = R_uu⁻¹ r_us    (3.32)

where R_uu is the autocorrelation matrix of the output signal and r_us is the cross-correlation between u(n) and s(n) [1]. Also, u(n) = [u(n), u(n-1), …, u(n-L+1)]^T. Hence, the matrix in 3.32 can be expressed in recursive form as

R_uu(n) = R_uu(n-1) + y(n) y^T(n)    (3.33)

Using an exponentially decaying (forgetting) process, we have

R_uu(n) = λ R_uu(n-1) + y(n) y^T(n)    (3.34)

Hence the recursive realization of the time-update formulae is given in inverse-matrix form as

R_uu⁻¹(n) = R_uu⁻¹(n-1) + update(n)    (3.35)

ALG. 3: The Proposed Hybrid LSE/LMS Algorithm

Step 1: Least Squares Problem Formulation

Consider a finite set of observations {s(n)} and {u(n)}, where {u(n)} is the set of all past samples from n = 0 to now. We define three deterministic cost functions:

1. ℰ_LSE(c) = Σ_{k=0}^{n} e_k²    (3.36), where e(k) = s(k) - u(k)    (3.37)

2. ℰ_LMS(c) = E[e_n²]    (3.38)

3. ℰ_WLSE(c, n) = Σ_{k=n-M+1}^{n} λ^{n-k} e_k²    (3.39)

Problems 1, 2 and 3 can essentially be given the following optimal solutions, as modeled in 4, 5 and 6 below, respectively.

4. ĉ_LSE = arg min_c ℰ_LSE(c)    (3.40)

which essentially means: find those filter coefficients that minimize the cost function in problem 1;

5. ĉ_LMS = arg min_c ℰ_LMS(c)    (3.41)

which similarly means: find those filter coefficients that minimize the cost function in problem 2; and

6. ĉ[n] = arg min_c ℰ_WLSE(c, n), where the function ℰ_WLSE(c, n) = Σ_{k=n-L+1}^{n} λ^{n-k} e[k]² is such that λ is the forgetting factor and 0 < λ ≤ 1.

Step 2: Hybrid Model Formulation

We now formulate a hybrid optimum solution

ĉ_{LMS,WLSE} = arg min_c ℰ_{LMS,WLSE}(c)    (3.42)

such that 3.42 combines the Least Mean Square and Weighted Least Squares optimization advantages, where the cost function associated with 3.42 is formulated as

ℰ_{MSE,LSE}(c, n) = E[ℰ(c, n)] = E[ Σ_{k=n-L+1}^{n} λ^{n-k} e[k]² ]    (3.43)

which can be manipulated such that

ℰ_{LMS,LSE}(c, n) = Σ_{k=n-L+1}^{n} λ^{n-k} E[ e[k]² ]

Notice that the LMS strategy takes the expectation of e(k)² directly, while the WLSE takes the sum of the terms e(k)² weighted by an exponential weighting factor λ^{n-k}, 0 ≤ λ ≤ 1; here the forgetting factor is set to the default, λ = 1.

Now, since s(n) and u(n) are assumed jointly stationary, stochastic, zero-mean processes [6], we specify:

- the autocorrelation function r_ss(k) = E[s(n) s(n-k)]    (3.44)
- the autocorrelation matrix R_ss = E{s[n] s^T[n]}    (3.45)
- the corresponding cross-correlation vector r_su = E{u(n) s(n)}    (3.46)

Assume that u[n] is the output of a linear FIR filter with input s[n]. Then

u[n] = h^T s[n]    (3.47)

and

dim(h) = dim(c)    (3.48)

Note: Reference [6] gives more details from a similar process.

The desired signal u(n) can be modeled using systems identification principles [6] as

u[n] = h^H s[n] + v[n]    (3.49)

where h is the impulse response of the system to be identified and v(n) is additive noise superimposed on the input signal, as fig. 3.5, below, shows.

Fig. 3.5: Systems identification problem in a noisy environment.

Then, from 3.43, the hybrid cost function expands to

ℰ(c) = E[|u[n]|²] - 2 c^H r_su + c^H R_uu c    (3.50)

Moreover, the gradient of the cost function with respect to the coefficient vector c, according to [6], is given as

∇_c ℰ_{MSE,LSE}(c) = 2 (R_uu c - r_su)    (3.51)

Notice that instead of inverting the autocorrelation matrix R_uu as we did before, the Gradient Search Method avoids the computational complexity associated with matrix inversion by using iteration to update the coefficient vector c(n). This results in the coefficient update rule

c[n] = c[n-1] + µ (r_su - R_uu c[n-1])    (3.52)

where µ is a step-size parameter [6] and the negative gradient is the term

r_su - R_uu c(n-1) = -∇_c ℰ_{MSE,WLSE}(c) |_{c = c(n-1)}    (3.53)
3.2 PERFORMANCE COMPARISON BY SIMULATION

The following algorithm is used to compare the performance of these algorithms:

Algorithm for Comparison of Adaptive Filtering Algorithmic performances

Step 1: Create the Signals for Adaptation; Step 2: Generate a noisy signal;
Step 3: Corrupt the Desired Signal by adding the Noisy Signal;
Step 4: Create a reference signal that is highly correlated with the signal in step 2 above [14].
Step 5: Construct adaptive filters based on proposed algorithms, viz:

Adaptive Least Mean Square (ALMS) and Normalized Adaptive Least Mean Square (NALMS);

Conventional Recursive Least squares (CRLS) and Adaptive Recursive Least Squares (ARLS);

Improved ARLS;

Hybrid LMS/RLS.
Step 6: Graphically display their output sinusoids for comparison and performance evaluation with respect to the ideal standardized Wiener output.
Step 7: Investigate convergence using algorithmic learning curves.
Step 8: End.
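Steps 1-8 above can be sketched as follows (a Python/NumPy stand-in for the paper's MATLAB harness; the signal, reference channel, filter length and step size are hypothetical choices), comparing LMS and Normalized LMS noise cancellers through their squared-error learning curves:

```python
import numpy as np

rng = np.random.default_rng(42)
N, L, mu = 4000, 8, 0.02

# Steps 1-3: desired sinusoid, a noisy signal, and the corrupted observation
n = np.arange(N)
desired = np.sin(0.05 * np.pi * n)
noise = rng.standard_normal(N)
observed = desired + noise

# Step 4: a reference highly correlated with the noise (here: the noise
# passed through a short hypothetical FIR channel)
ref = np.convolve(noise, [1.0, 0.5, 0.25])[:N]

def anc(ref, observed, L, mu, normalized=False):
    """Step 5: adaptive noise canceller; the error e is the cleaned signal."""
    w = np.zeros(L)
    e = np.zeros(len(ref))
    for i in range(L, len(ref)):
        x = ref[i - L + 1:i + 1][::-1]     # most recent reference samples
        e[i] = observed[i] - w @ x         # canceller output
        step = mu / (1e-6 + x @ x) if normalized else mu
        w += step * e[i] * x               # LMS / NLMS coefficient update
    return e

e_lms = anc(ref, observed, L, mu)
e_nlms = anc(ref, observed, L, mu, normalized=True)
curve_lms, curve_nlms = e_lms**2, e_nlms**2   # Step 7: learning curves
```

After convergence, each canceller's output tracks the desired sinusoid, and plotting `curve_lms` against `curve_nlms` (Steps 6-7) exposes their relative convergence speeds.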
3.3 PROPOSED DECONVOLUTION SYSTEMS ARCHITECTURE
The following block diagram gives a schematic for deconvolving seismic sequences.
In effect, the proposed system accepts as input the reflected sequences due to an explosion from an oil well, compares them with a pilot sequence, and processes both sequences with a downsample factor k = 32, leaving the choice of algorithm (from at least 5 different algorithms) and the adaptation step-size selection to the user.
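The downsample factor k = 32 amounts to decimation of the processed sequences. A minimal sketch (illustrative; the system's actual anti-alias pre-filtering is not described in the paper):

```python
import numpy as np

k = 32                                          # downsample factor used by the system
x = np.sin(0.001 * np.pi * np.arange(32_000))   # hypothetical input sequence

# Naive decimation: keep every k-th sample. A practical system would
# low-pass filter first to avoid aliasing.
x_ds = x[::k]
```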

4. SYSTEMS IMPLEMENTATION/SIMULATION WITH MATLAB

In this section, the implementation of an Adaptive Least Squares Digital Filter model for oil prospecting in both offshore and onshore environments is considered. The systems realization strategy is summarized below:

WHY MATLAB FOR IMPLEMENTATION
The programming language employed for the implementation of the system is Matlab 7.9. The reasons that informed the use of Matlab for the implementation of adaptive least-squares digital filters include the ease of function and data plotting, an inherent numerical computing environment, easy database design, manipulation and query processing, inbuilt Graphical User Interface (GUI) tools, and synergy with C, C++, JAVA, FORTRAN, SIMULINK, etc.

Graphical User Interface (GUI)
This serves as a link between the intended users and the intricacies of the software and hardware components of the system. The GUI hides from end users the complex communication between the implemented systems software and the computer's hardware, making it possible for an end user who is uninformed about the workings of the machine hardware to place a query in plain language and get an instant, or near-instant, response.

System Requirements:


The software used is MATLAB(R), The Language of Technical Computing, version 7.9.0.529 (R2009b), 32-bit (win32), August 12, 2009, License number: 161051.

Windows operating system (preferably Windows 2000 and later variants).

Platform and systems requirements:
Windows 32-bit, Windows 64-bit, Mac OS X 64-bit and Linux 64-bit are supported.

IMPLEMENTATION OF THE PROPOSED COMPOSITE MODEL (IN MATLAB)

DATABASE FOR THE PROPOSED SIMULATION

The modeled algorithms in section 3 are implemented with Matlab R2009b with the aim of assisting in the deconvolution of highly convolved seismic traces or sequences. Most of the areas in Northern Nigeria, like the Kukawa axis of the Borno Basins, the Chad Basin, and the Bida Basins, do not have exploration data available for open-source use. Hence equivalent terrains were sought using Google Earth and other prospecting tools. In this respect, therefore, the data for this simulation is sourced from The MathWorks Inc., USA. They are tabulated in tables 4.1 and 4.2 (see appendices) and are repeatedly referred to during the coding process.
Data set 1:

| Data record | No. of iterations (L) | Filter order (L) | Step size (mu) | Block length (n) | Input to adaptive filter (x) |
|---|---|---|---|---|---|
| 1 | 100 | 2 | 0.001 | 1 | randn(1,100) |
| 2 | 200 | 4 | 0.002 | 2 | randn(1,200) |
| 3 | 300 | 6 | 0.003 | 3 | randn(1,300) |
| 4 | 400 | 8 | 0.004 | 4 | randn(1,400) |
| 5 | 500 | 10 | 0.005 | 5 | randn(1,500) |
| 6 | 600 | 13 | 0.006 | 6 | randn(1,600) |
| 7 | 700 | 14 | 0.007 | 7 | randn(1,700) |
| 8 | 800 | 16 | 0.008 | 8 | randn(1,800) |
| 9 | 900 | 18 | 0.009 | 9 | randn(1,900) |
| 10 | 1000 | 20 | 0.010 | 10 | randn(1,1000) |
| 11 | 1100 | 22 | 0.011 | 11 | randn(1,1100) |
| 12 | 1200 | 24 | 0.012 | 12 | randn(1,1200) |
| 13 | 1300 | 26 | 0.013 | 13 | randn(1,1300) |
| 14 | 1400 | 28 | 0.014 | 14 | randn(1,1400) |
| 15 | 1500 | 30 | 0.015 | 15 | randn(1,1500) |
| 16 | 1600 | 32 | 0.016 | 16 | randn(1,1600) |
| 17 | 1700 | 34 | 0.017 | 17 | randn(1,1700) |
| 18 | 1800 | 36 | 0.018 | 18 | randn(1,1800) |
| 19 | 1900 | 38 | 0.019 | 19 | randn(1,1900) |
| 20 | 2000 | 40 | 0.020 | 20 | randn(1,2000) |
| 21 | 2200 | 42 | 0.040 | 21 | randn(1,2200) |
| 22 | 2400 | 44 | 0.080 | 22 | randn(1,2400) |
| 23 | 2600 | 46 | 0.120 | 23 | randn(1,2600) |
| 24 | 2800 | 48 | 0.160 | 24 | randn(1,2800) |
| 25 | 3000 | 50 | 0.200 | 25 | randn(1,3000) |

Table 4.1: Parametric datasets for simulating adaptive filtering algorithms.
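The first rows of Table 4.1 follow a simple arithmetic pattern (iterations = 100 x record, filter order = 2 x record, step size = 0.001 x record, block length = record). A sketch that regenerates records 1-5 programmatically (an observation about the table, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
records = []
for r in range(1, 6):                         # records 1-5 of Table 4.1
    records.append({
        "iterations": 100 * r,
        "filter_order": 2 * r,
        "mu": 0.001 * r,
        "block_length": r,
        "x": rng.standard_normal((1, 100 * r)),  # input to the adaptive filter
    })
```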
The Conventional Least Mean Square (CLMS), Normalized LMS, and related extant algorithms are all integrated through Matlab coding.
4.2.3. Graphical User Interface:
The user interface of the implemented seismic oil exploration system is presented below
Fig. 4.1: Graphical User Interface for the proposed composite seismic deconvolution system.
Fig. 4.2: Log on screen/user authentication interface.

Algorithmic Simulation by Matlab R2009b

This study proposed a hybrid LMS/RLS adaptive filtering algorithm, an Adaptive RLSE algorithm and an improved LSE algorithm. However, owing to the desire to model a composite deconvolution system for onshore and offshore seismic sequence deconvolution, extant algorithms like the Conventional LMS and the Normalized LMS are incorporated as well.
Fig.4.3: Interactive Algorithm selection screen.
Fig. 4.4: Desktop Display screen
Fig. 4.5: Logon screen after choice algorithm has been selected.

Algorithmic Outputs
1. Application of the Hybrid LMS/LSE algorithm on data records.
Fig. 4.6: Matlab plot of record 10 from Data set 1
Fig. 4.7: Application of the Hybrid LMS/LSE algorithm to record 10 of Data Set 1 (for 8000 iterations).
Fig. 4.8: Plot of the LSE resulting from the operation on record 10 of Data set 1 using the hybrid for an explosive number of iterations (10,000).
Fig.4.9a: Frequency spectrum of enhanced signal
Fig. 4.9b: Time spectrum of hybrid enhanced signal
Fig. 4.10: Offshore deconvolution of record 28 of Data set 2 (i.e. depths greater than 200 m below sea level) using the hybrid.

Application of the Conventional LMS algorithm on data records.
Fig. 4.11a: Deconvolution of record 10 of data set 2
Fig. 4.11b: Deconvolution of record 19 of data set 1 in a simulated onshore environment (i.e. on land or marshy fields/shallow waters).

Application of the Improved (Adaptive) Recursive Least Squares Algorithm to seismic data records:
Fig. 4.12: Onshore deconvolution of record no.18 of data set 2.

Application of Predictive Deconvolution techniques on data records
Fig. 4.13: Predictive deconvolution of record 18 of data set 2.

Graphical Comparison of Algorithmic Outputs
Fig. 4.14: Use of record 10 of data set 1 for algorithmic comparison
Fig. 4.15: Comparison of the ALMS, Adaptive Hybrid LMS/RLS and the ARLS algorithmic outputs with the standard Wiener filter output
(based on effects on the sinusoid of record 18 of data set 2).
Fig. 4.16: Comparison of the ARLS, the Adaptive Hybrid LMS/RLS and the standard Wiener Deconvolution filters output (based on effects on the sinusoid of record 18 of data set 2).
Fig.4.17: Comparison of the Conventional RLS algorithm, the Adaptive Hybrid LMS/RLS and the standard Wiener Deconvolution filters output ( based on effects on the sinusoid of record 18 of data set 2).

GRAPHICAL INVESTIGATION OF ALGORITHMIC CONVERGENCE USING LEARNING CURVES WITH RESPECT TO THE MSE OF EACH ALGORITHM
Fig. 4.18: Comparison between the LMSs and the Hybrid LMS/RLS Algorithmic learning curves based on record no.40 of data set 2, adapted to double the maximum sample number.
Fig. 4.19: Comparison of the RLSE and the conventional LSE algorithmic outputs using their mean square errors and learning curves for record 40 of data set 2.


Deductions:
Algorithmic outputs based on data set 1 in section 4 and data set 2 in the appendices were presented graphically in terms of learning curves and wavelets. This choice was informed by the fact that tabulating each point on all those graph plots would consume considerably more space and would be cumbersome for a layman to interpret.

CONCLUSION/RECOMMENDATIONS

CONCLUSION
The least-squares criterion and its application to the modeling of least-squares digital filters is unique and offers several performance prospects. This research developed a hybrid adaptive least squares error (LSE) / least mean square error model and its accompanying algorithm for handling signal deconvolution in both offshore and onshore exploration terrains. The research also developed a composite model that combines all the proposed algorithms with some extant algorithms. The model eases the deconvolution of seismic traces by the user's choice of algorithm, taking advantage of the easy-to-use graphical user interface designed with Matlab for the proposed composite seismic sequence deconvolution system. This system equally allows the comparison of algorithmic efficiencies by plotting their learning curves and testing for convergence with respect to the standardized Wiener filter coefficients.
The designed and implemented composite seismic data deconvolution system was simulated with the help of data sets obtained from The MathWorks Inc., USA and the Marine Geosciences Data System (MGDS), Canada. Results are displayed graphically for visual impact.

CONTRIBUTIONS OF THE RESEARCH TO KNOWLEDGE
The research has been able to:

- study the existing Least Mean Squares (LMS) and Recursive Least Squares (RLS) adaptive filtering models and develop a hybrid LMS/RLS model;

- provide a hybrid LMS and RLS adaptive filtering algorithm, and thus pioneer the concatenation of these two digital filter coefficient adaptation techniques, combining their respective advantages for improved signal analysis for oil prospecting; and

- develop a composite block model that combines the proposed hybrid LMS/RLS algorithm with existing adaptive filtering algorithms to make for multi-algorithm-based software for seismic deconvolution in both offshore and onshore scenarios.


Recommendations/Future Research
Further research on the design and implementation of adaptive filters should be sponsored to ensure that tools such as fuzzy logic, genetic programming, etc. are incorporated for better results, while putting state-of-the-art seismic tools and equipment to use in a standardized computer laboratory.
Appendices:

Data set 2: Signal processing data parameters

| Record | No. of samples (n) | Desired sinusoid S(n) | Filter length (L) | Step size (µ) | Autoregressive coeff. (ar) | Moving average (ma) | Sample realizations (nr) | Decimation factor (M) | Reflectivity sequence (v) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | (1:10) | sin(0.01*pi*n) | 1 | 0.001 | [1, 1/2] | [1, 0.8, 0.4, 0.2] | 1 | 5 | 0.8*randn(10,1) |
| 2 | (1:20) | sin(0.05*pi*n) | 4 | 0.002 | [1, 1/4] | [1, 0.6, 0.2, 0.4] | 1 | 5 | 0.6*randn(20,1) |
| 3 | (1:30) | sin(0.10*pi*n) | 7 | 0.003 | [1, 1/8] | [0.25, 0.25, 0.25, 0.25] | 3 | 5 | 0.25*randn(30,1) |
| 4 | (1:40) | sin(0.15*pi*n) | 10 | 0.004 | [2, 1/2] | [0.2, 0.4, 0.4, 0.2] | 6 | 5 | 0.4*randn(40,1) |
| 5 | (1:50) | sin(0.20*pi*n) | 13 | 0.005 | [2, 1/4] | [1, 0.125, 0.125, 1] | 10 | 5 | 0.125*randn(50,1) |
| 6 | (1:60) | sin(0.25*pi*n) | 16 | 0.006 | [2, 1/8] | [1, 0.8, 0.4, 0.2] | 15 | 10 | 0.8*randn(60,1) |
| 7 | (1:70) | sin(0.30*pi*n) | 19 | 0.007 | [1, 1/2] | [1, 0.6, 0.2, 0.4] | 21 | 10 | 0.6*randn(70,1) |
| 8 | (1:80) | sin(0.35*pi*n) | 22 | 0.008 | [1, 1/4] | [1, 0.8, 0.4, 0.2] | 28 | 10 | 0.25*randn(80,1) |
| 9 | (1:90) | sin(0.40*pi*n) | 25 | 0.009 | [1, 1/8] | [1, 0.6, 0.2, 0.4] | 36 | 10 | 0.4*randn(90,1) |
| 10 | (1:100) | sin(0.45*pi*n) | 28 | 0.01 | [2, 1/2] | [0.25, 0.25, 0.25, 0.25] | 45 | 10 | 0.125*randn(100,1) |
| 11 | (1:150) | sin(0.50*pi*n) | 31 | 0.011 | [2, 1/4] | [0.2, 0.4, 0.4, 0.2] | 75 | 15 | 0.8*randn(150,1) |
| 12 | (1:200) | sin(0.55*pi*n) | 34 | 0.012 | [2, 1/8] | [1, 0.125, 0.125, 1] | 110 | 15 | 0.6*randn(200,1) |
| 13 | (1:250) | sin(0.60*pi*n) | 37 | 0.013 | [1, 1/2] | [1, 0.8, 0.4, 0.2] | 150 | 15 | 0.25*randn(250,1) |
| 14 | (1:300) | sin(0.65*pi*n) | 40 | 0.014 | [1, 1/4] | [1, 0.6, 0.2, 0.4] | 195 | 15 | 0.4*randn(300,1) |
| 15 | (1:350) | sin(0.70*pi*n) | 43 | 0.015 | [1, 1/8] | [1, 0.8, 0.4, 0.2] | 245 | 15 | 0.125*randn(350,1) |
| 16 | (1:400) | sin(0.75*pi*n) | 46 | 0.016 | [2, 1/2] | [1, 0.6, 0.2, 0.4] | 300 | 20 | 0.8*randn(400,1) |
| 17 | (1:450) | sin(0.80*pi*n) | 49 | 0.017 | [2, 1/4] | [0.25, 0.25, 0.25, 0.25] | 360 | 20 | 0.6*randn(450,1) |
| 18 | (1:500) | sin(0.85*pi*n) | 52 | 0.018 | [2, 1/8] | [0.2, 0.4, 0.4, 0.2] | 425 | 20 | 0.25*randn(500,1) |
| 19 | (1:550) | sin(0.90*pi*n) | 55 | 0.019 | [1, 1/2] | [1, 0.125, 0.125, 1] | 495 | 20 | 0.4*randn(550,1) |
| 20 | (1:600) | sin(0.95*pi*n) | 58 | 0.02 | [1, 1/2] | [1, 0.8, 0.4, 0.2] | 570 | 20 | 0.125*randn(600,1) |
| 21 | (1:600) | sin(1.0*pi*n) | 61 | 0.021 | [1, 1/4] | [1, 0.6, 0.2, 0.4] | 650 | 20 | 0.8*randn(650,1) |
| 22 | (1:600) | sin(0.01*pi*n) | 64 | 0.022 | [1, 1/8] | [1, 0.8, 0.4, 0.2] | 7 | 5 | 0.6*randn(700,1) |
| 23 | (1:600) | sin(0.005*pi*n) | 67 | 0.023 | [2, 1/2] | [1, 0.6, 0.2, 0.4] | 4 | 5 | 0.25*randn(750,1) |

Table 4.2: Data for systems simulation.

REFERENCES

H. Malani, "Systems identification through RLS adaptive filters," National Conference on Innovative Paradigms in Engineering and Technology (NCIPET 2012), proceedings published by International Journal of Computer Applications (IJCA), ICT Dept., MLVTEC, Bhilwara, 2012.

J. G. Proakis and D. G. Manolakis, Introduction to Digital Signal Processing, Macmillan, New York, 1999.

J. G. Proakis and D. G. Manolakis, Introduction to Digital Signal Processing, Macmillan, London, 1988.

J. G. Proakis and D. G. Manolakis, Digital Signal Processing, 4th Edition, Prentice Hall, 2007.

J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, 3rd Edition, Prentice Hall, Upper Saddle River, NJ, 1996.

F. Feldbauer and G. Geiger, "Adaptive systems, problem classes," Signal Processing and Speech Communication Laboratory, Inffeldgasse 16c/EG, Graz; last modified October 30, 2012.

L. T. Ikelle and L. Amundsen, Introduction to Petroleum Seismology, Society of Exploration Geophysicists, ISBN 1-56080-129-8, 2005.

J. Makhoul, "Linear prediction: a tutorial review," Proceedings of the IEEE, Vol. 63, No. 4, pp. 561-580, April 1975.

G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd Edition, Johns Hopkins University Press, 1996.

Encyclopaedia Britannica, 2007.

San Joaquin Geological Society, "The San Joaquin Valley," Pacific Section AAPG, P.O. Box 1072, Bakersfield, CA 93302, 2008.

The MathWorks Inc., USA, 2013.

J. Makhoul, The Theory of Linear Prediction, California Institute of Technology, USA, 1975.

S. Haykin, Adaptive Filter Theory, 3rd Edition, Prentice Hall, New Jersey, 1996.

S. Gow, The Evolution of Oil Well Drilling Technology in Alberta, 1883-1970, University of Calgary Press, Business & Economics, 451 pp., 2005.

S. Lipschutz and M. Lipson, Schaum's Outline of Theory and Problems of Linear Algebra, 2nd Edition, McGraw-Hill Professional, 2001.

C. Moler, "The Growth of MATLAB and The MathWorks over Two Decades," http://www.mathworks.com/company/newsletters/news_notes/clevescorner/jan06.pdf (retrieved August 18, 2008).

R. Goering, "Matlab edges closer to electronic design automation world," EE Times, April 2004.
Data set 2: Signal Processing Data Parameters (records 24-42).

| Record | No. of samples (n) | Desired sinusoid s(n) | Filter length (L) | Step size (µ) | Autoregressive coeff. (ar) | Moving average (ma) | Sample realizations (nr) | Decimation factor (M) | Reflectivity sequence (v) |
|---|---|---|---|---|---|---|---|---|---|
| 24 | (1:600) | sin(0.010*pi*n) | 70 | 0.024 | [2,1/4] | [0.25,0.25,0.25,0.25] | 8 | 5 | 0.4*randn(800,1) |
| 25 | (1:600) | sin(0.015*pi*n) | 73 | 0.025 | [2,1/8] | [0.2,0.4,0.4,0.2] | 13 | 5 | 0.125*randn(850,1) |
| 26 | (1:600) | sin(0.020*pi*n) | 76 | 0.026 | [1,1/2] | [1,0.125,0.125,1] | 18 | 5 | 0.8*randn(900,1) |
| 27 | (1:600) | sin(0.025*pi*n) | 79 | 0.027 | [1,1/4] | [1,0.8,0.4,0.2] | 24 | 10 | 0.6*randn(950,1) |
| 28 | (1:600) | sin(0.030*pi*n) | 82 | 0.028 | [1,1/8] | [1,0.6,0.2,0.4] | 30 | 10 | 0.25*randn(1000,1) |
| 29 | (1:600) | sin(0.035*pi*n) | 85 | 0.029 | [2,1/2] | [1,0.8,0.4,0.2] | 39 | 10 | 0.4*randn(1100,1) |
| 30 | (1:600) | sin(0.040*pi*n) | 88 | 0.03 | [2,1/4] | [1,0.6,0.2,0.4] | 48 | 10 | 0.125*randn(1200,1) |
| 31 | (1:600) | sin(0.045*pi*n) | 91 | 0.031 | [2,1/8] | [0.25,0.25,0.25,0.25] | 59 | 10 | 0.8*randn(1300,1) |
| 32 | (1:600) | sin(0.050*pi*n) | 94 | 0.032 | [1,1/2] | [0.2,0.4,0.4,0.2] | 70 | 15 | 0.6*randn(1400,1) |
| 33 | (1:600) | sin(0.055*pi*n) | 97 | 0.033 | [1,1/4] | [1,0.125,0.125,1] | 83 | 15 | 0.25*randn(1500,1) |
| 34 | (1:600) | sin(0.060*pi*n) | 100 | 0.034 | [1,1/8] | [1,0.8,0.4,0.2] | 96 | 15 | 0.4*randn(1600,1) |
| 35 | (1:600) | sin(0.065*pi*n) | 103 | 0.035 | [2,1/2] | [1,0.6,0.2,0.4] | 111 | 15 | 0.125*randn(1700,1) |
| 36 | (1:600) | sin(0.070*pi*n) | 106 | 0.036 | [2,1/4] | [1,0.8,0.4,0.2] | 126 | 15 | 0.8*randn(1800,1) |
| 37 | (1:600) | sin(0.075*pi*n) | 109 | 0 | [2,1/8] | [1,0.6,0.2,0.4] | 143 | 20 | 0.6*randn(1900,1) |
| 38 | (1:600) | sin(0.080*pi*n) | 112 | 0 | [1,1/2] | [0.25,0.25,0.25,0.25] | 160 | 20 | 0.25*randn(2000,1) |
| 39 | (1:600) | sin(0.085*pi*n) | 115 | 0 | [1,1/2] | [0.2,0.4,0.4,0.2] | 192 | 20 | 0.4*randn(2250,1) |
| 40 | (1:600) | sin(0.090*pi*n) | 118 | 0 | [1,1/4] | [1,0.125,0.125,1] | 225 | 20 | 0.125*randn(2500,1) |
| 41 | (1:600) | sin(0.095*pi*n) | 121 | 0 | [1,1/8] | [1,0.8,0.4,0.2] | 261 | 20 | 0.8*randn(2750,1) |
| 42 | (1:600) | sin(0.100*pi*n) | 124 | 0 | [2,1/2] | [1,0.6,0.2,0.4] | 300 | 5 | 0.6*randn(3000,1) |
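Each record above parameterises one simulation run: a reflectivity sequence v is passed through the ARMA channel defined by the ar and ma coefficients to produce a reverberated trace, which an adaptive filter of length L and step size µ then deconvolves against the desired waveform. The paper's implementation is in MATLAB; as a rough illustration only, a Python/NumPy sketch of how record 1 of the table could drive a basic LMS deconvolution (the function and variable names are hypothetical, not the authors' code):

```python
import numpy as np

def lms_deconvolve(x, d, L, mu):
    """Basic LMS adaptive filter: adapt weights w so that w*x tracks d.
    Returns the filter output y, the error sequence e, and the final w."""
    N = len(x)
    w = np.zeros(L)
    y = np.zeros(N)
    e = np.zeros(N)
    for k in range(N):
        # Tap vector [x[k], x[k-1], ..., x[k-L+1]], zero-padded at the start.
        u = np.array([x[k - i] if k - i >= 0 else 0.0 for i in range(L)])
        y[k] = w @ u                  # filter output
        e[k] = d[k] - y[k]            # estimation error
        w = w + 2 * mu * e[k] * u     # LMS coefficient update
    return y, e, w

# Parameters read from record 1 of Table 4.2.
rng = np.random.default_rng(0)
n = np.arange(1, 11)                  # n = (1:10)
s = np.sin(0.01 * np.pi * n)          # desired sinusoid s(n)
v = 0.8 * rng.standard_normal(10)     # reflectivity v = 0.8*randn(10,1)
ar, ma = [1, 1/2], [1, 0.8, 0.4, 0.2] # ARMA channel coefficients

# Shape the reflectivity with the MA part of the channel (the full ARMA
# filtering would be scipy.signal.lfilter(ma, ar, v)); here only the MA
# convolution is shown, truncated to the trace length.
x = np.convolve(v, ma)[: len(v)]      # reverberated trace

y, e, w = lms_deconvolve(x, s, L=1, mu=0.001)
```

With L = 1 and µ = 0.001 as in record 1, the single coefficient adapts slowly; the larger L and µ values in later records trade convergence speed against stability, which is the comparison the composite system visualises.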
Acknowledgement:
The lead author, Mbang, U. B., wishes to acknowledge the moral support of his lovely wife, Mrs. Kebe Uba Bassey; his mother, Madam Christiana Bassey; and his three children: Master Goodsuccess U. B. Yessababu, Master Evergreen U. B. Yessababu and Miss Fruitfulvine U. B. Yessababu. May the Almighty God bless them all, amen.