 Authors : Mr. P. Ramesh, Mr. D. Naresh Kumar, Mr. M. Srikanth
 Paper ID : IJERTV2IS111129
 Volume & Issue : Volume 02, Issue 11 (November 2013)
Published (First Online): 03-12-2013
ISSN (Online) : 2278-0181
 Publisher Name : IJERT
 License: This work is licensed under a Creative Commons Attribution 4.0 International License
Wavelet Based Satellite Image Change Detection Using Level Sets Algorithm
Mr. P. Ramesh, Assistant Professor, MLRIT
Mr. D. Naresh Kumar, Assistant Professor, MLRIT
Mr. M. Srikanth, Assistant Professor, MLRIT
Abstract
In this paper, satellite image change detection is performed using the level sets algorithm. To obtain robustness against noise, the wavelet transform is exploited to obtain a multiresolution representation of the difference image, which is computed from two satellite images acquired over the same geographical area at different time instances. A level set algorithm is then applied to the multiresolution representation of the difference image to segment it into changed and unchanged regions. The proposed change detection method has been evaluated on satellite images. The simulation results clearly show that the proposed change detection method using the level sets algorithm gives good performance compared to Fourier analysis.

INTRODUCTION
Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing & experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches & quickly prototype candidate solutions generally plays a major role in reducing the cost & time required to arrive at a viable system implementation.
An image may be defined as a two-dimensional function f(x, y), where x & y are spatial coordinates, & the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y & the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of DIP refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location & value. These elements are called pixels.
Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.
There is no general agreement among authors regarding where image processing stops & other related areas such as image analysis & computer vision start. Sometimes a distinction is made by defining image processing as a discipline in which both the input & output of a process are images. This is a limiting & somewhat artificial boundary. The area of image analysis (image understanding) is in between image processing & computer vision.
There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, & high-level processes. Low-level processing involves primitive operations such as image processing to reduce noise, contrast enhancement & image sharpening. A low-level process is characterized by the fact that both its inputs & outputs are images. Mid-level processing on images involves tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, & classification of individual objects. A mid-level process is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. Finally, higher-level processing involves making sense of an ensemble of recognized objects, as in image analysis, & at the far end of the continuum, performing the cognitive functions normally associated with human vision. Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social & economic value.

Image Introduction
An image is represented as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.

Gray scale image
A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane: I(x, y) is the intensity of the image at the point (x, y) on the image plane. I takes non-negative values; assuming the image is bounded by a rectangle [0, a] x [0, b], we have I: [0, a] x [0, b] -> [0, inf).
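Concretely, a digital grayscale image is such a function sampled on a finite grid. A minimal NumPy sketch (Python rather than the paper's MATLAB; the 4x4 intensity values are illustrative):

```python
import numpy as np

# A 4x4 grayscale image: I[r, c] is the intensity at row r, column c.
I = np.array([
    [0,   32,  64,  96],
    [32,  64,  96, 128],
    [64,  96, 128, 160],
    [96, 128, 160, 192],
], dtype=np.uint8)

# Intensities are non-negative and bounded, matching I: [0,a] x [0,b] -> [0, inf).
assert I.min() >= 0
print(I[1, 2])  # intensity at row 1, column 2 -> 96
```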

Color image
It can be represented by three functions, R(x, y) for red, G(x, y) for green and B(x, y) for blue. An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called quantization.
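Sampling and quantization can be illustrated in a few lines. The sketch below (Python, with illustrative values) samples a continuous ramp at 8 points and then quantizes the amplitudes to 4 gray levels:

```python
import numpy as np

# Sampling: take 8 discrete spatial samples of a continuous ramp in [0, 1].
samples = np.linspace(0.0, 1.0, 8)

# Quantization: snap each continuous amplitude to one of 4 discrete levels.
levels = 4
quantized = np.round(samples * (levels - 1)) / (levels - 1)

print(quantized)  # every value lies in {0, 1/3, 2/3, 1}
```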

Coordinate convention
The result of sampling and quantization is a matrix of real numbers. We use two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns. We say that the image is of size M x N. The values of the coordinates (x, y) are discrete quantities. For notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0). The next coordinate values along the first row of the image are (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to signify the second sample along the first row; it does not mean that these are the actual values of the physical coordinates when the image was sampled. The following figure shows the coordinate convention. Note that x ranges from 0 to M-1 and y from 0 to N-1 in integer increments.
The coordinate convention used in the toolbox to denote arrays is different from the preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is the same as the order discussed in the previous paragraph, in the sense that the first element of a coordinate tuple, (r, c), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. The IPT documentation refers to these as pixel coordinates. Less frequently, the toolbox also employs another coordinate convention, called spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of our use of the variables x and y.
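The two conventions can be contrasted directly. A small NumPy sketch (Python uses 0-based indexing, unlike the toolbox's 1-based (r, c) = (1, 1) origin; the array values are illustrative):

```python
import numpy as np

# An M x N image: M rows, N columns.
M, N = 3, 5
img = np.arange(M * N).reshape(M, N)

# Array (pixel) convention: img[r, c] -> row r, column c.
r, c = 1, 3

# Spatial convention swaps the roles: x indexes columns, y indexes rows,
# so the same pixel is addressed as (x, y) = (c, r).
x, y = c, r
assert img[r, c] == img[y, x]
print(img[r, c])  # -> 8
```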


WAVELETS
A. FOURIER ANALYSIS
Signal analysts already have at their disposal an impressive arsenal of tools. Perhaps the best known of these is Fourier analysis, which breaks down a signal into constituent sinusoids of different frequencies, as shown in Fig (1). Another way to think of Fourier analysis is as a mathematical technique for transforming our view of the signal from time-based to frequency-based.
Fig (1)
For many signals, Fourier analysis is extremely useful because the signal's frequency content is of great importance. So why do we need other techniques, like wavelet analysis?
Fourier analysis has a serious drawback. In transforming to the frequency domain, time information is lost. When looking at a Fourier transform of a signal, it is impossible to tell when a particular event took place. If the signal properties do not change much over time, that is, if it is what is called a stationary signal, this drawback isn't very important. However, most interesting signals contain numerous non-stationary or transitory characteristics: drift, trends, abrupt changes, and beginnings and ends of events. These characteristics are often the most important part of the signal, and Fourier analysis is not suited to detecting them.
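This loss of time information is easy to demonstrate: circularly shifting an event in time changes only the phase of the Fourier transform, not its magnitude. A NumPy sketch (the signal is illustrative):

```python
import numpy as np

# Two signals containing the same brief sinusoidal burst, at different times.
n = 256
burst = np.sin(2 * np.pi * 0.2 * np.arange(16))
early = np.zeros(n); early[10:26] = burst     # event near the start
late = np.zeros(n);  late[200:216] = burst    # event near the end

# Their Fourier magnitude spectra are identical (up to rounding): the time
# shift lives entirely in the phase, so |FFT| cannot say WHEN the burst was.
mag_early = np.abs(np.fft.fft(early))
mag_late = np.abs(np.fft.fft(late))
print(np.max(np.abs(mag_early - mag_late)))  # negligible: the spectra match
```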
B. SHORT-TIME FOURIER ANALYSIS:
In an effort to correct this deficiency, Dennis Gabor (1946) adapted the Fourier transform to analyze only a small section of the signal at a time, a technique called windowing the signal. Gabor's adaptation, called the Short-Time Fourier Transform (STFT), maps a signal into a two-dimensional function of time and frequency, as shown in Fig (2).
Fig (2)
The STFT represents a sort of compromise between the time- and frequency-based views of a signal. It provides some information about both when and at what frequencies a signal event occurs. However, you can only obtain this information with limited precision, and that precision is determined by the size of the window. While the STFT compromise between time and frequency information can be useful, the drawback is that once you choose a particular size for the time window, that window is the same for all frequencies. Many signals require a more flexible approach, one where we can vary the window size to determine more accurately either time or frequency.
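The fixed-window trade-off can be seen in a naive STFT sketch (Python; the window and hop sizes are illustrative choices, not values from the paper):

```python
import numpy as np

# A naive STFT: slide a fixed-size window along the signal and take an FFT
# of each windowed segment. The window size fixes both the time resolution
# and the frequency resolution for ALL frequencies at once.
def stft(signal, win=64, hop=32):
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        segment = signal[start:start + win] * np.hanning(win)
        frames.append(np.abs(np.fft.rfft(segment)))
    return np.array(frames)  # shape: (num_frames, win // 2 + 1)

# Low frequency in the first half of the signal, high frequency in the second.
t = np.arange(512)
sig = np.where(t < 256, np.sin(2 * np.pi * 0.02 * t), np.sin(2 * np.pi * 0.2 * t))
S = stft(sig)
# Early frames peak at a low frequency bin, late frames at a high one.
print(np.argmax(S[0]), np.argmax(S[-1]))
```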
C. WAVELET ANALYSIS
Wavelet analysis represents the next logical step: a windowing technique with variable-sized regions. Wavelet analysis allows the use of long time intervals where we want more precise low-frequency information, and shorter regions where we want high-frequency information.
Fig(3)
As shown in Fig (3), here is what this looks like in contrast with the time-based, frequency-based, and STFT views of a signal:
Fig (4)
You may have noticed that in Fig (4) wavelet analysis does not use a time-frequency region, but rather a time-scale region. For more information about the concept of scale and the link between scale and frequency, see "How to Connect Scale to Frequency?".
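The time localization that wavelets add can be seen with a single level of the Haar transform, the same wavelet used in the source code later in this paper. A NumPy sketch (the signal is illustrative):

```python
import numpy as np

# One level of the Haar wavelet transform: scaled pairwise sums give the
# approximation (large scale) and scaled pairwise differences give the
# detail (small scale). Unlike a Fourier coefficient, each detail
# coefficient is localized in time.
def haar_dwt(signal):
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

# A flat signal with one abrupt change near the end.
sig = np.array([4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 9.0, 4.0])
approx, detail = haar_dwt(sig)
print(detail)  # only the coefficient covering the abrupt change is non-zero
```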

LEVEL SET ALGORITHM
A. PROBLEM FORMULATION:
Let X1 and X2 be two co-registered multitemporal remote sensing images acquired over the same geographical area at two different times. Let X be the scalar M x N difference image generated from X1 and X2 by applying the change vector analysis (CVA) technique [11]. The aim of the proposed approach is to generate a change-detection map representing the changes that occurred between the two acquisition dates. To this end, we formulate this issue as a segmentation problem and solve it by means of a variational level set method.
A well-known segmentation model is the one proposed by Mumford and Shah, which aims at finding a contour C that segments the difference image X into non-overlapping regions associated with the changed and unchanged classes, respectively. The related energy functional is given by:

F_{MS}(I, C) = \int_{\Omega} (X - I)^2 \, dx \, dy + \mu \int_{\Omega \setminus C} |\nabla I|^2 \, dx \, dy + \nu |C|   (1)

where \Omega is the image domain, |C| is the length of the contour C, and \mu and \nu are positive parameters. The minimization of the Mumford-Shah functional results in an optimal contour C that segments the difference image X, in addition to an image I formed from smooth regions within each of the connected components of the image domain separated by the optimal contour C.
The minimization of the above functional is difficult in practice, as it is a non-convex problem. A possible solution is to consider the special case where the image I in the functional (1) is a piecewise constant function. In this case, the energy functional is given by:

F_{MS}(C, c_1, c_2) = \lambda_1 \int_{inside(C)} (X - c_1)^2 \, dx \, dy + \lambda_2 \int_{outside(C)} (X - c_2)^2 \, dx \, dy + \nu |C|   (2)

The functional (2) is known as the piecewise constant Mumford-Shah segmentation model, or the Chan-Vese model [12]. We call the first two terms in (2) the global fitting energy. The parameter \nu controls the trade-off between the goodness-of-fit and the length of the curve C; c_1 and c_2 are two constants that approximate the image X in the segments inside(C) and outside(C), respectively. For a fixed contour C, the optimal values for these constants are given by the average of X over the regions inside(C) and outside(C). In order to minimize the above functional, we adopt a level-set formulation.

B. LEVEL-SET FORMULATION:
In the level-set method, the curve C is represented by the zero level set of a Lipschitz function \phi : \Omega \to R such that:

C = \{(x, y) \in \Omega : \phi(x, y) = 0\}
inside(C) = \{(x, y) \in \Omega : \phi(x, y) > 0\}   (3)
outside(C) = \Omega \setminus \{(x, y) \in \Omega : \phi(x, y) \ge 0\}

By replacing the unknown variable C by the level-set function \phi, we obtain:

F_{MS}(\phi, c_1, c_2) = \lambda_1 \int_{\Omega} (X - c_1)^2 H(\phi) \, dx \, dy + \lambda_2 \int_{\Omega} (X - c_2)^2 (1 - H(\phi)) \, dx \, dy + \nu \int_{\Omega} |\nabla H(\phi)| \, dx \, dy   (4)

where H is the Heaviside function, which is approximated in practice by the smooth function H_\epsilon defined by:

H_\epsilon(z) = \frac{1}{2} \left( 1 + \frac{2}{\pi} \arctan\left( \frac{z}{\epsilon} \right) \right)   (5)

The minimization of the functional (4) with respect to the level-set function \phi and the two constants c_1 and c_2 can be done according to the iterative approach adopted in [12]:

Step 1: Set k = 0 and initialize \phi by \phi^0.
Step 2: For \phi = \phi^k, compute c_1^k and c_2^k as the averages of X in \{\phi > 0\} and \{\phi \le 0\}, respectively.
Step 3: Compute \phi^{k+1} by solving the following partial differential equation:

\phi^{k+1} = \phi^k - \Delta t \, \frac{\partial F_{MS}}{\partial \phi}(\phi^{k+1}, c_1^k, c_2^k)

where \Delta t is the time step.
Step 4: Reinitialize \phi locally to the signed distance function to the curve.
Step 5: End the process if convergence is reached; otherwise, go to Step 2.
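The iterative scheme can be sketched in simplified form. The Python snippet below (not the paper's MATLAB implementation) drops the length term (nu = 0), in which case each iteration reduces to updating c1 and c2 and reassigning each pixel to the nearer mean; the synthetic difference image is illustrative:

```python
import numpy as np

# Simplified piecewise-constant (Chan-Vese) segmentation of a difference
# image X, with the length term dropped (nu = 0). phi > 0 marks "changed".
def chan_vese_simplified(X, iters=20):
    phi = X - X.mean()                                  # Step 1: initialize phi
    for _ in range(iters):
        inside, outside = phi > 0, phi <= 0
        c1 = X[inside].mean() if inside.any() else 0.0  # Step 2: region means
        c2 = X[outside].mean() if outside.any() else 0.0
        # Step 3 with nu = 0: gradient descent drives each pixel toward the
        # sign of (X - c2)^2 - (X - c1)^2, so we jump straight to that state.
        phi = (X - c2) ** 2 - (X - c1) ** 2
    return phi > 0, c1, c2

# Synthetic difference image: a bright "changed" square on a dark background.
X = np.zeros((32, 32))
X[8:16, 8:16] = 1.0
changed, c1, c2 = chan_vese_simplified(X)
print(changed.sum())  # -> 64 (the 8x8 changed square)
```

With the length term restored (nu > 0), Step 3 would instead evolve phi by a few gradient-descent time steps including a curvature term, smoothing the contour.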

SOURCE CODE
clc; clear all; close all;

% Read the two co-registered images and convert them to grayscale
I11 = imread('ALASKA_1985_321.jpg');
I1 = rgb2gray(I11);
figure, imshow(I1); title('Original image at time T1');
I1 = im2double(I1);
I22 = imread('ALASKA_2005_321.jpg');
I2 = rgb2gray(I22);
I2 = imresize(I2, size(I1));
figure, imshow(I2); title('Original image at time T2');
I2 = im2double(I2);

% Absolute difference image
[M, N] = size(I1);
x = zeros(M, N);
for r = 1:M
    for c = 1:N
        x(r, c) = abs(I2(r, c) - I1(r, c));
    end
end

% Stretch the difference image to the range [0, 255]
mn = min(min(x)); mx = max(max(x));
for r = 1:M
    for c = 1:N
        x(r, c) = ((x(r, c) - mn) ./ (mx - mn)) .* 255;
    end
end
save x
figure, imshow(x), title('Difference image');

% One-level Haar wavelet decomposition of the difference image
[x, S] = myswt(x, 1, 'haar');
figure, imshow(x), title('Wavelet output image');
x1 = x(1:S(1, 1), 1:S(1, 2));

% Piecewise-constant (Chan-Vese) fitting energy with class means c1, c2
c1 = 0.0732; c2 = 0.6310;
lm1 = 0.05; lm2 = 0.09;
for r = 1:S(1, 1)
    for c = 1:S(1, 2)
        vp(r, c) = lm1 * (x1(r, c) - c1)^2 + lm2 * (x1(r, c) - c2)^2 + 0.5 * length(x1);
    end
end

% Threshold the energy at its mean: changed vs. unchanged regions
m = mean(mean(vp));
for r = 1:S(1, 1)
    for c = 1:S(1, 2)
        if vp(r, c) <= m
            out(r, c) = 1;
        else
            out(r, c) = 0;
        end
    end
end
figure, imshow(out), title('OUTPUT IMAGE AFTER SEGMENTATION');

SIMULATION RESULTS
Fig.1:Original image at time T1
Fig.2: Histogram of the image at T1
Fig.3: Original image at time T2
Fig.4: Histogram of the image at time T2
Fig.5: Difference image
Fig.6: Histogram of the difference image
Fig.7: Global region based segmentation

EXPERIMENTAL RESULTS
Fig: Alaska 1
Fig: Alaska 2
CONCLUSION:
In this paper, satellite image change detection has been performed using the level sets algorithm and validated against the experimental results. The wavelet transform is exploited to obtain a multiresolution representation of the difference image, which is obtained from two satellite images acquired over the same geographical area at different time instances. A level set algorithm is then applied to the multiresolution representation of the difference image to segment it into changed and unchanged regions. The proposed change detection method has been conducted on satellite images; the simulation results were verified using the MATLAB tool and the corresponding generated images were shown. These images show the changes between the satellite images.
REFERENCES

M. Torma, P. Harma, and E. Jarvenpaa, "Change detection using spatial data: Problems and challenges," in Proc. IEEE Int. Geosci. Remote Sens. Symp., Jul. 2007, pp. 1947-1950.

S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 6, pp. 1529-1558, Jun. 2007.

C. Shah, Y. Sheng, and L. Smith, "Automated image registration based on pseudoinvariant metrics of dynamic land-surface features," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 11, pp. 3908-3916, Nov. 2008.

L. Bruzzone and D. Prieto, "Automatic analysis of the difference image for unsupervised change detection," IEEE Trans. Geosci. Remote Sens., vol. 38, no. 3, pp. 1171-1182, May 2000.

T. Kasetkasem and P. Varshney, "An image change detection algorithm based on Markov random field models," IEEE Trans. Geosci. Remote Sens., vol. 40, no. 8, pp. 1815-1823, Aug. 2002.

T. Celik, "Unsupervised change detection in satellite images using principal component analysis and k-means clustering," IEEE Geosci. Remote Sens. Lett., vol. 6, no. 4, pp. 772-776, Oct. 2009.

F. Bovolo and L. Bruzzone, "A detail-preserving scale-driven approach to change detection in multitemporal SAR images," IEEE Trans. Geosci. Remote Sens., vol. 43, no. 12, pp. 2963-2972, Dec. 2005.

T. Celik and K.-K. Ma, "Unsupervised change detection for satellite images using dual-tree complex wavelet transform," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 3, pp. 1199-1210, Mar. 2010.