Enhancement and Filtration on Image

DOI : 10.17577/IJERTV9IS020348


Cloud Computing

Miss Mallika Gaur, Computer Science Department, Deal, Dehradun,

India

Abstract— Today, cyberspace and cloud computing play a big role in every field. Satellite remote sensing generates hundreds of gigabytes of raw images that need to be further processed to become the basis of several different GIS products. A cloud-based representation of such a workflow has been developed by the Department of Space. The system comprises several technologies: a SaaS layer provides a collection of services, such as geocode generation and data visualization, and the platform leverages the Aneka technology to dynamically provision the required resources.

The aim is to elaborate the texture of an image and then process the image subject to the given constraints. GIS (Geographic Information System) work has basically grown out of working with toposheets. It is totally cloud based, and many technologies are growing along with it; one of them is Aneka.

Keywords— Cyberspace; satellite; GIS; image processing; Aneka

  1. INTRODUCTION

    Whenever a human or a machine receives data of two or more dimensions and an image is processed from it, that processing is called digital image processing. Put simply, digital image processing is used to develop pictures from such multi-dimensional data. Digital image processing has a broad spectrum of applications, such as remote sensing via satellites and other spacecraft. A single digital image can present a very large amount of information in a compact and easily interpreted form: a digital image containing several million bits of information can be displayed on a single photographic print or display monitor.

  2. EASE OF USE

    A. Image classification

    Images can be classified into three categories: black-and-white images, gray-scale images, and color images. The characteristics of each type of image are as follows:

    For black-and-white images, each pixel is represented by one bit. These images are sometimes referred to as bi-level, binary, or bi-tonal images. For gray-scale images, each pixel is represented by a luminance level; typical pictorial gray-scale images use 256 gray levels, or 8 bits, while 4–7 bits are usually sufficient for document images.

    Color images have multiple components: each pixel of a color image can be represented by luminance and chrominance components. For instance, the NTSC television system uses the luminance Y and the chrominance components I and Q to represent color.

  3. DEFINITION OF IMAGE PROCESSING

    An image can be thought of as a matrix of numbers. Consider a simple example of a digital image: the digital representation of a scene, in which each point within the representation corresponds to an area in object space. If the sampling frequency is selected appropriately, an object the size of a black square appears in the digital image as a 2-element by 2-element object; with a lower sampling frequency, an object of that size might not be detectable in the digital representation of the same scene.

  4. TECHNIQUES OF IMAGE ENHANCEMENT

    Image enhancement methods are based on either spatial or frequency domain techniques. The purpose of this section is to develop the fundamental ideas underlying and relating these approaches.

    1. Spatial Domain Methods

      • Point Operation

        • Contrast stretching: improving the contrast of images by digital processing. There are two kinds of contrast stretching:

          • Linear

          • Nonlinear

        • Noise clipping: image noise arising from a noisy sensor or from channel transmission errors usually appears as discrete, isolated pixel variations that are not spatially correlated.

      • Area operation: many image-enhancement techniques are based on operations performed on a local neighborhood of input pixels. The image is convolved with a finite impulse response filter called a spatial mask.

      • Pseudocolouring

    2. Frequency Domain Methods

    Frequency domain processing techniques are based on modifying the Fourier transform of an image.

    • Transform operations: in this enhancement technique, zero-memory operations are performed on a transformed image, followed by the inverse transformation.

      • Linear

      • Homomorphic

      • High Pass

    • Pseudocoloring: pseudocoloring refers to mapping a set of gray-level images into a color image.

  5. EQUATIONS

  1. Negation of an image: through image processing software we can negate any image. The main idea of negating an image is that after complementing the gray value of every pixel, we get the negative of the image:

    GVout = 255 − GVin

    where GVin is the gray value of a pixel in the image and GVout is the gray value of that pixel after complementing. By applying this formula to every pixel, we get the negative of the image.

  2. Linear contrast enhancement: LCE is best applied to remotely sensed images with Gaussian or near-Gaussian histograms, that is, when all the brightness values fall generally within a single, relatively narrow range of the histogram and only one mode is apparent:

BVout = ((BVin − MINk) / (MAXk − MINk)) × quantk

where BVin is the original input brightness value, MINk and MAXk are the minimum and maximum brightness values in the image, and quantk is the range of the brightness values that can be displayed on the CRT.

  6. REMOTE SENSING DATA ANALYSIS

The analysis of remotely sensed data is performed using a variety of image processing techniques, including analog image processing of hardcopy data and the application of digital image processing algorithms to digital data.

  1. Digital Image processing

    Scientists have made significant advances in the digital processing of remotely sensed data for scientific visualization, summarized in this book and others (Wolf and Yaeger, 1993). Fundamental methods of rectifying remotely sensed data, classifying it into land use and land cover, and identifying change between dates of imagery are now performed routinely and with reasonable precision.

    For example, numerous studies have synthesized texture information from the spectral data in the imagery. Also, several attempts at contextual classification make use of neighboring pixel values.

  2. Analog (Visual) Image Processing

    Most of the fundamental elements of image interpretation are used in visual image analysis, including size, shape, shadow, color, parallax, pattern, texture, site, and association.

  3. Figures and Tables

1) Positioning Figures and Tables:

Original image (brightness values in a 3×3 window):

2 0 1
3 5 4
8 3 1

Average value of the 3×3 spatial moving window: (2 + 0 + 1 + 3 + 5 + 4 + 8 + 3 + 1) / 9 = 27 / 9 = 3, which becomes the centre value of the filtered image.

Fig. 1. Result of applying a low-frequency convolution mask.

The primary pixel under investigation at any one time is BVn = BVij. The spatial moving average then shifts to the next pixel, where the average of all nine brightness values is computed. The neighborhood-ranking median filter is useful for removing noise in an image, especially shot noise, by which individual pixels are corrupted or missing. Image smoothing is useful for removing the periodic salt-and-pepper noise recorded by electronic remote sensing instruments.

ACKNOWLEDGMENT

This paper is dedicated to my parents and friends, who helped me along the path to completing it with dedication.

I love you!

References

  1. Digital Image Processing

  2. Image Classification

Coding

#include <stdio.h>

#include <stdlib.h>

int main(void)

{

char fname[300];
FILE *fpi, *fpo;

int i, j, r, c, row, col;
float sum;

unsigned char **FiltBuff;
unsigned char *image;

printf("please enter your file name\n");
scanf("%299s", fname);

printf("enter the row\n");
scanf("%d", &row);
printf("enter the col\n");
scanf("%d", &col);

if ((fpi = fopen(fname, "rb")) == NULL)

{

printf("file not found\n");

return 1;

}

image = (unsigned char *) malloc(sizeof(unsigned char) * row * col);
fread(image, sizeof(unsigned char), row * col, fpi);
fpo = fopen("result.raw", "wb");

/* copy the image into a 2-D buffer for easy neighborhood access */
FiltBuff = (unsigned char **) malloc(row * sizeof(unsigned char *));
for (i = 0; i < row; i++)

FiltBuff[i] = (unsigned char *) malloc(col * sizeof(unsigned char));
for (i = 0; i < row; i++)

for (j = 0; j < col; j++)

{

FiltBuff[i][j] = image[i * col + j];

}

/* 3x3 spatial moving average; border pixels are left unchanged */
for (r = 1; r < row - 1; r++)

{

for (c = 1; c < col - 1; c++)

{

sum = 0;

for (i = -1; i <= 1; i++)
for (j = -1; j <= 1; j++)

sum += FiltBuff[r + i][c + j];
image[r * col + c] = (unsigned char)(sum / 9.0f);

}

}

fwrite(image, sizeof(unsigned char), row * col, fpo);

for (i = 0; i < row; i++)
free(FiltBuff[i]);
free(FiltBuff);
free(image);
fclose(fpi);
fclose(fpo);
return 0;

}
