A Basic Study of Image Processing and Its Application Areas

DOI: 10.17577/IJERTV6IS070217


Shonima Vasudevan, Research Scholar, Computer Science Dept, CMS College of Science and Commerce, Coimbatore, Tamil Nadu, India

Dr. M. Nagarajan, Associate Professor, Computer Science Dept, CMS College of Science and Commerce, Coimbatore, Tamil Nadu, India

Abstract – Digital image processing has become a popular and rapidly growing area of application within computer science. This paper presents a basic study of image processing and its application areas; each of these applications may differ from the others. Various reviews have been carried out to illustrate the basic concepts of image processing. The two main purposes of digital image processing are discussed below: first, improving pictorial information for human perception, and second, processing and efficiently storing image data for autonomous machine perception. A digital image can be represented by a set of digital values called pixels, whose values may represent opacities, colors, gray levels, heights, etc. Digitization causes a digital image to become an approximation of a real scene. To process an image, operations are applied to it. This paper discusses the basic aspects of image processing: image acquisition (sensing an image), image enhancement (improving the appearance of an image), image restoration (restoring a degraded image), and image compression (reducing the amount of data needed to represent an image; this class of techniques also includes extraction/selection procedures). The important applications of image processing include artistic effects, bio-medical imaging, industrial inspection, geographic information systems, law enforcement, and human-computer interfaces such as face recognition and gesture recognition.

Keywords – Digital Image, Image Processing, Enhancement, Restoration, Compression.

I. INTRODUCTION

An image often conveys information more effectively than words. In digital image processing, an image is defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point [2]. One of the first application areas of digital image processing was the newspaper industry, when pictures were first sent between London and New York through a submarine cable. With the introduction of the Bartlane cable picture transmission system, the time needed to send a picture dropped from more than a week to less than three hours; specialized printing equipment coded the pictures for transmission and reconstructed them at the receiving end. The components of an image processing system are:

  • Image sensor

  • Image processing hardware

  • Intelligent processing machine

  • Image processing software

  • Storing device

  • Display device

  • Image recording

  • Networking

An image sensor basically consists of two parts: a physical sensing device and a digitizer. The physical device is sensitive to the energy radiated by the object being imaged, and the digitizer converts the output of the physical device into digital form.
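The digitizer's output is simply an array of numbers. Below is a minimal sketch (NumPy is assumed, purely for illustration) of a digital image viewed as the two-dimensional function f(x, y) of gray levels defined in the introduction:

```python
import numpy as np

# A digitized image is a 2D array f(x, y) of sampled intensity values
# (gray levels); here 8-bit values in the range 0..255.
f = np.array([[ 12,  80, 200],
              [ 45, 130, 255],
              [  0,  60, 190]], dtype=np.uint8)

print(f.shape)   # (3, 3): 3 rows by 3 columns
print(f[1, 2])   # gray level at row 1, column 2 -> 255
```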

Specialized image processing hardware performs dedicated processing just before the intelligent processing machine (such as a supercomputer). The intelligent processing machine performs digital image processing tasks offline, meaning that different enhancement techniques can be applied to images that have already been acquired. Image processing software is specially designed to perform specific tasks. A storage device is used for storing images; storage can be of three types: short-term storage for use during processing, on-line storage for relatively fast recall, and archival storage for infrequent access. Color monitors are generally used as display devices. Image recording (hard copy) can be done with laser printers, inkjet units, CD-ROMs, etc. Networking allows a user at one location to process images held on a system at another location; high bandwidth is required, so optical fiber and broadband technologies are better options.

  II. DIGITAL IMAGE PROCESSING (DIP)

      1. Image Acquisition

        Fig-3: Image Acquisition

        The first step of digital image processing is image acquisition, in which an image is sensed: a source illuminates the scene, and sensors capture the energy reflected from (or transmitted through) it. Scaling is also performed at this stage. Input images can be obtained from scanners, aerial cameras, or digital cameras. The image produced should be of high quality with sufficient resolution, since proper acquisition supports proper image analysis.

        Preprocessing techniques can then be used to improve the acquired image: distortions are suppressed and some features relevant to further processing are enhanced. Image size is often reduced, because high-resolution images need longer processing times. A color image may also be converted to a grayscale image, in which the red, green, and blue intensities of each pixel are equal.
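As a small illustration of the grayscale conversion described above, the sketch below averages the red, green, and blue intensities of each pixel (NumPy assumed; equal weighting is only one convention, and a weighted luminance sum is also common):

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to a single-channel gray image by
    averaging the red, green, and blue intensities of each pixel."""
    return rgb.astype(np.float64).mean(axis=2).astype(np.uint8)

# Example: a 2 x 2 RGB image
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
print(to_grayscale(rgb))
# [[ 85  85]
#  [ 85 255]]
```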

      2. Image Enhancement

        Image enhancement means processing an image so that the resulting image looks better than the original. Enhancement techniques are divided into two types:

        • Spatial Domain methods

        • Frequency domain methods.

          The spatial domain refers to the aggregate of pixels composing an image, i.e., the image plane itself, whereas frequency domain methods are based on manipulating the Fourier transform of an image. There is no general theory of image enhancement, because an image that looks good to one observer may look bad to another. Various filters can be applied to improve the quality of an image or to add special effects such as sharpening, blurring, or contrast adjustment. The main types of filters are:

        • Smoothing filters

        • Low pass filters

        • Median filters

        • Sharpening filters

        • High pass filters

        • Bandpass filters

          Smoothing filters are mainly used for noise reduction and blurring. Blurring removes small details from an image and bridges small gaps in lines or curves. Both linear and non-linear filters can be used for noise reduction.
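A minimal sketch of a linear smoothing (averaging) filter, assuming SciPy is available; the 3 x 3 neighbourhood size is illustrative:

```python
import numpy as np
from scipy import ndimage

# Replace each pixel by the mean of its 3 x 3 neighbourhood. The isolated
# bright pixel (small-amplitude noise) is spread out and attenuated.
noisy = np.array([[10, 10, 10, 10],
                  [10, 90, 10, 10],
                  [10, 10, 10, 10],
                  [10, 10, 10, 10]], dtype=np.float64)

smoothed = ndimage.uniform_filter(noisy, size=3)
print(np.round(smoothed, 1))
```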

          Low-pass filters follow the concept of spatial filtering. In the Fourier domain, a low-pass filter eliminates high-frequency components while leaving the low frequencies intact; the net effect is image blurring. Low-pass filters are linear filters.
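The sketch below applies an ideal low-pass filter in the Fourier domain, along the lines described above (the circular `cutoff` is an illustrative parameter; practical systems often prefer smoother transfer functions such as Butterworth or Gaussian):

```python
import numpy as np

def fourier_lowpass(image, cutoff):
    """Keep frequency components within `cutoff` of the spectrum centre
    and zero out the rest; the net effect is a blurred image."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    F[dist > cutoff] = 0                    # discard high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```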

          In a median filter, instead of taking the average, the gray level of each pixel is replaced by the median of the gray levels in its neighborhood. Median filters are effective when the noise pattern consists of strong spike-like components and edge sharpness must be preserved.
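A small sketch contrasting the median filter with averaging (SciPy assumed): the spike is removed completely rather than merely spread out.

```python
import numpy as np
from scipy import ndimage

spiky = np.array([[10, 10, 10],
                  [10, 255, 10],   # strong spike-like noise
                  [10, 10, 10]], dtype=np.uint8)

# Each pixel is replaced by the median of its 3 x 3 neighbourhood,
# so the central 255 becomes 10 while genuine edges would stay sharp.
print(ndimage.median_filter(spiky, size=3))
```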

          The main objective of sharpening filters is to enhance blurred images by emphasizing fine detail; image sharpening is used, for example, in autonomous target detection in smart weapons. High-pass filters eliminate low-frequency components, which correspond to the slowly varying characteristics of an image such as average intensity and overall contrast; what remains are edges and other sharp details, so the filtered image appears sharpened.
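A sketch of Laplacian-based sharpening: the Laplacian acts as a high-pass operator, and subtracting it from the original emphasises edges and fine detail (SciPy assumed; real pipelines often scale the high-pass term):

```python
import numpy as np
from scipy import ndimage

def sharpen(image):
    """Enhance a blurred image by subtracting its Laplacian (a
    high-pass measure of local detail) from the original."""
    img = image.astype(np.float64)
    highpass = ndimage.laplace(img)         # responds to edges and detail
    return np.clip(img - highpass, 0, 255).astype(np.uint8)
```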

          Bandpass filters remove the frequencies below and above a selected band, passing only the components within that band.

          Fig-4: Image Enhancement

      3. Image Restoration

        Image restoration improves the appearance of an image using techniques based on mathematical or probabilistic models of the degradation. Various filters are used to enhance image quality. Restoration removes degradations introduced during image acquisition; a degradation may be noise, which corrupts pixel values, or blurring caused by defocus or camera motion.

        Fig-5: Image Restoration
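One way to make the degradation model concrete is the sketch below, which assumes the degradation is additive noise and applies SciPy's adaptive Wiener filter; this is only one possible restoration model, and deblurring defocus or camera motion would require a different one.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))    # smooth synthetic image
degraded = clean + rng.normal(0, 20, clean.shape)    # noisy observation

# Adaptive Wiener filtering attenuates the noise while keeping structure.
restored = signal.wiener(degraded, mysize=5)

print("noisy error:   ", np.abs(degraded - clean).mean())
print("restored error:", np.abs(restored - clean).mean())  # typically much lower
```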

      4. Image Compression

    The term data compression refers to the process of reducing the amount of data required to represent a given quantity of information. Image compression reduces both the bandwidth needed for transmission and the storage space needed for the image. Compression techniques represent pictorial information in a more compact form by removing redundancies.

    For image compression, a source encoder and a source decoder are used; they reduce or eliminate coding and interpixel redundancies. In the first stage of source encoding, a mapper transforms the input data into a format designed to reduce interpixel redundancies. In the second stage, a quantizer reduces the accuracy of the mapper's output, removing psychovisual redundancies.

    In the final stage, a symbol encoder applies a fixed- or variable-length code to the quantizer output. To reduce the impact of channel noise, controlled redundancy is inserted into the source-encoded data; a channel encoder and decoder are used for this purpose.

    Fig-6: Image Compression

    Compression strategies fall into two classes: lossless and lossy. With a lossless technique, the restored data file is identical to the original, which is essential when we cannot afford to lose even a single bit of information. Data files that represent images and other acquired signals, however, do not always have to be kept in perfect condition for storage or transmission; if some changes are introduced, a small amount of additional noise appears but no real harm is done. Such techniques are known as lossy.
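The sketch below illustrates the lossless/lossy distinction, using zlib as a stand-in for the symbol encoder and a coarse gray-level quantizer as an illustrative lossy step (the exact sizes depend on the image content):

```python
import numpy as np
import zlib

rng = np.random.default_rng(0)
image = np.clip(rng.normal(128, 10, (128, 128)), 0, 255).astype(np.uint8)
raw = image.tobytes()

# Lossless: zlib removes coding redundancy; decompression restores the
# data bit for bit.
lossless = zlib.compress(raw)
assert zlib.decompress(lossless) == raw

# Lossy: quantizing the gray levels discards information (psychovisual
# redundancy), which makes the data far more compressible but means the
# original can no longer be recovered exactly.
quantized = (image // 16 * 16).astype(np.uint8)
lossy = zlib.compress(quantized.tobytes())

print(len(raw), len(lossless), len(lossy))
```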

  III. IMPORTANT APPLICATIONS OF IMAGE PROCESSING

    Some of the applications of digital image processing are discussed below:

    1. Artistic effects

    2. Bio-Medical

    3. Industrial Inspection

    4. GIS (Geographic Information System)

    5. Law Enforcement

    6. HCI (Human-Computer Interface)

      • Face Recognition
      • Gesture Recognition

        1. Artistic effects

          To make images more visually appealing, we can apply artistic effects; we can also add special effects to create composite images. Some of the artistic effects are given below.

          • Solarize

          • Posterize

          • Saturation

          • Emboss

          • Gray Scale

          • Invert

          • Sepia

          • Add Color Noise

            The solarize effect turns light areas darker and dark areas lighter. Posterize reduces the number of colors in the image, giving an artistic look. Saturation affects the intensity of colors. Emboss gives a relief look to a photo. A grayscale effect can be applied to a color image. Invert converts a negative into a positive (and vice versa). Sepia gives an old-time look to a picture. Add color noise increases the amount of color noise in a picture.

            Fig-7: Artistic effects
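A minimal sketch of a few of the effects listed above, applied to a grayscale NumPy array (the threshold and number of levels are illustrative; the solarize form below, which inverts only pixels above a threshold, is one common convention):

```python
import numpy as np

def invert(img):
    """Invert: turn a positive into a negative and vice versa."""
    return 255 - img

def solarize(img, threshold=128):
    """Solarize: invert only the pixels brighter than the threshold."""
    return np.where(img >= threshold, 255 - img, img)

def posterize(img, levels=4):
    """Posterize: reduce the number of distinct gray levels."""
    step = 256 // levels
    return (img // step) * step

gray = np.arange(0, 256, 32, dtype=np.uint8)   # 0, 32, ..., 224
print(invert(gray))
print(solarize(gray))
print(posterize(gray))
```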

        2. Bio-Medical

          Bio-medical imaging is one of the broadest fields of image processing. It covers biomedical signal gathering, image forming, picture processing, and image display, through to medical diagnosis based on features extracted from images. It also places heavy hardware and software demands on clinical imaging devices.

          Fig-8: Bio-Medical

          There are two types of clinical imaging: radiological imaging and nuclear imaging. Radiological imaging includes radiography, thermography, ultrasound, nuclear medicine, and CT. Nuclear imaging is moving toward the study of organ metabolism. Heart imaging is one of the most challenging research topics in biomedical image processing.

        3. Industrial Inspection

          Industrial inspection systems are used in all kinds of industries; machines do the job instead of human operators. The method provides imaging-based automatic inspection and process guidance in industry, and the field spans many technologies, software, hardware, integrated systems, methods, and expertise. Detecting anomalies is a major task in industrial inspection; for example, broken aspirin tablets can be detected using this technique. Such an application can be implemented in object-oriented image processing software.

          Fig-9: Industrial Inspection
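A hedged sketch of the kind of anomaly check described above: threshold the image, label connected regions, and flag regions that are too small, such as a broken tablet (SciPy assumed; the threshold and minimum area are illustrative, and a real system would add calibration and shape checks):

```python
import numpy as np
from scipy import ndimage

def count_defective(image, intensity_threshold=128, min_area=400):
    """Count bright regions whose area falls below `min_area`, e.g.
    broken aspirin tablets on a darker background."""
    mask = image > intensity_threshold        # tablets assumed brighter
    labels, n = ndimage.label(mask)           # label connected regions
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    return int(np.sum(areas < min_area))      # number of suspect regions
```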

        4. GIS (Geographic Information System)

          GIS is mainly used for capturing, storing, checking, and displaying data related to positions on the earth's surface, and it performs spatial analyses. GIS can run on anything from low-end desktop computers and laptops to high-end servers. The main types of data used in GIS are:

          • Spatial data

          • Tabular data.

            Spatial data consists of geographic information tied to real-world coordinates, which allows it to be displayed and analyzed properly. Spatial data is further divided into two types:

          • Vector Spatial data

          • Raster Spatial data

            Vector spatial data represents features with clear boundaries, for example schools, hospitals, and banks. Raster spatial data includes satellite images, aerial photographs, etc.

            Tabular data describes spatial features, providing detail about their characteristics; examples include an address book or sales figures compiled by ZIP code. Both kinds of data are combined to form a map.

            Fig-10: Geographic Information System

        5. Law Enforcement

          Image processing is used in many areas of law enforcement. It includes biometric identification/verification based on fingerprints, faces, and irises; automatic license plate reading; detection of paper currency or cheque fraud; and automated counting or reading of serial numbers for tracking and identifying bills. Typical processing steps include:

          • Image formation, required for some modalities (CT, MRI, etc.).

          • Image restoration and sharpening, to create a better image.

          • Image retrieval, to search for an image of interest.

          • Measurement of pattern, to measure the range of objects in an image.

          • Image recognition, to distinguish the objects in an image.

          • Visualization, to observe objects that are not directly visible.

            Fig-11: Law Enforcement

        6. HCI (Human-Computer Interface)

          HCI stands for Human-Computer Interface; in image processing it covers face recognition and gesture recognition, both of which can be extremely difficult tasks. Face recognition is the process of identifying people in images and videos by analyzing and comparing patterns: algorithms extract facial features and compare them against a database to find the best match.
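As a concrete illustration of the detection step that precedes matching, the sketch below uses the Haar cascade bundled with the opencv-python package (an assumption; the file name photo.jpg is hypothetical). Recognition proper would then compare features extracted from each detected face against a database.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")              # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a box around each detected face; feature extraction and database
# matching would follow for recognition.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```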

          Gestures can be recognized using mathematical algorithms. Gestures mainly come from the face or the hands. Computer vision algorithms and cameras are used to interpret sign languages.

          Fig-12: Human Computer Interface

  IV. GOALS OF DIP (DIGITAL IMAGE PROCESSING)

        • Produce images of improved quality.

        • Image enhancement - improve the appearance of an image.

        • Feature extraction from images.

        • Computer-aided diagnosis based on the appearance of images.

  V. CONCLUSIONS

    We have presented the basics of image processing (image acquisition, image enhancement, image restoration, and image compression) and discussed its applications, such as artistic effects, bio-medical imaging, industrial inspection, geographic information systems, law enforcement, and human-computer interfaces including face recognition and gesture recognition. This study will help researchers working in various fields, such as image processing, fault detection in industry, and medical image segmentation. Many more complex modifications can be made to images. Filters use mathematical algorithms to modify the image; some are easy to use, while others require a great deal of technical knowledge. Image processing is useful to students of computer science, electronics, IT, bio-medical engineering, mechanical engineering, electrical engineering, etc.

  VI. REFERENCES

  1. Dr. Sanjay Sharma, Digital Image Processing, S.K. Kataria & Sons, fourth edition, 2013.

  2. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Pearson, third edition, 2014.

  3. Anil K. Jain, Fundamentals of Digital Image Processing, 1989.

  4. John C. Russ, The Image Processing Handbook, third edition, CRC Press.

  5. Bernd Jahne, Digital Image Processing: Concepts, Algorithms and Scientific Applications, third edition, Springer-Verlag.

  6. William K. Pratt, Digital Image Processing, third edition, Wiley.

  7. Bhausaheb Dnyandeo Mhaske and A. R. Dani, Study of Image Processing, Enhancement and Restoration, IJCSI International Journal of Computer Science and Engineering Technology, Volume 8, November 2011.

  8. Shailendra Kumar Dewangan, Importance and Applications of Digital Image Processing, IJRET International Journal of Research in Engineering and Technology, Volume 3, May 2014.

  9. Basavaprasad B and Ravi M, A Study on the Importance of Image Processing and its Applications, IJRET International Journal of Research in Engineering and Technology, Volume 3, May 2014.

  10. B. Sreenivas and B. Narasimha Chary, Processing of Satellite Image using Digital Image Processing, A World Forum on Geospatial, January 2011.

  11. Faizan Ahmad, Aaima Najam and Zeeshan Ahmed, Image-based Face Detection and Recognition: State of the Art, IJCSI International Journal of Computer Science Issues, Volume 9, November 2012.

  12. Shailendra Kumar Dewangan, Human Authentication using Biometric Recognition, IJCSET International Journal of Computer Science and Engineering Technology, ISSN: 2229-3345, Volume 6, No. 4, pp. 240-245, April 2015.

  13. Ritu Tiwari, Anupam Shukla, Chandra Prakash, Dhirender Sharma, Rishi Kumar and Sourabh Sharma, Face Recognition using Morphological Method, IEEE, 2009.

  14. N. Patel and S. K. Dewangan, An Overview of Face Recognition Schemes, International Conference of Advanced Research and Innovation (ICARI-2015), Institution of Engineers (India), Delhi State Centre, Engineers Bhawan, New Delhi, India, 2015.

  15. H. M. Zelelew, A. T. Papagiannaki and E. Masad, Application of Digital Image Processing Techniques for Asphalt Concrete Mixture Images, 12th International Conference of the International Association for Computer Methods and Advances in Geomechanics (IACMAG), October 2008.

  16. D. Maltoni, D. Maio, S. Prabhakar and A. Jain, Handbook of Fingerprint Recognition, Springer, 2002.

  17. Dr. D. Vasumathi and M. Upendra Kumar, Neural Networks based Development of Digital Image Processing Classification Techniques, IJRIME, Volume 2, April 2012.

  18. Nagalkar V. J. and Asole S. S., Brain Tumor Detection Using Digital Image Processing Based on Soft Computing, Signal and Image Processing Journal, Volume 3, Pages 102-105, 2012.

  19. Jimmy Singla, Technique of Image Registration in Digital Image Processing: A Review, International Journal of Information Technology and Knowledge Management, Volume 5, Pages 239-242, July-December 2012.

  20. Abou-Bakr M. Ramadan, Mazhar M. Hefnawi, Ahmed M. El-Garhy and Fathy Z. Amer, Forecasting Gamma Radiation Levels Using Digital Image Processing, A Journal on Life Science, Volume 9, 2012.

  21. Ramakrishna Reddy, G. V. Hari Prasad and Eamani, Content-Based Image Retrieval Using Support Vector Machine in Digital Image Processing Techniques, IJEST International Journal of Engineering Science and Technology, Volume 4, April 2012.

  22. Faizan Ahmad, Aima Najam and Zeeshan Ahmed, Image-based Face Detection and Recognition: State of the Art, IJCSI International Journal of Computer Science Issues, Volume 9, November 2012.

  23. Shailendra Kumar Dewangan, Human Authentication Using Biometric Recognition, International Journal of Computer Science & Engineering Technology (IJCSET), ISSN: 2229-3345, Vol. 6, No. 4, pp. 240-245, April 2015.

  24. S. K. Dewangan, Identification of Colors in Photographic Images using Color Quantization, Proceedings of the International Conference of Advance Research and Innovation (ICARI), ISBN: 978-93-5156-328-0, pp. 318-322, Institution of Engineers (India), Delhi State Centre, Engineers Bhawan, New Delhi, India, February 2014.

  25. en.wikipedia.org/wiki/Image Processing.
