A Review on 2D Image Representation Methods

DOI : 10.17577/IJERTV4IS041201


Bineeth Kuriakose

Department of Computer Engineering Govt. Model Engineering College Kochi, India

Preena K P

Department of Computer Engineering Govt. Model Engineering College Kochi, India

Abstract: Image representation is the process of generating descriptions from the visual content of an image. A wide range of image representation methods has been proposed over the past few decades. Some of these methods are designed for specific application areas, while others are more general and can be applied in various fields. Each image representation method is unique in its own way and has its own advantages and drawbacks. This paper briefly reviews different existing two-dimensional image representation methods, focusing on their approach, advantages, limitations and applications. The analysis shows that most of the representations are based on a machine perception model, which a human being cannot understand just by looking at it. The paper ends by suggesting the introduction of a representation that can be understood by both humans and machines.

Keywords: image representation; classification; human perception

  1. INTRODUCTION

    With the advent of the digital information era and the development of computer multimedia technology, all kinds of image data are increasing dramatically [1]. The contemporary means for still image representation depend on their application, for example medicine, digital libraries, electronic galleries, geographic information systems, document archiving, digital communication systems, content-based image retrieval, object recognition, robot vision, remote sensing applications, etc. The study of image representation methods therefore becomes increasingly important [2].

    Image representation is the way in which images are described and stored in a computer. The efficiency of image processing algorithms is always determined, to a great extent, by the choice of image representation method [3]. Image representation is of primary importance for object recognition and image understanding. A good representation schema should be (1) honest, (2) general, (3) brief and (4) helpful for advanced tasks. As a fundamental data structure, a representation should capture the distribution of image features honestly and quickly, and make them accessible to higher processing layers [4].

    Image representation has recently become an active research area because of the development of digital media communication. Designing efficient image representations is an important challenge in the fields of computer graphics, computer vision, robotics, image processing and pattern recognition. The driving forces behind the competitive research in this area are: (1) the need for efficient encoding of images, (2) the desire to narrow the gap between machine understanding of an image and human cognitive power, (3) the recent evolution and development of modern digital transmission media, and (4) the need for better methods to support different image processing applications.

    The study of representing natural images has always been attractive. Methods proposed for image representation range from color histograms to feature statistics, from spatial frequency to region-based, and from color-based to topology-based. Statistical properties at the physical level grasp semantics only with difficulty. All these schemas either lose spatial information, lose brevity of representation, are highly task-specific, or are not understanding-oriented; in a word, none of them meets all of these demands simultaneously [4].

    This paper reviews different existing two-dimensional image representation methods and classifies them on the basis of certain criteria. The main contribution of this work is to study, review and analyze the different image representations and to tabulate the findings so obtained. The paper is divided into six sections. The second section covers the basic image classifications and the machine representation of images. The third section classifies the existing image representations into three categories and discusses each of them briefly. The fourth section reviews some literary works proposed in the image representation area. The fifth section presents an analysis of the reviewed works, and the paper ends with the conclusion.

  2. IMAGE BASIC CLASSIFICATION AND MACHINE REPRESENTATION

    This section describes the basic classification of images and how an image is represented in a machine.

    1. Raster and Vector Images

      An image is an artifact that depicts or records visual perception. Depending on whether the image resolution is fixed, it may be of vector or raster type.

      Raster images are made up of a finite set of digital values, called pixels. A digital image contains a fixed number of rows and columns of pixels. Pixels are the smallest individual elements in an image, holding quantized values that represent the brightness of a given color at any specific point [5].

      Vector images use geometrical primitives such as points, lines, curves, and shapes or polygons, all of which are based on mathematical expressions, to represent images in computer graphics. Vector images are based on vectors (also called paths), which lead through locations called control points or nodes. Each of these points has a definite position on the X and Y axes of the work plane and determines the direction of the path; further, each path may be assigned a stroke color, shape, thickness, and fill. Vector graphics can be magnified indefinitely without loss of quality, while pixel-based graphics cannot [6].

      Fig. 1. Example of a vector and a raster image

    2. Representation of Image in a Machine

    An image on a photographic film is a continuous function of brightness values. Each point on the developed film can be associated with a gray-level value representing how bright that particular point is. To store images in a computer we have to sample and quantize (digitize) the image function. Sampling refers to considering the image only at a finite number of points, and quantization refers to representing the gray-level value at each sampling point using a finite number of bits [7]. Image data is always displayed on a computer (or digital device) screen as a bitmap: a rectangular arrangement of small colored squares called pixels, arranged in horizontal rows. Every image has a resolution, measured as the number of pixels per row (its width) and the number of pixel rows (its height). For example, an image with a resolution of 800 by 600 pixels has 800 pixels in each row and 600 such rows. How big the image appears on screen, measured in inches or centimeters, is another matter: it depends on the dpi or ppi (dots per inch = pixels per inch) of the screen. For example, the dpi of a typical PC monitor is 72 dpi, while the dpi of an iPad 3 is 264 dpi, so the same image will display at roughly a quarter of the size (72/264) on the iPad as it will on the PC. An advantage of the iPad's high dpi is that images appear very sharp because the pixel size is so small.
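The sampling-and-quantization step described above can be sketched with a toy example (a hypothetical numpy array stands in for the sampled film; the `quantize` helper is ours, not from the cited source):

```python
import numpy as np

def quantize(image, bits):
    """Quantize 8-bit gray levels down to 2**bits levels."""
    levels = 2 ** bits
    step = 256 // levels                 # width of each quantization bin
    return (image // step) * step        # map each pixel to its bin's base value

# A tiny 2x2 "image" sampled at 4 points, stored with 8 bits per pixel.
img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
q = quantize(img, 2)                     # keep only 2**2 = 4 gray levels
```

With 2 bits, the 256 original levels collapse into 4 bins of width 64, which is exactly the trade-off between storage and fidelity that quantization makes.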

  3. CLASSIFICATIONS ON IMAGE REPRESENTATIONS

    Image representation is the foundation of good performance in various image processing tasks. To represent images effectively, a large number of image representation methods have been proposed over time [8]. These image representation methods can be classified on the basis of three parameters: (1) level of processing, (2) level of abstraction, and (3) image features.

    1. Image representations based on level of processing

      Based on the level of processing of images by a machine for different purposes, the image representation methods are grouped into four categories: Pixel based, Block based, Region based and Hierarchical based.

      Fig. 2. Classification based on level of processing

      1. Pixel based representation: The pixel representation is the simplest way to define an image. In digital imaging, a pixel, pel, or picture element is a physical point in a raster image, or the smallest addressable element in an all-points-addressable display device. The representation includes simple neighbourhood relations between elements (4-, 8-, or 6-connectivity). Each pixel contains only local information. The number of elements in the representation is normally large. It is used for displaying the image and has applications in medical imaging, where each pixel has its own importance [5].
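The neighbourhood relations mentioned above can be illustrated with a small helper (hypothetical, assuming (row, column) pixel coordinates) that enumerates 4- or 8-connected neighbours:

```python
def neighbors(p, rows, cols, connectivity=4):
    """Return the in-bounds neighbors of pixel p = (r, c)
    under 4- or 8-connectivity."""
    r, c = p
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]           # N, S, W, E
    if connectivity == 8:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]    # add the diagonals
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < rows and 0 <= c + dc < cols]
```

An interior pixel of a 3x3 image has 4 neighbors under 4-connectivity and 8 under 8-connectivity, while a corner pixel has only 2 and 3 respectively, which is why boundary handling matters in pixel-level algorithms.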

      2. Block-based representations: Here, the image is divided into a set of (rectangular) blocks of fixed size. The number of elements is somewhat smaller than with pixel-based representations, but still only local information is stored, as in the pixel-based case. Block-based representations can be used for both gray-scale and binary images. The representation is used in compression, segmentation, extracting different image features, etc. [9].

      3. Region based representations: Also known as superpixel representation. Here the regions are not rectangular; they are formed by grouping similar and connected pixels. The adjacency information between regions is usually represented as a RAG (region adjacency graph) or a combinatorial map. The representation is used for object detection and segmentation, but different unions of multiple regions have to be considered [9].
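A minimal sketch of building a RAG from a toy label image (the function name and the choice of 4-adjacency are our assumptions, not from [9]):

```python
import numpy as np

def region_adjacency_graph(labels):
    """Build a RAG from a label image: an undirected edge joins two
    region labels whenever any of their pixels are 4-adjacent."""
    edges = set()
    h, w = labels.shape
    for r in range(h):
        for c in range(w):
            for r2, c2 in ((r + 1, c), (r, c + 1)):   # right and down suffice
                if r2 < h and c2 < w and labels[r, c] != labels[r2, c2]:
                    edges.add(frozenset((int(labels[r, c]), int(labels[r2, c2]))))
    return edges

# Hypothetical segmentation with three regions labelled 1, 2, 3.
seg = np.array([[1, 1, 2],
                [1, 3, 2]])
rag = region_adjacency_graph(seg)
```

Here all three regions touch each other, so the RAG is a triangle of edges; segmentation algorithms then merge regions by collapsing edges of this graph.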

      4. Hierarchical representations: The representation uses the most likely unions of regions from region-based representations. The image can be represented at different scales. Examples include the min-/max-tree, quad tree, bin tree, etc. Applications include object detection, video segmentation, image segmentation and filtering, image simplification, etc. [10].
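The quad tree mentioned above can be sketched as a recursive split of a square image into four quadrants until each block is homogeneous (a toy numpy example; the value-range homogeneity test is our choice):

```python
import numpy as np

def quadtree(block, tol=0.0):
    """Recursively split a square block until each leaf is near-homogeneous.

    Returns a nested structure: a float (the leaf's mean value) or a list of
    four subtrees in (top-left, top-right, bottom-left, bottom-right) order.
    """
    if block.max() - block.min() <= tol or block.shape[0] == 1:
        return float(block.mean())                     # homogeneous: stop
    h = block.shape[0] // 2
    return [quadtree(block[:h, :h], tol), quadtree(block[:h, h:], tol),
            quadtree(block[h:, :h], tol), quadtree(block[h:, h:], tol)]

img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 0, 0],
                [0, 0, 0, 0]], dtype=float)
tree = quadtree(img)
```

The 4x4 image collapses to just four leaves here, which is the point of the hierarchy: uniform areas are summarized at a coarse scale, and only detailed areas are split further.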

    2. Image Representations based on their level of abstraction

      Based on the level of abstraction, image representation methods are categorized into three groups: low-level, intermediate-level and high-level representations.

      Fig. 3. Classification based on level of abstraction

      1. Low-level representations or statistical image representation: These representations start with pixel based representations, in which pixel brightness or colour is used directly. In computer vision, we are interested in physical quantities of observed scenes, e.g. distance, albedo, etc., or in recognition of objects. Pixel based representations contain this information in a very indirect form. We should extract desirable invariant quantities from raw images, that is, to transform images to some different representation.

        The most direct way to obtain some useful operations on images is to represent them as mathematical objects. There are two common types of mathematical representations of images. They are functional representations and stochastic models. Functional representations can be used for applying spatial transformations (e.g. scaling, rotation, affine, or projective) and for changing the functional basis (Fourier and wavelet transforms, etc.). Stochastic models (e.g. Markov fields) are useful in extracting statistical properties of visual scenes.
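A functional representation in the Fourier basis, for instance, is a lossless change of basis; a minimal numpy sketch (the toy image is hypothetical):

```python
import numpy as np

# Treat the image as a sampled function f(x, y) and move to a Fourier basis.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                      # a bright square on a dark field

F = np.fft.fft2(img)                     # change of basis: spatial -> frequency
recon = np.fft.ifft2(F).real             # the inverse transform recovers f
```

Nothing is lost in the transform itself (`recon` equals `img` up to rounding), and the DC coefficient `F[0, 0]` equals the sum of all pixel values, which is why frequency-domain representations are convenient for filtering and for reasoning about global image statistics.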

      2. Intermediate representations: An intermediate-level image representation lies between low-level sample-based statistical representation and high-level object-based semantic representation. We can also consider intermediate representations as a way to fill the semantic gap between pixel-level and semantic-level representations [11].

        This image representation aims to meet the following criteria.

        • It can describe a large class of images with simple constructs and structures, i.e., simple syntax and yet versatile semantics.

        • It can be derived from raw pixels at a reasonable complexity.

        • It is compact, to conserve transmission bandwidth and storage.

        • It can directly support common image analysis, synthesis, and query operations, without having to be converted into other representations.

        These criteria are important for a wide range of multimedia applications, such as digital libraries, telemedicine, distance learning, image/video streaming, e-commerce, content-based image retrieval, etc. [11].

        Contour-level representation is an example of an intermediate representation, where the contours in the image can be extracted locally or globally; in the latter case, images are represented as regions. Contours can be represented either as connected chains or as separate edge points. In addition to some degree of invariance, these representations also help to reduce data dimensionality: if images are 2D signals, then contours are 1D signals. Structural representation is another example, whose basic elements are straight lines, corners, arcs, etc. Vector graphics representation also belongs to the intermediate-level representations; here the image representation is based on vectors which lead through locations called control points or nodes. Structural or contour image descriptions are derived from raw images in computer vision, while vector graphics is the reverse process.
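The claim that contours reduce 2D images to 1D signals can be illustrated with a centroid-distance shape signature, a standard way of turning a closed boundary into a one-dimensional function (the toy contour below is hypothetical):

```python
import numpy as np

def centroid_distance_signature(contour):
    """Reduce a closed 2D contour to a 1D signal: the distance of each
    boundary point from the shape centroid, in traversal order."""
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    return np.linalg.norm(pts - centroid, axis=1)

# Boundary of a unit square, traversed counter-clockwise.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sig = centroid_distance_signature(square)
```

For the square, every corner sits at the same distance sqrt(0.5) from the centroid, so the signature is constant; more irregular shapes produce more structured 1D signals, which downstream descriptors (e.g. Fourier descriptors) can then analyze.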

      3. High level or semantic (knowledge-based) representations: Semantic representation is the highest level of image representation, in which image regions are labelled with meaningful labels (words). It offers a high-level scene description using constructs like edges, shapes, surfaces, even three-dimensional (3-D) objects, and the relations between the constructs. However, semantic image representations are difficult to generate and computationally expensive, unless the image is synthesized from known mathematical models. Extracting objects from acquired natural images remains a daunting technical challenge [11].

    3. Image Representations based on image features

      Image representation can also be classified on the basis of the features extracted from an image to describe its content [12]. These image features can be categorized as low-level and higher-level descriptors. Color, texture, and shape are traditional low-level image features which have been widely used for image representation. Higher-level representations try to capture richer semantic content of an image; perceptual features fall under this category.

      Fig. 4. Classification based on image features

      1. Color based image representations: Color-based image representation techniques include color histograms [13], [14], color moments [15], color sets [16], color constants [17], etc., which are mainly characterized by simplicity and easy computation. However, most color features lack color-spatial distribution information, which is particularly beneficial in grayscale image representation [12].
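A joint RGB color histogram of the kind cited in [13] can be sketched in a few lines (the bin count and the toy 1x3 image are our choices, not from the cited works):

```python
import numpy as np

def color_histogram(image, bins_per_channel=4):
    """Joint RGB histogram: count pixels falling into each (r, g, b) bin."""
    step = 256 // bins_per_channel
    idx = image // step                              # per-channel bin index
    hist = np.zeros((bins_per_channel,) * 3, dtype=int)
    for r, g, b in idx.reshape(-1, 3):
        hist[r, g, b] += 1
    return hist

# A hypothetical 1x3 RGB image: two red pixels and one blue pixel.
img = np.array([[[255, 0, 0], [255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
h = color_histogram(img)
```

Note that the histogram records only how many pixels fall in each color bin, not where they are, which is exactly the loss of color-spatial information the text describes.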

      2. Texture based image representations: Texture-based image features capture the spatial arrangement of pixel intensity values, as opposed to color features, and are usually preferred to color features when images with highly textured objects are being represented. Low-level texture features describe an image region that comprises smaller repeating elements which periodically repeat themselves in a manner defined by a certain rule. There are four principal approaches used to derive texture features: statistical, geometrical, model-based, and signal-processing (filtering-based) approaches [12].

        Statistical methods describe texture by statistical properties of the grey levels of pixels in an image. The most popular statistical methods include co-occurrence matrices [18] and the autocorrelation function [19]. Geometrical texture representations such as tree-structured features [20] characterize texture as being constructed of smaller units called textons, which are spatially arranged based on a certain placement rule. Model-based texture methods, e.g. Markov random fields [21] and the fractal model [22], focus on building an image model which describes texture. Filtering-based approaches to texture representation use filtering techniques such as spatial-domain filtering [23] or Fourier-domain filters [24] to perform frequency analysis of grey-level intensities in an image.
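A grey-level co-occurrence matrix for the common "one pixel to the right" offset can be sketched as follows (a simplified version of the idea in [18]; the toy image and level count are our choices):

```python
import numpy as np

def glcm(image, levels):
    """Grey-level co-occurrence matrix for the horizontal offset (0, 1):
    M[i, j] counts pairs of horizontally adjacent pixels with levels (i, j)."""
    M = np.zeros((levels, levels), dtype=int)
    for row in image:
        for a, b in zip(row[:-1], row[1:]):   # each pixel and its right neighbor
            M[a, b] += 1
    return M

# A hypothetical 2x3 image already quantized to 2 grey levels.
img = np.array([[0, 0, 1],
                [0, 1, 1]])
M = glcm(img, levels=2)
```

Texture descriptors such as contrast, energy, and homogeneity are then computed as simple statistics of this matrix, typically over several offsets and directions rather than the single one used here.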

      3. Shape based image representation: Shape-based image representation is more efficient for classifying images that contain objects with distinct shapes, and is often used as an alternative to texture- and color-based image representation techniques. Most shape-based image representation techniques belong to either contour-based or region-based approaches [12].

        Region-based approaches use the contour and the region enclosed within it to represent shape, while contour-based approaches extract shape features using only the edge information in an image. Each of these approaches to shape feature extraction can be further subdivided into global and structural methods. Global region-based approaches represent the shape as a whole. Approaches of this type used in computer vision include simple features such as area, Euler number, and eccentricity, the grid method [25], and the shape matrix [26]. Structural region-based approaches, on the other hand, divide the shape region into segments and extract features from each segment separately; this approach was applied in the convex hull [27] and core model [28] methods. Global contour-based methods represent the shape contour as a whole, similarly to the global region-based approach. Examples of popular global contour-based methods include simple shape descriptors (e.g. compactness, eccentricity and perimeter), shape signatures [29], and spectral descriptors such as the Fourier descriptor [30] or the wavelet transform [31]. Structural contour-based approaches include chain codes [32], polygon decomposition [33], B-splines [34], and shape invariants such as geometric, algebraic and differential invariants.
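The chain code cited above encodes a contour as a sequence of step directions between consecutive boundary points; a minimal Freeman chain-code sketch (direction numbering follows the usual convention of 0 = east, increasing counter-clockwise in 45-degree steps):

```python
# Freeman chain code directions for unit steps between boundary points.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a contour given as (x, y) points with unit steps between them."""
    return [DIRECTIONS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

# Boundary of a 1x1 square, traversed counter-clockwise back to the start.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
code = chain_code(square)
```

The resulting code is a compact 1D description of the boundary; rotating the shape by 90 degrees merely shifts every code value by 2 (mod 8), which is the basis of rotation-tolerant chain-code matching.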

      4. Perceptual feature based image representations: Higher-level image descriptors, which exploit the rules of human visual perception mechanisms, are also called perceptual features. There have been numerous attempts by computer vision researchers to improve computers' performance in image analysis by mimicking human vision. This is achieved by applying the Gestalt laws of human perception. According to the Gestalt laws, human vision can perceive objects based solely on their shape, without seeing minor details such as texture or shades of color. Perceptual grouping has applications in edge detection, object detection, CBIR, etc. [12].

  4. REVIEW OF SOME IMAGE REPRESENTATION WORKS

    1. Cognitive Image Representation [2]

      Cognitive image representation corresponds to a hypothesis about the human way of recognizing images: consecutive approximations with increasing resolution for selected regions of interest. Such an image representation is suitable for creating object learning models, which should be extracted from image databases in accordance with predefined decision rules.

      The main idea of this approach is based on the method for image representation with Inverse Spectrum Pyramid (ISP) decomposition. The decomposition represents the image with consecutive approximations based on any kind of 2D orthogonal transform (DCT, WHT, etc.), retaining the resolution and increasing the approximation quality. The calculated transform coefficients build the consecutive layers of the spectrum pyramid.

      One of the main advantages of this method is the ability to obtain query results from large databases faster. It permits the development of interactive systems that allow the user to define various queries of the kind: find the N most similar images which best suit a chosen set of image properties. Significant application areas include image coding, image archiving, image transmission systems, distance learning, remote medical diagnostics and patient monitoring, etc.

    2. BSP Representation [35]

      A Binary Space Partitioning (BSP) tree can be used to represent different images. The BSP representation of color images is built in the following steps:

      Firstly, the whole image is binarized using the Binary Quaternion Moment-Preserving (BQMP) thresholding method. Secondly, a partitioning line is chosen to divide the output image into two regions such that at least one of the regions is relatively homogeneous, i.e., for a binary image it is either almost or completely black or white. Thirdly, a color is chosen to represent the part of the input image contained in each region; in the interest of computational speed, the element values of the representative color are calculated as the mean of the red, green and blue components of all the pixel colors in the region. Finally, these color values, together with the partitioning line parameters, are recorded and used as the representation of the image at the first partition level.

      The process is repeated until no more regions can be partitioned or until a certain number of iterations is reached. Therefore, at the end of the j-th iteration one has j image representations in a hierarchical order.

      A BSP tree representation of an image is illustrated in Fig. 5. When representing an image f with a tree denoted Tf, the nodes of the tree are used as 'containers' of information about each partition region of the image. The amount of information contained in a node can be as much as is necessary. The root node contains the average color of the whole image and the first partitioning line. The nodes that correspond to regions which are not partitioned (cells) contain only the color information of the region.
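A much-simplified sketch of such a partition tree follows. Note the assumptions: the paper's BQMP thresholding and partitioning-line search are replaced here by a fixed midline split and a value-range homogeneity test, so this only illustrates the tree structure, not the actual BSP method:

```python
import numpy as np

def bsp(region, tol=0.0, depth=0, max_depth=4):
    """Simplified BSP-style partition of a grayscale block.

    Each node stores the region's mean value; internal nodes also store
    their two children, produced by splitting the longer side in half.
    """
    node = {"mean": float(region.mean())}
    h, w = region.shape
    if region.max() - region.min() <= tol or depth == max_depth or max(h, w) < 2:
        return node                                   # homogeneous leaf (cell)
    if h >= w:                                        # split the longer side
        a, b = region[:h // 2, :], region[h // 2:, :]
    else:
        a, b = region[:, :w // 2], region[:, w // 2:]
    node["children"] = [bsp(a, tol, depth + 1, max_depth),
                        bsp(b, tol, depth + 1, max_depth)]
    return node

img = np.array([[0, 0, 8, 8],
                [0, 0, 8, 8]], dtype=float)
tree = bsp(img)
```

As in the paper's scheme, truncating this tree at any depth yields a coarser approximation of the image, with the root alone giving just the average color.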

      Fig. 5. Example of a BSP Tree Representation

    3. Bio Inspired Model Representation [4]

      The bio-inspired model is an image representation model based on the non-classical receptive field (nCRF) and a backwards control mechanism, proposed by the inspiration of biological mechanisms. The model is used for image representation and image analysis using a multi-layer neural network rooted in the human vision system. Having complex neural layers to represent and process information, the biological vision system is far more efficient than machine vision systems. The neural model simulates the non-classical receptive field of the ganglion cell and its local feedback control circuit, and can represent images, beyond the pixel level, self-adaptively and regularly. The nCRF of a ganglion cell (GC) is the basic structural and functional unit of the retina. The results of experiments on rebuilding, distribution and contour detection show that this method can represent images faithfully at low cost, and can produce a compact and abstract approximation to facilitate successive image segmentation and integration. This representation schema is good at extracting spatial relationships between different components of images and at highlighting foreground objects against the background, especially for natural images with complicated scenes. It can further be applied to object recognition or image classification tasks in the future.

    4. MPS Representation [11]

      MPS is a versatile semantics-driven image representation that can support many common operations in visual computing and communications, in addition to being itself an efficient image coding scheme. The MPS consists of edges that are extracted and organized successively from fine to coarse scales. The edges are further classified into two types: pulse edges and step edges. MPS is an intermediate-level image representation that reaches a good compromise between its construction cost and descriptive power. It has a compact form, and hence is amenable to image compression.

      Furthermore, since the representation consists of semantically meaningful primitives like edges of different scales and types, many common image operations, such as classification, restoration, detection, and content-based retrieval, can be performed directly in the MPS framework, without first converting the coded image back to the spatial domain. It has applications in compression, scene classification, etc.

    5. Bag of Visual Words

      In computer vision, a bag of visual words is a vector of occurrence counts of a vocabulary of local image features. A definition of the BoW model can be the "histogram representation based on independent features" [36].

      To represent an image using the BoW model, the image can be treated as a document. Similarly, "words" in images need to be defined. This usually involves three steps: feature detection, feature description, and codebook generation. Extracting the BoW feature from images involves the following steps: (i) automatically detect regions/points of interest, (ii) compute local descriptors over those regions/points, (iii) quantize the descriptors into words to form the visual vocabulary, and (iv) find the occurrences in the image of each specific word in the vocabulary to construct the BoW feature (a histogram of word frequencies) [37].
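Steps (iii) and (iv) can be sketched with numpy (assumptions: the descriptors are synthetic 2-D stand-ins for real local descriptors such as SIFT, and the codebook is fixed by hand rather than learned with k-means):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local descriptors for one image: two clusters of patches.
descriptors = np.vstack([rng.normal(0, 0.1, (20, 2)),   # patches near "word" A
                         rng.normal(5, 0.1, (30, 2))])  # patches near "word" B

# Step (iii): a 2-word codebook (in practice, k-means cluster centers).
codebook = np.array([[0.0, 0.0], [5.0, 5.0]])

# Step (iv): assign each descriptor to its nearest visual word and count.
dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
words = dists.argmin(axis=1)
bow = np.bincount(words, minlength=len(codebook))       # the BoW histogram
```

The final `bow` vector counts how often each visual word occurs in the image; as the text notes, this histogram discards where in the image the patches occurred.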

      One of the disadvantages of BoW is that it ignores the spatial relationships among the patches, which are very important in image representation, though researchers have proposed several methods to incorporate spatial information. Applications include image classification, content-based image indexing and retrieval (CBIR), etc. [14].

    6. Object Bank [38]

      Object Bank (OB) is a representation model of natural images based on objects or, more rigorously, a collection of object-sensing filters built on a generic collection of labeled objects. While the OB representation offers a rich, high-level description of images, a key technical challenge of this representation is the curse of dimensionality, which is severe because of the size (i.e., number of objects) of the object bank and the dimensionality of the response vector for each object. Typically, for a modest-sized picture, even hundreds of object detectors result in a representation of tens of thousands of dimensions. Therefore, to achieve a robust predictor on a practical dataset with typically a couple of hundred instances per class, structural risk minimization via appropriate regularization of the predictive model is essential. Applications include scene classification, object recognition, etc.

    7. Deep Neural Networks based Image Representation [8]

    DNNs have shown good performance for image representation in many computer vision tasks. The Restricted Boltzmann Machine (RBM) [39], the Auto-Encoder (AE) and Convolutional Neural Nets (ConvNets) [40] are three typical building blocks used for constructing DNNs. Based on these building blocks, some task-specific DNN architectures have also been proposed, like Convolutional Deep Belief Networks (CDBN) [41], Reconstruction Independent Component Analysis (RICA) [42], Deconvolutional Networks (DN) [43], etc. These techniques further improve the performance of many computer vision tasks, like hand-written digit recognition, object recognition, etc. The intuitive observation from these works is that different layers of a DNN extract different features at different scales, ranging from low-level features (edges, corners) to higher-level features (for example, semantically meaningful object parts). However, the training of these DNNs is usually very time consuming, and there are many tricks to network training.

  5. ANALYSIS OF REVIEW METHODS

    An analysis of the reviewed image representations is done in this section. The advantages, disadvantages and applications of each method are compared with the rest. Table I illustrates the results.

    TABLE I. COMPARISON OF REVIEWED IMAGE REPRESENTATION METHODS

  6. CONCLUSION

    Image representation plays an important role in the optimization of different image processing operations. Each image representation method was introduced to achieve better results in some particular image processing application. This review makes an attempt to go through many of the existing image representations in brief, and the authors believe that it may help researchers who want to investigate the field further to understand the different image representations. The analysis shows that current image representations are based on machine-level understanding only; that is, it is not possible for a human to understand how an image is represented, and what the image actually means, just by looking at those representations. So there is scope for a representation of an image that can be understood by both a human and a machine. The theory of visual perception, which describes how an image is represented in the human brain, may provide a good image representation with potential applications in the future.

    ACKNOWLEDGMENT

    The authors would like to thank the Image Processing Association of MEC (iPAM) for the support given to complete this work and the digital library of Govt. Model Engineering College, Kochi for providing the access to use the resources for this review.

    Method | Advantage | Limitations | Applications
    Cognitive | Faster query results from large databases; low computational complexity | Representation may vary based on human perception and selected regions of interest | Development of interactive database systems
    BSP | Moment-preserving thresholding technique | Mathematically expensive in choosing different parameters | Image coding; content-based image indexing
    Bio Inspired Model | Low cost; good at extracting spatial relationships | Some biological details of RF and retina remain unknown | Object recognition and image classification
    MPS | Better and more direct operability; carries semantic information | Construction is computationally expensive | Image compression; scene classification
    Bag of Visual Words | Semantic features are included | Ignores spatial relationships among the patches | Image classification; content-based image indexing and retrieval
    Object Bank | More efficient and scalable for large scene datasets | Curse of dimensionality; expensive in training | Scene classification; object recognition
    Deep Neural Network | Represents objects at different granularities | Training is time consuming; computation/optimization issues | Speech recognition; hand-written digit recognition

    REFERENCES

    1. Shaohong Fang, Chuanbo Chen, Yunping Zheng, An improved color image representation method by using direct non-symmetry and anti-packing model with triangles and rectangles, International Joint Conference on Artificial Intelligence, 2009.

    2. Roumen Kountchev, Stuart Rubin, Mariofanna Milanova, Vladimir Todorov, Roumiana Kountcheva, Cognitive Image Representation Based on Spectrum Pyramid Decomposition, 10th WSEAS Int. Conf. on Mathematical Methods and Computational Techniques in Electrical Engineering (MMACTEE'08), May 2-4, 2008.

    3. Xueli Wu, Bingzheng Wang, Yongquan Xia, A New Method for Image Representation, IEEE 3rd International Conference on Communication Software and Networks (ICCSN), 2011.

    4. Hui Wei, Qingsong Zuo, and Bo Lang, A Bio-Inspired Model for Image Representation and Image Analysis, 23rd IEEE International Conference on Tools with Artificial Intelligence, 2011.

    5. Rafael C. Gonzalez, Richard E Woods, Digital Image Processing, Pearson Education India, 2009.

    6. Nigel Chapman; Jenny Chapman, Digital Multimedia, Wiley. p. 86, 2002.

    7. How are images represented in the computer? [Online] Available: http://www.cse.unr.edu/~bebis/CS302/image_info.html

    8. Shenghua Gao, Lixin Duan, and Ivor Tsang, DEFEATnet - A Deep Conventional Image Representation for Image Classification, IEEE Transactions on Circuits and Systems for Video Technology, 2015.

    9. Jayaraman S, Veerakumar T, Esakkirajan S, Digital Image Processing, Tata McGraw-Hill Education, 2011.

    10. Kuan-Ting Yu, Shih-Huan Tseng, and Li-Chen Fu, Learning Hierarchical Representation with Sparsity for RGB-D Object Recognition, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012.

    11. Xiaohui Xue and Xiaolin Wu, Directly Operable Image Representation of Multiscale Primal Sketch, IEEE Transactions on Multimedia, Vol. 7, No. 5, October 2005.

    12. Albina Mukanova, Gang Hu, Qigang Gao, N-gram Based Image Representation And Classification Using Perceptual Shape Features, Canadian Conference on Computer and Robot Vision, 2014.

    13. M. Swain and D. Ballard, Color Indexing, International Journal of Computer Vision, vol. 7, no. 1, Nov. 1991, pp. 11-32.

    14. J. Han and K.K. Ma, Fuzzy Color Histogram and Its Use in Color Image Retrieval, IEEE Transactions on Image Processing, vol. 11, no. 8, Aug. 2002, pp. 944-952.

    15. M. Stricker and M. Orengo, Similarity of Color Images, Storage and Retrieval of Image and Video Databases III, vol. 2, no. 420, 1995, pp. 381-392.

    16. Y. Deng, B.S. Manjunath, C. Kenney, M.S. Moore, and H. Shin, An Efficient Color Representation for Image Retrieval, IEEE Transactions on Image Processing, vol. 10, no. 1, Jan. 2001, pp. 140-147.

    17. C.L. Novak and S.A. Shafer, Supervised color constancy using color chart, School of Computer Science, Carnegie Mellon University: Pittsburgh, PA, Rep. CMU-CS-90-140, 1990.

    18. V. Arvis, C. Debain, M. Berducat, A. Benassi, "Generalization of The Cooccurrence Matrix for Colour Images: Application to Colour Texture Classification," Image Analysis & Stereology, vol. 23, no. 1, 2004, pp. 63-72, doi: 10.5566/ias.v23.p63-72.

    19. T. Toyoda and O. Hasegawa, Texture Classification Using Extended Higher Order Local Autocorrelation Features, in Texture 2005: 4th International Workshop on Texture Analysis and Synthesis, pp. 131-136.

    20. R.W. Ehrich and F.L. Lai, Texture Region Growing Based Upon a Structure Model of Texture, Virginia Polytechnic Institute and State University: Blacksburg, VA, Rep. CS79010-R, Nov. 1979.

    21. G.R. Cross and A.K. Jain, Markov Random Field Texture Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-5, no. 1, Jan. 1983, pp. 25-39.

    22. A.P. Pentland, Fractal-Based Description of Natural Scenes, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, no. 6, Nov. 1984, pp. 661-674.

    23. M. Unser and M. Eden, Nonlinear Operators For Improving Texture Segmentation Based On Features Extracted By Spatial Filtering, IEEE Transactions on Systems, Man, and Cybernetics, vol. 20, no. 4, Aug. 1990, pp. 804-815.

    24. T.I. Hsu, A.D. Calway, and R. Wilson, Texture Analysis Using The Multiresolution Fourier Transform, 8th Scandinavian Conference on Image Analysis, May 1993, pp. 823-830.

    25. G.J. Lu and A. Sajjanhar, Region-Based Shape Representation and Similarity Measure Suitable for Content-Based Image Retrieval, Multimedia Systems, vol. 7, no. 2, Mar. 1999, pp. 165-174.

    26. A. Goshtasby, Description And Discrimination of Planar Shapes Using Shape Matrices, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-7, no. 6, Nov. 1985, pp. 738-743.

    27. N.M. Sirakov and P. Mlsna, Search Space Partitioning Using Convex Hull and Concavity Features for Fast Medical Image Retrieval, Proc. IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Apr. 2004, pp. 796-799.

    28. C.A. Burbeck and S.M. Pizer, Object representation by cores: identifying and representing primitive spatial regions, The University of North Carolina: Chapel Hill, NC, Rep. TR94-048 (Version 3), Oct. 1994.

    29. S. Giannarou and T. Stathaki, Shape Signature Matching for Object Identification Invariant to Image Transformation and Occlusion, Proc. 12th International Conference on Computer Analysis of Images and Patterns (CAIP'07), 2007, pp. 710-717.

    30. D.S. Zhang and G. Lu, A Comparative Study of Fourier Descriptors for Shape Representation and Retrieval, Proc. 5th Asian Conference on Computer Vision (ACCV'02), Jan. 2002, pp. 646-651.

    31. Q.M. Tieng and W.W. Boles, Recognition of 2D Object Contours Using The Wavelet Transform Zero-Crossing Representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 8, Aug. 1997, pp. 910-916.

    32. J. Iivarinen and A. Visa, Shape Recognition of Irregular Objects, Proc. SPIE Intelligent Robots and Computer Vision XV: Algorithms, Techniques, Active Vision, and Materials Handling, vol. 2904, 1996, pp. 25-32.

    33. J. Li and R.M. Narayanan, A Shape-Based Approach to Change Detection of Lakes Using Time Series Remote Sensing Images, IEEE Transactions on GeoScience and Remote Sensing, vol. 41, no. 11, Nov. 2003, pp. 2466-2477.

    34. Z. Huang and F.S. Cohen, Affine-Invariant B-Spline Moments for Curve Matching, IEEE Transactions on Image Processing, vol. 5, no. 10, 1996, pp. 1473-1480.

    35. S. Sudirman and Guoping Qiu, Color Image Representation using BSP Tree, International Conference on Color in Graphics and Image Processing (CGIP'2000), 2000.

    36. L. Fei-Fei and P. Perona, A Bayesian Hierarchical Model for Learning Natural Scene Categories, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005.

    37. Chih-Fong Tsai, Bag-of-Words Representation in Image Annotation: A Review, ISRN Artificial Intelligence, Hindawi, 2012.

    38. Li-Jia Li, Hao Su, Eric P. Xing, Li Fei-Fei, Object Bank: A High-Level Image Representation for Scene Classification & Semantic Feature Sparsification, Advances in Neural Information Processing Systems, 2010.

    39. G. E. Hinton, S. Osindero, and Y.-W. Teh, A fast learning algorithm for deep belief nets, Neural Computation, vol. 18, no. 7, pp. 1527-1554, 2006.

    40. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, Backpropagation applied to handwritten zip code recognition, Neural Computation, vol. 1, no. 4, pp. 541-551, 1989.

    41. H. Lee, Y. Largman, P. Pham, and A. Y. Ng, Unsupervised feature learning for audio classification using convolutional deep belief networks, in Advances in Neural Information Processing Systems, 2009, pp. 1096-1104.

    42. Q. V. Le, A. Karpenko, J. Ngiam, and A. Y. Ng, ICA with reconstruction cost for efficient overcomplete feature learning, in Advances in Neural Information Processing Systems, 2011, pp. 1017-1025.

    43. M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus, Deconvolutional networks, in Computer Vision and Pattern Recognition, 2010, pp. 2528-2535.

    44. Mudassar Raza, Muhammad Sharif, Mussarat Yasmin, Saleha Masood and Sajjad Mohsin, Brain Image Representation and Rendering: A Survey, Research Journal of Applied Sciences, Engineering and Technology 4(18): 3274-3282, 2012.
