Era Identification and Recognition of Stone In-scripted Kannada Characters Using Artificial Neural Networks

DOI : 10.17577/IJERTCONV2IS05019


Dr.H.S.Mohana, Dean & Professor

IT Department

Malnad College of Engineering Hassan- 573 201, Karnataka, India hsm@mcehassan.ac.in

G. Shivakumar, Associate Professor IT Department

Malnad College of Engineering Hassan- 573 201, Karnataka, India gskmce@gmail.com

Rajithkumar B K, P.G Student ECE Department

Malnad College of Engineering Hassan- 573 201 Karnataka, India rajit.bk@gmail.com

C.M.Naveen Kumar, Assistant Professor IT Department

Malnad College of Engineering Hassan- 573 201, Karnataka, India cmnk.it@gmail.com

Abstract: Stone inscriptions that still survive need to be explored and treasured, as they preserve our rich heritage and culture; however, several adverse factors such as weathering and invasions have made the characters in these inscriptions difficult to recognize. With the rapid growth of technology and the widespread use of computers in business and other fields, more and more organizations are converting their paper documents into electronic documents that can be processed by computers. At present the archaeological department uses estampage (a dabbing technique), a traditional way of obtaining a mirror image of a stone inscription, which requires considerable time and physical labor. In this context, the present work is aimed at extracting the information from stone inscriptions, enhancing the images with image processing techniques in Matlab, and performing character recognition and century identification.

In this work Matlab is used as the base software: the Image Processing Toolbox allows stone inscriptions to be documented easily from photographs taken with a simple digital camera, and the Neural Network Toolbox is used for learning and training.

KEYWORDS: Artificial Neural Network (ANN), Back Propagation Algorithm (BPA), Gray-Level Co-Occurrence Matrix (GLCM)

  1. INTRODUCTION

With the rapid growth of technology and the widespread use of computers in business and other areas, more and more organizations are converting their paper documents into electronic documents that can be processed by computers. Recognition of handwritten characters is difficult in any language. Kannada, the native language of Karnataka, a south-western state of India, is spoken by a large population, and Kannada script recognition differs from that of other scripts in a few significant ways. Not every artifact can be re-documented, since many have been stolen or lost over time, so the existing documentation has to be processed. Both before and after the invention of paper, writings were inscribed into stone, ceramic or metal. Image processing techniques are therefore used to recognize the characters efficiently.

Since time immemorial it has been the practice to engrave literature in stone, depicting the culture and traditions of the people and their descendants. The largest number of such stone inscriptions is said to be found in Karnataka. The inscriptions that still survive need to be explored and treasured, as they preserve our rich heritage and culture, but several adverse factors such as weathering and invasions have made them difficult to recognize. Earlier methods of exploring them, such as estampage and X-ray fluorescence, require either heavy human labor or heavy expenditure. A simple digital camera serves the purpose here, and a Matlab-based approach is therefore effective.

In the present work, the image is processed so that its era is identified and its characters are recognized. The major problem in identifying the characters of an inscription is the difference in writing style between different eras. This difference is taken as the main feature for classifying the era of a stone inscription, and the classification is implemented in Matlab using the Image Processing and Neural Network toolboxes. The Hoysala, Badami Chalukya, and Ashoka eras are considered for classification, with Gabor filters serving as the texture feature extractor. The stone-inscribed characters of these ancient eras, which differ from present-day Kannada characters, are then translated to current Kannada characters using global features, grid features, the Gray-Level Co-occurrence Matrix (GLCM), and Gabor filters, which helps in reading and recognizing the ancient script.

  2. RELATED WORKS

A review of the literature pertaining to the present topic is given here. In [1] the authors concentrate on century identification of ancient Tamil characters and their conversion into the current century's form using MATLAB; a recently introduced method for recognizing Tamil characters from stone inscriptions, the contourlet transform, is adopted. In [2], features are extracted directly from gray-scale character images by Gabor filters that are specially designed from statistical information about character structure; an adaptive sigmoid function is applied to the filter outputs to achieve better performance on low-quality images, and, to enhance the discriminability of the extracted features, the positive and negative real parts of the Gabor outputs are used separately to construct histogram features. In [3] the authors compare texture features that are based on the local power spectrum obtained by a bank of Gabor filters. The features differ in the type of nonlinear post-processing applied to the local power spectrum; Gabor energy, complex moments, and grating-cell-operator features are considered, and the capability of the corresponding operators to produce distinct feature-vector clusters for different textures is compared using two methods: the Fisher criterion and comparison of classification results.

  3. ALGORITHM

The input images of various ancient Kannada characters are collected from stone inscriptions of different eras.

    • The database is created by cropping the individual characters from the collected images.

    • Different characters of the same era are cropped and grouped under individual folders for training.

• Mixtures of all the characters of different eras are grouped under the same folder for testing, from which the query image is selected.

• For the purpose of displaying the current-century characters, the current-century Kannada characters are also collected, cropped and grouped in a separate folder (a sketch of this database preparation is given after the list).
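As an illustration of this preparation step, the following Matlab sketch crops characters from one photograph and files them under an era folder. The folder and file names ('database/train/hoysala', 'char01.png', etc.) are placeholders used for this sketch only, not the actual dataset layout.

% Sketch: interactively crop individual characters from an inscription
% photograph and save them under an era-specific training folder.
img = imread('hoysala_inscription.jpg');      % photograph from a digital camera (placeholder name)

outDir = fullfile('database', 'train', 'hoysala');
if ~exist(outDir, 'dir')
    mkdir(outDir);
end

for k = 1:10                                  % crop, say, ten characters
    charImg = imcrop(img);                    % drag a rectangle around one character
    imwrite(charImg, fullfile(outDir, sprintf('char%02d.png', k)));
end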

  4. METHODOLOGY

The input image is generally a color or a gray-scale image. The purpose of converting it to a gray image is that the image depth of a gray-scale image is 8 bits whereas that of a color image is 24 bits; opting for the conversion therefore makes the subsequent operations simpler.
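A minimal sketch of this conversion, assuming the photograph is read as a 24-bit RGB image:

% Convert a 24-bit color photograph to an 8-bit grayscale image.
rgbImg  = imread('inscription.jpg');   % placeholder file name
grayImg = rgb2gray(rgbImg);            % three 8-bit planes reduced to one 8-bit plane
whos rgbImg grayImg                    % shows the reduction in image depth/storage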

The Gaussian smoothing operator is a 2-D convolution operator that is used to 'blur' images and remove detail and noise. It uses a kernel that represents the shape of a Gaussian ('bell-shaped') hump. This kernel has some special properties which are detailed below.

The Gaussian distribution in 1-D has the form:

G(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{x^{2}}{2\sigma^{2}} \right)        (1)

where \sigma is the standard deviation of the distribution. We have also assumed that the distribution has a mean of zero (i.e. it is centered on the line x = 0). The distribution is illustrated in Figure 1.

Figure 1: 1-D Gaussian distribution with mean 0 and \sigma = 1

In 2-D, an isotropic (i.e. circularly symmetric) Gaussian has the form:

G(x, y) = \frac{1}{2\pi\sigma^{2}} \exp\left( -\frac{x^{2} + y^{2}}{2\sigma^{2}} \right)        (2)

This distribution is shown in Figure 2.

Figure 2: 2-D Gaussian distribution with mean (0, 0) and \sigma = 1

The idea of Gaussian smoothing is to use this 2-D distribution as a 'point-spread' function, and this is achieved by convolution. Since the image is stored as a collection of discrete pixels, a discrete approximation to the Gaussian function must be produced before the convolution can be performed.
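For example, a discrete Gaussian kernel can be generated and convolved with the image as sketched below; the 5x5 kernel size and sigma = 1 are illustrative choices, not values prescribed in this work.

% Discrete approximation of the 2-D Gaussian and convolution with the image.
grayImg   = imread('character.png');              % placeholder grayscale character image
h         = fspecial('gaussian', [5 5], 1.0);     % 5x5 Gaussian kernel, sigma = 1
smoothImg = imfilter(grayImg, h, 'replicate');    % convolution, replicating the border pixels
imshowpair(grayImg, smoothImg, 'montage');        % compare the original and smoothed images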

    4.1 Using a Gray-Level Co-Occurrence Matrix (GLCM)

A statistical method of examining texture that considers the spatial relationship of pixels is the gray-level co-occurrence matrix (GLCM), also known as the gray-level spatial dependence matrix. The GLCM functions characterize the texture of an image by calculating how often pairs of pixels with specific values and in a specified spatial relationship occur in the image, creating a GLCM, and then extracting statistical measures from this matrix.

4.1.1 Creating a Gray-Level Co-Occurrence Matrix

To create a GLCM, use the graycomatrix function. The graycomatrix function creates a gray-level co-occurrence matrix (GLCM) by calculating how often a pixel with intensity (gray-level) value i occurs in a specific spatial relationship to a pixel with value j. By default, the spatial relationship is defined as the pixel of interest and the pixel to its immediate right (horizontally adjacent). Each element (i, j) of the resultant GLCM is simply the number of times that a pixel with value i occurred in the specified spatial relationship to a pixel with value j in the input image. The number of gray levels in the image determines the size of the GLCM. By default, graycomatrix uses scaling to reduce the number of intensity values in the image to eight, but the NumLevels and GrayLimits parameters can be used to control this scaling of gray levels. The gray-level co-occurrence matrix can reveal certain properties about the spatial distribution of the gray levels in the texture image.
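A short sketch of this step using the toolbox functions named above; the offset and number of levels shown here are the defaults described in the text.

% Build the GLCM for the horizontally adjacent pixel pair and extract texture statistics.
grayImg = imread('character.png');                       % placeholder character image
glcm    = graycomatrix(grayImg, 'NumLevels', 8, ...
                       'Offset', [0 1]);                 % pixel of interest and its right neighbor
stats   = graycoprops(glcm, {'Contrast','Correlation','Energy','Homogeneity'});
featVec = [stats.Contrast, stats.Correlation, stats.Energy, stats.Homogeneity];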

    4.2 Artificial Neural Network:

Neural networks are composed of simple elements operating in parallel. These elements are inspired by biological nervous systems. As in nature, the connections between the elements largely determine the network function. A neural network can be trained to perform a particular function by adjusting the values of the connections (weights) between elements. Typically, neural networks are adjusted, or trained, so that a particular input leads to a specific target output. Neural networks have been trained to perform complex functions in various fields, including pattern recognition, identification, classification, speech, vision, and control systems. Neural networks can also be trained to solve problems that are difficult for conventional computers or human beings. The Neural Network Toolbox emphasizes network paradigms that build up to, or are themselves used in, engineering, financial, and other practical applications.

  5. IMPLEMENTATION

      1. Image coordinate system

Generally, the most convenient method for expressing locations in an image is to use pixel coordinates. In this coordinate system, the image is treated as a grid of discrete elements, ordered from top to bottom and left to right, as illustrated in Figure 3.

        Fig 3: The pixel coordinate system

        For pixel coordinates, the first component r (the row) increases downward, while the second component c (the column) increases to the right. Pixel coordinates are integer values and range between 1 and the length of the row or column.

There is a one-to-one correspondence between pixel coordinates and the coordinates MATLAB uses for matrix subscripting. This correspondence makes the relationship between an image's data matrix and the way the image is displayed easy to understand.
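For instance, the pixel in row 2, column 15 of a grayscale image I is read and modified with ordinary matrix subscripts (the values here are illustrative):

I = imread('character.png');   % placeholder grayscale image
v = I(2, 15);                  % intensity of the pixel at pixel coordinates (2, 15)
I(2, 15) = 255;                % set that pixel to white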

      2. Spatial Coordinates

In the pixel coordinate system, a pixel is treated as a discrete unit, uniquely identified by a single coordinate pair. At times, however, it is useful to think of a pixel as a square patch. In the spatial coordinate system, locations in an image are positions on a plane, and they are described in terms of x and y (not r and c as in the pixel coordinate system).

        The following figure illustrates the spatial coordinate system used for images. Notice that y increases downward.


        Fig 4: The spatial coordinate system

        This spatial coordinate system corresponds closely to the pixel coordinate system in many ways. For example, the spatial coordinates of the center point of any pixel are identical to the pixel coordinates for that pixel.

      3. Binary Images

In a binary image, each pixel assumes one of only two discrete values: 1 or 0. A binary image is stored as a logical array. By convention, the variable name BW is used to refer to binary images.
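As a small sketch, a grayscale character image can be converted to such a logical array with a global threshold; the use of Otsu's method here is an assumption, since the paper does not state how its binary images are produced.

I  = imread('character.png');    % placeholder grayscale image
BW = im2bw(I, graythresh(I));    % global threshold chosen by Otsu's method; BW is logical
disp(class(BW));                 % prints 'logical'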

      4. Grayscale Images

A grayscale image is a data matrix whose values represent intensities within some range. MATLAB stores a grayscale image as a single matrix, with each element of the matrix corresponding to one image pixel. By convention, the variable name I is used to refer to grayscale images.

      5. True color Images

A true color image is an image in which each pixel is specified by three values, one each for the red, green, and blue components of the pixel's color. MATLAB stores true color images as an m-by-n-by-3 data array that defines the red, green, and blue color components for each individual pixel. True color images do not use a color map; the color of each pixel is determined by the combination of the red, green, and blue intensities stored in each color plane at the pixel's location.

Graphics file formats store true color images as 24-bit images, where the red, green, and blue components are 8 bits each.

        A true color array can be of class uint8, uint16, single, or double. In a true color array of class single or double, each color component is a value between 0 and 1. A pixel whose color components are (0, 0, 0) is displayed as black, and a pixel whose color components are (1, 1, 1) is displayed as white. The three color components for each pixel are stored along the third dimension of the data array.
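For illustration, the color planes and the class conversion described above can be inspected as follows ('inscription.jpg' is a placeholder file name):

RGB  = imread('inscription.jpg');   % 24-bit true color image, class uint8
R    = RGB(:, :, 1);                % red plane
G    = RGB(:, :, 2);                % green plane
B    = RGB(:, :, 3);                % blue plane
RGBd = im2double(RGB);              % class double, components scaled to [0, 1]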

      6. Indexed Images

An indexed image consists of an array and a color map matrix. The pixel values in the array are direct indices into the color map. By convention, the variable name X is used to refer to the array and map to refer to the color map.

A color map is often stored with an indexed image and is automatically loaded with the image when you use the imread function. After you read the image and the color map into the MATLAB workspace as separate variables, you must keep track of the association between the image and its color map.

The relationship between the values in the image matrix and the color map depends on the class of the image matrix. If the image matrix is of class single or double, it normally contains integer values 1 through p, where p is the length of the color map: the value 1 points to the first row in the color map, the value 2 points to the second row, and so on. If the image matrix is of class logical, uint8 or uint16, the value 0 points to the first row in the color map, the value 1 points to the second row, and so on.
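A brief sketch of reading an indexed image together with its color map and expanding it to true color ('indexed.png' is a placeholder file name):

[X, map] = imread('indexed.png');   % X holds the indices, map holds the color map
RGB = ind2rgb(X, map);              % look up each index in the color map
imshow(X, map);                     % display using the associated color map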

      7. Artificial neural network

As noted earlier, neural networks are composed of simple elements operating in parallel, inspired by biological nervous systems, and the network function is largely determined by the connections (weights) between those elements. A network is trained to perform a particular function by adjusting the values of these weights.

    Fig 5 Neural network processing

Conceptually, a network forward-propagates activation to produce an output and backward-propagates error to determine weight changes (as shown in Figure 5). The weights on the connections between neurons mediate the values passed in both directions.

5.7.1 The Structure of the ANN

A neural network is a network of multiple neurons arranged in three kinds of layers: the input layer (denoted by x), the output layer (denoted by y) and the hidden layers. There is only one input layer and one output layer, but there can be multiple hidden layers. The number of neurons in the input and output layers is problem-specific, but there is no general rule for the number of hidden layers or the number of neurons in them.

The node input-output equations are:

s_1 = \mathrm{sigmoid}(x_1 w_{11} + x_2 w_{21} + x_3 w_{31} + \theta_1)
s_2 = \mathrm{sigmoid}(x_1 w_{12} + x_2 w_{22} + x_3 w_{32} + \theta_2)
t_1 = \mathrm{sigmoid}(s_1 u_{11} + s_2 u_{21} + \phi_1)
t_2 = \mathrm{sigmoid}(s_1 u_{12} + s_2 u_{22} + \phi_2)
t_3 = \mathrm{sigmoid}(s_1 u_{13} + s_2 u_{23} + \phi_3)
y_1 = \mathrm{linear}(t_1 v_{11} + t_2 v_{21} + t_3 v_{31} + \psi_1) = t_1 v_{11} + t_2 v_{21} + t_3 v_{31} + \psi_1
y_2 = \mathrm{linear}(t_1 v_{12} + t_2 v_{22} + t_3 v_{32} + \psi_2) = t_1 v_{12} + t_2 v_{22} + t_3 v_{32} + \psi_2

where w, u and v are the layer weights and \theta, \phi and \psi are the corresponding bias terms.
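The forward pass defined by these equations can be written out directly. The sketch below uses randomly initialized weights and biases purely for illustration; no actual weight values are given in this work.

% Forward pass of the 3-2-3-2 network described above, with random
% weights and biases for illustration only.
sigmoid = @(z) 1 ./ (1 + exp(-z));

x  = [0.2; 0.5; 0.9];               % inputs x1, x2, x3
W1 = rand(2, 3);  b1 = rand(2, 1);  % weights w and biases for the first hidden layer (s)
W2 = rand(3, 2);  b2 = rand(3, 1);  % weights u and biases for the second hidden layer (t)
W3 = rand(2, 3);  b3 = rand(2, 1);  % weights v and biases for the linear output layer (y)

s = sigmoid(W1 * x + b1);           % s1, s2
t = sigmoid(W2 * s + b2);           % t1, t2, t3
y = W3 * t + b3;                    % y1, y2 (linear outputs)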

Neural networks are adjusted, or trained, so that a particular input leads to a specific target output, and, as noted above, they can be trained to perform complex recognition and classification functions. In the present work a neural network is employed for training: the characters of the Hoysala, Ashoka and Badami Chalukya eras, together with the current Kannada characters, are given to the neural network.

    5.8 Back propagation Algorithm (BPA)

The back propagation algorithm is used to learn the weights of a multilayer neural network with a fixed architecture. It performs gradient descent to try to minimize the sum-squared error between the network's output values and the given target values. Figure 6 depicts the network components which affect a particular weight change; notice that all the necessary components are locally related to the weight being updated. This is one feature of back propagation that seems biologically plausible; however, brain connections appear to be unidirectional, not bidirectional as would be required to implement back propagation.

    Figure 6: The change to a hidden to output weight depends on error (depicted as a lined pattern) at the output node and activation (depicted as a solid pattern) at the hidden node.

Back propagation is the generalization of the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way as defined by the user. Networks with biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities.

Standard back propagation is a gradient descent algorithm, as is the Widrow-Hoff learning rule, in which the network weights are moved along the negative of the gradient of the performance function. The term back propagation refers to the manner in which the gradient is computed for nonlinear multilayer networks.
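In symbols, with learning rate \eta and sum-squared-error performance function E, each weight is adjusted along the negative gradient; this standard update rule is stated here for completeness:

\Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}}, \qquad E = \frac{1}{2} \sum_{k} (d_k - y_k)^2

where d_k is the target value and y_k the network output at output node k.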

    There are generally four steps in the training process:

    1. Assemble the training data.

    2. Create the network object.

    3. Train the network.

    4. Simulate the network response to new inputs.

In the present work three hidden layers are used for training and classification. The number of iterations and the error rate have to be considered; here around 2000 iterations were run, with an error rate of 0.03%.
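A hedged sketch of these four steps with the Neural Network Toolbox, using three hidden layers, about 2000 iterations and an error goal of 0.03% as stated above; the layer sizes, the feature matrix P, the target matrix T and the feature-extraction helper are placeholders, since the exact network dimensions are not listed in the paper.

% 1. Assemble the training data: one feature vector per column of P,
%    one target column per sample in T.
P = trainFeatures;                  % placeholder: features from the GLCM/Gabor stage
T = trainTargets;                   % placeholder: era/character targets for each sample

% 2. Create the network object with three hidden layers (sizes illustrative).
net = feedforwardnet([20 15 10]);
net.trainParam.epochs = 2000;       % around 2000 iterations
net.trainParam.goal   = 0.0003;     % stop at an error of 0.03%

% 3. Train the network using back propagation.
net = train(net, P, T);

% 4. Simulate the network response to a new (query) feature vector.
queryFeat = computeFeatureVector(queryImage);   % placeholder feature-extraction step
scores    = sim(net, queryFeat);
[~, idx]  = max(scores);            % index of the recognized character/era class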

6. RESULTS:

Fig 7: Gray image    Fig 8: Filtered image    Fig 9: Edge image

Fig 10: Dilated image    Fig 11: All holes filled    Fig 12: Noise removal based on area

Fig 13: Noise removal based on height    Fig 14: Dilated image    Fig 15: Bounding box

Fig 16: Query image    Fig 17: No query image selected by the user    Fig 18: Recognized character and its era

Fig 19: Query (Ashoka) image    Fig 20: Current Kannada character translation

7. CONCLUSIONS:

In this paper we presented a new method for character recognition, era identification and translation of ancient Kannada stone inscriptions. It provides a good tool for identifying the era of a stone inscription, and it also helps a common reader who knows present-day Kannada to read the ancient literature.

A major advantage of this work is that a simple digital camera is used for data collection. A Gaussian filter is used for noise removal and smoothing of the image, and a Canny filter serves the purpose of edge detection of the characters. Bounding boxes help to identify the regions of interest. The features of the individual characters of different eras are extracted using Gabor filters, which are well suited to texture images. ANN, GLCM, grid and global features are used for training, classification and translation of the characters. The characters of the Hoysala, Badami Chalukya and Ashoka eras have different feature vectors and hence are successfully classified into different categories, or eras.

Here only the Hoysala, Badami Chalukya and Ashoka periods are considered, and era identification is performed only on individual characters; the method can be extended to other eras as well. The system is user-friendly software: with its graphical user interface, a less skilled or semi-skilled person can use it. It can be used in the archaeological department for era recognition and translation, and it can act as a guideline for the common people. The main disadvantage is that a large database is necessary for training.

REFERENCES:

[1] S. Rajakumar (Research Scholar, Department of ECE, Sathyabama University, Chennai, India) and Dr. V. Subbaiah Bharathi (Principal, DMI College of Engineering, Department of CSE, Chennai, India), "Century Identification and Recognition of Ancient Tamil Character Recognition".

[2] Xuewen Wang, Xiaoqing Ding and Changsong Liu, "Gabor Filters-Based Feature Extraction for Character Recognition".

[3] Simona E. Grigorescu, Nicolai Petkov and Peter Kruizinga, "Comparison of Texture Features Based on Gabor Filters".

[4] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing (2nd edition).

[5] Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, Digital Image Processing Using MATLAB.

[6] Maalinee Ramu, "Printed Number Recognition Using Matlab", Universiti Teknologi Malaysia.

[7] Mohammed Ali Qatran (Department of Computer Science, Amran University, Yemen), "Template Matching Method for Recognition of Musnad Characters Based on Correlation Analysis".

[8] G. G. Rajput, Rajeswari Horakeri and Sidramappa Chandrakant (Department of Computer Science, Gulbarga University, Gulbarga, Karnataka, India), "Printed and Handwritten Mixed Kannada Numerals Recognition Using SVM".

[9] H. S. Mohana (Malnad College of Engineering, Hassan, India) et al., "Machine Vision Based Non-Magnetic Object Detection and Removal on Moving Conveyors in Steel Industry through Differential Techniques", International Journal of Computer Vision and Image Processing (IJCVIP) 2 (2012): 3, accessed March 14, 2014, doi:10.4018/ijcvip.2012070105.

[10] H. S. Mohana, G. Ashwathakumar, G. Shivakumar and K. C. Manjunatha (MCE, Hassan), "Detection and Classification of Moving Objects by Using Real Time Traffic Flux through Differential and Graphical Analysis", First International Conference on Computational Intelligence, Communication Systems and Networks (CICSYN), pp. 414-419, 2009.

[11] Ching Y. Suen and Robert J. Shillman, "Low Error Rate Optical Character Recognition of Unconstrained Handprinted Letters Based on a Model of Human Perception", IEEE Transactions on Systems, Man, and Cybernetics, June 1977.

[12] Vinod Chandra and R. Sudhakar, "Recent Developments in Artificial Neural Network Based Character Recognition: A Performance Study", IEEE, 1988.

[13] Evelina Maria De Almeida Neves, Adilson Gonzaga and Annie France Frere Slaets, "A Multi-Font Character Recognition Based on its Fundamental Features by Artificial Neural Networks", IEEE, 1997.

[14] D. Sasikala and R. Neelaveni, "Correlation Coefficient Measure of Multimodal Brain Image Registration using Fast Walsh Hadamard Transform", Journal of Theoretical and Applied Information Technology, 2005.

[15] Websites referred:
http://homepages.inf.ed.ac.uk/rbf/HIPR2/sobel.htm
http://en.wikipedia.org/wiki/Gaussian_blur
http://www.mathwork.com

About the authors

Prof. Dr. H. S. Mohana obtained the B.E. degree in Electrical and Electronics Engineering from the University of Mysore in 1986, the M.E. from IIT Roorkee, and the Ph.D. from VTU in 2011. He has worked as chairman and member of the Board of Examiners and Board of Studies with several universities. He has presented research findings in 12 national conferences and 4 international conferences and is recognized as an AICTE expert committee member. He has successfully completed one AICTE/MHRD-TAPTECH project and one AICTE/MHRD research project, and has coordinated two ISTE-sponsored STTPs. Presently, he is working as Professor and Dean (AA) at MCE, Hassan.

Prof. G. Shivakumar obtained the B.E. degree in 1990 from MCE, Hassan, the M.Tech. in 1998 from IIT Kharagpur, and a PGD (HRM) from KSOU, Mysore, in 2005. He is currently registered for a Ph.D. (external) at VTU. He has been serving MCE since 1990 and is presently working as Associate Professor. He has attended 32 workshops/symposiums. His areas of interest are microprocessor- and microcontroller-based system design, soft computing, pattern recognition, affective computing and virtual instrumentation. He has published 8 international journal papers, 14 international conference papers and 10 national conference papers.

Mr. Rajithkumar B. K. obtained the B.E. degree in Electronics and Communication Engineering from Visvesvaraya Technological University during 2012-13 and is pursuing the M.Tech. in Digital Electronics and Communication Systems at Malnad College of Engineering, Hassan. Presently, he is undertaking his academic project on stone-inscripted, handwritten, mixed Kannada character recognition using a template matching method at MCE, Hassan. He has attended 6 workshops/symposiums. His areas of interest are digital image processing, signal processing, and VHDL.

Mr. C. M. Naveen Kumar obtained the B.E. degree from MCE, Hassan, in 2010. He has been serving MCE since 2010 as Assistant Professor and is currently pursuing the M.Tech. (PT-QIP) in Computer Science and Engineering at MCE, Hassan. He has attended 9 workshops/symposiums. His areas of interest are signal processing and digital image processing.
