Indian Currency Detection for the Blind using MATLAB



Alen Thomas Varghese
Dept. of Electronics & Communication, Amal Jyothi College of Engineering, Kanjirapally, India

Jewel Domonic Savio Antony
Dept. of Electronics & Communication, Amal Jyothi College of Engineering, Kanjirapally, India

Jacob Shibu Chacko
Dept. of Electronics & Communication, Amal Jyothi College of Engineering, Kanjirapally, India

Jerrin Varghese
Dept. of Electronics & Communication, Amal Jyothi College of Engineering, Kanjirapally, India

Abstract— This paper presents the prototype of a system that recognizes the denomination of Indian currency, both notes and coins, aimed at visually impaired people. It reproduces audio messages that announce the denomination of a banknote placed in front of a camera by processing each frame of a continuous video feed; the message is heard by the visually impaired user through an earphone. The work takes its theoretical basis from Digital Image Processing (DIP) techniques. Banknotes are recognized primarily with the image-recognition method known as eigenfaces, which is based on the mathematical theory of Principal Component Analysis (PCA). To recognize coins of all denominations, a system has been created that recognizes a coin using an image-subtraction technique. The process performs three checks (radius, coarse and fine) on the input image; these checks give the technique rotation invariance, obviating the need to place the coin at a particular angle. Subtraction between the input object image and the database image is performed, and plotting the resulting values yields a minimum which, if less than a standard threshold, establishes recognition of the coin. The system is simulated in MATLAB. From this proposed method, a well-developed system can be brought out to help the visually impaired overcome their disability in daily life.

Index Terms— Indian Currency, Image Processing, PCA, Rotation Invariance, Image Subtraction.


    The recognition of currency denomination is among the most challenging problems faced by visually impaired people, since it constitutes an essential part of their day-to-day affairs. Although several recognition methods exist for them, those methods put users at high risk of becoming victims of deception, either because of the complexity and discomfort of using them or because of the dishonesty of certain people who could take advantage of their visual disability during the process. The present paper describes the phases of the implementation of a system for the recognition of Indian currency, both banknotes and coins, with a camera affixed to the user's goggles.

    To recognize the notes, in the first stage a digital processing of the image obtained from the camera locates a particular region of interest within that image. Then, taking that region of interest as the input image, a recognition stage is implemented using the eigenfaces face-recognition method, which in turn is based on the mathematical technique known as Principal Component Analysis (PCA). Finally, the results of tests made on the implemented system are presented.

    Coin recognition is handled with an image-subtraction technique. Image subtraction takes two images as input and produces a third image whose pixel values are the pixel values of the first image minus the corresponding pixel values of the second. It also incorporates a radius check, which assists in choosing the befitting coin from the database; the database stores the standard coins used for recognizing the input image. Once the precise image is selected, its features are extracted and subtracted from the input coin image. Rotation invariance is introduced by rotating the image at fixed angular intervals, which also provides the exact angle of difference between the coins on analysing the plot of the subtracted values.

    We then combine these results accordingly, and the required output reaches the visually impaired user as an audio message through an earphone.


    Indian banknotes come in seven denominations, and the value of each denomination can be read at the corners on both the front and the back of the note.


      One of the four regions of interest is determined from the image taken by the camera, to be then recognized, by finding the vertex coordinates of the banknote's region of interest using the end-to-end projection of the binary, eroded version of the input image. When such a region is detected, it is cropped and scaled to a 60 × 90 pixel region of interest.

      This image is then ready to be recognized with the processes described below, where it will be referred to as the input image. The various processes involved after the location of the region of interest are described in this section.

      1. Conversion of RGB Image into Grayscale

    The RGB colour image delivered by the digital camera is transformed into the grayscale image shown in Fig. I.A.1 by calculating the average of the three RGB channels of the image, using eqn (1):

    GRAY = (RED + GREEN + BLUE)/3    (1)
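Eqn (1) is a plain channel average. A minimal Python/numpy sketch (the paper's implementation is in MATLAB; the toy 2 × 2 image below is our own illustration):

```python
import numpy as np

def to_grayscale(rgb):
    """Eqn (1): average the three channels, GRAY = (R + G + B) / 3."""
    return rgb.astype(np.float64).mean(axis=2)

# A toy 2x2 RGB image (values 0-255).
img = np.array([[[30, 60, 90], [0, 0, 0]],
                [[255, 255, 255], [120, 150, 180]]], dtype=np.uint8)
gray = to_grayscale(img)   # gray[0, 0] == (30 + 60 + 90) / 3 == 60.0
```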

    FIG I.A.1 Grayscale Image

    2) Binarization of the Grayscale Image

    The method implemented was adaptive thresholding using local means. This method proved robust to strong illumination changes, which allows the system to work under varied illumination conditions. The basic methodology consists of calculating, for a grayscale image g(x,y), a threshold t(x,y) for each pixel position (x,y), such that the binary image b(x,y) is as in eqn (2):

    b(x,y) = { 0, if g(x,y) < t(x,y);  1, otherwise }    (2)

    The threshold t(x,y) is computed through eqn (3), using the mean μ(x,y) of the pixel intensities in a W×W window centred on the pixel (x,y), where W is 1/8 of the grayscale image's width and k is a mean ponderation factor:

    t(x,y) = k · μ(x,y)    (3)

    The mean ponderation factor was heuristically chosen as k = 1.1 because, as a result of this selection, the white border of the banknote could be distinguished from the background in the binary image shown in Fig. I.A.2.

    FIG I.A.2 Binary Image

    3) Erosion of the Binary Image

    As one can see in Fig. I.A.2, the binary image contains noise, presented as isolated white pixels in the background. Erosion filters the image with a view to eliminating that noise to the maximum extent feasible. The operation constructs a new eroded binary image be by applying a structuring element D to the original binary image B, using eqn (4):

    be = B ⊖ D = { z ∈ Z² | z + d ∈ B for every d ∈ D }    (4)

    The structuring element D is a 3 × 3 matrix of ones, with its origin (marked *) at the centre:

            1  1  1
        D = 1 *1  1
            1  1  1

    4) Calculating the Projection Profile

    The vertical (PV) and horizontal (PH) projection profiles of the eroded image be(x,y), of Q rows and R columns, are defined as the sums of white pixels along rows and columns, using eqns (5) and (6), respectively:

    PV(x) = Σ_{y=1..R} be(x,y),  x = 1, 2, …, Q    (5)

    PH(y) = Σ_{x=1..Q} be(x,y),  y = 1, 2, …, R    (6)

    The outcome of applying eqns (5) and (6) to the eroded binary image is illustrated in Fig. 2.5. The coordinates of the region of interest are those rows and columns whose vertical and horizontal profiles, respectively, exceed a minimum threshold.
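Taken together, eqns (2)–(6) amount to a local-mean threshold, a 3 × 3 erosion, and row/column sums. A minimal Python/numpy sketch (the paper's implementation is in MATLAB; W = width/8 and k = 1.1 follow the text, everything else is our own illustration):

```python
import numpy as np

def binarize_local_mean(gray, k=1.1):
    """Eqns (2)-(3): threshold each pixel against k times the mean of a
    W x W window centred on it, with W = image width / 8."""
    W = max(1, gray.shape[1] // 8)
    pad = W // 2
    g = np.pad(gray.astype(np.float64), pad, mode='edge')
    # Integral image: box sums in O(1) per pixel.
    ii = np.pad(g, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    out = np.zeros(gray.shape, dtype=np.uint8)
    for x in range(gray.shape[0]):
        for y in range(gray.shape[1]):
            box = (ii[x + W, y + W] - ii[x, y + W]
                   - ii[x + W, y] + ii[x, y])
            out[x, y] = 1 if gray[x, y] >= k * box / (W * W) else 0
    return out

def erode(binary):
    """Eqn (4): 3 x 3 all-ones structuring element; a pixel survives only
    if its whole 3 x 3 neighbourhood is white, removing isolated noise."""
    p = np.pad(binary, 1)
    out = np.ones_like(binary)
    h, w = binary.shape
    for dx in (0, 1, 2):
        for dy in (0, 1, 2):
            out &= p[dx:dx + h, dy:dy + w]
    return out

def projections(img):
    """Eqns (5)-(6): white-pixel counts per row (PV) and per column (PH)."""
    return img.sum(axis=1), img.sum(axis=0)
```

On a flat background containing a single bright pixel, binarization marks only that pixel and erosion removes it, which is exactly the noise-filtering behaviour described above.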

    5) Cropping & Scaling of the Region of Interest

      Once the coordinates of the vertices of the region of interest are located, that portion is cropped from the grayscale image at an aspect ratio of 2:3, and subsequently scaled to a 60 × 90 pixel resolution using the nearest-neighbour interpolation method, as illustrated in Fig. I.A.3.

      FIG I.A.3 Cropped and Scaled Image
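Nearest-neighbour cropping and scaling can be sketched as follows (Python/numpy rather than MATLAB; the vertex arguments are hypothetical names for the coordinates found via the projection profiles):

```python
import numpy as np

def crop_and_scale(gray, top, left, bottom, right, out_h=60, out_w=90):
    """Crop the located region of interest, then rescale it to
    out_h x out_w using nearest-neighbour interpolation."""
    roi = gray[top:bottom, left:right]
    h, w = roi.shape
    # Map each output pixel to its nearest source pixel.
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return roi[np.ix_(rows, cols)]
```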

    6) Normalization of the Scaled Image

      In order to reduce the effects of illumination changes on the scaled image gesc, the latter must be normalized by applying eqn (7), an operation which carries the value of its current mean μ to zero and the value of its current standard deviation σ to one. The input image gN is thus obtained and will be classified in the recognition phase:

      gN = (gesc − μ)/σ    (7)

    B. Recognition of the Banknote

      This stage applies the steps for image recognition using eigenfaces which, in general terms, can be called eigen images, in accordance with principal component analysis (PCA), since here the approach is used for the recognition of images other than faces.

      Following that approach, a vector representation of the N-dimensional input image is projected to a new K-dimensional, reduced space through a change-of-basis operation. As a result, the input weights vector Ωin is obtained. The new basis is composed of the eigenvectors of the covariance matrix of the original banknotes data set. Such eigenvectors are called eigen images.

      Finally, the banknote denomination of the input image is identified by applying a suitable metric between the input weights vector and each of the sample weights vectors obtained from the projection of each sample image onto the new K-dimensional reduced space which, from now on, will be called the banknotes space. The metric used for identification in this paper is the Mahalanobis distance, which makes use of the eigenvalues of the original covariance matrix.

      Since the system is aimed at visually impaired people, the system output is an audio message that communicates the denomination of the banknote which, at the time of recognition, is being filmed by the camera.

      1. The Prospectus of Principal Component Analysis

        The goal of principal component analysis (PCA) is to reduce the dimensionality of the data while retaining as much information (but no redundancy) as possible from the original dataset.

        PCA allows us to compute a linear transformation that maps data from a high-dimensional space to a lower-dimensional sub-space.

      2. Setting the File System

        As training-set samples, M = 168 images of banknotes of different denominations were taken. The samples exhibit the four regions of interest of each denomination (see Fig. 1) and were processed in the same manner as described above; 24 sample images of resolution 60 × 90 are shown. Each sample is represented as a 5400 × 1 vector Γi. Next we proceed to calculate Ψ, the average image of the training set, using eqn (8):

        Ψ = (1/M) Σ_{i=1..M} Γi    (8)

        Each vector Γi is subjected to a process of normalization. Each corresponding normalized vector Φi is obtained using eqn (9):

        Φi = Γi − Ψ,  i = 1 … M    (9)

        The set of normalized vectors is then organized as columns in a matrix A = [Φ1 Φ2 … ΦM], where every column of A is a normalized vector representing a sample image. From A we obtain a symmetric matrix L, using eqn (10):

        L = AᵀA    (10)

        According to the theory of PCA, the principal components, which form a change-of-basis transformation suitable to better represent the similarities and differences within the original data set, are the eigenvectors ui of the covariance matrix of the original data set corresponding to the largest absolute values of the eigenvalues. These principal components are those known as eigen images and are obtained from each eigenvector vi of the L matrix using eqn (11):

        ui = A vi    (11)

        For the foregoing reasons, we chose the first K = 24 eigenvectors corresponding to the eigenvalues of largest absolute value from the set obtained using eqn (11). Since, within PCA, there is no formal method for choosing the value of K, we chose 24 based on the fact that we use 24 image species of different appearance for recognition (4 regions of interest × 6 denominations). Once K is selected, we normalize the eigenvectors (set their modulus to 1) and organize them in a matrix [u], which is part of the file system.

        Each vector in matrix A is also projected to the banknotes space, using eqn (12), where [u]ᵀ is the transpose of the eigenvectors matrix, which is also the change-of-basis matrix introduced in PCA theory. As a result, a matrix [Ω] is produced, which contains, by columns, each of the sample images from the training set represented in the new space as 24 × 1 vectors, called sample weights vectors:

        [Ω] = [u]ᵀ A    (12)

        Those vectors will be compared one by one, within the real-time system, with an input weights vector.

      3. Representation of the Input Image in the New Space

        If, as performed for all sample images of the training set, a 60 × 90 input image in the real-time system (Fig. 7) is represented as a 5400 × 1 vector Γin, it can also be represented as a 24 × 1 vector called the input weights vector, [Ωin] = [ω1 ω2 … ωK], by using Ψ and each eigen image ui, both loaded from the file system, through the projection defined in eqn (13), which obtains each component of the input weights vector:

        ωi = uiᵀ (Γin − Ψ),  i = 1 … K    (13)

      4. Mahalanobis Distance of the Image

        To determine the class to which the input image belongs, we use the Mahalanobis distance dm between Ωin and each one of the Ωj sample weights vectors from the training set in [Ω], by using eqn (14), where λi is each one of the K = 24 eigenvalues of the original covariance matrix stored in the file system, used as a divisor rather than a variance because each eigenvalue is proportional to a corresponding variance of the original covariance matrix:

        dm(j) = Σ_{i=1..K} (ωi − ωi(j))² / λi    (14)
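The training and classification steps above (average image, eigen images via the small L = AᵀA matrix, projection, and Mahalanobis comparison) can be sketched end-to-end in Python/numpy. This is a toy sketch, not the paper's MATLAB implementation: the sizes are shrunk and random data stands in for the real banknote images.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 100, 12, 4     # toy sizes; the paper uses N = 5400, M = 168, K = 24

# Training samples as columns (each a flattened, normalized ROI image).
G = rng.normal(size=(N, M))

psi = G.mean(axis=1, keepdims=True)      # eqn (8): average image
A = G - psi                              # eqn (9): normalized vectors
L = A.T @ A                              # eqn (10): small M x M matrix
lam, V = np.linalg.eigh(L)               # eigenpairs of L (ascending order)
order = np.argsort(lam)[::-1][:K]        # keep the K largest eigenvalues
lam_k = lam[order]
U = A @ V[:, order]                      # eqn (11): eigen images
U /= np.linalg.norm(U, axis=0)           # set each modulus to 1
Omega = U.T @ A                          # eqn (12): sample weights vectors

def classify(g_in, threshold):
    """Project the input image (eqn (13)) and return the index of the
    nearest training sample under the Mahalanobis distance (eqn (14)),
    or None if the minimum distance is not below the threshold."""
    w_in = U.T @ (g_in - psi[:, 0])
    d = (((Omega - w_in[:, None]) ** 2) / lam_k[:, None]).sum(axis=0)
    j = int(np.argmin(d))
    return j if d[j] < threshold else None
```

Classifying a training sample returns its own index, since its weights vector coincides with the corresponding column of [Ω]; the threshold test reproduces the discard behaviour described next.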

      5. Banknote Recognition

        An image corresponding to the input weights vector is classified as a banknote of a particular denomination (and that denomination output as an audio message) if the minimum distance from Ωin to each projection from the training set is smaller than a certain threshold; otherwise, the image is discarded and no output message is emitted. The threshold was experimentally determined from a set of test images corresponding both to banknotes of several denominations and to random images. The results of this experimentation are shown in Fig. I.B.5, where the success rate of the process is compared against the threshold value used. In the figure we can see the effect of the threshold selection on the overall efficiency of the system until a certain value is reached, from which point on the success rate remains constant.

        FIG. I.B.5 The Output Graph

    The recognition of Indian coins may seem a familiar problem, since various methods exist to implement it, such as those in coin-counting machines, which simply check the size and weight of the coin. That approach is not apt here, because coins of different denominations may have the same size, which makes the recognition difficult.

    Hence we propose a method combining the Rotation Invariance method and Image Subtraction.

    The proposed approach to coin recognition consists of five modules, namely image acquisition, image segmentation, radius calculation, image subtraction and threshold comparison. An input image of size 320 × 320 pixels is acquired and the coin is segmented from it. Fig. 2.1 elucidates the block diagram of the proposed methodology.

    1. Image Procurement and Detailing

      This section describes the way the images are procured and segmented, bringing them down to a level where the further recognition phases can operate so that the coin can be recognized reliably. The image is first converted to grayscale and then adjusted by increasing its contrast and taking its binary form; each of these steps is shown in Fig. III.A.1. The binary image is traversed row-wise to find the ends of the diameter, giving the exact position of the coin in the image.

      FIG III.A.1 The Prescribed Coin Formats
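The procurement and segmentation steps, and the diameter measurement they enable, can be sketched as follows (Python/numpy rather than MATLAB; the fixed global threshold of 128 is an assumed stand-in for the contrast adjustment and binarization described above):

```python
import numpy as np

def segment_coin(gray, thresh=128):
    """Binarize the grayscale coin image, then measure the coin's
    diameter and centre from the extent of its white pixels."""
    binary = (gray >= thresh).astype(np.uint8)
    rows = np.where(binary.any(axis=1))[0]   # rows containing the coin
    cols = np.where(binary.any(axis=0))[0]   # columns containing the coin
    diameter = max(rows[-1] - rows[0], cols[-1] - cols[0]) + 1
    centre = ((rows[0] + rows[-1]) // 2, (cols[0] + cols[-1]) // 2)
    return binary, diameter, centre
```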

    2. Calculating the Radius

    Diameter is calculated by finding the difference between the maximum and minimum positions of white pixels in the binary image formed during image segmentation, shown in the figure above. This cardinal step provides the value on which the selection of the suitable image from the database is based, abridging the processing time and discarding irrelevant data, since Indian coins have distinct radii.

    3. Rotation Invariance Image Subtraction

      After both the object and the test image have been procured, we move to the main constituent of coin recognition: image subtraction by the Rotation Invariance method. The recognition part consists of two main checks to narrow down mistakes and improve accuracy.

      a) Coarse Subtraction: The test image is given one full rotation in steps of a fixed angular distance, say 10°. At each rotation step, image subtraction is carried out between the rotated test image and the input object image, using eqn (15):

      SUBTRACTED(r,c) = OBJECT(r,c) − TEST(r,c)    (15)

      FIG III.A.2 Subtraction of Coins

      b) Fine Subtraction: This subtraction is the same as the coarse subtraction, except that the step size is reduced from 10° to 1° so that the calculation is finer than the coarse pass.

      When both subtractions are done, we obtain a resultant value for each; the minima correspond to the darkest patches in the output image.

    4) Threshold Comparison and Coin Recognition

      Once we obtain the minimum of the gray-value sum, deductions are made, based on comparison with a standard threshold, as to whether the coin matches or not. If the minimum value lies below the threshold, coin identification is established from the difference between the object and test images.

      The identified coin denomination is delivered as audio to the visually impaired user through the earphone.
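The coarse/fine rotation search and threshold comparison can be sketched as follows (Python/numpy rather than MATLAB; using the sum of absolute subtracted values as the match score is our reading of the "minima of the gray value sum", and the nearest-neighbour rotation helper is our own illustration):

```python
import numpy as np

def rotate_nn(img, deg):
    """Rotate an image about its centre (nearest-neighbour sampling,
    zero fill for pixels that fall outside the frame)."""
    th = np.deg2rad(deg)
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    # Inverse mapping: sample the source at rotated coordinates.
    ys = np.rint(cy + (y - cy) * np.cos(th) - (x - cx) * np.sin(th)).astype(int)
    xs = np.rint(cx + (y - cy) * np.sin(th) + (x - cx) * np.cos(th)).astype(int)
    ok = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
    out = np.zeros_like(img)
    out[ok] = img[ys[ok], xs[ok]]
    return out

def match_coin(obj, test, threshold, coarse=10, fine=1):
    """Coarse pass: one full turn in `coarse`-degree steps. Fine pass:
    `fine`-degree steps around the best coarse angle. The coin matches
    when the minimum absolute-difference sum falls below the threshold."""
    def score(a):
        return np.abs(obj.astype(int) - rotate_nn(test, a).astype(int)).sum()
    best = min(range(0, 360, coarse), key=score)
    m = min(score(a) for a in range(best - coarse, best + coarse + 1, fine))
    return m < threshold, m
```

For identical object and test images, the search finds a zero score at 0°, so any positive threshold accepts the coin; a rotated copy of the same coin is likewise accepted without requiring a fixed placement angle.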


    The proposed system for Indian currency recognition has been implemented in MATLAB. Results for the recognition of both notes and coins were obtained accordingly: banknote recognition via the Mahalanobis distance, and coin recognition via the image-subtraction technique, have both been implemented successfully.

    The results obtained from both methods have been collected and presented in various figures, and have been verified and set out in this paper.

    A success is considered if, by appropriate use of the methodology, the system has emitted an audio message corresponding to the denomination of the banknote or coin under test, at most at the second try; no result, if, after those two attempts, no message has been produced; and a false positive, if the system emits a denomination different from that of the object under test. The issuance of false positives can be avoided by working in near-ideal conditions.


A portable and easy access system for image recognition can be implemented using the technology of cameras and the theory of the Digital Image Processing.

One of the main constraints of the system developed in this paper is the fact that the background of the image containing the object to identify (i.e. the currency), must be contrasting with that object. Another constraint is that the illumination conditions over the image must be uniform.

This solves a day to day problem which takes place in the life of the visually impaired people.

Future work will include modifications of the technique and the merging of other image-processing techniques, such as neural-network training using edge detection, which would free the process from its dependency on a standard light intensity and a standard distance between the object and camera during image acquisition, adding to the accuracy of the process.


We would like to thank our institution, Amal Jyothi College of Engineering, for giving us the space and time to carry out the research and findings for this paper.

We thank our Principal, Fr. Jose Kanampuzha, for granting permission for all the activities we needed to undertake, and for his good wishes.

We thank our H.O.D., Prof. Satheesh Kumar, for his support in preparing this paper, and extend a big thanks to our mentor, Prof. Anu Abraham Mathew, for valuable advice and constant inspiration in all that we did.


[1] F. Grijalva, J. C. Rodríguez, J. Larco and L. Orozco, "Smartphone Recognition of the U.S. Banknotes' Denomination, for Visually Impaired People," © 2010 IEEE.

[2] M. Turk and A. Pentland, "Face recognition using eigenfaces," J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.

[3] R. Gonzalez and R. Woods, Digital Image Processing, 2nd ed. New Jersey, U.S.A.: Prentice Hall, 2002.

[4] E. Ashbridge, D. I. Perrett, M. W. Oram and T. Jellema, "Effect of Image Orientation and Size on Object Recognition: Responses of Single Units in the Macaque Monkey Temporal Cortex," Cognitive Neuropsychology, vol. 17, no. 1/2/3, pp. 13–34, 2000.

[5] R. Bremananth, B. Balaji, M. Sankari and A. Chitra, "A New Approach to Coin Recognition Using Neural Pattern Analysis," 2005 Annual IEEE INDICON, pp. 366–370.
