Currency Counting for Visually Impaired Through Voice using Image Processing

DOI : 10.17577/IJERTV9IS050137




Kolachina Sai Saranya 1, Ajay Kumar Badhan 2, Adavikolanu Alekhya 3, Chetty Madhumitha 4, Vangapandu Durga Charmika 5

1,3,4,5 Student, B.Tech (Information Technology)

2 Assistant Professor, Information Technology,

Vignan's Institute of Engineering for Women, Visakhapatnam, Andhra Pradesh, India

Abstract: Blind people may not be able to perceive currency notes, let alone count them. To resolve this difficulty, a framework called "Image-based currency recognition and counting system" is useful. Although existing currency recognition systems help blind and visually impaired people detect currency, they are still not adequate: they only announce the currency one note at a time as voice output but don't sum up the total amount, so it becomes difficult for a blind person to determine the total. In this paper we propose a currency recognition system that makes use of the SIFT algorithm, which is very efficient and fast in detecting currency; after detection, it sums up the total currency detected and provides the result as voice output to the blind user. The proposed system is implemented to detect seven different denominations of Indian currency. Initially, a preprocessing technique is applied to the currency note. Then ROI extraction and OCR are applied, followed by SIFT, which provides the key descriptors. Finally, the Hamming distance within the KNN algorithm is used for matching the key descriptors from the feature extraction stage. The results show that the proposed system can be used in real-time scenarios with an accuracy rate of 93.71% and a running time of 0.73 seconds.

Keywords: Currency counting, Image Processing, OCR, Visually impaired, Voice output.

  1. INTRODUCTION

    In the present era, where technological advances are at their apex, there is not even one sector that remains untouched by technology. Technology has made our lives much easier and likewise provides several cutting-edge aids for disabled people in every facet; one among them is currency recognition. It is a need for visually impaired people, as they are not able to identify the differences among various currency notes, due to which they are easily cheated by others.

    As per the statistics of the World Health Organization, there are around 285 million visually impaired people, among which 39 million are blind and the remaining have low vision. So there is a strong necessity to design a system that can help them recognize currency with ease and without any trouble. Several different techniques have been proposed that mainly focus on the text, color, and size of the currency, but they are not well suited, as they are very sensitive to illumination conditions [6].

    Fig.1. User capturing the Rs 20/- note

    Fig.2. Indian Currencies

    In this paper, the proposed system recognizes Indian currency from various perspective views. The application is developed using Python packages and then deployed onto a hardware device assembled on the blind-stick with a camera on top. The camera captures the image of the currency note and passes it to the hardware device, which contains the designed application and provides the output in the form of sound. The application recognizes the Rs 10, Rs 20, Rs 50, Rs 100, Rs 200, Rs 500, and Rs 2000 notes of Indian currency. Fig.1 represents how the user uses the system to capture the image of a Rs 20 note and get the result. Fig.2 represents the Indian currencies from ten rupees to two thousand rupees; the front and back face of each currency is shown.

    A. Related Work

    Several different models have been designed and implemented for currency recognition by different authors. In [2], the authors implemented a system using convolutional neural networks, one of the concepts used in deep learning. They prepared a dataset for the notes of Indian currency and, for fake notes, prepared a dataset from children's bank churan labels.

    Ahmed Yousry et al. in [3] proposed a currency recognition system for visually impaired people using the ORB algorithm. They implemented the algorithm on Egyptian currency with an accuracy rate of 96% and a runtime of 0.682 seconds. Their system is deployed on a mobile device through which the user can scan the currency, and the output is presented in the form of voice using the mobile speaker.

    Snehal Saraf et al. [4] proposed a system making use of the SIFT algorithm on the Android platform; they too deployed the system on Android-based mobile phones. The drawback is that the SIFT algorithm can be used for feature extraction but will not detect text features. So in the proposed system we use SIFT and, along with it, OCR (Optical Character Recognition) for detecting the text features.

    In [5], Akash Gupta et al. presented a review of the various currency detection techniques implemented to date. In [8], the authors presented a currency detection system for Mexican notes using artificial vision; they classified the notes based on color and texture features using the RGB space and local binary patterns.

  2. PROPOSED METHOD

    The block diagram for the proposed system is represented in Fig.3. It consists of two phases:

    Phase 1: In this phase the deployed system captures an unknown currency image as input, processes it, compares it with the data prepared in Phase 2, and provides the output in the format of voice using the speaker. Phase 1 has five steps in total. The first one is the preprocessing technique, which is used to eliminate the noise from the captured image and send the image on to the next operation. The preprocessing step is followed by segmentation and ROI extraction, which are used to separate the features of the currency, i.e. foreground and background. Then the SIFT algorithm is applied and, finally, the results are compared with the results from the datasets. Based on the threshold level, the output is produced in the format of voice.

    Phase 2: It is mainly used for creating datasets from the images of Indian currency notes. The SIFT (Scale-Invariant Feature Transform) algorithm is applied to these datasets to extract the features from the images and convert them into binary descriptors. These binary descriptors are stored in the database and used for comparison at the end.

    Fig.3. System Block Diagram

    1. Image Acquisition

      It involves capturing the image of a currency note in an acceptable form. The image is captured in RGB format by a web camera mounted on the head of the stick.

    2. Preprocessing

      In this step, the Gaussian smoothing technique is applied to blur the captured image in order to remove noise from it. It acts as a filter that makes use of the equation represented below:

      G(x) = (1 / √(2πσ²)) · e^(−x² / (2σ²))    (1)

      The above equation is in one dimension. In two dimensions the filter takes the form represented in equation (2):

      G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))    (2)

      The variables x and y represent the position from the origin along the horizontal and vertical axes, and σ represents the standard deviation of the Gaussian distribution.
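      Equation (2) can be sketched directly in Python; the function name and parameter values below are illustrative only, and in the deployed system this smoothing step would typically be performed by OpenCV's GaussianBlur rather than hand-rolled code:

      ```python
      import math

      def gaussian_kernel(sigma, radius):
          """Build a (2*radius+1) x (2*radius+1) filter from equation (2),
          then normalize it so the weights sum to 1."""
          kernel = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma)) / (2.0 * math.pi * sigma * sigma)
                     for x in range(-radius, radius + 1)]
                    for y in range(-radius, radius + 1)]
          total = sum(sum(row) for row in kernel)
          return [[v / total for v in row] for row in kernel]

      kernel = gaussian_kernel(sigma=1.0, radius=2)
      center = kernel[2][2]  # weight at the origin is the largest
      corner = kernel[0][0]  # weights fall off with distance from the origin
      ```

      Convolving the image with this kernel averages each pixel with its neighbors, weighted by distance, which is what suppresses the noise.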

    3. Segmentation

      This step is applied after preprocessing. In it, the image is divided into segments; the segments that don't contain any information are removed as noise, and the remaining segments are used for processing. There are several methods for implementing segmentation; in the proposed system the threshold method is implemented. It converts the grayscale image received from preprocessing into a binary image of 0 (representing black) and 1 (representing white). In this, the pixels are partitioned depending on their intensity value.

      The threshold method is represented in equation (3):

      V(x, y) = 1 if v(x, y) ≥ T
                0 if v(x, y) < T    (3)

      Here v(x, y) is the intensity of the image at (x, y) and T represents the threshold.
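      Equation (3) amounts to a single comparison per pixel. A minimal sketch on a toy 3x3 grayscale patch (the values and threshold are made up for illustration):

      ```python
      def threshold(image, T):
          """Apply equation (3): pixels at or above threshold T become 1 (white),
          the rest become 0 (black)."""
          return [[1 if v >= T else 0 for v in row] for row in image]

      # A tiny 3x3 grayscale patch with values in 0-255.
      gray = [[ 12, 200,  90],
              [130,  40, 255],
              [  0, 128, 127]]
      binary = threshold(gray, T=128)
      ```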

    4. ROI Extraction

      ROI stands for Region of Interest. In this technique, after segmentation is performed on the currency note, ROI extraction is applied to a portion of the image to extract its features. It creates a binary mask image in which the pixels belonging to the ROI are set to 1 and the remaining pixels to 0.
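      A minimal sketch of such a binary mask, assuming for illustration that the ROI is a rectangle (the paper does not specify the ROI shape):

      ```python
      def roi_mask(height, width, top, left, bottom, right):
          """Binary mask per the convention above: pixels inside the rectangular
          region of interest are 1, everything else is 0."""
          return [[1 if top <= y < bottom and left <= x < right else 0
                   for x in range(width)]
                  for y in range(height)]

      # 4x5 mask with a 2x3 region of interest.
      mask = roi_mask(4, 5, top=1, left=1, bottom=3, right=4)
      ```

      Multiplying the image by this mask pixel-wise keeps only the region whose features are to be extracted.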

    5. SIFT Feature Extraction

      The SIFT technique is applied to extract the key-points and compute their descriptors. It contains four steps for extracting the key-points, followed by a matching step:

      1. Scale-Space Extrema Detection: This step is applied to detect interest points in the currency image. The Laplacian of Gaussian is approximated for the image with various values of σ (which acts as a scaling factor).

      2. Key-Point Localization: In this step the key-points are determined, and a Taylor series expansion of the scale space is used to obtain a more accurate location of the extrema.

      3. Orientation Assignment: Next, this technique is applied to each key-point in order to achieve invariance to image rotation. A neighborhood is taken around the key-point location depending on the scale, and the gradient magnitude and orientation are calculated for that region.

      4. Key-Point Descriptor: Finally, the key-point descriptor is created. A 16x16 neighborhood around the key-point is taken and divided into 16 sub-blocks of 4x4 size. In the proposed system the descriptors are stored in binary form.

      5. Key-Point Matching: After the key-points are obtained for different images of the same currency, they are matched by identifying their nearest neighbors.
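      The first step above can be illustrated with a deliberately simplified one-dimensional sketch: blur a signal at several σ values, take differences of adjacent blurred versions (Difference of Gaussians, which approximates the Laplacian of Gaussian), and keep samples that beat all 8 neighbors across position and scale. The σ values and toy signal are our own assumptions; a real implementation works on 2-D images, and in practice OpenCV's SIFT implementation would be used:

      ```python
      import math

      def gaussian_kernel_1d(sigma):
          """Normalized 1-D Gaussian weights with radius 3*sigma."""
          radius = max(1, int(3 * sigma))
          vals = [math.exp(-(x * x) / (2.0 * sigma * sigma))
                  for x in range(-radius, radius + 1)]
          total = sum(vals)
          return [v / total for v in vals]

      def smooth(signal, sigma):
          """Blur a 1-D signal with a Gaussian kernel (borders clamped)."""
          k = gaussian_kernel_1d(sigma)
          r = len(k) // 2
          n = len(signal)
          return [sum(w * signal[min(max(i + j - r, 0), n - 1)]
                      for j, w in enumerate(k))
                  for i in range(n)]

      def dog_extrema(signal, sigmas):
          """Scale-space extrema detection in 1-D: build a scale space, take
          Differences of Gaussians between adjacent scales, and keep samples
          larger or smaller than all 8 neighbors in position and scale."""
          blurred = [smooth(signal, s) for s in sigmas]
          dogs = [[b2 - b1 for b1, b2 in zip(blurred[i], blurred[i + 1])]
                  for i in range(len(blurred) - 1)]
          keypoints = []
          for s in range(1, len(dogs) - 1):
              for i in range(1, len(dogs[s]) - 1):
                  v = dogs[s][i]
                  neighbors = [dogs[ds][i + di]
                               for ds in (s - 1, s, s + 1) for di in (-1, 0, 1)
                               if not (ds == s and di == 0)]
                  if v > max(neighbors) or v < min(neighbors):
                      keypoints.append((i, sigmas[s]))
          return keypoints

      # A spike in a flat signal; candidate key-points cluster around it.
      signal = [0.0] * 15 + [1.0] + [0.0] * 15
      kps = dog_extrema(signal, [1.0, 1.6, 2.6, 4.2])
      ```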

    6. Matching

      When a new image of a currency note is captured, the system applies the preceding preprocessing steps to it. The Optical Character Recognition (OCR) algorithm is applied to determine the characters printed on the captured currency note. After that, the SIFT technique is applied to obtain the binary descriptors. Once the binary descriptors are available, the Brute-Force Matcher algorithm is applied: the Hamming distance is used to match the binary descriptors of the newly captured currency note against the descriptors stored in the database. After matching, the database entry with the greatest number of matching descriptors is selected using the K-nearest neighbor technique, and the amount is finally announced.

      Fig.4. Matches received using Hamming Distance

      Fig. 4 represents the matching of Rs 20 with the dataset. The number 17 indicates the maximum number of matches with the database descriptors.
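      The matching stage can be sketched in pure Python on hypothetical 8-bit descriptors (real binary descriptors are far longer, and in practice OpenCV's BFMatcher with Hamming norm would perform this; the note labels and bit patterns below are made up for illustration):

      ```python
      def hamming(a, b):
          """Number of differing bits between two binary descriptors (as ints)."""
          return bin(a ^ b).count("1")

      def brute_force_match(query, database):
          """For each query descriptor, find the closest database class by Hamming
          distance (nearest neighbor, k = 1) and return the class with most votes."""
          votes = {}
          for q in query:
              best_label, _ = min(
                  ((label, min(hamming(q, d) for d in descs))
                   for label, descs in database.items()),
                  key=lambda pair: pair[1])
              votes[best_label] = votes.get(best_label, 0) + 1
          return max(votes, key=votes.get)

      # Hypothetical descriptors for two note classes.
      db = {"Rs 20": [0b10110010, 0b10110011],
            "Rs 50": [0b01001101, 0b01001100]}
      captured = [0b10110110, 0b10100010, 0b10110011]  # close to the Rs 20 entries
      note = brute_force_match(captured, db)
      ```

      The vote count returned here plays the same role as the match count (17) shown in Fig. 4.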

  3. SYSTEM DESIGN

      1. Hardware Implementation

        1. Raspberry Pi

          Fig 5: Raspberry Pi

          The Raspberry Pi is a tiny, fully functional computer in a low-cost package. It is available in various versions; in the proposed system the Raspberry Pi 3 model is used. It has a quad-core 64-bit ARM Cortex CPU, 1 GB of internal memory, and 4 USB ports, along with inbuilt Bluetooth and WiFi. The application is deployed onto this tiny computer, which is attached to the camera. When a currency note is scanned using the camera, the application will detect the note and provide the result in the form of voice through the speaker.

        2. Camera

          Fig 6: Camera

          The Omega camera is deployed on the top of the blind stick and connected to the Raspberry Pi system. The camera used in the proposed system has a resolution of 16 megapixels with a USB interface and night vision, so it can scan currency notes during the night, and its cost is negligible. The scanned images are sent to the Raspberry Pi for processing.

        3. Speaker

        Fig 7: Speaker

        The speaker is connected to the Raspberry Pi and plays the output in the form of voice. The speaker used in the proposed system is a basic model used only for audio output.
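        The final announcement step can be sketched as follows. The paper does not specify the text-to-speech mechanism, so the espeak invocation shown in the comment is purely an assumption; only the message composition is exercised here:

        ```python
        def announce_total(detected_notes):
            """Sum the detected denominations and build the sentence to be spoken."""
            total = sum(detected_notes)
            return "Total amount is {} rupees".format(total)

        message = announce_total([20, 50, 50])
        # On the Raspberry Pi the message could then be sent to the speaker via
        # an installed TTS engine (assumption -- not specified in the paper):
        # import subprocess; subprocess.run(["espeak", message])
        ```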

      2. Software Used

        Software is a group of programs that instructs the system to perform specific tasks as per the commands provided. These programs are built by programmers to interact with the system and its hardware. The software required for the proposed system is:

        • Operating System : Windows 7

        • Scripting Language : Python 2.7.2

        • IDE : IDLE for Python & Arduino IDE 1.8.5

  4. RESULTS

    The procedure that is implemented, the set of images used in the training dataset, and the results are presented in this section.

    1. Experimental Procedure

      The proposed system is deployed on a device attached to the blind person's stick. The camera is mounted on the top of the stick, and the system doesn't rely on capturing the image at a specific angle, unlike the one presented in [7]. The user has to bring the currency note in front of the camera, and the image will be captured. The proposed system is built from the libraries and modules of OpenCV, which are very efficient and give accurate results quickly. Fig. 8 represents the training set of all the Indian currencies (Rs 10, Rs 20, Rs 50, Rs 100, Rs 200, Rs 500, and Rs 2000). As represented, they contain the important portions that have the unique features required to train on and predict the currency. Moreover, in the dataset we store the important portion of the currency rather than the entire note, as the whole note may reduce the efficiency and accuracy of prediction and also reduce its speed.

      Fig 8: Training Set Samples used in the system

      Fig 9: Test Samples

      Figure 9 represents the sample images of Rs 10/- that are captured at different points of scaling and illumination.

      Fig 10: Visual test for the Indian Currency Notes

    2. Visual Results

      In this section we present the specifications of the different camera devices on which the proposed system was tested, along with step-by-step visual effects of each processing stage. The proposed system was tested on various cameras, from VGA, which has a pixel resolution of 640 x 480, up to high-resolution devices. In the proposed system we used a camera with an image resolution of 16 MP, a USB interface, and night vision.

      The visual process of the system is shown in Fig. 10, which represents the test results for rupees ten, twenty, fifty, and hundred respectively.

    3. System Evaluation

    The accuracy of the proposed system is determined using the simple formula represented below:

    Accuracy (%) = (tests_succeeded / total_tests) * 100

    where tests_succeeded indicates the number of tests that successfully detected the correct currency image and total_tests indicates the total number of tests conducted on the currency images.
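    As a small illustration (the helper name is ours, not from the paper), the formula can be evaluated for a single denomination:

    ```python
    def accuracy(tests_succeeded, total_tests):
        """Accuracy (%) = (tests_succeeded / total_tests) * 100."""
        return 100.0 * tests_succeeded / total_tests

    # Example with the Rs 10 row of Table 1: 32 of 35 tests succeeded.
    per_note = accuracy(32, 35)  # about 91.4, reported as 91 in the table
    ```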

    Table 1 presents the accuracy of the proposed system on each currency note. In total, 246 currency notes were taken up for comparison, and the accuracy is determined from the resulting successes and failures. Overall, the proposed system achieves an accuracy of 93.71% with a processing time of 0.73 seconds.

    After recognition, the system also counts the number of notes scanned with the camera, sums up the total amount, and announces the amount in the form of voice through the deployed speaker.

    Currency | Tests conducted | Tests succeeded | Tests failed | Accuracy (%)
    ---------|-----------------|-----------------|--------------|-------------
    10       | 35              | 32              | 3            | 91
    20       | 37              | 33              | 4            | 89
    50       | 35              | 32              | 3            | 91
    100      | 35              | 35              | 0            | 100
    200      | 34              | 31              | 3            | 91
    500      | 35              | 35              | 0            | 100
    2000     | 35              | 33              | 2            | 94
    Total    | 246             | 231             | 15           | 93.71

    TABLE I. Accuracy of the proposed system

  5. CONCLUSION

In this paper, a stick-based currency recognition system has been proposed for the blind and visually impaired using the SIFT algorithm. In addition, it provides the total sum of the notes that are detected, and the output is presented in the form of voice with the help of a speaker mounted on the stick. The SIFT algorithm is used to obtain the binary descriptors stored in the database, and these descriptors are finally matched using the Hamming distance within the KNN algorithm. The evaluation results show that the proposed system achieves a very good accuracy rate with good processing time. However, it has a limitation in differentiating fake currency notes after acquiring the results, even with complete analysis considering different parameters or dimensions in the project. In future work we will try to deploy techniques for detecting counterfeit notes and then display the results.

REFERENCES

  1. Zhu et al., "Copy-move forgery detection based on scaled ORB", Multimedia Tools and Applications, Springer, Vol. 75, No. 6, pp. 3221-3233, 2016.

  2. Navya Krishna et al., "Recognition of Fake Currency Note using Convolutional Neural Network", International Journal of Innovative Technology and Exploring Engineering (IJITEE), Vol. 8, Issue 5, March 2019.

  3. Ahmed Yousry et al., "Currency Recognition System for Blind People using ORB Algorithm", International Arab Journal of e-Technology, Vol. 5, January 2018.

  4. Snehal Saraf, Vrushali Sindhikar et al., "Currency Recognition System for Visually Impaired", IJARIIE, Vol. 3, Issue 2, 2017.

  5. Akash Gupta, Tarun Sharma, "The Proposed Currency Detection Technique based on SIFT Algorithm and Bayesian Classifier: A Review", IJSTE International Journal of Science Technology & Engineering, Vol. 4, Issue 2, August 2017.

  6. Norouzi et al., "Hamming Distance Metric Learning", in Advances in Neural Information Processing Systems.

  7. Semary et al., "Currency Recognition System for Visually Impaired: Egyptian Banknote as a Case Study", IEEE International Conference on Information and Communication Technology and Accessibility, 2015.

  8. Farid et al., "Recognition of Mexican Banknotes via their Color and Texture Features", Expert Systems with Applications, Vol. 39, Issue 10.
