
Bike Security: Using Image Processing



P. Sivashankari, Dept. of IT (III Year), K. B. Ragavi, Dept. of IT (III Year), Bannari Amman Institute of Technology

ABSTRACT

An image is digitized to convert it to a form that can be stored in a computer's memory or on storage media such as a hard disk or CD-ROM. Digitization can be done by a scanner, or by a video camera connected to a frame-grabber board in a computer. Once the image has been digitized, various image processing operations can be applied to it. These operations fall roughly into three major categories: image compression, image enhancement and restoration, and measurement extraction. Image compression, the most familiar of the three, reduces the amount of memory needed to store a digital image. Here we focus on one application of image processing in security: the hand print. In this paper we present palm print recognition, a novel biometric technology. A palm print is obtained from the inner surface of a hand between the wrist and the top of the fingers, and contains the principal lines, wrinkles and ridges of the palm and fingers. We extend this idea to vehicle security: when anyone other than the owner tries to start the vehicle by placing a palm on the accelerator, that person's hand print and photograph are sent to the owner's mobile through MMS, and a picture of the area where the vehicle is moving is captured every 30 seconds and sent to the owner's mobile. This makes the job of finding the thief very easy.

Introduction

Imaging in the life and materials sciences has become completely digital and this transformation of visual imagery into mathematical constructs has made it commonplace for researchers to utilize computers for their day-to-day image analysis tasks. Along with this change comes the need to fully understand how image data is handled within a computer and how image processing methods can be applied to extract useful measurements and deeper understanding of image-based data.

A technique in which the data from an image are digitized and various mathematical operations are applied to the data, generally with a digital computer, in order to create an enhanced image that is more useful or pleasing to a human observer, or to perform some of the interpretation and recognition tasks usually performed by humans. Also known as picture processing.

In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or a frame of video; the output of image processing can be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.

  1. Digital image processing

    It focuses on two major tasks.

    1. Improvement of pictorial information for human interpretation.

    2. Processing of image data for storage, transmission, and representation for autonomous machine perception.

      A digital image a[m,n], described in a 2D discrete space, is derived from an analog image a(x,y) in a 2D continuous space through a sampling process that is frequently referred to as digitization. For now we will look at some basic definitions associated with the digital image. The effect of digitization is shown in Figure 1.

      The 2D continuous image a(x,y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel. The value assigned to the integer coordinates [m,n], with m = 0,1,2,…,M-1 and n = 0,1,2,…,N-1, is a[m,n]. In fact, in most cases a(x,y), which we might consider to be the physical signal that impinges on the face of a 2D sensor, is actually a function of many variables including depth (z), color (λ), and time (t).
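The sampling step above can be sketched in a few lines. This is an illustrative sketch, not part of the original paper: the continuous image is modeled as a Python function a(x, y), and each pixel of a[m,n] takes the value sampled at that pixel's centre.

```python
import math

def digitize(a, M, N, width, height):
    """Sample a continuous image a(x, y) onto an N-row by M-column grid.

    a      -- function of (x, y) giving brightness
    width  -- extent of the scene along x, split into M columns
    height -- extent of the scene along y, split into N rows
    """
    image = []
    for n in range(N):                     # row index
        row = []
        for m in range(M):                 # column index
            # sample at the centre of pixel [m, n]
            x = (m + 0.5) * width / M
            y = (n + 0.5) * height / N
            row.append(a(x, y))
        image.append(row)
    return image

# Example: a smooth continuous brightness pattern in [0, 1]
a = lambda x, y: 0.5 + 0.5 * math.sin(x) * math.cos(y)
img = digitize(a, M=16, N=16, width=2 * math.pi, height=2 * math.pi)
```

With M = N = 16 this produces exactly the 16 x 16 grid discussed in the quantization section that follows.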

        1. Quantization

          The image shown in Figure 1 has been divided into N = 16 rows and M = 16 columns. The value assigned to every pixel is the average brightness in the pixel rounded to the nearest integer value. The process of representing the amplitude of the 2D signal at a given coordinate as an integer value with L different gray levels is usually referred to as amplitude quantization, or simply quantization.

          Figure 1: Digitization of a continuous image. The pixel at coordinates [m=10, n=3] has the integer brightness value 110.
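A minimal sketch of uniform amplitude quantization, assuming brightness values normalized to [0, 1]; the function and its parameters are illustrative, not part of the original system.

```python
def quantize(value, L):
    """Map a brightness in [0, 1] to one of L integer gray levels 0..L-1."""
    level = int(value * L)        # uniform quantization
    return min(level, L - 1)      # clamp value == 1.0 into the top level

# With L = 256 gray levels (B = 8 bits), full brightness maps to level 255
assert quantize(0.0, 256) == 0
assert quantize(1.0, 256) == 255
```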

        2. Common Values

      There are standard values for the various parameters encountered in digital image processing. These values arise from video standards, from algorithmic requirements, or from the desire to keep digital circuitry simple. Table 1 gives some commonly encountered values. Quite frequently we see cases of M = N = 2^K where K = 8, 9, 10. This can be motivated by digital circuitry or by the use of certain algorithms such as the (fast) Fourier transform.

      Table 1: Common values of digital image parameters

      Parameter     Symbol   Typical values
      Rows          N        256, 512, 525, 625, 1024, 1035
      Columns       M        256, 512, 768, 1024, 1320
      Gray Levels   L        2, 64, 256, 1024, 4096, 16384

      The number of distinct gray levels is usually a power of 2, that is, L = 2^B where B is the number of bits in the binary representation of the brightness levels. When B > 1 we speak of a gray-level image; when B = 1 we speak of a binary image. In a binary image there are just two gray levels, which can be referred to, for example, as "black" and "white" or "0" and "1".
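The relation L = 2^B, and the reduction of a gray-level image (B > 1) to a binary one (B = 1), can be illustrated with a simple global threshold; the threshold value here is an arbitrary example, not one prescribed by the text.

```python
def threshold(image, t):
    """Convert a gray-level image to a binary image: pixels at or above
    threshold t become 1 ("white"), all others become 0 ("black")."""
    return [[1 if px >= t else 0 for px in row] for row in image]

# L = 2**B distinct gray levels; B = 8 gives the common 256-level image
B = 8
L = 2 ** B

binary = threshold([[12, 200], [128, 90]], t=128)
# binary == [[0, 1], [1, 0]]
```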

  2. Characteristics of Image Operations

    • Types of operations

    • Types of neighborhoods

    There is a variety of ways to classify and characterize image operations. The reason for doing so is to understand what type of results we might expect to achieve with a given type of operation or what might be the computational burden associated with a given operation.

      1. Types of operations

        The types of operations that can be applied to digital images to transform an input image a[m,n] into an output image b[m,n] (or another representation) can be classified into three categories:

        • Point: the output value at a specific coordinate depends only on the input value at that same coordinate.

        • Local: the output value at a specific coordinate depends on the input values in the neighborhood of that same coordinate.

        • Global: the output value at a specific coordinate depends on all the values in the input image.
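The three categories can be illustrated with toy operations on a nested-list image; the particular operations chosen (gray-level inversion, 3x3 mean, mean subtraction) are illustrative examples, not operations prescribed by the text.

```python
def point_negate(a, L=256):
    """Point operation: each output pixel depends only on the input pixel
    at the same coordinate (here, gray-level inversion)."""
    return [[L - 1 - px for px in row] for row in a]

def local_mean(a):
    """Local operation: each output pixel is the mean of the 3x3
    neighborhood around the same coordinate (borders kept as-is)."""
    N, M = len(a), len(a[0])
    b = [row[:] for row in a]
    for n in range(1, N - 1):
        for m in range(1, M - 1):
            s = sum(a[n + dn][m + dm]
                    for dn in (-1, 0, 1) for dm in (-1, 0, 1))
            b[n][m] = s / 9
    return b

def global_mean_shift(a):
    """Global operation: every output pixel depends on all input pixels
    (here, subtracting the image-wide mean brightness)."""
    mean = sum(sum(row) for row in a) / (len(a) * len(a[0]))
    return [[px - mean for px in row] for row in a]
```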

      2. Types of neighborhood

    Neighborhood operations play a key role in modern digital image processing. It is therefore important to understand how images can be sampled and how that relates to the various neighborhoods that can be used to process an image.

    • Rectangular sampling: in most cases, images are sampled by laying a rectangular grid over the image, as illustrated in Figure 1. This results in the type of sampling shown in Figures 3a and 3b.

    • Hexagonal sampling: an alternative sampling scheme, shown in Figure 3c.

      Figure 3: (a) rectangular sampling, 4-connected; (b) rectangular sampling, 8-connected; (c) hexagonal sampling, 6-connected.

    Both sampling schemes have been studied extensively, and both represent a possible periodic tiling of the continuous image space. We will restrict our attention, however, to rectangular sampling, as it remains, due to hardware and software considerations, the method of choice.

    Local operations produce an output pixel value b[m=m0, n=n0] based upon the pixel values in the neighborhood of a[m=m0, n=n0]. Some of the most common neighborhoods are the 4-connected and 8-connected neighborhoods in the case of rectangular sampling, and the 6-connected neighborhood in the case of hexagonal sampling, as illustrated in Figure 3.
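The neighborhood conventions for rectangular sampling can be written down directly as coordinate offsets (hexagonal sampling is omitted here, since the text restricts attention to rectangular grids).

```python
# Offsets (dn, dm) of the pixels adjacent to [n, m] under each
# neighborhood convention for rectangular sampling.
N4 = [(-1, 0), (0, -1), (0, 1), (1, 0)]            # 4-connected
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]     # 8-connected

def neighbors(n, m, offsets, N, M):
    """Coordinates of the valid neighbors of pixel [n, m] in an N x M image."""
    return [(n + dn, m + dm) for dn, dm in offsets
            if 0 <= n + dn < N and 0 <= m + dm < M]

# A corner pixel keeps only 2 of its 4-connected neighbors inside the image
# neighbors(0, 0, N4, 16, 16) → [(0, 1), (1, 0)]
```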

  3. Requirements of our system

      1. Concept

        The concept we use here is palm identification, which, just like fingerprint identification, is based on the aggregate of information presented in a friction ridge impression. This information includes the flow of the friction ridges, the presence or absence of features along the individual friction ridge paths and their sequences, and the intricate detail of a single ridge. To understand this recognition concept, one must first understand the physiology of the ridges and valleys of a fingerprint or palm. When recorded, a fingerprint or palm print appears as a series of dark lines representing the high, peaking portion of the friction-ridged skin, while the valleys between these ridges appear as white space, the low, shallow portion of the friction-ridged skin.

        Figure: palm print

      2. Palm Print Recognition

        Palm recognition technology exploits some of these palm features. Friction ridges do not always flow continuously throughout a pattern and often result in specific characteristics such as ending ridges, dividing ridges, and dots. A palm recognition system is designed to interpret the flow of the overall ridges to assign a classification, and then to extract the minutiae detail: a subset of the total amount of information available, yet enough information to effectively search a large repository of palm prints. Minutiae are limited to the location, direction, and orientation of the ridge endings and splits along a ridge path. The images present a pictorial representation of the regions of the palm, two types of minutiae, and examples of other detailed characteristics used during the automatic classification and minutiae extraction processes.

      3. Hardware

        A variety of sensor types (capacitive, optical, ultrasound, and thermal) can be used to collect the digital image of a palm surface; however, traditional live-scan methodologies have been slow to adapt to the larger capture areas required for digitizing palm prints. Challenges for sensors attempting to attain high-resolution palm images are still being dealt with today. One of the most common approaches, which employs the capacitive sensor, determines each pixel value based on the capacitance measured, made possible because an area of air (valley) has significantly less capacitance than an area of palm (ridge). Other palm sensors capture images by employing high-frequency ultrasound or optical devices that use prisms to detect the change in light reflectance related to the palm. Thermal scanners require a swipe of a palm across a surface to measure the difference in temperature over time to create a digital image. Capacitive, optical, and ultrasound sensors require only placement of a palm.

      4. Software

    Some palm recognition systems scan the entire palm, while others require the palm to be segmented into smaller areas to optimize performance. Reliability within either a fingerprint or palm print system can be greatly improved by searching smaller data sets. While fingerprint systems often partition repositories based upon finger number or pattern classification, palm systems partition their repositories based upon the location of a friction ridge area. Latent examiners are very skilled in recognizing the portion of the hand from which a piece of evidence or latent lift has been acquired. Searching only this region of a palm repository, rather than the entire database, maximizes the reliability of a latent palm search.

    As with fingerprints, the three main categories of palm matching techniques are minutiae-based matching, correlation-based matching, and ridge-based matching. Minutiae-based matching, the most widely used technique, relies on the minutiae points described above, specifically the location, direction, and orientation of each point. Correlation-based matching involves simply lining up the palm images and subtracting them to determine whether the ridges in the two palm images correspond. Ridge-based matching uses ridge pattern landmark features such as sweat pores, spatial attributes, and geometric characteristics of the ridges, and/or local texture analysis, all of which are alternatives to minutiae extraction. It is a faster method of matching and overcomes some of the difficulties associated with extracting minutiae from poor-quality images. The advantages and disadvantages of each approach vary based on the algorithm used and the sensor implemented.

    Minutiae-based matching typically attains higher recognition accuracy, although it performs poorly with low-quality images and does not take advantage of textural or visual features of the palm. Processing using minutiae-based techniques may also be time-consuming because of the time associated with minutiae extraction. Correlation-based matching is often quicker to process but is less tolerant of elastic, rotational, and translational variances and of noise within the image. Some ridge-based matching characteristics are unstable or require a high-resolution sensor to obtain quality images, and the distinctiveness of ridge-based characteristics is significantly lower than that of minutiae characteristics.
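Correlation-based matching, as described above, amounts to aligning the two palm images and subtracting them. Here is a toy sketch of that idea, assuming the images are already aligned and equally sized; the tolerance value is an arbitrary illustrative choice, not one from the text.

```python
def correlation_score(a, b):
    """Correlation-based matching sketch: subtract two aligned,
    equally-sized palm images pixel by pixel and report the mean
    absolute difference (0 means the ridge patterns coincide exactly)."""
    diff = sum(abs(pa - pb)
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb))
    return diff / (len(a) * len(a[0]))

def is_match(a, b, tolerance=10.0):
    """Accept the pair if the images agree within the assumed tolerance."""
    return correlation_score(a, b) <= tolerance
```

Real systems must also handle the elastic, rotational, and translational variances noted above, which is exactly where plain subtraction struggles.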

  4. Implementation

Here is the main part of the paper: we apply palm print recognition in a new way, as a security system for starting a bike.

A bike fitted with this security system has the palm recognition sensor built into the accelerator. The palm prints of the owner, and of anyone else who uses the bike frequently, are captured in advance and saved in a database. Whenever an authorized person places a hand on the accelerator, the bike starts automatically. With this system, an owner who has parked the bike and gone to work can feel secure.

    1. How it works

      The implementation proceeds in the following steps:

      1. The authorized person saves his palm print, at different angles and positions, in the memory.

      2. Whenever a palm print matches the database, the bike starts automatically.

      3. The bike does not start when an unauthorized person tries to start it.

      4. Two cameras are provided: one at the front of the bike and one facing the driver (their use is discussed below).

        We discuss one of the cases here. The bike has both a key port and the palm recognition system. Suppose the owner loses the key, and another person takes it and starts the bike; the bike starts. But when he places his palm on the accelerator, the system scans it, and if the print does not match the database, then after 10-15 seconds the camera captures the driver's image and an image of the area where he is driving, and sends both to the owner's mobile number through an MMS unit attached to the bike. Every 30 seconds a picture of the area is sent to the owner's number, so the thief can be tracked within hours. This also works if someone tries to start the bike using a duplicate key.

        In another case, someone may need to borrow the bike urgently and should also be able to start it, so a key start is provided as well. If he starts driving the bike, his image is sent to the owner automatically, but this is not always wanted; so there should be an on/off switch for the MMS system, protected by a password.
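The decision flow described in the steps and cases above can be sketched as follows. All names here (the enrolled-print set, the schedule helper) are hypothetical; the real hardware hooks for the cameras and the MMS unit are omitted.

```python
def on_palm_placed(palm_id, enrolled_prints):
    """Decide what the bike does when a palm rests on the accelerator."""
    if palm_id in enrolled_prints:
        return "start"     # print matches the database: engine starts
    return "track"         # no match: begin the MMS tracking sequence

def tracking_schedule(duration, grace=15, interval=30):
    """Seconds (from palm placement) at which pictures go to the owner:
    the first after the 10-15 s grace period, then one every 30 s."""
    return list(range(grace, duration, interval))

# An authorized rider starts the bike; anyone else is tracked
assert on_palm_placed("owner", {"owner", "friend"}) == "start"
assert on_palm_placed("thief", {"owner", "friend"}) == "track"
# Pictures at 15 s, 45 s, 75 s, 105 s within the first two minutes
assert tracking_schedule(120) == [15, 45, 75, 105]
```

The password-guarded MMS on/off switch from the key-start case would simply bypass the "track" branch when disabled.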

    2. Advantages

      This method offers the following advantages.

      1. Only the authorized person can use the bike.

      2. The only way the bike can be stolen is if we lose the key and the thief gets it; even then, he can easily be tracked by the police or by ourselves.

      3. Since it relies on palm print recognition, a very high level of security can be expected.

Conclusion

This security system is cost-effective: free software is available for palm recognition, and the only cost is for the hardware, which is not very expensive. The scanned palm print should be free of noise for a perfect scan, so a noise reduction technique should be applied before matching. For this technique to succeed, the MMS facility in our country should be improved. Thus palm print recognition, a branch of image processing, is used in our implementation to provide maximum security for bikes.

