Virtual Eye for Blind using IOT

DOI : 10.17577/IJERTCONV8IS11027


Niveditha K
Computer Science & Engg, VKIT, Bengaluru

Pooja B
Computer Science & Engg, VKIT, Bengaluru

Kavya P D
Computer Science & Engg, VKIT, Bengaluru

Nivedha P
Computer Science & Engg, VKIT, Bengaluru

Lakshmikantha G C
Asst. Professor, Dept. of CSE, VKIT, Bengaluru

Abstract – In this paper we present a smart-stick assistive navigation system to help blind and visually impaired people with outdoor and indoor travel. Blindness is one of the most feared afflictions in this evolving world. It is very difficult for blind people to travel a desired distance or to find a desired object in front of them, and they often have to rely on others to get their work done. Our system addresses this problem by providing a virtual eye in the form of a smart stick, so that blind people can lead their own lives without depending on other people. The smart stick carries a camera and a Raspberry Pi that detect any object standing as an obstacle in front of the blind person and announce it through earphones worn by the user. In addition to the speech warning, a sensor placed at the bottom of the stick helps avoid puddles. Object detection is achieved using the YOLO algorithm and the Darkflow framework. The goal is to develop a system that guides visually impaired individuals to their desired destinations and enables them to live an independent life.

Keywords: obstacle detection, earphones, smart stick, visually impaired people.

  1. INTRODUCTION

    According to the World Health Organization (WHO) [1], there are over 1.3 billion people who are visually impaired across the globe, of which more than 36 million are blind. India, which has the second largest population in the world, contributes 30% of the overall blind population. Although enough campaigns are being conducted to treat these people, it has been difficult to source all the requirements. This is the era of artificial intelligence, which has gained immense traction due to the large amounts of data and ease of computation now available. Using artificial intelligence, it is possible to make these people's lives much easier. The goal is to provide a secondary sight until they have the resources required to treat them. People with untreatable blindness can use this to make their everyday tasks easier and simpler.

    Vision is a beautiful gift: it enables individuals to see and comprehend the surrounding scene. To date, blind individuals struggle a great deal to carry on with their lives. In the presented work, a simple, inexpensive, user-friendly virtual eye is designed and implemented to enhance the mobility of both blind and visually impaired people in a particular region. This project helps blind people map their world using the sense of hearing. It is a vision-based project consisting of a few main components, a camera, a Raspberry Pi, and earphones, mounted together and interlinked through Internet technologies. The input to the project is an image/video (multiple frames) captured and analyzed by the camera interfaced to the Raspberry Pi/IoT platform. The detected object is then conveyed as audio information to the blind person through earphones. This system thus takes an approach to making life better for blind people: it is equipped with the latest technology and is meant to help the visually impaired live a life without constraints. Visual deficiency is a state of lacking visual perception due to physiological or neurological factors, and visual impairment may cause people difficulties with normal daily activities. Partial visual impairment represents a lack of development in the optic nerve or the visual center of the eye. According to recent estimates, 253 million people live with vision impairment: 36 million are blind and 217 million have moderate to severe vision impairment. The loss of sight causes enormous suffering for the affected individuals and their families. Vision allows human beings to view the surrounding world.

  2. METHODOLOGY

    In order to help visually challenged people walk more confidently, this study proposes a smart walking stick that alerts visually impaired people to obstacles, pits, and water in front of them, helping them walk with fewer accidents. It outlines a better navigational tool for the visually impaired: a simple walking stick equipped with sensors that give information about the environment. GPS technology is integrated with pre-programmed locations to determine the optimal route to be taken. The user can choose a location from the set of destinations stored in memory, and the stick will lead them in the correct direction. The system uses ultrasonic sensors, headphones, a PIC controller, and a battery.

    The overall aim of the device is to provide a convenient and safe method for the blind to overcome their difficulties in daily life.

    Fig. 1: System Architecture

    1. Device

      The USB Logitech camera is attached to the Raspberry Pi; its USB port is used exclusively for transferring pixel data. The ultrasonic sensor HC-SR04 emits a 40000 Hz ultrasonic wave which is used to calculate the distance.

    2. YOLO and Darkflow

      You Only Look Once (YOLO) is an object detection model that processes an entire image in a single pass. Classic object classifiers slide a small window across the image and run the classifier at each step to predict what is in the current window. This approach is very slow, since the classifier has to run many times to obtain a confident result. YOLO instead divides the image into a grid of 13×13 cells, so it looks at the image only once and is therefore much faster. Each grid cell predicts bounding boxes and the confidence of those bounding boxes. The confidence represents how certain the model is that the box contains an object; if there is no object, the confidence should be zero. An intersection over union (IoU) is also computed between the predicted box and the ground truth when drawing the bounding box.

      As described in the original paper, each bounding box has 5 predictions: x, y, w, h, and confidence. The (x, y) coordinates represent the center of the box relative to the bounds of the grid cell, while the width and height are predicted relative to the whole image. The confidence prediction represents the IoU between the predicted box and any ground truth box. For every cell, the classifier predicts 5 bounding boxes and what is present in each. YOLO outputs a confidence score that indicates how certain it is about its prediction; the predicted bounding box encloses the object it has classified, and the higher the confidence score, the thicker the box is drawn. Every bounding box represents a class or label. Since there are 13×13 = 169 grid cells and each cell predicts 5 bounding boxes, this gives 845 bounding boxes in total. Most of these boxes have very low confidence scores, so only boxes whose final score is 55% or more are retained. Depending on the needs, this confidence threshold can be raised or lowered.
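      A minimal sketch of the scoring just described, assuming each prediction is an (x, y, w, h, confidence) tuple in image coordinates; the function names and the 0.55 threshold mirror the text above, everything else is illustrative rather than the paper's actual code.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x, y, w, h) boxes, where (x, y) is the center."""
    # Convert center/size form to corner coordinates.
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    # Width and height of the intersection rectangle (zero if disjoint).
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def filter_boxes(predictions, threshold=0.55):
    """Keep only the boxes whose confidence score meets the 55% threshold."""
    return [p for p in predictions if p[4] >= threshold]
```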

      From the paper, the architecture of YOLO can be retrieved: it is a convolutional neural network. The initial convolutional layers of the network extract features from the image, while the fully connected layers predict the output probabilities and coordinates. YOLO uses 24 convolutional layers followed by 2 fully connected layers, and the final output of the model is a 7×7×30 tensor of predictions. The PASCAL VOC data set is used to train this model. There is an implementation of YOLO in C/C++ called Darknet, with pre-trained weights and cfg files that can be used to detect objects. But to make the implementation more efficient on the Raspberry Pi, the TensorFlow implementation of Darknet, called Darkflow, is used.
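      A minimal sketch of how Darkflow can be invoked on the Raspberry Pi, assuming the pre-trained cfg and weights files mentioned above are available locally; the paths here are placeholders.

```python
import cv2
from darkflow.net.build import TFNet  # Darkflow: TensorFlow port of Darknet

# Paths to the pre-trained configuration and weights are placeholders.
options = {
    "model": "cfg/yolo.cfg",       # network definition
    "load": "bin/yolo.weights",    # pre-trained PASCAL VOC weights
    "threshold": 0.55,             # confidence threshold from the text above
}
tfnet = TFNet(options)

frame = cv2.imread("capture.jpg")          # frame grabbed from the USB camera
predictions = tfnet.return_predict(frame)  # list of dicts: label/confidence/box
for p in predictions:
    print(p["label"], p["confidence"], p["topleft"], p["bottomright"])
```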

      The images are passed to this detection framework, and the output contains the 5 predictions discussed above. Darkflow outputs the image with bounding boxes drawn, or a JSON file. This JSON file is converted into a text file that keeps a count of the detected objects and omits the rest. The objects along with their counts are then fed into the text-to-speech unit eSpeak.
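      A minimal sketch of this conversion step, assuming Darkflow's JSON output (a list of detections, each with a "label" field) and the espeak command-line tool as the text-to-speech unit; the file names are placeholders.

```python
import json
import subprocess
from collections import Counter

# Darkflow writes one JSON file per image when JSON output is enabled.
with open("out/capture.json") as f:
    detections = json.load(f)

# Count how many times each label occurs, e.g. {"person": 2, "chair": 1}.
counts = Counter(d["label"] for d in detections)

# Write the counts to a text file, one "label count" pair per line.
with open("objects.txt", "w") as f:
    for label, count in counts.items():
        f.write(f"{label} {count}\n")

# Speak each line through the earphones using the espeak CLI.
with open("objects.txt") as f:
    for line in f:
        label, count = line.split()
        subprocess.run(["espeak", f"{count} {label}"])
```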

    3. Distance

      The ultrasonic sensor HC-SR04 is used to calculate the distance to the nearest object. The sensor has 4 pins: VCC, Ground, Echo, and Trigger. VCC is connected to Pin 4, Ground to Pin 6, Echo to Pin 18, and Trigger to Pin 16. It emits a 40000 Hz ultrasound wave which bounces back if there is an object in its path. From the time between emitting the wave and receiving the echo, the distance between the user and the nearest object can be calculated very accurately and relayed to the user. The sensor can detect objects in the range of 2 cm to 400 cm. The distance is calculated by the formula given below.

      Distance (cm) = 17150 × Time (s)

      Here 17150 is half the speed of sound in air (approximately 34300 cm/s); the echo time is halved because the wave travels to the obstacle and back.
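      A minimal measurement sketch using the RPi.GPIO library with the board pins listed above (Trigger on Pin 16, Echo on Pin 18); the timing recipe is the standard HC-SR04 sequence, not code from the paper.

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 16, 18  # physical board pins from the wiring described above

GPIO.setmode(GPIO.BOARD)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # A 10 microsecond pulse on Trigger makes the sensor emit its 40 kHz burst.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    # Echo stays high for as long as the sensor waits for the reflected wave.
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()

    # Distance = 17150 * elapsed time (half the speed of sound in cm/s).
    return (end - start) * 17150

try:
    while True:
        print(f"{distance_cm():.1f} cm")
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```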

    4. Workflow of the Device

      To change the output of the object classifier from JSON to text, a few lines in predict.py in Darkflow are changed so that it outputs a text file with labels and counts. To count the detected objects, a dictionary stores them as key-value pairs, where the label name is the key and the count is the value. This dictionary is then written to a text file, which the text-to-speech unit reads line by line, each line containing one key-value pair. To capture images and predict continuously, the prediction runs in a loop until an escape sequence is invoked; once invoked, the program exits the loop and stops.
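      A minimal sketch of this continuous loop, assuming the camera is read with OpenCV and the predict/count/speak steps sketched earlier are wrapped in one helper; the loop ends when the escape sequence (here Ctrl+C) is invoked.

```python
import cv2

def describe(frame):
    """Placeholder for the predict -> count -> speak pipeline sketched earlier."""
    ...

camera = cv2.VideoCapture(0)  # the USB Logitech camera
try:
    while True:               # run until the escape sequence is invoked
        ok, frame = camera.read()
        if not ok:
            continue          # skip dropped frames
        describe(frame)
except KeyboardInterrupt:     # Ctrl+C breaks the loop and stops the program
    pass
finally:
    camera.release()
```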

  3. DESIGN

    Basic Block Diagram

    The proposed system consists of three main units:

      • Ultrasonic Sensor and Flex Sensor unit.

      • GSM Module unit.

      • Image to Text and Text to Speech unit.

    The smart stick for the blind built around the Raspberry Pi is easy to understand and maintain.

    Fig. 2: Block diagram of the blind stick

    The system consists of ultrasonic sensors and a GPS module, and feedback is given as audio. Voice output works through TTS (text-to-speech). The proposed system detects objects around the user, sends feedback in the form of speech, warning messages via earphone, and also provides navigation to a specific location through GPS. The aim of the overall system is to provide a low-cost, efficient navigation and obstacle detection aid for the blind, which gives a sense of artificial vision by providing information about the static and dynamic objects around them, so that they can walk independently.

    This smart stick is an electronic walking guide with four ultrasonic sensors. Three of these sensors, placed on the sides of the stick, are used for obstacle detection; the fourth, placed below the smart stick, is responsible for pothole detection. These ultrasonic sensors have a range of 2 cm to 250 cm. A camera is used for object identification and text identification. A toggle switch operated by the user enables the different features of the smart stick (see the sketch below); finally, the output of the stick is delivered through an earpiece.
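    A minimal sketch of reading such a toggle switch with RPi.GPIO to select between features; the pin number and the feature names are assumptions for illustration only.

```python
import RPi.GPIO as GPIO

SWITCH_PIN = 22  # hypothetical board pin for the toggle switch

GPIO.setmode(GPIO.BOARD)
GPIO.setup(SWITCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def selected_feature():
    """Map the switch position to one of the stick's features."""
    if GPIO.input(SWITCH_PIN) == GPIO.LOW:  # switch closed to ground
        return "object_identification"
    return "text_identification"
```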

    1. Ultrasonic sensor

      High-frequency sound waves are generated by the ultrasonic sensor, which evaluates the echo received back. The sensor calculates the time interval between sending the signal and receiving the echo to determine the distance to an object. Ultrasonic sensing is similar to infrared in that the wave reflects off a surface of any shape, but ultrasonic offers better range detection than infrared, which is why it is widely accepted in the robotics and automation industry. In our project, the ultrasonic distance measurement module handles the distance measurement between the obstacle and the blind person. The process starts when the user turns on the device: the ultrasonic sensor automatically gives an accurate distance measurement of the obstacle in front of the blind person, and the measured distance is stored on the SD card.

    2. GSM Module

      This module deals with the navigation of the blind person from a particular source to a destination. The phase starts with obstacle detection: first, a voice command reports the measured distance between the obstacle and the blind person; based on that, the navigation route instruction is provided to the blind person by the GPS module via voice command. The navigation route is provided based on latitude and longitude values. These values are stored so that when the current position matches a stored value, the blind person gets a voice command to move left or right.
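      A minimal sketch of this waypoint matching, assuming position fixes arrive as (latitude, longitude) pairs from the GPS module; the stored route, the matching tolerance, and the espeak call are illustrative assumptions.

```python
import subprocess

# Hypothetical stored route: (latitude, longitude, instruction) triples.
ROUTE = [
    (12.9716, 77.5946, "turn left"),
    (12.9720, 77.5950, "turn right"),
]

TOLERANCE = 0.0001  # roughly 11 m in latitude; an assumed matching threshold

def check_waypoints(lat, lon):
    """Speak the stored instruction when the current fix matches a waypoint."""
    for wlat, wlon, instruction in ROUTE:
        if abs(lat - wlat) < TOLERANCE and abs(lon - wlon) < TOLERANCE:
            subprocess.run(["espeak", instruction])
```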

    3. Camera Module

      A Logitech C-series camera is used in this smart blind stick for capturing images, which are used for object identification and text reading. The captured image is processed using digital image processing techniques. This particular camera is chosen because it is ideal for high-definition image capture, with 1080p HD quality and a premium glass lens.

    4. Earphones

      Earphones are used as the output device, giving audio output for all the features of the smart stick: object identification, text identification, and pothole detection. Any type of wired earphones can be used.

    5. Flex sensor

      A flex sensor, or bend sensor, measures the amount of deflection or bending. Usually the sensor is stuck to a surface, and the resistance of the sensor element varies as the surface bends. Since the resistance is directly proportional to the amount of bend, it can be used as a goniometer and is often called a flexible potentiometer.
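      Because the Raspberry Pi's GPIO has no analog inputs (as noted in the GPIO pins section below), a flex sensor is usually read through an external ADC. A minimal sketch assuming an MCP3008 ADC on the SPI bus, with the flex sensor wired as one leg of a voltage divider on channel 0; the ADC choice and wiring are assumptions, not part of the paper.

```python
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # SPI bus 0, chip select 0
spi.max_speed_hz = 1_350_000

def read_flex(channel=0):
    """Read one 10-bit sample from the MCP3008; a higher value means more bend
    (assuming the flex sensor is the upper leg of a voltage divider)."""
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((reply[1] & 3) << 8) | reply[2]

print(read_flex())
```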

      Raspberry Pi Protocol Architecture

      The Raspberry Pi is a low-cost, high-performance computer the size of a credit card that can be plugged into a TV or monitor and used as a regular computer.

      • Its CPU is a 700 MHz single-core ARM1176JZF-S

      • It has 4 USB ports

      • It has a dual-core VideoCore IV multimedia co-processor

      • Its RAM size is 512 MB

      • It has a microSDHC slot for storage

      • The power rating of the Raspberry Pi is 600 mA, i.e. 3.0 W

      • It has 17 GPIO pins plus specific functions

        Fig. 3: Protocol architecture

        This Raspberry Pi works as the computer of the smart walking stick. The Raspberry Pi is a credit-card-sized, low-cost single-board computer. It takes input from the GPIO pins, which can be attached to LEDs, switches, analog signals, and other devices. For our proposed design, we connect the GPIO pins to the ultrasonic sensors.

        It requires a 5 V power source to be operational, and a micro SD memory card must be inserted, which acts as its permanent memory. For our design, a Raspberry Pi 1 Model B+ is used. It contains 4 USB ports, an HDMI port, an audio jack, and an Ethernet port. The Ethernet port lets the device connect to the Internet and install the required driver APIs. It has a 700 MHz single-core processor and supports programming languages such as Python, Java, C, and C++. This mini computer runs our algorithm, which calculates the distance from the obstacle based on the input it receives from the sensors. A text-to-speech driver API [12] then converts the text message (the distance) to speech, which is relayed to the person wearing the earphone.

        GPIO pins

        The Raspberry Pi board has 17 GPIO pins. These GPIO pins allow direct connection to electronic devices: inputs such as sensors and buttons, or communication with chips and modules using low-level protocols such as SPI and serial UART. The pins use 3.3 V logic levels. No analog input or output is available on these GPIO pins, but external hardware can be used for analog connections. The block diagram above represents the working of the Raspberry Pi: inputs such as the ultrasonic sensors, switch input, and camera are given to the Raspberry Pi board through the GPIO pins.

  4. TESTING AND RESULTS

    1. Testing

      Testing is finding out how well something works. In terms of human beings, testing tells what level of knowledge or skill has been acquired. In computer hardware and software development, testing is used at key checkpoints in the overall process to determine whether objectives are being met. For example, in software development, product objectives are sometimes tested by product user representatives. When the design is complete, coding follows, and the finished code is then tested at the unit or module level by each programmer, at the component level by the group of programmers involved, and at the system level when all components are combined. There are several types of testing; a few of them are listed below:

          • Unit Testing

          • System Testing

      Unit Testing

      Unit testing is a level of software testing where individual units/components of software are tested. The purpose is to validate that each unit of the software performs as designed. A unit is the smallest testable part of any software. It usually has one or a few inputs and usually a single output. In procedural programming, a unit may be an individual program, function, procedure, etc. In object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class, or derived/child class.

      The main device we have used is the Raspberry Pi, on which the respective functions are built and run. We use the Raspberry Pi because of its compatibility with the ARM processor. The image below shows the testing of the components with the Raspberry Pi; an LED gives the indication that the setup is working.

      Fig. 4: Testing the connections with the Raspberry Pi

      Table 1: Unit testing for the components

      System Testing

      System testing ensures that the software still works when put in different environments (e.g., operating systems). It is done with the full system implementation and environment, and falls under the class of black-box testing.

      System testing takes, as its input, all of the integrated components that have passed integration testing. The purpose of integration testing is to detect any inconsistencies between the units that are integrated together (called assemblages). System testing seeks to detect defects both within the "inter-assemblages" and within the system as a whole. The actual result is the behavior produced or observed when a component or system is tested.

      System testing is performed on the entire system in the context of either functional requirement specifications (FRS) or system requirement specification (SRS), or both. System testing tests not only the design, but also the behavior and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software or hardware requirements specification.

      Table 2: System testing for the components

  5. ACKNOWLEDGMENT

The satisfaction and euphoria that accompany the successful completion of any task would be incomplete without the mention of the people who made it possible and under whose constant guidance and encouragement the task was completed. We would like to thank Visvesvaraya Technological University, Belagavi, for having this dissertation as a part of the curriculum, which has given us this wonderful opportunity. Our heartfelt gratitude to Dr. Kumar Kenche Gowda, Principal, Vivekananda Institute of Technology, for his constant support.

We express our sincere regards and thanks to Dr. Vidya A, Professor and Head, Department of Computer Science and Engineering for her valuable guidance, keen interest and encouragement at various stages of our project.

We would like to thank our project guide Prof. Lakshmikantha G C, Assistant Professor, Department of Computer Science and Engineering, for his guidance, constant assistance, support, and constructive suggestions for the betterment of the project presentation.

We are grateful to our project coordinator Prof. Sunanda H G, Assistant Professor, Department of Computer Science and Engineering, for her valuable guidance and support.

In addition, we wish to express our deep sense of gratitude to the non-teaching staff, librarians, friends, and our parents for their support and encouragement.

REFERENCES

  1. World Health Organization (WHO), Visual impairment and blindness [online] http://www.worldhealthorganization.com.

  2. Milios Awad, Jad El Haddad, Edgar Khneisser, Intelligent Eye: A Mobile Application for Assisting Blind People, IEEE Conference vol.15 no.12, 2018.

  3. Jinqiang Bai, Virtual-Blind-Road Following Based Wearable Navigation Device for Blind People; D. Gold, K. Gorden and M. Eizenman, Indoor Navigation and Tracking, IEEE Transaction vol.12 no. 5, 2018.

  4. Samruddhi Deshpande, Real Time Text Detection and Recognition on Hand Held Objects to Assist Blind People, IEEE Conference, 2017.

  5. Abhinav, Hasnat, Low cost e-book reading device for blind people; L. Wang, Y. He, W. Liu, N. Jing, J. Wang, and Y. Liu, On oscillation-free emergency navigation with wireless network, IEEE Transaction, 2018.

  6. Lahav,E.P.Lopes ,Blind Aid: a Learning Environment for Enabling People who are blind to Explore and Navigate through Unknown Real Spaces on oscillation evolution navigation with wireless network , IEEE Transaction , 2017.

  7. Saurav,J.M.Loomis ,B.pressl , Smart Walking Stick for Blind integrated with SOS Navigation System IEEE transaction on Mobile computing, IEEE Transaction vol.16, no. 2, 2018.

  8. Ayat A. Nada, Assistive Infrared Sensor Based Smart Stick for Blind People, IEEE Conference, vol.10 no. 6, 2015.

  9. Filipe Vizado, Joseph, Stillman S, and Essa, Software and hardware solutions for using the keyboards by blind people, IEEE Conference vol.4 no. 2, 2017.

  10. Hanif Fermanda Putra, Development of Eye-Gaze Interface System and Its Application to Virtual Reality Controller, IEEE Conference, vol. 3no. 9, 2018.

  11. Sutagundar, Manvi SS, Development of Eye-Gaze Interface System and Its Application to Virtual Reality Controller, ACM Transaction on interactive intelligence, IEEE Transaction, 2016.

  12. G. Ruman, V. Kanna, P.G. Prasad, My Eyes - Automatic Combination System of Clothing Parts to Blind People: First Insights and Application, IEEE Transaction, 2017.

  13. Shinichi, Yoshihisa Nakatoh, A low cost Micro electro mechanical Braille for blind people to communicate with blind or deaf blind people through subsystem,IEEE Transaction, 2017.

  14. Balint Soveny, Gabor Kovacs, Blind Guide: a Virtual Eye for Guiding Indoor and Outdoor Movement, IEEE International Conference, vol. 14 no. 2, 2014.

  15. Tomoyuki, Kazuyuki, Blind Guide a Virtual Eye for Guiding Indoor and Outdoor Movement CoRR, IEEE Transaction, 2015.

  16. M. Lin, Chen, S. Yan, and A. Yuille, A Colour Compensation Vision System for Colour-blind People, IEEE Transaction, 2018.

  17. Mukesh Prasad Agrawal and Atma Ram Gupta, Smart Stick for the Blind and Visually Impaired People, IEEE Conference, vol.12 no. 5, 2018.

  18. Bonnie Yuk San Tsung, Ming Zhang, Yu Bo Fan, David Alan Boone, Quantitative comparison of plantar foot shapes under different weight-bearing conditions, IEEE Transaction vol. 40 no. 6,2016.

  19. G. Balakrishnan, G. Sainarayanan, R. Nagarajan and S. Yaacob, Wearable real-time stereo vision for the visually impaired, IEEE Conference, vol.14 no. 6, 2017.

  20. Baljeet S. Malhotra, Alex, Energy Efficient On-site Tracking of Mobile Target in Wireless Sensor Networks, IEEE Conference, 2017.

  21. Silveira and H. Serdeira, Implementing Directional TX-Rx of High Modulation QAM Signaling with SDR Testbed, Ubiq. Comp. Elect. and Mob., IEEE Conference vol.6 no. 2, 2017.

  22. A.Shaha, D. H. N. Nguyen, S. Kumar, and N. W. Rowe,Real Time Video Transceiver using SDR test bed with Directional Antennas, Ubiq. Comp. Elect. And Mob. Comm. IEEE Conference, 2017.

  23. S. Chaurasia and K. Kavitha, An electronic walking stick for blinds, in Proc. IEEE Info. Comm. and Emb. Conf., vol.12 no.9, 2014.

  24. P. Shrivastava, P. Anand, A. Singh, and V. Sagar Medico stick: An ease to blind & deaf, in Proc. IEEE Elec. and Comm. Sys. Conference. IEEE, 2015.

  25. A. Krishnan, G. Deepakraj, N. Nishanth, and K. Anandkumar, Autonomous walking stick for the blind using echolocation and image processing, in Proc. IEEE Contemporary Comp. and Info. Conf. IEEE, 2016.

  26. N. S. Mala, S. S. Thushara, and S. Subbiah, Navigation gadget for visually impaired based on iot, in Proc. IEEE Comp. and Comm. Tech. Conf. IEEE, 2017.

  27. M. Menikdiwela, K. Dharmasena, and A. H. S. Abeykoon, Haptic based walking stick for visually impaired people, in Proc. IEEE Circ. Cont. and Comm. Conf. IEEE, 2017.

  28. Milios Awad, Jad El Haddad, Intelligent Eye: A Mobile Application for Assisting Blind People, IEEE Conference, 2018.

  29. Arnesh Sen, Kaustav Sen, Jayoti Das, Ultrasonic Blind Stick For Completely Blind People To Avoid Any Kind Of Obstacles, IEEE Conference, vol. 12 no. 7, 2018.

  30. Akshay Salil Arora, Blind Aid Stick: Hurdle Recognition, Simulated Perception, Android Integrated Voice Based Cooperation via GPS Along With Panic Alert System, IEEE Conference, 2017.

  31. Sanchez, J. Oyarzun, Mobile Assistance Based on Audio for Blind People Using Bus Services, IEEE Transaction, vol.20 no. 10, 2007.

  32. Mahdi Safaa A., Muhsin Asaad H. and Al-Mosawi Ali, Ultrasonic Sensor for Blind and Deaf Persons Combines Voice, IEEE Transaction, vol.13 no. 2, 2014.

  33. B. Mustapha, Zayegh, Begg , Reliable Wireless Obstacle Detection System for Elderly and Visually Impaired People with Multiple Alarm Units, IEEE Conference, vol. 4 no. 10, 2014.

  34. Enyang Xu, Medaglia C.M ,Target Tracking and Mobile Sensor Navigation in Wireless Sensor Networks, IEEE Transactions, 2013.

  35. Ing. Jarosla, Ceipidor U, The simulation of human gait in Solid Works, Department of production systems and robotics, IEEE Conference vol.2 no.4, 2017.

  36. Alessandro Dionisi , Emilio Sardini, Mauro Serpelloni , Wearable Object Detection System for the Blind, IEEE Conference, 2012.

  37. Sonnenblick, An Indoor Navigation System for Blind Individuals, IEEE Conference vol.10 no.5, 2015.

  38. Rashidah, Akshay Salil Arora, Ultrasonic Blind Stick for Completely Blind People to Avoid Any Kind of Obstacles, IEEE Conference vol. 10 no. 9, 2017.

  39. Hanif, Kohichi, Blind Aid Stick: Hurdle Recognition, Simulated Perception, and Android Integrated Voice Based Cooperation via GPS Along With Panic alert System, IEEE Transaction 2015.

  40. Joseph, Krishnan, Study of vehicle monitoring application with wireless sensor network, IEEE Transaction vol.12 no. 4, 2016.

  41. Zeeshan Saquib, Proceedings of the 32nd Annual International Conference of the IEEE, Transactions vol.12 no. 5, 2010.

  42. Apurv Shaha, Smart Walking Stick for the Visually Impaired People using Low Latency Communication, IEEE Conference, 2018.

  43. Ayat A. Nada, Mahmoud A. Fakhr, Ahmed F. Seddik, Assistive Infrared Sensor Based Smart Stick for Blind People, IEEE Transaction vol.12 no.7, 2015.

  44. Varalakshmi and Mr. S. Kumarakrishnan, Navigation System for Visually Challenged Using Internet of Things, IEEE International Conference vol. 12 no 3, 2019.

  45. R. Lakshmanan and R. Senthilnathan, Depth map based reactive planning to aid in navigation for visually challenged, IEEE International Conference, 2016.

  46. M. Yusro and K.M. Houl, SEES: Concept and Design of a Smart Environment Explorer Stick, IEEE Conference vol. 10 no. 14, 2013.

  47. E. P. Lopes, E. P. Aude, J. T. Silveira, and H. Serdeira, Obstacle avoidance strategy based on adaptive potential fields generated by an electronic stick , IEEE Conference vol. 4 no. 5, 2010.

  48. D. T. Batarseh, T. N. Burcham, and G. M. McFadyen, An ultrasonic ranging system for the blind, in Proc. IEEE Bio. Eng. Conf. IEEE, 2005.

  49. S. Innet and N. Ritnoom, An application of infrared sensors for electronic white stick, in Proc. IEEE Inte. Sig. Proc. and Comm. Sys. Conf. IEEE, 2019.

  50. N. S. Mala, S. S. Thushara, and S. Subbiah, Navigation gadget for visually impaired based on IOT, in Proc. IEEE Comp. and Comm. Tech. Conf. IEEE, 2017.

  51. Parameshachari B D et al., Optimized Neighbor Discovery in Internet of Things (IoT), 2017 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT), pp. 594-598, © 2017 IEEE.
