Channelizing Audio Functions in a Smart Mobile by Concocting Project Soli Technology

DOI: 10.17577/IJERTCONV4IS34027




Ramachandra Rao Kurada

Asst. Prof., Dept. of CSE

Shri Vishnu Engineering College for Women, Bhimavaram.

Vijaya Srikrishna D

Dept. of CSE

Shri Vishnu Engineering College for Women, Bhimavaram.

Swathi B

Dept. of CSE

Shri Vishnu Engineering College for Women, Bhimavaram.

Abstract: Project Soli, developed at Google ATAP, is a novel application that fits in the Internet of Things (IoT) domain. It uses radar to enable new types of touchless interactions to control devices such as computers and smart phones. IoT is at a stage where disparate networks and a multitude of sensors must come together and inter-operate under a common set of standards. In this paper, we concoct the methodology of Project Soli on a smart mobile phone and control its audio by adjusting basic sound functions, such as tuning the volume from high to low and vice versa, and performing mute or volume-restore operations with specific human gestures. This application uses a smart portable radar sensor to recognize human gestures. The intent of this work is to realize the concocted Project Soli application in a cost- and energy-efficient way, so that a common person can use it and benefit from the incredible developments of IoT.

Keywords: Project Soli; Internet of Things; Mobile phone; Arduino board; Gesture sensor.

  1. INTRODUCTION

    The concept of combining computers, sensors, and networks to monitor and control devices has existed for decades. The recent confluence of several technology market trends, however, is bringing the IoT closer to widespread reality [1-2]. IoT refers to the ever-growing network of physical objects that feature an IP address for Internet connectivity, and the communication that occurs between these objects and other Internet-enabled devices and systems. It is an environment in which objects, animals or people are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. IoT implementations use different technical communications models, each with its own characteristics [3-4].

    The Internet Architecture Board describes four common communication models for IoT: Device-to-Device, Device-to-Cloud, Device-to-Gateway, and Back-End Data-Sharing. In this paper we confine ourselves to the first model, Device-to-Device. The device-to-device communication model represents two or more devices that connect and communicate directly with one another, rather than through an intermediary application server. These devices can communicate over many types of networks, including IP networks or the Internet. Often, however, such devices use protocols like Bluetooth or ZigBee to establish direct device-to-device communication [8-10].

    In this work, hand gestures are the acquired data, and these gestures are processed into information. This information is best understood by identifying trends and patterns in the data, which together form the knowledge used to control the components. We address a real-world application by concocting Project Soli technology in a smart phone. The objective of this application is to reduce the effort of operating electrical gadgets by bringing them under the control of hand gestures, making their use easier, and thereby promising to transform many aspects of the way we live.

    To meet energy efficiency, this application relies on ubiquitous connectivity: low-cost, high-speed, pervasive network connectivity makes almost everything connectable. This work also takes advantage of manufacturing advances that allow cutting-edge computing and communications technology to be incorporated into very small objects, with small and inexpensive sensor devices, to drive such IoT applications [5-6].

    This paper is organized as follows: Section 1 gives an introduction to IoT and the approach of this work in that direction. Section 2 describes the methodology and architecture of the proposed work and the components used to channelize the audio functions with Project Soli technology. Section 3 presents the design issues and the prototype used in concocting this work, and Section 4 concludes.

  2. CONCOCTING PROJECT SOLI TECHNOLOGY INTO A SMART MOBILE TO CHANNELIZE ITS AUDIO FUNCTIONS

    1. Methodology

      To concoct Project Soli technology on a mobile phone for controlling its sound operations in a cost- and energy-effective way, we allied a SparkFun RGB gesture sensor (APDS-9960) and an Arduino board to the smart phone. Both are hardware components, and a program is loaded onto the board to run this chip. Soli technology uses miniature radar to detect touchless gesture interactions; these interactions are more approachable and feel natural when capturing human hand motion. Electromagnetic waves are emitted by the gesture sensor in a broad beam. The gesture sensor captures properties of the reflected signals, such as time delay and frequency, which encode characteristics of the entity such as shape, size, orientation and distance, and it is adaptable to low-bandwidth channels.

      The RGB gesture sensor provides ambient light sensing, RGB color sensing, proximity sensing, and touchless gesture recognition at distances up to 8" (20 cm) with a simple swipe of the hand. It is a small breakout board built around the APDS-9960 sensor, which uses four directional photodiodes to detect IR energy reflected from the built-in IR LED. The chip's gesture engine converts the photodiode signals into physical motion information (e.g. velocity, direction, distance) and interprets these values as up, down, left, right, near and far gestures. The overall arrangement of concocting Project Soli technology onto a smart phone is shown in Fig. 1.
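      As an illustration of how this gesture engine can be read from the Arduino side, the short sketch below polls the sensor and prints the decoded direction. It is a minimal sketch that assumes SparkFun's publicly available SparkFun_APDS9960 Arduino library (its class name, init/enable calls and DIR_* constants); it is not reproduced from the original work.

// Minimal sketch (assumed SparkFun_APDS9960 library) that polls the gesture engine
// and prints the decoded direction over the serial port.
#include <Wire.h>
#include <SparkFun_APDS9960.h>

SparkFun_APDS9960 apds;            // driver object for the breakout board

void setup() {
  Serial.begin(9600);
  if (!apds.init()) {
    Serial.println("APDS-9960 init failed");
  }
  apds.enableGestureSensor(false); // start the gesture engine, polled without an interrupt pin
}

void loop() {
  if (apds.isGestureAvailable()) {
    switch (apds.readGesture()) {  // direction decoded from the four directional photodiodes
      case DIR_UP:    Serial.println("UP");    break;
      case DIR_DOWN:  Serial.println("DOWN");  break;
      case DIR_LEFT:  Serial.println("LEFT");  break;
      case DIR_RIGHT: Serial.println("RIGHT"); break;
      case DIR_NEAR:  Serial.println("NEAR");  break;
      case DIR_FAR:   Serial.println("FAR");   break;
      default:        break;
    }
  }
}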

      Fig.1. Concocting Project Soli technology to a Smart Mobile

      Arduino is open-source computer hardware, i.e. a microcontroller board designed with sets of digital and analog I/O pins that can be interfaced with various expansion boards and other circuits. These boards feature serial communication interfaces for loading programs from personal computers and for reading sensors. For programming the microcontrollers, the Arduino project provides an integrated development environment (IDE) that supports the C programming language used to develop this project.

      In this work, the SparkFun RGB gesture sensor takes the role of Soli's radar for touchless interactions, where the human hand becomes a natural, intuitive interface for physical devices. The gesture sensor tracks sub-millimeter motions at high speed and accuracy. The sensor is fitted onto the Arduino board, which translates complex human hand gestures into simple swipes and applies the finesse of our actions to the virtual realm. The radar technology referred to in this work uses the Direct Sequence Spread Spectrum (DSSS) technique. DSSS is a spread-spectrum technique whereby the original data signal is multiplied by a pseudo-random noise spreading code. This spreading code has a higher chip rate, which results in a wideband, time-continuous scrambled signal. DSSS significantly improves protection against jamming signals, especially narrowband ones, and makes the signal less noticeable.
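      To make the spreading step concrete, the small C++ example below spreads a single data bit over an 8-chip pseudo-random code with XOR and recovers it by majority vote after despreading; the chip sequence and lengths are arbitrary illustrative values, not parameters of Soli's actual radar.

// Toy DSSS illustration: one data bit is spread over 8 chips with XOR and recovered
// by majority vote after despreading with the same pseudo-random code.
#include <iostream>

int main() {
  const int kChips = 8;
  int pn[kChips] = {1, 0, 1, 1, 0, 0, 1, 0};   // illustrative pseudo-random spreading code
  int dataBit = 1;                              // narrowband symbol to transmit

  int tx[kChips];
  for (int i = 0; i < kChips; ++i)
    tx[i] = dataBit ^ pn[i];                    // spreading: chip rate is much higher than bit rate

  tx[2] ^= 1;                                   // simulate one corrupted chip on the channel

  int votesForOne = 0;
  for (int i = 0; i < kChips; ++i)
    if ((tx[i] ^ pn[i]) == 1) ++votesForOne;    // despreading chip by chip

  int recovered = (votesForOne > kChips / 2) ? 1 : 0;  // majority vote absorbs the corrupted chip
  std::cout << "recovered bit = " << recovered << "\n";
  return 0;
}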

    2. Architectural Diagram

    Human finger gestures are captured by the SparkFun RGB gesture sensor as raw inputs, and the sensor sends these as signals to the Arduino board. The embedded program (the concocted Project Soli) fabricated on the Arduino board transforms the raw inputs into signals, then learns, detects and infers these gestures to control sound functions on a smart mobile. The primary intention of this work, to control the device with hand gestures rather than through its user-interface tools, is shown in Fig. 2.

    Fig.2. Functional diagram of Concocting Project Soli technology to a Smart Mobile

  3. DESIGN ISSUES AND PROTOTYPE

    1. Formatting the human hand finger gestures

      • To adjust the volume of the smart phone, we trained the system with the following finger gestures: move the index finger towards the thumb to reduce the volume from high to low, and vice versa to increase the volume level.

      • To mute and restore the volume, we trained the system with the following finger gesture: move the thumb sideways from right to left to mute the volume, and vice versa to intensify the volume back to its previous level.

      • A sensing distance of about 20 cm from the sensor is maintained for gesture recognition.

    2. Prototype of the Embedded Program

      • Accept input as hand gestures from sensory radar

      • Convert gestures into signal transformations

      • Compute the physical motion information (velocity, direction, distance)

      • Interpret these values as up, down, left and right gestures

      • Apply machine learning primitive functions to detect and infer gestures

      • Perform touch-less interactions (a minimal sketch of this pipeline is given below).
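    Assuming, purely for illustration, that the phone accepts simple text commands over a serial or Bluetooth-serial bridge, the sketch below strings these steps together and maps the recognized gestures to audio actions. The command strings VOL+, VOL-, MUTE and UNMUTE are hypothetical placeholders rather than an actual phone API, and the SparkFun_APDS9960 calls are the same assumed library as in the earlier sketch.

// Hedged sketch: map gestures to hypothetical phone audio commands sent over a
// serial/Bluetooth bridge. VOL+, VOL-, MUTE and UNMUTE are placeholder strings.
#include <Wire.h>
#include <SparkFun_APDS9960.h>

SparkFun_APDS9960 apds;
bool muted = false;                      // tracks whether the last sideways gesture muted the phone

void sendCommand(const char *cmd) {
  Serial.println(cmd);                   // assumption: the phone side parses these text commands
}

void setup() {
  Serial.begin(9600);
  apds.init();
  apds.enableGestureSensor(false);
}

void loop() {
  if (!apds.isGestureAvailable()) return;
  switch (apds.readGesture()) {
    case DIR_UP:    sendCommand("VOL+"); break;    // index finger moving away from the thumb
    case DIR_DOWN:  sendCommand("VOL-"); break;    // index finger moving towards the thumb
    case DIR_LEFT:                                 // thumb swipe right-to-left: mute
      if (!muted) { muted = true;  sendCommand("MUTE"); }
      break;
    case DIR_RIGHT:                                // swipe back left-to-right: restore volume
      if (muted)  { muted = false; sendCommand("UNMUTE"); }
      break;
    default: break;
  }
}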

    A feasible challenge, as IoT trends become reality, is the shift in thinking needed about the implications and issues of connected components, and the likelihood of fielding these devices in insecure environments [9]. The extension of this project towards energy efficiency is to intertwine the sensor, the embedded program and the mobile chip as a baseline communication layer that enables the interactions between things, with the Internet providing base computing and enabling the embedded program to send and receive data.

  4. CONCLUSION

In this paper we presented a study of concocting Project Soli technology to adjust the sound functions of a mobile phone without touching its user-interface tools, as an energy-efficient prototype. The simulated system allies the hardware components of a sensory radar, an Arduino board and the micro chip of the smart phone. The Arduino board is embedded with an algorithm to extract signals from the sensory radar and to detect and track touchless interactions with gestures. The consequences of this study are still at an infancy level, producing arbitrary results. Despite this challenge, we are working extensively towards a constructive solution by using machine-learning primitive functions to detect and infer gestures, and also towards fixing the technicalities of embedding the energy-efficient Soli chip with the mobile processor.

REFERENCES

  1. Bowman, Doug, et al. 3D User Interfaces: Theory and Practice, CourseSmart eTextbook. Addison-Wesley, 2004.

  2. Kato, Hirokazu, et al. "Virtual object manipulation on a table-top AR environment." Proceedings of the IEEE and ACM International Symposium on Augmented Reality (ISAR 2000), IEEE, 2000.

  3. Valerio, Pablo. "Google: IoT Can Help The Disabled." InformationWeek, March 2015. http://www.informationweek.com/mobile/mobile-devices/google-iot-can-help-thedisabled/a/d-id/1319404.

  4. Danova, Tony. "Morgan Stanley: 75 Billion Devices Will Be Connected To The Internet Of Things By 2020." Business Insider, October 2, 2013. http://www.businessinsider.com/75-billion-devices-will-be-connected-to-the-internet-by-2020-2013-10.

  5. Manyika, James, Michael Chui, Peter Bisson, Jonathan Woetzel, Richard Dobbs, Jacques Bughin, and Dan Aharon. The Internet of Things: Mapping the Value Beyond the Hype. McKinsey Global Institute, June 2015.

  6. Cisco Visual Networking Index: Forecast and Methodology, 2014-2019. Cisco, May 27, 2015. http://www.cisco.com/c/en/us/solutions/collateral/service-provider/ip-ngn-ip-next-generation-network/white_paper_c11-481360.pdf.

  7. Rose, Karen, Scott Eldridge, and Lyman Chapin. "The internet of things: An overview." The Internet Society (ISOC) (2015): 1-50.

  8. Thaler, Dave, Hannes Tschofenig, and Mary Barnes. "Architectural Considerations in Smart Object Networking." IETF 92 Technical Plenary, IAB, RFC 7452, 6 Sept. 2015. https://www.ietf.org/proceedings/92/slides/slides-92-iab-techplenary-2.pdf.

  9. Li, Shancang, Li Da Xu, and Shanshan Zhao. "The internet of things: a survey." Information Systems Frontiers 17.2 (2015): 243-259.

  10. Tschofenig, H., et al. Architectural Considerations in Smart Object Networking. Tech. no. RFC 7452. Internet Architecture Board, Mar. 2015. https://www.rfc-editor.org/rfc/rfc7452.txt
