Model of Autonomous Car

DOI : 10.17577/IJERTV9IS050649


Uzma Kazi
Student, Department of Computer Engineering, Rajiv Gandhi Institute of Technology, Mumbai, India

Dr. Sharmila S. Gaikwad
Assistant Professor, Department of Computer Engineering, Rajiv Gandhi Institute of Technology, Mumbai, India

Richa Koli
Student, Department of Computer Engineering, Rajiv Gandhi Institute of Technology, Mumbai, India

Samiksha Mhadeshwar
Student, Department of Computer Engineering, Rajiv Gandhi Institute of Technology, Mumbai, India

Abstract: The increasing traffic density on city roads in India has led to the requirement of automation in vehicles. Lack of proper road communication, inadequate driving skills and unreliable road conditions are major concerns in India, and introducing autonomy in vehicles will bring drastic changes to the current scenario. The idea described in this paper is based on Tesla and Google cars. The aim of the project is to build an autonomous car system that can be installed in the majority of cars already on the road, instead of buying a whole new car. Machine learning algorithms are used to provide intelligence to the car, implemented through a combination of sensors and a camera that provide the necessary control. Apart from making it a highly advanced enhancement in transportation, the aim is also to make it as cost effective as possible so that it is available to the majority. Numerous challenges are faced on the road each day, and taking intelligent decisions within the required time is the problem considered in this paper. On implementing and testing this technology for all possible road and environmental conditions, the autonomous car will become a revolution in Indian transportation.

Keywords: Autonomous, Raspberry Pi, Convolution Neural Network.

  1. INTRODUCTION

    Autonomous vehicles are the future of all transportation. Be it auto-pilot mode for aeroplanes or for trains, reliance on Artificial Intelligence has been a part of such systems in different capacities. Even if people get past the idea of relying fully on technology, the automated vehicles available today are among the most expensive vehicles. Also, various human mistakes such as speeding, talking on the phone and drunk driving have become the root of transportation problems and accidents, and the statistics of these issues rise every day. Thus, driver error is one of the most common causes of accidents on the roads of India. With a constant increase in the number of accidents, it has become crucial to take care of human errors and ensure the safety of people in the car as well as of those on the road. The main use of an autonomous car is to sense its environment and produce an optimal route for travelling without human involvement. Various degrees of autonomy are available, ranging from some human involvement to absolutely none. All environmental conditions must be handled in minimal time, and the best decision regarding safety and travel must be taken.

    Autonomous cars are created in such a way that their reflexes must be faster and their judgement more reliable than a human's. This will make lives easier and safer by abiding by traffic flow regulations and rules better than humans do. For a country like India, with an ever increasing population, the number of private vehicles on the road has spiked exponentially in the last few years. Hence autonomous vehicles can also help in reducing traffic, along with their other advantages, and bring a big change in improving the lifestyle.

  2. REQUIREMENTS
    1. Hardware

    • Pi Camera

      It captures images of the surrounding environment to build the dataset on which the CNN learns and trains; in the actual implementation, its feed is used to guide the car (see the capture sketch after this list).

    • Raspberry Pi 3B+

      It is interfaced with the Pi Camera to provide images (video) for the viewing of the car. The CNN is coded here, and the output for the direction and working of the car is sent as input to the controller.

    • Arduino Microcontroller

      It takes the output of the CNN as input and is connected to the DC brakes and sensors for obstacle detection and the actual moving and stopping of the car.

    • Ultrasonic Sensor

      Ultrasonic sensors are integrated with each other and with the controller for detecting obstacles on the course based on an echo system.

    • DC Motor

      It uses the electrical energy supplied by the Arduino Microcontroller and converts it into mechanical energy, which causes the movement of the tyres.
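    The capture sketch below illustrates how frames could be pulled from the Pi Camera on the Raspberry Pi, assuming the standard picamera Python library; the resolution and frame rate are illustrative choices, not values taken from the paper.

      # Illustrative continuous-capture loop for the Pi Camera (assumed picamera API).
      from picamera import PiCamera
      from picamera.array import PiRGBArray
      import time

      camera = PiCamera(resolution=(320, 240), framerate=30)   # assumed settings
      raw = PiRGBArray(camera, size=(320, 240))
      time.sleep(2)                                            # let the sensor warm up

      for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
          image = frame.array          # NumPy array, ready for OpenCV and the CNN
          # ... preprocess and classify the frame here ...
          raw.truncate(0)              # clear the buffer for the next frame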

    2. Software

    • Arduino IDE

      It is the platform in which the programs for the Arduino board are written, which in turn cause the physical movement of the car.

    • OpenCV and Spyder Environment

      OpenCV takes the image from the Pi Camera, converts it into grayscale, resizes it and passes it to the neural network (a preprocessing sketch follows this list); Spyder is the environment in which this code is developed.

    • Raspberry Pi Cam Interface

      It captures the live feed at a high rate of images per second, which is the input to the CNN on the Raspberry Pi.
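    As a rough illustration of the preprocessing step described above, the sketch below converts a captured frame to grayscale and resizes it with OpenCV; the 28 x 28 target size follows the CNN input described in the next section, while the normalisation step is an added assumption.

      # Minimal preprocessing sketch (assumed OpenCV + NumPy usage).
      import cv2
      import numpy as np

      def preprocess(frame, size=(28, 28)):
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # drop the colour channels
          small = cv2.resize(gray, size)                   # match the CNN input size
          return small.astype(np.float32) / 255.0          # scale pixels to [0, 1] (assumption)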

  3. PROPOSED ARCHITECTURE

    In the model, the Pi Camera is mounted on top of the car and takes input images at a high frame rate to provide live environment data. These images are converted to grayscale using image processing, which reduces the dimensional matrix required for RGB images. Using a Convolutional Neural Network for image classification, the convolution layer finds patterns in the images that characterise the various layouts of the road. The images are classified into Left, Right, Forward and Reverse for the movement of the car. These images are first used for training the system under various circumstances; the training makes the system capable of predicting, or classifying, the actions to be taken while driving. The system uses a combination of sensors to detect objects and the speed of the objects present on the road. After a decision is made, the output is given as input to the Arduino Microcontroller, which controls the DC brake system and directs the car or controls its speed. The software used in this project comprises the Arduino IDE for writing the code for the Arduino board; OpenCV, which crops out the relevant section of the video from the Raspberry Pi cam interface, converts it to grayscale, resizes it and passes it to the Convolutional Neural Network; the Spyder environment; and the Raspberry Pi cam interface, which remotely captures the live feed given just the IP address of the Raspberry Pi.

    Fig. 1. Proposed System

    Fig. 2. CNN Architecture

    The CNN shown in Fig. 2 consists of the following layers (a code sketch follows the list):

    i. Input Layer: The grayscale image from the Pi Camera is the input to the CNN, with width 28, height 28 and depth 1.

    ii. Convolution Layer: Filters are applied to the image, the image is convolved and feature maps are extracted.

    iii. ReLU Layer: This is the activation function layer, which applies an element-wise activation function, R(z) = max(0, z), to the output of the convolution layer. This introduces non-linearity into the network, which handles complexity, and produces rectified feature maps as output.

    iv. Pooling Layer: This layer is periodically inserted into the convolutional network to reduce the dimensionality of the rectified feature maps. We use max pooling with 2 x 2 filters and stride 2, so the resultant volume has dimension 14 x 14 x 12.

    v. Fully-Connected Layer: This is the output layer, which classifies the image.
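    For concreteness, a minimal sketch of this architecture is given below, assuming a Keras/TensorFlow implementation (the paper does not name the framework). The 12 filters, the 2 x 2 max pooling with stride 2 and the four output classes follow the description above; the kernel size, optimizer and loss are assumptions.

      # Minimal sketch of the described CNN (assumed Keras/TensorFlow implementation).
      from tensorflow.keras import layers, models

      def build_model():
          model = models.Sequential([
              layers.Input(shape=(28, 28, 1)),                               # 28x28 grayscale frame
              layers.Conv2D(12, (3, 3), padding="same", activation="relu"),  # convolution + ReLU
              layers.MaxPooling2D(pool_size=(2, 2), strides=2),              # -> 14x14x12, as in the paper
              layers.Flatten(),
              layers.Dense(4, activation="softmax"),                         # Left, Right, Forward, Reverse
          ])
          model.compile(optimizer="adam",
                        loss="sparse_categorical_crossentropy",
                        metrics=["accuracy"])
          return model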
  4. WORKING

    The hardware system is mounted on top of a board and connections are made using jumper cables. The ultrasonic sensor is mounted at the front of the car for detecting objects within a 10 m distance. The corresponding pins of the Arduino are connected to the sensor and the motor.

    The distance measuring formula for the sensor, with the echo pulse duration in microseconds and the resulting distance in centimetres, is:

    distance = pulse / 29.387 / 2;

    The Pi Camera is mounted on top of the board and is connected to the Raspberry Pi.

    The images taken by the camera are processed by the CNN and each image is classified into a direction of movement. This decision is sent to the Arduino Microcontroller, which in turn controls the wheels of the car.
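    The paper does not specify how the direction decision reaches the Arduino; the sketch below assumes a USB serial link (pyserial) with a simple one-byte command protocol handled on the Arduino side. The port name and command characters are illustrative assumptions.

      # Minimal sketch: forward the CNN's predicted class to the Arduino over serial.
      import serial

      LABELS = ["Left", "Right", "Forward", "Reverse"]
      COMMANDS = {"Left": b"L", "Right": b"R", "Forward": b"F", "Reverse": b"B"}

      def send_direction(port, class_index):
          """Translate a predicted class index into a one-byte motor command."""
          port.write(COMMANDS[LABELS[class_index]])

      # Example usage (assumed port name and baud rate):
      # port = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
      # send_direction(port, predicted_index)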

    Fig. 3. Hardware model

    The OpenCV environment is used for image processing: the lane of the road is detected so that the car moves only on the road and not elsewhere. The road is identified by the white lines marked at its edges and a yellow line acting as a divider between the directions of movement, as illustrated by the sketch after the figure below.

    Fig. 4. Lane Detection
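    The following is an illustrative lane-detection sketch using OpenCV (Canny edge detection followed by a probabilistic Hough transform). The exact pipeline, thresholds and region of interest used in the project are not given in the paper, so the values here are assumptions.

      # Illustrative lane-line detection (assumed Canny + Hough approach).
      import cv2
      import numpy as np

      def detect_lane_lines(frame):
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)
          edges = cv2.Canny(blurred, 50, 150)          # edge map of the lane markings

          # Keep only the lower half of the frame, where the road usually appears.
          mask = np.zeros_like(edges)
          h = edges.shape[0]
          mask[h // 2:, :] = 255
          roi = cv2.bitwise_and(edges, mask)

          # Fit straight segments to the white and yellow markings.
          lines = cv2.HoughLinesP(roi, 1, np.pi / 180, threshold=50,
                                  minLineLength=40, maxLineGap=20)
          return lines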

  5. ANALYSIS

    This research mainly focuses on making the traveller's journey from source to destination safer and more convenient, with minimal human intervention. This study proposes a model of a self-driving car which will help in minimising the number of accidents occurring daily. The car model has a Raspberry Pi camera, ultrasonic sensors, a Raspberry Pi and an Arduino embedded in it.

    This review considers the following papers to get a proper view of the current updates and advances in the field and the solutions they suggest.

    • Working Model of a Self-Driving Car using Convolution Neural Network, Raspberry Pi and Arduino:

      The evolution of Artificial Intelligence has served as a catalyst in the field of technology. This paper proposes a working model of a self-driving car that is capable of driving from one location to another on different types of tracks, such as curved tracks, straight tracks and straight tracks followed by curved tracks. A method to build a model of a self-driving car is presented, and the different hardware components along with the software and neural network configuration are clearly described.

    • A Standard Driven Software Architecture for Fully Autonomous Vehicles:

      This paper proposes a functional software architecture for fully autonomous vehicles. A major issue of this approach is that requirements cannot be traced with respect to functional components, and several components group most of the functionality. Therefore, it is often difficult to adopt the proposals.

    • A Functional Architecture for Autonomous Driving:

      This paper presents the principal components needed in a functional architecture for autonomous driving, along with reasoning for how they should be distributed across the architecture. A functional architecture integrating all the concepts and reasoning is also presented.

    • AI in a Small Space: A Small Architecture Solution for Self-Driving Enhancements to a Remote-Control Car:

      This paper presents a small, inexpensive architecture for converting a remote-control race car into a self-driving car. It uses the Unity 3D prototype, the GoPiGo car and the Traxxas Slash 4X4 car.

  6. CONCLUSION

    Though the idea of relying completely on technology for our safety and comfort is a new beginning for us, and new things can be difficult to adjust to, a proper and full implementation of our proposed system will not only ease the hard-working lives of our people but also greatly help the environment. With emerging and increasing expectations, awareness of the right use of this technology and its benefits must be spread. By offering the right technology at a reasonable cost, consumers will be more willing to accept the revolution and eventually become a part of it. The proposed system was developed with the help of Machine Learning and Image Processing, keeping in mind how it will help reduce the increasing number of accidents and make journeys more convenient.

REFERENCES

    1. Sharmila Gaikwad, Akshay Vishwanath, Lalit Bhosle, Rishabh Bhandari, "Internet Controlled Vehicle," International Journal of Recent and Innovation Trends in Computing and Communication, vol. 4, pp. 17-21, 2016.
    2. K. R. Menon, S. Menon, B. Menon, A. R. Menon, and M. Z. A. S. Syed, "Real Time Implementation of Path Planning Algorithm with Obstacle Avoidance for Autonomous Vehicle," in 3rd International Conference on Computing for Sustainable Global Development, New Delhi, India, 2016.
    3. P. Rau, "Target Crash Population of Automated Vehicles," in 24th International Technical Conference on the Enhanced Safety of Vehicles (ESV), 2015, pp. 111.
    4. E. Frink, D. Flippo, and A. Sharda, "Invisible Leash: Object-Following Robot," Journal of Automation, Mobile Robotics & Intelligent Systems, vol. 10, no. 1, pp. 37, Feb. 2016.
    5. M. Frutiger and C. Kim, "Digital Autopilot Test Platform with Reference Design and Implementation of a 2-Axis Autopilot for Small Airplanes," Department of Electrical and, pp. 124, 2003.
    6. M. Weber, "Where to? A History of Autonomous Vehicles," 2014. [Online]. Available: http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/
    7. D. Helbing, "Traffic and related self-driven many-particle systems," Reviews of Modern Physics, vol. 73, no. 4, pp. 1067-1141, Dec. 2001.
    8. https://www.geeksforgeeks.org/
    9. Swati Patil, Kshitij Gholap, Ashutosh Bhilare, Utkarsh Kondekar, "Working Model of a Self-Driving Car using Convolution Neural Network, Raspberry Pi and Arduino," International Journal of Research in Engineering, Science and Management, vol. 2, no. 12, December 2019.
