3D Mapping using Lidar

DOI: 10.17577/IJERTCONV6IS13090


Vandana V

Dept. of ECE, GSSSIETW, Mysuru, India

Vidya B

Dept. of ECE, GSSSIETW, Mysuru, India

Yashaswini S

Dept. of ECE, GSSSIETW, Mysuru, India

Abstract:- This paper presents a simple and cost-effective approach to three-dimensional (3D) mapping of internal structures using Light Detection and Ranging (LiDAR). The LiDAR unit is mounted on two servo motors and is controlled so that it measures distance together with the angles of both servos simultaneously; these values are used to compute and render a 3D image of the internal structure. 3D mapping using the photogrammetry technique is sophisticated, time-consuming, and costly, and its accuracy is unacceptable in some cases. 3D mapping provides a very realistic view that enhances visualization, and it has many applications in research, surveying, and engineering.

  1. INTRODUCTION

      1. Overview

        Radio Detection and Ranging (RADAR) and Sound Navigation and Ranging (SONAR) have been used for decades, not only for military tasks but also for civilian purposes. RADAR and SONAR play a vital role in air and sea navigation and in weather forecasting systems. Their major drawback is low frequency and precision. In contrast, the evolving technology of LiDAR offers very high frequency and precision, especially for short-distance measurement, which makes it much more accurate and reliable than the pre-existing RADAR and SONAR technologies. LiDAR measurement can be extended to the mapping of interior structures. LiDAR modelling/mapping also finds applications in preparing high-quality Digital Elevation Models (DEM) around the globe. A LiDAR-based DEM usually has a precision of 0.1 m along with sufficient horizontal resolution. One important issue for LiDAR-based DEMs is data density: a LiDAR emits 4,000 to 35,000 pulses per second, with 2-7 returns collected for each laser pulse, which yields on average about 25,000 points per square mile.
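        As a rough consistency check of these figures, the point yield can be estimated directly from the pulse rate and the returns per pulse. The following Python sketch uses illustrative values picked from the ranges quoted above; the resulting coverage rate ignores flight speed, swath overlap, and scan pattern, so it is only an order-of-magnitude illustration.

        # Rough consistency check of the LiDAR figures quoted above; the
        # pulse rate and returns-per-pulse are illustrative values chosen
        # from within the quoted ranges, not measured parameters.
        pulse_rate_hz = 20_000        # within the 4,000-35,000 pulses/s range
        returns_per_pulse = 3         # within the 2-7 returns per pulse range
        density_per_sq_mile = 25_000  # average point density quoted above

        points_per_second = pulse_rate_hz * returns_per_pulse
        coverage_rate = points_per_second / density_per_sq_mile

        print(f"{points_per_second:,} returns/s "
              f"-> about {coverage_rate:.1f} sq. miles/s at the quoted density")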

      2. Objective

    The main purpose of the project is to create a simple, easily adjustable, mobile 3D mapping system that produces a low-cost 3D image of the internal structures of buildings. LiDAR works on a principle similar to RADAR and SONAR, but the main difference is that LiDAR uses infrared light waves, with a pulse frequency of about 1 MHz. LiDAR is mainly used for light detection and ranging. A LiDAR device combines three different technologies: an Inertial Navigation System (INS), the Global Positioning System (GPS), and laser sensors. In this project, LiDAR is used to map the objects lying within its range in 3D, covering the complete sphere, i.e., 360 degrees.

    SLAM comprises the simultaneous estimation of the state of a robot equipped with on-board sensors and the construction of a model (the map) of the environment that the sensors perceive. In simple instances, the robot state is described by its pose (position and orientation), although other quantities may be included in the state, such as robot velocity, sensor biases, and calibration parameters. The map, on the other hand, is a representation of aspects of interest (e.g., positions of landmarks and obstacles) describing the environment in which the robot operates.
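    To make the range-and-angles scheme concrete, the Python sketch below converts a single time-of-flight reading plus the two servo angles into an (x, y, z) point. This is a minimal illustration of the geometry, not the project's actual driver code; the function names and the sample numbers are our own assumptions.

    import math

    C = 299_792_458.0  # speed of light, m/s

    def tof_distance(round_trip_s: float) -> float:
        """Range from a time-of-flight reading: the pulse travels out and
        back, so the one-way distance is c * t / 2."""
        return C * round_trip_s / 2.0

    def to_cartesian(distance_m: float, pan_rad: float, tilt_rad: float):
        """Convert a range reading plus the two servo angles (pan about the
        vertical axis, tilt above the horizontal plane) to an (x, y, z) point."""
        horizontal = distance_m * math.cos(tilt_rad)
        return (horizontal * math.cos(pan_rad),
                horizontal * math.sin(pan_rad),
                distance_m * math.sin(tilt_rad))

    # Example: a pulse returning after ~33.4 ns corresponds to a surface
    # roughly 5 m away; sweeping the servos covers the full sphere.
    d = tof_distance(33.4e-9)
    print(to_cartesian(d, math.radians(45), math.radians(10)))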

  2. LITERATURE SURVEY

    LiDAR, RADAR, and SONAR are modern remote sensing techniques used by various professionals to collect and analyze data. Each uses a different medium to transmit signals to and from objects and then analyzes the round-trip time to measure the distance between the transmitter and the objects. Some of the differences between the three technologies are given below. RADAR uses electromagnetic waves (radio signals) to determine the distance and angle of inclination of objects on the surface. Because of its longer wavelength, it cannot detect smaller objects, so data about very small surface features may be distorted or insufficient, and it cannot provide an exact 3D image of an object: the image is a representation of the object, not an exact replica of its characteristics. SONAR stands for Sound Navigation and Ranging. It transmits sound waves that return as echoes, which are used to analyze various qualities or attributes of the target. SONAR is mainly used to detect underwater objects, because sound waves can penetrate the water down to the bottom of the sea.

    LiDAR, in contrast, uses light pulses (laser beams) to determine the distance between the sensor and the object: the laser travels to the object and is reflected back to the source, and the round-trip time is then used to calculate the distance. Because of the nature of laser pulses, LiDAR is mostly used to measure exact distances. The pulses travel at the speed of light, which increases the accuracy of the measurements. LiDAR can create high-resolution images of an object on any surface, which is why it is popular in mapping and other topographical uses. Because of the speed of the laser pulses, LiDAR sensors return data quickly and accurately. Unlike RADAR, LiDAR data has higher measurement accuracy because of its speed and short wavelength, and LiDAR targets specific objects, which further contributes to the accuracy of the data relayed. LiDAR is also cheaper in large-scale applications, because it is fast, saves a lot of time, and is not very labor-intensive compared with other methods of data collection.

  3. METHODOLOGY

    Figure 1: Block diagram of the SLAM technique. Its blocks are: LiDAR input, landmark extraction, data association, odometry change and odometry update, and particle-filter handling of re-observations and new observations.

    The SLAM process consists of a number of steps. The goal of the process is to use the environment to update the position of the robot. Since the robot's odometry (which gives the robot's position) is often erroneous, we cannot rely on it directly; instead, laser scans of the environment can be used to correct the robot's position. This is accomplished by extracting features from the environment and re-observing them as the robot moves around. One of the most widely used approaches for scan matching is the Iterative Closest Point (ICP) algorithm, and many SLAM solutions rely on ICP to estimate the relative transformation between two overlapping point clouds. ICP was independently introduced by Besl and McKay, Chen and Medioni, and Zhang. The algorithm attempts to find the transform parameters that minimize the Euclidean distance between corresponding points, which are assumed to be nearest-neighbor points. Several variants of ICP have been proposed, covering every step of the algorithm from point selection to the minimization strategy. The steps of the ICP algorithm are classified as follows (a compact sketch in code follows the list):

    1. Selection of the set of points.

    2. Matching the points to the samples.

    3. Weighting corresponding pairs appropriately.

    4. Rejecting certain pairs.

    5. Assigning an error metric.

    6. Minimizing the error metric.
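    The following Python sketch implements the core of this loop: steps 1-2 via brute-force nearest-neighbor matching, step 5 via a mean Euclidean error metric, and step 6 via the closed-form SVD (Kabsch) solution. The weighting and rejection steps (3-4) are omitted for brevity, and the function names are our own; this is an illustration for small point clouds, not a production scan matcher.

    import numpy as np

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst
        (the closed-form SVD/Kabsch solution used by point-to-point ICP)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        return R, t

    def icp(source, target, iterations=30, tolerance=1e-6):
        """Point-to-point ICP: match each source point to its nearest
        target point, solve for the rigid transform, apply it, repeat."""
        src = source.copy()
        prev_error = np.inf
        for _ in range(iterations):
            # steps 1-2: select all points, match by nearest neighbor
            d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
            nn = d2.argmin(axis=1)
            # step 5: mean Euclidean distance as the error metric
            error = np.sqrt(d2[np.arange(len(src)), nn]).mean()
            if abs(prev_error - error) < tolerance:
                break
            prev_error = error
            # step 6: minimize the metric and apply the transform
            R, t = best_rigid_transform(src, target[nn])
            src = src @ R.T + t
        return src, error

    # Example: re-align a displaced copy of a small synthetic 2D scan.
    rng = np.random.default_rng(0)
    target = rng.uniform(0.0, 5.0, size=(100, 2))
    theta = np.radians(10)
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    source = target @ R_true.T + 0.3
    aligned, err = icp(source, target)
    print(f"final mean alignment error: {err:.4f}")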

      An outline of the SLAM process is given below.

      Figure 2: The robot is represented by the triangle. The stars represent landmarks. The robot initially measures the locations of the landmarks using its sensors (sensor measurements are illustrated with lightning bolts).

      Figure 3: The robot moves, so it now thinks it is here. The distance moved is given by the robot's odometry.

      Figure 4: The robot once again measures the locations of the landmarks using its sensors, but finds that they don't match where it thinks they should be (given its believed location). Thus the robot is not where it thinks it is.

      Figure 5: Because the robot trusts its sensors more than its odometry, it now uses the information gained about where the landmarks actually are to determine where it is (the location the robot originally thought it was at is illustrated by the dashed triangle).

      Figure 6: In actual fact, the robot is here. The sensors are not perfect, so the robot will not know precisely where it is; however, this estimate is better than relying on odometry alone. The dotted triangle represents where it thinks it is, the dashed triangle where odometry told it it was, and the last triangle where it actually is.
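      The correction illustrated in Figures 2-6 amounts to weighting the sensor-derived pose against the odometry-derived pose. A minimal one-dimensional, Kalman-style sketch of this blending is shown below; the variances used are illustrative assumptions, not values from this project.

      def fuse(odometry_pose, odometry_var, sensor_pose, sensor_var):
          """One-dimensional Kalman-style update: the weight on the sensor
          estimate grows as its variance shrinks relative to odometry's."""
          k = odometry_var / (odometry_var + sensor_var)   # Kalman gain
          pose = odometry_pose + k * (sensor_pose - odometry_pose)
          var = (1 - k) * odometry_var
          return pose, var

      # Odometry says the robot is at x = 2.0 m but drifts badly (var 0.5);
      # the landmark re-observation implies x = 1.6 m and is tighter (var 0.1).
      pose, var = fuse(2.0, 0.5, 1.6, 0.1)
      print(f"fused pose: {pose:.2f} m (variance {var:.3f})")  # pulled toward 1.6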

      Figure 7: 2D map of an environment

      This map is given to the Blender software, which is used to build the 3D map.

      4. REQUIREMENTS

          1. Software requirements

            • Operating system: Linux

            • ROS (Robot Operating System):

              ROS is robotics middleware. The main ROS client libraries are C++ and Python, and Linux is listed as a supported platform for both. A minimal listener node that consumes LiDAR scans is sketched after this list.

            • MRPT (Mobile Robot Programming Toolkit):

              The Mobile Robot Programming Toolkit (MRPT) is an open source C++ library used in robotics to design and implement algorithms related to computer vision and motion planning.

            • Blender is a professional, free, and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D-printed models, and interactive 3D applications.
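            As referenced in the ROS entry above, a minimal ROS node that listens to LiDAR scans might look as follows. The /scan topic name is a common default for LiDAR drivers but depends on the driver actually used, so treat it as an assumption.

            #!/usr/bin/env python
            # Minimal ROS node that listens to a LiDAR scan topic.
            import rospy
            from sensor_msgs.msg import LaserScan

            def on_scan(msg):
                # Each message carries one sweep: ranges[i] is the distance
                # at angle_min + i * angle_increment (radians).
                valid = [r for r in msg.ranges
                         if msg.range_min < r < msg.range_max]
                if valid:
                    rospy.loginfo("nearest obstacle: %.2f m", min(valid))

            if __name__ == "__main__":
                rospy.init_node("lidar_listener")
                rospy.Subscriber("/scan", LaserScan, on_scan)
                rospy.spin()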

          2. Hardware requirements

            • LiDAR

            • FIREBIRD V

      5. CONCLUSION

        LiDAR offers superior performance to the pre-existing RADAR and SONAR technologies owing to its high accuracy and resolution. 3D mapping is required particularly to build a map of any given surroundings, and 3D mapping using LiDAR is accurate, reliable, and cost-effective. In this work, the LiDAR is mounted on servo motors carried by the FIREBIRD V robot, and the system maps the given environment in 3D.

      6. FUTURE EXTENSION

        In this project, we used the Blender software to build the 3D map from the obtained 2D map of an environment. This can be improved so that the 3D map is built automatically by a program written in a language such as Python, taking the obtained 2D map of the surroundings as input. This would avoid the manual use of tools such as Unity and Blender to build the 3D map; a sketch of the idea is given below.
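        As an illustration of this extension, the following Python sketch (our own, with the grid contents, cell size, and wall height as assumed parameters) converts a 2D occupancy grid into wall geometry written as a Wavefront OBJ file, which Blender or any other 3D tool can open without manual modelling.

        WALL_HEIGHT = 2.5   # meters, assumed
        CELL_SIZE = 0.1     # meters per grid cell, assumed

        def grid_to_obj(grid, path):
            """Write one axis-aligned box (8 vertices / 6 quads) per occupied cell."""
            with open(path, "w") as f:
                n = 0  # vertices written so far (OBJ indices are 1-based)
                for row, line in enumerate(grid):
                    for col, occupied in enumerate(line):
                        if not occupied:
                            continue
                        x, y = col * CELL_SIZE, row * CELL_SIZE
                        for dx in (0.0, CELL_SIZE):
                            for dy in (0.0, CELL_SIZE):
                                for z in (0.0, WALL_HEIGHT):
                                    f.write(f"v {x + dx} {y + dy} {z}\n")
                        # the six faces of the box, over the 8 vertices above
                        for a, b, c, d in [(1, 2, 4, 3), (5, 7, 8, 6),
                                           (1, 3, 7, 5), (2, 6, 8, 4),
                                           (1, 5, 6, 2), (3, 4, 8, 7)]:
                            f.write(f"f {n + a} {n + b} {n + c} {n + d}\n")
                        n += 8

        # Example: a tiny 3x3 occupancy grid with an L-shaped wall.
        grid_to_obj([[1, 1, 0], [1, 0, 0], [0, 0, 0]], "walls.obj")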

      7. REFERENCES

  1. Søren Riisgaard and Morten Rufus Blas, "SLAM for Dummies: A Tutorial Approach to Simultaneous Localization and Mapping."

  2. Cesar Cadena, Luca Carlone, Henry Carrillo, Yasir Latif, Davide Scaramuzza, José Neira, Ian Reid, and John J. Leonard, "Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age," IEEE Transactions on Robotics, vol. 32, no. 6, December 2016.

  3. Hugh Durrant-Whyte and Tim Bailey, "Simultaneous Localization and Mapping," 2006.

  4. M. Hämmerle, B. Höfle, J. Fuchs, A. Schröder-Ritzrau, N. Vollweiler and N. Frank, "Comparison of Kinect and Terrestrial LiDAR Capturing Natural Karst Cave 3-D Objects," IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 11, pp. 1896-1900, Nov. 2014. doi: 10.1109/LGRS.2014.2313599.

  5. W. Shen, J. Zhang and F. Yuan, "A new algorithm of building boundary extraction based on LIDAR data," 2011 19th International Conference on Geoinformatics, Shanghai, 2011, pp. 1-4. doi: 10.1109/GeoInformatics.2011.5981049.

  6. F. Wang, "LiDAR data acquisition methods in emergency management applications," 2011 19th International Conference on Geoinformatics, Shanghai, 2011, pp. 1-4. doi: 10.1109/GeoInformatics.2011.5981054.

  7. Thang Hoang and Viktor Palmqvist Berntsson, "Localisation using LiDAR and Camera," Department of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden, 2017.

  8. Marcus Olsson and Pontus Kielén, "Mapping and localization using automotive lidar," Department of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden, 2017.

  9. Ying He, Bin Liang, Jun Yang, Shunzhi Li and Jin He, "An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features."

  10. Claus Brenner, "Vehicle localization using landmarks obtained by a lidar mobile mapping system," in: Paparoditis N., Pierrot-Deseilligny M., Mallet C., Tournaire O. (Eds), IAPRS, Vol. XXXVIII, Part 3A, Saint-Mandé, France, September 1-3, 2010.

  11. Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Kai Arras, "Iterative Closest Point Algorithm."

  12. Sebastian Thrun, Wolfram Burgard, Dieter Fox, "A Real-Time Algorithm for Mobile Robot Mapping With Applications to Multi-Robot and 3D Mapping," IEEE International Conference on Robotics and Automation, San Francisco, April 2000.

  13. J. Borenstein, H. R. Everett, "Mobile Robot Positioning: Sensors and Techniques."
