Feasibility of using Cost Effective LIDAR for Precision Horticulture

DOI: 10.17577/IJERTV7IS010168


Prajankya Sonar

Automation and Robotics, BVBCET Hubballi, India.

Abstract: LIDAR (Light Detection and Ranging) sensors are one of the biggest contributors to the cost of a mapping/navigation system in a robot. A good quality LIDAR for a simple autonomous robot may account for 20-80% of the robot's cost. Hence, a minimal setup is proposed to prove the feasibility of using a very cost-effective LIDAR sensor in advanced applications where high-end LIDAR sensors are normally used. The application demonstrated here uses low computing resources and low-cost devices to perform 2D mapping of a horticulture field. The proposed setup has an overall error of 1.08%, while the sensor itself has a 1% error; hence the error contributed by the setup itself can be approximated as 0.08%.

Keywords: LIDAR; Horticulture; Simultaneous localization and mapping; Robot Operating System; Agricultural Robotics; Mapping

  1. INTRODUCTION

    Horticulture crops play a significant role in the economy, employment, national self-reliance, health, and the food and nutritional security of a country. In recent years, horticulture development has emerged as one of the major thrust areas in the agriculture sector in developing countries [1]. For optimum utilization of available land resources, delineation of orchards (or horticulture in general) and spatial analysis using remote sensing can provide useful information for management and decision making: successful application of site-specific crop management; quantification and scheduling of precise fertilizer and irrigation needs; and application of pesticides for pest and disease management. It also has the potential to increase net returns and optimize resources [2]. One of the key components of precision agriculture is data collection. At present, there are two primary approaches to data collection for precision agriculture: remote sensing and manual data collection. Satellite and aerial remote sensing are severely limited by cloud cover. The ability to interpret a model of the environment and to localize itself is one of the most important tasks for any robotic application. Such interpretation and its applications have become widespread in diverse fields like engineering [3], [4], military [5], [6], architecture [7], [8], and bio-medical science [9], [10], [11]. This interest is due, in large part, to the increased availability and cost reduction of three-dimensional (3D) scanning and imaging sensor technologies. Laser scanning, also known as LIDAR (light detection and ranging) scanning or imaging, is one of the most used technologies for those applications. Most of these sensors are used on man-made objects and environments, as it is complicated to use such sensors with natural objects; in particular, trees and other vegetation sometimes occlude objects of interest. For some applications, methods have been developed to filter such unwanted objects from LIDAR scans to gain a better view of the objects of interest [12].

    In contrast, the main interest of this study lies in the modelling and measurement of horticulture fields using a cost-effective LIDAR. This modelling can be used to make a map of the trees in the field, which can be useful for precision horticulture applications like planning of irrigation, harvesting, and fertilizer requirements. Although a variety of methods exist to do the same at various degrees of accuracy, this method is proposed for large-scale horticulture, as it does not require labour-intensive or destructive methods.

    Because the interest of this paper involves only the target trees, the study generally limited the extent of scans to include only the space immediately surrounding the target tree barks. Doing so expedited acquisition of the range images. To reduce the time needed for scan processing, the scans were clipped by removing the 45-degree sector at the back where the driver would hold the cart during acquisition. Additional scans were acquired to verify the precision of the setup. Data collection was done after sunset or before dawn, because atmospheric conditions were ideal then and the sensor must not be exposed to direct sunlight.

  2. METHODOLOGY

    1. Data Acquisition

      Scan acquisition was done from Feb 1 to 6, 2017, in a regular parking lot and lawn vegetation at KLE Technological University, Hubballi, India. In total, two sites were considered in this study. Each site was chosen considering the variety in vegetation and environment. After choosing a site, the trees to be scanned were named; then the cart was positioned at a place marked as the origin, which is the point in reference to which all the measurements and scanning are done. Once the origin is set, the mapping program is started on the on-board computer. The computer was connected to a laptop for remote display of the ongoing mapping.

    2. Data Registration

      Scan registration is the process of transforming the map from individual LIDAR scans into a single Cartesian coordinate system. Registration is carried out by a 3D rotation and translation of the scan surface data from the sensor's unregistered (native) coordinate system to a target or global coordinate system:

      $T(x) = Ax + t \quad (1)$

      where $T(x)$ is the transformed (registered) surface point, $A$ is the 3×3 rotation matrix, $x$ is the untransformed surface point from the sensor range scan, and $t$ is the translation 3-vector.
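      As an illustrative sketch (an assumption of how Eq. (1) can be applied, not the authors' implementation), the transform can be written in Python with NumPy; the rotation shown is about the z axis, matching the 2D mapping used here:

      import numpy as np

      def register_scan(points, yaw, translation):
          """Apply the rigid transform of Eq. (1), T(x) = A x + t,
          to an (N, 3) array of unregistered scan points."""
          c, s = np.cos(yaw), np.sin(yaw)
          # 3x3 rotation matrix A for a rotation of `yaw` radians about the z axis
          A = np.array([[c, -s, 0.0],
                        [s,  c, 0.0],
                        [0.0, 0.0, 1.0]])
          # T(x) = A x + t, applied row-wise to all points
          return points @ A.T + np.asarray(translation)

      # Example: a point at (1, 0, 0) rotated 90 degrees, then shifted by t = (1, 0, 0)
      scan = np.array([[1.0, 0.0, 0.0]])
      print(register_scan(scan, np.pi / 2, [1.0, 0.0, 0.0]))  # approx. [[1., 1., 0.]]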

      All surface points obtained in one range image are transformed by the same rotation matrix ($A$) and translation vector ($t$); thus a unique $(A, t)$ transformation is needed for each image to be registered. For this application, it was not necessary to georeference the point clouds to a real-world coordinate system. When mounted on the cart, the scanner's z axis was normal to the horizontal plane. Because the ground was level at the study site, any of the scans obtained could serve as a suitable target coordinate system for registration.

      Fig. 1: Software architecture

      Accurate registration of LIDAR images generally depends on the extraction and matching of common surfaces or artificial control points in overlapping, unregistered scenes. This scan matching is done using cart pose estimation with the data from odometry by GSLAM [13], which gives the estimate of the cart pose and generates a combined, registered scan map. The tree stems are used as natural targets for scene co-registration, noting that the accuracy of the segmenting and matching of features would directly affect the accuracy of any registration based on such features [3].

      The cart was pushed through the vegetation while the computer simultaneously mapped the trees, collecting odometry data from the rotary encoders connected to the wheels in a differential configuration and feeding the scans to the mapping algorithm [14], which registers the scan points to make a 2D map. The visualizer on the laptop (RViz [15]) simultaneously shows the trees being detected as well as the map being built. After a satisfactory map was obtained, the same steps were repeated 10 times for each site; hence 20 maps were collected in the study. The mapping algorithm is gmapping/GSLAM [13], [14], which is part of ROS [16].

      For validation, all the named trees on both sites were mapped in Cartesian coordinates, with reference to the origin used in scanning, using a Bosch GLM100C laser distance finder. Three readings were taken for each measurement, and their average was used for validation. The results were then compared and the errors calculated.

      Fig. 2: Trees recorded vs real positions of trees on Site A

      Fig. 3: Trees recorded vs real positions of trees on Site B

      Fig. 2 and Fig. 3 show the registered sensor data generated by the cart setup over the same site (Site A and Site B respectively), 10 times each.

      Whenever there is a change in position, the rotary encoders pull the hardware interrupts on an Arduino Uno, a microcontroller board, which in turn passes those signals to the computer. As shown in Fig. 1, the software takes transformations from odometry and laser scans from the LIDAR, then outputs the map and the estimated position of the cart using scan matching. The tree detector program reads the generated map and, using the given minimum and maximum radii, detects trees and outputs their coordinates.

      A rotary encoder is used on each wheel with half stepping, giving a resolution of 6°. Using 1:3 ratio gears from encoder to wheel improves the effective resolution to 2°, i.e. 180 points per revolution. Eqs. (2)-(5) were used to calculate the odometry:

      $\Delta_c = (\Delta_l + \Delta_r)/2 \quad (2)$

      $\theta = (\Delta_r - \Delta_l)/b + \theta_0 \quad (3)$

      $x = \Delta_c \cos(\theta) + x_0 \quad (4)$

      $y = \Delta_c \sin(\theta) + y_0 \quad (5)$

      where,

      $\Delta_l$, $\Delta_r$ = displacements of the left and right wheels respectively

      $\Delta_c$ = displacement of the cart centre

      $b$ = distance between the wheels

      $\theta$ = angle of the turn in radians
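      For concreteness, the following minimal Python sketch shows how Eqs. (2)-(5) turn encoder ticks into a pose update. The wheel radius and wheel base below are placeholder values, not the measured parameters of the cart; only the 180 points per revolution figure comes from the text:

      import math

      TICKS_PER_REV = 180   # 2 deg effective resolution, per the text above
      WHEEL_RADIUS = 0.10   # metres; placeholder value, not from the paper
      WHEEL_BASE = 0.40     # b, distance between wheels; placeholder value

      def odometry_update(x0, y0, theta0, left_ticks, right_ticks):
          """Update the cart pose from encoder ticks using Eqs. (2)-(5)."""
          # Convert ticks to wheel displacements delta_l and delta_r
          per_tick = 2.0 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
          d_l = left_ticks * per_tick
          d_r = right_ticks * per_tick
          d_c = (d_l + d_r) / 2.0                      # Eq. (2): centre displacement
          theta = (d_r - d_l) / WHEEL_BASE + theta0    # Eq. (3): heading
          x = d_c * math.cos(theta) + x0               # Eq. (4)
          y = d_c * math.sin(theta) + y0               # Eq. (5)
          return x, y, theta

      # Example: both wheels advance 90 ticks (half a revolution) -> straight line
      print(odometry_update(0.0, 0.0, 0.0, 90, 90))  # approx. (0.314, 0.0, 0.0)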

    3. Locating Tree Centres in the Map

    The generated map was given to a tree detector node, which applies the simple blob detector and the Canny edge detector [17] available in OpenCV [18], a computer vision library, to detect arcs as tree bark and hence find the centre of each tree. These centre distances with reference to the origin, collected from the tree detector node, were then compared with those measured physically with the laser distance finder.
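    The exact pipeline combining the two filters is not spelled out; a hedged sketch, assuming the map is exported as a grayscale image (map.png is a hypothetical file name) and running OpenCV's SimpleBlobDetector on the Canny edge output, could look like this, with illustrative radius thresholds rather than the values used in the study:

    import cv2
    import numpy as np

    # Load the occupancy-grid map as a grayscale image (hypothetical file name)
    grid = cv2.imread("map.png", cv2.IMREAD_GRAYSCALE)

    # Canny edge pass: tree barks show up as bright arcs on a dark background
    edges = cv2.Canny(grid, 50, 150)

    # Configure the blob detector with min/max radii (illustrative values)
    min_r, max_r = 3, 15  # radii in pixels
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = float(np.pi * min_r ** 2)
    params.maxArea = float(np.pi * max_r ** 2)
    params.filterByCircularity = False  # barks appear as partial arcs, not circles
    params.blobColor = 255              # look for bright blobs (edges are white)
    detector = cv2.SimpleBlobDetector_create(params)

    # Keypoint centres approximate the tree centres in pixel coordinates
    for kp in detector.detect(edges):
        cx, cy = kp.pt
        print(f"tree centre at pixel ({cx:.1f}, {cy:.1f}), diameter {kp.size:.1f}")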

  3. LIMITATIONS

    The sensor used in this research (RPLidar by RoboPeak [19]) cannot work well in daylight, and hence needs to be used when direct sunlight is not falling on the sensor. The sensor accuracy is 1% of the actual distance; hence the accuracy of the whole system cannot be better than 1%.

  4. RESULT AND ANALYSIS

    The analysis was done with a two-tailed test using Z-scores from the t-statistic method, with 95% repeatability as the null hypothesis.

    After plotting all the readings for all the trees (both X and Y), the distribution obtained is as shown in Fig. 4.

    Fig. 4: Result- Errors

    Mean Error: 0.14 m
    Maximum Error: ±0.29 m

    These values are calculated by taking the margin of error for each independent x and y reading in every set. The absolute error here is large, i.e. 29 cm. Notice that the plot is very high near 0: since the sensor error is specified as 1% of the measured distance, readings nearer to zero have very small absolute errors and are therefore nearly identical.

    To get a combined error estimate, all the readings and data are converted to percentage error from the real value, as shown in Fig. 5.

    Fig. 5: Result- Percentage Error

    Mean Percent Error: 3.62%
    Maximum Percent Error: ±7.7%

    This graph shows the percentage error of all the readings, using a normal distribution of 1000 samples per reading for prediction within each reading's margin of error. That is, the maximum error would be 7.7% of the actual reading; since the LIDAR itself has an error of 1% [19], the remaining error comes from the rest of the system (odometry, mapping algorithm, human errors, and other errors). To get a single number as an error estimate, all the above percentage errors are combined as shown in Fig. 6.

    Fig. 6: Result: Errors combined

    Percent Error: ± 1.08%

    This graph and error estimation were done with 220 readings (samples), each expanded with a 1000-sample normal distribution. That is, any reading taken with this cart setup will be within ±1.08% of the actual value 95% of the time, considering that the sensor itself has a ±1% error as per the datasheet [19].
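    As a minimal sketch of how such a combined estimate can be computed, assuming the per-reading percentage errors are collected in an array (the data below is randomly generated for illustration, not the study's measurements), the mean, maximum, and two-tailed 95% bound follow directly:

    import numpy as np

    # Hypothetical percentage errors for the 220 readings (illustrative only;
    # the real data comes from comparing detected and measured tree centres)
    rng = np.random.default_rng(0)
    percent_errors = rng.normal(loc=0.0, scale=0.55, size=220)

    mean_abs = np.mean(np.abs(percent_errors))
    max_abs = np.max(np.abs(percent_errors))

    # Two-tailed 95% bound: take the 2.5th/97.5th percentiles of the
    # pooled error distribution
    lo, hi = np.percentile(percent_errors, [2.5, 97.5])

    print(f"mean |error|: {mean_abs:.2f}%  max |error|: {max_abs:.2f}%")
    print(f"95% of readings fall within [{lo:.2f}%, {hi:.2f}%]")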

  5. CONCLUSION

It can be concluded from the results that the mapping technique can be effectively applied to horticulture and similar applications, and that a cost-effective solution can be developed using the RPLidar or similar low-cost LIDARs. The system developed can be easily adapted to different applications.

Future improvements would be to motorize the cart to make it autonomous, and to study many other types of horticulture to further validate the cart setup's performance.

REFERENCES

  1. K. Usha and B. Singh, "Potential applications of remote sensing in horticulture: A review," Sci. Hortic., vol. 153, pp. 71-83, Apr. 2013.

  2. S. S. Panda, G. Hoogenboom, and J. O. Paz, "Remote Sensing and Geospatial Technological Applications for Site-specific Management of Fruit and Nut Crops: A Review," Remote Sens., vol. 2, no. 8, pp. 1973-1997, Aug. 2010.

  3. P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 239-256, Feb. 1992.

  4. D. W. Eggert, A. W. Fitzgibbon, and R. B. Fisher, "Simultaneous registration of multiple range views satisfying global consistency constraints for use in reverse engineering," DAI Res. Pap., 1996.

  5. P. Lin, G. Bekey, and K. Abney, "Autonomous military robotics: Risk, ethics, and design," California Polytechnic State Univ., San Luis Obispo, 2008.

  6. P. Sapaty, "Military robotics: latest trends and spatial grasp solutions," Int. J. Adv. Res. Artif. Intell., vol. 4, no. 4, pp. 9-18, 2015.

  7. Z. Shang and Z. Shen, "Real-time 3D Reconstruction on Construction Site using Visual SLAM and UAV," arXiv preprint arXiv:1712.07122, 2017.

  8. G. Zhang, J. H. Lee, J. Lim, and I. H. Suh, "Building a 3-D Line-Based Map Using Stereo SLAM," IEEE Trans. Robot., vol. 31, no. 6, pp. 1364-1377, Dec. 2015.

  9. H. Shimizu, S. A. Lee, and C. Y. She, "High spectral resolution lidar system with atomic blocking filters for measuring atmospheric parameters," Appl. Opt., vol. 22, no. 9, pp. 1373-1381, May 1983.

  10. K. Fredriksson, B. Galle, K. Nyström, and S. Svanberg, "Mobile lidar system for environmental probing," Appl. Opt., vol. 20, no. 24, pp. 4181-4189, Dec. 1981.

  11. R. T. Whitaker, "A Level-Set Approach to 3D Reconstruction from Range Data," Int. J. Comput. Vis., vol. 29, no. 3, pp. 203-231, Sep. 1998.

  12. R. Hardie, M. Vaidyanathan, and P. McManamon, "Spectral band selection and classifier design for a multispectral imaging laser radar," Opt. Eng., vol. 37, 1998.

  13. G. Grisetti, C. Stachniss, and W. Burgard, "Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling," in Proc. 2005 IEEE Int. Conf. on Robotics and Automation (ICRA), 2005, pp. 2432-2437.

  14. G. Grisetti, C. Stachniss, and W. Burgard, "Improved Techniques for Grid Mapping With Rao-Blackwellized Particle Filters," IEEE Trans. Robot., vol. 23, no. 1, pp. 34-46, Feb. 2007.

  15. H. R. Kam, S.-H. Lee, T. Park, and C.-H. Kim, "RViz: a toolkit for real domain data visualization," Telecommun. Syst., vol. 60, no. 2, pp. 337-345, 2015.

  16. M. Quigley et al., "ROS: an open-source Robot Operating System," in ICRA Workshop on Open Source Software, 2009, vol. 3, p. 5.

  17. J. Canny, "A Computational Approach to Edge Detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, no. 6, pp. 679-698, Nov. 1986.

  18. G. Bradski, "The OpenCV Library," Dr. Dobb's J. Softw. Tools, 2000.

  19. RoboPeak Inc., "RPLidar-A1: rev. 1.0" (https://www.robotshop.com/media/files/pdf/rplidar-a1m8-360-degree-laser-scanner-development-kit-datasheet-1.pdf). Shanghai Slamtec Co., Ltd., 04-Jul-2016.
