Modular Robot Buggy Utilizing Lidar for 2D Mapping in Home and Agriculture

DOI: 10.17577/IJERTCONV9IS15026


Mohammed Zohaib Kolkar1, M Lokesha2, Prathish Suvarna3, Muaaz4, Shaikh Mohammed Dayin5

1,5 Department of Mechanical Engineering

2,3,4 Department of Mechatronics Engineering
Mangalore Institute of Technology & Engineering, Mangalore, India

Abstract: Modular robotic systems are autonomous kinematic machines with variable morphology. The aim of this project is to design and develop an affordable robot that addresses major problems such as the low adoption of modern farming equipment and overwatering, and that can map areas such as houses and commercial spaces at small scale, a capability that can later be expanded into a larger role. Our solution is an autonomous robot that maps an area using its LiDAR scanner and then takes care of plants and crops by watering them according to the current moisture level of the soil.

Since it is a modular robot, it is built so that it can do multiple, independent jobs. It can be used for planting seeds, sprinkling water, cutting weeds, or as a personal home or service robot, among many other applications. Its future scope is broad: because the operations are modular and easily changed, multiple attachments can be developed, and AI and ML could be implemented to fully automate the entire process.

Keywords: Robot, LiDAR, Mapping, Cartographer, Android Controller, RVIZ, ROS, Ubuntu, Attachments, Modular, Gripper, Watering

  1. INTRODUCTION

    The application of autonomous robots in agriculture in India is far lower than in developed countries such as the USA. To address this, we decided to create an autonomous robot that can map an area using its LiDAR scanner and then take care of the plants and crops accordingly.

    This map of the farm or crop area is saved on the system itself. For instance, on 3 hectares of farmland, we can mark a rectangular 1 ha section of the map as sugarcane, for example, and another hectare as wheat. We also considered implementing image recognition so that the robot can identify crop types, differentiate weeds (unwanted plants) from crops and remove them, and water each crop according to the moisture level of the soil.

    We have often observed that in farms, and even in parks, gardens, and home-scale plantations, water is not given to plants properly: they are either overwatered or underwatered. Across the globe, almost 30% of irrigation water is wasted through overwatering.

    Our robot can also help solve watering issues in gardens, parks, and home-scale plantations. It can accept different kinds of attachments in the future; we have developed a prototype for house-scale plantation, which can then be expanded to large-scale farming.


    The robot can do multiple jobs depending on the attachment fitted. It can navigate manually or autonomously using an ultrasonic sensor: it creates its own path by detecting obstacles in its way and avoiding them or taking alternate paths. Different parts can be attached to it depending on the jobs it is supposed to do.


    A robot is an electromechanical device which is capable of reacting in some way to its environment and taking autonomous decisions or actions in order to achieve a specific task.

    Taking this as our definition, we started our journey of making a robot that can sense the environment and react to it via remote commands or its coded programming.

    We wanted to create a functional robot using a Raspberry Pi 4 along with a YDLiDAR and a Pi Camera.

    Our robot's main function is to map the terrain using the 2D mapping of a SLAM operation (Simultaneous Localization and Mapping).

    The other functions include using it as a buggy to go into tight areas inaccessible to humans, gripping objects with the help of a mechanical gripper, and sprinkling water for home-scale planting.

    Some background on the Raspberry Pi 4: it is a tiny, affordable computer built around the Broadcom BCM2711 quad-core CPU with 4 GB of RAM, along with wireless LAN and Bluetooth. It runs on very little power, 5 V at roughly 2.5-3 A (about 12.5-15 W).

    The YDLiDAR X2L: LiDAR stands for Light Detection and Ranging. This LiDAR has a range of 8 m with a ranging frequency of 3000 Hz and a scanning frequency of 7 Hz.

    LiDAR determines the range to a target by measuring the time of flight (ToF) of a laser pulse reflected off it.
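
    Since the pulse travels to the target and back, the range is half the round-trip distance. A minimal sketch of this arithmetic in Python (the constant and function name are ours, not from the YDLiDAR SDK):

        SPEED_OF_LIGHT = 299_792_458.0  # m/s

        def tof_range_m(round_trip_s):
            """Range = c * t / 2, since the pulse covers the distance twice."""
            return SPEED_OF_LIGHT * round_trip_s / 2

        # A target at the X2L's 8 m maximum range returns in about 53 ns:
        print(tof_range_m(53.4e-9))  # ~8.0 m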

    Modular robot systems are composed of repeated robot elements that connect together to form larger robotic structures and can change the connective arrangement of their own modules to form different structures with different capabilities. Modular robots would be able to enter unknown environments, assess their surroundings, and self-reconfigure to take on a form suitable to the task and environment at hand.

    Modular robots are flexible structures that offer versatility and many configuration options for carrying out different types of movement. Modularity is now present in numerous areas of industry and robotics, and modular systems offer benefits such as versatility, robustness, and low-cost manufacturing compared with fixed-parameter conventional designs. The tasks range from simple movements such as spinning or moving forward to complex movements such as walking or crawling. The scope and movements of the robotic structure depend on the shape and number of degrees of freedom of each module, since these variables increase the processing capacity required to synchronize the articulations of the modules.

    The development of modular robots has been a prominent direction of robot research in recent years. The core value of modular robots lies in their variety of structural forms, which enables many types of movement and adaptation to different environments. Many researchers have developed rigid modular robots, including PolyBot, CONRO, M-TRAN, Micro-Unit, SuperBot, YaMoR, UBot, and Seremo. These modular robots alter their structure by connecting different faces of their modules; rigid modular robots have finite degrees of freedom.

  2. LITERATURE REVIEW

    1. K Chailoet and E Pengwang – Assembly of Modular Robot for Cleaning Various Lengths of Solar Panels, IOP Conference Series: Material Science and Engineering, 2019

      Components include a brush, wheels, a support wheel, motors, an aluminium pipe frame, and photoelectric sensors. The authors implemented edge-fall detection while cleaning a surface; hence, two photoelectric sensors were installed at the front and rear, near the middle supporting wheel. A brush compression system adjusts the distance between the brush and the surface.

    2. Nils Rottmann, Nico Studt, Floris Ernst and Elmar Rueckert- ROS-Mobile: An Android application for the Robot Operating System, Institute of Robotics and Cognitive Systems

      A ROS application for Android devices used to remotely control and monitor mobile robotic systems. The app is based on the MVVM (Model-View-ViewModel) architecture.

    3. Redmond R. Shamshiri, Cornelia Weltzien, Ibrahim A Hameed, Ian James Yule – Research and development in agricultural robotics: A perspective of digital farming, July 2018 International Journal of Agriculture and Biological Engineering 11(4):1-14

      Digital farming is the practice of applying modern technologies such as sensors, robotics, and data analysis to shift from tedious operations to continuously automated processes. This paper reviews some of the latest achievements in agricultural robotics, specifically those used for autonomous weed control, field scouting, and harvesting. The concepts of multi-robots, human-robot collaboration, and environment reconstruction from aerial images and ground-based sensors for the creation of virtual farms are highlighted as gateways to digital farming.

    4. Kartika Wisnudhanti, Feri Candra – Image Classification of Pandawa Figures Using Convolutional Neural Network on Raspberry Pi 4, Journal of Physics: Conference Series, Volume 1655, Pages 2-7, September 2020

      A CNN is used to process two-dimensional data. Classification accuracy was 97.88% on the training data and 96.5% on the test data. Methodologies used include the ReLU activation function, max pooling, the Adam optimization algorithm, and a softmax classifier.

    5. C Aakash and V Manoj Kumar – Path planning with the help of Lidar for Slam Application, IOP Conference Series: Materials Science and Engineering, Volume 912, Pages 1-8, 2020

      Path planning is the task of moving a robot from a source to a goal. The paper uses the Green Fire, Dijkstra, and A* (A-star) algorithms for path planning, with environment mapping in RVIZ (Robot Visualizer).

      Figure 2.1 RVIZ Window

    6. Anh Nguyen, Quang D. Tran – Autonomous Navigation with Mobile Robots using Deep Learning and the Robot Operating System, arXiv:2012.02417v2 [cs.RO], Pages 18, 7 December 2020

      Data is collected using ROS and Gazebo; a deep navigation model takes RGB images as input; the trained model is then deployed on the robot.

    7. Shahrizal Saat, WN Abd Rashid, MZM Tumari and MS Saealal – HectorSLAM 2D Mapping for Simultaneous Localization and Mapping (SLAM), Journal of Physics: Conference Series, Volume 1529, Pages 1-9, November 2019

      A SLAM implementation that provides localization estimates in an environment; SLAM is used to build a map of an unknown environment. HectorSLAM constructs good-quality 2D maps.

    8. Yu Zhang, Tianjio Zheng, Jizhuang Fan, Ge Li, Yanhe Zhu, and Jie Zhao – Non-linear Modeling and Docking Test of a Soft Modular Robot, IEEE, Pages 1-10, December 2018

      Describes the manufacturing process of the modules, which are made of silicone, a hyper-elastic material that can vulcanize at room temperature. The modules' docking motions are controlled by the main controller, an air supply, and solenoid valves; the main controller performs ON-OFF control of the solenoid valves to pass or block the high-pressure air forced into the modules. The robot uses magnetism to achieve infinite degrees of freedom. Three experiments were conducted: module plane bending, fixed-point docking, and two-module docking.

    9. Jonathan Daudelin, Gangyuan Jing, Tarik Tosun, Mark Yim, Hadas Kress-Gazit and Mark Campbell – An Integrated System for Perception-Driven Autonomy with Modular Robots, arXiv:1709.05435v2 [cs.RO] 13 Dec 2018, Pages 3-10.

      A demonstration of an autonomous, perception-informed modular robot system that can adapt to unknown environments. The system hardware consists of a set of robot modules and a sensor module containing multiple cameras and a small computer for collecting data from the environment. The paper describes how the robot autonomously completes high-level tasks by reactively reconfiguring in response to its perceived environment and the task requirements. The demonstrations showcase a range of ways the modular robot can interact with environments and objects: moving over flat ground, fitting into tight spaces, reaching up high, climbing over rough terrain, and manipulating objects.

    10. Benoit Piranda, Pawel Chodkiewicz, Pawel Holobut, Stephane Bordas, Julien Bourgeois and Jakub Lengiewicz, Distributed prediction of unsafe reconfiguration scenarios of modular robotic Programmable Matter, arXiv:2006.11071v1 [cs.RO] 19 June 2020, Pages 1-16.

    A framework for predicting whether a planned reconfiguration step of a modular robot will mechanically overload the structure, causing it to break or lose stability under its own weight. The algorithm is based on a distributed iterative solution of the mechanical equilibrium equations derived from a simplified model of the robot; the model treats inter-modular connections as beams and assumes non-sliding unilateral contact between the modules and the ground. The algorithm can be used to assess the mechanical feasibility of a reconfiguration step to be made by a self-reconfigurable robot.

  3. METHODOLOGY

    Here, we use the YDLiDAR to continuously scan the environment and create a 2D map. The map is displayed in RVIZ on a computer connected to the Raspberry Pi 4 over Wi-Fi via the Secure Shell (SSH) protocol.

    The movement of the robot can be controlled either manually or automatically using the ultrasonic sensor. Manual control works by programming the Arduino motor board to accept commands from an Android phone; a single button press on the phone switches the motor board into self-automated mode.

    When in automatic mode, the system continuously measures the distance ahead with the ultrasonic sensor. When it detects that a collision is imminent, the program automatically swings the sensor to measure the distances to the right and to the left. The two distances are compared and the robot turns toward the larger one (for example, if the left distance is greater than the right, it turns left, and vice versa). If both distances are equal, it turns around and makes a U-turn.
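
    A minimal sketch of this decision logic, written in Python for clarity (the actual implementation is Arduino firmware; the 30 cm threshold is the one given in the Working section, and the function name is ours):

        COLLISION_CM = 30  # stop threshold used by the robot

        def decide(front_cm, left_cm, right_cm):
            """Choose the next manoeuvre from the three ultrasonic readings."""
            if front_cm > COLLISION_CM:
                return 'forward'      # path ahead is clear
            if left_cm > right_cm:
                return 'turn_left'    # more clearance on the left
            if right_cm > left_cm:
                return 'turn_right'   # more clearance on the right
            return 'u_turn'           # equal on both sides: turn around

        print(decide(100, 0, 0))   # forward
        print(decide(20, 50, 30))  # turn_left
        print(decide(20, 40, 40))  # u_turn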

    Whether the robot is running under manual or automatic control, the YDLiDAR continuously scans the environment and builds a 2D map, which is updated in real time, with no noticeable delay, on the computer connected to the Raspberry Pi over Wi-Fi. It uses Google Cartographer, a real-time SLAM library, to map the room or area, estimating the layout even when obstacles block the LiDAR's laser from reaching a wall or path.
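
    To illustrate the data flow, the sketch below subscribes to the laser scans that Cartographer consumes. It assumes the YDLiDAR ROS driver publishes sensor_msgs/LaserScan on the conventional /scan topic; the node name is ours:

        import rospy
        from sensor_msgs.msg import LaserScan

        def on_scan(msg):
            # msg.ranges holds one distance (in metres) per laser beam
            valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
            if valid:
                rospy.loginfo("nearest obstacle: %.2f m", min(valid))

        rospy.init_node('scan_monitor')
        rospy.Subscriber('/scan', LaserScan, on_scan)
        rospy.spin()  # Cartographer builds the map from this same topic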

    After we create a 2D map of the place, we can divide the map into areas as needed. For example, a farmer may scan the farmland and mark its different areas according to the types of crops grown. The map is always stored on the system and the computer, so there is no need to repeat the process again and again.
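
    One possible way to mark regions offline, assuming the map was exported as a PGM occupancy grid with the ROS map_saver tool (the file name and region bounds here are illustrative):

        import numpy as np
        from PIL import Image

        # In map_server PGM maps, ~254 = free, 0 = occupied, 205 = unknown.
        grid = np.array(Image.open('farm_map.pgm'))

        # Rectangular crop regions given as (row, column) pixel bounds.
        regions = {'sugarcane': (slice(50, 120), slice(40, 200)),
                   'wheat':     (slice(130, 200), slice(40, 200))}

        labels = np.zeros(grid.shape, dtype=np.uint8)
        for idx, (crop, area) in enumerate(regions.items(), start=1):
            labels[area] = idx

        np.save('farm_regions.npy', labels)  # reuse later without re-mapping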

    Then comes another aspect of the robot: its modular body, which, as the name suggests, can take different types of module attachment and work according to the job given.

    The next part of the device is the attachments. Fitted to the robot, they give it the ability to work in fields: watering crops according to the soil moisture level, removing weeds, or picking and placing. The robot can also be used commercially and at home for watering and tending small gardens or house-grown plants, and for pick and place.

    All mapping and the SLAM operation (Simultaneous Localization and Mapping) are done on the Raspberry Pi system.

    The main subsystems are the vision system, the Raspberry Pi 4, and the motor drivers.

    The vision system consists of the Pi Camera, which sends a live feed of what the robot is seeing to a browser on any computer or phone, together with the LiDAR. This allows the operator to control the system remotely and see what the robot is doing or seeing.
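
    A minimal sketch of such a browser stream, assuming the legacy picamera library and the Flask web framework (neither the port nor the route comes from our codebase):

        import io
        from flask import Flask, Response
        import picamera

        app = Flask(__name__)

        def frames():
            with picamera.PiCamera(resolution='640x480', framerate=24) as cam:
                buf = io.BytesIO()
                for _ in cam.capture_continuous(buf, format='jpeg',
                                                use_video_port=True):
                    buf.seek(0)
                    yield (b'--frame\r\nContent-Type: image/jpeg\r\n\r\n'
                           + buf.read() + b'\r\n')
                    buf.seek(0)
                    buf.truncate()

        @app.route('/')
        def stream():
            # MJPEG: the browser replaces each frame as it arrives
            return Response(frames(),
                            mimetype='multipart/x-mixed-replace; boundary=frame')

        if __name__ == '__main__':
            app.run(host='0.0.0.0', port=8000)  # open http://<pi-address>:8000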

    The Raspberry Pi 4, along with the Arduino Uno, also handles the functions to be performed by the attached module via its built-in programming.

    The motor driver block drives six motors, which receive their signals only from the Arduino Uno. The core body of the robot is made of high-grade aluminium, with the side panels riveted onto it. Everything else, from the attachment holders and the attachments to the LiDAR and camera holders, was initially planned to be 3D printed for greater accuracy and ease of production, but due to Covid we improvised and built these parts from scrap materials found in our own homes. The Raspberry Pi 4 serves as the brain of the whole operation, and the LiDAR is integrated with it using ROS Melodic.

    We use the Robot Operating System (ROS) as our robotics middleware for collecting sensor data, robot control, planning, and visualization and simulation through RVIZ, which is part of ROS.

    A high-definition camera, together with the proximity sensor and the LiDAR, makes up the vision system of the robot.

    A 10,000 mAh battery powers the Raspberry Pi 4 and all systems except the motors, while a separate 11.1 V Li-ion battery drives the motors. The attachment modules are fitted with SG90 servo motors so they can rotate and move along the required path and direction.

  4. EXPERIMENTATION

    Figure 4.1 shows a high-level view of the entire setup: the YDLIDAR is connected to the Raspberry Pi, on which Cartographer is installed and running.

    Figure 4.1 Raspberry Pi/ Laptop Master-Slave Connection

    The laptop is used only for visualization. All data collection, processing of the LiDAR sensor data, and other operations take place on the Raspberry Pi; the results are then transmitted to the laptop in graphical form for visualization of the 2D map.
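
    This split works through ROS networking: the Pi runs the ROS master, and the laptop's tools are pointed at it. A hedged sketch of the laptop-side setup (the hostname and address are illustrative; the same is usually done with shell exports before launching RVIZ):

        import os
        import subprocess

        # Point this machine's ROS tools at the master running on the Pi.
        os.environ['ROS_MASTER_URI'] = 'http://raspberrypi.local:11311'
        os.environ['ROS_IP'] = '192.168.1.20'  # this laptop's LAN address

        subprocess.run(['rviz'])  # visualization only; processing stays on the Pi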

    Figure 4.2 shows the 2D mapping of an area done by the LiDAR.

    Figure 4.2 2D Mapping in RVIZ

    The green dotted lines visible in the figure are the walls being tracked by the LiDAR.

    Figure 4.3 Final Product

    Figure 4.3 shows the final version of our model. The model has a moisture sensor attached to it along with water storage and a pipe for watering.

    Figure 4.4 – Robot

    The robot can now be used to measure the moisture level of the soil in which the plants are growing, and it waters the plants based on that level.

  5. WORKING

    Six BO motors (230 rpm max) and two servos are connected to the Arduino for control signals and are powered by the Li-ion battery. The Bluetooth module is connected to the Arduino, and the RC control app is installed on an Android phone. The Arduino is programmed through its IDE and powered by the 10,000 mAh power bank. The Raspberry Pi feeds a stream from the Pi Camera over Wi-Fi to any browser. The Raspberry Pi and the LiDAR, powered by the 10,000 mAh battery, continuously scan the environment and present a 2D map of the floor area on the laptop.

    An ultrasonic sensor is mounted on an SG90 servo motor to act as the turn sensor; both are powered by the Li-ion battery. The robot reads the distance in front and then decides whether to turn left or right; if the distance is equal on both sides, it turns around (a U-turn). It continuously scans the distance ahead and stops when the distance to a collision reaches 30 cm.

    For gripping, claws are attached to an SG90 servo motor. The servo is controlled by the Android app, which instructs it to open and close the claws. Once an object is between the claws, closing them holds it in place; the gripper can pick up objects up to 11 cm wide. The object is carried with the robot and can be dropped by opening the claws to release the grip.
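
    The SG90 is a standard hobby servo driven by a 50 Hz PWM signal, so opening and closing the claws amounts to commanding two duty cycles. A small sketch of the angle-to-duty arithmetic (the claw angles are illustrative; on our robot the signal is generated by the Arduino):

        def sg90_duty(angle_deg):
            """Map 0-180 degrees to the SG90's ~2.5-12.5% duty cycle at 50 Hz."""
            return 2.5 + (angle_deg / 180.0) * 10.0

        OPEN_DEG, CLOSED_DEG = 90, 10   # illustrative claw positions

        print(sg90_duty(OPEN_DEG))      # 7.5  -> claw open
        print(sg90_duty(CLOSED_DEG))    # ~3.1 -> claw closed on the object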

    For watering, the moisture sensor and the tubing that carries water from the pump are mounted on a wooden stick. The stick is attached to an SG90 servo motor, controlled from the Android phone, which can raise and lower the sensor.
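
    A minimal sketch of the moisture-triggered watering loop; the threshold and the read_moisture()/run_pump() callables are illustrative stand-ins for the sensor and pump wiring described above:

        import time

        DRY_THRESHOLD = 0.30  # fraction of full scale; tune per soil and crop

        def water_if_dry(read_moisture, run_pump, burst_s=3):
            """Water in a short burst only when the soil reads dry."""
            level = read_moisture()    # 0.0 (bone dry) .. 1.0 (saturated)
            if level < DRY_THRESHOLD:
                run_pump(burst_s)
                time.sleep(burst_s)    # let the water soak in before re-reading
            return level

        # Demo with fake hardware:
        water_if_dry(lambda: 0.18, lambda s: print("pumping for", s, "s"))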

  6. RESULTS AND APPLICATIONS

We have a functional 2D mapping robot that can take navigation commands from an Android phone or a PC. The gripper attachment can hold small objects of about 10 cm width.

A watering attachment waters house-grown plants in the backyard and tests soil moisture.

It achieves good accuracy in creating a 2D model of the environment using the SLAM operation.


This has many functions and can be used in the following:

  1. In construction sites, to map an area.

  2. In construction sites, to pick up small debris.

  3. In residential sites to map the floor area for real estate.

  4. Agricultural use: water sprinkler, pesticide sprinkler.

  5. Home and commercial use: pick-and-place and watering plants with the help of the gripper, plus mapping of the area by LiDAR. It operates on battery power, reduces labor, and performs the needed jobs.

  6. As a fun and innovative way to do small human chores.

    ACKNOWLEDGEMENT

    We would like to thank Mangalore Institute of Technology & Engineering (MITE), Moodabidri for extending their help and providing support in our endeavors. Our deepest gratitude to all the teaching and non-teaching staff for their support in completion of this research paper.

    REFERENCES

    For Published Papers:

    [1.] K Chailoet and E Pengwang – Assembly of Modular Robot for Cleaning Various Lengths of Solar Panels, IOP Conference Series: Materials Science and Engineering, 2019

    [2.] Nils Rottmann, Nico Studt, Floris Ernst and Elmar Rueckert – ROS-Mobile: An Android application for the Robot Operating System, Institute of Robotics and Cognitive Systems

    [3.] C Aakash and V Manoj Kumar – Path planning with the help of Lidar for Slam Application, IOP Conference Series: Materials Science and Engineering, Volume 912, Pages 1-8, 2020

    [4.] Kartika Wisnudhanti, Feri Candra – Image Classification of Pandawa Figures Using Convolutional Neural Network on Raspberry Pi 4, Journal of Physics: Conference Series, Volume 1655, Pages 2-7, September 2020

    [5.] Anh Nguyen, Quang D. Tran – Autonomous Navigation with Mobile Robots using Deep Learning and the Robot Operating System, arXiv:2012.02417v2 [cs.RO], Pages 18, 7 December 2020

    [6.] Shahrizal Saat, WN Abd Rashid, MZM Tumari and MS Saealal – HectorSLAM 2D Mapping for Simultaneous Localization and Mapping (SLAM), Journal of Physics: Conference Series, Volume 1529, Pages 1-9, November 2019

    [7.] Yu Zhang, Tianjio Zheng, Jizhuang Fan, Ge Li, Yanhe Zhu, and Jie Zhao, Non-linear Modeling and Docking Test of a Soft Modular Robot, IEEE, Pages 1-10, December 2018

    [8.] Jonathan Daudelin, Gangyuan Jing, Tarik Tosun, Mark Yim, Hadas Kress-Gazit and Mark Campbell, An Integrated System for Perception-Driven Autonomy with Modular Robots, arXiv:1709.05435v2 [cs.RO] 13 Dec 2018, Pages 3-10

    [9.] Benoit Piranda, Pawel Chodkiewicz, Pawel Holobut, Stephane Bordas, Julien Bourgeois and Jakub Lengiewicz, Distributed prediction of unsafe reconfiguration scenarios of modular robotic Programmable Matter, arXiv:2006.11071v1 [cs.RO] 19 June 2020, Pages 1-16.

    [10.] Risto Kojcev, Nora Etxezarreta, Alejandro Hernandez and Victor Mayoral, Hierarchical Learning for Modular Robots, arXiv: 1802.04132v1 [cs.RO] 12 Feb 2018, Pages 1-5.

    [11.] Luis Fernando Pedraza, Henry Alberto Hernandez and Cesar Augusto Hernandez, Artificial Neural Network Controller for a Modular Robot Using a Software-Defined Radio Communication System, Journal of Electronics, Pages 1-14, 2nd October 2020.

    [12.] Mark Yim, D.G. Duff and K.D. Roufas, PolyBot: A Modular Reconfigurable Robot, Conference: Robotics and Automation, IEEE International Conference, Volume 1, Pages 1-8, February 2000.

    [13.] A. Behar, Andres Castano and Peter Will, The CONRO Modules for reconfigurable robots, IEEE/ASME Transactions on Mechatronics, IEEE Xplore, Pages 1-8, January 2003.

    [14.] Satoshi Murata, Eiichi Yoshida, Akiya Kamimura and Haruhisa Kurokawa, M-TRAN: Self-Reconfigurable Modular Robotic System, IEEE/ASME Transactions on Mechatronics, Pages 1-12, January 2003.

    [15.] Eiichi Yoshida, Satoshi Murata, Shigeru Kokaji and Kohji Tomita, Micro-Sized Self-Reconfigurable Modular Robot using Shape Memory Alloy, IEEE/ASME Transactions on Mechatronics, Pages 2-10, January 2001.

    [16.] Behnam Salemi, Mark Moll and Wei-min Shen, SuperBot: A Deployable, Multi-Functional and Modular Self-Reconfigurable Robotic System, IEEE/RSJ International Conference on Intelligent Robots and Systems, Pages 1-7, October 2006.

    [17.] Rico Mockel, Cyril Jaquier, Kevin Drapel and Elmar Dittrich, YaMoR and Bluemove: An Autonomous Modular Robot with Bluetooth Interface for Exploring Adaptive Locomotion, Conference: Climbing and Walking Robots – Proceedings of the 8th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, CLAWAR 2005, Pages 1-9, September 2005.

    [18.] Jie Zhao, Cui Xindan, Yanhe Zhu and Shufeng Tang, UBot: A New Reconfigurable Modular Robotic System with Multimode Locomotion Ability, IEEE/Journal of Industrial Robotics, Pages 1-14, March 2012.

    Online Resources:

    • Robotkits India – Robokits – https://robokits.co.in/raspberry-pi – Accessed in August 2020

    • Caroline Dunn – How to Train your Raspberry Pi for Facial Recognition – https://www.tomshardware.com/how-to/raspberry-pi-facial-recognition – Accessed in September 2020

    • Google Brain Team – TensorFlow Machine Learning – https://www.tensorflow.org – Accessed in September 2020

    • Redmond R. Shamshiri, Cornelia Weltzien, Ibrahim A Hameed, Ian James Yule – Research and development in agricultural robotics: A perspective of digital farming – https://www.researchgate.net/publication/326929441_Research_and_development_in_agricultural_robotics_A_perspective_of_digital_farming – Accessed in Nov 2020

    • Sami Salama Hussen Hajjaj, Ksm Sahari – Review of agriculture robotics: Practicality and feasibility – https://www.researchgate.net/publication/320366473_Review_of_agriculture_robotics_Practicality_and_feasibility – Accessed in October 2020

    • Guido van Rossum – Python Programming – https://www.python.org – Accessed in August 2020

    • Raspberry Pi Foundation – Raspberry Pi – https://www.raspberrypi.org – Accessed in August 2020

    • Jacob Padley – Building a Facial Recognition Robot in Less Than 2 Weeks – https://blog.dataiku.com/building-a-facial-recognition-robot-in-less-than-2-weeks – Accessed in August 2020
