A 3 – DOF Robot Arm for Drawing Application

DOI : 10.17577/IJERTCONV4IS26018


Pratik Baid, Manoj Kumar V
Department of Mechanical Engineering, SRM University, Kattankulathur, Chennai, India 603202.

Abstract: This work presents the modelling and fabrication of a robot arm system dedicated solely to drawing. The arm configuration used here is a three-joint planar arm, with one joint assigned to lifting the pen off, or bringing it into contact with, the drawing paper. The basic prototype of the drawing robot was fabricated from wood-fibre material, and a combination of an Arduino UNO board and servo motors is used to control the link motions. The input image file is provided from a laptop computer and is processed using MATLAB software.

Keywords: Robot arm; SCARA; Canny edge; edge detection; calibration; link length; joint angle.

  1. INTRODUCTION

    There has previously been a great deal of research into recreational uses of robots. Recent work has devoted significant effort to developing robots that can match human behaviour in high-level tasks requiring the integration of sensing, physical motion and intelligence, making robot behaviour more human-like. The drawing robot is one such example, and several demonstrations of drawing robots have appeared in the past few years.

    A drawing robot is a robot arm capable of drawing an image provided to it via a computer system, making use of image processing techniques. It is dedicated solely to the drawing activity: it can be defined as a robot that reads the pixel values of an image provided to it and then draws the same image. It consists of mechanical links driven by servomotors, which may be controlled using microcontrollers. The drawing robot must be capable of producing a fine portrait of the input image, with good pixel readings and long, continuous strokes, so that the result closely resembles the given image. Drawing requires human-like skills that remain a challenging task in the field of robotics.

    Drawing robots can use serial or parallel manipulators with a sufficient number of degrees of freedom in accordance with the workspace and the robot's movement within it. Degrees of freedom refer to the number of possible free movements available to the robot. In practice, a SCARA manipulator with three degrees of freedom is preferred on the basis of feasibility and operating cost.

    The SCARA [1] acronym stands for Selective Compliance Assembly Robot Arm or Selective Compliance Articulated Robot Arm. SCARAs can be more expensive than comparable Cartesian systems, and the controlling software requires inverse kinematics for linearly interpolated moves. This software typically comes with the SCARA, though, and is usually transparent to the end-user. Most SCARA robots are based on serial architectures, which means that the first motor must carry all the other motors. There also exists a so-called double-arm SCARA robot architecture, in which two of the motors are fixed at the base.

    The working of the drawing robot involves two major phases. The first phase involves extracting the required features from the image to be drawn, supplied via a computer system, and converting them into appropriate code using MATLAB software. In the second phase, the MATLAB output is transferred to the robotic arm as coordinates, which the robot draws or traces on the drawing surface. The drawing robot requires a pre-calibration step before it can start drawing, and if the drawing board or surface is moved or disturbed, a fresh calibration is required. Moreover, drawing cannot be done on arbitrarily shaped surfaces or on surfaces that are difficult to calibrate. Thus, the available techniques work only for flat, pre-calibrated surfaces. To address this problem, the robot can be equipped with force-sensing capability.

  2. BACKGROUND AND RELATED WORKS

    Robot Paul [2] is a robotic installation that can produce observational face drawings of people. Paul is a naive drawer: it has no high-level knowledge of the structures constituting the human face, such as the mouth, nose and eyes, nor can it learn expertise from experience as a human can. However, Paul was able to draw using the equivalent of an artist's stylistic signature, based on a number of processes mimicking drawing skills and technique, which together formed a drawing cycle.

    A demonstration of a robot equipped with force-sensing capability that can draw portraits on a non-calibrated, arbitrarily shaped surface was given at BARC, Mumbai [3]. This robot was able to draw on a non-calibrated surface by orienting its drawing pen normal to the drawing surface, the pen's orientation being computed from the sensed forces. In this way, the robot is also able to draw portraits on arbitrarily shaped surfaces.

    Many methodologies in this field have been implemented. An intelligent robot that recognizes and assembles three-dimensional objects by means of vidicon cameras, an articulated mechanical hand and a digital computer was reported in [4]. Its problem-solving functions included three essential parts: the recognition of macro-instructions from a human master, the recognition of the objects to be handled, and the decision making for executing the necessary tasks.

    A theory that safety can be improved by guaranteeing that the robot will never exhibit unstable behaviour was published in [5]. During human-robot interaction, the resulting cooperative motion should be truly intuitive and should not restrict the human's performance in any way. For this purpose, the authors designed a new variable admittance control law that guarantees the stability of the robot during constrained motion and also provides very intuitive human interaction.

    Robot-Draw [6] combined recently developed Internet-based programming tools to generate three-dimensional virtual models of robot manipulators from a D-H parameter table. Robot-Draw combines hypertext markup language (HTML), practical extraction and report language (PERL) and virtual reality modeling language (VRML).

    Betty [7], a portrait-drawing humanoid robot, solved line-drawing problems using a modified Theta graph, called the Furthest Neighbour Theta graph, which can be computed efficiently. The results showed that the number of edges in the resulting drawing is significantly reduced without degrading the detail of the final output image.

  3. METHODOLOGY

    An apt methodology is followed to select a suitable mechanism, kinematic concept and design for the drawing robot. First, the drawing robot design is initialized, suitable link lengths are selected and the joint angles are calculated according to the workspace limitations. The drawing robot is modelled using SOLIDWORKS software. The control unit is formed by an Arduino UNO board and a computer system running software such as Processing, MATLAB and the Arduino IDE. The drawing robot is programmed using this software, and various experiments are performed to test the robot at various proximities and to evaluate its drawing performance. Fig. 1 depicts the methodology.

  4. KINEMATIC ANALYSIS

    The D-H table for the drawing robot of RRP configuration is given below.

    D-H Parameters

    Link      a       α       d       θ
    1         x1      +90°    x2      θ1
    2         0       -90°    0       θ2
    3         0       0       d3      0

    The overall transformation matrix for forward kinematics can be represented as

    T = 0T1 × 1T2 × 2T3

        | C1C2    -S1    -C1S2    -C1S2·d3 + x1·C1 |
    T = | S1C2     C1    -S1S2    -S1S2·d3 + x1·S1 |
        | S2       0      C2       C2·d3 - x2      |
        | 0        0      0        1               |

    where Ci = cos θi, Si = sin θi, x1 and x2 are the link lengths, and θ1, θ2 and d3 are the joint variables.

    Let θ1 = θ2 = 0 and d3 = 0.05. Then,

        | 1    0    0    250  |
    T = | 0    1    0    0    |
        | 0    0    1    -130 |
        | 0    0    0    1    |

        | -C1S2·d3 + x1·C1 |   |  250    |
    D = | -S1S2·d3 + x1·S1 | = |  0      |
        | C2·d3 - x2       |   | -149.95 |

    where D is the translation vector.
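    As a numerical cross-check of the matrix above, the forward kinematics can be evaluated in MATLAB. The following is a minimal sketch; the function name is illustrative, and the link lengths x1 = 250 mm and x2 = 150 mm are taken from the prototype dimensions given in Section 5.

```matlab
% Forward kinematics of the RRP drawing arm, following the closed-form
% matrix given above (Ci = cos(theta_i), Si = sin(theta_i)).
% A minimal sketch; the function name and argument order are illustrative.
function T = fk_rrp(theta1, theta2, d3, x1, x2)
    C1 = cosd(theta1); S1 = sind(theta1);   % joint angles in degrees
    C2 = cosd(theta2); S2 = sind(theta2);
    T = [ C1*C2, -S1, -C1*S2, -C1*S2*d3 + x1*C1;
          S1*C2,  C1, -S1*S2, -S1*S2*d3 + x1*S1;
          S2,      0,   C2,    C2*d3 - x2;
          0,       0,    0,    1 ];
end
```

    Calling fk_rrp(0, 0, 0.05, 250, 150) reproduces the translation vector [250; 0; -149.95] quoted above (the paper mixes millimetre link lengths with d3 = 0.05, which is kept as-is here).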

    For inverse kinematics,

        | C1C2    -S1    -C1S2    -C1S2·d3 + x1·C1 |   | r11   r12   r13   r14 |
    T = | S1C2     C1    -S1S2    -S1S2·d3 + x1·S1 | = | r21   r22   r23   r24 |
        | S2       0      C2       C2·d3 - x2      |   | r31   r32   r33   r34 |
        | 0        0      0        1               |   | r41   r42   r43   r44 |

    θ1 = Atan2( -r24 , -r14 )

    θ2 = Atan2( ±√(r14² + r24²) , -r34 )

    d3 = √(r14² + r24² + r34²)

    Fig. 1. Methodology flow chart

    Let -100° < θ1 < +100°, -30° < θ2 < +30° and 0.05 m < d3 < 0.5 m, and consider the end-effector pose

        | 0.354     0.866    0.354     0.106  |
    T = | -0.612    0.500    -0.612    -0.184 |
        | 0.307     0        0.308     0.212  |
        | 0         0        0         1      |

    On substituting the appropriate values, we get

    θ1 = 60°, θ2 = 45°, d3 = 0.30 m
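    The inverse-kinematics relations above translate directly into MATLAB. The sketch below implements exactly those Atan2 expressions; the function name, the degree convention and the branch-selection argument (for the ± sign in θ2) are assumptions.

```matlab
% Inverse kinematics of the RRP arm from a homogeneous transform T,
% using the Atan2 relations given above. 'sgn' selects the +/- branch.
% A sketch only; names and the degree convention are assumptions.
function [theta1, theta2, d3] = ik_rrp(T, sgn)
    if nargin < 2, sgn = +1; end
    r14 = T(1,4); r24 = T(2,4); r34 = T(3,4);
    theta1 = atan2d(-r24, -r14);                       % theta1 = Atan2(-r24, -r14)
    theta2 = atan2d(sgn*sqrt(r14^2 + r24^2), -r34);    % theta2 = Atan2(+/-sqrt(r14^2 + r24^2), -r34)
    d3     = sqrt(r14^2 + r24^2 + r34^2);              % d3 = sqrt(r14^2 + r24^2 + r34^2)
end
```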

  5. SYSTEM DESCRIPTION

    1. Installation

      The drawing robot setup consists of a left-handed robotic arm with a tool holder as end-effector, mounted on a wooden table, together with a laptop computer. A human assistant is always present at the installation; their role is to change the paper and to signal to the drawing robot that an image needs to be drawn. The human operator also controls the reset button and can initiate the pre-calibration process if required.

    2. Hardware

      The hardware of the drawing robot consists of three links in RRP configuration, fabricated from wood-fibre material. This material is lightweight and has adequate stress-bearing capacity. The robot links are attached to servomotors, which are connected to an Arduino UNO board through which the motion of the links is controlled. The CAD model of the prototype was created in SolidWorks 2013 and is shown in Fig. 2. The lengths of the three links are 250 mm, 150 mm and 40 mm respectively, and a uniform breadth of 40 mm and thickness of 15 mm is maintained for all links. The physical model of the robot is shown in Fig. 3.

      Fig. 2. CAD model of the prototype

      Fig. 3. Physical model of the prototype

    3. Robotic control and software architecture

    The control unit is formed by an Arduino UNO board and a laptop computer running MATLAB and the Arduino IDE. The Arduino IDE is used to control the servomotors that drive the robotic links, and MATLAB is used to process the image using the Canny edge detection technique.
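    The paper does not give the communication details between MATLAB and the Arduino board. The sketch below only illustrates one common arrangement, in which joint values computed in MATLAB are streamed to the board over a serial port; the port name, baud rate and message format are assumptions.

```matlab
% Illustrative only: stream joint values from MATLAB to the Arduino UNO
% over a serial link. The port name, baud rate and the comma-separated
% message format are assumptions, not details taken from the paper.
jointTrajectory = [0 0 50; 10 5 50; 20 10 50];   % example N x 3 rows of [theta1 theta2 d3]
s = serial('COM3', 'BaudRate', 9600);            % older MATLAB serial interface
fopen(s);
for k = 1:size(jointTrajectory, 1)
    fprintf(s, '%d,%d,%d\n', round(jointTrajectory(k, :)));
    pause(0.02);                                 % crude pacing so the servos can follow
end
fclose(s);
delete(s);
```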

  6. IMAGE PROCESSING

    The input image is provided from a laptop computer and processed in MATLAB using the Canny edge detection algorithm [8], [9]. The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986 and remains a classical and robust method for edge detection in gray-scale images. Its two significant features are non-maximum suppression (NMS) and double thresholding of the gradient image.

    1. Pre-Processing

      The Canny edge detection algorithm is applicable only to gray-scale images, so a control block to convert images of other formats to gray scale must be included. MATLAB 2013 has built-in commands to convert images from one format to another, and these commands are used to form the image-conversion block.
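      A minimal sketch of this conversion block is given below; the input file name is a placeholder, and rgb2gray is the built-in MATLAB command used for the colour-to-grayscale conversion.

```matlab
% Convert the input image to gray scale before Canny edge detection.
% 'input.jpg' is a placeholder file name.
img = imread('input.jpg');
if size(img, 3) == 3          % colour image: collapse RGB to intensity
    gray = rgb2gray(img);
else
    gray = img;               % already gray scale
end
```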

    2. Edge detection

      Whenever we draw an image, we try to find the important edges or features that need to be drawn. Edge detection is a well-known image processing technique, and several edge detection algorithms are available. We tried various algorithms and found that the Canny edge detection algorithm was best suited for our task: the Canny detector finds real, well-localized edges without being strongly affected by noise. The output of the edge detection algorithm in MATLAB is shown in Fig. 4.

      Fig. 4. Output of canny detection algorithm
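      The edge-detection step can be expressed with MATLAB's built-in edge function, as sketched below; the threshold pair (for double thresholding) and the Gaussian sigma shown here are illustrative values, not the ones used by the authors.

```matlab
% Canny edge detection on the gray-scale image. The [low high] pair
% implements the double-thresholding stage; sigma controls the Gaussian
% smoothing. These particular values are illustrative only.
bw = edge(gray, 'canny', [0.05 0.15], 1.5);
imshow(bw);                   % binary edge map: 1 where an edge exists, 0 elsewhere
```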

    3. Post-Processing

      After the important features of an image have been detected in terms of edges, some processing must be performed before these edges can be transferred to the robot for drawing. This post-processing is explained in the following subsections; a MATLAB sketch of the complete pipeline is given after the list.

      • Branch Removal: There should be no branches in an edge, so that the robot can draw each edge in a single pass from one end to the other. For this, the branch points are first detected and the pixel at each such point is removed, i.e. changed from 1 to 0, so that the branches are simply separated from each other.

      • Removal of Small Objects: Edges containing too few pixels, or covering a very small area, are removed from the final output so that noisy and unnecessary features are not drawn.

      • Differentiation of Edges: The edges extracted by the Canny algorithm are in the form of binary pixels: a binary image in which pixels are set to 1 where an edge exists and 0 elsewhere. From this image, pixels belonging to a single edge need to be grouped together so that we obtain a collection of edges rather than a collection of pixels. For this purpose, 8-connectivity is used and 8-connected neighbours are grouped into a single edge. After this step, we have a set of edges to be drawn by the robot.

      • End-Point Detection: The end points of each edge are detected so that the robot can draw the edge from its start point to its end point. Circular or cyclic edges have no end points; for such edges, one pixel is arbitrarily removed to break the cycle, and the end points of the broken cycle are then detected.

      • Storage of Edges: The edges are finally stored in sorted order from the longest edge to the shortest, so that the robot draws the most important features first and moves towards the smaller ones.
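      The post-processing steps above map closely onto standard Image Processing Toolbox operations; the sketch below strings them together under that assumption. The minimum edge size and the variable names are illustrative, and the cycle-breaking step for closed edges is only noted in a comment.

```matlab
% Post-processing of the binary edge map 'bw' produced by Canny detection.
% A sketch only; the minimum edge size (10 pixels) is an assumed value.

% Branch removal: delete branch points so every edge is a simple stroke.
bw(bwmorph(bw, 'branchpoints')) = 0;

% Removal of small objects: drop edges with too few pixels.
bw = bwareaopen(bw, 10);

% Differentiation of edges: group 8-connected pixels into separate edges.
cc = bwconncomp(bw, 8);

% End-point detection (a closed edge would first need one pixel removed
% to break the cycle, as described above).
endPts = bwmorph(bw, 'endpoints');

% Storage of edges: sort from the longest edge to the shortest one.
[~, order] = sort(cellfun(@numel, cc.PixelIdxList), 'descend');
edges = cc.PixelIdxList(order);   % cell array of pixel index lists, longest first
```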

  7. CONCLUSION AND FUTURE WORK

A brief study of various drawing robots has been carried out. The drawing robot is capable of drawing various shapes, alphabets and numbers, and can also process human faces with good output. Design trials were made and rectified using SOLIDWORKS software. This robot can draw on any even surface, depending on the tool mounted. The design was fabricated and experiments were carried out on the drawing robot to analyse its efficiency. Further, a force-measuring sensor can be fitted to the end-effector of the drawing robot to indicate whether the tool is in contact with the surface on which the image is to be drawn; as a result, the drawing robot will be capable of drawing on irregular surfaces too. The laptop computer can be connected to a real-time camera so that images captured by it can be processed. It is also possible to change the drawing robot's tool so that the robot can carve images onto hard surfaces.

REFERENCES

  1. Escobar Rodríguez, C. Gutierrez, C. Hernández, F. R. Lemus, S. Díaz and Y. Ledeneva, Simulation of Control of a SCARA Robot Actuated by Pneumatic Artificial Muscles Using RNAPM, Journal of Applied Research and Technology, 2014, Vol. 12, pp. 939-946.

  2. Patrick Tresset and Frederic Fol Leymarie, Portrait drawing by Paul the robot, Computers & Graphics, 2013, Vol. 37, pp. 348-363.

  3. Shubham Jain, Prashant Gupta and Vikash Kumar, A Force-Controlled Portrait Drawing Robot, IEEE Transactions of robot, 2015, Vol. 15, pp. 3160-3165.

  4. Masakazu Ejiri, Haruo Yoda, Kiyoo Takeyasu, Takeshi Uno and Tatsuo Goto, A Prototype Intelligent Robot that Assembles Objects from Plan Drawings, IEEE Transactions on Computers, 1972, Vol. C-21, pp. 161-170.

  5. Vincent Duchaine, Boris Mayer St-Onge, Clement Gosselin and Dalong Gao, Stable and Intuitive Control of an Intelligent Assist Device, IEEE Transactions on Haptics, 2012, Vol. 5, No. 2, pp. 148-159.

  6. Melinda F. Robinette, Robot-Draw, an Internet-Based Visualization Tool for Robotics Education, Transactions on Education, 2011, Vol. 44, No. 1, pp. 448-459.

  7. Meng Cheng Lau, Jacky Baltes, John Anderson and Stephane Durocher, A Portrait Drawing Robot Using a Geometric Graph Approach: Furthest Neighbour Theta Graphs, IEEE Transactions of robots, 2013, Vol. 123, pp. 930-935.

  8. Ranita Biswas and Jaya Sil, An Improved Canny Edge Detection Algorithm Based on Type-2 Fuzzy Sets, Procedia Technology, 2012, Vol. 4, pp. 820-824.

  9. Haibin Di and Dengliang Gao, Gray-level transformation and Canny edge detection for 3D seismic discontinuity enhancement, Computers & Geosciences, 2014, Vol. 72, pp. 192-200.
