Research on Multi-sensor Positioning System for Mobile Robot

DOI: 10.17577/IJERTV7IS100022


Wang Lei, Qi Yu Ming

Institute of Robotics and Intelligent Equipment, Tianjin University of Technology and Education, Tianjin, China

This paper is supported by the National Key Technology R&D Program (2015BAK06B04) and the Key Technologies R&D Program of Tianjin (13ZCZDGX01500, 14ZCZDSF00022, 15ZXZNGX00260).

Abstract: Aiming at the localization problem of mobile robots, a localization algorithm based on multiple sensors is studied. This paper investigates a positioning method based on a vision sensor and a positioning method based on an inertial measurement unit. The vision-based system is used as the primary positioning system and the system based on the inertial measurement unit as the auxiliary positioning system, so that the mobile robot can still work properly when the primary positioning system is not working properly.

Keywords: positioning, multi-sensor, vision, inertial, robot

  1. INTRODUCTION

    The localization of mobile robots is one of the important branches of mobile robot research. Inaccurate positioning not only prevents a mobile robot from performing its tasks properly but can even cause major accidents. Therefore, in production and daily life, the accuracy requirements for mobile robot positioning keep increasing. The positioning of mobile robots depends on sensors, and many kinds of sensors [1] are used for mobile robot positioning, such as vision sensors, inertial measurement units, LIDAR, and wheeled odometers. The vision sensor has the advantages of low cost, a large amount of information, and a high degree of freedom, but its precision is greatly influenced by illumination, and the computational load of visual processing makes it difficult to guarantee real-time information processing. The inertial measurement unit has the advantages of small size, low cost, and high precision, but its measurements contain a great deal of noise and suffer from temperature drift, zero drift, and other problems, so it cannot provide long-distance high-precision positioning. LIDAR offers high precision and flexibility, but it is expensive and does not work well in outdoor or low-visibility environments. The wheeled odometer obtains the wheel rotation angle from encoders installed on the wheel motors in order to estimate the robot pose; it is characterized by stability and is only related to the ground environment, but when a wheel slips on the ground, the pose estimate obtained from the wheeled odometer has a large error. The data obtained from any of the above sensors can be applied to a localization algorithm for a mobile robot independently, but because each sensor has a single characteristic, locating the mobile robot with a single sensor alone is very limited. Based on the complementary characteristics of the vision sensor and the inertial measurement unit, this paper studies a multi-sensor [2] positioning system for mobile robots, so as to obtain high-precision positioning and a wide range of application scenarios.

  2. VISION SENSOR POSITIONING

    1. Estimation of binocular vision motion based on feature points

      Binocular stereo positioning is based on the parallax principle: if, for a feature point on the image plane of the primary camera (by default the left camera), the corresponding matching point can be found on the image plane of the secondary camera (by default the right camera), then the stereo information of that point can be determined and its three-dimensional coordinates obtained. Through the change of the image from frame to frame, the transformation relation between the target points is established to solve for the motion parameters of the mobile robot. The vision sensor used in this paper is a pinhole camera, so its imaging satisfies the pinhole model (2.1):

      $$ Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = K P \quad (2.1) $$

      where $f_x$ and $f_y$ are the components of the camera's focal length along the x-axis and the y-axis, and $(c_x, c_y)$ are the coordinates of the camera's optical center. The focal length and optical-center coordinates are called the intrinsic parameters ($f_x$, $f_y$, $c_x$, $c_y$ are fixed once the camera is manufactured), and the matrix $K$ is called the intrinsic matrix.
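      To make (2.1) concrete, here is a minimal Python sketch that projects a point from the camera frame to pixel coordinates; the focal-length and optical-center values are hypothetical, not the calibration of the camera used in the paper:

```python
import numpy as np

# Hypothetical intrinsic parameters (fx, fy, cx, cy); real values come
# from calibrating the actual camera.
K = np.array([[520.0,   0.0, 320.0],
              [  0.0, 520.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(P, K):
    """Project a 3D point P = (X, Y, Z) in the camera frame to pixel
    coordinates (u, v) using the pinhole model Z*[u, v, 1]^T = K*P."""
    uv1 = K @ P / P[2]          # divide by the depth Z
    return uv1[:2]

P = np.array([0.5, -0.2, 2.0])  # example point 2 m in front of the camera
print(project(P, K))            # -> pixel coordinates (u, v)
```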

      Taking the first frame, in which the camera starts moving, as the world coordinate system, the initial pose matrix at this time is (2.2):

      $$ T_{W C_0} = I_{4 \times 4} \quad (2.2) $$

      By estimating the adjacent inter-frame motion $T_{C_{t-1} C_t}$ and applying formula (2.3) cumulatively, the pose of the current frame is obtained:

      $$ T_{W C_t} = T_{W C_{t-1}} \, T_{C_{t-1} C_t} \quad (2.3) $$
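      A minimal sketch of the cumulative update (2.3), assuming poses are kept as 4x4 homogeneous transforms; the constant inter-frame motion used here is a placeholder for the estimates produced by the eight-point method described next:

```python
import numpy as np

def se3(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# The first frame defines the world coordinate system: initial pose is (2.2).
T_wc = np.eye(4)

# Placeholder inter-frame motion (10 cm forward per frame, no rotation);
# in the real system this comes from the essential-matrix decomposition.
for _ in range(5):
    T_prev_curr = se3(np.eye(3), np.array([0.0, 0.0, 0.10]))
    T_wc = T_wc @ T_prev_curr   # cumulative update (2.3)

print(T_wc[:3, 3])              # current camera position in the world frame
```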

      The inter-frame motion $T_{C_{t-1} C_t}$ is usually obtained by the eight-point method. Suppose $p'$ is a feature point (a SURF feature point [3]) in the image at time t-1, and at time t there is an associated feature point $p$ corresponding to it; then the relationship between $p'$ and $p$ can be expressed as (2.4):

      $$ p'^{\mathsf T} E \, p = 0 \quad (2.4) $$

      The matrix $E$ is called the essential matrix. It is advisable to set (2.5):

      $$ E = \begin{bmatrix} e_1 & e_2 & e_3 \\ e_4 & e_5 & e_6 \\ e_7 & e_8 & e_9 \end{bmatrix} \quad (2.5) $$

      Then (2.6):

      $$ \begin{bmatrix} u' & v' & 1 \end{bmatrix} \begin{bmatrix} e_1 & e_2 & e_3 \\ e_4 & e_5 & e_6 \\ e_7 & e_8 & e_9 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = 0 \quad (2.6) $$

      Formula (2.6) is converted to vector form (2.7):

      $$ \begin{bmatrix} u'u & u'v & u' & v'u & v'v & v' & u & v & 1 \end{bmatrix} \mathbf{e} = 0, \qquad \mathbf{e} = (e_1, \ldots, e_9)^{\mathsf T} \quad (2.7) $$

      Eight groups of associated feature points are selected from the images at time t-1 and time t, which gives (2.8):

      $$ A \, \mathbf{e} = 0, \qquad A = \begin{bmatrix} u'_1 u_1 & u'_1 v_1 & u'_1 & v'_1 u_1 & v'_1 v_1 & v'_1 & u_1 & v_1 & 1 \\ \vdots & & & & \vdots & & & & \vdots \\ u'_8 u_8 & u'_8 v_8 & u'_8 & v'_8 u_8 & v'_8 v_8 & v'_8 & u_8 & v_8 & 1 \end{bmatrix} \quad (2.8) $$


      If $A$ is of full rank, i.e. $r(A) = 8$, then (2.8) has a unique solution up to a nonzero scale factor, which yields the essential matrix $E$. The essential matrix $E$ is then singularly decomposed (SVD) to obtain the translation vector $t$ and the rotation matrix $R$.
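      The following Python sketch illustrates the linear eight-point step and the SVD decomposition just described, assuming normalized image coordinates as input; resolving the four-fold (R, t) ambiguity by a cheirality check is omitted:

```python
import numpy as np

def eight_point(p_prev, p_curr):
    """Estimate the essential matrix from 8 normalized point pairs.
    p_prev, p_curr: arrays of shape (8, 2) holding (u', v') and (u, v)."""
    A = np.zeros((8, 9))
    for i, ((up, vp), (u, v)) in enumerate(zip(p_prev, p_curr)):
        # one row of (2.8): coefficients of e in [u' v' 1] E [u v 1]^T = 0
        A[i] = [up*u, up*v, up, vp*u, vp*v, vp, u, v, 1.0]
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)        # null-space vector of A, up to scale

    # Enforce the essential-matrix constraint: two equal singular values.
    U, S, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def decompose(E):
    """Recover one (R, t) candidate from E; t is known only up to scale."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R = U @ W @ Vt
    if np.linalg.det(R) < 0:        # keep a proper rotation
        R = -R
    t = U[:, 2]
    return R, t
```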

    2. Culling of mismatched feature points


    In the process of matching feature points there will be mismatches, and feature point matching errors result in an incorrect estimate of the essential matrix and errors in the pose, so it is necessary to eliminate the mismatched feature points.

    Let the number of matched point pairs be n and the number of correctly matched pairs be k. The probability of extracting one correct pair from the n pairs is k/n, so the probability that all 8 pairs extracted in one draw are correct is $(k/n)^8$. If 8 pairs are drawn with replacement from the n matched pairs L times, the probability that at least one draw consists entirely of correct pairs is (2.10):

    $$ P = 1 - \left( 1 - \left( \frac{k}{n} \right)^{8} \right)^{L} \quad (2.10) $$


    The method for eliminating wrong match points is as follows: randomly extract 8 pairs from the n pairs of matching points and compute the corresponding essential matrix, then count the number of matching pairs that satisfy this essential matrix. If this number is larger than the number for the previous best candidate, take it as k in (2.10). If P is greater than 0.9 the algorithm terminates: the essential matrix with the most support is taken as the motion estimate between the two frames, and the matching pairs that satisfy this matrix are kept as the correct matches, completing the elimination of false matches.
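    A sketch of this elimination loop, reusing the hypothetical `eight_point` estimator from the earlier sketch and scoring support by the epipolar residual $|p'^{\mathsf T} E p|$ against an assumed threshold:

```python
import numpy as np

def ransac_essential(p_prev, p_curr, eight_point, thresh=1e-3, max_iters=1000):
    """p_prev, p_curr: (n, 2) arrays of matched normalized coordinates.
    Returns the best essential matrix and the inlier mask."""
    n = len(p_prev)
    h_prev = np.hstack([p_prev, np.ones((n, 1))])   # homogeneous [u' v' 1]
    h_curr = np.hstack([p_curr, np.ones((n, 1))])
    best_E, best_mask, best_k = None, None, 0

    for L in range(1, max_iters + 1):
        idx = np.random.choice(n, 8, replace=False)
        E = eight_point(p_prev[idx], p_curr[idx])
        # residual of the epipolar constraint (2.4) for every pair
        resid = np.abs(np.einsum('ij,jk,ik->i', h_prev, E, h_curr))
        mask = resid < thresh
        k = mask.sum()
        if k > best_k:
            best_E, best_mask, best_k = E, mask, k
        # termination test (2.10): stop once an all-correct sample has
        # been drawn with probability greater than 0.9
        P = 1.0 - (1.0 - (best_k / n) ** 8) ** L
        if P > 0.9:
            break
    return best_E, best_mask
```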

  3. AUTONOMOUS POSITIONING OF INERTIAL NAVIGATION

    The experiment adopts a strapdown [4] inertial navigation system for autonomous positioning. Its advantage is that it neither relies on external information nor radiates energy outward, so it is not easy to disturb or jam. It usually consists of a three-axis gyroscope and a three-axis accelerometer.

    1. Attitude Calculation Based on Quaternion method

      The commonly used attitude updating algorithms are the Euler angle method, the direction cosine matrix method, the quaternion method, and the equivalent rotation vector method. In order to ensure the real-time performance and validity of the algorithm, the quaternion method is used for the attitude calculation. The physical meaning of a quaternion is the rotational relationship between two coordinate systems. Define two quaternions (3.1):

      $$ q = q_0 + q_1 i + q_2 j + q_3 k \quad (3.1) $$

      and (3.2):

      $$ p = p_0 + p_1 i + p_2 j + p_3 k \quad (3.2) $$

      The multiplication of two quaternions is expressed as (3.3):

      $$ q \otimes p = (q_0 p_0 - q_1 p_1 - q_2 p_2 - q_3 p_3) + (q_0 p_1 + q_1 p_0 + q_2 p_3 - q_3 p_2)\,i + (q_0 p_2 - q_1 p_3 + q_2 p_0 + q_3 p_1)\,j + (q_0 p_3 + q_1 p_2 - q_2 p_1 + q_3 p_0)\,k \quad (3.3) $$

      The matrix form is (3.4):

      $$ q \otimes p = \begin{bmatrix} q_0 & -q_1 & -q_2 & -q_3 \\ q_1 & q_0 & -q_3 & q_2 \\ q_2 & q_3 & q_0 & -q_1 \\ q_3 & -q_2 & q_1 & q_0 \end{bmatrix} \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{bmatrix} \quad (3.4) $$

      A vector $r$ expressed in one coordinate system is transformed into the other by (3.5):

      $$ r' = q \otimes r \otimes q^{-1} \quad (3.5) $$

      where the inverse of a unit quaternion is its conjugate (3.6):

      $$ q^{-1} = q_0 - q_1 i - q_2 j - q_3 k \quad (3.6) $$

      Set $\omega_x$, $\omega_y$, $\omega_z$ as the angular velocities of the mobile robot about its three body axes relative to the previous time; then a differential equation is established based on the quaternion (3.7):

      $$ \dot q = \frac{1}{2}\, q \otimes \omega \quad (3.7) $$

      and (3.8):

      $$ \omega = 0 + \omega_x i + \omega_y j + \omega_z k \quad (3.8) $$

      Bringing (3.8) into (3.7) gives (3.9):

      $$ \begin{bmatrix} \dot q_0 \\ \dot q_1 \\ \dot q_2 \\ \dot q_3 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix} \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} = \frac{1}{2}\,\Omega(\omega)\, q \quad (3.9) $$

      Equation (3.9) is solved with the Runge-Kutta method [5]. The initial quaternion of the mobile robot is obtained from the initial attitude angles (3.10):

      $$ q(0) = \begin{bmatrix} \cos\frac{\psi_0}{2}\cos\frac{\theta_0}{2}\cos\frac{\phi_0}{2} + \sin\frac{\psi_0}{2}\sin\frac{\theta_0}{2}\sin\frac{\phi_0}{2} \\ \cos\frac{\psi_0}{2}\cos\frac{\theta_0}{2}\sin\frac{\phi_0}{2} - \sin\frac{\psi_0}{2}\sin\frac{\theta_0}{2}\cos\frac{\phi_0}{2} \\ \cos\frac{\psi_0}{2}\sin\frac{\theta_0}{2}\cos\frac{\phi_0}{2} + \sin\frac{\psi_0}{2}\cos\frac{\theta_0}{2}\sin\frac{\phi_0}{2} \\ \sin\frac{\psi_0}{2}\cos\frac{\theta_0}{2}\cos\frac{\phi_0}{2} - \cos\frac{\psi_0}{2}\sin\frac{\theta_0}{2}\sin\frac{\phi_0}{2} \end{bmatrix} \quad (3.10) $$

      where $\phi_0$, $\theta_0$ and $\psi_0$ are the initial attitude angles (roll, pitch, yaw). The recurrence formula of (3.7) is (3.11):

      $$ q_{t+T} = \left( I + \frac{T}{2}\,\Omega(\omega_t) \right) q_t \quad (3.11) $$

      where T is the solution cycle and $\Omega(\omega)$ is the 4x4 matrix in (3.9). The pose matrix expressed in quaternions is (3.12):

      $$ C = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix} \quad (3.12) $$
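      A minimal sketch of the attitude solution, implementing the recurrence (3.11) with the matrix $\Omega(\omega)$ of (3.9) and the pose matrix (3.12); the constant gyro reading is a placeholder for real sensor data:

```python
import numpy as np

def omega_matrix(w):
    """4x4 matrix Omega(w) from (3.9); w = (wx, wy, wz) in rad/s."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [ wx, 0.0,  wz, -wy],
                     [ wy, -wz, 0.0,  wx],
                     [ wz,  wy, -wx, 0.0]])

def update(q, w, T):
    """One step of the recurrence (3.11); T is the solution cycle in seconds."""
    q = (np.eye(4) + 0.5 * T * omega_matrix(w)) @ q
    return q / np.linalg.norm(q)    # renormalize to suppress drift

def pose_matrix(q):
    """Pose matrix (3.12) from the unit quaternion q = (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2-q0*q3),         2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3),         q0*q0-q1*q1+q2*q2-q3*q3, 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2),         2*(q2*q3+q0*q1),         q0*q0-q1*q1-q2*q2+q3*q3]])

q = np.array([1.0, 0.0, 0.0, 0.0])  # initial attitude: level, zero yaw
for _ in range(100):                # placeholder gyro data: 0.1 rad/s yaw rate
    q = update(q, np.array([0.0, 0.0, 0.1]), T=0.01)
print(pose_matrix(q))               # pose matrix after 1 s
```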

    2. Speed and position calculation

    The output of the three-axis accelerometer at time t is (3.13):

    $$ a^b_t = \begin{bmatrix} a^b_x & a^b_y & a^b_z \end{bmatrix}^{\mathsf T} \quad (3.13) $$

    The acceleration in the world coordinate system is (3.14):

    $$ a^w_t = C \, a^b_t \quad (3.14) $$

    Let $g$ be the gravitational acceleration constant; then the acceleration of the mobile robot in the world coordinate system, with the gravitational acceleration removed, is (3.15):

    $$ \hat a^w_t = a^w_t - \begin{bmatrix} 0 & 0 & g \end{bmatrix}^{\mathsf T} \quad (3.15) $$

    The speed of the mobile robot in the world coordinate system is obtained by integrating (3.15):

    $$ v^w_t = v^w_0 + \int_0^t \hat a^w_\tau \, d\tau \quad (3.16) $$

    Integrating again gives the position of the mobile robot in the world coordinate system:

    $$ p^w_t = p^w_0 + \int_0^t v^w_\tau \, d\tau \quad (3.17) $$
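    A sketch of the speed and position calculation (3.13)-(3.17), using simple rectangular integration over the solution cycle T; the attitude matrix and accelerometer samples are placeholders for real data:

```python
import numpy as np

G = 9.81                      # gravitational acceleration constant g (m/s^2)
T = 0.01                      # solution cycle (s)

v = np.zeros(3)               # velocity in the world frame
p = np.zeros(3)               # position in the world frame

# Placeholder data: level attitude, robot accelerating along world x.
for _ in range(100):
    C = np.eye(3)                               # pose matrix from (3.12)
    a_body = np.array([0.5, 0.0, G])            # accelerometer output (3.13)
    a_world = C @ a_body                        # world-frame acceleration (3.14)
    a_free = a_world - np.array([0.0, 0.0, G])  # remove gravity (3.15)
    v += a_free * T                             # integrate speed (3.16)
    p += v * T                                  # integrate position (3.17)

print(v, p)   # roughly 0.5 m/s and 0.25 m along x after 1 s
```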

  4. MULTI-SENSOR FUSION STRATEGY

    The goal is to ensure that when the mobile robot cannot obtain enough matching point pairs from the camera to guarantee precise positioning, it can still move normally along the established route. In view of this situation, the system makes the following judgment: a threshold on the number of matches is set; when the number of matching point pairs falls below the threshold, the vision-sensor-based positioning system is judged to have failed and the inertial navigation positioning system is used instead; when the number of matches rises above the threshold again, the system returns to the vision-sensor-based positioning system.
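    A minimal sketch of this switching judgment, with an assumed threshold of 8 matches (the minimum the eight-point method needs); `vision_pose` and `inertial_pose` stand in for the outputs of the two positioning systems:

```python
MATCH_THRESHOLD = 8   # assumed threshold: the eight-point method minimum

def fused_pose(num_matches, vision_pose, inertial_pose):
    """Select the positioning system for the current frame.
    num_matches   -- number of matched feature point pairs this frame
    vision_pose   -- pose estimate from the vision positioning system
    inertial_pose -- pose estimate from the inertial positioning system"""
    if num_matches < MATCH_THRESHOLD:
        # vision positioning judged to have failed:
        # fall back to the inertial navigation system
        return inertial_pose
    # enough matches: the vision-based system has recovered
    return vision_pose
```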

  5. EXPERIMENTAL DESIGN AND EXPERIMENTAL RESULTS

    According to the above positioning method, the following two experiments are designed to evaluate the method:

    1. Experiment One

      Experimental method: the two positioning methods are used simultaneously to estimate the trajectory of a mobile robot going straight for 2 meters.

      Experimental result: both positioning methods can estimate the trajectory of the mobile robot moving forward two meters, as shown in Fig.1:

      Fig.1. The running trajectory of experiment one

      Conclusion analysis: Fig.1 shows that the trajectory estimate obtained by the positioning system based on the inertial measurement unit (brown line) is more stable but less accurate, while the positioning system based on the vision sensor (blue line) is more accurate but less stable.

    2. Experiment Two

    Experimental method: the trajectory of the mobile robot is obtained in the same way as in experiment one. The mobile robot moves as follows: its starting point is set 2.5 meters away from a white wall; it goes straight for two meters, turns right 0.5 meters away from the wall, and continues straight for another two meters.

    Experimental result: both positioning methods estimate the trajectory of the mobile robot, as shown in Fig.2:

    Fig.2. The running trajectory of experiment two

    Conclusion analysis: Fig.2 shows that the positioning system based on the vision sensor (blue line) is no longer accurate. From the running data we know that the number of feature matching points drops significantly at a distance of 0.6 m to 0.5 m from the wall surface, so the vision system cannot locate accurately, which eventually leads to a large estimation error.

  6. CONCLUSIONS

When relying on the vision sensor alone, in scenes with complex objects and white walls, the number of matching points drops significantly, or falls below 8, whenever the camera's view is filled by a white wall; in that case the correct calculation of the essential matrix cannot be guaranteed, and neither can the positioning accuracy of the mobile robot. The positioning system based on the inertial measurement unit can then ensure the correct operation of the mobile robot.

REFERENCES

[1] Jianxing Liang. A multi-sensor data fusion method by combining near-neighbor method and fuzzy inference. Proceedings of the 2017 International Conference on Advances in Materials, Machinery, Electrical Engineering (AMMEE 2017), 2017.

[2] Shi Zhiguo, Wei Junming, Liu Xu, Wang Zhiliang, Tu Jun (School of Computer and Communication Engineering, University of Science and Technology Beijing, P.R. China). IRGS protocol based mobile service robot positioning and multi-robot collaboration for smart home.

[3] Syed Ali Gohar Naqvi, Hafiz Muhammad Faisal Zafar, Ihsanul Haq. Hard exudates referral system in eye fundus utilizing speeded up robust features. International Journal of Ophthalmology, 2017, 10(7): 1171-1174.

[4] Wang Zhenhuan, Chen Xijun, Zeng Qingshuang. Comparison of strapdown inertial navigation algorithm based on rotation vector and dual quaternion. Chinese Journal of Aeronautics, 2013, 26(2): 442-448.

[5] Yi Wei, Zichen Deng, Qingjun Li, Bo Wang. Projected Runge-Kutta methods for constrained Hamiltonian systems. Applied Mathematics and Mechanics (English Edition), 2016, 37(8): 1077-1094.
