Research on Multi-sensor Positioning System for Mobile Robot

Aiming at the localization problem of mobile robots, a localization algorithm based on multiple sensors is studied. This paper investigates a positioning method based on a vision sensor and a positioning method based on an inertial measurement unit. The vision-based system serves as the primary positioning system, and the inertial-measurement-unit-based system serves as the auxiliary positioning system, ensuring that the mobile robot can still work properly when the primary positioning system fails.

Keywords—positioning, multi-sensor, vision, inertial, robot


I. INTRODUCTION
The localization of mobile robots is one of the important branches of mobile robot research. Inaccurate positioning not only prevents mobile robots from performing their tasks properly but can even cause major accidents. Accordingly, in production and daily life, the accuracy requirements on mobile robot positioning keep increasing. The positioning of mobile robots depends on sensors. Many kinds of sensors [1] are used for mobile robot positioning, such as vision sensors, inertial measurement units, LIDAR, and wheel odometers. The vision sensor has the advantages of low cost, a large amount of information, and a high degree of freedom, but its precision is strongly influenced by illumination, and the computational cost of visual processing makes real-time information processing difficult to guarantee. The inertial measurement unit has the advantages of small size, low cost, and high precision, but its measurements contain considerable noise and suffer from temperature drift, zero drift, and similar problems, so it cannot provide long-distance high-precision positioning. LIDAR guidance is precise and highly flexible, but it is expensive and does not work well outdoors or in low-visibility environments. The wheel odometer obtains the wheel rotation angle from encoders installed on the wheel motors in order to estimate the robot pose; it is stable and depends only on the ground environment, but when a wheel slips relative to the ground, the pose estimate derived from the wheel odometer has a large error. The data from each of the above sensors can be applied to a mobile robot localization algorithm independently, but because each sensor has its own limitations, locating the mobile robot with a single sensor is very restrictive.
Based on the characteristics of the vision sensor and the inertial measurement unit, this paper studies a multi-sensor [2] positioning system for mobile robots, so as to obtain high-precision positioning and support a wide range of application scenarios.

II. POSITIONING BASED ON BINOCULAR VISION

A. Estimation of binocular vision motion based on feature points
Binocular vision stereo positioning is based on the parallax principle: if a feature point on the primary camera's image plane (by default the left camera) has a corresponding matching point on the secondary camera's image plane (by default the right camera), the stereo information of the point can be determined to obtain its three-dimensional coordinates. From the change of the image between consecutive frames, the transformation between the matched points is established to solve for the motion parameters of the mobile robot. The vision sensor used in this paper is a pinhole camera, so its imaging satisfies

Z [u, v, 1]^T = K [X, Y, Z]^T,  K = [ f_x 0 c_x ; 0 f_y c_y ; 0 0 1 ]    (2.1)

where f_x and f_y are the components of the camera's focal length along the x-axis and the y-axis, and c_x and c_y are the coordinates of the camera's optical center. The focal length and optical center coordinates are called the intrinsic parameters (f_x, f_y, c_x, c_y are fixed once the camera is manufactured), and the matrix K is called the intrinsic matrix.
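The pinhole projection and the stereo-parallax depth relation Z = f·b/d can be sketched numerically. A minimal Python illustration (the intrinsic values, baseline, and disparity below are illustrative, not the paper's calibration):

```python
# Pinhole projection: maps a 3D camera-frame point (X, Y, Z) to pixel
# coordinates (u, v) using the intrinsic parameters fx, fy, cx, cy.
def project(point, fx, fy, cx, cy):
    X, Y, Z = point
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# Binocular parallax: depth from the disparity d between the left and
# right image of the same feature point, given focal length f (pixels)
# and baseline b (meters): Z = f * b / d.
def depth_from_disparity(f, baseline, disparity):
    return f * baseline / disparity

# A point 2 m in front of the camera, 0.5 m to the right:
print(project((0.5, 0.0, 2.0), fx=500.0, fy=500.0, cx=320.0, cy=240.0))
# → (445.0, 240.0)

# 25 px of disparity with f = 500 px and a 10 cm baseline:
print(depth_from_disparity(500.0, 0.1, 25.0))  # → 2.0 (meters)
```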
The camera frame of the first frame, in which the camera starts moving, is taken as the world coordinate system, so the initial pose matrix is

T_{W C_0} = I    (2.2)

By estimating the adjacent inter-frame motion T_{C_t C_{t-1}} and applying the cumulative formula

T_{W C_t} = T_{W C_{t-1}} (T_{C_t C_{t-1}})^{-1}    (2.3)

the pose of the current frame is obtained.
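The cumulative pose update can be sketched with 4×4 homogeneous transforms. A minimal pure-Python sketch, assuming translation-only inter-frame motions as stand-ins for the estimated transforms:

```python
# Accumulate camera pose by chaining inter-frame motion estimates:
# T_w_t = T_w_(t-1) @ T_(t-1)_t, with 4x4 homogeneous matrices.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(dx, dy, dz):
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

pose = translation(0, 0, 0)   # world frame = first camera frame (identity)

# Two made-up inter-frame motions of 0.5 m forward each:
for motion in [translation(0, 0, 0.5), translation(0, 0, 0.5)]:
    pose = matmul(pose, motion)

print([row[3] for row in pose])  # → [0, 0, 1.0, 1] : 1 m forward in total
```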
The inter-frame motion T_{C_t C_{t-1}} is usually obtained by the eight-point method. Let p' be a feature point (a SURF feature point [3]) in the image at time t-1, and let p be the associated feature point corresponding to it in the image at time t. Writing their normalized homogeneous coordinates as x_1 = (u_1, v_1, 1)^T and x_2 = (u_2, v_2, 1)^T, the relationship between them can be expressed as

x_2^T E x_1 = 0    (2.4)

where E is called the essential matrix. It is convenient to set

e = (e_1, e_2, ..., e_9)^T    (2.5)

where e_1, ..., e_9 are the elements of E in row order. Then (2.4) is converted to vector form:

[u_2 u_1, u_2 v_1, u_2, v_2 u_1, v_2 v_1, v_2, u_1, v_1, 1] · e = 0    (2.6)

Selecting 8 groups of associated feature points from the images at times t-1 and t gives the linear system

A e = 0    (2.7)

where the rows of the 8×9 matrix A are built as in (2.6). If A has full rank, i.e. r(A) = 8, then (2.7) has a unique solution up to scale, yielding the essential matrix E (which encodes the inter-frame motion T_{C_t C_{t-1}}). Singular value decomposition of E then yields the translation and rotation matrices.
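The epipolar constraint and the corresponding row of the linear system can be checked numerically. A sketch assuming a purely sideways translation with no rotation (so E reduces to the skew-symmetric matrix of t, and matched points share the same image row):

```python
# For pure translation along x with R = I, the essential matrix is
# E = [t]_x = [[0,0,0],[0,0,-1],[0,1,0]]. Then x2^T E x1 = v1 - v2,
# which is 0 exactly when the matched points lie on the same row.
E = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]

def epipolar_residual(x1, x2, E):
    # x2^T E x1 for homogeneous points x = (u, v, 1)
    Ex1 = [sum(E[i][k] * x1[k] for k in range(3)) for i in range(3)]
    return sum(x2[i] * Ex1[i] for i in range(3))

def constraint_row(x1, x2):
    # One row of the eight-point system A e = 0, in the same element
    # order as the row-major vectorization e of E.
    u1, v1, _ = x1
    u2, v2, _ = x2
    return [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1]

x1, x2 = (0.3, 0.1, 1.0), (0.5, 0.1, 1.0)   # same v: a consistent match
print(epipolar_residual(x1, x2, E))          # → 0.0

e = [E[i][j] for i in range(3) for j in range(3)]
row = constraint_row(x1, x2)
print(sum(r * c for r, c in zip(row, e)))    # → 0.0 (row form agrees)
```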

B. Culling of feature points with mis-match
In the process of matching feature points, mismatches occur. Incorrectly matched feature points lead to an incorrect estimate of the essential matrix and thus an erroneous pose, so the mismatched feature points must be eliminated.
Let the number of matched point pairs be n, of which k pairs are correct. The probability of drawing a correct pair from the n pairs is k/n, so the probability that all 8 pairs drawn in one sample are correct is (k/n)^8. If 8 pairs are drawn with replacement from the n matched pairs L times, the probability that at least one sample consists entirely of correct pairs is

P = 1 - (1 - (k/n)^8)^L    (2.10)

The mismatch elimination method is as follows: randomly extract 8 pairs from the n matched pairs and compute the corresponding essential matrix, then count the number of matched pairs that satisfy this essential matrix. If this count is larger than the best count so far, take it as k in (2.10); if P is then greater than 0.9, the algorithm terminates. The essential matrix with the best support is taken as the motion estimate between the two frames, and the matched pairs that satisfy this matrix are kept as the correct matching pairs, completing the elimination of false matches.
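The stopping criterion can be evaluated directly. A minimal Python version of (2.10), together with the smallest number of samples L needed to reach a given confidence (the match counts below are illustrative):

```python
import math

def success_probability(n_matches, n_inliers, iterations):
    """P that at least one 8-point sample is all-inlier, per eq. (2.10)."""
    w = n_inliers / n_matches        # probability a random pair is correct
    return 1.0 - (1.0 - w ** 8) ** iterations

def iterations_needed(n_matches, n_inliers, confidence=0.9):
    """Smallest L with success_probability >= confidence."""
    w = n_inliers / n_matches
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - w ** 8))

# With 100 matched pairs of which 80 are correct, 13 random 8-point
# samples already push P past the 0.9 termination threshold:
print(iterations_needed(100, 80))          # → 13
print(success_probability(100, 80, 13) > 0.9)
```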

III. AUTONOMOUS POSITIONING OF INERTIAL NAVIGATION
The experiment adopts a strapdown [4] inertial navigation system for autonomous positioning. Its advantage is that it does not rely on external information and does not radiate energy outward, so it is not easily disturbed or jammed. It is usually composed of a three-axis gyroscope and a three-axis accelerometer.

A. Attitude Calculation Based on Quaternion method
The commonly used attitude-update algorithms are the Euler angle method, the direction cosine matrix method, the quaternion method, and the equivalent rotation vector method. In order to ensure the real-time performance and validity of the algorithm, the quaternion method is used for the attitude calculation. The physical meaning of a quaternion is the rotational relationship between two coordinate systems. Define two quaternions

q_a = w_a + x_a i + y_a j + z_a k    (3.1)

and

q_b = w_b + x_b i + y_b j + z_b k    (3.2)

The multiplication of the two quaternions is expressed as

q_a ⊗ q_b = (w_a w_b - x_a x_b - y_a y_b - z_a z_b)
          + (w_a x_b + x_a w_b + y_a z_b - z_a y_b) i
          + (w_a y_b - x_a z_b + y_a w_b + z_a x_b) j
          + (w_a z_b + x_a y_b - y_a x_b + z_a w_b) k    (3.3)

The matrix form is

q_a ⊗ q_b = [ w_a -x_a -y_a -z_a ; x_a w_a -z_a y_a ; y_a z_a w_a -x_a ; z_a -y_a x_a w_a ] [w_b, x_b, y_b, z_b]^T    (3.4)

Let ω_x, ω_y, ω_z be the angular velocities of the mobile robot about its three body axes at the previous time instant; then a differential equation is established for the quaternion:

q̇ = (1/2) q ⊗ [0, ω_x, ω_y, ω_z]^T    (3.5)

i.e., in matrix form,

q̇ = (1/2) M(ω) q,  M(ω) = [ 0 -ω_x -ω_y -ω_z ; ω_x 0 ω_z -ω_y ; ω_y -ω_z 0 ω_x ; ω_z ω_y -ω_x 0 ]    (3.6)

The initial quaternion of the mobile robot is obtained from the initial attitude angles φ_0, θ_0 and ψ_0 by the standard attitude-angle-to-quaternion conversion, and the differential equation (3.6) is solved with the Runge-Kutta method [5]. To first order, the recurrence formula of (3.6) over a sampling period T is

q_{t+T} = (I + (T/2) M(ω_t)) q_t    (3.7)

Let g be the gravitational acceleration constant; then the acceleration of the mobile robot in the world coordinate system, with the gravitational acceleration removed, is

a^w = C_b^w a^b - [0, 0, g]^T    (3.8)

where C_b^w is the rotation matrix from the body frame to the world frame given by the current quaternion and a^b is the accelerometer measurement. The velocity of the mobile robot in the world coordinate system is obtained by integrating a^w:

v^w(t) = v^w(0) + ∫_0^t a^w dτ    (3.9)

and integrating again gives the position of the mobile robot in the world coordinate system:

p^w(t) = p^w(0) + ∫_0^t v^w dτ    (3.10)

IV. MULTI-SENSOR FUSION STRATEGY

When the camera cannot obtain enough matched point pairs to guarantee precise positioning of the mobile robot, the robot should still be able to move normally along the established route. For this situation, the system applies the following rule: a match-count threshold is set; when the number of matched point pairs is less than the threshold, the vision-based positioning system is considered to have failed and the inertial navigation positioning system is used; when the number of matches rises back above the threshold, positioning based on the vision sensor is restored.
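The switching rule above can be sketched directly. A minimal sketch, assuming a fixed match-count threshold (the threshold value and the per-frame match counts below are illustrative, not the paper's settings):

```python
# Select the positioning source per frame: vision while enough feature
# matches are available, inertial navigation otherwise. Eight matched
# pairs are the minimum for the eight-point method; the margin added
# on top of that here is an assumption for illustration.
MATCH_THRESHOLD = 20

def select_source(num_matches, threshold=MATCH_THRESHOLD):
    return "vision" if num_matches >= threshold else "inertial"

# Simulated match counts while approaching and then leaving a white wall:
frames = [120, 95, 60, 15, 6, 4, 30, 110]
sources = [select_source(m) for m in frames]
print(sources)
# → ['vision', 'vision', 'vision', 'inertial', 'inertial', 'inertial',
#    'vision', 'vision']
```

Vision resumes automatically as soon as the match count recovers, matching the recovery behaviour described above.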

V. EXPERIMENTAL DESIGN AND EXPERIMENTAL RESULTS
According to the above positioning method, the following two experiments are designed to evaluate the method:

A. Experiment One
Experimental method: the two positioning methods are used at the same time to estimate the trajectory of a mobile robot going straight for 2 meters.
Experimental result: the experimental results show that both positioning methods can estimate the trajectory of the mobile robot moving forward two meters, as shown in Fig.1. Fig.1 shows that the trajectory estimate obtained by the positioning system based on the inertial measurement unit (brown line) is more stable but less accurate, while the positioning system based on the vision sensor (blue line) is more accurate but less stable.

B. Experiment Two
Experimental method: the trajectory of the mobile robot is obtained in the same way as in experiment one. The motion of the mobile robot is as follows: the starting point is set 2.5 meters away from a white wall; the robot goes straight for two meters, turns right at 0.5 meters from the wall, and continues straight for another two meters.
Experimental result: the experimental results show that both positioning methods can estimate the trajectory of the mobile robot along this route, as shown in Fig.2. Fig.2 shows that the positioning system based on the vision sensor (blue line) is no longer accurate. From the run data, the number of feature matching points drops significantly between 0.6 m and 0.5 m from the wall surface, so the vision system cannot locate accurately, which eventually leads to a large estimation error.
VI. CONCLUSIONS

Relying on the vision sensor alone, when the scene changes from complex objects to a white wall, i.e. when the camera images are dominated by the white wall, the number of matching points drops significantly and may fall below 8. The essential matrix then cannot be computed correctly, and the positioning accuracy of the mobile robot cannot be guaranteed. In this case, the positioning system based on the inertial measurement unit ensures the continued correct operation of the mobile robot.