- Open Access
- Total Downloads : 39
- Authors : Mrs. M. Usharani, N. Shwetha, Y. Soundarya
- Paper ID : IJERTCONV7IS06021
- Volume & Issue : ETEDM
- Published (First Online): 18-05-2019
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Detection of Pest in Agricultural Crops by SVM Algorithm
Mrs. M. Usharani1
M.E, (PhD), Assistant Professor, Department of Electronics and Communication,
Velammal Engineering College, Surapet, Chennai, India.
(Affiliated to Anna University, Chennai.)
N. Shwetha2, Y. Soundarya2
Bachelor of Engineering, Department of Electronics and Communication, Velammal Engineering College, Surapet, Chennai, India.
(Affiliated to Anna University, Chennai.)
In general, real-time monitoring of crops in a dynamic environment is very difficult. This difficulty can be addressed with a ground bot, which can detect pests in crops very accurately. The existing ground-bot method involves extensive manual calculation and computation. The main idea is to use a camera to capture images and, with machine learning, process the data into useful information by comparing each captured image against the images stored in a database. This comparison is done most effectively with colors: typically, red indicates soil deficiencies, blue can be used to detect bare soil, white indicates the presence of weeds, and so on. This color-coding technique creates a complete map of the condition of the crop and the soil. Gamaya's technology, for example, is capable of comparing images and mapping the weeds among plants; in addition, it can report other plant conditions such as ailment or starvation, as well as chemical inputs in the soil. The existing method uses the BRIEF algorithm with multi-inertial sensing data to determine the status of plantation crops. In our proposed system we use the SVM algorithm to obtain accurate results. The SVM processing is done at the back end of the server, the result is shown on an LCD display, and a notification is sent to every farmer's handset.
Keywords: Monitoring, Image Processing, Classification, Mapping, Pest.
Object tracking using a ground bot is used in many real-time applications. Here we propose a bot for video-based pest tracking in a plantation crop field, with a cascade object-detection model using histogram of oriented gradients (HOG) features to detect the object. A KLT tracker is used to track the object throughout the video. MATLAB is used to implement this technique with high accuracy on a small amount of training data. A pest-tracking method is proposed that fuses inertial measurement unit data and global positioning system data while moving the bot to detect pests around the fields. An agriculture bot will help farmers with precision-agriculture tasks such as sowing, spraying and plant protection. It is also used to monitor disease, pests, fertilizer conditions and irrigation. The ground bot can be used to build accurate 3-D maps for soil analysis of soil properties, moisture content and soil erosion. This is very useful in planning plantation patterns, irrigation and management of the nitrogen level in the soil. Many types of operational data for spraying pesticides and sowing seeds can also be recorded.
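The HOG features mentioned above can be sketched in NumPy. The single-cell histogram below (the function name and the 9-bin choice are illustrative assumptions, not the exact detector used in this work) shows the core idea: bin gradient directions, weighted by gradient magnitude.

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Histogram of gradient orientations for one grayscale image cell.
    A minimal sketch of the HOG feature idea; real detectors add
    block normalization and a sliding window."""
    cell = cell.astype(float)
    gx = np.zeros_like(cell)
    gy = np.zeros_like(cell)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]   # horizontal central differences
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]   # vertical central differences
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientations
    bin_idx = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = mag[bin_idx == b].sum()   # magnitude-weighted vote per bin
    return hist
```

A vertical edge, for instance, puts all of its gradient energy in the 0° bin, while a horizontal edge votes near 90°.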
A complete package of data is collected and analyzed for crop-growth information. This will help farmers use fertilizers and pesticides appropriately and improve the environment.
Figure 1: Pest-infected leaf
Plant health can be affected even before any visible signs such as leaf discoloration appear. Since these stresses are invisible to the naked eye, cameras with special filters are used to detect the changes. The Departments of Life Sciences and Computing are partnering with the agricultural services company Agro to provide cameras mounted on ground bots that will detect early disease stages and tell farmers when to spray pesticide or fungicide before the crops are affected. The best results are obtained by spraying at early stages, and this concept could later be adapted to other diseases. The TTA system is based on monitoring crop nutritional status and pests with multispectral cameras carried by a UAV.
A lightweight object-detection algorithm on a drone has been proposed that, in an outdoor environment, uses the Oriented FAST and Rotated BRIEF (ORB) algorithm for feature extraction, LDB for binary feature descriptors, and k-nearest neighbors (KNN) to match the image descriptors. Assume that the intensity centroid of a corner point is offset from the center of the patch; since it is offset, the directions of the surrounding points are fused to calculate the direction of the corner point. The intensity moment of a patch is given by:
m_{pq} = Σ_{x,y} x^p y^q I(x, y)
where I(x, y) is the intensity at location (x, y) of the image. The intensity centroid of the patch can be calculated by:
C = (m_{10}/m_{00}, m_{01}/m_{00})
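As a concrete sketch of these moment computations (pure NumPy; taking the patch coordinates about the patch center is an assumption made here for illustration), the centroid and the orientation θ = atan2(m_{01}, m_{10}) that ORB derives from it can be computed as:

```python
import math
import numpy as np

def patch_moments_orientation(patch):
    """Intensity moments m_pq = sum over (x, y) of x^p y^q I(x, y),
    the intensity centroid, and the resulting ORB-style orientation."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0            # coordinates taken about the patch center
    ys -= (h - 1) / 2.0
    m00 = patch.sum()
    m10 = (xs * patch).sum()
    m01 = (ys * patch).sum()
    centroid = (m10 / m00, m01 / m00)
    theta = math.atan2(m01, m10)   # direction from center to centroid
    return centroid, theta
```

For example, a patch that is bright only along its right edge has its centroid offset to the right, giving θ = 0; one bright along the bottom edge gives θ = π/2.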
The direction of the patch is then θ = atan2(m_{01}, m_{10}). Thus the rotational-invariance feature of ORB, used to capture the feature points, is obtained.
The image descriptors are built using Local Difference Binary (LDB) after extracting the key points of the patch from the image. The five steps of LDB are: Gaussian-pyramid construction, dominant-orientation estimation, integral-image construction, binary tests, and bit selection. Because ORB already provides an orientation, the dominant-orientation estimation step can be eliminated. Scale invariance is provided to LDB by constructing a Gaussian pyramid, and the feature points on the corresponding pyramid level of the LDB descriptor are calculated as:

G(x, y, σ_i) = (1 / (2π σ_i²)) exp(−(x² + y²) / (2σ_i²)), 1 ≤ i ≤ L

where G(x, y, σ_i) is the Gaussian filter applied to the image intensity I(x, y), σ_i is the standard deviation of the Gaussian distribution, and L is the number of pyramid levels.

The LDB descriptor must be calculated for each feature point in every level of the pyramid, because the feature-extraction method does not provide a particular scale estimate. An integral graph is constructed to calculate the average intensity and gradient information of the cells; if the image is rotated, a rotated integral graph is constructed by accumulating the pixels along the dominant orientation.

Although the ORB algorithm in the existing method applies image pyramids for scale invariance and centroid calculation for rotational invariance, it is not as robust or as accurate as the SVM algorithm.

Limitations of existing method

The implementation still lacks accuracy in some cases, so further optimization is needed.

A priori information is needed for segmentation.

The database must be extended in order to reach higher accuracy.

Very few diseases have been covered, so the work needs to be extended to cover more diseases.

The possible reasons that can lead to misclassifications are as follows: disease symptoms vary from one plant to another, so feature optimization is needed, and more training samples are required to cover more cases and predict the disease more accurately.

Operating a drone is also a major disadvantage for a layman, who needs detailed knowledge of the working and features of the drone in order to operate it correctly. Designing a drone is costly, and integrating the various algorithms makes the system complex.

Proposed Method

Block Diagram and Hardware Overview

The videos of the crop field are captured by the ground bot, which processes the images in real time. A digital imaging technique is used to provide a clear image of the fields.

Figure 2: Overall block diagram
Agricultural bots have large scope for growth. With the constant improvement in technology, development in crop imaging is also needed. Farmers will be able to analyze the crops and make accurate decisions on crop analysis and productivity from the data recorded by the drones. The ground bot is made to go around the field; it has to identify an issue in a particular area and also find a solution to it. This helps farmers increase their productivity rather than merely monitoring their crops. Different types of sensors are used to capture the state of the crops, such as moisture and humidity sensors, along with PIR sensors that provide security for the fields. The information from these sensors and the captured images are sent directly to the central server. The sensor information is saved on the server and then sent to the farmers' handsets through GSM. The captured images on the server are processed by image pre-processing techniques.
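The GSM notification step can be sketched as the usual text-mode AT command sequence. The helper below is an illustrative assumption (a SIM800/SIM900-class modem is assumed), not the exact firmware used in this work:

```python
def build_sms_commands(number, text):
    """AT command sequence for sending one SMS in text mode.
    Assumes a SIM800/SIM900-style GSM modem; in practice each command
    is written to the modem's serial port and its response is checked
    before sending the next one."""
    return [
        "AT",                   # handshake / modem-alive check
        "AT+CMGF=1",            # select SMS text mode
        f'AT+CMGS="{number}"',  # recipient; the modem replies with a ">" prompt
        text + "\x1a",          # message body terminated by Ctrl+Z
    ]
```

These strings would then be written over a serial link to the module (commonly a UART at 9600 baud), with a short delay between commands.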
Algorithm for processing
The algorithm below illustrates the step-by-step approach for the proposed image recognition and segmentation processes:
The SVM classifier used in the final step supports several kernel functions: dot product (linear), radial basis, and polynomial.
Figure 3: Algorithm flow diagram
Image acquisition is the first step and requires capturing an image with a digital camera.
The input image is then pre-processed to improve its quality and to remove undesired distortion.
Clipping of the leaf image is performed to obtain the image region of interest, and then image smoothing is done using a smoothing filter.
Image enhancement is also performed to increase the contrast.
Next, a threshold value is computed for the pixels, and the mostly green pixels are masked in the following way: if the intensity of a pixel's green component is less than the pre-computed threshold value, then the red, green and blue components of that pixel are all set to zero.
Within the boundaries of the infected clusters, the masked cells are removed.
Finally, the useful segments for classifying the leaf diseases are obtained, and the components are segmented using the SVM algorithm.
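The smoothing and green-pixel masking steps above can be sketched in NumPy as follows (a minimal sketch: the 3×3 mean filter and the fixed threshold are illustrative choices, not the exact filters used in this work):

```python
import numpy as np

def smooth(gray, k=3):
    """k x k mean filter for a grayscale image (borders padded with edge values)."""
    pad = k // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    out = np.zeros(gray.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return out / (k * k)

def mask_green_pixels(rgb, threshold):
    """Masking rule from the text: wherever the green component is below
    the threshold, set the red, green and blue components to zero."""
    out = rgb.copy()
    out[rgb[:, :, 1] < threshold] = 0
    return out
```

The mean filter leaves uniform regions unchanged, and the mask zeroes exactly the pixels failing the green-intensity test.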
Classification using SVM
Support Vector Machines (SVMs) are a class of linear learning machines used for classification and regression. In a binary classification problem, an SVM constructs a maximal-margin hyperplane that separates the input data points into two classes, which can be denoted +1 and −1. The margin is maximized by selecting two parallel hyperplanes with no data points between them. An SVM can also use a linear model to implement nonlinear class boundaries through a nonlinear mapping of the input vectors x into a high-dimensional feature space: a nonlinear decision boundary in the original space is then represented by a linear model constructed in the new space, where an optimal separating hyperplane is built. Hence, SVM is an algorithm that finds a special kind of linear model, the maximum-margin hyperplane, which gives the decision classes the maximum separation between them.
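A maximum-margin linear classifier of this kind can be sketched with Pegasos-style subgradient descent on the hinge loss (a minimal NumPy illustration of the principle, not the actual trained model from this work):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Minimal linear SVM trained by Pegasos-style subgradient descent.
    X: (n, d) feature matrix; y: labels in {-1, +1}. Returns (w, b)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            if y[i] * (X[i] @ w + b) < 1:    # point violates the margin
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                            # only the regularizer acts
                w = (1 - eta * lam) * w
    return w, b
```

Prediction is then np.sign(X @ w + b); for nonlinear boundaries, kernelized SVMs replace these dot products with a kernel function such as the radial basis or polynomial kernel.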
Figure 4: SVM classifier, where x1 and x2 are two different datasets.

Results and Discussion
In our proposed system the accuracy level is improved and the computation time to train and test the data is reduced. We used 30 samples each of neem and mango leaves to detect the pests infecting them. With the SVM algorithm, neem leaves give 90% accuracy and mango leaves give 92.3% accuracy, with computation times of 9.7 seconds for neem and 8.1 seconds for mango. With the BRIEF K-means algorithm, neem leaves give 71% accuracy and mango leaves 76.5%, with computation times of 15.74 seconds for neem and 12.08 seconds for mango. With extended CAMSHIFT, neem leaves give 60% accuracy and mango leaves 75.80%, with computation times of 34.46 seconds for neem and 29.92 seconds for mango. Comparing these results, it is clear that the SVM algorithm is more efficient and provides more accurate detection of pest-infected leaves.
Figure 5: Results of the pest-detected leaf
Figure 7: GSM notification on the farmer's handset
Our project is based on real-time monitoring of crops in the agricultural field and on estimating the amount of pest present in the crops, which can effectively improve productivity. Knowing the amount of pest present on a plant also helps improve plant growth and produce healthier results. The SVM classifier produces very accurate results with minimum computational time. In addition, all the sensors used present the
References

N. Michael, D. Mellinger, Q. Lindsey, and V. Kumar, "The GRASP multiple micro-UAV testbed," IEEE Robot. Autom. Mag., vol. 17, no. 3, pp. 56–65, Sep. 2010.
I. Colomina and P. Molina, "Unmanned aerial systems for photogrammetry and remote sensing: A review," ISPRS J. Photogramm. Remote Sens., vol. 92, no. 2, pp. 79–97, 2014.
H. Ergezer and K. Leblebicioglu, "Path planning for UAVs for maximum information collection," IEEE Trans. Aerosp. Electron. Syst., vol. 49, no. 1, pp. 502–520, Jan. 2013.
H. Lim and S. N. Sinha, "Monocular localization of a moving person onboard a quadrotor MAV," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Seattle, WA, USA, May 2015, pp. 2182–2189.
S. Darma, J. L. Buessler, G. Hermann, J. P. Urban, and B. Kusumoputro, "Visual servoing quadrotor control in autonomous target search," in Proc. IEEE Int. Conf. Syst. Eng. Technol. (ICSET), Shah Alam, Malaysia, Aug. 2013, pp. 319–324.
B. Andrievsky and A. L. Fradkov, "Adaptive coding for maneuvering UAV tracking over the digital communication channel," in Proc. Int. Congr. Ultra Mod. Telecommun. Control Syst. Workshops (ICUMT), Saint Petersburg, Russia, Oct. 2014, pp. 236–241.
Y. Yuan, Y. Lu, and Q. Wang, "Tracking as a whole: Multi-target tracking by modeling group behavior with sequential detection," IEEE Trans. Intell. Transp. Syst., to be published. [Online]. Available: http://ieeexplore.ieee.org/document/7896601
Y. Yuan, Z. Xiong, and Q. Wang, "An incremental framework for video-based traffic sign detection, tracking, and recognition," IEEE Trans. Intell. Transp. Syst., vol. 18, no. 7, pp. 1918–1929, Jul. 2017.
Q. Wang, J. Fang, and Y. Yuan, "Multi-cue based tracking," Neurocomputing, vol. 131, pp. 227–236, May 2014.
H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2006, pp. 404–417.