Detection of Weed using Neural Networks

DOI : 10.17577/IJERTCONV6IS08040


Aravinth K

UG scholar-ECE

V.S.B Engineering College, Karur, Tamilnadu.

Gowtham D

UG scholar-ECE

V.S.B Engineering College, Karur, Tamilnadu.

Mrs. R. Dhayabarani

(Guide) Head of the Department-ECE

V.S.B Engineering College, Karur, Tamilnadu.

Gowtham M

UG scholar-ECE

V.S.B Engineering College, Karur, Tamilnadu.

Balakrishnan G

UG scholar-ECE

V.S.B Engineering College, Karur, Tamilnadu.

Abstract: Improving the efficiency of agricultural fields will increase food resources in response to the world's growing population. In the fields we face various difficulties, such as weeds and plant diseases, so weeds must be detected and removed. Nowadays there is not enough manpower to work in the fields. Applying machine learning methods such as convolutional neural networks to agriculture has gained immense attention in recent years, and we use convolutional neural networks to classify the plants. An automatic plant type identification process could greatly help the timely application of pesticides, fertilization and harvesting of different species, improving the production processes of the food and drug industries, and it could reduce labour cost. Following the pre-processing step, a Convolutional Neural Network architecture is employed to extract the features of the images.

        1. INTRODUCTION

Few nations are focusing on emerging technologies, and most technologies aim at a more sophisticated life for humans, yet their influence on agriculture remains very poor. Most farmers are searching for labourers, and due to insufficient labour many farmers fail to cultivate crops on their agricultural fields. If crop cultivation decreases, there will be great demand for food products; food and water are the basic necessities of living organisms. The ultimate aim of this project is to detect weeds in agricultural land and also to detect diseased crops. Once a diseased plant is identified it can be removed before it affects the neighbouring plants or crops; if a diseased crop is not detected and removed, it will infect the neighbouring crop and the yield will be dramatically reduced. Deep learning is used for the machine learning process. The colour (RGB) image is first converted into a grayscale image, and the image is then classified based on an image segmentation process. Image segmentation is performed using convolutional neural networks, and the CNN classifies the different types of plants collected from the agricultural field in the form of image sequences.
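As a hedged illustration of the grayscale-conversion step described above, the following MATLAB sketch reads a colour field image and converts it before segmentation (the file name plant.jpg is an assumption, not from the paper):

% Minimal sketch: convert an RGB field image to grayscale before
% segmentation. The file name 'plant.jpg' is assumed for illustration.
rgbImage = imread('plant.jpg');  % colour image captured in the field
grayImage = rgb2gray(rgbImage);  % collapse RGB into one intensity channel
imshow(grayImage);               % inspect the result before segmentation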

          Convolutional neural network

A convolutional neural network is a specialized deep learning algorithm used for object detection and image segmentation. CNNs learn to extract image features without manual feature engineering. A convolutional neural network can have tens or hundreds of layers, each of which learns to detect different features of an image. Filters are applied to each training image at different resolutions, and the output of each convolved image is used as the input to the next layer. The filters can start with very simple features, such as brightness and edges, and increase in complexity to features that uniquely define the object as the layers progress.

The segmented image is processed by the CNN using ReLU and pooling layers to differentiate the density of the segmented parts of the image. Deep learning is easier in MATLAB: minimal code is required to build a deep learning model, and pretrained models can be imported quickly. MATLAB enables users to interactively label objects within images and can automate ground-truth labelling within videos for training and testing deep learning models. This interactive and automated approach can lead to better results in less time.
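As a minimal sketch of such a network (layer sizes and the class count are assumptions for illustration, not the paper's architecture), a small CNN with convolution, ReLU and pooling layers can be declared in MATLAB as follows:

% Minimal CNN layer stack with ReLU and pooling; sizes are assumed.
layers = [
    imageInputLayer([256 256 1])        % grayscale input, as described above
    convolution2dLayer(3, 16, 'Padding', 'same')
    reluLayer                           % non-linearity after each convolution
    maxPooling2dLayer(2, 'Stride', 2)   % halves the spatial resolution
    convolution2dLayer(3, 32, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(2)              % e.g. weed vs. crop; class count assumed
    softmaxLayer
    classificationLayer];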

        2. EXISTING METHOD

CNN is one of the important methods for classifying plant images. In the existing method, the CNN algorithm is used for plant classification, and a deep learning method that automatically extracts attributes from two-dimensional plant images is proposed. CNN models are an extension of deep learning with artificial neural networks; they consist of multiple convolution layers, pooling layers, and a softmax layer with fully connected layers. In general, the feature maps of the previous layers are transformed into informative maps in the convolutional layer, and activation functions are applied to form the first feature maps. The features are extracted with the help of the deep learning algorithm. Generally, a large amount of data is required for deep learning approaches to perform well. Apart from this, it is also important to use methods such as weight decay and data augmentation to counter memorization (overfitting) problems and to increase the efficiency of the machine learning algorithm. Many plants with similar kinds of images have been taken into account to classify the plants.

Plants from different plots sometimes show similar colour distributions, and sometimes different appearances at the same growth stage. Modern plant farming methods are of great importance both to the national economy and to individual farmers, in view of the growing world population and concerns about the insufficient supply of resources in a world with global climate change [1], [2]. Conventional plant recognition approaches are both expensive and time consuming, requiring manual intervention by specialists. Lately, image analysis techniques have been developed to automate the plant monitoring process; these must cope with lighting changes, leaf shifts, camera shake, zoom changes, unexpected changes in camera parameters and plant shedding [5]. Related to our method, Haug et al. [8] present a method to classify carrot plants and weeds in RGB and NIR images without needing a pre-segmentation of the scene into individual objects. They achieve an average accuracy of 94% for carrot plants on an evaluation set of 70 images in which both intra- and inter-row overlap is present.

          Related work

The training data set consists of sixteen different varieties, including barley, sunflower, pepper, blade, tomato, apple, bean, ground fly, cherry, mandarin, lentil, moon, pomegranate, cotton and grape, and comprises 4800 separated images. Each observation contains multiple images taken at the same time, but the images come from different growth stages of the plant. Thus, we should be able to calculate the feature extraction value of an image of any type of plant. For plant classification, the main task is to identify diseases so as to maintain efficient plant growth and crop production.

This section summarizes recently published research papers related to our work. In the field of agriculture, weed detection plays a vital role: the presence of weeds may lead to losses in productivity and crop yield. Florian et al. proposed a paper titled "Investigation of different plant root exit point vector search algorithms in organic farming". In their approach, they proposed different algorithms to search for the plant root exit point. The paper overcomes the drawbacks of the three major methods of obtaining three-dimensional information: stereo-vision systems, time-of-flight cameras and laser range scanners. Florian et al. included four phases in their algorithm. Phase 1 extracts the plants from their background; Phase 2 finds the root exit points in the extracted plants; Phase 3 determines the distance from the root exit point to the camera; Phase 4 performs the calculations. This four-phase system was found to be the better choice for reusability. Su et al. proposed a paper titled "Weed and crop segmentation and classification using area thresholding", comprising three major parts: segmentation, classification and error calculation. In the segmentation phase, different methods are used to remove the background: colour-based segmentation, edge-based segmentation, threshold-based segmentation and watershed segmentation. In the classification phase, the segmented image is used as the input and the weed plants are identified based on a threshold value. This method is based on image processing.

        3. PROPOSED SYSTEM

In our proposed system, we work on identifying diseases among the identified plant crops using feature extraction values. Using the CNN feature mapping technique, we rank the feature values of different plants by analysing them with deep learning algorithms. The complete process is divided into the necessary stages described in the subsections below, starting with gathering images for the classification process using deep neural networks.

          Dataset

Datasets are required for the image recognition process. The images are downloaded from the internet and are used both to train the network and to evaluate the performance of the recognition algorithms. Images are grouped into various classes. To differentiate healthy leaves from diseased leaves, an additional class containing healthy leaves is added to the dataset, and a further class of background images is included to obtain accurate classification, so that the neural network can be trained to distinguish the leaves from their surroundings.
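A minimal sketch of loading such a class-labelled dataset in MATLAB follows; the folder name plantDataset and the one-folder-per-class layout are assumptions:

% Load images grouped into one folder per class; folder names become labels.
imds = imageDatastore('plantDataset', ...
    'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');
countEachLabel(imds)   % inspect how many images each class contains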

          Image pre-processing and labelling

We downloaded various images from the internet with different quality and resolution, and a few images were also taken with mobile phones. Since the final images are intended to serve as the dataset for the deep neural network classifier, better feature extraction is needed. The image pre-processing procedure therefore involved cropping all the images manually, drawing a square around the leaves to highlight the region of the plant leaves; in that way, it was ensured that the images contain all the information needed for feature learning. The dataset images were resized to 256*256 pixels, i.e. L = 256 grey levels with intensity values ranging from 0 to L-1 = 255, to reduce the training time. Further processing then continues with the augmentation process.
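A hedged sketch of the cropping-and-resizing step described above; the file name and the crop rectangle values are assumptions:

% Crop a square around the leaf, then resize to the 256x256 input size.
img = imread('leaf.jpg');                % file name assumed
cropped = imcrop(img, [50 50 400 400]);  % [x y width height], values assumed
resized = imresize(cropped, [256 256]);  % fixed size reduces training time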

          Augmentation Process

Pre-processing the images with random rotation ensures that the trained convolutional neural network has rotational invariance. This example uses the augmentedImageDatastore function to create an augmented image datastore object; see the MATLAB example "Train Network with Augmented Images" for the recommended workflow.

          Load the sample data, which consists of synthetic images of handwritten numbers.

          [XTrain,YTrain] = digitTrain4DArrayData;
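Building on the loaded digit data, a sketch of the augmentation step follows; the rotation range of +/-20 degrees is an assumption:

% Apply random rotations so the trained network gains rotational invariance.
augmenter = imageDataAugmenter('RandRotation', [-20 20]);   % range assumed
augimds = augmentedImageDatastore([28 28 1], XTrain, YTrain, ...
    'DataAugmentation', augmenter);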

          Network Training

          Prepare Training and Test Image Sets

Split the sets into training and validation data. Pick 30% of the images from each label for the training data and the remainder, 70%, for the validation data. Randomize the split to avoid biasing the results. The training and test sets are then processed by the CNN model.

          [trainingSet, testSet] = splitEachLabel(imds, 0.3, 'randomize');

          Pre-process Images For CNN

As mentioned earlier, net can only process RGB images that are 224-by-224. To avoid re-saving all the images in Caltech 101 in this format, use an augmentedImageDatastore to resize and convert any grayscale images to RGB on the fly. The augmentedImageDatastore can also be used for additional data augmentation during network training.

imageSize = net.Layers(1).InputSize;
augmentedTrainingSet = augmentedImageDatastore(imageSize, trainingSet, ...
    'ColorPreprocessing', 'gray2rgb');

The above code prepares the datastore built from the images: any grayscale images are converted to RGB on the fly before being fed to the network.

          Extract Training Features Using CNN

Each layer of a CNN produces a response, or activation, to an input image. However, only a few layers within a CNN are suitable for image feature extraction. The layers at the beginning of the network capture basic image features, such as edges and blobs. To see this, visualize the network filter weights from the first convolutional layer; this can help build intuition about why the features extracted from CNNs work so well for image recognition tasks. Note that visualizing features from deeper layer weights can be done using deepDreamImage from the Neural Network Toolbox.

You can easily extract features from one of the deeper layers using the activations method. Which deep layer to choose is a design choice, but the layer right before the classification layer is typically a good starting point. In net, this layer is named 'fc1000'. Let us extract the training features using that layer.
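A sketch of that extraction step, following the workflow above (the 'MiniBatchSize' value is an assumption):

% Extract features for every training image from the 'fc1000' layer.
featureLayer = 'fc1000';
trainingFeatures = activations(net, augmentedTrainingSet, featureLayer, ...
    'MiniBatchSize', 32, 'OutputAs', 'columns');   % batch size assumed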

          Train a Multiclass SVM Classifier Using CNN Features

Next, use the CNN image features to train a multiclass SVM classifier. A fast Stochastic Gradient Descent solver is used for training by setting the function's 'Learners' parameter to 'Linear'. This helps speed up training when working with high-dimensional CNN feature vectors.

% Get training labels from the trainingSet
trainingLabels = trainingSet.Labels;

% Train a multiclass SVM classifier using a fast linear solver, and set
% 'ObservationsIn' to 'columns' to match the arrangement used for the
% training features.
classifier = fitcecoc(trainingFeatures, trainingLabels, ...
    'Learners', 'Linear', 'Coding', 'onevsall', 'ObservationsIn', 'columns');

Evaluate Classifier

Repeat the procedure used earlier to extract image features from testSet. The test features can then be passed to the classifier to measure the accuracy of the trained classifier.

% Pass CNN image features to the trained classifier
predictedLabels = predict(classifier, testFeatures, 'ObservationsIn', 'columns');

% Get the known labels
testLabels = testSet.Labels;

% Tabulate the results using a confusion matrix; this step is needed
% before confMat can be used below
confMat = confusionmat(testLabels, predictedLabels);
confMat = confMat ./ sum(confMat, 2);   % convert counts to per-class rates

% Display the mean accuracy
mean(diag(confMat))

ans =

0.9929

The extracted feature value of each image is stored and analysed against the dataset. Features extracted in the field are compared with the already stored values; the expected ranges are specified in the code along with the feature values.

Try the Newly Trained Classifier on Test Images

% Create an augmentedImageDatastore to automatically resize the image
% when image features are extracted using activations.
ds = augmentedImageDatastore(imageSize, newImage, 'ColorPreprocessing', 'gray2rgb');

% Extract image features using the CNN
imageFeatures = activations(net, ds, featureLayer, 'OutputAs', 'columns');

% Make a prediction using the classifier
label = predict(classifier, imageFeatures, 'ObservationsIn', 'columns')

Fine-tuning seeks to increase the effectiveness or efficiency of a process by making small modifications that improve or optimize the outcome. The classification function in the original CaffeNet model is a softmax classifier that computes the probabilities of the 1,000 classes of the ImageNet dataset. The fine-tuning process was repeated while changing the parameters of the hidden layers and the hyperparameters, and the model best suited to plant disease detection was obtained through this experimental adjustment. The results of the model fine-tuning are presented and explained further below.
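As a hedged sketch of such fine-tuning (the network choice, class count and hyperparameters are assumptions, since the paper does not list them; an AlexNet-style network stands in for CaffeNet here), the final layers of a pretrained model can be replaced and retrained in MATLAB:

% Replace the 1,000-class head of a pretrained network with a new head
% sized for the plant disease classes, then retrain with a small learning
% rate. All values below are assumptions for illustration.
net = alexnet;                          % pretrained stand-in for CaffeNet
layersTransfer = net.Layers(1:end-3);   % drop the old fc/softmax/output layers
numClasses = 2;                         % e.g. healthy vs. diseased, assumed
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses, 'WeightLearnRateFactor', 10, ...
        'BiasLearnRateFactor', 10)      % learn the new head faster
    softmaxLayer
    classificationLayer];
options = trainingOptions('sgdm', 'InitialLearnRate', 1e-4, ...
    'MaxEpochs', 6, 'MiniBatchSize', 32);
netTransfer = trainNetwork(augmentedTrainingSet, layers, options);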

        4. EXPERIMENTAL DETAILS

From the above system, we analysed the diseases of the plants using neural networks. The experimental results show the identification of weeds and plant diseases. The first result illustrates the detection of a healthy plant.

A distortion in the image indicates that the plant is affected by disease; the image is analysed and the result is obtained after image enhancement. Disease detection is carried out on various images, and the weed is also detected with the help of deep learning algorithms.

The next image shows the output of the detected weed obtained using the feature extraction values of the image. Noise is removed from the image and the real output is shown as a diseased leaf. Weed detection is the main motive of this paper, to solve problems in agriculture, so we detect the leaf and keep the disease detection system updated with the help of a convolutional neural network.

REFERENCES

  1. Broich, M., Huete, A., "A spatially explicit land surface phenology data product for science, monitoring and natural resources management applications," Environmental Modelling & Software, 64: 191-204, 2015.

  2. Cleland, E.E., Chuine, I., Menzel, A., Mooney, H.A., Schwartz, M.D., "Shifting plant phenology in response to global change," Trends Ecol. Evol., 22:357-365, 2007.

  3. Srbinovska, M., Gavrovski, C., Dimcev, V., Krkoleva, A., Borozan, V., "Environmental parameters monitoring in precision agriculture using wireless sensor networks," Journal of Cleaner Production, 88:297-307, 2015.

  4. Xiao, X., Li, J., He, H., Fei, T., Zhou, Y., "Framework for phenology analyses from observation data of digital images and meteorological data," IEEE Int. Conf. on Computer Science and Automation Engineering, pages: 373 – 377, 2012.

  5. P. Lottes, M. Hoeferlin, S. Sander, M. Müter, P. Schulze Lammers, C. Stachniss, "An Effective Classification System for Separating Sugar Beets and Weeds for Precision Farming Applications," IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, May 16-21, 2016.

  6. H. Yalcin, "Phenology Monitoring of Agricultural Plants Using Texture Analysis," Signal Processing and Communications Applications Conference (SIU'16), Zonguldak, May 2016. (doi: 10.1109/SIU.2016.7495926)

  7. S. Razavi, H. Yalcin, "Plant Classification using Group of Features," Signal Processing and Communications Applications Conference (SIU'16), Zonguldak, May 2016. (doi: 10.1109/SIU.2016.7496150)

  8. H. Yalcin, "Phenology Monitoring Of Agricultural Plants Using Texture Analysis," Fourth International Conference on Agro- Geoinformatics, Istanbul, 2015.

  9. N. Kumar, P.N. Belhumeur, A. Biswas, D.W. Jacobs, W.J. Kress, I. Lopez, and J.V.B. Soares, "Leafsnap: A computer vision system for automatic plant species identification," in Proc. of the European Conference on Computer Vision (ECCV), 2012.

  10. M. Müter, P. Schulze Lammers, and L. Damerow, "Development of an intra-row weeding system using electric servo drives and machine vision for plant detection," in Proc. of the Agricultural Engineering Conference, 2013.

  11. A.T. Nieuwenhuizen. Automated detection and control of Volunteer potato plants. PhD thesis, Wageningen University, 2009.

  12. T. Ojala and M. Pietikäinen, "Unsupervised texture segmentation using feature distributions," Pattern Recognition, 32:477-486, 1999.

  13. A. Ranganathan and F. Dellaert. Semantic modeling of places using objects. In Proc. of Robotics: Science and Systems (RSS), 2007.
