Homing Pigeon Classification Based on Wing Pattern using Image Processing

DOI : 10.17577/IJERTCONV8IS11006


Prakhyath N Raj

Dept. of ECE,

MIT Mysore, Karnataka, India

Rajesh Nayak M G

Dept. of ECE,

MIT Mysore, Karnataka, India

Raghavendra R

Dept. of ECE,

MIT Mysore, Karnataka, India

Shashank Patel B A

Dept. of ECE,

MIT Mysore, Karnataka, India

Lethan M N, Assistant Professor, Dept. of ECE, MIT Mysore, Karnataka, India

Abstract: This project provides a solution to a real-time problem stated by the fanciers of the Karnataka Racing Pigeon Society (krps.co), a state-government-recognized body. The problem was that fanciers found it difficult to differentiate between the two categories of homing pigeons (long- and short-distance flying birds). From this, further problems arose: diets were not maintained properly, since the two genres have different diet plans, and race performance degraded due to wrong basketing. The wing pattern was chosen as the basis of the solution because any flying body must generate sufficient aerodynamic lift against gravity, and a larger wing generally produces greater lift. Focusing on the wings and classifying the birds by their flight category was therefore practical, and MATLAB was adopted for this project work.

  1. INTRODUCTION

    The homing pigeon is a variety of domestic pigeon derived from the rock pigeon, selectively bred for its ability to find its way home over extremely long distances. The wild rock pigeon has an innate homing ability, meaning that it will generally return to its nest, believed to be achieved using magnetoreception. The science behind a pigeon flying back home has been demonstrated in many cases, but the mechanism the bird uses, namely magnetoreception, is assumed to be true based on research conducted by professors at Northeastern University, Boston, USA. This homing ability made it relatively easy to breed from the birds that repeatedly found their way home over long distances. Flights as long as 1800 km (1,100 miles) have been recorded by birds in competitive pigeon racing. Their average flying speed over moderate distances of 965 km (600 miles) is around 97 km/h (60 miles per hour), and speeds of up to 140 km/h (90 miles per hour) have been observed in top racers over short distances. It is also well established that pigeons capable of flying such long distances were used for communication in ancient times.

    Considering the above to be true, a craze developed for breeding and keeping these pigeons for racing. Pigeon racing grew across the globe irrespective of place, creed, country, or profession; the people who race the birds are called fanciers. This craze has now grown to such an extent that fanciers are found worldwide. Considering India alone, there are about 13,000 fanciers at present, each a member of a club depending on the location of their home. This project is a solution to a real-time problem statement posed by one of these clubs, the KARNATAKA RACING PIGEON SOCIETY; the problem statement is discussed in the following subsections. As soon as young birds are born, they are given a ring to be worn on the leg for identification purposes.

  2. LITERATURE REVIEW

    1. AUTOMATIC FRUITS AND VEGETABLES CLASSIFICATION FROM IMAGES by Anderson Rocha, Daniel C. Hauagge, Jacques Wainer, Siome Goldenstein, New York, USA

      This paper discusses how, given the variety of fruits and vegetables sold and the impossibility of predicting which kinds will be on sale, training must be done on-site by someone with little or no technical knowledge. Therefore, the system must achieve a high level of precision with only a few training examples (e.g., up to 30 images). Often one needs to deal with complex classification problems; in such scenarios, using just one feature descriptor to capture the classes' separability might not be enough, and feature fusion may become necessary. Although normal feature fusion is quite effective for some problems, it can yield unexpected classification results when the different features are not properly normalized and preprocessed. It also has the drawback of increasing the dimensionality of the data, which might require more training examples. The paper presents a unified approach that can combine many features and classifiers. It requires less training and is better suited to some problems than a naïve method in which all features are simply concatenated and fed independently to each classification algorithm. The authors expect this solution to endure beyond the problem solved in their paper.

      The introduced fusion approach is validated using an image data set collected from the local fruits and vegetables distribution center and made public. The data set contains 15 produce categories and comprises 2,633 samples, which are fed as training samples so that the machine can learn the data set; the required image processing techniques are applied, and the output is the classified images of fruits and vegetables.

      In any image processing system, such as classification or segregation of images, there are two phases:

      1. Training phase

      2. Testing Phase

    In the training phase we train the system to understand the sample images based on the data set. Suppose we send in an image of a Granny Smith apple, ask the system to read all the features of that image, and tell the system that it is a Granny Smith apple; then, if any such image is found during testing, the system has to recognize it as a Granny Smith apple.

    The database of images is simply the pile of images stored in memory, captured by a simple camera. Cameras, however, have their own specifications; in this technical work the camera used captures 786 kilopixels. The captured images are downsampled to reduce memory size: the 786-kilopixel capture corresponds to 1024 × 768 rows and columns, and it is downsampled to 307 kilopixels, i.e. 640 × 480. The pile of images is then stored in RGB color space at 8 bits per channel, that is, 8 bits for RED, 8 bits for GREEN and 8 bits for BLUE, so each pixel sums up to 24 bits of data. This is also called the image acquisition phase.
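    As an illustration of this acquisition step, the following is a minimal MATLAB sketch (assuming the Image Processing Toolbox; 'sample_fruit.jpg' is a hypothetical file name) that downsamples a 1024 × 768 capture to 640 × 480 and confirms the 8-bits-per-channel RGB storage:

    img = imread('sample_fruit.jpg');      % e.g. a 768 x 1024 x 3 uint8 capture
    img = imresize(img, [480 640]);        % downsample 1024x768 -> 640x480 (~307 kilopixels)
    assert(isa(img, 'uint8'));             % 8 bits per R, G and B channel = 24 bits per pixel
    disp(size(img));                       % prints 480 640 3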

    In pre-processing there are a few steps that must be carried out to bring the image captured by the camera into a readable format for the MATLAB data processor. In this block there is an important step called background removal. The background-removal algorithm removes the background of the captured image based on the region of interest. Background removal is important because most images contain other objects in the background (another fruit, a human, etc.) that are not required for further processing, so the background is removed according to the background-removal algorithm. The system is fed the image of a Granny Smith apple and asked to run the pre-processing so that it removes the background and extracts only the object of interest. After pre-processing, the image is sent for feature extraction. A few important features, such as the diameter of the apple, the amount of specular highlights, the diameter or radius of any patches on the apple, and the size of the apple, are extracted by the system and stored among the trained features.
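    A minimal background-removal sketch is given below, assuming a plain bright background and the Image Processing Toolbox; it is not the authors' exact algorithm, only one simple way to isolate the region of interest ('sample_fruit.jpg' is again a hypothetical file name):

    img  = imread('sample_fruit.jpg');           % hypothetical input capture
    gray = rgb2gray(img);                        % colour image to grayscale
    mask = ~imbinarize(gray);                    % object assumed darker than the background
    mask = bwareafilt(mask, 1);                  % keep only the largest connected region (the ROI)
    mask = imfill(mask, 'holes');                % close any gaps inside the object
    fg   = img .* uint8(repmat(mask, [1 1 3]));  % zero out every background pixel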

    Trained features are stored under the name given by the analyst during training, which is called the ground truth. Ground truth is the actual classification label given by the analyst, and it is used to prove or disprove the hypothesis. For example, an image is stored as Image1.jpg Apple Granny Smith, where Image1 is the file name, jpg is the file format, and Apple Granny Smith is the ground truth given by the analyst. All the features of the image in Fig. 1 are read by the system and stored under the name Apple Granny Smith, which is what is meant by saying the system, or the machine, has learnt. It is not possible for the system to recognize a class after being trained on just one image, which is why the authors took 2,633 sample images covering only 15 different genres of fruits and vegetables. Likewise, all the images are read at different angles, intensity values, counts, and so on.

    The image is classified using one of many classification techniques, such as Manhattan distance, an SVM classifier, or the nearest-neighbour technique; this paper places most stress on the nearest-neighbour technique.
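    For illustration, a nearest-neighbour decision over already-extracted features can be sketched as follows (hypothetical variables: each row of trainFeat and testFeat is one sample's feature vector and trainLabels holds the ground-truth class names; pdist2 requires the Statistics and Machine Learning Toolbox):

    d = pdist2(testFeat, trainFeat, 'cityblock');  % Manhattan (city-block) distance to every trained sample
    [~, idx] = min(d, [], 2);                      % index of the nearest trained sample for each test row
    predicted = trainLabels(idx);                  % each test sample inherits that sample's class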

    Blocks of flow

    Both the training and the testing phases pass through the same chain of blocks: browsing the image, pre-processing (conversions and filters), segmentation (extracting the ROI), boundary detection, and feature extraction (bounding box). The features of the trained samples and of the testing sample are then fed to a binary SVM classifier, and the predicted class is displayed.

    Methodology and process flow

    The adjacent figure shows a sample image of a wing taken for processing. The wing of the bird is held against a white KG cardboard sheet by the fancier. While the image is taken, the ring number of the bird is also noted down. Both male and female pigeon wings are captured and stored in memory under the name of the fancier.

  3. HARDWARE IMPLEMENTATION

    In the hardware implementation we capture images of the pigeons' wings against a white background, which makes it easy to detect the edges of the region of interest. The fancier was requested to stretch the wing of the bird so that the image captured more detail; since we wanted all the information in the image to be recorded, the image had to be as detailed as possible. We therefore stuck the white background to a wall and asked the fancier to hold the wing of the bird against it. We succeeded in collecting images from three fanciers. The research would have been more accurate if we had had the opportunity to take more samples, and the system would certainly have learnt the features better with a larger number of samples; since that was not an option, we gathered around 68 samples from the three fanciers. All three fanciers are members of the Mandya Homing Pigeon Society (MHPS), which is a branch of the KRPS; the MHPS has a limited number of fanciers of its own, and likewise there are different clubs in different districts of the state.

    The three fanciers are: Mr. Lethan M N (Mandya), Mr. Giri Gowda (Mandya), and Mr. Raju (Nagamangala).

    Mr. Lethan M N is our project guide and an assistant professor at MIT Mysore. We contacted all these fanciers, took the images, and also collected the ground truth; they knew whether a bird was a short- or a long-distance bird based on the races it had flown and its placement. On that basis we took the images of the birds from all these fanciers and noted down the ground truth. After capturing the images, we sorted them according to the names of the fanciers and stored them in an accessible path on the PC.

  4. SOFTWARE IMPLEMENTATION

    The software implementation here plays a major role since the project is completely based on the image processing platform. As explained in the block diagram section we definitely have two main phases:

    1. Training phase

    2. Testing phase

    In the training and testing phases there are a few common processing techniques that have to be applied to carry out the classification.

    There are also other processing techniques that belong to either the training or the testing phase alone.

    Most of the processes need a path from which to pick up the data; the default directory we have used is E:\MIT\Project\Pigeon\Bird feathers detection\Dataset, and this path takes the system into the directory where the sample images are stored.
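    A minimal sketch of walking this directory is shown below (the path is the one named above; the .jpg extension is an assumption):

    datasetDir = 'E:\MIT\Project\Pigeon\Bird feathers detection\Dataset';
    files = dir(fullfile(datasetDir, '*.jpg'));        % list every JPEG sample in the dataset folder
    for k = 1:numel(files)
        img = imread(fullfile(datasetDir, files(k).name));
        % ... pre-processing and feature extraction follow here ...
    end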

    The main parts of the flow and of the code are discussed below.

    In the training phase, the image is read from a directory and then pre-processed. Pre-processing includes image resizing; after resizing, the image is converted from RGB to grayscale and then to a binary image. A black-and-white area filter is applied to the binary image to remove noise, and the holes in the image are then filled so that it contains just two colours (white and black). The filled image is segmented based on the region of interest, and the boundary is detected and displayed using the jet colormap function in MATLAB.

    Later, features of the boundary-detected image such as its length, width and height are calculated using a bounding box: the bounding box is drawn around the boundary-detected image, the required features are measured in pixels, and the system then learns those features.
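    The training-phase steps just described can be sketched in MATLAB as follows (a minimal sketch assuming the Image Processing Toolbox, with a hypothetical file name and noise threshold; it is not the project code verbatim):

    img  = imresize(imread('wing_sample.jpg'), [480 640]);   % read and resize the wing image
    gray = rgb2gray(img);                                    % RGB -> grayscale
    bw   = ~imbinarize(gray);                                % grayscale -> binary (wing darker than the white sheet)
    bw   = bwareaopen(bw, 200);                              % area filter: remove small noise blobs
    bw   = imfill(bw, 'holes');                              % fill holes so only two colours remain
    stats = regionprops(bw, 'BoundingBox', 'Area');          % bounding box around each segmented region
    [~, biggest] = max([stats.Area]);                        % keep the largest region as the wing ROI
    bb = stats(biggest).BoundingBox;                         % [x y width height] in pixels
    features = [bb(3) bb(4) stats(biggest).Area];            % width, height and area as learnt features
    save('birddata.mat', 'features');                        % learnt features stored in birddata.mat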

    The learnt features are stored in birddata.mat, which is a MATLAB data file.

    Later, in the testing phase, the same feature-extraction process has to be performed so that a comparison can be made; the entire feature-learning flow is therefore repeated for the test image, and the extracted features are stored in testeddata.mat, which is another MATLAB data file.

    Classification here is done using a supervised binary SVM classifier, which classifies based on support vectors. All the learnt features are in pixel format; these pixels are converted into vectors by taking a histogram, and the vectors are plotted on a 2-D graph called the SVM plot. If the features are 1-D, a polynomial kernel is used to map them from 1-D to 2-D.
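    A minimal binary SVM sketch along these lines is shown below, assuming the Statistics and Machine Learning Toolbox; trainFeat, trainLabels and testFeat are the hypothetical feature matrices and ground-truth labels ('Short' / 'Long') built during training and testing:

    svmModel = fitcsvm(trainFeat, trainLabels, ...
                       'KernelFunction', 'polynomial');    % polynomial kernel, as described above
    predicted = predict(svmModel, testFeat);                % returns 'Short' or 'Long' for each test sample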

    The left-hand figure shows the SVM plot, where circles are the trained vectors and triangles are the vectors of the testing sample. The right-hand figure shows how a separating hyperplane is drawn to differentiate between the two classes. Additional lines parallel to the hyperplane are drawn through the defining vectors, which are called the support vectors; hence the name Support Vector Machine. The line below the hyperplane is called the d− margin and the line above it the d+ margin. The distance from the d− margin to the hyperplane and from the d+ margin to the hyperplane are calculated separately as Manhattan distances, called the d− and d+ distances.

    Equations

    If d− < d+(k), then the sample falls into the class Short;
    else if d− < d+(l), then the sample falls into the class Long,

    where k is the nearest vector of a short-distance bird in the SVM plot, and l is the nearest vector of a long-distance bird in the SVM plot.

  5. RESULT

    The dialogue box above shows the result of the hypothesis: a bird's wing image was read, and the bird was classified as a short-distance bird.

  6. CONCLUSION AND FUTURE WORK

    This technical work has addressed the problem statement given by the fanciers of krps.co (DRB1/SOR/71/2019-2020). The fanciers are now capable of differentiating between the two genres of birds and can keep up the proper diet for each.

    Since we have considered only the wing pattern, there are other factors that also affect the classification; even the body style, the weight, and the rump of the bird matter greatly.

    Not all fanciers have a PC with MATLAB installed, so porting this entire project to an Android application (APK) would make it still easier for the fanciers to use.

    Also, using a neural network could extract attributes more accurately than the feature learning done in MATLAB.

    There are two portions of feathers in a pigeon wing. According to the fanciers, the feathers below the white line, towards the body of the bird, are the primary feathers, and the feathers above the white line, away from the body, are the secondary feathers. The fanciers put forward a hypothesis that if the primary and secondary feathers are in equal proportion the bird is a short-distance bird, and if the secondary proportion is greater than the primary the bird is a long-distance bird. This paper has simply taken the entire height and width of the wing instead.
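    That hypothesis can be written out as a small sketch (not implemented in this paper; primaryLen, secondaryLen and tol are hypothetical measured feather spans and a tolerance):

    if secondaryLen > primaryLen + tol
        class = 'Long';    % secondary portion clearly larger than the primary
    else
        class = 'Short';   % primary and secondary roughly equal
    end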

  7. ACKNOWLEDGMENT

Acknowledged by the UGC as the first technical paper in this area of research and personally recognized by Prof. Dr. Mahesh Rao (USA), HOD, ECE Department, MIT Mysore; Prof. Lethan M N (also a member of MHPS); Mr. Ravi, President of krps.co (DRB1/SOR/71/2019-2020); Mr. Chetan B (KRPS); Mr. Raghupathi (KRPS); Mr. Giri Gowda, President of the Mandya Homing Pigeon Society (MHPS); and Harshith (MHPS).

REFERENCES

  1. Automatic fruits and vegetables classification from images, by Anderson Rocha, Daniel C. Hauagge, Jacques Wainer, Siome Goldenstein, New York, USA

  2. Image segmentation techniques, by Rajeshwar Dass, Priyanka, Swapna Devi, Department of ECE, DCR University of Science & Technology, Murthal, Sonepat, Haryana, India

  3. A detailed review of feature extraction in MATLAB, by Gaurav Kumar, Department of Information Technology, Panipat Institute of Engg. & Technology, Panipat, Haryana, India

  4. Concealed weapon detection using image processing, by Bhavna Khajone, Prof. V. K. Shandilya, International Journal of Scientific & Engineering Research, Volume 3, Issue 6, June 2012

  5. Familiar route loyalty implies visual pilotage in the homing pigeon, by D. Biro, J. Meade

  6. An edge detection approach to investigating pigeon navigation, by K. K. Lau, S. Roberts

  7. Exploiting machine learning for automatic semantic feature assignment, by K. Bilek, V. Kubon

  8. Image segmentation with a bounding box prior, by P. Kohli, C. Rother, V. Lempitsky

  9. Multiclass and binary SVM classification: implications for training and classification users, by A. Mathur, G. M. Foody

  10. A double-threshold image binarization method based on edge detector, by Q. Chen, Q. Sun, P. A. Heng

  11. A theory based on conversion of RGB image to gray image, by T. Kumar, K. Verma

  12. Image acquisition method and apparatus, by S. E. Reichenbach, S. K. Park

  13. EXPGUI, a graphical user interface for GSAS, by B. H. Toby, Journal of Applied Crystallography

  14. MATLAB: A language for parallel computing, by G. Sharma, J. Martin, International Journal of Parallel Programming

  15. Automatic identification system of silkworm cocoon based on computer vision method, School of Information Engineering, Harbin University, Harbin 150086, Heilongjiang, China; School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, Heilongjiang, China

  16. Parameshachari B. D. et al., Secure transfer of images using pixel level and bit level permutation based on knight tour path scan pattern and Henon map, 1st International Conference on Recent Trends in Electronics & Communication Engineering (ICRTECE), organized by REVA University in association with TIE, 11-12 June 2020
