Improving Forging Inspection Efficiency through Image Processing

DOI: 10.17577/IJERTV3IS041663


Bhargav R. Dave, PG Research Scholar
Department of Electronics and Communication, RK University, School of Engineering
Rajkot, India

Dharti R. Domadiya, Assistant Professor
Department of Electronics and Communication, RK University, School of Engineering
Rajkot, India

Abstract–In the field of morphology the basic components are the raw image and the subsequent reference image. Taking image recognition to the next level and introducing artificial intelligence into the process, this paper explains the use of a line laser and a camera with an empirical technique to detect differences in the dimensions of an object at nearly 1200 °C.

Keywords– Empirical technique, image detection, MATLAB, edge detection technique, machine vision

  1. INTRODUCTION

    Edge detection is a fundamental operation in image processing, machine vision and computer vision. It refers to a set of mathematical methods that identify points in a digital image at which the image brightness changes sharply or has discontinuities. The points at which the brightness changes sharply or moderately are typically organized into a set of curved line segments termed edges. Edge detection methods are broadly categorized into two groups: search based and zero crossing.
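
    As a minimal illustration of these two categories, assuming the Image Processing Toolbox and a hypothetical input image part.png, the built-in edge function offers both a gradient-based (search) detector and a Laplacian-of-Gaussian (zero-crossing) detector:

    % Sketch only: compare a search-based and a zero-crossing edge detector.
    I  = rgb2gray(imread('part.png'));   % hypothetical captured image
    E1 = edge(I, 'sobel');               % search-based: threshold the gradient magnitude
    E2 = edge(I, 'log');                 % zero-crossing: sign changes of the Laplacian of Gaussian
    imshowpair(E1, E2, 'montage');       % view both edge maps side by side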

    A new method had to be derived to check the dimensions of a forged object, since infrared or direct visualization is not possible due to the high temperature of the object and the moving assembly line with numerous sources of noise.

    The empirical method is developed through rigorous experiments with the objects using laser surface mapping and range-based mapping, which yields the dimensions of the object in two dimensions. An efficient approach is presented for modeling a 3D surface map of an object employing an un-calibrated camera. A Class II line laser helps in detecting the object at high temperature. When the image is decomposed by bi-dimensional empirical mode decomposition, the first result image gives a very good characterization of the edges, or in this case of the differences in pixel values with respect to the reference image. After extracting the edge pixels from the first IMF image by applying a suitable threshold value, the obtained edge image is as clear as the edge images produced by other methods. Simulation results with real images demonstrate the efficacy of the proposed algorithm for edge detection.

    imabsdiff is the MATLAB function used to compute the differences between two pictures taken from the camera. After comparing the captured images with a reference image, which is assumed to have perfect dimensions and full accuracy, we obtain the maximum value of the pixel-wise differences.
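
    A minimal sketch of this comparison step, assuming the captured and reference images have the same size and are stored in hypothetical files captured.png and reference.png:

    % Pixel-wise comparison of the captured image against the reference.
    ref   = rgb2gray(imread('reference.png'));   % image of a part with accepted dimensions
    test  = rgb2gray(imread('captured.png'));    % image of the part under inspection
    d     = imabsdiff(test, ref);                % per-pixel absolute difference
    score = sum(d(:));                           % total deviation from the reference
    worst = max(d(:));                           % largest single-pixel difference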

    Figure 1 Laser projection for reference coordination

    The x-position of the laser line is determined by finding the column with the most intense measure of red light for each row. The distance between the camera and the laser line gives the x-coordinate for each y-coordinate.

    This is an effective approach because, even if the laser angle is slightly inaccurate, the error is the same in every recorded image; it only causes an anamorphic transformation of the x-coordinates, which can easily be corrected for later on.
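
    A minimal sketch of this per-row peak search, assuming a hypothetical camera frame scan.png in which the red channel carries the laser line:

    % Extract the x-position of the laser line for every image row.
    img = imread('scan.png');
    red = double(img(:, :, 1));         % red channel of the RGB frame
    [~, xPos] = max(red, [], 2);        % column of peak red intensity in each row
    y = (1:size(img, 1))';              % one x-coordinate for every y-coordinate
    profile = [xPos, y];                % (x, y) pairs describing the scanned profile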

    Deriving a generic algorithm for ease of use in MATLAB provides stability in the construction of the system. The algorithm is as follows:

    % Calibration phase: grab reference snapshots of parts with accepted
    % dimensions and track the maximum summed pixel difference ma, which
    % becomes the acceptance threshold (ma is assumed to be initialised to 0,
    % and xprev to hold the previous reference snapshot; the loop over the
    % reference parts is omitted here, as in the original listing).
    obj = videoinput('winvideo', n, 'RGB24_2304x1728');
    xn = getsnapshot(obj);
    xn = rgb2gray(xn);
    cn = imabsdiff(xn, xprev);    % difference between consecutive reference snapshots
    zn = sum(sum(cn));
    if (ma < zn)
        ma = zn;
    end
    display(ma)

    % Inspection phase: grab an image of the part under test and compare it
    % with the reference image xn.
    obj = videoinput('winvideo', 1, 'RGB24_2304x1728');
    i = getsnapshot(obj);
    i = rgb2gray(i);
    p = imabsdiff(i, xn);
    q = sum(sum(p))
    disp('output:-')
    if q > ma
        disp('They have flaws and cannot be moved ahead in the assembly line')
    else
        disp('They are allowed')
    end

    Figure 2 Empirical Technique algorithm for checking the dimensions of object and displaying results

  2. ISSUES

    The very first problem is the closed environment in a forging industry and the moving assembly line with its noise. A dark environment is created to project the laser onto the red-hot object which has just been forged.

    In order to simplify the task at the experimental level and to eliminate the hazards of hot elements, we use a notebook for our scanning process, as shown in the image below.

    Figure 3 Line laser projected on a book

    One major problem is measuring in three dimensions: with this system one is only able to obtain results in two dimensions. If results are needed in three dimensions, the system must be deployed twice, in appropriate directions.

  3. ELUCIDATION

    To avoid complicating the task for the end user, a simple GUI model is developed in MATLAB in which one simply places the captured images in the given space and obtains results with a few clicks. The same is displayed in the two figures below, followed by a small sketch of such an interface.

    Figure 4 Structure of GUI model for ease

    Figure 5 GUI model used by an assembly line worker
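
    A minimal sketch of such an interface (not the exact GUI used here), assuming a stored reference image reference.png and a calibration threshold ma obtained as in Figure 2; the assembly line worker selects a captured image and receives a pass/fail message:

    % One-button inspection GUI: select an image, compare against the reference.
    function inspection_gui
        f = figure('Name', 'Forging Inspection', 'NumberTitle', 'off');
        uicontrol(f, 'Style', 'pushbutton', 'String', 'Check part', ...
                  'Position', [20 20 120 40], 'Callback', @checkPart);

        function checkPart(~, ~)
            [fname, fpath] = uigetfile({'*.png;*.jpg;*.bmp'}, 'Select captured image');
            if isequal(fname, 0), return; end            % user cancelled
            test = rgb2gray(imread(fullfile(fpath, fname)));
            ref  = rgb2gray(imread('reference.png'));    % assumed reference image, same size as test
            ma   = 5e5;                                  % assumed threshold from calibration
            q    = sum(sum(imabsdiff(test, ref)));
            if q > ma
                msgbox('They have flaws and cannot be moved ahead in the assembly line');
            else
                msgbox('They are allowed');
            end
        end
    end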

  4. RESULT ANALYSIS

  5. CONCLUSION

With this technology we can use a simple edge detection technique, combined with artificial intelligence, a line laser and an un-calibrated camera, to determine the dimensions of a red-hot forged object.

