Vision Enhancement of Night Surveillance Robot using Re-Configurable Computing

DOI : 10.17577/IJERTV3IS031038


L. M. I. Leo Joseph

Research Scholar, Sathyabama University, Chennai, India.

S. Rajarajan

Principal, Sri Aravindar Engineering College, Vanur, Villupuram Dt, India.

Abstract: This paper outlines an efficient FPGA-based hardware design for night vision enhancement in image and video processing. The approach is a reconfigurable computing technique that works effectively for images and videos captured in any kind of environment. To meet speed and area constraints, it is important to quantify the reduction in processing time, as well as the FPGA resources required, when a component of the image/video processing system is embedded on a hardware platform such as an FPGA. A flexible field-programmable gate array lets the image processing application be developed so that the same logic substrate is reconfigured and reused by several custom accelerators at different illumination levels by means of a sequential computational chain. The results obtained with this technology reveal that a reconfigurable FPGA meets both the real-time and the parallel, compute-intensive demands of the vision enhancement process.

Keywords: Vision Enhancement, Edge Detection, FPGA, Snakes and Ladders Algorithm, Blur Identification and Elimination, Illumination, User Defined Modules, Threshold Detector and Estimator, Computational Chain, Accelerators.

  1. INTRODUCTION

    There are many research topics in the field of video enhancement, such as removing noise from videos, highlighting specified features, and improving the appearance or visibility of video content. In many applications, the acquired video is not clear enough to be processed; low illumination and noise are among the main reasons. However, high-quality videos are required in a wide range of applications, including video surveillance and video tracking. Thus, effective video enhancement techniques, which enhance the original dim video obtained by an ordinary camera, are highly sought after.

    A lot of work has been done on enhancing low-exposure videos. One important dim-video enhancement method was provided by Bennett et al. [1]. They proposed a VEC (Virtual Exposure Camera) model to process underexposed, low dynamic range videos, using an ASTA (Adaptive Spatio-Temporal Accumulation) filter to reduce noise and a tone mapping approach to enhance low-range videos. Although their method is effective and outstanding, the ASTA filter requires a great deal of computation and many parameters must be chosen, so it is not suitable for real-time processing. To enhance the quality of low-illumination videos in real-time applications, we simplify this approach and develop an intelligent video processing chain to enhance night images. This intelligent chain is processed and activated by an FPGA (Field Programmable Gate Array) processor, which analyzes the image and selects the suitable module for vision enhancement. The rest of the paper is organized as follows. The role of the FPGA processor is discussed in Section II. The development of the proposed intelligent processing chain is detailed in Section III. Experimental results are provided in Section IV, and Section V concludes the paper with final remarks.

  2. FPGA PROCESSOR

    The mechanisms described in the previous section for controlling design complexity have ramifications for the physical design of the system. The processing element modules are instanced within a reconfigurable FPGA fabric. The structure of this fabric needs to support variable numbers and combinations of various-sized modules. Since modules are fully placed and routed internally at design time, the completed configurations must be relocatable within the fabric. Mechanisms for connecting PE configurations to the buses and to external RAM must also be provided.

    The physical system structure is illustrated in Figure 1. The processing elements are implemented as partial configurations within the FPGA fabric. The PEs occupy the full height of the fabric, but may vary in width by discrete steps.

    Fig. 1. A diagram representing the physical structure of the proposed system, with the reconfigurable fabric (shaded light grey) configured into five processing elements

    The structure and nature of the reconfigurable fabric is based on the Virtex-II Pro FPGA family from Xilinx, Inc. It is heterogeneous, incorporating not only CLBs but also RAM block elements and other dedicated hardware elements such as multipliers. However, it exhibits translational symmetry in the horizontal dimension; the choice of a one-dimensional fabric simplifies module design, resource allocation and connectivity.

    The global bus is not constructed from FPGA primitives; it has a dedicated wiring structure, with discrete connection points to the FPGA fabric. The advantage of this strategy is that the electrical characteristics of the global bus wiring can be optimized, leading to high speeds and low power [2, 3]. In addition, the wiring can be denser than could otherwise be achieved. Each processing element must have chain bus connections to the neighboring PEs; this is accomplished through the use of virtual sockets, implemented as hard macros. The chain bus signals are routed as antenna wires to specified locations along the left and right edges of the module. When two configurations are loaded adjacently into the array, these wires are aligned, and the signal paths may be completed by configuring the pass transistors (programmable interconnect points) separating the wires. Thus, each module provides sockets into which other modules can plug. Similar ideas have been proposed previously [4, 5], although the connection point chosen in previous work is a CLB programmed as a buffer. A similar concept is used in connecting processing modules to the external RAM banks. Since the processing elements are variable-sized and relocatable, it is not possible to have direct-wired connections to the external RAM. The solution is to wire the external RAM to routing modules, which can then be configured to route the RAM signals to several possible socket points.

    This allows the registration between the RAM routing module and the processing element to be varied by discrete steps, within limits. If external RAM is not required by a particular processing element, such as PE 2 in Figure 1, the RAM resources may be assigned to an adjacent PE, depending on the relative placements.

    The transfer and storage of data are significant sources of power consumption in custom computations [6], and so warrant specific attention. In the preceding system, Sonic, data transfer is systolic: each clock cycle, one pixel value is clocked into the engine and one is clocked out. This limits the pixel-level parallelism possible within the engine and constrains algorithm design. In particular, data reuse must be handled explicitly within the engine itself, by storing pixel values in local registers. This becomes a significant issue for the engine design when several lines of image data must be stored, which can total tens of kilobytes.
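    To make this data-reuse constraint concrete, the following Python sketch is a behavioural model we provide for illustration (not code from the Sonic system): an engine that accepts one pixel per clock and must keep two full line buffers internally just to see a vertical 3-pixel neighbourhood.

```python
from collections import deque

class SystolicEngine:
    """Toy model of a purely systolic engine: one pixel is clocked in
    and one clocked out per cycle, so any vertical data reuse must be
    handled inside the engine by cascaded line buffers. For a
    1024-pixel, 8-bit line each buffer costs 1 KB, so multi-line
    windows quickly reach tens of kilobytes of local storage."""

    def __init__(self, line_width):
        # Two full-line buffers expose the pixels one and two lines
        # above the incoming pixel (raster-scan order assumed).
        self.above1 = deque([0] * line_width, maxlen=line_width)
        self.above2 = deque([0] * line_width, maxlen=line_width)

    def clock(self, pixel_in):
        """One clock cycle: accept one pixel, emit one filtered pixel."""
        top = self.above2[0]          # pixel two lines above
        mid = self.above1[0]          # pixel one line above
        self.above2.append(mid)       # cascade the evicted pixel upward
        self.above1.append(pixel_in)
        return (top + mid + pixel_in) // 3   # 3-tap vertical box filter
```

    Feeding a raster-scanned frame through clock() one pixel per cycle yields a streaming vertical filter; the point is that both line buffers live inside the engine itself, which is exactly the storage burden the proposed architecture removes.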

    In the proposed architecture the input stream buffer efficiently deals with data reuse. Being constructed from embedded RAM block elements (rather than from CLBs), a high bit density can be achieved. Image data is streamed into the buffer in a serial, FIFO-like manner, filling it with several lines of a frame. The engine may access any valid pixel entry in the buffer; addressing is relative to the pixel at the front of the queue. Buffer space is freed when the engine indicates it has finished with the data at the front of the queue.

    This system enables greater design flexibility than a purely systolic data movement scheme while constraining the data access pattern sufficiently to achieve the full speed and power benefits of serial streaming data transfer.

    This is particularly beneficial when data are sourced from external RAM, where a sequential access pattern can take advantage of the burst mode transfer capability of standard RAM devices.
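    The behaviour described above can be summarised in a short Python sketch; the class and method names are our own illustrative choices, not an API from the paper.

```python
class StreamBuffer:
    """Behavioural sketch of the proposed input stream buffer. Data
    arrives serially in FIFO fashion and is freed explicitly, but the
    engine may read any valid entry, addressed relative to the pixel
    at the front of the queue."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []

    def push(self, pixel):
        """Serial, FIFO-like fill; returns False (stall) when full."""
        if len(self.entries) >= self.capacity:
            return False
        self.entries.append(pixel)
        return True

    def read(self, offset):
        """Random access, addressed relative to the front of the queue."""
        return self.entries[offset]

    def free(self, count):
        """Engine signals it has finished with the front entries."""
        del self.entries[:count]
```

    With several lines buffered, an engine can fetch a two-dimensional neighbourhood with read(0), read(line_width) and so on, then call free(1) to advance by one pixel, while the buffer's external fill pattern stays strictly sequential and burst-friendly.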

    The input and output stream buffers are physically constructed from a number of smaller RAM elements for two reasons. Firstly, a wide data-path bit-width between the buffers and the engine can be achieved by connecting the RAM elements in parallel, enabling several pixels to be processed in parallel within the PE.

    The second important benefit is the ability to rearrange the input buffer RAM elements into two (or more) parallel stream buffers when the engine requires more than one input data stream, such as in a merge operation. Likewise, the output buffer may be subdivided into several output streams if the engine produces more than one output. We label each stream buffer input or output from an engine a port. In addition to allowing efficient data reuse and fine-grained parallelism, the stream buffers create flexibility in the transfer of data over the global bus. Instead of a systolic, constant-rate data flow, data can be transferred from an output port buffer of one PE to the input port buffer of another PE in bursts, which allows the global bus to be shared between several logical communication channels.
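    As a rough illustration of the port mechanism, the helper below splits a buffer's RAM block elements among several parallel stream buffers; the even split and the function name are assumptions made for the sketch.

```python
def partition_ports(ram_blocks, ports):
    """Illustrative helper: divide the buffer's RAM block elements
    evenly among the engine's input (or output) ports. `ram_blocks`
    is a list of block identifiers; `ports` is how many parallel
    streams the engine needs (e.g. 2 for a merge operation)."""
    per_port = len(ram_blocks) // ports
    return [ram_blocks[i * per_port:(i + 1) * per_port]
            for i in range(ports)]
```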

    The arbitration between the various logically concurrent channels is handled by a reconfigurable arbitration unit within the system controller. This enables a range of arbitration strategies to be employed depending on the application, with the objective of preventing processing stalls from an input buffer under-run or output buffer overrun.
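    The paper leaves the arbitration strategy open, so the following sketch shows just one plausible policy, a round-robin scan over the logical channels; the function and its arguments are illustrative assumptions, not part of the described system.

```python
def round_robin_grant(requests, last_granted):
    """One plausible policy for the reconfigurable arbitration unit:
    scan the request lines starting just after the last granted
    channel and grant the first requester. `requests` holds one
    boolean per logical communication channel."""
    n = len(requests)
    for step in range(1, n + 1):
        channel = (last_granted + step) % n
        if requests[channel]:
            return channel
    return None   # no channel currently needs the global bus
```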

  3. INTELLIGENT PROCESSING CHAIN

    In this article, as a trial approach, we implement two modules in the processing chain. The details of the two modules are discussed in the following sections.

    1. First module

      This module uses a background subtraction technique in which an image taken in good daylight conditions serves as the reference image. The night image captured by the camera is converted into a grey-scale image, the dark background of the night image is subtracted, and the objects present in the night view are extracted.

      The extracted object is then fused with the background of the reference image, yielding the object captured at night along with a daylight background.

      The module is based on the correlations between the main frame and the reference frame.

      Fig. 2. A diagram representing the framework of the algorithm

      The block diagram of the first module is shown in Fig. 2.
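      A minimal NumPy sketch of this module is given below; the median-based dark-background estimate, the fixed threshold and the binary fusion mask are our illustrative assumptions rather than details from the paper.

```python
import numpy as np

def enhance_night_frame(night_rgb, day_ref_rgb, threshold=40):
    """Sketch of the first module: extract objects from the night
    frame by background subtraction against a dark background model,
    then fuse them into the daylight reference background. The
    threshold value and binary mask are illustrative choices."""
    # Convert the night frame to grey scale.
    night_grey = night_rgb.mean(axis=2).astype(np.uint8)
    # Dark background model: here simply the median intensity of the
    # night frame (an assumption; any static background estimate works).
    background = np.median(night_grey)
    # Object mask: pixels that differ enough from the dark background.
    mask = np.abs(night_grey.astype(int) - background) > threshold
    # Fuse: keep the daylight reference, overwrite masked pixels with
    # the extracted night objects.
    fused = day_ref_rgb.copy()
    fused[mask] = np.stack([night_grey] * 3, axis=2)[mask]
    return fused
```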

    2. Second Module

      This module uses the snakes and ladders algorithm, in which ladder contours indicate regions of common direction, i.e. pixels in the main image that have the same intensity as the corresponding pixels in the reference image. Snake contours indicate regions of different direction, i.e. pixels in the main image whose intensity differs from that of the reference image. There is a small but non-significant increase in sensitivity once the carriers of the elements defining snake contours are set in motion, and observers remain significantly more sensitive to snakes than to ladders in all conditions. The pixel orientation is shown in Fig. 3. The computational load per frame is reduced because the technique analyses only the disturbed (snake) pixels instead of the entire frame. The snake contour is taken as the reference, and the threshold of each pixel identified as a snake is raised to the pixel intensity level of the main frame. The main drawback of the snakes and ladders technique is its longer overall analysis time, but its accuracy is good.

      Fig. 3. Representation of pixel orientation using the snakes and ladders technique
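      The following Python sketch illustrates the pixel classification behind this module; the tolerance value and the rule used to raise snake pixels to the main frame's intensity are illustrative assumptions.

```python
import numpy as np

def snakes_and_ladders(main_grey, ref_grey, tol=10):
    """Sketch of the second module: pixels whose intensity matches
    the reference frame (within `tol`) form 'ladder' regions and are
    left alone; mismatching 'snake' pixels are raised to the main
    frame's intensity level. `tol` and the adjustment rule are
    illustrative assumptions."""
    diff = main_grey.astype(int) - ref_grey.astype(int)
    snakes = np.abs(diff) > tol          # disturbed pixels only
    out = ref_grey.copy()
    # Analyse only the snake pixels: set them to the intensity level
    # of the main frame, leaving ladder regions untouched.
    out[snakes] = main_grey[snakes]
    return out, snakes
```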

  4. EXPERIMENTAL RESULTS

A real-time night video enhancement system based on reconfigurable computing has been developed. The system is implemented on standard PC hardware (Pentium IV at 3.0 GHz). The algorithm has been tested in various environments, and the performance is satisfactory. We show an example of an outdoor scene combined from a daytime background and a night picture (see Fig. 4). Notice that the content of the dark area is correctly extracted and fused in the result of the first module (see Fig. 4c), while the enhancement using the snakes and ladders technique is shown in Fig. 4d.

Fig. 4. Enhanced results by reconfigurable computing: a) daylight image, b) night vision, c) result of module 1, d) result of module 2, e) histogram of the daylight image, f) histogram of module 1, g) histogram of module 2

The histogram comparison results for video enhancement using reconfigurable computing are shown in Figs. 4e, 4f and 4g. Furthermore, we have carried out many experiments using different techniques placed as modules in the intelligent computing chain, and the results show that this reconfigurable computing approach performs well.

CONCLUSIONS

A night vision enhancement algorithm using reconfigurable computing has been presented, which extracts and fuses meaningful information from multiple images.

A real-time night video enhancement system based on the presented approach has been developed and tested with long videos in various environments. Experimental results demonstrate that the system is highly cost-effective computationally. Moreover, the enhanced video is visually significant and contains more information than the original night vision images.

REFERENCES

    1. Bennett, E. P., McMillan, L.: Video enhancement using per-pixel virtual exposures. ACM Transactions on Graphics (2005)

    2. Benini, L., De Micheli, G.: Networks on chips: A new SoC paradigm. IEEE Computer 35 (2002) 70-78

    3. Dally, W. J., Towles, B.: Route packets, not wires: On-chip interconnection networks. In: Design Automation Conference. (2001)

    4. Dyer, M., Plessl, C., Platzner, M.: Partially reconfigurable cores for Xilinx Virtex. In: Field-Programmable Logic and Applications. (2002)

    5. Horta, E. L., Lockwood, J. W., Taylor, D. E., Parlour, D.: Dynamic hardware plugins in an FPGA with partial run-time reconfiguration. In: Design Automation Conference. (2002)

    6. Soudris, D., Zervas, N. D., Argyriou, A., Dasygenis, M., Tatas, K., Goutis, C., Thanailakis, A.: Data-reuse and parallel embedded architectures for low-power, real-time multimedia applications. In: International Workshop on Power and Timing Modeling, Optimization and Simulation. (2000)

    7. Adelson, E. H., Movshon, J. A. (1982). Phenomenal coherence of moving visual patterns. Nature, 300, 523-525.

    8. Alais, D., Blake, R., Lee, S. H. (1998). Visual features that vary together over time group together over space. Nature Neuroscience, 1, 160-164.

    9. Anderson, S. J., Burr, D. C. (1987). Receptive field size of human motion detection units. Vision Research, 27, 621-635.

    10. Bex, P. J., Metha, A. B., Makous, W. (1998). Psychophysical evidence for a functional hierarchy of motion processing mechanisms. Journal of the Optical Society of America A, 15, 769-776.

    11. Bex, P. J., Metha, A. B., Makous, W. (1999). Enhanced motion aftereffect for complex motions. Vision Research, 39, 2229-2238.

    12. Castet, E., Lorenceau, J., Shiffrar, M., Bonnet, C. (1993). Perceived speed of moving lines depends on orientation, length, speed and luminance. Vision Research, 33, 1921-1936.

    13. Castet, E., Zanker, J. (1999). Long-range interactions in the spatial integration of motion signals. Spatial Vision, 12, 287-307.

    14. Dakin, S. C., Hess, R. F. (1998). Spatial-frequency tuning of visual contour integration. Journal of the Optical Society of America A, 15, 1486-1499.

    15. Dakin, S. C., Hess, R. F. (1999). Contour integration and scale combination processes in visual edge detection. Spatial Vision, 12, 309-327.

    16. De Valois, R. L., De Valois, K. K. (1991). Vernier acuity with stationary moving Gabors. Vision Research, 31, 1619-1626.

    17. Field, D. J., Hayes, A., Hess, R. F. (1993). Contour integration by the human visual system: evidence for a local association field. Vision Research, 33, 173-193.

    18. Gurney, K., Wright, M. J. (1996). Rotation and radial motion thresholds support a two-stage model of differential-motion analysis. Perception, 25, 5-26.

    19. Hayes, A. (2000). Apparent position governs contour-element binding by the visual system. Proceedings of the Royal Society of London, Series B: Biological Sciences, 267, 1341-1345.
