An Enhancement Process for Gray-Scale Images Resulted from Image Fusion

DOI : 10.17577/IJERTV9IS090054


Rami Nahas

Software Engineering Department University of Business and Technology Jeddah, Saudi Arabia

Abstract:- Many applications have been introduced to perform image fusion. The idea of image fusion is to fuse different images together to obtain more information than any single image provides. In our application we fused a thermal image with a regular colorized image, and the resulting image was in gray scale. In this paper we introduce a system that takes any gray-scale image produced by a fusion algorithm and enhances it. The system can be used to improve night vision for navigation. It consists of several stages that enhance, colorize, segment, restore and fuse the image. The resulting image carries more information and is colorized. After applying the proposed process, the results indicate that the mutual information (MI) of the final images increases by about 40 percent, and in some images by more than that. We also compared two fusion algorithms, a simple algorithm that we introduced and the Shift Invariant Discrete Wavelet Transform (SIDWT), and found that our algorithm gave better results and was more suitable for our application.

Key words:- Image Fusion, Night Vision, Augmented Reality, Thermal Imagery

  1. INTRODUCTION

    Darkness limits what a vehicle operator can see and creates unsafe driving conditions due to poor lighting (such as driving at night or in bad weather), as well as poor visibility caused by human aging or illness. Improving night vision capability will increase the safety of nighttime vehicle navigation. Ideally, a system would enhance low-visibility situations and poor lighting conditions, such as driving at night, and make them appear as if they were happening in daylight. Many systems already help the vehicle operator in low-visibility situations and poor lighting conditions by using a thermal camera. These systems can identify warm objects at night, for example animals and pedestrians [1][2]. The thermal camera distinguishes warm objects, which can be displayed on screen and also segmented. Augmented-reality systems also help in these kinds of situations by identifying lanes and signs. Such systems may be used for various applications, such as vehicle navigation, surveillance, and monitoring, to improve safety. The idea behind these systems is to provide the user with an image that is easy to interpret. These kinds of systems are helpful, but each concentrates on a special situation, so a more general solution is wanted that can address different conditions.

    Various night vision systems have been developed to improve the ability to see at night [3][4][5]. Thermal cameras respond according to temperature, but they provide only limited data to describe a scene. Adding additional sensors has been one traditional approach to this problem, and combining images from these sensors can enhance the results. One such fusion system is a system that we introduced before [6]. It is a simple fusion algorithm that selects pixel values from the thermal image or the database image depending on a threshold. The information from both images is important, but the real-time (thermal) image is more vital because the warm objects are the main concern; however, it also introduces more vegetation areas. Results from this system are used as input for the introduced enhancement system. Fig. 1 shows the original images that were used for the image fusion.
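    To make the threshold rule concrete, the following Python sketch shows one way such a pixel-level fusion could look. It is only an illustration under assumptions: the function name, the threshold value of 0.6 and the rule of copying warm pixels from the thermal image while taking the rest from the database image are not taken from [6], which defines the actual algorithm.

```python
import numpy as np

def threshold_fusion(thermal, daytime, thresh=0.6):
    """Hedged sketch of a threshold-based pixel fusion.

    thermal, daytime: 2-D float arrays in [0, 1] of the same shape.
    Pixels whose thermal response exceeds `thresh` (assumed to be warm
    objects) are taken from the thermal image; all other pixels come
    from the daytime/database image.
    """
    thermal = thermal.astype(np.float64)
    daytime = daytime.astype(np.float64)
    warm = thermal > thresh                  # boolean mask of warm pixels
    return np.where(warm, thermal, daytime)

if __name__ == "__main__":
    # Synthetic stand-ins for the registered sensor images.
    rng = np.random.default_rng(0)
    thermal = rng.random((240, 320))
    daytime = rng.random((240, 320))
    fused = threshold_fusion(thermal, daytime, thresh=0.6)
    print(fused.shape, float(fused.min()), float(fused.max()))
```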

  2. OBJECTIVES

    The objective of the introduced system is to take the fused image produced by the fusion algorithms as input and pass it through a system that enhances, colorizes, segments, restores and fuses the image. This adds more information to the image, decreases the area of vegetation and increases the mutual information (MI) of the image.

    The paper starts with the methodology, where the introduced process is discussed as a whole and results are given; this serves as a summary of the new process. In the following sections each stage is introduced and discussed, along with the algorithms that were used.

  3. METHODOLOGY

    Starting from the images produced by the previous algorithm, the introduced system removes areas of vegetation without losing information from the thermal image and increases the mutual information of the images. This allows us to obtain an easy-to-interpret image with all important information from the thermal image, and it led us to develop a procedure that yields improved results. The process consists of six stages, which are:

    • Image fusion.

    • Colorization.

    • Segmentation.

    • Colorization of the most important cluster.

    • Restoration from clusters.

    • Image fusion.

      Fig. 1: Images used in experiments (a) thermal image of Scene 1 with pedestrian (b) thermal image of Scene 2 with pedestrian (c) visible image of Scene 1 from database (d) visible image of Scene 2 from database (e) thermal image of Scene 1 without pedestrian (f) thermal image of Scene 2 without pedestrian

          Fig. 2 shows the block diagram of this process.

          Fig. 2: Block diagram of the proposed system

          The new process will have different stages of image enhancement.

        • First stage: colorization of the fused image. This stage was introduced so that segmentation of the fused image becomes possible. The image is shown with colorization and without, along with the clusters that are produced:

          Without colorization, the image is segmented by intensity. The clusters from the segmentation show that:

          • Cluster one is primarily from the daytime image.

          • Cluster two is primarily from the fused image.

          • Cluster three is primarily from the fused image.

            This is shown in Fig. 3. Therefore, the restoration gives little improvement, as shown in Fig. 4.

            Fig. 3: Clusters from segmentation: (1) cluster 1, (2) cluster 2, (3) cluster 3

            Fig. 4: Resulting restored image

            With colorization, the image is segmented by color, so the clusters contain a mix of visible and thermal pixels. Therefore, the restoration gives an image different from the original image. The clusters are shown in Fig. 5 and the restored image in Fig. 6.

            Doing the colorization also helps us in later steps:

          • Restoration step: the image is restored using colored images, which produces a colored image different from the original colorized image.

          • Image fusion step: to perform a colored image fusion.

        • Second stage: additional colorization of the cluster that contains the most information. This stage enhances the cluster before the restoration. It gives only a small improvement, but it is used because in the restoration step it produces pixels that differ from the original clusters.

        • Third stage: restoration. The segments are restored according to:

          • Segment 1 will be retrieved from the thermal image.

          • Segment 2 will be retrieved from the daytime image.

          • Segment 3 will be retrieved from the colorized cluster.

      • Fourth stage: image fusion. In this stage the restored image is fused with the daytime image.

        Fig. 5: Clusters from segmentation (colorized): (1) cluster 1, (2) cluster 2, (3) cluster 3

        Fig. 6: Resulting restored image (colorized)

  4. IMAGE FUSION

    In this section we use two fusion algorithms: the simple image fusion algorithm and the Shift Invariant Discrete Wavelet Transform (SIDWT). The fusion algorithms extract enough information from both the thermal images and the daytime images. Fig. 7 shows the fused images using the simple algorithm and Fig. 8 shows the fused images using the SIDWT algorithm.
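    The paper does not reproduce the SIDWT implementation details, so the sketch below only illustrates the general idea of shift-invariant wavelet fusion using the stationary wavelet transform from PyWavelets (pywt.swt2 / pywt.iswt2). The fusion rule, averaging the approximation band and keeping the larger-magnitude detail coefficients, is a common choice and an assumption here, not necessarily the rule used in the experiments.

```python
import numpy as np
import pywt

def swt_fusion(img_a, img_b, wavelet="db2", level=1):
    """Fuse two registered gray-scale images with an undecimated
    (shift-invariant) wavelet transform.

    Image sides must be divisible by 2**level for pywt.swt2.  The rule
    used here (average approximations, max-abs details) is an assumption.
    """
    ca = pywt.swt2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.swt2(img_b.astype(np.float64), wavelet, level=level)

    fused = []
    for (a_app, a_det), (b_app, b_det) in zip(ca, cb):
        app = 0.5 * (a_app + b_app)                       # average low-pass band
        det = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                    for da, db in zip(a_det, b_det))      # max-abs detail bands
        fused.append((app, det))
    return pywt.iswt2(fused, wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    thermal = rng.random((256, 256))
    daytime = rng.random((256, 256))
    print(swt_fusion(thermal, daytime).shape)
```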

    Fig. 7: Image fusion results of simple algorithm (A) Scene 1 far pedestrian (B) Scene 2 far pedestrian (C) Scene 3 far pedestrian

    Fig. 8: Image fusion results of SIDWT algorithm (A) Scene 1 far pedestrian (B) Scene 2 far pedestrian (C) Scene 3 far pedestrian

  5. COLORIZATION

    In this section a converter from gray-scale to Red-Green-Blue (RGB) images is used, based on the colors of a reference image. The algorithm can be described by the following steps:

    • Convert all images to the YCbCr color space, where Y is the luminance component, Cb is the blue-difference chroma component and Cr is the red-difference chroma component.

    • Use jittered sampling to select a subset of pixels in the reference image: the image is divided into a grid and one pixel is randomly selected from each grid cell.

    • For each pixel in the gray-scale image, find the best matching sample from the reference image. Two pixels are considered a good match if they have similar luminance and standard deviation.

    • The chromaticity values (Cb,Cr channels) of the best matching pixel are then transferred to the gray-scale image to form the final image.

    • Convert the final image to RGB from the YCbCr color space.

    The better the reference image matches the source gray image, the better the coloring becomes. This step is needed for the following step, segmentation. Examples of the colorization process are shown in Fig. 9 and Fig. 10, which show the colorization of the images in Fig. 7 and Fig. 8.
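    A rough Python sketch of the colorization steps above is given below. It assumes float images in [0, 1], uses hand-written BT.601 YCbCr conversions, SciPy's uniform_filter for the local standard deviation, and an equally weighted luminance/standard-deviation distance for the matching; the grid size, window size and weighting are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rgb_to_ycbcr(rgb):
    # ITU-R BT.601 conversion; rgb is an H x W x 3 float array in [0, 1].
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycbcr):
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1] - 0.5, ycbcr[..., 2] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def _local_std(img, window=5):
    # Local standard deviation from running means of x and x^2.
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def colorize_by_reference(gray, reference_rgb, grid=16, seed=0):
    """Transfer Cb/Cr from a color reference image to a gray-scale image."""
    rng = np.random.default_rng(seed)
    ref = rgb_to_ycbcr(reference_rgb)
    ref_y, ref_cb, ref_cr = ref[..., 0], ref[..., 1], ref[..., 2]
    ref_std, gray_std = _local_std(ref_y), _local_std(gray)

    # Jittered sampling: one random pixel from every grid x grid cell.
    ys, stds, cbs, crs = [], [], [], []
    for i0 in range(0, ref_y.shape[0] - grid + 1, grid):
        for j0 in range(0, ref_y.shape[1] - grid + 1, grid):
            i, j = i0 + rng.integers(grid), j0 + rng.integers(grid)
            ys.append(ref_y[i, j]); stds.append(ref_std[i, j])
            cbs.append(ref_cb[i, j]); crs.append(ref_cr[i, j])
    ys, stds, cbs, crs = map(np.asarray, (ys, stds, cbs, crs))

    # Best match per pixel: equally weighted luminance / local-std distance.
    dist = 0.5 * np.abs(gray[..., None] - ys) + 0.5 * np.abs(gray_std[..., None] - stds)
    best = dist.argmin(axis=-1)

    # Keep the gray value as luminance, copy chroma from the best sample.
    out = np.stack([gray, cbs[best], crs[best]], axis=-1)
    return ycbcr_to_rgb(out)
```

    For example, colorize_by_reference(fused_gray, daytime_rgb) would return an H x W x 3 RGB array with the gray image's luminance and the reference image's chroma.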

    Fig. 9: Results from Fig. 7 colorized (A) Scene 1 far pedestrian (B) Scene 2 far pedestrian (C) Scene 3 far pedestrian

    The colors of the new images look alike for both algorithms because we used the same image as a reference to perform the colorization.

  6. SEGMENTATION

    In this process we use the k-means clustering algorithm with k = 3.

    K-means is an algorithm that partitions a data set into k subsets and returns, for each observation, the index of the cluster it belongs to. The objective being minimized can be written as:

    Fig. 10: Results from Fig. 8 colorized (A) Scene 1 far pedestrian (B) Scene 2 far pedestrian (C) Scene 3 far pedestrian

    J = \sum_{j=1}^{k} \sum_{n \in S_j} | x_n - \mu_j |^2    (1)

    where S_j are the disjoint subsets, x_n is the n-th observation vector and \mu_j is the mean of the points in S_j.
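    As a quick worked check of equation (1), the snippet below evaluates J for a toy data set given a label assignment and the cluster means; the variable names are illustrative only.

```python
import numpy as np

def kmeans_objective(x, labels, centers):
    """Equation (1): sum of squared distances from each point to its assigned mean."""
    return float(np.sum((x - centers[labels]) ** 2))

if __name__ == "__main__":
    x = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])   # three 2-D observations
    labels = np.array([0, 0, 1])                          # assignment to S_1 and S_2
    centers = np.array([x[labels == j].mean(axis=0) for j in range(2)])
    print(kmeans_objective(x, labels, centers))           # J for this assignment
```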

    The algorithm consists of a simple re-estimation loop, as follows:

    • The observations are divided into k subsets.

    • A center is computed for each subset.

    • Each observation is assigned to the cluster whose center is closest to it.

    • Steps two and three are repeated until there is no change in the assignment of the observations.

      The difference between k-means and hierarchical clustering is that k-means operates directly on the observations, and k-means is often more suitable than hierarchical clustering for large amounts of data.

      The k-means algorithm treats each observation as an object that has a location in space. When k-means partitions the data and creates clusters, it looks for objects that are close to each other and as far as possible from objects in other clusters.

      Each cluster is described by its member objects and by its centroid, which is the mean of the points in the cluster.

      The k-means algorithm iteratively minimizes the sum of distances from each object to its cluster centroid over all clusters, moving objects between clusters until the sum cannot be decreased further. The result is a set of clusters that are as compact and well-separated as possible.

      The resulting images from the colorization were clustered according to intensity with k = 3. Fig. 11 shows the clusters for Fig. 9(A); Fig. 12 shows the clusters for Fig. 9(B); Fig. 13 shows the clusters for Fig. 9(C); Fig. 14 shows the clusters for Fig. 10(A); Fig. 15 shows the clusters for Fig. 10(B); and Fig. 16 shows the clusters for Fig. 10(C).
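      As an illustration of this step, the sketch below clusters the pixels of an image into k = 3 groups with scikit-learn's KMeans and returns one binary mask per cluster. The feature is simply the pixel value (intensity for a gray image, color for a colorized one); the random seed, n_init and the use of scikit-learn are assumptions made for the sketch, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_kmeans(image, k=3, seed=0):
    """Cluster an image's pixels into k groups with k-means.

    `image` may be 2-D (gray, clustered by intensity) or H x W x 3
    (colorized, clustered by color).  Returns a label map in {0..k-1}
    and one boolean mask per cluster.
    """
    h, w = image.shape[:2]
    features = image.reshape(h * w, -1).astype(np.float64)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(features)
    label_map = labels.reshape(h, w)
    masks = [label_map == i for i in range(k)]
    return label_map, masks

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    colorized = rng.random((120, 160, 3))      # stand-in for a Fig. 9/10 image
    label_map, masks = segment_kmeans(colorized, k=3)
    print([int(m.sum()) for m in masks])       # pixel count per cluster
```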

      Fig. 11: Results of simple algorithm clustering with k=3 for Scene 1 far pedestrian (Fig. 9 (A))

      Fig. 12: Results of simple algorithm clustering with k=3 for Scene 2 far pedestrian (Fig. 9 (B))

      Fig. 13: Results of simple algorithm clustering with k=3 for Scene 3 far pedestrian (Fig. 9 (C))

      Fig. 14: Results of SIDWT algorithm clustering with k=3 for Scene 1 far pedestrian (Fig. 10 (A))

      Fig. 15: Results of SIDWT algorithm clustering with k=3 for Scene 2 far pedestrian (Fig. 10 (B))

      Fig. 16: Results of SIDWT algorithm clustering with k=3 for Scene 3 far pedestrian (Fig. 10 (C))

  7. COLORIZATION OF THE MOST IMPORTANT CLUSTER

    In this section we use the same colorization algorithm that we used in step two. From step three, we color cluster C of each image, because cluster C is the cluster that contains the most information. This step is required because we need to enhance the cluster for the restoration. This stage gives only a small improvement, but we use it because in the restoration step it produces pixels that differ from the original clusters. The resulting images are shown in Fig. 17 and Fig. 18.

    Fig. 17: Colorization of cluster C, simple algorithm (A) Scene 1 far pedestrian (B) Scene 2 far pedestrian (C) Scene 3 far pedestrian

    Fig. 18: Colorization of cluster C, SIDWT algorithm (A) Scene 1 far pedestrian (B) Scene 2 far pedestrian (C) Scene 3 far pedestrian

  8. RESTORATION FROM CLUSTERS

    In this section we restore images from the clusters. Step three produced three clusters: cluster (A), cluster (B) and cluster (C). When restoring the images, we retrieve cluster (A) from the source thermal image, cluster (B) from the source daytime image, and cluster (C) from the colorized image of step four. We did that because:

      • Cluster (A) usually contains the warm objects.

      • Cluster (B) usually contains background such as trees and roads.

      • Cluster (C) usually contains more pixels than the others.

    The resultant images are shown in Fig. 19 and Fig. 20.
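    A minimal sketch of this restoration rule is shown below, assuming the three boolean cluster masks from the segmentation step and three registered source images of the same size (the thermal image replicated to three channels, the daytime image, and the colorized cluster C image). The function name and the three-channel convention are assumptions made for the illustration.

```python
import numpy as np

def restore_from_clusters(masks, thermal_rgb, daytime_rgb, cluster_c_rgb):
    """Rebuild an image by taking each cluster's pixels from a different source.

    masks: three H x W boolean arrays for clusters (A), (B) and (C).
    Cluster (A) pixels come from the thermal image, cluster (B) from the
    daytime image and cluster (C) from the colorized cluster image.
    """
    mask_a, mask_b, mask_c = masks
    restored = np.zeros_like(daytime_rgb)
    restored[mask_a] = thermal_rgb[mask_a]
    restored[mask_b] = daytime_rgb[mask_b]
    restored[mask_c] = cluster_c_rgb[mask_c]
    return restored

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    labels = rng.integers(0, 3, size=(120, 160))           # toy label map
    masks = [labels == i for i in range(3)]
    imgs = [rng.random((120, 160, 3)) for _ in range(3)]   # toy source images
    print(restore_from_clusters(masks, *imgs).shape)
```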

    Fig. 19: Restoration results of simple algorithm (A) Scene 1 far pedestrian (B) Scene 2 far pedestrian (C) Scene 3 far pedestrian

    Fig. 20: Restoration results of SIDWT algorithm (A) Scene 1 far pedestrian (B) Scene 2 far pedestrian (C) Scene 3 far pedestrian

  9. IMAGE FUSION

    In this section we used the simple algorithm to fuse the images from step five and the daytime image. The results of this step are shown in Fig. 21 and Fig. 22. We can see from the results that there is an improvement between the first fusion and the second fusion, as shown in Tables I, II, III, IV, V and VI. These results indicate that the proposed process improves the result by decreasing the area of vegetation and increasing the MI of the final images. We also obtained colored images from this process.

    Fig. 21: Results of simple algorithm (A) Scene 1 far pedestrian (B) Scene 2 far pedestrian (C) Scene 3 far pedestrian

    Fig. 22: Results of SIDWT algorithm (A) Scene 1 far pedestrian (B) Scene 2 far pedestrian (C) Scene 3 far pedestrian

    TABLE I: Scene 1 far pedestrian using simple algorithm

                Step one fusion   Step six fusion
    MITOTAL     1.0794            1.4886
    MIX         0.2699            0.3724
    MIY         0.2698            0.3719
    MIX/MIY     1.0005            1.0015

    TABLE II: Scene 1 far pedestrian using SIDWT algorithm

                Step one fusion   Step six fusion
    MITOTAL     0.7461            1.3008
    MIX         0.1867            0.3299
    MIY         0.1863            0.3204
    MIX/MIY     1.0021            1.0297

    TABLE III: Scene 2 far pedestrian using simple algorithm

                Step one fusion   Step six fusion
    MITOTAL     1.0666            1.5107
    MIX         0.2698            0.3861
    MIY         0.2635            0.3692
    MIX/MIY     1.0236            1.0459

    TABLE IV: Scene 2 far pedestrian using SIDWT algorithm

                Step one fusion   Step six fusion
    MITOTAL     0.4185            1.4564
    MIX         0.1094            0.3673
    MIY         0.0998            0.3608
    MIX/MIY     1.0961            1.0179

    TABLE V: Scene 3 far pedestrian using simple algorithm

                Step one fusion   Step six fusion
    MITOTAL     1.0078            1.279
    MIX         0.2593            0.3305
    MIY         0.2446            0.309
    MIX/MIY     1.0599            1.0696

    TABLE VI: Scene 3 far pedestrian using SIDWT algorithm

                Step one fusion   Step six fusion
    MITOTAL     0.6793            1.2705
    MIX         0.1699            0.3199
    MIY         0.1697            0.3154
    MIX/MIY     1.0019            1.0140

  10. DISCUSSION OF RESULTS

    The results show that in the first step we obtain a fused image in gray scale, and in the sixth step we obtain a fused image in color with a higher MI value. Fig. 23 shows the MI of the first step of the process and Fig. 24 shows the MI of the last step. Comparing the first step with the final step, the MI increases by about 40 percent, and by more in some images, which shows that the proposed process gives better results. The simple algorithm and the other algorithm, SIDWT, produced different results: some images gave better results with the simple algorithm and some with SIDWT.
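    For reference, a standard histogram-based estimate of the mutual information between two images is sketched below; MIX and MIY in the tables are presumably the MI between the fused image and each source image, with the exact metric defined in [6]. The bin count and the use of base-2 logarithms are assumptions of this sketch.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram estimate of MI (in bits) between two equally sized gray images."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                         # joint distribution
    px = pxy.sum(axis=1, keepdims=True)             # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)             # marginal of img_b
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    src = rng.random((240, 320))
    fused = 0.7 * src + 0.3 * rng.random((240, 320))   # toy "fused" image
    print(round(mutual_information(fused, src), 4))
```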

    Fig. 23: First fusion

    Fig. 24: Second fusion

  11. CONCLUSION

An image fusion system was developed that can lead to improved safety for nighttime vehicle operation. The use of a public database gives results not possible with other approaches. We found that by giving the database image priority over the thermal image, the fused image appeared the most daytime-like and easily showed a pedestrian. As the priority of the thermal image increased, the background of the image was formed by both the thermal and database images. A simple background replacement algorithm gave the best results, suggesting that one or the other image dominated a particular region. We obtained improved results by colorizing the fused image.

The process consists of six stages which are:

  • Image fusion.

  • Colorization.

  • Segmentation.

  • Colorization of the most important cluster.

  • Restoration from clusters.

  • Image fusion.

After adding this improved process we were able to visualize the images better. They become more realistic because some vegetation was introduced. This was supported by an increase in the MI value of 40 percent and more after adding this process.

In the end we can say that we were able to make driving at night comparable to driving in daytime.

REFERENCES

  1. J. Li, W. Gong, W. Li, and X. Liu. Robust pedestrian detection in thermal infrared imagery using the wavelet transform, Infrared Physics and Technology, Volume 53, Issue 4, 267-273, 2010.

  2. D. Geronimo, A. M. Lopez, A. D. Sappa, and T. Graf. Survey of Pedestrian Detection for Advanced Driver Assistance Systems, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 32, no. 7, 1239-1258, 2010.

  3. R. Bishop. Intelligent vehicle applications worldwide, IEEE Intelligent Systems and their Applications, vol. 15, no. 1, 78-81, 2000.

  4. A. Vu, A. Ramanandan, A. Chen, J. A. Farrell, and M. Barth. Real-Time Computer Vision/DGPS-Aided Inertial Navigation System for Lane- Level Vehicle Navigation, IEEE Trans. on Intelligent Transportation Systems, vol. 13, no.2, 899-913, 2012.

  5. G. Bhatnagar, Q. M. J. Wu, and B. Raman, Navigation and surveillance using night vision and image fusion, 2011 IEEE Sym. on Industrial Electronics and Applications (ISIEA), 342-347, Sept. 25-28, 2011.

  6. R. Nahas and S. P. Kozaitis, Metric for the Fusion of Synthetic and Real Imagery from Multimodal Sensors, American Journal of Engineering and Applied Sciences, 7 (4): 355-362, 2014. DOI: 10.3844/ajeassp.2014.355.362.
