An Improved Edge Sensing Demosaicing and DCT Based Resizing Algorithm for Color Filter Array Images

DOI : 10.17577/IJERTCONV3IS32002


NACTET-2015 Conference Proceedings

Anuvinda S. M.
Dept. of EEE, Trinity College of Engineering, Thiruvananthapuram

Yamuna M. Nair
Dept. of EEE, Trinity College of Engineering, Thiruvananthapuram

Abstract: Most digital cameras use a single sensor array based on the Bayer color filter array (CFA) structure to capture color information: only one color value is sampled at each pixel location, and the other two color values are interpolated afterwards. This interpolation process is commonly known as demosaicing. In this paper an algorithm is proposed for demosaicing and resizing of single sensor array images. Adaptive heterogeneity projection masks and Sobel-Luminance (SL) estimation based masks are used to extract more accurate edge information. An edge sensing approach and the color difference idea are used to construct the fully populated green color plane; the G plane is interpolated using the available information of the neighboring red and blue color planes. In order to reduce the estimation error, the color difference planes G-R and G-B are interpolated instead of interpolating the R and B color planes directly. The three constructed planes are then resized, the resized red and blue color planes are reconstructed from the three resized planes, and finally the arbitrary ratio sized full color image is obtained.

Keywords: Color difference, color filter array, arbitrary ratio, demosaicing algorithm, digital cameras

  1. INTRODUCTION


Digital cameras are now among the most popular consumer electronics products. To represent a full color image, all three primary colors, red (R), green (G) and blue (B), are required at each pixel location, and capturing them directly would require three separate sensor arrays. To reduce hardware cost and size, most digital cameras instead use a single Charge Coupled Device (CCD) or Complementary Metal Oxide Semiconductor (CMOS) sensor to capture the color information. The surface of the sensor is covered with a color filter array (CFA). The Bayer CFA structure, shown in Fig. 1, is the most prevalent among the various proposed CFAs.

Fig. 1. Bayer CFA pattern.
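For concreteness, the sampling implied by the Bayer CFA can be illustrated with a short Python sketch (not part of the original paper) that simulates a mosaic image from a full RGB image; the exact arrangement of the R, G and B sites assumed here (an R/G row followed by a G/B row) should be adapted to the actual sensor layout.

    import numpy as np

    def bayer_mosaic(rgb):
        """rgb: H x W x 3 array. Returns the single-channel CFA (mosaic) image."""
        H, W, _ = rgb.shape
        mosaic = np.empty((H, W), dtype=rgb.dtype)
        mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R sites on even rows, even columns (assumed)
        mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G sites
        mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G sites
        mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B sites
        return mosaic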


    The green color plane is an important factor in determining the luminance of the color image, so half of the pixels in the Bayer CFA pattern are assigned to the green plane; the remaining pixels are evenly shared by the red and blue planes. Each pixel in an image captured through the Bayer CFA carries only one of the three primary colors; such an image is called a mosaic image. The two missing colors at each pixel location have to be interpolated to obtain a full color image, a process called demosaicing or color interpolation [2], [3], [5], [8], [13], [15]. The reconstructed colors should resemble the original ones as closely as possible. Apart from demosaicing, resizing is also performed; here resizing refers to the zooming process. Various resizing algorithms [1], [4], [6], [11], [12], [14] have been developed for mosaic images, and they can be roughly classified into three approaches. In the first approach, the mosaic image is first recovered to a full color image by the demosaicing process and the demosaiced full color image is then zoomed; demosaicing and zooming are performed separately and independently. In the second approach, a CFA zooming method is applied to the mosaic image to obtain a zoomed mosaic image, and an existing demosaicing process is then applied to obtain the full color image [11]. A third approach, proposed recently, uses a combined demosaicing and zooming process [4] and gives better quality than the other two approaches. The resizing algorithms developed earlier focused on the quad zooming process, which motivated the development of the improved combined demosaicing and resizing algorithm for mosaic images presented here. Adaptive heterogeneity projection masks and Sobel-Luminance (SL) estimation based masks [5] are used to extract more accurate edge information. An edge sensing approach and the color difference idea are used to construct the fully populated green color plane. In order to reduce the estimation error, the color difference planes G-R and G-B are interpolated instead of interpolating the R and B color planes directly. The three constructed planes are then resized to arbitrary ratio sized planes using the composite length DCT technique [12], the resized red and blue color planes are constructed from the three resized planes (the green plane and the two color difference planes), and finally the arbitrary ratio sized full color image is obtained. The proposed algorithm has better image quality performance in terms of two objective color image quality measures, the color peak signal-to-noise ratio (CPSNR) and the S-CIELAB ΔE*ab, and one subjective color quality measure, the color artifacts, when compared with naive algorithms formed by combining well known demosaicing and resizing methods [3], [10], [12], [13], [15]. In the second section the adaptive heterogeneity projection masks and the SL-based masks are explained. In the third section the proposed demosaicing and resizing algorithm is presented. Experimental results are demonstrated in the fourth section, and the final section gives some conclusions and the scope for future work.

  2. EXTRACTION OF MORE ACCURATE EDGE INFORMATION

    Here the adaptive heterogeneity projection masks and the SL-based masks [7] are introduced. The R, G and B color pixels at position (i, j) in the mosaic image Im are denoted by Irm(i, j), Igm(i, j) and Ibm(i, j), respectively.

    1. Adaptive Heterogeneity Projection

      The adaptive heterogeneity projection mask [5] is used to extract more accurate edge information from the mosaic image. For this, a luminance estimation technique is used, in which a symmetric convolution mask estimates the luminance of the pixel at position (i, j) in the mosaic image. Based on this concept, three possible heterogeneity projection masks with different sizes (N = 5, 7, 9) are adopted; they are shown in Table I, where N and Mhp(N) denote the mask size and the corresponding heterogeneity projection mask, respectively.

      TABLE I
      THREE POSSIBLE HETEROGENEITY PROJECTION MASKS

      N    Mhp(N)
      5    [ 1  -2   0   2   1 ]
      7    [ 1  -4   5   0  -5   4  -1 ]
      9    [ 1  -6  14 -14   0  14 -14   6  -1 ]

      For a mosaic image Im, the vertical heterogeneity projection map HPV-map and the horizontal heterogeneity projection map HPH-map can be found by

      HPV-map = | Im * Mhp(N) |
      HPH-map = | Im * Mhp(N)^T |        (1)

      where |.| denotes the absolute value operator, T the transpose operator and * the convolution operator. In order to extract more accurate horizontal and vertical edge information and to reduce the computation time, the two proper mask sizes NH(i, j) and NV(i, j) should be determined for each pixel at position (i, j). For simplicity, only the determination of NH(i, j) is described, since NV(i, j) is determined in the same way. The horizontal spectral-spatial correlation (SSC) map [19] is utilized to determine the proper horizontal mask size for each pixel: the horizontal SSC value SH(i, j) is computed, following Eq. (2) of [15], from absolute differences of the form | Im(i, j) - Im(i, j+1) | between horizontally adjacent mosaic samples in the corresponding color planes.

      After that, the proper horizontal mask size NH(i, j) can be determined [15]. The procedure for determining NH(i, j) consists of the following three steps:

      Step 1: Initially, set the left boundary xl = j - 2 and the right boundary xr = j + 2, the mask size NH(i, j) = 5 and the maximum mask size Nmax = 9.

      Step 2: Assume the threshold value T = 8. If the condition Max(Sl, Sr) < T holds, the current mask size NH(i, j) is output as the proper horizontal mask size. Otherwise, go to Step 3.

      Step 3: Update NH(i, j), xl and xr by NH(i, j) = NH(i, j) + 2, xl = xl - 1 and xr = xr + 1. If NH(i, j) = Nmax, then NH(i, j) = Nmax is output as the proper mask size and the procedure stops. Otherwise, go to Step 2.

      After finding the two heterogeneity projection maps, the two heterogeneity projection values at position (i, j) are denoted by HPH(i, j) and HPV(i, j). Then the tuned horizontal and vertical heterogeneity projection values can be computed.
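As an illustration of Eq. (1), the following Python sketch (a minimal reading of the description above, not the authors' implementation) computes the two heterogeneity projection maps for a chosen mask size from Table I; the per-pixel adaptive selection of NH(i, j) and NV(i, j) would then follow the three steps above, using the SSC-based quantities Sl and Sr.

    import numpy as np
    from scipy.ndimage import convolve1d

    # Heterogeneity projection masks of Table I, indexed by mask size N.
    MHP = {
        5: np.array([1, -2, 0, 2, 1], dtype=float),
        7: np.array([1, -4, 5, 0, -5, 4, -1], dtype=float),
        9: np.array([1, -6, 14, -14, 0, 14, -14, 6, -1], dtype=float),
    }

    def heterogeneity_projection_maps(mosaic, N=5):
        """Return (HPV_map, HPH_map) of Eq. (1) for mask size N."""
        mask = MHP[N]
        # Following Eq. (1) as printed: the row mask gives the vertical map and
        # its transpose (a column mask) gives the horizontal map.
        hpv = np.abs(convolve1d(mosaic, mask, axis=1, mode='reflect'))
        hph = np.abs(convolve1d(mosaic, mask, axis=0, mode='reflect'))
        return hpv, hph

    if __name__ == "__main__":
        cfa = np.random.rand(32, 32)          # stand-in mosaic image
        hpv, hph = heterogeneity_projection_maps(cfa, N=5)
        print(hpv.shape, hph.shape)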

    2. Sobel-Luminance (SL) based masks

      To extract more accurate gradient information, the luminance estimation technique is embedded into the Sobel operator [16] so that it becomes workable on mosaic images. By running the four SL-based masks [15] on the 5 x 5 mosaic sub-image centered at position (i, j) [5], the horizontal, vertical, π/4-diagonal and -π/4-diagonal gradient responses can be obtained easily. Experimental results show that this approach performs better than the indirect approach, which first applies bilinear demosaicing to the input mosaic image, then converts the demosaiced full color image to a luminance map, and finally runs the Sobel edge detector on the obtained luminance map.
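Conceptually, each SL-based mask is the convolution of a luminance-estimation kernel with a Sobel-type kernel; by the associativity of convolution, the two-step sketch below (with an assumed 3 x 3 averaging kernel standing in for the luminance estimator, and commonly used diagonal Sobel variants) produces the same kind of four directional responses. It is an illustrative sketch, not the paper's exact masks.

    import numpy as np
    from scipy.ndimage import convolve

    LUMA = np.full((3, 3), 1.0 / 9.0)   # placeholder luminance-estimation kernel (assumed)
    SOBEL_H = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    SOBEL_V = SOBEL_H.T
    DIAG_P = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=float)   # +pi/4 direction
    DIAG_N = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], dtype=float)   # -pi/4 direction

    def directional_gradients(mosaic):
        """Return the horizontal, vertical, +pi/4 and -pi/4 gradient responses."""
        luma = convolve(mosaic, LUMA, mode='reflect')
        return tuple(np.abs(convolve(luma, k, mode='reflect'))
                     for k in (SOBEL_H, SOBEL_V, DIAG_P, DIAG_N))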

  3. THE PROPOSED IMPROVED DEMOSAICING AND RESIZING ALGORITHM

    Most of the existing methods use only the existing green channel neighborhood to interpolate a missing green value. The proposed solution is to estimate the missing green samples based on the variance of the color difference along different edge directions. This helps to preserve not only the information of edge regions but also the details of texture regions. The interpolation of the green channel takes advantage of the available red and blue information; as a result the interpolation error decreases and the image quality improves, while the time complexity can also be reduced. Higher order gradient information can be used to improve the edge direction finding. The spatial bandwidth of the chromatic signals can be limited without degrading the image, and interpolation performed in the chromatic domain results in smooth chromatic transitions that are pleasing to the human eye. The various stages of the proposed algorithm are as follows:

    i) Construct the fully populated green plane by interpolating the mosaic green plane, using the available red and blue channel information and the edge sensing interpolation approach.
    ii) Construct the fully populated G-R and G-B color difference planes using the fully populated green channel and the available information of the red and blue channels.
    iii) Resize the three constructed planes to arbitrary ratio sized ones and, based on the resized planes, recover the resized R and B planes to obtain the arbitrary ratio sized full color image.

    1. Stage 1: Interpolation of the mosaic G plane

      This section describes how the fully populated G plane Igdm is constructed using the edge-sensing approach and the color difference idea. The central pixel at position (i, j) is taken as the representative to explain how the value of the G color Igdm(i, j) is estimated from its four neighboring pixels. The tuned horizontal and vertical heterogeneity projection values are determined first. To determine Igdm(i, j) more accurately, four proper weights in terms of the gradient magnitude are assigned to the corresponding four pixels in the interpolation estimation phase. Given a pixel at position (i, j), based on the horizontal and vertical gradient magnitudes, its horizontal and vertical weights can be determined by

      w(H, x, y) = 1 / [ 1 + Σk=-1..1 αk ΔHdm(x, y+k) ]
      w(V, x, y) = 1 / [ 1 + Σk=-1..1 αk ΔVdm(x, y+k) ]        (3)

      respectively, where αk = 3 if k = 1 and αk = 1 otherwise [15]. Consider the neighboring pixel located at position (i-1, j): if its vertical gradient magnitude is large, i.e. a horizontal edge passes through it, then, based on the color difference assumption [8], [13], the G component of this pixel contributes less to the estimate for the current pixel; otherwise it contributes more. According to this analysis, the vertical weight is selected for the pixel at position (i-1, j), denoted w(V, i-1, j). In the same way, the weights of the other three neighbors are denoted by w(V, i+1, j), w(H, i, j-1) and w(H, i, j+1), respectively.

      Consequently, the value of Igdm(i, j) can be estimated by

      Igdm(i, j) = Ibm(i, j) + [ Σ(x,y) w(d, x, y) Ggb(x, y) ] / [ Σ(x,y) w(d, x, y) ]        (4)

      where the summations run over the four neighboring pixels of (i, j), d denotes the direction (H or V) associated with each neighbor, and Ggb(x, y) is the G-B color difference term at the neighboring position (x, y).

      Finally, a refinement approach that combines the concept of color ratios [5] with the extracted accurate edge information is used to refine the fully populated G plane.
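A minimal Python sketch of this stage is given below; it is a simplified reading of Eqs. (3) and (4), not the authors' implementation. The gradient maps grad_h and grad_v stand for the horizontal and vertical gradient magnitudes (e.g. from the SL-based masks), the vertical accumulation axis and the color difference term at each green neighbor are crude illustrative assumptions, and (i, j) is assumed to be at least two pixels away from the image border.

    import numpy as np

    def directional_weight(grad, x, y, horizontal, alpha=(1.0, 1.0, 3.0)):
        """Eq. (3)-style weight: 1 / (1 + sum_k alpha_k * gradient around (x, y))."""
        # alpha follows Eq. (3) as printed: 3 for k = 1, 1 otherwise.
        acc = 0.0
        for a, k in zip(alpha, (-1, 0, 1)):
            acc += a * (grad[x, y + k] if horizontal else grad[x + k, y])
        return 1.0 / (1.0 + acc)

    def estimate_green(mosaic, green, grad_h, grad_v, i, j):
        """Estimate the missing G value at a non-green CFA site (i, j) from its 4 neighbors."""
        # Vertical neighbors use the vertical weight, horizontal neighbors the horizontal one.
        neighbors = [(-1, 0, False), (1, 0, False), (0, -1, True), (0, 1, True)]
        num = den = 0.0
        for di, dj, horiz in neighbors:
            x, y = i + di, j + dj
            w = directional_weight(grad_h if horiz else grad_v, x, y, horiz)
            # Crude stand-in for the neighbor's color difference term in Eq. (4).
            num += w * (green[x, y] - mosaic[i, j])
            den += w
        return mosaic[i, j] + num / den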

    2. Stage 2: Interpolation of the mosaic G-R and G-B color difference planes

      Instead of interpolating the R and B color planes directly, the G-R and G-B color difference planes are interpolated. This helps to preserve not only the information of edge regions but also the details of texture regions; moreover, a color difference plane is much smoother than the original color plane, which reduces the estimation error. The interpolation involves three steps. In the first step, according to the mosaic image and the fully populated G plane Igdm, the mosaic G-R color difference plane is obtained by

      Dgr(ir, jr) = Igdm(ir, jr) - Irm(ir, jr)        (5)

      where (ir, jr) ∈ {(i + 2m, j + 2n+1)}. After performing Step 1, Fig. 2 illustrates the pattern of the obtained mosaic G-R color difference plane for the positions depicted in gray cells. The G-R color difference plane interpolation for the other positions consists of two further steps: in the second step the G-R color difference values Dgr of the pixels are interpolated, and in the third step the G-R color difference values of the pixels at positions {(i + 2m, j + 2n+1)} and {(i + 2m+1, j + 2n)} are interpolated. A missing G-R color difference value can be estimated from its four neighboring pixels; in order to estimate Dgr more accurately, the gradient magnitudes of the four diagonal variations are considered to determine the proper four weights.

      Fig. 2. The pattern of the mosaic G-R color difference plane.

      After performing Step 2, the current pattern of the G-R color difference plane is shown in Fig. 2. The central pixel at position (i', j') in Fig. 3, which is obtained from Fig. 2 by shifting one pixel down, is taken as the representative to explain the G-R color difference plane interpolation in Step 2. It is not difficult to see that this pattern of the G-R color difference plane is the same as that of the G plane in the mosaic image shown in Fig. 1 [15]. Therefore, the interpolation estimation approach described in the last section can be directly used to estimate the G-R color difference value at position (i', j'). The pattern of the G-R color difference plane shifted one pixel down, obtained from the pattern of the mosaic G-R color difference plane, is shown in Fig. 3.

      Fig. 3. The pattern of the G-R color difference plane shifted one pixel down.

      Fig. 4. Data dependence of the proposed interpolation estimation for the mosaic green image: (a) horizontal variation (vertical edge), (b) vertical variation (horizontal edge), (c) other variations.

      After constructing the fully populated G plane and the G-R and G-B color difference planes, the three constructed planes are resized to the required arbitrary-ratio sizes using the DCT approach, and the arbitrary-ratio resized full color image is then obtained. The data dependence of the proposed interpolation estimation is shown in Fig. 4.
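Before moving to Stage 3, a simplified sketch of Stage 2 is given below (an illustrative reading rather than the authors' code): Eq. (5) forms the mosaic G-R color difference plane at the R sample sites, and the positions diagonal to them are then filled from their four diagonal neighbors with weights that decrease as the local diagonal variation grows; the remaining positions (Step 3) would be filled analogously from horizontal and vertical neighbors. The Bayer site layout encoded in r_mask is an input assumption.

    import numpy as np

    def interpolate_gr_plane(green_full, mosaic, r_mask):
        """green_full: fully populated G plane; mosaic: CFA image; r_mask: boolean map of R sites."""
        H, W = mosaic.shape
        dgr = np.zeros((H, W))
        dgr[r_mask] = green_full[r_mask] - mosaic[r_mask]          # Eq. (5)
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                if r_mask[i, j]:
                    continue
                vals, wts = [], []
                for di, dj in ((-1, -1), (-1, 1), (1, -1), (1, 1)):
                    if r_mask[i + di, j + dj]:
                        # Weight shrinks with the diagonal variation of the G plane.
                        var = abs(green_full[i + di, j + dj] - green_full[i - di, j - dj])
                        vals.append(dgr[i + di, j + dj])
                        wts.append(1.0 / (1.0 + var))
                if wts:                                            # Step 2 positions only
                    dgr[i, j] = float(np.dot(wts, vals) / np.sum(wts))
        return dgr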

    3. Stage 3: Resizing the fully populated G plane, G-R color difference plane, and G-B color difference plane

    Based on the composite length DCT [12], the resizing of the fully populated G plane and of the G-R and G-B color difference planes is carried out. The two dimensional DCT of an M x N matrix A can be found using the formula

    Bpq = αp αq Σm=0..M-1 Σn=0..N-1 Amn cos[π(2m+1)p / (2M)] cos[π(2n+1)q / (2N)],  0 ≤ p ≤ M-1, 0 ≤ q ≤ N-1        (7)

    where αp = √(1/M) for p = 0 and αp = √(2/M) for 1 ≤ p ≤ M-1, and αq = √(1/N) for q = 0 and αq = √(2/N) for 1 ≤ q ≤ N-1.

    The resizing stage for Dgr and Dgb, i.e. the G-R and G-B color difference planes, is the same as that for the fully populated green plane Igdm. Let DCT and IDCT denote the DCT and the inverse DCT of an image block [15]. The fully populated G plane Igdm, with size M x N, is first divided into a set of image blocks, each of size 8 x 8. If the M x N green plane is to be resized to a plane of size (q/p)M x (q/p)N, the resizing ratio is said to be q/p. According to this resizing ratio, p^2 blocks are first collected to form an active unit. In order to achieve the resizing ratio q/p, the p^2 blocks in each active unit have to be increased or decreased to q^2 blocks. The steps in resizing are as follows:

    1. DCT is performed on each 8 x 8 image block.
    2. An active unit of p^2 blocks is chosen; its number of blocks is to be increased or decreased.
    3. Each 8 x 8 DCT coefficient block in the active unit is expanded to an (8 + z) x (8 + z) block, and IDCT is performed on each expanded block to get the upsized image.
    4. The upsized image is divided into q^2 blocks to obtain a set of re-sampled image blocks.
    5. DCT is performed on each re-sampled block and the high frequency coefficients are truncated.

      Here z is a non-negative integer satisfying the condition p(8 + z) = Cq with C ≥ 8. In this paper the ratio q/p is taken to be 4/3, for which the smallest such z is 4, since 3(8 + 4) = 9 x 4. After performing the resizing procedure on all active units, a set of 8 x 8 DCT coefficient blocks is obtained [15]. The (q/p)M x (q/p)N sized G plane is then obtained by performing IDCT on each 8 x 8 DCT coefficient block, and the (q/p)M x (q/p)N sized G-R and G-B color difference planes are obtained in the same way. The arbitrary ratio sized R and B planes can then be constructed by

      Zrdm(iz, jz) = Zgdm(iz, jz) - ZDgr(iz, jz)
      Zbdm(iz, jz) = Zgdm(iz, jz) - ZDgb(iz, jz)        (8)

      where Zrdm(iz, jz), Zgdm(iz, jz) and Zbdm(iz, jz) denote the three color components at pixel position (iz, jz) in the (q/p)M x (q/p)N sized full color image Zdm, and ZDgr(iz, jz) and ZDgb(iz, jz) denote the G-R and G-B color difference values of the pixel at position (iz, jz).
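One plausible concrete reading of Steps 1-5 for the ratio q/p = 4/3 (p = 3, q = 4, z = 4) is sketched below; it is not the authors' code, and the DC-preserving rescaling inside dct_resize_block is an implementation choice of this sketch. A 24 x 24 active unit (nine 8 x 8 blocks) is upsized to 36 x 36 via 12 x 12 DCT blocks and then re-sampled into sixteen 8 x 8 blocks, giving a 32 x 32 output.

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_resize_block(block, new_size):
        """Resize a square block via DCT zero-padding (upsizing) or truncation (downsizing)."""
        n = block.shape[0]
        coeff = dctn(block, norm='ortho')
        out = np.zeros((new_size, new_size))
        k = min(n, new_size)
        out[:k, :k] = coeff[:k, :k]
        # Rescale so that the mean intensity (DC term) is preserved.
        return idctn(out * (new_size / n), norm='ortho')

    def resize_active_unit_4_3(unit24):
        """24 x 24 -> 32 x 32, following Steps 1-5 for resizing ratio q/p = 4/3."""
        assert unit24.shape == (24, 24)
        up = np.zeros((36, 36))
        for bi in range(3):                    # Steps 1-3: expand each 8x8 block to 12x12
            for bj in range(3):
                blk = unit24[8*bi:8*bi+8, 8*bj:8*bj+8]
                up[12*bi:12*bi+12, 12*bj:12*bj+12] = dct_resize_block(blk, 12)
        out = np.zeros((32, 32))
        for bi in range(4):                    # Steps 4-5: re-sample 9x9 blocks down to 8x8
            for bj in range(4):
                blk = up[9*bi:9*bi+9, 9*bj:9*bj+9]
                out[8*bi:8*bi+8, 8*bj:8*bj+8] = dct_resize_block(blk, 8)
        return out

Applying such a routine to every active unit of the resized G, G-R and G-B planes and recombining them with Eq. (8) would yield the (q/p)M x (q/p)N full color image.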

      Fig. 5. The twenty-four testing images from the Kodak PhotoCD set.

      Fig. 6. Output results for testing image No. 23: (a) original image, (b) mosaic image, (c) demosaiced RGB image, (d) resized image.

    TABLE II
    CPSNR COMPARISON (RESIZING RATIO = 4/3)

    Algorithm:    1         2         3         4         Proposed
    CPSNR (dB):   32.1575   31.8666   32.5719   33.6423   37.5614

    TABLE III
    AVERAGE S-CIELAB ΔE*ab COMPARISON

    Algorithm:    1         2         3         4         Proposed
    ΔE*ab:        2.84487   2.92431   2.72974   2.55286   2.52326

  4. EXPERIMENTAL RESULTS

    The experimental results show that the proposed algorithm produces higher quality output than four naive algorithms based on the demosaicing methods proposed in [13], [10], [3] and [15], referred to as algorithms 1, 2, 3 and 4, respectively. The comparison is made in terms of the CPSNR, the S-CIELAB ΔE*ab and one subjective color image quality measure, the color artifacts. The proposed algorithm produces fewer color artifacts than the other four algorithms, and the execution time of the proposed resizing algorithm also compares favourably with theirs.

    The algorithm is tested using the twenty-four testing images from the Kodak PhotoCD set, each of size 512 x 768, and is implemented in Interactive Data Language (IDL), version 6.3. Table II and Table III show the comparison of image quality in terms of CPSNR and S-CIELAB ΔE*ab, respectively, for testing image No. 23. It is observed from the two tables that the proposed algorithm gives the best image quality in terms of both CPSNR and S-CIELAB ΔE*ab.

    Color artifacts, a subjective visual quality measure, are adopted to demonstrate the visual quality of the proposed algorithm. After demosaicing and resizing the mosaic image, some color artifacts may appear in certain non-smooth regions of the full color image, but fewer color artifacts are observed than with the other algorithms. The proposed resizing algorithm thus produces the fewest color artifacts, i.e. the best visual effect.

    The execution times of the five concerned algorithms, measured over the twenty-four testing mosaic images and the considered resizing ratios, are shown in Table IV. The results show that the execution time of the proposed algorithm is comparable to that of the other four algorithms, while the proposed algorithm has the best image quality performance among them.

    TABLE IV
    EXECUTION TIME OF THE FIVE CONCERNED ALGORITHMS

    Algorithm:  1       2       3       4       Proposed
    Time (s):   12.52   12.94   12.68   12.78   12.50

    The CPSNR for a color image of size M x N is defined by

    CPSNR = 10 log10 { 255^2 / [ (1/(3MN)) Σc Σ(m,n) ( Icori(m, n) - Zcdm(m, n) )^2 ] }        (9)

    where Icori(m, n) and Zcdm(m, n) denote the color component c of the pixel at position (m, n) in the original full color image and in the zoomed (resized) color image, respectively. The S-CIELAB ΔE*ab of a color image of size M x N is defined by

    ΔE*ab = (1/(MN)) Σ(m,n) { Σc∈{L,a,b} [ EIcori(m, n) - EZcdm(m, n) ]^2 }^(1/2)        (10)

    where EILori(m, n), EIaori(m, n) and EIbori(m, n) denote the three CIELAB color components of the pixel at position (m, n) in the original full color image and EZLdm(m, n), EZadm(m, n) and EZbdm(m, n) denote those of the pixel at position (m, n) in the resized full color image. The image quality is better when the S-CIELAB ΔE*ab is smaller and the CPSNR is higher.
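For completeness, the two objective measures can be computed with a few lines of Python; the sketch below is a direct implementation of Eq. (9) and a simplified version of Eq. (10) that omits the spatial pre-filtering of the full S-CIELAB metric, and it assumes 8-bit RGB inputs and the availability of scikit-image's rgb2lab.

    import numpy as np
    from skimage.color import rgb2lab

    def cpsnr(original, resized):
        """Eq. (9): colour peak signal-to-noise ratio between two H x W x 3 uint8 images."""
        diff = original.astype(np.float64) - resized.astype(np.float64)
        mse = np.mean(diff ** 2)                    # average over all 3MN samples
        return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

    def mean_delta_e_ab(original, resized):
        """Simplified Eq. (10): mean per-pixel CIELAB distance (no spatial filtering)."""
        lab_o = rgb2lab(original / 255.0)
        lab_r = rgb2lab(resized / 255.0)
        return float(np.mean(np.sqrt(np.sum((lab_o - lab_r) ** 2, axis=-1))))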

  5. CONCLUSION

    In this paper, an improved combined demosaicing and resizing algorithm for single sensor array images is proposed. Based on the color difference concept and the composite length DCT, the mosaic image can be demosaiced and resized to an arbitrary ratio sized full color image. Experimental results show that the proposed algorithm has better image quality performance in terms of two objective color quality measures, the color peak signal-to-noise ratio (CPSNR) and the S-CIELAB ΔE*ab, and one subjective measure, the color artifacts, when compared with popular combinations of demosaicing and resizing methods, and it provides the best visual effect among the compared resizing algorithms. The average execution time of the proposed resizing algorithm is moderate. The algorithm can be used in consumer electronic products such as digital camcorders and digital cameras. The color difference idea is used to obtain a smooth image. A proposal for future work is to perform the color interpolation in the YCbCr domain instead of the RGB domain before resizing; it is expected that this would produce color images of better quality than the existing algorithms, with a higher CPSNR value and fewer color artifacts.

    REFERENCES

    1. S. Battiato, G. Gallo, and F. Stanco, A locally adaptive zooming algorithm for digital images, Image and Vision Computing, vol. 20, no. 11, pp. 805-812, 2002.
    2. B. E. Bayer, Color imaging array, U.S. Patent 3 971 065, 1976.
    3. K. H. Chung and Y. H. Chan, Color demosaicking using variance of color differences, IEEE Trans. Image Processing, vol. 15, no. 10, pp. 2944-2955, 2006.
    4. K. H. Chung and Y. H. Chan, A low-complexity joint color demosaicking and zooming algorithm for digital camera, IEEE Trans. Image Processing, vol. 16, no. 7, pp. 1705-1715, 2007.
    5. K. L. Chung, W. J. Yang, W. M. Yan, and C. C. Wang, Demosaicing of color filter array captured images using gradient edge detection masks and adaptive heterogeneity-projection, IEEE Trans. Image Processing, vol. 17, no. 12, pp. 2356-2367, 2008.
    6. K. L. Chung, W. J. Yang, P. Y. Chen, W. M. Yan, and C. S. Fuh, New joint demosaicing and zooming algorithm for color filter array, IEEE Trans. Consumer Electronics, vol. 55, no. 3, pp. 1477-1486, 2009.
    7. B. Gunturk, Y. Altunbasak, and R. Mersereau, Color plane interpolation using alternating projections, IEEE Trans. Image Processing, vol. 11, no. 9, pp. 997-1013, 2002.
    8. W. Lu and Y. P. Tang, Color filter array demosaicking: new method and performance measures, IEEE Trans. Image Processing, vol. 12, no. 10, pp. 1194-1210, 2003.
    9. R. Lukac, K. Martin, and K. N. Plataniotis, Demosaicked image postprocessing using local color ratios, IEEE Trans. Circuits and Systems for Video Technology, vol. 14, no. 6, pp. 914-920, 2004.
    10. R. Lukac and K. N. Plataniotis, Normalized color-ratio modeling for CFA interpolation, IEEE Trans. Consumer Electronics, vol. 50, no. 2, pp. 737-745, 2004.
    11. R. Lukac, K. N. Plataniotis, and D. Hatzinakos, Color image zooming on the Bayer pattern, IEEE Trans. Circuits and Systems for Video Technology, vol. 15, no. 11, pp. 1475-1492, 2005.
    12. Y. S. Park and H. W. Park, Arbitrary-ratio image resizing using fast DCT of composite length for DCT-based transcoder, IEEE Trans. Image Processing, vol. 15, no. 2, pp. 494-500, 2006.
    13. S. C. Pei and I. K. Tam, Effective color interpolation in CCD color filter arrays using signal correlation, IEEE Trans. Circuits and Systems for Video Technology, vol. 13, no. 6, pp. 503-513, 2003.
    14. L. Zhang and D. Zhang, A joint demosaicking-zooming scheme for single chip digital color cameras, Computer Vision and Image Understanding, vol. 107, no. 1-2, pp. 14-25, 2007.
    15. K. L. Chung, W. J. Yang, W. M. Yan, and C. S. Fuh, New joint demosaicing and arbitrary-ratio resizing algorithm for color filter array based on DCT approach, IEEE Trans. Consumer Electronics, vol. 56, no. 2, pp. 783-791, 2010.
    16. R. Gonzalez and R. Woods, Digital Image Processing, Addison Wesley, New York, 1992.
    17. H. A. Chang and H. H. Chen, Stochastic color interpolation for digital cameras, IEEE Trans. Circuits and Systems for Video Technology, vol. 17, no. 8, pp. 964-973, 2007.
    18. R. Lukac and K. N. Plataniotis, Digital camera zooming for the color filter arrays, Electronics Letters, vol. 39, no. 25, pp. 1806-1807, 2003.
    19. D. D. Muresan and T. W. Parks, Demosaicing using optimal recovery, IEEE Trans. Image Processing, vol. 14, no. 2, pp. 267-278, 2005.
    20. R. Lukac, K. N. Plataniotis, D. Hatzinakos, and M. Aleksic, A novel cost effective demosaicing approach, IEEE Trans. Consumer Electronics, vol. 50, no. 1, 2004.
