Segmentation of Caudate Nucleus from Human Brain MR Images using Wavelet Transform

DOI : 10.17577/IJERTCONV3IS08016




Jisha K R

Assistant Professor

Department of Electronics & Communication, Heera College of Engineering & Technology, Trivandrum,

Kerala

Abstract: In this paper, a knowledge-driven algorithm is presented to automatically delineate the caudate nucleus (CN) region of the human brain from a magnetic resonance (MR) image. The development, anatomy, and function of the CN are of great interest to the areas of cognitive and clinical neuroscience. Since the lateral ventricles (LVs) are good landmarks for positioning the CN, the algorithm first extracts the LVs and automatically localizes the CN from this information, guided by anatomic knowledge of the structure. This completes the initial segmentation procedure. The fine-tuning of the CN boundaries is then done by applying a wavelet transform. The wavelet transform is widely used to analyze images, and the watershed transformation is a useful morphological segmentation tool for a variety of grey-scale images. In this paper, an efficient segmentation method for medical image analysis is presented, which combines pyramidal image segmentation with a hierarchical watershed segmentation algorithm. The segmentation procedure consists of pyramid representation, image segmentation, region merging, and region projection. Each layer of the pyramid is split into a number of regions by a root-labeling technique, and the regions are projected onto the next higher-resolution layer by an inverse wavelet transform. The projection proceeds layer by layer until the full-resolution layer is reached, at which point segmentation is complete. A morphological operation is used to smooth the original image while filtering out noise. I have applied this approach to the analysis of medical images. Experimental results demonstrate that the method is effective.

Keywords: Caudate nucleus, magnetic resonance imaging (MRI), segmentation, validation.

  1. INTRODUCTION

    As a key subcortical component of the basal ganglia, the caudate nucleus (CN) is involved in numerous critical brain functions, including sensory-motor control, cognition, language, emotion, reward, and learning. Aberrant morphology and function of the CN have also been implicated in a number of important brain disorders, including Huntington's disease, Tourette syndrome, autism, attention deficit hyperactivity disorder, and fragile X syndrome. Accordingly, the development, anatomy, and function of the CN are of great interest to the areas of cognitive and clinical neuroscience. The CN is a large C-shaped structure juxtaposed along the surface of the lateral ventricle (LV). Compared to many other subcortical regions, the CN is relatively straightforward to extract from

    magnetic resonance imaging (MRI) data, since a large portion of the structure has a clean and obvious boundary with the LVs and with surrounding white matter (WM). Portions of this boundary can be localized with standard edge detection, provided the appropriate parameters are chosen. However, the CN is also adjacent to related, although separate, subcortical nuclei, where the boundaries between structures are much less clear. For example, there are no obvious borders visible between the CN and the more ventral nucleus accumbens in typical high-resolution MRI scans, although these two structures are readily distinguished by standard histology. Another methodological concern in CN segmentation is where the medial boundary of this structure borders the more lateral-inferior putamen. Here, cell bridges appear as fingers that span the gap between the two structures.

    Neuroimaging investigations frequently focus on the CN as a region-of-interest (ROI) in studies of typical and atypical brain development and function. The results of these investigations often include data pertaining to volume, shape, activation, and functional connectivity of the CN. However, the ability to generate these findings depends on an accurate and reproducible CN segmentation that is typically performed by a trained rater who manually circumscribes the CN on contiguous slices from a high-resolution magnetic resonance (MR) dataset.

    Manual segmentation of the CN typically requires a significant time investment. Even when well-trained research staff perform this function, manual segmentation is prone to errors associated with interobserver and intraobserver variability (also called rater drift, referring to the tendency of rater bias to increase over time). Thus, in addition to the reduced productivity associated with the time investment required for manual measurement, variability in the results obtained from such measurements will add noise to datasets where between-group effect sizes may already be small.

    Alternatives to manual segmentation have been proposed using a variety of computer-assisted methods. These include deformable models, elastic image registration techniques relying on atlases, a Bayesian approach based on manually labeled training images, active contour evolution, and knowledge-based approaches such as fuzzy modeling, information fusion, and histogram-driven methods. Despite the fact that a large number of fully automatic and

    semiautomatic segmentation methods have been described in the literature, many imaging research laboratories continue to use manual delineation as the technique of choice for CN segmentation. Reluctance to embrace fully automatic segmentation approaches may be due to concerns about reliability, lack of flexibility for data of varying characteristics, their reliance on human experts for initialization and/or guidance, and the high computational demands of approaches based on image registration.

    To bridge the gap between methodological advances and laboratory routine, we have developed a knowledge-driven algorithm to automatically delineate the CN region of the human brain from an MR image. We approached the problem of delineating the CN by first identifying the LVs, brain regions that are easily locatable due to the high tissue contrast of their contained cerebrospinal fluid (CSF) and their clearly defined borders. Shape and positional information is extracted from this initial analysis, and this information is combined with anatomical knowledge to automatically localize CN boundaries. Validation of this algorithm was performed using data from an ongoing study of fragile X syndrome, a neurogenetic condition in which CN morphology is known to be abnormal. The results demonstrate that an automated algorithm driven by MR data characteristics and anatomic knowledge is able to segment the CN on a consistent and accurate basis.

  2. METHOD

An Automatic Segmentation Algorithm for the CN

This algorithm takes as input the MR image of a human head and the 3-D locations of the anterior and posterior commissures (AC and PC, respectively). Anatomically, the CN is juxtaposed along the LV surface, and a large portion of the structure has a clean and obvious boundary with the LVs, making the LVs good landmarks for localizing the head and body of the CN. The first step of the algorithm automatically locates the LVs based on a previously well-defined procedure, and produces an initial CN candidate set of voxels from these ventricle positions. Owing to partial volume effects, image noise, and the limited spatial and contrast resolution of MRI, the medial and inferior boundaries of the CN are not distinct, even in high-resolution images with a 1 mm³ voxel size. Therefore, the second step of the algorithm is to fine-tune the CN boundaries determined above by incorporating additional anatomical knowledge of the shape and position of the CN into the algorithm. These two processing stages are described in detail below.

  1. Initial Extraction of an Approximate CN ROI:

    This algorithm step starts by producing a segmentation of the LVs, then determining an initial CN location by region growing from gray matter (GM) voxels adjacent to the ventricles. An intensity histogram is generated to identify the approximate intensity ranges of cerebrospinal fluid (CSF), GM, and WM. CN subregions are extracted in 3-D space from contiguous coronal slices on a slice-by-

    slice basis. Bounding boxes for initial CN extraction are defined on each coronal slice to reduce potential region-growing leakage through cell bridges into the putamen (see Fig. 1). The superior and inferior borders of the CN are taken as horizontal lines drawn through the roof and the inferior tip of the LVs, respectively; the medial border is the LV; and the lateral border is set 35 mm laterally away from the AC base point. The CN is subdivided into two subregions, an anterior subregion and a posterior subregion, by a coronal plane passing through the anterior commissure (AC). Each subregion is extracted independently: this approach makes it more straightforward to incorporate anatomical knowledge and intensity threshold information specific to each subregion as opposed to the entire structure.
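    The intensity-histogram step mentioned above only identifies approximate CSF, GM, and WM ranges. As a hedged illustration (not the author's procedure), the sketch below estimates those ranges with multi-Otsu thresholding from scikit-image; the names tissue_intensity_ranges, volume, and brain_mask are hypothetical.

    # Hedged sketch: approximate CSF/GM/WM intensity ranges from the histogram
    # of brain voxels using multi-Otsu thresholds (an assumed method, not the
    # one used in the paper).
    import numpy as np
    from skimage.filters import threshold_multiotsu

    def tissue_intensity_ranges(volume, brain_mask):
        """Return (csf, gm, wm) ranges as (low, high) tuples for a T1-weighted
        volume, where CSF is darkest and WM is brightest."""
        voxels = volume[brain_mask]                      # histogram over brain voxels only
        t1, t2 = threshold_multiotsu(voxels, classes=3)  # two thresholds -> three classes
        return ((voxels.min(), t1),   # CSF range
                (t1, t2),             # GM range
                (t2, voxels.max()))   # WM range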

    The CN is assumed to be comprised of gray matter (GM) voxels only. Growing is initiated from GM seed points located next to the lateral boundary of the LV at the slice that contains the AC, and then extended to the connected GM voxels within the current bounding box. After extracting the CN within the bounding box for this slice, region-growing continues by adding the connected GM voxels on successive slices anteriorly and posteriorly. This process yields an approximation of the head and body of the CN (see Fig. 1).
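    A minimal sketch of the bounding-box-constrained growth of connected GM voxels on a single coronal slice is given below, assuming NumPy and SciPy. It approximates the step just described rather than reproducing the author's implementation, and all names (grow_cn_on_slice, seeds, gm_range, box) are hypothetical.

    import numpy as np
    from scipy import ndimage as ndi

    def grow_cn_on_slice(coronal_slice, seeds, gm_range, box):
        """Grow a CN candidate on one coronal slice.

        coronal_slice : 2-D intensity array
        seeds         : boolean mask of GM seed voxels next to the lateral LV wall
        gm_range      : (low, high) GM intensity range from the histogram step
        box           : (rmin, rmax, cmin, cmax) bounding box limiting the growth
        """
        low, high = gm_range
        rmin, rmax, cmin, cmax = box
        gm = (coronal_slice >= low) & (coronal_slice <= high)
        inside = np.zeros_like(gm)
        inside[rmin:rmax, cmin:cmax] = True     # forbid leakage outside the bounding box
        gm &= inside
        labels, _ = ndi.label(gm)               # connected GM components inside the box
        keep = np.unique(labels[seeds & gm])
        keep = keep[keep != 0]
        return np.isin(labels, keep)            # GM voxels connected to the seed points

    On successive slices, the previous slice's result can serve as the seed mask, which mimics the anterior and posterior propagation described above.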

    Fig. 1. Initial extraction result. White color voxels represent the CN; the white-color lines, which are drawn through the roof and the inferior tip of the LVs, and the lateral ventricle define the bounding box. Note that some parts of the CN are potentially missed in the initial extraction step.

    Under some conditions, partial volume effects, as manifested by variation in signal intensities, can adversely affect CN segmentation. Therefore, this initial CN segmentation process is subject to the following anatomical constraints: 1) the CN is comprised of GM voxels within the bounding box only; 2) the outline of the lateral boundary with WM should be smooth, with a change of only 1 voxel between successive slices; 3) the shift in the CN boundary on each successive coronal slice should be gradual and smooth; and 4) the tail of the CN is assumed to taper as it proceeds along the side of the LVs, so the superior boundary of the tail in the posterior subregion should be equal or inferior to the superior boundary of the previous slice.
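    Constraints 2 and 3 essentially limit how far the lateral (CN/WM) boundary may move between successive coronal slices. The sketch below shows one way such a check could be coded; it is an assumption about the rule's implementation, and the names are hypothetical.

    import numpy as np

    def clip_lateral_boundary(region, prev_lateral_col, max_shift=1):
        """Keep the lateral boundary within `max_shift` voxels of its position on
        the previous slice (constraints 2 and 3). `region` is a boolean 2-D mask;
        `prev_lateral_col` is the lateral-most CN column on the previous slice.
        Assumes the column index increases in the lateral direction."""
        cols = np.where(region.any(axis=0))[0]
        if cols.size == 0:
            return region, prev_lateral_col
        allowed = prev_lateral_col + max_shift      # boundary may advance by at most one voxel
        clipped = region.copy()
        clipped[:, allowed + 1:] = False            # drop voxels lateral to the allowed column
        return clipped, min(int(cols.max()), allowed)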

    The algorithm uses the bounding boxes to prevent GM voxels from growing into the putamen via GM cell bridges that connect these two structures in T1-weighted MR datasets. The smoothness constraint on the CN outline

    of the lateral boundary with WM (Constraints 2 and 3 above) in the anterior subregion further prevents putamen GM voxels from being regarded as CN. However, these bounding boxes will initially result in misclassification of some GM voxels as non-CN in the anterior subregion. Subsequent steps, described below, are designed to resolve this initial misclassification error.

  2. Fine-Tuning CN Boundaries:

    The goals of this stage are 1) to correct initial false-negative CN misclassification errors, 2) to define the visually ambiguous boundary between the CN and the nucleus accumbens using anatomical knowledge and constraints, and

  3. to obtain a final CN segmentation with smooth, recognizable, and valid boundaries.

  3. SEGMENTATION METHOD

A general outline of the proposed method is shown in Fig. 1. After the original image I is smoothed with a morphological filter to reduce noise particles, the pyramid representation creates multiresolution images using the wavelet transform. The images at different layers represent different image resolutions. The images are first segmented into a number of regions at each layer of the pyramid with the watershed transformation. Starting from the top layer I_L, regions are merged according to a merging parameter. The result at layer I_L is then projected onto layer I_{L-1} by an inverse wavelet transform, and this is repeated until L equals 0, where I_0 denotes the full-resolution (original) image.
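To make this flow concrete, here is a condensed, runnable sketch assuming PyWavelets, SciPy, and scikit-image. It keeps only the LL band at each layer, derives watershed markers from a simple low-gradient rule, omits the region-merging step detailed later, and replaces the inverse-wavelet projection with nearest-neighbour upsampling, so it is an approximation of the pipeline rather than the author's implementation.

    import numpy as np
    import pywt
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed
    from skimage.transform import resize

    def pyramid_segment(image, levels=2):
        # Build the approximation pyramid I_0 ... I_L with a Haar wavelet.
        pyramid = [np.asarray(image, dtype=float)]
        for _ in range(levels):
            cA, _ = pywt.dwt2(pyramid[-1], 'haar')   # keep only the LL (approximation) band
            pyramid.append(cA)

        # Watershed on the coarsest layer I_L, with markers at low-gradient pixels.
        grad = sobel(pyramid[-1])
        markers, _ = ndi.label(grad < 0.1 * grad.max())
        labels = watershed(grad, markers)

        # Project labels layer by layer back to full resolution (nearest-neighbour
        # upsampling stands in for the inverse-wavelet projection described below).
        for layer in reversed(pyramid[:-1]):
            labels = resize(labels, layer.shape, order=0, preserve_range=True).astype(int)
        return labels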

    1. Multiscale Morphological Filtering Method

      For image segmentation, an algorithm often needs a preprocessing step such as smoothing to reduce the effect of undesired perturbations, which might cause over- and under-segmentation; very small-scale details are usually treated as noise. Therefore, many morphological filters are designed to smooth noisy gray-level images by a fixed composition of opening and closing with a given structuring element. The main disadvantage of conventional opening and closing is that they do not allow a perfect preservation of the edge information, as shown in Fig. 2(b). These operators emphasize only the size of the features but ignore their shape completely. However, it is possible to design morphological filters by reconstruction that satisfy these requirements for both the shape and the size of the features.

      The elementary geodesic dilation and erosion of size one (the smallest size) of an image I with respect to a reference image R are defined, respectively, as the pointwise minimum between I dilated by an SE S of size one and R, and the pointwise maximum between I eroded by S and R:

      $\delta_R^{(1)}(I) = \delta_S(I) \wedge R, \qquad \varepsilon_R^{(1)}(I) = \varepsilon_S(I) \vee R.$

      Hence, reconstruction by dilation and by erosion of arbitrary size are obtained through iteration as

      $\delta_R^{(n)}(I) = \delta_R^{(1)}\big(\delta_R^{(n-1)}(I)\big), \qquad \varepsilon_R^{(n)}(I) = \varepsilon_R^{(1)}\big(\varepsilon_R^{(n-1)}(I)\big),$

      repeated until stability is reached.

      Based on these operations, opening and closing by reconstruction may be defined as

      $\gamma^{\mathrm{rec}}_S(I) = \delta_I^{(\infty)}\big(\varepsilon_S(I)\big), \qquad \varphi^{\mathrm{rec}}_S(I) = \varepsilon_I^{(\infty)}\big(\delta_S(I)\big),$

      i.e., the erosion (dilation) of I by S is reconstructed under (above) the original image I.

      Because opening and closing by reconstruction avoid this drawback of the conventional operations, as shown in Fig. 2(c), this morphological filtering helps the segmentation obtain a more semantic region partitioning in the later stages. Before noise particles are removed, their scale must be estimated so that an appropriate structuring-element size can be chosen for the analysis. Since the main concern here is segmentation, we use iterative filtering, alternating opening and closing by reconstruction with structuring elements of increasing size.
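      As a concrete illustration, the sketch below smooths an image with opening and closing by reconstruction using scikit-image; the function name and the fixed radius are assumptions, and an iterative multiscale version would repeat these two filters with structuring elements of increasing radius.

      import numpy as np
      from skimage.morphology import disk, erosion, dilation, reconstruction

      def smooth_by_reconstruction(image, radius=2):
          selem = disk(radius)
          # Opening by reconstruction: erode, then reconstruct by dilation under the image.
          opened = reconstruction(erosion(image, selem), image, method='dilation')
          # Closing by reconstruction: dilate, then reconstruct by erosion above the result.
          closed = reconstruction(dilation(opened, selem), opened, method='erosion')
          return closed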

    2. Pyramidal Representation

      Multiresolution methods attempt to obtain a global view of an image by examining it at various resolution levels, building a pyramid representation of the image. Several types of multiresolution image decomposition exist, including Gaussian pyramids, Laplacian pyramids, and wavelets. Both Gaussian and Laplacian pyramids generally entail some loss of information. Unlike these methods, the wavelet transform provides a complete image representation and performs decomposition according to

      both scale and orientation. To create a multiresolution image, we used a Haar wavelet transform.

      The wavelet transform of a signal f(x) is performed by convolving the signal with a family of basis functions:

      $Wf(s, t) = \int f(x)\, \psi_{s,t}(x)\, dx, \qquad \psi_{s,t}(x) = \frac{1}{\sqrt{s}}\, \psi\!\left(\frac{x - t}{s}\right),$

      where $\psi_{s,t}$ is the basis function, and s and t are referred to as the dilation and translation parameters, respectively.

      The image can be decomposed into its wavelet coefficients using Mallat's pyramid algorithm. With Haar wavelets, the original image is first passed through low-pass and high-pass filters to generate the LL, LH, HL and HH subbands. The decomposition is then repeated on the LL subband to obtain the next four subbands. Let $A_{2^j}f$, $D^1_{2^j}f$, $D^2_{2^j}f$ and $D^3_{2^j}f$ represent the transform results LL, LH, HL and HH at scale $2^j$, respectively. For a J-scale transform, the original image can be represented by

      $\big\{ A_{2^J}f,\ \big(D^k_{2^j}f\big)_{k=1,2,3;\ 1 \le j \le J} \big\},$

      where the size of the wavelet representation is the same as that of the original signal. This representation is composed of a coarse approximation signal at resolution $2^{-J}$ and a set of detail signals at resolutions $2^{-j}$, $1 \le j \le J$.
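      A minimal sketch of such a J-scale Haar decomposition, assuming the PyWavelets package (the paper does not name a particular library):

      import numpy as np
      import pywt

      def haar_pyramid(image, J=2):
          """Return [cA_J, (cH_J, cV_J, cD_J), ..., (cH_1, cV_1, cD_1)]: the coarse
          approximation and the horizontal/vertical/diagonal detail bands per scale."""
          return pywt.wavedec2(np.asarray(image, dtype=float), 'haar', level=J)

      # The representation is complete: the coefficient count equals the pixel count,
      # and pywt.waverec2(coeffs, 'haar') recovers the original image.
      coeffs = haar_pyramid(np.random.rand(256, 256), J=2)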

    3. Image Segmentation and Segmentation Projection

      After creating the pyramid image using the wavelet transform, each resolution image I_i is segmented through the application of a watershed algorithm. Generally, the blurred images represented in each layer of the pyramid are used for segmentation. Segmentation is applied in two stages. First, the scale of the morphological operation to be used is defined at every layer. Then, the classical watershed algorithm is used to generate the object contours.
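      One possible implementation of this per-layer step, assuming scikit-image and SciPy, is sketched below; markers are taken from the h-minima of the gradient image to limit spurious basins, which is an assumed choice rather than the author's.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import sobel
      from skimage.morphology import h_minima
      from skimage.segmentation import watershed

      def watershed_layer(layer, h=0.05):
          gradient = sobel(layer)
          # Suppress shallow minima (depth < h * max gradient) so fewer spurious basins remain.
          markers, _ = ndi.label(h_minima(gradient, h * gradient.max()))
          return watershed(gradient, markers)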

      However, when an image is degraded by noise, it becomes over-segmented, and the over-segmented image may require further merging of some regions. The decision on which regions to merge is based on homogeneity and similarity criteria computed from the wavelet coefficients. For each region R_i of the segmented image, we calculate the mean (M), second-order (µ2), and third-order (µ3) central moments of the wavelet coefficients of the region as [8]

      $M(R_i) = \frac{1}{\mathrm{num}(R_i)} \sum_{p \in R_i} w(p),$

      $\mu_2(R_i) = \frac{1}{\mathrm{num}(R_i)} \sum_{p \in R_i} \big(w(p) - M(R_i)\big)^2, \qquad \mu_3(R_i) = \frac{1}{\mathrm{num}(R_i)} \sum_{p \in R_i} \big(w(p) - M(R_i)\big)^3,$

      where num(R_i) is the number of pixels of segmented region R_i and w(p) is the wavelet coefficient at pixel p. To merge the segmented regions, a similarity value mv_i, based on a distance d between these moment features, is computed for each segmented region i, i = 1, 2, ..., N, where N is the number of segmented regions and R(M), R(µ2), and R(µ3) are the mean, second- and third-order moment values of a segmented region, respectively. If the mv values of two adjacent regions are within a specified tolerance, the two adjacent regions are merged.
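      A hedged sketch of the merge test follows: per-region mean and second/third-order central moments of the wavelet coefficients, compared between adjacent regions. The exact distance used in the paper is not reproduced here, so a simple sum of absolute moment differences is assumed.

      import numpy as np

      def region_moments(coeffs, labels, region_id):
          w = coeffs[labels == region_id]
          m = w.mean()
          mu2 = ((w - m) ** 2).mean()                 # second-order central moment
          mu3 = ((w - m) ** 3).mean()                 # third-order central moment
          return np.array([m, mu2, mu3])

      def should_merge(coeffs, labels, r1, r2, tol=0.1):
          # Assumed similarity: sum of absolute differences of the moment features.
          d = np.abs(region_moments(coeffs, labels, r1) -
                     region_moments(coeffs, labels, r2)).sum()
          return d < tol                              # merge adjacent regions that are similar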


Once the merged image M_L is generated at layer L, it must be projected onto the next layer in order to complete the full-resolution image segmentation. Direct projection of the segmented image gives very poor results, as can be observed in Fig. 4: at each projected pyramid level, the region frontiers become less smooth, showing a heavy blocking-artifact appearance. To overcome this problem, we use an inverse wavelet transform to implement the projection from the low-resolution to the high-resolution layer step by step.

During the projection from layer i to layer i-1, a parent-child spatial relationship between the image elements of two successive layers is defined. This relationship is evaluated by means of a similarity measure; different segmentation results can be obtained with different definitions of the spatial relationship and the similarity measure. The spatial relationship between the image elements of two successive layers of the pyramid describes the family relationship between these elements, and the children of a layer can belong to different parents in the upper layer. Similarity between a child image element and its possible parents describes how similar they are; it can be defined using features of the image elements, for example by comparing the contrast or texture properties of a child and its possible parent(s). Furthermore, the child-parent relationship is validated by a similarity measure that takes the gray-value statistics of the potential parents and the child into account. Among the potential parents, the label of the parent with the highest similarity is assigned to the child, and a new label is assigned to a child that is too dissimilar to any parent. This validation is repeated in the next lower layer of the pyramid, and finally results in the generation of labeled regions in the bottom layer.
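A minimal sketch of this parent-child label assignment is given below, using the difference of mean grey values as the similarity measure (one of the options mentioned above; the threshold and names are assumptions). The parent layer is assumed to have been upsampled to the child layer's shape beforehand.

    import numpy as np

    def assign_labels(child_labels, child_image, parent_labels_up, parent_image_up, tol=10.0):
        out = np.zeros_like(child_labels)
        next_label = int(parent_labels_up.max()) + 1
        for c in np.unique(child_labels):
            mask = child_labels == c
            child_mean = child_image[mask].mean()
            parents = np.unique(parent_labels_up[mask])      # parents overlapping this child
            sims = [abs(child_mean - parent_image_up[parent_labels_up == p].mean())
                    for p in parents]
            best = int(np.argmin(sims))
            if sims[best] <= tol:
                out[mask] = parents[best]       # inherit the most similar parent's label
            else:
                out[mask] = next_label          # too dissimilar to any parent: new region
                next_label += 1
        return out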

Fig. Wavelet pyramid representation of a brain CT image. Top: multiresolution images with resolutions from 64 to 256; bottom: the segmented regions corresponding to the top images.

  4. EXPERIMENTAL RESULTS

In this section, I analyze the proposed segmentation scheme for medical images, focusing on the following: i) the morphological operation is useful for smoothing the image; ii) multiresolution analysis refines the segmented regions.

To evaluate the performance of the proposed approach, simulations were carried out on medical images, such as a brain CT image and a cell image. The pyramid image is generated by the two-scale Haar wavelet transform, and regions and labels are extracted from the low-resolution image. I evaluated the segmentation results of the presented method using common objective measurements: the number of segmented regions, PSNR, the Goodness measure, and computation time. A larger value of the Goodness measure indicates that the regions were not well segmented during the image segmentation process. In this method, the use of low resolution is highly preferable, since it yields the best objective quality when the number of regions and the Goodness value are considered at the same time.
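The paper does not spell out how PSNR is computed for a segmentation; one common convention, assumed in the sketch below, is to compare the original image with a piecewise-constant image in which every segmented region is replaced by its mean grey value.

    import numpy as np

    def segmentation_psnr(image, labels, peak=255.0):
        recon = np.zeros_like(image, dtype=float)
        for r in np.unique(labels):
            recon[labels == r] = image[labels == r].mean()   # region replaced by its mean
        mse = ((image.astype(float) - recon) ** 2).mean()
        return 10.0 * np.log10(peak ** 2 / mse)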

  5. CONCLUSION

In this paper, a method for image segmentation using a multiresolution-based watershed segmentation algorithm is described. I analyzed the proposed segmentation scheme, focusing on the multiresolution properties of gradient watershed boundaries for the image in each layer of the pyramid, and described how regions are projected onto the lower layers, building a hierarchical parent-child relationship between the regions of two successive layers. As shown in the experimental results, the algorithm generates visually meaningful segmentation results and demonstrates that the proposed method is efficient for medical image analysis.

REFERENCES

  [1] A. M. Graybiel, "The basal ganglia: Learning new tricks and loving it," Curr. Opin. Neurobiol., vol. 15, pp. 638-644, 2005.

  [2] S. Tisch, P. Silberstein, P. Limousin-Dowsey, and M. Jahanshahi, "The basal ganglia: Anatomy, physiology, and pharmacology," Psychiatr. Clin. North Am., vol. 27, pp. 757-799, 2004.

  [3] S. Eliez, C. M. Blasey, L. S. Freund, T. Hastie, and A. L. Reiss, "Brain anatomy, gender and IQ in children and adolescents with fragile X syndrome," Brain, vol. 124, pp. 1610-1618, 2001.

  [4] E. Hollander, E. Anagnostou, W. Chaplin, K. Esposito, M. M. Haznedar, E. Licalzi, S. Wasserman, L. Soorya, and M. Buchsbaum, "Striatal volume on magnetic resonance imaging and repetitive behaviors in autism," Biol. Psychiatry, vol. 58, pp. 226-232, 2005.

  [5] J. W. Mink, "Neurobiology of basal ganglia and Tourette syndrome: Basal ganglia circuits and thalamocortical outputs," Adv. Neurol., vol. 99, pp. 89-98, 2006.

  [6] S. M. Bhandarkar and Z. Hui, "Image segmentation using evolutionary computation," IEEE Trans. Evol. Comput., vol. 3, no. 1, pp. 1-21, 1999.

  [7] M. R. Rezaee, P. M. J. van der Zwet, B. P. E. Lelieveldt, R. J. van der Geest, and J. H. C. Reiber, "A multiresolution image segmentation technique based on pyramidal segmentation and fuzzy clustering," IEEE Trans. Image Process., vol. 9, no. 7, pp. 1238-1248, 2002.

  [8] J. Z. Wang, J. Li, R. M. Gray, and G. Wiederhold, "Unsupervised multiresolution segmentation for images with low depth of field," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 1, pp. 85-90, 2001.

  [9] P. Salembier, "Morphological multiscale segmentation for image coding," Signal Process., vol. 38, pp. 359-386, 1994.

  [10] S. Mukhopadhyay and B. Chanda, "An edge preserving noise smoothing technique using multiscale morphology," Signal Process., vol. 82, pp. 527-544, 2002.
