A Smartphone based Wound Assessment System for Patients with Diabetes using Accelerated Mean Shift Algorithm



  • Open Access
  • Authors : Anuja Titus, Shobhana S., Ashwin Singerji, Deepa D. Raj
  • Paper ID : IJERTV6IS050518
  • Volume & Issue : Volume 06, Issue 05 (May 2017)
  • DOI : http://dx.doi.org/10.17577/IJERTV6IS050518
  • Published (First Online): 31-05-2017
  • ISSN (Online) : 2278-0181
  • Publisher Name : IJERT
  • License: This work is licensed under a Creative Commons Attribution 4.0 International License



Anuja Titus

PG Scholar School of CSE

Mar Ephraem College of Engineering and Technology Elavuvilai, Marthandam, India

Shobhana .S

Assistant Professor School of CSE

Mar Ephraem College of Engineering and Technology Elavuvilai, Marthandam, India

Ashwin Singerji Assistant Professor School of CSE

Mar Ephraem College of Engineering and Technology Elavuvilai, Marthandam, India

Deepa. D. Raj

PG Scholar School of CSE

Mar Ephraem College of Engineering and Technology Elavuvilai, Marthandam, India

Abstract: Diabetic foot ulcers represent a significant health issue. Currently, clinicians and nurses mainly base their wound assessment on visual examination of wound size and healing status, while the patients themselves seldom have an opportunity to play an active role. Hence, a more quantitative and cost-effective examination method that enables the patients and their caregivers to take a more active role in daily wound care can potentially accelerate wound healing, save travel costs and reduce healthcare expenses. Considering the prevalence of smartphones with high resolution digital cameras, assessing wounds by analyzing images of chronic foot ulcers is an attractive option. In this paper, we propose a novel wound image analysis system implemented solely on an Android smartphone. The wound image is captured by the camera on the smartphone with the assistance of an image capture box. After that, the smartphone performs wound segmentation by applying the accelerated mean shift algorithm. Specifically, the outline of the foot is determined based on skin color, and the wound boundary is found using a simple connected region detection method. Within the wound boundary, the healing status is next assessed based on the red-yellow-black color evaluation model. Moreover, the healing status is quantitatively assessed, based on trend analysis of time records for a given patient. Experimental results on wound images collected at the UMASS Memorial Health Center Wound Clinic (Worcester, MA), following an IRB (Institutional Review Board) approved protocol, show that our system can be efficiently used to analyze wound healing status with promising accuracy.

Index Terms: Patients with diabetes, Wound Analysis, Mean shift, Android based smartphone

  1. INTRODUCTION

    For individuals with type 2 diabetes, foot ulcers constitute a significant health issue, affecting 5-6 million individuals in the US [1] [2]. Foot ulcers are painful, susceptible to infection and very slow to heal [3] [4]. According to published statistics, diabetes-related wounds are the primary cause of non-traumatic lower limb amputations, with approximately 71,000 such amputations in the US in 2004 [5]. Moreover, the cost of treating diabetic foot ulcers is estimated at $15,000 per year per individual. Overall diabetes healthcare cost was estimated at $245 billion in 2012 and is expected to increase in the coming years [5].


    There are several problems with current practices for treating diabetic foot ulcers. First, patients must go to their wound clinic on a regular basis to have their wounds checked by their clinicians. This need for frequent clinical evaluation is not only inconvenient and time consuming for patients and clinicians, but also represents a significant health care cost, because patients may require special transportation, e.g., ambulances. Second, a clinician's wound assessment process is based on visual examination. He/she describes the wound by its physical dimensions and the color of its tissues, providing important indications of the wound type and the stage of healing [6]. Because the visual assessment does not produce objective measurements and quantifiable parameters of the healing status [7], tracking a wound's healing process across consecutive visits is a difficult task for both clinicians and patients.

    Technology employing image analysis techniques is a potential solution to both these problems. Several attempts have been made to use image processing techniques for such tasks, including the measurement of area and volume instrument system (MAVIS) [8] and the medical digital photogrammetric system (MEDPHOS) [9]. These approaches suffer from several drawbacks, including high cost, complexity and lack of tissue classification [9].

    To better determine the wound boundary and classify wound tissues, researchers have applied image segmentation and supervised machine learning algorithms for wound analysis. A French research group proposed a support vector machine (SVM) based wound classification method [10] [11]. The same idea has also been employed in [12] for the detection of melanoma at a curable stage. Although the SVM classifier method led to good results on typical wound images [10], it is not feasible to implement the training process and the feature extraction on current smartphones due to the computational demands. Furthermore, the supervised learning algorithm requires a large number of training image samples and experienced clinical input, which is difficult and costly to obtain.

    Our solution provides image analysis algorithms that run on a smartphone, and thus provide a low cost and easy-to-use device for self-management of foot ulcers for patients with type 2 diabetes. Our solution engages patients as active participants in their own care, meeting the recommendation of the Committee on Quality of Health Care in America to provide more information technology (IT) solutions [13].

    The widely used commodity smartphone containing a high-resolution camera is a viable candidate for image capture and image processing, provided that the processing algorithms are both accurate and well-suited for the available hardware and computational resources. To convert an ordinary smartphone into a practical device for self-management of diabetic wounds, we need to address two tasks: (i) develop a simple method for patients to capture an image of their foot ulcers; (ii) design a highly efficient and accurate algorithm for real-time wound analysis that is able to operate within the computational constraints of the smartphone.

    Our solution for task (i) was specifically designed to aid patients with type 2 diabetes in photographing ulcers occurring on the soles of their feet. This is particularly challenging due to mobility limitations, which are common for individuals with advanced diabetes. To this end, we designed and built an image capture box with an optical system containing a dual set of front surface mirrors, integrated LED lighting and a comfortable, slanted surface on which the patients place their foot. The design ensures consistent illumination and a fixed optical path length between the sole of the foot and the camera, so that pictures captured at different times are taken from the same camera angle and under the same lighting conditions. Task (ii) was implemented by utilizing an accurate, yet computationally efficient algorithm, i.e., the mean shift algorithm, for wound boundary determination, followed by color segmentation within the wound area for assessing healing status.

    In our previous work [14], the wound boundary determination was done with a particular implementation of the level set algorithm, specifically the distance regularized level set evolution (DRLSE) method [15]. The principal disadvantage of the level set algorithm is that the iteration of the global level set function is too computationally intensive to be implemented on smartphones, even with the narrow band confined implementation based on GPUs [15]. In addition, the level set evolution depends entirely on the initial curve, which has to be pre-delineated either manually or by a well-designed algorithm. Finally, false edges may interfere with the evolution when the skin color is not sufficiently uniform, and missing boundaries, which occur frequently in medical images, result in evolution leakage (the level set evolution does not stop properly on the actual wound boundary). Hence, a better method was required to solve these problems.

    To address these problems, we replaced the level set algorithms with the efficient mean shift segmentation algorithm [16]. While it addresses the above problems, it also creates additional challenges, such as over-segmentation, which we solved using the region adjacency graph (RAG) based region merge algorithm [17]. In this paper, we present the entire process of recording and analyzing a wound image, using algorithms that are executable on a smartphone, and provide evidence of the efficiency and accuracy of these algorithms for analyzing diabetic foot ulcers.

    This paper is organized as follows: Section II-A provides an overview of the structure of the wound image analysis software system. Section II-B briefly introduces the mean shift algorithm used in our system and related region merge methods. Section II-C introduces the wound analysis method based on the image segmentation results, including foot outline detection, wound boundary determination, color segmentation within the wound and healing status evaluation. In Section III, the GPU optimization method of the mean shift segmentation algorithm is discussed. Section IV presents the image capture box, designed for patients with diabetic foot ulcers to easily use the smartphone to take an image of the bottom of their foot. Experimental results are presented and analyzed in Section V. Finally, Section VI provides an overall assessment of the wound image analysis system. A preliminary version of this work has been reported in [18].

  2. WOUND ANALYSIS METHOD

    1. Wound Image Analysis System Overview

      Our quantitative wound assessment system consists of several functional modules, including wound image capture, wound image storage, wound image preprocessing, wound boundary determination, wound analysis by color segmentation and wound trend analysis based on a time sequence of wound images for a given patient. All these processing steps are carried out solely by the computational resources of the smartphone. The functional diagram of our quantitative wound assessment system is shown in Figure 1 and explained below. Note that the words highlighted in bold in the text correspond to specific blocks in figures with block diagrams. While the image capture is the first step in the flow chart, the image capture box is not one of the image processing steps and is therefore presented later in Section IV.

      A Nexus 4 smartphone was chosen due to its excellent CPU+GPU performance and high resolution camera. Although there are likely performance variations across the cameras of modern smartphones, such a study was considered beyond the scope of this paper. After the wound image is captured, the JPEG file path of this image is added into a wound image database. This compressed image file, which cannot be processed directly with our main image processing algorithms, therefore needs to be decompressed into a 24-bit bitmap file based on the standard RGB color model. In our system, we use the built-in APIs of the Android smartphone platform to accomplish the JPEG compression and decompression task.

      The image quality parameter was used to control the JPEG compression rate. Setting image quality to 80 was shown empirically to provide the best balance between quality and storage space. For an efficient implementation on the smartphone alone, no method was used to further remove the artifacts introduced by JPEG lossy compression.
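      As a concrete illustration, the JPEG handling described above can be done entirely with the standard Android Bitmap APIs. The following is a minimal sketch of that step; the class and method names are our own, not taken from the paper.

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.io.FileOutputStream;
import java.io.IOException;

// Minimal sketch of the JPEG storage step (illustrative names).
public final class JpegCodec {

    // Decompress a stored JPEG into an ARGB_8888 bitmap (8 bits per channel),
    // which the segmentation algorithms can process directly.
    static Bitmap decode(String jpegPath) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inPreferredConfig = Bitmap.Config.ARGB_8888;
        return BitmapFactory.decodeFile(jpegPath, opts);
    }

    // Re-compress a bitmap as JPEG with quality 80, the empirically chosen
    // balance between image quality and storage space.
    static void encode(Bitmap image, String outPath) throws IOException {
        try (FileOutputStream out = new FileOutputStream(outPath)) {
            image.compress(Bitmap.CompressFormat.JPEG, 80, out);
        }
    }
}
```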

      In the Image preprocessing step, we first down-sample the high resolution bitmap image to speed up the subsequent image analysis and to eliminate excessive details that may complicate wound image segmentation. In our case, we down-sample the original image (pixel dimensions 3264 x 2448) by a factor of 4 in both the horizontal and vertical directions, to pixel dimensions of 816 x 612, which has proven to provide a good balance between the wound resolution and the processing efficiency. In practice, we use the standard API for image resize on the Android smartphone platform to ensure high efficiency. Second, we smooth the images to remove noise (assumed mainly to be Gaussian noise produced by the image acquisition process [19]) by using the Gaussian blur method, whose standard deviation σ = 0.5 was empirically judged to be optimal based on multiple experiments.
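      A sketch of this preprocessing step is shown below. The 4x down-sampling uses the standard Bitmap API; for the blur we show a plain 3x3 convolution whose weights are the sampled 2D Gaussian with σ = 0.5. The kernel construction and border handling are our own simplifications, not taken from the paper.

```java
import android.graphics.Bitmap;

// Sketch of the preprocessing step: 4x down-sampling followed by a light
// Gaussian blur (sigma = 0.5, approximated by a fixed 3x3 kernel).
public final class Preprocess {

    static Bitmap downsample(Bitmap src) {
        // 3264x2448 -> 816x612: factor 4 in each direction, with filtering.
        return Bitmap.createScaledBitmap(src, src.getWidth() / 4, src.getHeight() / 4, true);
    }

    // Normalized 3x3 Gaussian kernel for sigma = 0.5 (weights from the
    // sampled 2D Gaussian, accurate to ~1e-3).
    private static final float[] K = {
        0.0113f, 0.0838f, 0.0113f,
        0.0838f, 0.6193f, 0.0838f,
        0.0113f, 0.0838f, 0.0113f
    };

    static Bitmap gaussianBlur(Bitmap src) {
        int w = src.getWidth(), h = src.getHeight();
        int[] in = new int[w * h], out = new int[w * h];
        src.getPixels(in, 0, w, 0, 0, w, h);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                float r = 0, g = 0, b = 0;
                // Convolve each channel; edges are clamped (replicated).
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int yy = Math.min(h - 1, Math.max(0, y + dy));
                        int xx = Math.min(w - 1, Math.max(0, x + dx));
                        int p = in[yy * w + xx];
                        float k = K[(dy + 1) * 3 + (dx + 1)];
                        r += k * ((p >> 16) & 0xFF);
                        g += k * ((p >> 8) & 0xFF);
                        b += k * (p & 0xFF);
                    }
                }
                out[y * w + x] = 0xFF000000 | ((int) r << 16) | ((int) g << 8) | (int) b;
            }
        }
        Bitmap dst = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        dst.setPixels(out, 0, w, 0, 0, w, h);
        return dst;
    }
}
```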

      To determine the boundary of the wound area, we first determine an outline of the foot within the image. Hence, the initial image segmentation operation is to divide the original image into pixel groups with homogeneous color values. Specifically, the foot outline detection is performed by finding the largest connected component in the segmented image, under the condition that the color of this component is similar enough to a preset standard skin color. Based on the standard color checkers provided in [20], both light and dark skin color thresholds in CIE LAB space are incorporated into the system, which means that our algorithm is expected to work for most skin colors. Afterwards, we carry out the Wound boundary determination based on the foot outline detection result. If the foot detection result is regarded as a binary image with the foot area marked as white and the rest marked as black, it is easy to locate the wound boundary within the foot region boundary by detecting the largest connected black component within the white part. If the wound is located at the foot region boundary, then the foot boundary is not closed, and the problem becomes more complicated, i.e., we might need to first form a closed boundary.

      When the wound boundary has been successfully determined and the wound area calculated, we next evaluate the healing state of the wound by performing Color segmentation, with the goal of categorizing each pixel within the wound boundary into certain classes labeled as granulation, slough and necrosis [21] [24]. The classical self-organized clustering method called K-means, which has high computational efficiency, is used [22]. After the color segmentation, a feature vector including the wound area size and dimensions for the different types of wound tissue is formed to describe the wound quantitatively. This feature vector, along with both the original and analyzed images, is saved in the result database.

      The Wound healing trend analysis is performed on a time sequence of images belonging to a given patient to monitor the wound healing status. The current trend is obtained by comparing the wound feature vectors between the current wound record and the one that is just one standard time interval earlier (typically one or two weeks). Alternatively, a longer term healing trend is obtained by comparing the feature vectors between the current wound and the base record which is the earliest record for this patient.
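      A simple way to realize this comparison is to difference the stored feature vectors of two records. The sketch below uses hypothetical field names for the features described above (total area plus per-tissue-type areas); the actual feature vector layout is not specified in this level of detail in the text.

```java
// Illustrative sketch of the trend analysis: compare the current wound
// feature vector against an earlier record (the previous visit or the
// earliest "base" record). Field names are hypothetical.
final class WoundRecord {
    double totalArea;     // total wound area
    double redArea;       // granulation tissue area
    double yellowArea;    // slough tissue area
    double blackArea;     // necrotic tissue area
}

final class TrendAnalysis {
    // Positive values indicate shrinkage (improvement) relative to the
    // reference record, expressed as a fraction of the reference area.
    static double healingRate(WoundRecord reference, WoundRecord current) {
        return (reference.totalArea - current.totalArea) / reference.totalArea;
    }
}
```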

    2. Mean Shift Based Segmentation Algorithm

      We chose the mean shift algorithm, proposed in [16], over other segmentation methods, such as level set and graph cut based algorithms, for several reasons. First, the mean shift algorithm takes into consideration the spatial continuity inside the image by expanding the original 3D color range space to a 5D space including two spatial components, since direct classification on the pixels proved to be inefficient [16]. Secondly, a number of acceleration algorithms are available [17] [23]. Thirdly, for both mean shift filtering and region merge methods, the quality of the segmentation is easily controlled by the spatial and color range resolution parameters [17]. Hence, the segmentation algorithm can be adjusted to different degrees of skin color smoothness by changing the resolution parameters. Finally, the mean shift filtering algorithm is suitable for parallel implementation since the basic processing unit is the pixel. In this case, the high computational efficiency of GPUs can be exploited.

      Fig. 1. Wound image analysis software system

      The mean shift algorithm belongs to the density estimation based non-parametric clustering methods, in which the feature space can be considered as the empirical probability density function of the represented parameter. This type of algorithm adequately analyzes the image feature space (color space, spatial space or the combination of the two spaces) to cluster, and can provide a reliable solution for many vision tasks [16]. In general, the mean shift algorithm models the feature vectors associated with each pixel (e.g., color and position in the image grid) as samples from an unknown probability density function f(x) and then finds clusters in this distribution. The center of each cluster is called the mode [25]. Given n data points x_i, i = 1, ..., n in the d-dimensional space R^d, the multivariate kernel density estimator is given by [16]

      $$f_{h,K}(x) = \frac{c_{k,d}}{n h^{d}} \sum_{i=1}^{n} k\left( \left\| \frac{x - x_i}{h} \right\|^{2} \right) \qquad (1)$$

      where h is a bandwidth parameter satisfying h > 0 and c_{k,d} is a normalization constant [16]. The function k(x) is the profile of the kernel, defined only for x >= 0, and ||·|| represents the vector norm. In applying the mean shift algorithm we use a variant of what is known in the optimization literature as multiple restart gradient descent. Starting at some guess at a local maximum y_k, which can be a random input data point x_i, the mean shift computes the density estimate f(x) at y_k and takes an uphill step using the gradient descent method. The gradient of f(x) is given by

      $$\nabla f_{h,K}(x) = \frac{2 c_{k,d}}{n h^{d+2}} \left[ \sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^{2} \right) \right] m(x) \qquad (2)$$

      $$m(x) = \frac{\sum_{i=1}^{n} x_i \, g\left( \left\| \frac{x - x_i}{h} \right\|^{2} \right)}{\sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^{2} \right)} - x \qquad (3)$$

      where g(r) = -k'(r) and n is the number of neighbors taken into account in the 5-dimensional sample domain. In our case, we use the Epanechnikov kernel [26], which makes the derivative of this kernel a unit sphere. Based on [16], we use the combined kernel function shown in eq. (5), where h_s and h_r are different bandwidth values for the spatial domain and the range domain, respectively. In [16], the two bandwidth values are referred to as spatial and range resolutions. The vector m(x) defined in eq. (3) is called the mean shift vector [16], since it is the difference between the current value x and the weighted mean of the neighbors x_i around x. In the mean shift procedure, the current estimate of the mode y_k at iteration k is replaced by its locally weighted mean, as shown in eq. (4) [16]:

      $$y_{k+1} = y_k + m(y_k) \qquad (4)$$

      $$K_{h_s,h_r}(x) = \frac{C}{h_s^{2} h_r^{3}} \, k\left( \left\| \frac{x^{s}}{h_s} \right\|^{2} \right) k\left( \left\| \frac{x^{r}}{h_r} \right\|^{2} \right) \qquad (5)$$

      This iterative update of the local maxima estimation is continued until the convergence condition is met. In our case, the convergence condition is specified as the Euclidean length of the mean shift vector being smaller than a preset threshold. The threshold value for the mean shift iteration is the same for the task of locating the foot in the full image and for locating the wound within the foot boundary.

      After the filtering (also referred to as the mode seeking) procedure above, the image is usually over-segmented, which means that there are more regions in the segmentation result than necessary for wound boundary determination [27]. To solve this problem, we have to merge the over-segmented image into a smaller number of regions which are more object-representative, based on some rules. In the fusion step, extensive use was made of region adjacency graphs (RAG) [17] [28]. The initial RAG was built from the initial over-segmented image, the modes being the vertices of the graph and the edges being defined based on 4-connectivity on the lattice. The fusion was performed as a transitive closure operation [29] on the graph, under the condition that the color difference between two adjacent nodes should not exceed h_f, which is regarded as the region fusion resolution. The mean shift filtering and region fusion results of a sample foot wound image (part (a) in Figure 2) are shown in parts (b) and (c) of Figure 2, respectively. We can see that the over-segmentation problem in part (b) is effectively solved by the region fusion procedure. From the region fusion result in part (c), the foot boundary is readily determined by a largest connected component detection algorithm, which will be introduced in the next section. A C++ based implementation of the mean shift algorithm can be found in [17].

      Fig. 2. Mean shift based image segmentation sample result. (a) Original image. (b) Mean shift filtered image, showing the over-segmented foot area. (c) Region fused foot area. Note that we artificially increased the brightness and contrast of the images in this figure to highlight the over-segmentation in part (b) and to better observe the region fusion result in part (c).

    3. Wound Boundary Determination and Analysis Algorithms

    Because the mean shift algorithm only manages to segment the original image into homogeneous regions with similar color features, an object recognition method is needed to interpret the segmentation result into a meaningful wound boundary determination that can be easily understood by the users of the wound analysis system. As noted in [30], a standard recognition method relies on known model information to develop a hypothesis, based on which a decision is made whether a region should be regarded as a candidate object, i.e., a wound. A verification step is also needed for further confirmation. Because our wound determination algorithm is designed for real time implementation on smartphones with limited computational resources, we simplify the object recognition process while ensuring that recognition accuracy is acceptable.

    Our wound boundary determination method is based on three assumptions. First, the foot image contains little irrelevant background information. In reality, this is not a critical problem, as we assume that the patients and/or caregivers will observe the foot image with the wound on the smartphone screen before the image is captured, to ensure that the wound is clearly visible. Second, we assume that the healthy skin on the sole of the foot has a nearly uniform color. Finally, we assume that the foot ulcer is not located at the edge of the foot outline. These are reasonable assumptions for our initial system development and appear consistent with observations made initially from a small sampling of foot images. In the future, we plan to explore ways to relax these assumptions.

    Based on these assumptions, the proposed wound boundary determination method is illustrated in Figure 3 and explained below. The Largest connected component detection is first performed on the segmented image, using the fast two-pass largest connected component detection method introduced in [31]. In foot Color thresholding, the color feature extracted in the mean shift segmentation algorithm for this component is compared with an empirical skin color feature, by calculating the Euclidean distance between the color vector of the current component and the standard skin color vector from the Macbeth color checker [20]. If this distance is smaller than a pre-specified and empirically determined threshold value, we claim that the foot area has been located. Otherwise, we iteratively repeat the largest component detection algorithm on the remaining part of the image, excluding the previously detected components, until the color threshold condition is satisfied.
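    The sketch below illustrates this iterative foot-locating loop. We substitute a BFS flood fill for the two-pass labeling of [31] for brevity; the color handling and threshold are simplified placeholders of our own.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Sketch of the foot-locating loop: repeatedly take the largest connected
// component and accept it only if its mean color is close enough to a
// standard skin color in CIE Lab space.
final class FootLocator {

    // label: per-pixel region id from mean shift + region fusion.
    // Returns the pixel indices of the largest 4-connected region not yet
    // excluded by a previous (rejected) iteration.
    static List<Integer> largestComponent(int[] label, int w, int h, boolean[] excluded) {
        boolean[] seen = new boolean[w * h];
        List<Integer> best = new ArrayList<>();
        for (int s = 0; s < w * h; s++) {
            if (seen[s] || excluded[s]) continue;
            List<Integer> comp = new ArrayList<>();
            ArrayDeque<Integer> q = new ArrayDeque<>();
            q.add(s); seen[s] = true;
            while (!q.isEmpty()) {
                int p = q.poll(); comp.add(p);
                int x = p % w, y = p / w;
                int[] nbr = {p - 1, p + 1, p - w, p + w};
                boolean[] ok = {x > 0, x < w - 1, y > 0, y < h - 1};
                for (int i = 0; i < 4; i++) {
                    int n = nbr[i];
                    if (ok[i] && !seen[n] && !excluded[n] && label[n] == label[p]) {
                        seen[n] = true; q.add(n);
                    }
                }
            }
            if (comp.size() > best.size()) best = comp;
        }
        return best;
    }

    // Euclidean distance in Lab between the component's mean color and a
    // reference skin color from the Macbeth color checker [20].
    static boolean isSkin(float[] meanLab, float[] skinLab, float threshold) {
        float dL = meanLab[0] - skinLab[0];
        float da = meanLab[1] - skinLab[1];
        float db = meanLab[2] - skinLab[2];
        return Math.sqrt(dL * dL + da * da + db * db) < threshold;
    }
}
```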

    After the foot area is located, we generate a binary image with pixels that are part of the foot labeled 1 (white) and the rest of the image labeled 0 (black). The result of the foot area determination, executed on the region fusion image shown in part (c) of Figure 2, is presented in part (a) of Figure 5. To determine the actual wound boundary, the system locates the black part labeled as 0 within the white foot area (Hollow region detection in the foot area). Here we use the simple line-scanning based algorithm illustrated in Figure 4 and explained below.

      In this wound boundary determination algorithm, each row in the binary image matrix is regarded as the basic scanning unit. In each row, the part labeled as 0 in the detected foot region is regarded as the wound part. After every row is scanned, the wound boundary is determined accordingly. Because some small outlier regions may also be generated due to the local color variation of the skin, a Small region filtering procedure is needed to identify only the largest black region as the wound. A sample of the wound boundary determination result is shown in part (b) in Figure 5.
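      A minimal version of this row-scanning step, under the stated binary mask convention (1 = foot, 0 = background or wound), might look as follows; the class and method names are ours.

```java
// Sketch of the row-scanning wound detection described above.
final class WoundScan {
    // Marks as wound candidates the 0-pixels that lie strictly between
    // the first and last foot pixels of each row.
    static boolean[][] hollowRegions(int[][] mask) {
        int h = mask.length, w = mask[0].length;
        boolean[][] wound = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            int first = -1, last = -1;
            for (int x = 0; x < w; x++) {
                if (mask[y][x] == 1) { if (first < 0) first = x; last = x; }
            }
            if (first < 0) continue;               // no foot pixels in this row
            for (int x = first + 1; x < last; x++) {
                if (mask[y][x] == 0) wound[y][x] = true;
            }
        }
        return wound;                              // small region filtering follows
    }
}
```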

      After the best estimate of the wound boundary is obtained, we analyze the wound area within the boundary using a wound description model. Many methods for assessing and classifying open wounds require advanced clinical expertise and experience, and specialized criteria have been developed for diabetic foot ulcers [21] [32].


      Fig. 4. Wound part detection algorithm flowchart

      The RYB (red-yellow-black) wound classification model, proposed in 1988 by Arnqvist, Hellgren and Vincent, is a consistent, simple assessment model to evaluate wounds [21]. It classifies wound tissues within a wound as red, yellow, black or mixed tissues, which represent the different phases on the continuum of the wound healing process. Specifically, red tissues are viewed as the inflammatory (reaction) phase, proliferation (regeneration), or maturation (remodeling) phase; yellow tissues imply infection or tissue containing slough that are not ready to heal; and black tissues indicate necrotic tissue state, which is not ready to heal either [21] [32]. Based on the RYB wound evaluation model, our wound analysis task is to classify all the pixels within the wound boundary into the RYB color categories and cluster them. Therefore, classical clustering methods can be applied to solve this task.

      Fig. 3. Largest connected component detection based wound boundary determination method flowchart

      For our wound image analysis, a fast clustering algorithm called K-means is applied [22]. K-means is a simple unsupervised learning algorithm that solves the well-known clustering problem. A sample of the color based wound analysis result is shown in part (c) of Figure 5. The results presented in Section V demonstrate the effectiveness of the K-means algorithm for our task.
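      As an illustration, the RYB clustering can be realized by a standard K-means loop over the RGB values of the pixels inside the wound boundary, with K = 3 and centroids seeded at nominal red, yellow and black. The seed colors and iteration count below are our assumptions, not values from the paper.

```java
// K-means sketch for the RYB model: pixels[i] = {r, g, b}, each in [0,1].
final class RybKMeans {
    static int[] classify(float[][] pixels) {
        float[][] c = {{0.8f, 0.1f, 0.1f},             // red (granulation)
                       {0.8f, 0.8f, 0.2f},             // yellow (slough)
                       {0.1f, 0.1f, 0.1f}};            // black (necrosis)
        int[] assign = new int[pixels.length];
        for (int iter = 0; iter < 20; iter++) {
            // Assignment step: nearest centroid in RGB (squared distance).
            for (int i = 0; i < pixels.length; i++) {
                int best = 0; float bestD = Float.MAX_VALUE;
                for (int k = 0; k < 3; k++) {
                    float d = 0;
                    for (int ch = 0; ch < 3; ch++) {
                        float t = pixels[i][ch] - c[k][ch];
                        d += t * t;
                    }
                    if (d < bestD) { bestD = d; best = k; }
                }
                assign[i] = best;
            }
            // Update step: recompute centroids as cluster means.
            float[][] sum = new float[3][3];
            int[] n = new int[3];
            for (int i = 0; i < pixels.length; i++) {
                n[assign[i]]++;
                for (int ch = 0; ch < 3; ch++) sum[assign[i]][ch] += pixels[i][ch];
            }
            for (int k = 0; k < 3; k++)
                if (n[k] > 0)
                    for (int ch = 0; ch < 3; ch++) c[k][ch] = sum[k][ch] / n[k];
        }
        return assign;                                  // 0 = red, 1 = yellow, 2 = black
    }
}
```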

  3. GPU BASED OPTIMIZATION

Because the CPUs on smartphones are not nearly as powerful as those on PCs or laptops, an optimized parallel implementation based on GPUs is critical for the most computationally demanding module in the algorithm structure. For current Android based smartphones, such as the Nexus 4 from Google, the GPUs (Adreno 320) have high computational capabilities (up to 51.2 GFLOPS, billion floating point operations per second) [33]. As the experimental results in Section V show, the hybrid implementation on both CPUs and GPUs can significantly improve the time efficiency of algorithms which are suitable for parallel implementation.

Since our wound analysis is implemented on Android smartphones, we take advantage of the Android APIs for GPU implementations. In our case, we use Renderscript, which offers a high performance computation API at the native level, written in C (C99 standard) [34], and gives smartphone apps the ability to run operations with automatic parallelization across all available processor cores. It also supports different types of processors such as the CPU, GPU or DSP. In addition, a program may access all of these features without having to write code to support different architectures or a different number of processing cores [35].


Fig. 5. Wound boundary determination and analysis result. (a) The foot boundary detection result. (b) Wound boundary determination result. (c) Color segmentation result within the wound boundary.

On the Nexus 4 Android smartphone, we implemented the mean shift based segmentation algorithm on both the Adreno 320 GPU and the quad-core Krait CPU using Renderscript. The algorithm implementation flow is shown in Figure 6 and explained below. Our implementation scheme is similar to the ones used in [35] [36].

Fig. 6. Implementation flow of the mean shift algorithm on both CPUs and GPUs

The processing steps Color space transformation, Color histogram generation and discretization, and Weight-map generation (these steps belong to the mean shift filtering module introduced in Section II) are all implemented on the CPU. Afterwards, all the needed data are moved to the global memory on the GPU. This data includes the original image data in CIE Lab color space, the discretized color histograms for all three channels in this color space and the weight-map, which incorporates edge information into the image segmentation to further improve the accuracy [17]. Because the mean shift based segmentation algorithm operates on each pixel of an image, and the computation that takes place at each pixel is independent of its distant surroundings, it is a good candidate for implementation on a parallel architecture. Hence, we developed a parallel implementation of Mean shift mode seeking, which simply copies the image to the device and breaks the computation of the mode seeking into single pixels and their surrounding intermediate neighboring regions. An independent thread is spawned for the mean shift mode seeking of each pixel. Multiple threads run at the same time on the GPU to realize the parallel computation. The number of threads running in parallel is determined by the computational capability of the GPU; in Renderscript programming, this number is optimized automatically and does not need to be specified. After the mean shift mode seeking, all the resulting modes for each pixel are moved back to the local memory of the CPU. The Region fusion step, discussed in detail in [17], is performed on the CPU.
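To make the per-pixel decomposition concrete, the sketch below shows a CPU analogue of the mode-seeking kernel using a Java parallel stream; on the device this role is played by Renderscript threads on the GPU. It is deliberately unoptimized (each pixel scans all others instead of a bounded neighborhood), and it exploits the fact that with the Epanechnikov profile the weights g(·) are constant inside the window, so the shift of eqs. (3)-(4) reduces to a plain average over the in-window neighbors. All names here are ours.

```java
import java.util.stream.IntStream;

// Simplified CPU analogue of the parallel mean shift mode seeking.
// feat[i] = {x, y, L, a, b}: spatial + CIE Lab range components per pixel.
final class ParallelModeSeek {

    static float[][] run(float[][] feat, float hs, float hr, float eps) {
        float[][] modes = new float[feat.length][];
        // Each pixel's mode seeking is independent, so the loop
        // parallelizes directly across worker threads.
        IntStream.range(0, feat.length).parallel()
                 .forEach(i -> modes[i] = seek(feat, feat[i].clone(), hs, hr, eps));
        return modes;
    }

    private static float[] seek(float[][] feat, float[] y, float hs, float hr, float eps) {
        while (true) {
            float[] mean = new float[5];
            int n = 0;
            // Epanechnikov kernel => uniform weights within the (hs, hr) windows.
            for (float[] xi : feat) {
                if (dist2(y, xi, 0, 2) <= hs * hs && dist2(y, xi, 2, 5) <= hr * hr) {
                    for (int c = 0; c < 5; c++) mean[c] += xi[c];
                    n++;
                }
            }
            if (n == 0) return y;                   // defensive; y starts at a data point
            float shift2 = 0;
            for (int c = 0; c < 5; c++) {
                mean[c] /= n;
                float d = mean[c] - y[c];
                shift2 += d * d;
                y[c] = mean[c];                     // y_{k+1} = y_k + m(y_k)
            }
            if (shift2 <= eps * eps) return y;      // converged: mean shift vector small
        }
    }

    private static float dist2(float[] a, float[] b, int from, int to) {
        float s = 0;
        for (int c = from; c < to; c++) { float d = a[c] - b[c]; s += d * d; }
        return s;
    }
}
```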


  4. IMAGE CAPTURE BOX

    Two devices for the image capture of diabetic foot ulcers have been reported in the literature [37] [38]. However, the drawbacks of these designs are either large dimensions or high cost. Moreover, both devices require Wi-Fi connectivity and a laptop or PC for image processing.

    To ensure consistent image capture conditions and to facilitate a convenient image capture process for patients with type 2 diabetes, we designed an image capture device in the shape of a box; hence we term this device the Image Capture Box. The image capture box was designed as a compact, rugged and inexpensive device that: (i) allows patients to both view the sole of their foot on the screen of the smartphone and to capture an image, since the majority of patients' wounds occur on the soles of their feet, (ii) allows patients to rest their feet comfortably, without requiring angling of the foot or the smartphone camera, as patients may be overweight and have reduced mobility, and (iii) accommodates image viewing and capture of the left foot sole as well as the right foot sole. To achieve these objectives, we make use of two front surface mirrors, placed at an angle of 90° with respect to each other, with the common line of contact tilted 45° with respect to horizontal. A mechanical drawing of the basic optical principle for foot imaging is shown in Figure 7. The optical path is represented by blue straight lines, with arrows indicating the direction.

    Fig. 7. Basic optical principle for foot imaging

    A SolidWorks 3D rendering of the image capture box is shown in Figure 8. As seen in this figure, the entire box has a rectangular trapezoid shape. Rectangular openings for placing the foot and smartphone are cut into the slanted surface, shown in Figure 8 (b), which is at 45° with respect to horizontal. In this case, the patients can rest their foot comfortably and view their wounds through the smartphone camera. When using the box, the patients need to ensure that the wound is completely located within the opening by simply observing the image displayed on the smartphone.

      Fig. 8. 3D drawing of the mechanical structure of the image capture box. (a) From the back. (b) From the front. (c) Internal structure from the front.

        To provide consistent light for wound image viewing and capture, warm white LED (light emitting diode) lighting, resembling daylight, is incorporated at the back side of the box. Based on our evaluation of different positions of the LED light, we found this to be the optimal location for foot imaging. The walls of the image capture box are constructed from ¼-inch white acrylic, where white was chosen to obtain better light reflectivity. The actual product is shown in part (a) of Figure 10, and a sample image captured by the image capture box is shown in part (b) of Figure 10.

        To avoid the ghost image effect associated with normal back surface mirrors (reflective surface on the back side of the glass), front surface mirrors (reflective surface on the front side) are needed, as illustrated in part (a) of Figure 8. The optical paths for both the front surface mirror and the normal mirror are shown in Figure 9 (a). As shown in parts (b) and (c) of Figure 9, the ghost image effect is eliminated by using front surface mirrors.


      Fig. 9. Ghost image caused by a normal back surface mirror. (a) Ghost image optical path; (b) ghost image when using normal mirrors in the box; (c) improvement obtained by using front surface mirrors.

  5. EXPERIMENTAL RESULTS

      1. Experimental Set-up

        The goals of the experimental work have been: (i) to assess the accuracy of the wound boundary determination based on the mean shift algorithm and the color segmentation based on the K-means algorithm, and (ii) to perform an efficiency analysis by comparing the mean shift algorithm to two other algorithms.

          Fig. 10. Image capture box illustration. (a) Actual product of the image capture box. (b) Wound image captured using the warm LED light.

          To test accuracy, we applied the mean shift based algorithm to two categories of wound images. For the first category, we used 30 images of simulated wounds, typically referred to as moulage wounds. The moulage wounds permitted us to evaluate our method under relatively consistent skin conditions and on wounds with distinct boundaries. Moulage is the art of applying mock injuries for the purpose of training emergency response teams and other medical and military personnel. In our case, we used moulage wounds that include typical granulation, slough and necrotic tissues; the wounds were provided by Image Perspectives Corporation (Carson City, Nevada) and applied to the first author's foot.

          Four selected sample images of the moulage wounds (not all 30 images are presented in this paper) are shown in Figure 11; they were captured with the smartphone camera placed on the image capture box.

          For the second category, we evaluated our wound image analysis method on 34 images of actual patient wounds collected at the UMass-Memorial Health Center Wound Clinic (Worcester, MA), following an IRB approved protocol in accordance with Federal Regulations. The goal of selecting these typical wound images from type 2 diabetic patients is to provide a more realistic test of our wound boundary determination and color segmentation algorithms. Six sample images, selected out of a total of 34, are shown in Figure 12. Compared with the images in Figure 11, the real wound images are more complex: they may have uneven illumination over the image plane, complex surrounding skin texture and wounds appearing in a variety of shapes, sometimes lacking distinct boundaries. Note that only two of the wound images in Figure 12 were captured by our smartphone and image capture box; the other four images are directly from the wound image database of the UMASS Wound Clinic.

          To test algorithm efficiency, the mean shift based wound analysis algorithm discussed in Section II was implemented on the CPU (quad core, 1500 MHz, Krait, 2048 MB system RAM) and the GPU (Adreno 320, Qualcomm) of the Nexus 4 Android smartphone. All the programming was done in Java in the Eclipse IDE (Integrated Development Environment).

          2. Experimental Results on the Images of Moulage Wound Simulation

          The wound boundary determination results for the images in Figure 11 are shown in Figure 13. As seen from parts (a)-(d), the mean shift segmentation algorithm described in Section II provides promising boundary detection results. However, there is still visibly imperfect detection, as shown in part (c) of Figure 13, where the yellow wound tissue at the boundary has a color similar to the healthy skin surrounding it.

          As mentioned in Section II, after the wound boundary was determined, the K-means color segmentation method was performed within the wound boundary. The color segmentation results are shown in parts (e)-(h). We assessed the color segmentation performance by comparing the original images in Figure 11 with the results in Figure 13. By careful observation, we found that most of the wound tissues were accurately classified. However, there were still some misclassifications in part (e), where some visually light red tissues are classified as yellow tissue.

    Fig. 11. Wound images of the moulage simulation applied on the author's feet

    Fig. 12. Clinical image samples of actual patients

    3. Experimental Results on Clinical Images from Actual Patients

      The original images from actual patients, shown in Figure 12, are more complex than the images of the moulage simulations and are taken from 6 different patients. The images in parts (a) and (b) are appropriately illuminated, with a uniform skin color and well-defined wound boundaries, but the remaining four images are either not illuminated uniformly or have an uneven skin texture. The wounds in images (a), (c) and (d) are located completely within the limb boundary. In contrast, the wounds in images (b), (e) and (f) are located almost at the boundary.

      To adapt to these different conditions, we had to adjust the parameters of the algorithm for each wound image. There are three adjustable parameters in the mean shift based wound boundary determination algorithm: the spatial resolution h_s, the range resolution h_r and the fusion resolution h_f.

      First, we tried the parameter settings h_s = 7, h_r = 6 and h_f = 3 (the default column in Table 1), because previous work [17] showed good segmentation results with these values. Our experiments with these default settings on the real wound images in Figure 12 did not provide satisfactory wound boundary determination results for all six images, as shown in Figure 14. Note that in the multiple-wound situation shown in part (b) of Figure 12 we only try to delineate the boundary of the largest wound in the middle.

      As mentioned in [16], only features with large spatial support are represented in the mean shift filtered image when h_s is increased, and only features with high color contrast survive when h_r is large. In [17], it is also stated that a larger number of regions will remain in the region fused image when a smaller h_f is employed. In Section II-C, we discussed how the wound boundary determination strongly depends on whether a complete foot boundary can be detected. Hence, we need a larger spatial resolution as well as a larger region fusion resolution value to group the small regions in the foot area into one connected component when the skin texture is complex (as shown in part (c) of Figure 12). On the other hand, if the wound is located near the foot boundary (as shown in parts (b), (e) and (f)), larger spatial and fusion resolutions are also needed to avoid a disconnected foot boundary detection, which would cause the wound boundary determination to fail.

      Second, we tried different parameters in a specified domain (4 ≤ h_s ≤ 10, 4 ≤ h_r ≤ 7), using 0.5 as the adjustment step, to customize the parameters for each wound image in Figure 12, as shown in Table 1. The corresponding wound boundary determination results are shown in Figure 15. Much more accurate results are obtained for parts (b), (c), (d) and (f) by parameter adjustment.
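      Conceptually, this customization is a small grid search over the stated ranges, as sketched below. The score() callback is a hypothetical stand-in for the quality criterion; the paper selects settings by comparing results against clinician-labeled ground truth (see the MCC evaluation below), and h_f can be swept analogously.

```java
// Hypothetical grid search over the mean shift bandwidths (step 0.5).
final class ParameterSweep {
    interface Scorer { double score(double hs, double hr); }

    static double[] best(Scorer scorer) {
        double bestScore = Double.NEGATIVE_INFINITY;
        double[] best = {7.0, 6.0};                  // default setting from [17]
        for (double hs = 4.0; hs <= 10.0; hs += 0.5)
            for (double hr = 4.0; hr <= 7.0; hr += 0.5) {
                double s = scorer.score(hs, hr);
                if (s > bestScore) { bestScore = s; best = new double[]{hs, hr}; }
            }
        return best;
    }
}
```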

      For a more objective evaluation, we asked three experienced wound clinicians from UMass Medical School to independently label the wound area for all 34 real wound images. Then we applied a majority vote [6] [7] for each pixel. Finally, we used the MCC (Matthews Correlation Coefficient) [39] to measure the wound boundary determination accuracy, using the combined labeled wound images as the ground truth. The MCC returns a value between -1 (total disagreement) and +1 (perfect prediction). With a fixed optimal parameter setting for all 34 wound images, the MCC score was 0.403. In contrast, with customized parameter settings the MCC score improved to 0.736.
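      For reference, the MCC is computed from the per-pixel confusion counts between the algorithm's wound mask and the majority-vote ground truth as MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)); the small helper below implements this standard formula.

```java
// Matthews Correlation Coefficient from per-pixel confusion counts.
final class Mcc {
    static double mcc(long tp, long tn, long fp, long fn) {
        double denom = Math.sqrt((double) (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn));
        if (denom == 0) return 0;                    // convention for degenerate cases
        return (tp * tn - (double) fp * fn) / denom;
    }
}
```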

        Color segmentation results provided by the K-means algorithm, based on the wound boundary determination results from Figure 15, are shown in Figure 16. The results are promising, despite a small number of misclassified pixels in parts (c) and (f). In part (c), a dark red area is recognized as black tissue. In part (f), a dark yellow area is classified as red tissue. In conclusion, the wound analysis task is much more complicated for the clinical images of real patients, due to the complicated skin color and texture of patients' feet, varied wound locations and uneven illumination over the image plane.

      Fig. 13. Experimental results for the images of the moulage wound simulation. (a)-(d): Wound boundary determination results with the mean shift based algorithm. (e)-(h): Color segmentation results using the K-means clustering method based on the mean shift boundary determination results.

      Table 1. Parameter adjustment for wound boundary determination of different images

                                Default  Image a  Image b  Image c  Image d  Image e  Image f
      spatial resolution h_s       7        7        9        9        9        9        7
      range resolution h_r         6       6.5      6.5       8       6.5      6.5      6.5
      fusion resolution h_f        3        7        7       10       10       10       12

      Fig. 14. Wound boundary determination results for clinical images of real patients with pre-set bandwidths

      Fig. 15. Wound boundary determination results for clinical images of real patients with customized parameter settings

      Fig. 16. Wound assessment results for clinical images of real patients

    4. Computational Efficiency Analysis

    In this section, we present data for an efficiency analysis of (i) the mean shift method as compared to the level set [15] and graph cut based algorithms [40], which are typical image segmentation algorithms, and (ii) a comparison of two different implementations of the mean shift based method.

      First, the three algorithms were implemented without GPU optimization on a quad-core PC with 4 GB of RAM and applied to all 64 wound images (30 moulage images and 34 real wound images). For similar segmentation results, the average computing times of the level set and graph cut based algorithms were approximately 15 and 10 seconds, respectively. In contrast, the mean shift based method required only about 4 seconds on average. Thus, the mean shift based wound boundary determination method is far more efficient, being nearly four times faster than the level set based algorithm and twice as fast as the graph cut based algorithm. Hence, it is reasonable to select the mean shift based algorithm, as it delivers the most balanced performance in terms of boundary detection accuracy and time efficiency.

      Second, we compared the mean shift based algorithm running on the smartphone's CPU alone with running on the smartphone's CPU+GPU. The average processing times of the 64 images for the CPU and CPU+GPU implementations on the smartphone are approximately 30 seconds and 15 seconds, respectively. We can see that the time efficiency is significantly enhanced by the GPU implementation.

      While a GPU+CPU implementation of the mean shift algorithm on a laptop provides only minimal improvements in computation time over a CPU implementation, the GPU+CPU implementation on the smartphone does improve the time efficiency by about a factor of two. This is partly because the CPU on the smartphone is not as powerful as the one on the laptop. Moreover, the Renderscript implementation utilizes the smartphone GPU as well as the CPU, and even available on-chip DSP devices, to provide effective optimization.

  6. CONCLUSION

We have designed and implemented a novel wound image analysis system for patients with type 2 diabetes suffering from foot ulcers. The wound images are captured by the smartphone camera placed on an image capture box. The wound analysis algorithm is implemented on a Nexus 4 Android smartphone, utilizing both the CPU and GPU.

We have applied our mean shift based wound boundary determination algorithm to 30 images of moulage wound simulations and an additional 34 images of real patients. Analysis of these experimental results shows that this method efficiently provides accurate wound boundary detection results on all wound images, given an appropriate parameter setting. Considering that the application is intended for the home environment, we can, for each individual patient, manually find an optimal parameter setting based on a single sample image taken from the patient before practical use. Experimental results show that a fixed parameter setting works consistently well for a given patient (same foot and skin condition). In the future, we may consider applying machine learning approaches to enable self-adaptive parameter setting based on different image conditions. The algorithm running time analysis reveals that the fast implementation of the wound image analysis takes only 15 seconds on average on the smartphone, for images with pixel dimensions of 816 x 612.

Accuracy is enhanced by the image capture box, which is designed so that consistent image capture conditions are achieved in terms of illumination and the distance from camera to object. While different smartphone cameras do have slightly different color space characteristics, we have not included color calibration, mainly because the most important aspect of the wound assessment system is the tracking of changes to the wound, both in size and color, over consecutive image captures.

Given the high resolution, in terms of pixel size, of all modern smartphone cameras, the performance of our wound analysis system is not expected to be affected by resolution differences across smartphone cameras. In fact, the original large resolution images are down-sampled to a fixed spatial resolution of 816 x 612 pixels.

While different image noise levels across smartphone cameras are a potential concern, we have determined, based on the experimental results, that any noise level encountered during the image capture process can be effectively removed by applying a Gaussian blurring filter before wound analysis.

The primary application of our wound analysis system is home-based self-management by patients or their caregivers, with the expectation that regular use of the system will reduce both the frequency and the number of wound clinic visits. One concern is that some elderly patients may not be comfortable with operating a smartphone, but this concern could be addressed by further simplifying the image capture process to a simple voice command.

An alternative deployment strategy is placing the system in wound clinics, where a nurse can perform the wound image capture and data analysis. With this implementation, the wound analysis can be moved from the smartphone to a server, which will allow more complex and computationally demanding wound boundary detection algorithms to be used. While this will allow easier and more objective wound tracking and may lead to better wound care, this implementation of the wound analysis system is not likely to reduce the number of visits to the wound clinic.

In either implementation, telehealth is an obvious extension to the wound analysis system whereby clinicians can remotely access the wound image and the analysis results. Hence, a database will be constructed on a possibly cloud-based server to store the wound data for patients.

The possibility of microbial contamination of the image capture box by the users or the environment has so far only been addressed by wiping the surface of the box with an anti-microbial wipe after each use. A better solution may be a disposable contamination barrier covering the slanted surface of the box except for the openings. This would prevent the patient's foot from directly touching the surface of the image capture box.

The entire system is currently being used for wound tracking in the UMass-Memorial Health Center Wound Clinic in Worcester, MA. This testing at the Wound Clinic is a first step toward usability testing of the system by patients in their homes.

In future work, we plan to apply machine learning methods to train the wound analysis system based on clinical input, and hopefully thereby achieve better boundary determination results with less restrictive assumptions. Furthermore, we plan to compute a healing score to be assigned to each wound image, to support trend analysis of a wound's healing status.

ACKNOWLEDGEMENT

The authors would like to thank all the reviewers for their constructive comments, which greatly improved the scientific quality of the manuscript.

REFERENCES

    1. K.M. Buckley, L.K. Adelson and J.G. Agazio, Reducing the risks of wound consultation: Adding digital images to verbal reports, Wound Ostomy Continence Nurs., vol. 36, no. 2, pp. 163-170, Mar. 2009.

    2. V. Falanga, The chronic wound: Impaired healing and solutions in the context of wound bed preparation, Blood Cells Mol. Dis., vol. 32, no. 1: pp. 88-94, Jan. 2004.

    3. C.T. Hess and R.S. Kirsner, Orchestrating wound healing: Assessing and preparing the wound bed, J. Adv. Skin Wound Care, vol. 16, no. 5, pp. 246-257, Sep. 2006.

    4. R.S. Rees and N. Bashshur, The effects of Tele wound management on use of service and financial outcomes, Telemed. J. E. Health, vol. 13, no. 6, pp. 663-674, Dec. 2007.

    5. National Institutes of Health, NIH's National Diabetes Information Clearinghouse, [Online], Available from: www.diabetes.niddk.nih.gov.

    6. H. Wannous, Y. Lucas, S. Treuillet, Combined machine learning with multi-view modeling for robust wound tissue assessment, Proc. of 5th International Conf. on Computer Vision Theory and Application , pp. 98-104, May. 2010.

    7. H. Wannous, Y. Lucas, S. Treuillet, B. Albouy, A complete 3D wound assessment tool for accurate tissue classification and measurement, IEEE 15th Conf. on Image Processing, pp. 2928-2931, Oct. 2008.

    8. P.Plassman, T.D. Jones, MAVIS: A non-invasive instrument to measure area and volume of wounds, Med. Eng. Phys., vol. 20, no. 5, pp. 332-338, Jul. 1998.

    9. A. Malian, A. Azizi, F.A. Van den Heuvel, M. Zolfaghari, Development of a robust photogrammetric metrology system for monitoring the healing of bedscores, Photogrammetric Record, vol. 20, no.111, pp. 241-273, Jan. 2005.

    10. H. Wannous, S. Treuillet and Y. Lucas, Robust tissue classification for reproducible wound assessment in telemedicine environment, J. Electron. Imaging, vol. 19, no. 2, Apr.2010.

    11. H. Wannous, Y. Lucas, S. Treuillet, Supervised tissue classification from color images for a complete wound assessment tool, IEEE Proc. of the 29th Annual Inter. Conf. Eng. Med. Biol. Soc., pp. 6031-6034, Aug. 2007.

    12. N. Kabelev, Machine vision enables detection of melanoma at most curable stage, Tech. Rep., MELA Sciences, Inc., Irvington, NY, May. 2013.

    13. L.T. Kohn, J.M. Corrigan and M.S. Donaldson, Crossing the quality chasm: A new health system for the 21st century health care services. Washington, DC: Committee on Quality of Health Care in America, Institute of Medicine, National Academy Press, 2001.

    14. L. Wang, P.C. Pedersen, D. Strong, B. Tulu, E. Agu, Wound image analysis system for diabetics, Proc. of SPIE Conf. on Medical Imaging, vol. 8669, Feb. 2013.

    15. C.M. Li, C.Y. Xu, C.F. Gui, Distance regularized level set evolution and its application to image segmentation, IEEE Trans. Image Processing, vol. 19, no. 12, pp. 3243-3254, Dec. 2010.

    16. D. Comaniciu, P.Meer, Mean shift: A robust approach toward feature space analysis, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603- 619, May, 2002.

    17. C.M. Christoudias, B. Georgescu, P. Meer, Synergism in low level vision, IEEE Proc. of 16th Inter. Conf. on Pattern Recognition, vol. 4, pp. 150-155, 2002.

    18. L. Wang, P.C. Pedersen, D. Strong, B. Tulu, E. Agu, Smartphone-based wound assessment system for diabetic patients, presented in 13th Diabetes Technology Meeting, Oct. 2013.

    19. L.G. Shapiro, C.G. Stockman, Computer Vision, Prentice Hall, 2001.

    20. D. Pascale, RGB coordinates of the Macbeth color checker, Tech. Rep., BabelColor Company, Montreal, Quebec, Canada, Jun. 2006.

    21. J. Arnqvist, J. Hellgren, J. Vincent, Semi-automatic classification of secondary healing ulcers in multispectral images, Proc. of IEEE 9th Conf. Pattern Recognition, pp. 459-461, Nov. 1988.

    22. J.A. Hartigan, M.A. Wong, Algorithm AS 136: A K-means clustering algorithm, J. Royal Statistical Society, Series C, vol. 28, no. 1, pp. 100-108, 1979.

    23. L. Men, M.Q. Huang, J. Gauch, Accelerating mean shift segmentation algorithm on hybrid CPU&GPU platforms, International Workshop on Modern Accelerator Technologies for GIScience, pp. 157-166, Sep. 2013.

    24. D. Krasner, Wound care: how to use the red-yellow-black system, American Journal of Nursing, vol. 95, no. 5, pp. 44-47, May 1995.

    25. Y.Z. Cheng, Mean shift, mode seeking, and clustering, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 17, no. 8, pp. 790-799, Aug. 1995.

    26. W. Zucchini, Applied smoothing techniques: part 1: Kernel density estimation, Lecture Note, University of Science and Technology of China, Hefei, China, Oct. 2003.

    27. J.C. Tilton, Y. Tarabalka, P.M. Montesano, E. Gofman, Best merge region-growing segmentation with integrated nonadjacent region object aggregation, IEEE Trans. Geoscience and Remote Sensing, vol. 50, no. 11, Nov. 2012.

    28. A.Duarte, A.Sanchez, F.Fernandez, A.S.Montemayor, Improving Image Segmentation Quality through Effective Region Merging Using a Hierarchical Social Metaheuristic, Pattern Recognition Letter, vol.27, pp. 1239-1251, Aug. 2006.

    29. R. Lidl, G. Pilz, Applied Abstract Algebra, 2nd edition, Undergraduate Texts in Mathematics, Springer, Nov. 1997.

    30. R. Jain, R. Kasturi, B.G. Schunck, Machine Vision, McGraw-Hill, Inc., 1995.

    31. H. Samet, M. Tamminen, Efficient component labeling of images of arbitrary dimension represented by linear bintrees, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 10, no. 4, pp. 579-586, Jul. 1988.

    32. J.R. Mekkes, W. Westerhof, Image processing in the study of wound healing, Clinics in Dermatology, vol. 13, no. 4, pp. 401-407, Jul. 1995.

    33. Qualcomm Inc., Adreno graphics processing units, [Online], Available from: https://developer.qualcomm.com/mobile-development/maximize-hardware/mobile-gaming-graphics-adreno/adreno-gpu

    34. Google Inc., RenderScript, [Online], Available from: http://developer.android.com/guide/topics/renderscript/compute.html#overview

    35. B. Varga, K. Karacs, High-Resolution Image Segmentation Using Fully Parallel Mean Shift, EURASIP J. Advances in Signal Processing, vol. 1, no. 111, 2011.

    36. B. Fulkerson, S. Soatto, Really quick shift: Image segmentation on a GPU, Proc. of Workshop on Computer Vision using GPUs, Sep. 2010.

    37. S.A. Bus, C.E.V.B. Hazenberg, M. Klein, J.G. Van Baal, Assessment of foot disease in the home environment of diabetic patients using a new photographic foot imaging device, J. Med. Eng. Technol., vol. 34, no. 1, pp. 43-50, Jan. 2011.

    38. P. Foltynski, P. Ladyzynski, K. Migalska-Musial, S. Sabalinska, A. Ciechanowska, J. Wojcicki, A new imaging and data transmitting device for telemonitoring of diabetic foot syndrome patients, Diabetes Technol. Ther., vol. 13, no. 8, pp. 861-867, Aug, 2011.

    39. B.W. Matthews, Comparison of the predicted and observed secondary structure of T4 phage lysozyme, Biochimica et Biophysica Acta (BBA) Protein Structure , vol. 405, no. 2, pp. 442-451, 1975.

    40. Y. Boykov, S. Corp, O. Veksler, R. Zabih, Fast Approximate Energy Minimization via Graph Cuts, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 11, pp. 1222-1239, Nov. 2001.
