Open Access
Total Downloads: 12
Authors: Kusha Bhatt, Pankaj Dalal
Paper ID: IJERTCONV2IS10065
Volume & Issue: NCETECE – 2014 (Volume 2, Issue 10)
Published (First Online): 30-07-2018
ISSN (Online): 2278-0181
Publisher Name: IJERT
License: This work is licensed under a Creative Commons Attribution 4.0 International License.
Initialize the centers of categorical data cluster using genetic approach: A Method
Kusha Bhatt
M.Tech Scholar, Department of CSE
Shrinathji Institute of Technology
Nathdwara, India
kusha.bhatt@gmail.com

Prof. Pankaj Dalal
Department of CSE
Shrinathji Institute of Technology
Nathdwara, India
pkjdalal@gmail.com
Abstract: Clustering is a challenging task in data mining. The aim of clustering is to group similar data into a number of clusters, and various clustering algorithms have been developed for this purpose. The leading partitional clustering technique for categorical data, k-modes, is one of the most computationally efficient clustering methods available. However, the performance of the k-modes algorithm, which converges to numerous local minima, depends strongly on the initial cluster centers. Most current initialization methods are designed for numerical data; because categorical data lack geometry, these methods are not applicable to categorical data. This research proposes a novel initialization method for categorical data, implemented in the k-modes algorithm using a genetic algorithm.
Keywords: K-Modes, Genetic Algorithm, Categorical Data, Clustering.

INTRODUCTION
Clustering categorical data, i.e., data in which attribute domains consist of discrete values that are not ordered, is a fundamental problem in data analysis. Despite many advances and a vast literature on clustering data objects with numerical domains, clustering categorical data, where there is neither a natural distance metric nor a geometric interpretation for clusters, remains a significant challenge. In addition, categorical clustering presents many of the same difficulties found in clustering numerical data, e.g., high dimensionality, large data sets, and the high computational complexity associated with the discrete clustering problem. Moreover, to be effective, most algorithms for clustering categorical data require a careful choice of parameter values, which makes these algorithms difficult to use for those not thoroughly familiar with the methods.
Clustering is unsupervised learning that aims at partitioning a data set into groups of similar items. The goal is to create clusters of data objects in which the within-cluster (intra-cluster) similarity is maximized and the between-cluster (inter-cluster) similarity is minimized. One of the stages in a clustering task is selecting a clustering strategy; in this stage, a particular clustering algorithm is chosen that is suitable for the data and the desired clustering type. Selecting a clustering algorithm is not an easy task and requires the consideration of several issues such as data types, data set size and dimensionality, data noise level, the type or shape of the expected clusters, and the overall expected clustering quality. Over the past few decades, many clustering algorithms have been proposed that employ a wide range of techniques such as iterative optimization, probability distribution functions, density-based concepts, information entropy, and spectral analysis.
Clustering: An Overview
Prior to discussing specific algorithms for categorical data, we provide a brief discussion of the elements that are considered when designing clustering algorithms. We provide a description of data types, proximity measures, and objective functions.
Data Types
The first stage in data clustering is data collection (Jain and Dubes, 1988)[4]. In this stage, a determination of what data to collect and their initial data type is made. Each dimension represents an attribute, a feature, or an observation. The value of an attribute can be classified as follows:

Quantitative. These attributes contain continuous numerical quantities where a natural order exists between items of the same data type and an arithmetically based distance measure can be defined. Height, weight, and length are some examples.

Qualitative. These attributes contain discrete data whose domain is finite. We refer to the items in the domain of each attribute as categories. Qualitative data are further subdivided as:

Nominal. Data items belonging to this group do not have any inherent order or proximity. We refer to these data as categorical data. Attributes such as color, shape, and city names are some examples of this data type.

Ordinal. These are ordered discrete items that do not have a distance relation. Examples are ranks or ratings.


Binary attributes are attributes that can take on only two values: 1 or 0.

Depending on the context, binary attributes can be qualitative or quantitative.
Proximity Measures
Proximity measures are metrics that define the similarity or dissimilarity between data objects for the purpose of determining how close or related the data objects are. There are various approaches to defining proximity measures; these approaches vary from one application area to another and depend on the data type. For most algorithms, the proximity measures are used to construct a proximity matrix that reflects the distance or similarity between the data objects. These matrices are used as input to a clustering algorithm that clusters the data according to a partitioning criterion or an objective function. For example, in some of the graph-based algorithms, the input to the algorithm is the graph adjacency matrix and the goal is to partition the graph by finding its minimum cut. In this section, we discuss some of the well-known proximity measures.
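As an illustration, the simple matching dissimilarity (the measure used later by the k-modes algorithm) counts the attributes on which two categorical objects disagree, and a proximity matrix can be built from it. The following is a minimal Python sketch; the function and variable names are illustrative, not from the paper.

```python
def matching_dissimilarity(x, y):
    """Number of attributes on which two categorical objects differ."""
    return sum(1 for a, b in zip(x, y) if a != b)

def proximity_matrix(objects):
    """Pairwise dissimilarity matrix for a list of categorical objects."""
    n = len(objects)
    return [[matching_dissimilarity(objects[i], objects[j]) for j in range(n)]
            for i in range(n)]

# Three toy objects with attributes (color, shape, size)
data = [("red", "round", "small"),
        ("red", "square", "small"),
        ("blue", "round", "large")]
print(proximity_matrix(data))  # → [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
```

The matrix is symmetric with a zero diagonal, as expected of a dissimilarity measure, and can be fed to any algorithm that clusters from a proximity matrix.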


K-MEANS AND K-MODES ALGORITHMS
The k-means algorithm (Anderberg, 1973; Ball & Hall, 1967; MacQueen, 1967; Jain & Dubes, 1988)[23][24][25][4] is a well-known partitional clustering algorithm which, owing to its efficiency, is widely used in real-world applications such as marketing research and data mining to cluster very large data sets. Huang (1997, 1998)[1][2] extended the k-means algorithm to propose the k-modes algorithm, whose extensions remove the numeric-only limitation of k-means and enable the k-means clustering process to efficiently cluster large categorical data sets from real-world databases. Since it was first published, the k-modes algorithm has become a popular technique for solving categorical data clustering problems in different application domains (Andreopoulos, An, & Wang, 2005)[3].
The k-means and k-modes algorithms use alternating minimization methods to solve nonconvex optimization problems in finding cluster solutions (Jain & Dubes, 1988)[4]. These algorithms require a set of initial cluster centers to start and often end up with different clustering results from different sets of initial cluster centers; they are therefore very sensitive to the initial cluster centers. Usually, these algorithms are run with different initial guesses of cluster centers, and the results are compared in order to determine the best clustering. One way is to select the clustering result with the smallest value of the objective function formulated in these algorithms; see, for instance, Huang, Ng, Rong, and Li (2005)[5]. In addition, cluster validation techniques can be employed to select the best clustering result; see, for instance, Jain and Dubes (1988)[4]. Other approaches have been proposed and studied that address this issue through better initial seed selection for the k-means algorithm (Arthur & Vassilvitskii, 2007; Babu & Murty, 1993; Brendan & Delbert, 2007; Bradley, Mangasarian, & Street, 1997; Bradley & Fayyad, 1998; Khan & Ahmad, 2004; Krishna & Murty, 1999; Laszlo & Mukherjee, 2006, 2007; Peña, Lozano, & Larrañaga, 1999)[6][7][8][9][10][11][12][13][14][15]. For example, some authors (Babu & Murty, 1993; Krishna & Murty, 1999; Laszlo & Mukherjee, 2006, 2007)[7][12][13][14] used genetic algorithms to obtain good initial cluster centers. Arthur and Vassilvitskii (2007)[6] proposed and studied careful seeding of initial cluster centers to improve clustering results.
However, due to the lack of intuitive geometry for categorical data, the techniques used to initialize cluster centers for numerical data are not applicable to categorical data. To date, little research has been concerned with cluster center initialization for categorical data. Because large categorical data sets exist in many applications, it has been widely recognized that directly clustering raw categorical data is important. Examples include environmental data analysis (Wrigley, 1985), market basket data analysis (Aggarwal, Magdalena, & Yu, 2002)[16], DNA or protein sequence analysis (Baxevanis & Ouellette, 2001)[17], text mining (Wang & Karypis, 2006)[18], and computer security (Barbara & Jajodia, 2002). Therefore, how to select initial cluster centers for clustering categorical data becomes an important research question.
Huang (1998)[2] suggested selecting the first k distinct objects from the data set as the initial k modes, or assigning the most frequent categories equally to the initial k modes. Though these methods are intended to make the initial modes diverse, no uniform criterion is given for selecting the k initial modes in Huang (1998)[2]. Sun, Zhu, and Chen (2002)[19] introduced an initialization method based on a refining framework. Their work applies Bradley's iterative initial-point refinement algorithm (Bradley & Fayyad, 1998)[10] to k-modes clustering, but its time cost is high and it has many parameters that must be set in advance. In the Coolcat algorithm (Barbara, Couto, & Li, 2002)[20], a max-min distances method is used to find the k most dissimilar data objects in the data set as initial seeds. However, this method considers only the distance between data objects, so outliers may be selected. Cao, Liang, and Bai (2009)[21] and Wu, Jiang, and Huang (2007)[22] each integrated distance and density to propose a cluster center initialization method; the difference between the two methods lies in the definition of the density of an object. Wu used the total distance between an object and all objects in the data set as the density of the object. Because the time complexity of calculating the densities of all objects is O(n^2), that method limits the process to a subsample of the data and uses a refining framework. But this method needs to randomly select the subsample, so a unique clustering result cannot be guaranteed. Cao et al. (2009) defined the density of an object based on the frequency of attribute values. In this paper, we prove that Cao's density is equivalent to Wu's density, which means that Cao's method is equivalent to Wu's method. Although the two methods can use density to avoid selecting outliers as cluster centers, they have some shortcomings: (1) the object with the maximum density is taken as the first cluster center; because only the density factor is considered in selecting the first cluster center, the selected object may be a boundary point between clusters, which is proved in this paper; (2) a real object in a cluster is selected as the cluster center, but in most cases the center of a cluster is not a real object but a virtual object, which means that a single real object may not sufficiently represent the cluster. In summary, there is currently no universally accepted method for obtaining initial cluster centers. Hence, it is necessary to propose a new initialization method for categorical data that overcomes the shortcomings of the existing initialization methods.
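As a sketch of the frequency-based density discussed above, the density of an object can be taken as the average relative frequency of its attribute values over the data set, so frequent-valued objects score high and outliers score low. This Python snippet is an illustrative approximation of that idea, not the paper's (or Cao et al.'s) exact implementation; all names are assumptions.

```python
from collections import Counter

def densities(objects):
    """Density of each object: mean relative frequency of its attribute
    values over the whole data set (frequency-based density)."""
    n, m = len(objects), len(objects[0])
    # Count how often each category appears in each attribute column
    freq = [Counter(obj[j] for obj in objects) for j in range(m)]
    return [sum(freq[j][obj[j]] for j in range(m)) / (n * m) for obj in objects]

data = [("a", "x"), ("a", "y"), ("a", "x"), ("b", "x")]
print(densities(data))  # → [0.75, 0.5, 0.75, 0.5]
```

Objects whose attribute values are common in the data set receive higher density, which is why density-based seeding tends to avoid outliers; computing `freq` once makes this O(nm) rather than the O(n^2) of distance-sum densities.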
The k-means algorithm has the following important properties:

It is efficient in processing large data sets.

It often terminates at a local optimum.

It works only on numeric values.

The clusters have convex shapes.
These barriers can be removed by making the following modifications to the k-means algorithm, which together yield the k-modes algorithm:

Using a simple matching dissimilarity measure for categorical objects,

Replacing means of clusters by modes, and

Using a frequency-based method to find the modes.
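The three modifications above can be sketched as a minimal k-modes loop: assign each object to its nearest mode under the simple matching dissimilarity, then recompute each mode attribute-wise as the most frequent category among its members. This is an illustrative Python sketch, not the paper's implementation; names and data are assumptions.

```python
from collections import Counter, defaultdict

def matching_dissimilarity(x, y):
    # Simple matching: count of attributes on which the objects differ
    return sum(a != b for a, b in zip(x, y))

def k_modes(objects, modes, max_iter=100):
    """Basic k-modes loop starting from the given initial modes."""
    for _ in range(max_iter):
        # Assignment step: each object joins its nearest mode
        clusters = defaultdict(list)
        for obj in objects:
            k = min(range(len(modes)),
                    key=lambda i: matching_dissimilarity(obj, modes[i]))
            clusters[k].append(obj)
        # Update step: frequency-based mode per cluster
        new_modes = []
        for i, mode in enumerate(modes):
            members = clusters.get(i)
            if not members:
                new_modes.append(mode)  # keep the mode of an empty cluster
                continue
            new_modes.append(tuple(
                Counter(obj[j] for obj in members).most_common(1)[0][0]
                for j in range(len(mode))))
        if new_modes == modes:          # converged
            break
        modes = new_modes
    return modes

data = [("red", "small"), ("red", "large"), ("blue", "small"),
        ("blue", "large"), ("red", "small")]
print(k_modes(data, [("red", "small"), ("blue", "large")]))
```

Like k-means, this loop only reaches a local minimum, which is exactly why the choice of the initial `modes` argument matters so much.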


GENETIC ALGORITHMS
In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.
The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, the more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number
of generations has been produced, or a satisfactory fitness level has been reached for the population.
A typical genetic algorithm requires:

Genetic representation of the solution domain

Fitness function to evaluate the solution domain

The basic genetic algorithm is as follows:

Start – Generate a random population of n chromosomes (suitable solutions for the problem)

Fitness – Evaluate the fitness f(x) of each chromosome x in the population

New population – Create a new population by repeating following steps until the New population is complete

Selection – Select two parent chromosomes from a population according to their fitness (the better fitness, the bigger chance to get selected).

Crossover – With a crossover probability, cross over the parents to form new offspring (children). If no crossover is performed, the offspring are exact copies of the parents.

Mutation – With a mutation probability, mutate new offspring at each locus (position in chromosome)

Accepting – Place new offspring in the new population.

Replace – Use the newly generated population for a further run of the algorithm.

Test – If the end condition is satisfied, stop, and return the best solution in current population.

Loop – Go to step 2 for fitness evaluation.
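The steps above can be sketched as a toy genetic algorithm on binary chromosomes. This Python sketch is illustrative only: it uses a OneMax bit-counting fitness as a stand-in objective and tournament selection as one way to give fitter chromosomes a bigger chance of being selected; all names and parameter values are assumptions, not the paper's.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def fitness(chrom):
    # Stand-in objective (OneMax): number of 1 bits in the chromosome
    return sum(chrom)

def select(pop):
    # Tournament selection: the better of two random chromosomes wins
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2, p_cross=0.8):
    # One-point crossover with probability p_cross, else copy a parent
    if random.random() < p_cross:
        cut = random.randint(1, len(p1) - 1)
        return p1[:cut] + p2[cut:]
    return p1[:]

def mutate(chrom, p_mut=0.05):
    # Flip each bit independently with probability p_mut
    return [bit ^ 1 if random.random() < p_mut else bit for bit in chrom]

def genetic_algorithm(n=20, length=16, generations=50):
    # Start: random population of n chromosomes
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(n)]
    for _ in range(generations):
        # Selection, crossover, mutation, and replacement in one sweep
        pop = [mutate(crossover(select(pop), select(pop))) for _ in range(n)]
    # Test: return the best solution in the final population
    return max(pop, key=fitness)

best = genetic_algorithm()
print(fitness(best))
```

For the initialization problem in this paper, the chromosome would instead encode a candidate set of k cluster centers and the fitness would score the resulting clustering; the loop itself stays the same.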


CONCLUSION

Categorical data are ubiquitous in real-world databases, and the development of the k-modes algorithm was motivated by the need to cluster them. However, the clustering algorithm needs to be rerun many times with different initializations in an attempt to find a good solution, and this works well only when the number of clusters is small and there is a good chance that at least one random initialization is close to a good solution. In this work, a new initialization method for categorical data clustering has been proposed that optimizes both the distance between objects and the density of objects, and that overcomes the shortcomings of the existing initialization methods by using a genetic algorithm. Furthermore, the time complexity of the proposed method will also have to be analysed. We plan to test the proposed method on seven real-world data sets from the UCI Machine Learning Repository, and we expect the experimental results of the proposed method to be superior to those of other initialization methods for the k-modes algorithm.
REFERENCES

Huang, Z. X. (1997). A fast clustering algorithm to cluster very large categorical data sets in data mining. In Proceedings of the SIGMOD workshop on research issues on data mining and knowledge discovery (pp. 1–8).

Huang, Z. X. (1998). Extensions to the k-means algorithm for clustering large data sets with categorical values. Data Mining and Knowledge Discovery, 2(3), 283–304.

Andreopoulos, B., An, A., & Wang, X. (2005). Clustering the internet topology at multiple layers. WSEAS Transactions on Information Science and Applications, 2, 1625–1634.

Jain, A. K., & Dubes, R. C. (1988). Algorithms for clustering data. Prentice Hall.

Huang, Z. X., Ng, M., Rong, H., & Li, Z. (2005). Automated variable weighting in k-means type clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5), 657–668.

Arthur, D., & Vassilvitskii, S. (2007). k-means++: The advantages of careful seeding. In Proceedings of the 18th annual ACM-SIAM symposium on discrete algorithms (SODA '07) (pp. 1027–1035).

Babu, G. P., & Murty, M. N. (1993). A near-optimal initial seed value selection for k-means algorithm using genetic algorithm. Pattern Recognition Letters, 14, 763–769.

Brendan, J. F., & Delbert, D. (2007). Clustering by passing messages between data points. Science, 315, 972–976.

Bradley, P. S., Mangasarian, O. L., & Street, W. N. (1997). Clustering via concave minimization. In M. C. Mozer, M. I. Jordan, & T. Petsche (Eds.), Advances in neural information processing systems (Vol. 9, pp. 368–374). MIT Press.

Bradley, P. S., & Fayyad, U. M. (1998). Refining initial points for k-means clustering. In Proceedings of the 15th international conference on machine learning.

Khan, S. S., & Ahmad, A. (2004). Cluster center initialization algorithm for k-means clustering. Pattern Recognition Letters, 25, 1293–1302.

Krishna, K., & Murty, M. N. (1999). Genetic k-means algorithm. IEEE Transactions on Systems, Man, and Cybernetics, 29(3), 433–439.

Laszlo, M., & Mukherjee, S. (2006). A genetic algorithm using hyper-quadtrees for low-dimensional k-means clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4), 533–543.

Laszlo, M., & Mukherjee, S. (2007). A genetic algorithm that exchanges neighbouring centers for k-means clustering. Pattern Recognition Letters, 28(16), 2359–2366.

Peña, J. M., Lozano, J. A., & Larrañaga, P. (1999). An empirical comparison of four initialization methods for the k-means algorithm. Pattern Recognition Letters, 20, 1027–1040.

Aggarwal, C. C., Magdalena, C., & Yu, P. S. (2002). Finding localized associations in market basket data. IEEE Transactions on Knowledge and Data Engineering, 14(1), 51–62.

Baxevanis, A., & Ouellette, F. (2001). Bioinformatics: A practical guide to the analysis of genes and proteins (2nd ed.). New York: Wiley.

Wang, J., & Karypis, G. (2006). On efficiently summarizing categorical databases. Knowledge and Information Systems, 9(1), 19–37.

Sun, Y., Zhu, Q. M., & Chen, Z. X. (2002). An iterative initial-points refinement algorithm for categorical data clustering. Pattern Recognition Letters, 23, 875–884.

UCI Machine Learning Repository (2010). <http://www.ics.uci.edu/mlearn/MLRepository.html>.

Barbara, D., Couto, J., & Li, Y. (2002). COOLCAT: An entropy-based algorithm for categorical clustering. In Proceedings of the eleventh international conference on information and knowledge management (pp. 582–589).

Cao, F. Y., Liang, J. Y., & Bai, L. (2009). A new initialization method for categorical data clustering. Expert Systems with Applications, 36(7), 10223–10228.

Wu, S., Jiang, Q. S., & Huang, Z. X. (2007). A new initialization method for categorical data clustering. Lecture Notes in Computer Science, 4426, 972–980.

Anderberg, M. R. (1973). Cluster analysis for applications. Academic Press.

Ball, G. H., & Hall, D. J. (1967). A clustering technique for summarizing multivariate data. Behavioral Science, 12, 153–155.

MacQueen, J. B. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability (Vol. 1, pp. 281–297).