DOI : 10.17577/IJERTCONV14IS020010- Open Access

- Authors : Tausif N. Shaikh, Sharadchandra N. Bangayya
- Paper ID : IJERTCONV14IS020010
- Volume & Issue : Volume 14, Issue 02, NCRTCS – 2026
- Published (First Online) : 21-04-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
A Comprehensive Survey of Rough Set Techniques in Artificial Intelligence
Tausif N. Shaikh, Department of Computer Science, Padmashri Vikhe Patil College of Arts Science and Commerce, Pravaranagar (Loni Kd), SPPU Pune (MH), India
Sharadchandra N. Bangayya, Department of Computer Science, Padmashri Vikhe Patil College of Arts Science and Commerce, Pravaranagar (Loni Kd), SPPU Pune (MH), India
ABSTRACT :
Rough Set (RS) theory is regarded as a powerful mathematical framework for minimizing input dimensionality while effectively handling ambiguity, uncertainty, and incomplete information in datasets. Over recent years, the theory has attracted considerable interest because of its broad applications within artificial intelligence and cognitive science. Rough set-based techniques have been widely utilized across multiple research areas such as machine learning, intelligent decision systems, inductive reasoning, pattern recognition, data preprocessing, knowledge discovery, decision support, and expert systems. This paper outlines the core concepts of rough set theory and emphasizes major RS-driven research directions and applications. In addition, it reviews the incorporation of rough set methods into various machine learning and AI approaches, including clustering, feature selection and reduction, and rule induction.
Keywords : Artificial Intelligence, Clustering, Rule Induction, Feature Selection
INTRODUCTION:
Rough Set Theory (RST) was first introduced by Zdzisław Pawlak [23] in the early 1980s as an extension of classical set theory designed to address uncertainty by approximating vague concepts. Since its development, RST has been widely utilized in numerous applications including attribute reduction, data simplification, rule generation, genetic data analysis, and knowledge discovery across various disciplines such as data mining, machine learning, and medical diagnosis. It is frequently regarded as a complementary approach to other uncertainty-based set models, particularly fuzzy set theory and multisets.
Over the past few decades, rough set theory has drawn increasing attention from researchers due to its flexibility and strong performance in tackling complex real-world problems. Rough set models have proven particularly valuable in artificial intelligence and cognitive science for representing and processing imprecise or incomplete information. Their applications extend to machine learning, knowledge acquisition, decision analysis, database systems, expert systems, and pattern recognition.
One of the major advantages of RST in decision support and knowledge discovery systems is its independence from prior assumptions. Unlike many other uncertainty-handling techniques, rough set theory does not require any preliminary or external information about the data or the underlying information system.
The central objective of this study is to present a comprehensive review of rough set-based techniques used in knowledge discovery. The paper describes the essential mathematical foundations and core structures of RST, along with several quality measures formulated within the theory to manage uncertainty and improve classification accuracy. Furthermore, it explores the integration of rough set approaches with important classification-related methods such as clustering, feature selection, and rule induction.
RST is grounded in the concept that every object in a universe is characterized by certain information. For example, when considering students who fail an examination, their scores represent the descriptive attributes associated with them. Objects possessing identical attribute values are treated as indistinguishable based on the available knowledge. This equivalence is formally defined as the indiscernibility relation, which forms the fundamental basis of rough set theory.
A group of objects that are mutually indiscernible constitutes a precise (crisp) set, while collections containing elements that cannot be clearly differentiated are known as rough sets. Consequently, each rough set includes a boundary region composed of objects that cannot be conclusively assigned either to the set or to its complement [77]. These boundary elements emerge due to incomplete or insufficient knowledge, which prevents exact classification. Rough set theory therefore provides an effective framework for analyzing qualitative data, focusing primarily on object-level information processing.
ROUGH SET THEORY:
Rough sets analyze uncertainty in data. They are used to determine the crucial attributes of objects and to build upper and lower approximations of object sets. Real-world data varies in size and complexity, which makes it difficult to analyze and hard to manage from a computational viewpoint. The major objectives of rough set analysis are to reduce data size and to handle inconsistency in data.
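A minimal Python sketch of these constructions (over a small hypothetical information table; the attribute names and values are illustrative only) computes the indiscernibility classes and the lower approximation, upper approximation, and boundary region of a target set:

```python
from collections import defaultdict

def indiscernibility_classes(objects, attrs):
    """Group objects that share identical values on the given attributes."""
    classes = defaultdict(set)
    for name, desc in objects.items():
        classes[tuple(desc[a] for a in attrs)].add(name)
    return list(classes.values())

def approximations(objects, attrs, target):
    """Lower/upper approximation of a target set under IND(attrs)."""
    lower, upper = set(), set()
    for block in indiscernibility_classes(objects, attrs):
        if block <= target:   # block lies entirely inside the concept
            lower |= block
        if block & target:    # block overlaps the concept
            upper |= block
    return lower, upper

# Toy decision table (hypothetical data): students described by two scores.
students = {
    "s1": {"math": "low",  "cs": "low"},
    "s2": {"math": "low",  "cs": "low"},
    "s3": {"math": "high", "cs": "low"},
    "s4": {"math": "high", "cs": "high"},
}
failed = {"s1", "s3"}                  # concept to approximate
lo, up = approximations(students, ["math", "cs"], failed)
boundary = up - lo                     # objects that cannot be classified with certainty
print(lo, up, boundary)
```

Here s1 and s2 are indiscernible but only s1 failed, so both end up in the boundary region, while s3 is certainly classified.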
RELATED APPROACHES :
Rough set theory provides fundamental methodologies for knowledge analysis and data interpretation. It facilitates the development of algorithms for knowledge reduction, approximation of concepts, induction of decision rules, and classification of objects. In the following sections we discuss the adoption of rough set theory in various classification techniques.
Rough Set Theory in Feature Selection:
Feature selection is a crucial step in data mining as it identifies the most significant attributes required for effective data representation while eliminating irrelevant ones. The primary goal of feature selection is to determine an optimal subset of features based on a defined evaluation criterion. The removal of unnecessary attributes is motivated by the following factors [26]:
Noisy variables negatively impact the generalization ability of learning algorithms, since substantial computational resources are wasted on training variables that exhibit poor signal-to-noise ratios.
Misleading or deceptive variables can divert learning algorithms toward incorrect generalizations, resulting in inaccurate concept modeling.
Feature selection is generally driven by two main objectives:
To maximize the informational value contained within the selected feature subset.
To minimize the number of features included in that subset.
Balancing these objectives makes the design of feature selection algorithms challenging. Common approaches include forward selection, which progressively adds variables until satisfactory performance is obtained, and backward elimination, which begins with the complete set of variables and systematically removes those that do not significantly affect performance. Bidirectional hill-climbing techniques further allow both addition and removal of variables at different stages to optimize selection quality. Broadly, feature selection algorithms are classified into two main categories:
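The forward-selection strategy described above can be sketched as a short greedy loop; the `score` function here is a hypothetical subset-quality measure standing in for whatever evaluation criterion an actual algorithm would use:

```python
def forward_select(features, score, max_features=None):
    """Greedy forward selection: repeatedly add the feature that most
    improves a subset-quality score until no candidate helps."""
    selected, best = [], score([])
    candidates = list(features)
    while candidates:
        top_score, top_f = max((score(selected + [f]), f) for f in candidates)
        if top_score <= best:      # no remaining feature improves the subset
            break
        selected.append(top_f)
        candidates.remove(top_f)
        best = top_score
        if max_features and len(selected) >= max_features:
            break
    return selected

# Hypothetical score: rewards "useful" features, penalizes subset size.
useful = {"a", "c"}
score = lambda subset: len(set(subset) & useful) - 0.1 * len(subset)
print(sorted(forward_select(["a", "b", "c", "d"], score)))  # → ['a', 'c']
```

Backward elimination is the mirror image: start from the full set and greedily drop the feature whose removal hurts the score least.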
FILTERS:
Filter-based feature selection methods act purely as preprocessing techniques that assess the informational significance of attributes, primarily drawing upon concepts from Information Theory. Although these methods are broadly applicable and computationally efficient, they do not incorporate any awareness of the classification behavior of the data. A key limitation of filter approaches is their inability to effectively handle feature redundancy.
Representative algorithms within this category include Relief, Focus, Las Vegas Filter (LVF), Selection Construction Ranking using Attribute Pattern (SCRAP), Entropy-Based Reduction (EBR), and Fractal Dimension Reduction (FDR).
WRAPPERS:
Wrapper methods, in contrast, operate alongside a classifier and evaluate feature subsets based on their classification performance on training data. While wrappers generally yield higher accuracy than filters, they are computationally expensive and lack the scalability and general applicability associated with filter techniques. Notable examples include the Las Vegas Wrapper (LVW) and neural network-driven feature selection approaches.
Within Rough Set Theory (RST), feature selection is fundamentally associated with the notion of a reduct, which serves as a minimal subset of attributes capable of preserving the indiscernibility relation of the entire attribute set. Formally, a reduct is defined as a subset B ⊆ A such that IND(B) = IND(A), where IND(X) represents the indiscernibility relation induced by attribute set X. Such reducts maintain the partition structure of the universe and thus retain full classification capability. The reduct-based framework for attribute reduction has been widely explored by numerous researchers, and rough set models have become a prominent tool for feature selection in various studies.
A conventional rough set strategy involves identifying the core for discrete datasets, which includes strongly indispensable features. Reducts are then constructed by combining core attributes with weakly relevant ones, ensuring that each reduct sufficiently represents the decision concepts embedded in the dataset. Minimal reducts, containing the smallest possible number of attributes, are often preferred for efficient classification. To enhance robustness and generalization, the concept of dynamic reducts was introduced, where reduct selection is guided through cross-validation. Dynamic reducts have been applied both for adaptive feature selection and for refining relevant decision rules.
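Under the definition of a reduct as a subset B with IND(B) = IND(A), a brute-force search for minimal reducts can be sketched as follows (a toy illustration over a hypothetical table; exhaustive search is exponential, which is why the heuristic and evolutionary methods discussed in this section are used in practice):

```python
from collections import defaultdict
from itertools import combinations

def partition(table, attrs):
    """Partition the universe into indiscernibility classes under attrs."""
    blocks = defaultdict(set)
    for obj, row in table.items():
        blocks[tuple(row[a] for a in attrs)].add(obj)
    return list(blocks.values())

def is_reduct(table, subset, all_attrs):
    """B preserves IND(A) iff it induces the same partition of the universe."""
    canon = lambda attrs: sorted(map(sorted, partition(table, attrs)))
    return canon(subset) == canon(all_attrs)

def minimal_reducts(table):
    """Return all attribute subsets of smallest size preserving IND(A)."""
    attrs = sorted(next(iter(table.values())))
    for size in range(1, len(attrs) + 1):
        found = [set(c) for c in combinations(attrs, size)
                 if is_reduct(table, list(c), attrs)]
        if found:
            return found
    return [set(attrs)]

# Hypothetical information table with three attributes.
table = {
    "o1": {"a": 0, "b": 0, "c": 1},
    "o2": {"a": 0, "b": 1, "c": 1},
    "o3": {"a": 1, "b": 1, "c": 0},
}
print(minimal_reducts(table))
```

For this toy table no single attribute separates all three objects, but both {a, b} and {b, c} do, so each is a minimal reduct.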
Numerous computational strategies have been proposed for reduct generation. Evolutionary approaches based on genetic algorithms enable the derivation of reducts with manageable computational complexity, while heuristic-based techniques have also been extensively developed. Caballero et al. presented evolutionary and greedy heuristic algorithms known as epigraph2 and epigraph3, respectively. Another notable RST-driven method is the Parametrized Average Support Heuristic (PASH) proposed by Zhang and Yao, which evaluates rule quality across decision classes and incorporates parameters to regulate approximation levels. Furthermore, Salamo and Golobardes introduced rough set-based techniques emphasizing feature weighting and instance selection.
Rough Set Theory in Clustering :
Clustering is a core task in data mining that aims to group similar objects into the same cluster. It is widely applied in data analysis activities such as unsupervised classification, data summarization, and data segmentation, where large datasets are partitioned into smaller, homogeneous subsets that are easier to manage, classify independently, and analyze. Over the years, extensive research has been conducted on clustering techniques for datasets containing categorical, numerical, or mixed-type attributes.
Rough clustering is a relatively recent approach derived from an extension of rough set theory to clustering analysis and is particularly suitable for scenarios in which cluster membership is uncertain or not clearly defined. Unlike traditional clustering methods, rough clustering allows objects to belong to multiple clusters simultaneously. This section reviews significant research contributions in the field of rough clustering.
Clustering based on rough set theory can be implemented by transforming the clustering dataset into a decision table. The rough set notion of representing sets through lower and upper approximations can be naturally extended to clustering problems. In rough clustering,
suitable distance or similarity measures are employed to relax the strict indiscernibility conditions typically required in conventional clustering techniques. Rough clustering has demonstrated successful application in diverse domains such as forestry, medical diagnosis, image processing, web mining, retail and supermarket analysis, and traffic engineering.
Rough set theory has also been utilized to develop efficient heuristic methods for identifying relevant tolerance relations that facilitate object extraction from data. Attribute-oriented rough set techniques reduce the computational burden of learning processes by eliminating irrelevant or insignificant attributes, thereby enhancing the efficiency of knowledge discovery from databases and experimental datasets. Applications of rough sets have proven effective in uncovering relationships within imprecise data, identifying dependencies between objects and attributes, assessing the classificatory significance of features, eliminating redundant information, and generating decision rules [60,61]. In many information
systems, certain object classes cannot be distinctly identified using the available attributes and can therefore only be defined approximately. Rough set theory provides a powerful framework for representing overlapping clusters. Compared to conventional crisp sets, rough sets offer
greater flexibility, while remaining less descriptive than fuzzy sets. Rough clusters generalize the classical notion of precise clusters by assigning objects either to the lower approximation, indicating full membership in a single cluster, or to the upper approximation, where objects may simultaneously belong to multiple clusters. This dual representation enables rough clustering to effectively model uncertainty in cluster membership. In recent years, increasing research attention has been devoted to this emerging paradigm, and several successful rough clustering outcomes have been reported.
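This lower/upper assignment scheme can be illustrated with a rough k-means style membership rule in the spirit of Lingras; the threshold factor `eps` and the toy points are assumptions made for the sketch, not parameters from any specific published algorithm:

```python
import math

def rough_assign(points, centroids, eps=1.3):
    """Rough partitive assignment: a point whose second-best centroid is
    nearly as close as its best (within factor eps) joins the upper
    approximation of both clusters; otherwise it belongs to the lower
    approximation of the single nearest cluster."""
    lower = {i: set() for i in range(len(centroids))}
    upper = {i: set() for i in range(len(centroids))}
    for name, p in points.items():
        dists = sorted((math.dist(p, c), i) for i, c in enumerate(centroids))
        near = [i for d, i in dists if d <= eps * dists[0][0]]
        if len(near) == 1:
            lower[near[0]].add(name)   # certain membership
        for i in near:                 # boundary objects appear only in upper sets
            upper[i].add(name)
    return lower, upper

# Hypothetical 2-D points and two fixed centroids.
points = {"p1": (0.0, 0.0), "p2": (1.0, 0.1), "p3": (5.0, 5.0), "p4": (2.6, 2.4)}
lower, upper = rough_assign(points, [(0.0, 0.0), (5.0, 5.0)])
print(upper[0] & upper[1])   # objects shared by more than one upper approximation
```

Point p4 sits almost midway between the centroids, so it lands in both upper approximations and in no lower approximation, exactly the boundary behavior described above.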
Mazlack et al. [19] introduced a rough set-based method for cluster attribute selection, proposing two techniques, Bi-clustering and Total Roughness (TR), which rely on bi-valued attributes and the maximization of total roughness across attribute sets. Parmar et al. [22] proposed the Minimum-Minimum Roughness (MMR) technique, which utilizes lower and upper approximations along with approximation quality measures [83]. For categorical data clustering, Herawan et al. [13] introduced the Maximal Attribute Dependency (MADE) approach, which computes rough attribute dependencies within categorical information systems to select clustering attributes with maximum dependency. Chen et al. [8] developed a hierarchical clustering algorithm based on rough set theory, incorporating an attribute membership matrix and evaluating clustering levels using consistency and aggregate degrees, while cluster similarity is measured through a categorical Euclidean distance metric. Upadhyaya, Arora, and Jain [34] proposed a clustering approach combining rough set-based indiscernibility relations with indiscernibility graphs to identify natural clusters based on similarity rather than exact equivalence. Hakim et al. [11] presented a method for clustering binary data using both indiscernibility relations and their associated levels. Herawan, Yanto, and Deris [14] applied a rough set clustering approach to supplier base management.
In addition to these methods, several related rough clustering techniques have been developed. These include Rough Partitive Algorithms, such as switching regression models, where clusters are represented by functions instead of individual objects [29]. Peters and Lampart [25] introduced rough k-medoids, while Peters [26] proposed a
rough switching regression model; together with rough k-means, these form a class of rough partitive clustering algorithms. Genetic algorithm-based rough clustering methods have also been explored, including approaches proposed by Lingras [18], Mitra et al. [20], and Peters et al. [28] through evolutionary k-medoids. Kohonen network-based rough clustering integrates rough set concepts into the Kohonen algorithm by incorporating lower and upper approximations into the weight update equations. Rough Support Vector Clustering (RSVC) is a soft clustering technique derived from the Support Vector Clustering (SVC) framework. RSVC achieves soft clustering by integrating rough set principles into the SVC paradigm. In this approach, the quadratic programming problem of SVC is modified to incorporate rough set characteristics, while retaining the same solution structure, allowing existing SVC optimization methods to be directly applied. The cluster labeling strategy in RSVC is also adapted from the conventional SVC approach.
Peters and Weber [27] proposed a dynamic rough clustering method in which algorithm parameters are iteratively updated to adapt to evolving environments, such as seasonal variations in customer behavior. Additional rough clustering approaches include early set-theoretic interpretations proposed by do Prado et al. [9] and Voges et al. [35,36]. Furthermore, Yao et al. [39] suggested relaxing certain rough clustering constraints, particularly the requirement that boundary objects belong to at least two clusters, and introduced an interval-based clustering framework.
Rough Set Theory in Rule Induction :
Decision rule discovery has been widely investigated using several major classification paradigms, including decision tree learning methods such as ID3 and C4.5, Bayesian classifiers, backpropagation-based neural networks, rough set models, and evolutionary computation techniques. These approaches have been extensively discussed across numerous research studies and survey articles. Traditional association rule mining algorithms proposed by Agrawal et al. [1], Agrawal and Srikant [2], Zaki [40], and Han et al.
[12] primarily depend on support and confidence as measures of rule significance. However, these parameters are implicit and not directly embedded within the rules themselves. Furthermore, conventional techniques do not adequately distinguish between moderately strong rules and those that capture deeper and more meaningful relationships among variables. Extracting highly significant association rules therefore requires additional processing beyond the initial discovery phase.
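For reference, the support and confidence measures mentioned above reduce to simple frequency ratios, as the following sketch over a hypothetical market-basket dataset shows:

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, lhs, rhs):
    """Estimated P(rhs | lhs): support of the whole rule over support of its antecedent."""
    return support(transactions, set(lhs) | set(rhs)) / support(transactions, lhs)

# Hypothetical basket data.
baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"bread", "milk", "butter"}, {"milk"}]
print(support(baskets, {"bread", "milk"}))       # 2 of 4 baskets → 0.5
print(confidence(baskets, {"bread"}, {"milk"}))  # 0.5 / 0.75 → 2/3
```

Both quantities describe only how often a rule fires, which is exactly why, as noted above, they cannot by themselves separate shallow co-occurrences from deeper relationships.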
Rough Set Theory (RST) provides an alternative inductive learning framework for generating decision rules from attribute-based data tables. The rules derived through RST may be either certain or approximate, although the degree of uncertainty remains implicit, similar to traditional association rule methods. Various algorithms have been developed to
address incomplete datasets containing missing attribute values within rough set-based rule induction. RST has proven particularly effective in automated reasoning and rule-based decision systems, as it offers mechanisms to manage vagueness, ambiguity, and imprecision inherent in real-world attribute relationships. Consequently, rough set approaches have gained significant importance in the design and development of intelligent rule-based systems.
RST serves as a mathematical tool for analyzing uncertain and imprecise information systems by employing concepts such as indiscernibility relations, attribute dependency, approximation accuracy, reducts, core attributes, and decision rule extraction. Through reduct and core analysis, RST enables efficient reduction of condition-decision datasets and facilitates the generation of minimal decision rules without requiring prior domain knowledge.
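As an illustration of rule extraction without prior domain knowledge, the sketch below reads certain decision rules off the indiscernibility classes of a hypothetical condition-decision table; classes whose objects disagree on the decision belong only to approximate rules and are skipped here:

```python
from collections import defaultdict

def certain_rules(table, cond_attrs, dec_attr):
    """Extract certain decision rules: an indiscernibility class whose
    objects all share one decision value yields a fully certain rule."""
    blocks = defaultdict(list)
    for row in table:
        blocks[tuple(row[a] for a in cond_attrs)].append(row[dec_attr])
    rules = []
    for vals, decisions in blocks.items():
        if len(set(decisions)) == 1:   # class lies inside a lower approximation
            cond = " AND ".join(f"{a}={v}" for a, v in zip(cond_attrs, vals))
            rules.append(f"IF {cond} THEN {dec_attr}={decisions[0]}")
    return rules

# Hypothetical condition-decision table.
table = [
    {"fever": "yes", "cough": "yes", "flu": "yes"},
    {"fever": "yes", "cough": "no",  "flu": "yes"},
    {"fever": "no",  "cough": "yes", "flu": "no"},
    {"fever": "no",  "cough": "yes", "flu": "yes"},  # inconsistent with previous row
]
for r in certain_rules(table, ["fever", "cough"], "flu"):
    print(r)
```

The last two rows are indiscernible but disagree on the decision, so no certain rule is produced for them; methods such as LEM2 additionally derive approximate rules for exactly such boundary classes.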
The rough set-based rule extraction process generally involves two main stages. First, attribute reduction techniques are applied to eliminate redundant features while preserving the classification capability of the dataset. This stage identifies the most influential attributes. For instance, Hu et al. [16] introduced a heuristic reduction algorithm (DISMAR) based on attribute significance derived from discernibility matrices. Hu [15] proposed POSAR, which employs a positive region-based heuristic measure. Wang and Li [37] developed CEAR using conditional information entropy. Wang et al. [38] integrated Particle Swarm Optimization (PSO) into rough set reduction for brain glioma datasets. Nguyen [21] utilized Boolean reasoning heuristics to analyze decision tables with a maximal number of reducts.
The second stage focuses on rule induction from the reduced attribute set, enabling the extraction of meaningful and interpretable decision rules [37]. Tsumoto [33] proposed PRIMEROSE, a probabilistic rule induction method based on rough sets, and later extended it to PRIMEROSE4.5, incorporating rough inclusion relations to improve performance. The LEM2 algorithm was also introduced for deriving minimal decision rules and has demonstrated effectiveness in classification and medical knowledge extraction. These methods revealed significant
patterns linking glioma MRI attributes with malignancy levels. Law and Au [17] applied rough classification for mixed-type tourism datasets, while Shen and Chouchoulas
[32] proposed RSAR, a modular framework integrating rough set-based dimensionality reduction into fuzzy rule induction. Incremental learning has emerged to address dynamic data environments. Existing methods include incremental protocol verification, incremental classification learning, GA-based ordered training, neural incremental architectures, continuous association rule mining, and incremental gravitational clustering. Several incremental approaches have also been developed within rough set theory. Blaszczynski and Slowinski introduced DomAprioriUpp for incremental rule post-processing. Asharaf et al. [3] proposed incremental rough clustering for interval data. Bazan et al. [5] explored incremental concept approximation for classifier construction. Ripple-Down Rules combined with rough sets were presented by Richards and Compton [30]. Guo et al. [10] developed the RDBRST algorithm for incremental
rule derivation. Shan and Ziarko [31] proposed an incremental rough learning model, later improved by Bian to handle inconsistent datasets through extended decision matrices.
Most rule discovery techniques generate symbolic knowledge in the form of traditional Production Rules (IF-THEN). Although easy to interpret, such rules suffer from limitations including poor exception handling, lack of hierarchical reasoning, and excessive fragmentation of knowledge. These shortcomings have been addressed through advanced rule representations such as Censored Production Rules (CPRs), Hierarchical Production Rules (HPRs), Hierarchical Censored Production Rules (HCPRs), and Hierarchical Fuzzy Censored Rules (CPRFHs).
CONCLUSION :
Rough Set Theory (RST) is primarily concerned with techniques for handling and classifying imprecise, uncertain, and incomplete information derived from experiential data. A key aspect of RST is its ability to distinguish between objects that can be certainly assigned to a category and those that can only be possibly classified. The theory supports the development of computational procedures for knowledge reduction, approximation of concepts, induction of decision rules, and classification of objects.
This study provides an overview of research contributions in rough set theory, where different investigations emphasize one or more rough set methodologies. The present review particularly highlights the application of RST in data preprocessing, clustering, and rule induction. Despite the progress achieved, further exploration and enhancement remain necessary. Managing uncertainty within data typically
introduces additional computational complexity, and the exhaustive evaluation of rules can become computationally intensive. To address these challenges, several simplified and evolutionary rough set-based approaches for decision-making have been proposed, and ongoing research continues in this direction. Moreover, recent studies have focused on hybridizing RST with modern computational paradigms such as granular computing, neural networks, genetic algorithms, and other evolutionary techniques, which are widely documented in the literature.
REFERENCES :
[1] Agrawal, R., Imielinski, T. and Swami, A. Mining association rules between sets of items in large databases. in Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data, (Washington, D.C., 1993), ACM Press, 22(2), 805-810.
[2] Agrawal, R. and Srikant, R. Fast algorithms for mining association rules in large databases. in J. B. Bocca, M. Jarke, and C. Zaniolo (Eds.), Proceedings of the 20th International Conference on Very Large Data Bases, VLDB, (Santiago, Chile, 1994), Morgan Kaufmann, 487-499.
[3] Asharaf, S., Shevade, S.K. and Murty, N.M. Rough set based incremental clustering of interval data. Pattern Recognition Letters, 27(2006), 515-519.
[4] Baqui, S., Just, J. and Baqui, S.C. Deriving strong association rules using a dependency criterion, the lift measure. International Journal of Data Analysis Techniques and Strategies, 1, 3(2009), 297-312.
[5] Bazan, J.G., Peters, J.F., Skowron, A. and Nguyen, H.S. Rough set approach to pattern extraction from classifiers. Electronic Notes in Theoretical Computer Science, 82, 4(2003), 1-10.
[6] Berzal, F., Blanco, I., Sanchez, D. and Vila, M.A. A new framework to assess association rules. in Symposium on Intelligent Data Analysis, Lecture Notes in Computer Science, 2189(2001), 95-104.
[7] Bian, X. Certain rule learning of the inconsistent data. Journal of East China Shipbuilding Institute, 12, 1(1998), 25-30.
[8] Chen, D., Cui, D.W., Wang, C.X. and Wang, Z.R. A Rough Set-Based Hierarchical Clustering Algorithm for Categorical Data. International Journal of Information Technology, 12, 3(2006), 149-159.
[9] Do Prado, H.A., Engel, P.M. and Filho, H.C. Rough clustering: an alternative to find meaningful clusters by using the reducts from a dataset. in J. Alpigini, J. Peters, A. Skowron, N. Zhong (Eds.), Proceedings of the Rough Sets and Current Trends in Computing (RSCTC'02), Lecture Notes in Artificial Intelligence, LNAI-2475, (Heidelberg, 2002), Springer-Verlag, 234-238.
[10] Guo, S., Wang, Z.Y., Wu, Z.C. and Yan, H.P. A novel dynamic incremental rules extraction algorithm based on rough set theory. in Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, (Guangzhou, 2005), IEEE Computer Society, 18-21.
[11] Hakim, F., Winarko, S. and Winarko, E. Clustering Binary Data Based on Rough Set Indiscernibility Level. Biomedical Soft Computing and Human Sciences, 16, 2(2010), 87-95.
[12] Han, J., Pei, J., Yin, Y. and Mao, R. Mining frequent patterns without candidate generation. Data Mining and Knowledge Discovery, 8(2004), 53-87.
[13] Herawan, T., Ghazali, R., Yanto, I.T.R. and Deris, M.M. Rough Set Approach for Categorical Data Clustering. International Journal of Database Theory and Application, 3, 1(2010), 33-52.
[14] Herawan, T., Yanto, I. and Deris, M.M. ROSMAN: ROugh Set approach for clustering Supplier base MANagement. Biomedical Soft Computing and Human Sciences, 16, 2(2010), 105-114.
[15] Hu, X. Knowledge discovery in databases: An attribute oriented rough set approach. Ph.D. Thesis, University of Regina, 1995.
[16] Hu, K.Y., Lu, Y.C. and Shi, C.Y. Feature ranking in rough sets. Artificial Intelligence Communications, 16, 1(2003), 41-50.
[17] Law, R. and Au, N. Relationship modeling in tourism shopping: A decision rules induction approach. Tourism Management, 21(2000), 241-249.
[18] Lingras, P. Unsupervised rough set classification using GAs. Journal of Intelligent Information Systems, 16(2001), 215-228.
[19] Mazlack, L.J., He, A., Zhu, Y. and Coppock, S. A rough set approach in choosing partitioning attributes. in Proceedings of the ISCA 13th International Conference of Computer Applications in Industry and Engineering (CAINE-2000), (Hawaii, USA, 2000), ISCA, 1-6.
[20] Mitra, S. An evolutionary rough partitive clustering. Pattern Recognition Letters, 25(2004), 1439-1449.
[21] Nguyen, H.S. On the decision table with maximal number of reducts. Electronic Notes in Theoretical Computer Science, 82, 4(2003), 1-8.
[22] Parmar, D., Wu, T. and Blackhurst, J. MMR: An algorithm for clustering categorical data using rough set theory. Data and Knowledge Engineering, 63(2007), 879-893.
[23] Pawlak, Z. Rough sets. International Journal of Computer and Information Sciences, 11(1982), 341-356.
[24] Pawlak, Z. and Skowron, A. Rudiments of rough sets. Information Sciences, 177, 1(2007), 3-27.
[25] Peters, G. and Lampart, M. A partitive rough clustering. in Proceedings of the Conference on Rough Sets and Current Trends in Computing (RSCTC'06), Lecture Notes in Artificial Intelligence, LNAI-4259, (Kobe, Japan, 2006), Springer, 657-666.
[26] Peters, G. Rough clustering and regression analysis. in Proceedings of 2007 IEEE Conference on Rough Sets and Knowledge Technology (RSKT'07), Lecture Notes in Artificial Intelligence, LNAI-4481, (Toronto, Canada, 2007), 292-299.
[27] Peters, G. and Weber, R. A dynamic approach to rough clustering. in Proceedings of the Seventh International Conference on Rough Sets and Current Trends in Computing (RSCTC'08), Lecture Notes in Artificial Intelligence, LNAI-5306, (USA, 2008), Springer, 379-388.
[28] Peters, G., Lampart, M. and Weber, R. Evolutionary rough k-medoid clustering. Transactions on Rough Sets VIII, Lecture Notes in Computer Science, 5084(2008), 289-306.
[29] Quandt, R. The estimation of the parameters of a linear regression system obeying two separate regimes. Journal of the American Statistical Association, 53(1958), 873-880.
[30] Richards, D. and Compton, P. An alternative verification and validation technique for an alternative knowledge representation and acquisition technique. Knowledge Based Systems, 12, 1-2(1999), 55-73.
[31] Shan, N. and Ziarko, W. An incremental learning algorithm for constructing decision rules. in Rough Sets, Fuzzy Sets and Knowledge Discovery, (Berlin, 1994), Springer-Verlag, 326-334.
[32] Shen, Q. and Chouchoulas, A. A modular approach to generating fuzzy rules with reduced attributes for the monitoring of complex systems. Engineering Applications of Artificial Intelligence, 13, 3(2000), 263-278.
[33] Tsumoto, S. Extraction of Experts' Decision Rules from Clinical Databases using Rough Set Model. Journal of Intelligent Data Analysis, 2, 3(1998), 215-227.
[34] Upadhyaya, S., Arora, A. and Jain, R. Rough Set Theory: Approach for Similarity Measure in Cluster Analysis. in Proceedings of 2006 International Conference on Data Mining, (Hong Kong, China, 2006), IEEE Computer Society, 353-356.
[35] Voges, K.E., Pope, N.K. and Brown, M.R. Cluster analysis of marketing data examining on-line shopping orientation: a comparison of k-means and rough clustering approaches. in H.A. Abbass, R.A. Sarker, C.S. Newton (Eds.), Heuristics and Optimization for Knowledge Discovery, Idea Group Publishing, Hershey, PA, 2002, 208-225.
[36] Voges, K.E., Pope, N.K. and Brown, M.R. A rough cluster analysis of shopping orientation data. in Proceedings of the Australian and New Zealand Marketing Academy Conference, (Adelaide, 2003), 1625-1631.
[37] Wang, Q.H. and Li, J.R. A rough set-based fault ranking prototype system for fault diagnosis. Engineering Applications of Artificial Intelligence, 17, 8(2004), 909-917.
[38] Wang, X., Yang, J., Jensen, R. and Liu, X. Rough set feature selection and rule induction for prediction of malignancy degree in brain glioma. Computer Methods and Programs in Biomedicine, 83(2006), 147-156.
[39] Yao, Y.Y., Lingras, P., Wang, R. and Miao, D. Interval set cluster analysis: a re-formulation. in Proceedings of the International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing (RSFDGrC'09), Lecture Notes in Computer Science, LNCS-5908, (Berlin, 2009), Springer-Verlag, 398-405.
[40] Zaki, M.J. Scalable algorithms for association mining. IEEE Transactions on Knowledge and Data Engineering, 12, 3(2000), 372-390.
