
The Context of Knowledge and Data Discovery in Highly Dense Data Points Using Heuristic Approach

PP. 88-94
DOI: 10.4236/jsip.2013.41011

ABSTRACT

In the data-mining framework, recent researchers have applied branch-and-bound methods to tasks such as seriation, clustering, and feature selection for efficient data analysis. Conventional cluster search relies on diverse partitioning schemes to optimize the cluster pattern. For image data, partitioning approaches become computationally complex because of the large data size and the uncertainty in the number of clusters. Recent work presented a new version of the branch-and-bound model, called the model-selection problem, which handles clustering issues more efficiently. That work deployed spatially coherent sampling to generate cluster-parameter candidates. However, when the problem-specific bounds and/or added heuristics in the data points of the domain area are exceeded, memory overheads, specific model selection, and uncertain data points cause various clustering abnormalities. To overcome these issues, we present an optimal model-selection clustering for image data-point analysis in the context of knowledge and data discovery in highly dense data points with high uncertainty. In this work, model-selection clustering is first initiated through heuristic training sequences on image data points, which capture the problem-specific characteristics. The heuristic training sequences generate and test a set of models to determine whether each model matches the characteristics of the problem; through this process, the model-selection criteria are applied efficiently. An experimental evaluation of the proposed model-selection clustering for image data points using a heuristic approach (MSCHA) is conducted on real and synthetic data sets drawn from research repositories (UCI), and the performance of MSCHA is measured in terms of data-point density, model-selection criteria, and cluster validity.
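The generate-and-test model selection described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' MSCHA algorithm: the use of a basic k-means fitter as the "heuristic training sequence", the candidate set of cluster counts, and the hand-chosen complexity weight `penalty` are all assumptions made for the example.

```python
# Illustrative sketch only -- NOT the authors' MSCHA algorithm. Candidate
# models (here, cluster counts k) are each fitted by a simple heuristic
# (Lloyd's k-means with deterministic farthest-first seeding) and scored by
# a penalized-fit criterion; the best-scoring model is selected.
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    n = len(pts)
    return tuple(sum(coord) / n for coord in zip(*pts))

def kmeans(points, k, iters=30):
    # Deterministic farthest-first seeding, then standard Lloyd iterations.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        # Keep the old center if a cluster empties out.
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    sse = sum(min(dist2(p, c) for c in centers) for p in points)
    return centers, sse

def select_model(points, candidate_ks, penalty=10.0):
    """Generate and test each candidate model; keep the best-scoring one.
    `penalty` is a hand-chosen complexity weight (an assumption of this
    sketch, standing in for the paper's model-selection criteria)."""
    best_k, best_score = None, float("inf")
    for k in candidate_ks:
        _, sse = kmeans(points, k)
        score = sse + penalty * k  # fit term + complexity term
        if score < best_score:
            best_k, best_score = k, score
    return best_k

# Usage: two well-separated synthetic blobs; the selection should recover k = 2.
rng = random.Random(1)
def blob(cx, cy, n=40):
    return [(cx + rng.gauss(0, 0.1), cy + rng.gauss(0, 0.1)) for _ in range(n)]
points = blob(0.0, 0.0) + blob(5.0, 5.0)
best = select_model(points, [1, 2, 3, 4, 5])
```

The penalized score mirrors the trade-off the abstract raises: a larger k always reduces the fit error, so the complexity term is what lets the procedure reject over-fitted models when the number of clusters is uncertain.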

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

C. S. Sasireka and P. Raviraj, "The Context of Knowledge and Data Discovery in Highly Dense Data Points Using Heuristic Approach," Journal of Signal and Information Processing, Vol. 4 No. 1, 2013, pp. 88-94. doi: 10.4236/jsip.2013.41011.



Copyright © 2018 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.