Tagluk, Mehmet Emin
Ertugrul, Omer Faruk
Date accessioned: 2024-08-04
Date available: 2024-08-04
Date of issue: 2015
ISSN: 1568-4946
eISSN: 1872-9681
DOI: https://doi.org/10.1016/j.asoc.2015.07.044
Handle: https://hdl.handle.net/11616/96936

Abstract: Due to technological improvements, the number and volume of datasets are increasing considerably, bringing about the need for additional memory and computational capacity. To work with massive datasets efficiently, feature selection, data reduction, rule-based, and exemplar-based methods have been introduced. This study presents a method, called joint generalized exemplar (JGE), for the classification of massive datasets. The method aims to enhance the computational performance of the nested generalized exemplar (NGE) approach by working against the nesting and overlapping of hyper-rectangles: overlapping parts are repeatedly reassessed with the same procedure, and non-overlapping hyper-rectangle sections that fall within the same class are joined. This yields adaptive decision boundaries and permits batch data searching instead of incremental searching. Classification is then performed according to the distance between each query and the generalized exemplars. The accuracy and time requirements for the classification of synthetic datasets and a benchmark dataset obtained by JGE, NGE, and other popular machine learning methods were compared, and the results achieved by JGE were found acceptable. (C) 2015 Elsevier B.V. All rights reserved.

Language: en
Access: info:eu-repo/semantics/closedAccess
Keywords: Nested generalized exemplar; Exemplar-based learning; Classification; Compression; Artificial intelligence
Title: A joint generalized exemplar method for classification of massive datasets
Type: Article
Volume: 36
Pages: 487-498
DOI: 10.1016/j.asoc.2015.07.044
Scopus ID: 2-s2.0-84939813591 (Q1)
WoS ID: WOS:000360424700040 (Q1)
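
The abstract describes classifying each query by its distance to generalized exemplars (hyper-rectangles). As an illustrative sketch only, and not the authors' implementation, the core distance rule used by the NGE family can be expressed as: a point inside a hyper-rectangle has distance zero, and a point outside is measured to the nearest face. The `HyperRect` class and `classify` function below are hypothetical names introduced for this sketch.

```python
# Illustrative sketch of exemplar-based classification with hyper-rectangles
# (NGE-style distance rule); not the paper's JGE implementation.
import math
from dataclasses import dataclass
from typing import List

@dataclass
class HyperRect:
    lower: List[float]   # per-dimension lower bounds
    upper: List[float]   # per-dimension upper bounds
    label: str           # class of this generalized exemplar

def rect_distance(query: List[float], rect: HyperRect) -> float:
    """Euclidean distance from a point to a hyper-rectangle.

    Per dimension, the gap is zero when the coordinate lies inside
    the interval, otherwise the distance to the nearer bound.
    """
    total = 0.0
    for q, lo, hi in zip(query, rect.lower, rect.upper):
        if q < lo:
            total += (lo - q) ** 2
        elif q > hi:
            total += (q - hi) ** 2
    return math.sqrt(total)

def classify(query: List[float], rects: List[HyperRect]) -> str:
    """Assign the label of the nearest generalized exemplar."""
    return min(rects, key=lambda r: rect_distance(query, r)).label

# Two non-overlapping exemplars, as JGE aims to maintain.
rects = [
    HyperRect([0.0, 0.0], [1.0, 1.0], "A"),
    HyperRect([2.0, 2.0], [3.0, 3.0], "B"),
]
print(classify([0.5, 0.5], rects))  # inside the first rectangle -> "A"
print(classify([2.5, 3.5], rects))  # nearest to the second -> "B"
```

A query falling inside a rectangle matches it at distance zero; otherwise the nearest rectangle wins, which is what makes joined, non-overlapping exemplars behave as adaptive decision boundaries.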