Feature Selection for Text Classification Using Mutual Information

Date

2019

Publisher

IEEE

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

Feature selection can be defined as choosing the best subset of features to represent a data set, that is, removing unnecessary data that does not affect the result. In classification applications, reducing dimensionality through feature selection can increase the efficiency and accuracy of a system. In this study, text classification was applied to the 20 news group data published by the Reuters news agency. The pre-processed news documents were converted into vectors using the Doc2Vec method to create a data set, which was classified with the Maximum Entropy classification method. Afterwards, subsets of the data set were created using the Mutual Information method for feature selection. Classification was repeated on the resulting data sets and the results were compared by performance rate. The success of the system with 600 features, before feature selection, was 0.9285; the performance rates of the 200-, 100-, 50-, and 20-feature models were then 0.9454, 0.9426, 0.9407, and 0.9123, respectively. Examination of the results shows that the 50-feature model outperformed the 600-feature model initially created.
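The pipeline the abstract describes (vectorize documents, rank features by mutual information with the class label, keep the top k, and reclassify) can be sketched as below. This is a minimal illustration using scikit-learn, not the authors' implementation: synthetic 600-dimensional vectors stand in for the Doc2Vec embeddings of the news corpus, and multinomial logistic regression is used as the standard maximum-entropy classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for the 600-dimensional Doc2Vec document vectors (assumption:
# the real study vectorizes the news corpus; here the data is synthetic).
X, y = make_classification(n_samples=1000, n_features=600,
                           n_informative=40, n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Rank features by estimated mutual information with the label
# and keep only the 50 highest-scoring ones (one of the paper's k values).
selector = SelectKBest(mutual_info_classif, k=50).fit(X_tr, y_tr)
X_tr_sel = selector.transform(X_tr)
X_te_sel = selector.transform(X_te)

# Multinomial logistic regression: the usual maximum-entropy classifier.
clf = LogisticRegression(max_iter=1000).fit(X_tr_sel, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te_sel))
print(f"accuracy with 50 selected features: {acc:.4f}")
```

Repeating the selection with different values of `k` (e.g. 200, 100, 50, 20) and comparing the resulting accuracies mirrors the comparison reported in the abstract.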

Description

International Conference on Artificial Intelligence and Data Processing (IDAP) -- SEP 21-22, 2019 -- Inonu Univ, Malatya, TURKEY

Keywords

Natural Language Processing, Doc2Vec, Mutual Information, Maximum Entropy

Source

2019 International Conference on Artificial Intelligence and Data Processing (IDAP 2019)

WoS Q Value

N/A
