Karcı fractional order neural network (KarcıFANN): solving learning rate, overfitting and underfitting problems

dc.contributor.authorSaygili, Hulya
dc.contributor.authorKarakurt, Meral
dc.contributor.authorKarci, Ali
dc.date.accessioned2026-04-04T13:32:58Z
dc.date.available2026-04-04T13:32:58Z
dc.date.issued2025
dc.departmentİnönü Üniversitesi
dc.description.abstractThe performance of artificial neural networks (ANNs) is affected by the selection of hyperparameters, and the learning rate is a hyperparameter that significantly affects this performance. Choosing the right learning rate to achieve optimum success across different models and datasets is a difficult and time-consuming process: an inappropriate learning rate can cause problems such as failure to learn, memorization (overfitting), and exploding or vanishing gradients. In the Karcı Fractional Neural Network (KarcıFANN) method proposed in this article, the weight update is performed using a fractional derivative instead of the learning rate, which is a fixed number in classical ANNs trained with Stochastic Gradient Descent (SGD). Thus, in the KarcıFANN method, a fractional derivative that changes according to the error value obtained in each iteration is used, so that external intervention in the network is minimized; this is the study's contribution to the literature. In the study, the results of the classical ANN and the KarcıFANN method, run with the same initial and parameter values, were compared on the classification of the Kuzushiji-MNIST, GinaPrior2, and SignMnist data sets. In experiments where values between 0.1 and 5.0 were given to the alpha parameter (the order of the fractional derivative) and to the learning rate of the classical ANN, the KarcıFANN method performed better than the classical ANN on the Kuzushiji-MNIST and GinaPrior2 data sets, especially for values between 3.0 and 5.0. The memorization and learning problems encountered in the classical ANN were eliminated in the KarcıFANN method. In addition, the generalizability of the KarcıFANN method was demonstrated by running it on multiple data sets.
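The abstract describes replacing SGD's fixed learning rate with a fractional-order derivative whose effective step size varies during training. The sketch below is illustrative only: the paper's exact update rule is not given in this record, so `fractional_step`, the base point `c`, and the Caputo-type approximation used here are assumptions from the general fractional-gradient literature, valid only for 0 < alpha < 2 (the gamma function has poles at non-positive integers, so alpha values up to 5.0 as in the paper would require a different formulation).

```python
import math

def sgd_step(w, grad, lr=0.1):
    """Classical SGD: a fixed learning rate lr scales the gradient."""
    return w - lr * grad

def fractional_step(w, grad, alpha=1.5, c=0.0):
    """Fractional-order update (assumed Caputo-type approximation):
    D^alpha f(w) ~ f'(w) * |w - c|^(1 - alpha) / Gamma(2 - alpha).
    The step size depends on w and alpha, so no fixed learning rate is used.
    Only valid for 0 < alpha < 2 with this approximation."""
    scale = abs(w - c) ** (1.0 - alpha) / math.gamma(2.0 - alpha)
    return w - grad * scale

w, grad = 2.0, 0.5
print(sgd_step(w, grad))         # fixed-rate step: 2.0 - 0.1 * 0.5 = 1.95
print(fractional_step(w, grad))  # step size adapts to w and alpha
```

With alpha = 1 the fractional term reduces to the ordinary first derivative, recovering a plain gradient step, which is one way to see the method as a generalization of classical SGD.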
dc.identifier.doi10.17341/gazimmfd.1558859
dc.identifier.endpage2514
dc.identifier.issn1300-1884
dc.identifier.issn1304-4915
dc.identifier.issue4
dc.identifier.scopus2-s2.0-105026447444
dc.identifier.scopusqualityQ2
dc.identifier.startpage2499
dc.identifier.urihttps://doi.org/10.17341/gazimmfd.1558859
dc.identifier.urihttps://hdl.handle.net/11616/108835
dc.identifier.volume40
dc.identifier.wosWOS:001668973600027
dc.identifier.wosqualityQ3
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakScopus
dc.language.isotr
dc.publisherGazi Univ, Fac Engineering Architecture
dc.relation.ispartofJournal of the Faculty of Engineering and Architecture of Gazi University
dc.relation.publicationcategoryMakale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı
dc.rightsinfo:eu-repo/semantics/openAccess
dc.snmzKA_WOS_20250329
dc.subjectArtificial neural networks
dc.subjectstochastic gradient descent
dc.subjectlearning rate
dc.subjectkarcı
dc.subjectfractional artificial neural networks
dc.subjectfractional order derivative
dc.titleKarcı fractional order neural network (KarcıFANN): solving learning rate, overfitting and underfitting problems
dc.typeArticle
