A Novel Attention-based Explainable Deep Learning Framework Towards Medical Image Classification

dc.authorscopusid 58668454900
dc.authorscopusid 56389888600
dc.authorscopusid 57206891969
dc.authorscopusid 58882404100
dc.authorscopusid 56727581600
dc.authorscopusid 57189003551
dc.contributor.author Muoka G.W.
dc.contributor.author Yi D.
dc.contributor.author Ukwuoma C.C.
dc.contributor.author Martin M.D.
dc.contributor.author Aydin A.A.
dc.contributor.author Al-Antari M.A.
dc.date.accessioned 2024-08-04T20:03:59Z
dc.date.available 2024-08-04T20:03:59Z
dc.date.issued 2023
dc.department İnönü Üniversitesi en_US
dc.description 7th International Symposium on Innovative Approaches in Smart Technologies, ISAS 2023 -- 23 November 2023 through 25 November 2023 -- 196776 en_US
dc.description.abstract Deep learning applications for medical image classification have shown remarkable promise, particularly those incorporating attention-based neural networks. This is especially relevant in medical imaging, where the integration of Artificial Intelligence assists with various imaging tasks, including classification, segmentation, and detection. Deep learning is revolutionizing medical research and playing a significant role in advancing personalized clinical treatment. However, the lack of interpretability in these models presents a significant obstacle to their adoption in clinical practice. There is therefore a growing need for a comprehensive understanding of artificial intelligence systems and their internal mechanisms, capabilities, and limitations, which is the focus of the field of explainable AI. This study proposes a novel attention-based explainable deep learning framework for medical image classification tasks, including Covid-19, breast cancer (BreakHis), lung cancer (LC2500), and retinal optical coherence tomography (OCT). The proposed framework recorded overall accuracies of 98% (Covid-19 Radiography), 95% (BreakHis), 99.8% (LC2500), and 95% (OCT). For visual analysis of the outcomes, we employ LIME, SHAP, and ELI-5 to interpret the achieved results. The study's primary goal is to bridge the gap between the high performance achieved by attention-based models and the necessity for transparency and interpretability in medical image diagnostics. © 2023 IEEE. en_US
dc.identifier.doi 10.1109/ISAS60782.2023.10391289
dc.identifier.isbn 9798350383065
dc.identifier.scopus 2-s2.0-85184806683 en_US
dc.identifier.scopusquality N/A en_US
dc.identifier.uri https://doi.org/10.1109/ISAS60782.2023.10391289
dc.identifier.uri https://hdl.handle.net/11616/92247
dc.indekslendigikaynak Scopus en_US
dc.language.iso en en_US
dc.publisher Institute of Electrical and Electronics Engineers Inc. en_US
dc.relation.ispartof ISAS 2023 - 7th International Symposium on Innovative Approaches in Smart Technologies, Proceedings en_US
dc.relation.publicationcategory Conference Item - International - Institutional Faculty Member en_US
dc.rights info:eu-repo/semantics/closedAccess en_US
dc.subject Attention Mechanism en_US
dc.subject Deep Learning en_US
dc.subject Explainable AI (XAI) en_US
dc.subject Medical Imaging en_US
dc.title A Novel Attention-based Explainable Deep Learning Framework Towards Medical Image Classification en_US
dc.type Conference Object en_US