A Novel Attention-based Explainable Deep Learning Framework Towards Medical Image Classification
dc.authorscopusid | 58668454900 | |
dc.authorscopusid | 56389888600 | |
dc.authorscopusid | 57206891969 | |
dc.authorscopusid | 58882404100 | |
dc.authorscopusid | 56727581600 | |
dc.authorscopusid | 57189003551 | |
dc.contributor.author | Muoka G.W. | |
dc.contributor.author | Yi D. | |
dc.contributor.author | Ukwuoma C.C. | |
dc.contributor.author | Martin M.D. | |
dc.contributor.author | Aydin A.A. | |
dc.contributor.author | Al-Antari M.A. | |
dc.date.accessioned | 2024-08-04T20:03:59Z | |
dc.date.available | 2024-08-04T20:03:59Z | |
dc.date.issued | 2023 | |
dc.department | İnönü Üniversitesi | en_US |
dc.description | 7th International Symposium on Innovative Approaches in Smart Technologies, ISAS 2023 -- 23 November 2023 through 25 November 2023 -- 196776 | en_US |
dc.description.abstract | Deep learning applications for medical image classification have shown remarkable promise, particularly those incorporating attention-based neural networks. This is especially relevant in medical imaging, where the integration of Artificial Intelligence assists with various imaging tasks, including classification, segmentation, and detection. Deep learning is revolutionizing medical research and playing a significant role in advancing personalized clinical treatment. However, the lack of interpretability in these models presents a significant obstacle to their adoption in clinical practice. Therefore, there is a growing need for a comprehensive understanding of artificial intelligence systems and their internal mechanisms, capabilities, and limitations, which is the focus of the field of explainable AI. This study proposes a novel attention-based explainable deep learning framework for medical image classification tasks, including Covid-19, breast cancer (BreakHis), lung cancer (LC25000), and retinal optical coherence tomography (OCT). The proposed framework recorded overall accuracies of 98% (Covid-19 Radiography), 95% (BreakHis), 99.8% (LC25000), and 95% (OCT). For visual analysis of the outcomes, we employ LIME, SHAP, and ELI-5 to interpret the achieved results. The study's primary goal is to bridge the gap between the high performance achieved by attention-based models and the necessity for transparency and interpretability in medical image diagnostics. © 2023 IEEE. | en_US |
dc.identifier.doi | 10.1109/ISAS60782.2023.10391289 | |
dc.identifier.isbn | 9798350383065 | |
dc.identifier.scopus | 2-s2.0-85184806683 | en_US |
dc.identifier.scopusquality | N/A | en_US |
dc.identifier.uri | https://doi.org/10.1109/ISAS60782.2023.10391289 | |
dc.identifier.uri | https://hdl.handle.net/11616/92247 | |
dc.indekslendigikaynak | Scopus | en_US |
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
dc.relation.ispartof | ISAS 2023 - 7th International Symposium on Innovative Approaches in Smart Technologies, Proceedings | en_US |
dc.relation.publicationcategory | Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Attention Mechanism | en_US |
dc.subject | Deep Learning | en_US |
dc.subject | Explainable AI (XAI) | en_US |
dc.subject | Medical Imaging | en_US |
dc.title | A Novel Attention-based Explainable Deep Learning Framework Towards Medical Image Classification | en_US |
dc.type | Conference Object | en_US |