
Browse by Author "Pinar, Abdulvahap"

Now showing 1 - 3 of 3
  • Item
    Explainable artificial intelligence in medical research: A synopsis for clinical practitioners—Comprehensive XAI methodologies
    (Elsevier, 2025) Yagin, Fatma Hilal; Pinar, Abdulvahap
    Enhancing the interpretability and transparency of AI models for scientific and medical research is the goal of Explainable Artificial Intelligence (XAI). AI is increasingly used in clinical decision support systems, so it is important to understand how these systems function and produce their outcomes. In tasks such as finding complicated data patterns, tracking treatment progress, and diagnosing disease, XAI helps AI models provide more dependable and accurate predictions. By enhancing the clarity of AI-generated information, XAI facilitates improved patient care and strengthens decision-making in medical settings. It is therefore important to use XAI in medical research to increase reliability and support more accurate decisions. A structure that produces findings without revealing its reasoning is known as a “black box.” XAI overcomes the “black box” problem by offering transparency, so doctors can understand the decisions made by AI. Medical doctors can make safer and more informed decisions by considering not only the results but also the processes that lead to them. Given that the goal is to provide more accurate predictions together with an explanation of their rationale, XAI is a vital tool for both medical research and patient care procedures. This section covers the problems and solutions related to the interpretability of AI and machine learning techniques, and focuses on the use of XAI algorithms to assist doctors in medical fields. © 2025 Elsevier Inc. All rights reserved.
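As a side note, the contrast the abstract draws between black-box predictions and transparent ones can be illustrated with a minimal glass-box sketch: in a linear model, a single patient's prediction decomposes exactly into additive per-feature contributions, which is the kind of local explanation XAI aims to recover for more complex models. The feature names and data below are synthetic placeholders, not from the chapter.

```python
# Minimal "glass-box" local explanation: a logistic regression's
# log-odds decompose into per-feature contributions for one patient.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["age", "glucose", "bmi"]  # hypothetical clinical variables
X = rng.normal(size=(200, 3))
y = (X @ np.array([0.8, 1.5, -0.5])
     + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)

# Each feature's additive contribution to the log-odds for patient 0.
patient = X[0]
contribs = clf.coef_[0] * patient
for name, c in zip(features, contribs):
    print(f"{name}: {c:+.3f}")
print(f"intercept: {clf.intercept_[0]:+.3f}")
```

The sum of the contributions plus the intercept reproduces the model's decision score exactly, which is what makes the model a glass box rather than a black box.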
  • Item
    Explainable Artificial Intelligence Paves the Way in Precision Diagnostics and Biomarker Discovery for the Subclass of Diabetic Retinopathy in Type 2 Diabetics
    (Mdpi, 2023) Yagin, Fatma Hilal; Yasar, Seyma; Gormez, Yasin; Yagin, Burak; Pinar, Abdulvahap; Alkhateeb, Abedalrhman; Ardigo, Luca Paolo
    Diabetic retinopathy (DR), a common ocular microvascular complication of diabetes, contributes significantly to diabetes-related vision loss. This study addresses the imperative need for early diagnosis of DR and precise treatment strategies based on the explainable artificial intelligence (XAI) framework. The study integrated clinical, biochemical, and metabolomic biomarkers associated with the following classes: non-DR (NDR), non-proliferative diabetic retinopathy (NPDR), and proliferative diabetic retinopathy (PDR) in type 2 diabetes (T2D) patients. To create machine learning (ML) models, 10% of the data was divided into validation sets and 90% into discovery sets. The validation dataset was used for hyperparameter optimization and feature selection stages, while the discovery dataset was used to measure the performance of the models. A 10-fold cross-validation technique was used to evaluate the performance of ML models. Biomarker discovery was performed using minimum redundancy maximum relevance (mRMR), Boruta, and explainable boosting machine (EBM). The proposed predictive framework compares the results of eXtreme Gradient Boosting (XGBoost), natural gradient boosting for probabilistic prediction (NGBoost), and EBM models in determining the DR subclass. The hyperparameters of the models were optimized using Bayesian optimization. Combining EBM feature selection with XGBoost, the optimal model achieved (91.25 +/- 1.88)% accuracy, (89.33 +/- 1.80)% precision, (91.24 +/- 1.67)% recall, (89.37 +/- 1.52)% F1-score, and (97.00 +/- 0.25)% area under the ROC curve (AUROC). According to the EBM explanation, the six most important biomarkers in determining the course of DR were tryptophan (Trp), phosphatidylcholine diacyl C42:2 (PC.aa.C42.2), butyrylcarnitine (C4), tyrosine (Tyr), hexadecanoyl carnitine (C16), and total dimethylarginine (DMA). The identified biomarkers may provide a better understanding of the progression of DR, paving the way for more precise and cost-effective diagnostic and treatment strategies.
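The evaluation protocol this abstract describes (a 10% split for feature selection and tuning, then 10-fold cross-validation on the remaining 90%) can be sketched with generic scikit-learn stand-ins. This is only an illustration of the protocol, not the paper's method: SelectKBest substitutes for mRMR/Boruta/EBM feature selection, GradientBoostingClassifier substitutes for XGBoost, and the three-class data is synthetic rather than the metabolomic biomarkers used in the study.

```python
# Sketch of the split-then-cross-validate protocol described above,
# with scikit-learn stand-ins and fully synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic 3-class data standing in for NDR / NPDR / PDR biomarkers.
X, y = make_classification(n_samples=300, n_features=30, n_informative=10,
                           n_classes=3, random_state=42)

# 10% validation set (feature selection / tuning), 90% discovery set.
X_disc, X_val, y_disc, y_val = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=42)

# Feature selection fitted only on the validation split, so the
# discovery-set evaluation is not contaminated by selection bias.
selector = SelectKBest(f_classif, k=6).fit(X_val, y_val)

# 10-fold cross-validated accuracy on the discovery set.
model = GradientBoostingClassifier(n_estimators=50, random_state=42)
scores = cross_val_score(model, selector.transform(X_disc), y_disc, cv=10)
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Fitting the selector on a split disjoint from the evaluation data is the key point of the protocol; reusing the same samples for both steps would inflate the cross-validated scores.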
  • Item
    A proposed tree-based explainable artificial intelligence approach for the prediction of angina pectoris
    (Nature Portfolio, 2023) Guldogan, Emek; Yagin, Fatma Hilal; Pinar, Abdulvahap; Colak, Cemil; Kadry, Seifedine; Kim, Jungeun
    Cardiovascular diseases (CVDs) are a serious public health issue responsible for numerous fatalities and impairments. Ischemic heart disease (IHD) is one of the most prevalent and deadliest types of CVD and accounts for 45% of all CVD-related fatalities. IHD occurs when the blood supply to the heart is reduced by narrowed or blocked arteries, which causes angina pectoris (AP) chest pain. AP is a common symptom of IHD and can indicate a higher risk of heart attack or sudden cardiac death; it is therefore important to diagnose and treat AP promptly and effectively. To forecast AP in women, we constructed a novel artificial intelligence (AI) method employing a tree-based algorithm known as the Explainable Boosting Machine (EBM). EBM is a machine learning (ML) technique that combines the interpretability of linear models with the flexibility and accuracy of gradient boosting. We applied EBM to a dataset of 200 female patients, 100 with AP and 100 without, and extracted the most relevant features for AP prediction. We then evaluated the performance of EBM against other AI methods: Logistic Regression (LR), Categorical Boosting (CatBoost), eXtreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), and Light Gradient Boosting Machine (LightGBM). We found that EBM was the most accurate and best-balanced technique for forecasting AP, with accuracy (0.925) and Youden's index (0.960). We also examined the global and local explanations provided by EBM to better understand how each feature affected the prediction and how each patient was classified. Our research shows that EBM is a useful AI method for predicting AP in women and for identifying the associated risk factors, which can help clinicians provide personalized, evidence-based care for female patients with AP.
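The two headline metrics in this abstract, accuracy and Youden's index (J = sensitivity + specificity - 1), are easy to compute from a binary confusion matrix. The counts below are illustrative only, chosen for a balanced 100/100 cohort like the one described; they are not the paper's actual confusion matrix.

```python
# Worked example of accuracy and Youden's J from confusion-matrix counts.
def youden_index(tp, fn, tn, fp):
    """Youden's J statistic: sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0

def accuracy(tp, fn, tn, fp):
    """Fraction of all cases classified correctly."""
    return (tp + tn) / (tp + fn + tn + fp)

# Hypothetical counts for 100 AP and 100 non-AP patients.
tp, fn, tn, fp = 95, 5, 90, 10
print(f"accuracy = {accuracy(tp, fn, tn, fp):.3f}")    # 0.925
print(f"Youden's J = {youden_index(tp, fn, tn, fp):.3f}")  # 0.850
```

Unlike accuracy, Youden's J weighs sensitivity and specificity equally regardless of class balance, which is why the abstract cites both when calling EBM the best-balanced classifier.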


This site is protected by the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


İnönü Üniversitesi, Battalgazi, Malatya, TÜRKİYE
If you notice any errors in the content, please let us know

DSpace 7.6.1, Powered by İdeal DSpace

DSpace yazılımı telif hakkı © 2002-2026 LYRASIS
