Listing by author "Celik, Ozer"
Now showing 1 - 10 of 10
Item: AI-powered segmentation of bifid mandibular canals using CBCT (BMC, 2025)
Gumussoy, Ismail; Demirezer, Kardelen; Duman, Suayip Burak; Haylaz, Emre; Bayrakdar, Ibrahim Sevki; Celik, Ozer; Syed, Ali Zakir
Objective: Accurate segmentation of the mandibular and bifid canals is crucial for safe implant placement in dental implant planning, third molar extractions, and other surgical interventions. The objective of this study is to develop and validate an innovative artificial intelligence tool for the efficient and accurate segmentation of the mandibular and bifid canals on CBCT.
Materials and methods: CBCT data were screened to identify patients with clearly visible bifid canal variations, and their DICOM files were extracted. These DICOM files were then imported into the 3D Slicer open-source software, where the bifid and mandibular canals were annotated. The annotated data, along with the raw DICOM files, were processed with the nnU-Net v2 training model by the CranioCatch AI software team.
Results: 69 anonymized CBCT volumes in DICOM format were converted to NIfTI file format. The method, utilizing nnU-Net v2, accurately predicted the voxels associated with the mandibular canal, achieving an intersection of over 50% in nearly all samples. The accuracy, Dice score, precision, and recall for the mandibular canal/bifid canal were 0.99/0.99, 0.82/0.46, 0.85/0.70, and 0.80/0.42, respectively.
Conclusions: Although bifid canal segmentation did not meet the expected level of success, the findings indicate that the proposed method is promising and has the potential to be used as a supplementary tool for mandibular canal segmentation.
Because accurate evaluation of the mandibular canal before surgery is so important, artificial intelligence could reduce the burden on practitioners by automating the complicated and time-consuming process of tracing and segmenting this structure.
Clinical relevance: Being able to distinguish bifid canals with artificial intelligence will help prevent neurovascular complications that may occur during or after surgery.

Item: Automated 3D segmentation of the hyoid bone in CBCT using nnU-Net v2: a retrospective study on model performance and potential clinical utility (BMC, 2025)
Gumussoy, Ismail; Haylaz, Emre; Duman, Suayip Burak; Kalabalik, Fahrettin; Say, Seyda; Celik, Ozer; Bayrakdar, Ibrahim Sevki
Objective: This study aimed to identify the hyoid bone (HB) on cone beam computed tomography (CBCT) images using an nnU-Net-based artificial intelligence (AI) model and to assess the model's success in automatic segmentation.
Methods: CBCT images of 190 patients were randomly selected. The raw data were converted to DICOM format and transferred to the 3D Slicer imaging software (version 4.10.2; MIT, Cambridge, MA, USA), where the HB was labeled manually. The dataset was divided into training, validation, and test sets at a ratio of 8:1:1. The nnU-Net v2 architecture was used to process the training and test datasets, generating the algorithm weight factors. A confusion matrix was employed to assess the model's accuracy and performance, and the F1-score, Dice coefficient (DC), 95% Hausdorff distance (95% HD), and Intersection over Union (IoU) metrics were calculated.
Results: The model's performance metrics were as follows: DC = 0.9434, IoU = 0.8941, F1-score = 0.9446, and 95% HD = 1.9998. The receiver operating characteristic (ROC) curve yielded an AUC of 0.98.
Conclusion: The nnU-Net v2 model achieved high precision and accuracy in HB segmentation on CBCT images.
Automatic segmentation of the HB can enhance clinicians' speed and accuracy in diagnosing and treating various clinical conditions.
Clinical trial number: Not applicable.

Item: Automated Mesiodens Detection with Deep-Learning-Based System Using Cone-Beam Computed Tomography Images (Wiley-Hindawi, 2023)
Syed, Ali Zakir; Ozen, Duygu Celik; Abdelkarim, Ahmed Z.; Duman, Suayip Burak; Bayrakdar, Ibrahim Sevki; Duman, Sacide; Celik, Ozer
The detection of mesiodens supernumerary teeth is crucial for appropriate diagnosis and treatment. This study aimed to develop a convolutional neural network (CNN)-based model to automatically detect mesiodens on cone-beam computed tomography images. A dataset of 851 anonymized axial slices from the cone-beam images of 106 patients was used to train the artificial intelligence system for the detection and segmentation of mesiodens. The CNN model achieved high performance in mesiodens segmentation, with sensitivity, precision, and F1 scores of 1, 0.9072, and 0.9513, respectively. The area under the curve (AUC) was 0.9147, indicating the model's robustness. The proposed model shows promising potential for the automated detection of mesiodens, providing valuable assistance to dentists in accurate diagnosis.

Item: Automatic Feature Segmentation in Dental Periapical Radiographs (MDPI, 2022)
Ari, Tugba; Saglam, Hande; Oksuzoglu, Hasan; Kazan, Orhan; Bayrakdar, Ibrahim Sevki; Duman, Suayip Burak; Celik, Ozer
The large number of archived digital images makes it easy for radiology to provide data for Artificial Intelligence (AI) evaluation, and AI algorithms are increasingly applied to disease detection. The aim of this study is to perform a diagnostic evaluation of periapical radiographs with an AI model based on Convolutional Neural Networks (CNNs). The dataset includes 1169 adult periapical radiographs, which were labelled in the CranioCatch annotation software. Deep learning was performed using the U-Net model implemented with the PyTorch library.
The deep-learning-based AI models improved the success rate of segmenting carious lesions, crowns, dental pulp, dental fillings, periapical lesions, and root canal fillings in periapical images. Sensitivity, precision, and F1 scores were 0.82, 0.82, and 0.82 for carious lesions; 1, 1, and 1 for crowns; 0.97, 0.87, and 0.92 for dental pulp; 0.95, 0.95, and 0.95 for fillings; 0.92, 0.85, and 0.88 for periapical lesions; and 1, 0.96, and 0.98 for root canal fillings, respectively. The success of AI algorithms in evaluating periapical radiographs is encouraging and promising for their use as a clinical decision support system in routine clinical processes.

Item: Automatic maxillary sinus segmentation and pathology classification on cone-beam computed tomographic images using deep learning (BMC, 2024)
Altun, Oguzhan; Ozen, Duygu Celik; Duman, Suayip Burak; Dedeoglu, Numan; Bayrakdar, Ibrahim Sevki; Eser, Gozde; Celik, Ozer
Background: Automated segmentation of the maxillofacial complex could serve as an alternative to traditional segmentation methods and increase the effectiveness of virtual workflows. The use of deep learning (DL) systems to detect the maxillary sinus and its pathologies will both facilitate physicians' work and serve as a support mechanism before planned surgeries.
Objective: The aim was to use a modified You Only Look Once v5x (YOLOv5x) architecture with transfer learning capabilities to segment both the maxillary sinuses and maxillary sinus diseases on cone-beam computed tomographic (CBCT) images.
Methods: The dataset consists of anonymized CBCT images of 307 patients (173 women and 134 men) obtained from the radiology archive of the Department of Oral and Maxillofacial Radiology.
Bilateral maxillary sinus CBCT scans were used to identify mucous retention cysts (MRC), mucosal thickenings (MT), total and partial opacifications, and healthy maxillary sinuses without any radiological features.
Results: Recall, precision, and F1 scores were 1, 0.985, and 0.992 for total maxillary sinus segmentation; 1, 0.931, and 0.964 for healthy maxillary sinus segmentation; 0.858, 0.923, and 0.889 for MT segmentation; 0.977, 0.877, and 0.924 for MRC segmentation; and 1, 0.942, and 0.970 for sinusitis segmentation, respectively.
Conclusion: This study demonstrates that the maxillary sinuses can be segmented and maxillary sinus diseases can be accurately detected using the AI model.

Item: Automatic Segmentation of the Infraorbital Canal in CBCT Images: Anatomical Structure Recognition Using Artificial Intelligence (MDPI, 2025)
Gumussoy, Ismail; Haylaz, Emre; Duman, Suayip Burak; Kalabalik, Fahrettin; Eren, Muhammet Can; Say, Seyda; Celik, Ozer
Background/Objectives: The infraorbital canal (IOC) is a critical anatomical structure that passes through the anterior surface of the maxilla and opens at the infraorbital foramen, containing the infraorbital nerve, artery, and vein. Accurate localization of this canal in maxillofacial, dental implant, and orbital surgeries is of great importance for preventing nerve damage, reducing complications, and enabling successful surgical planning. The aim of this study is to perform automatic segmentation of the infraorbital canal on cone-beam computed tomography (CBCT) images using an artificial intelligence (AI)-based model.
Methods: A total of 220 CBCT images of the IOC from 110 patients were labeled using the 3D Slicer software (version 4.10.2; MIT, Cambridge, MA, USA). The dataset was split into training, validation, and test sets at a ratio of 8:1:1. The nnU-Net v2 architecture was applied to the training and test datasets to predict and generate appropriate algorithm weight factors.
A confusion matrix was used to assess the accuracy and performance of the model, and the Dice coefficient (DC), Intersection over Union (IoU), F1-score, and 95% Hausdorff distance (95% HD) metrics were calculated on the test set.
Results: The DC, IoU, F1-score, and 95% HD values were found to be 0.7792, 0.6402, 0.787, and 0.7661, respectively. The receiver operating characteristic (ROC) curve drawn from these data gave an AUC of 0.91.
Conclusions: Accurate identification and preservation of the IOC during surgical procedures are critically important for maintaining a patient's functional and sensory integrity. The findings of this study demonstrate that the IOC can be detected with high precision and accuracy using an AI-based automatic segmentation method on CBCT images. This approach has significant potential to reduce surgical risks and to enhance the safety of critical anatomical structures.

Item: Automatic Segmentation of the Nasolacrimal Canal: Application of the nnU-Net v2 Model in CBCT Imaging (MDPI, 2025)
Haylaz, Emre; Gumussoy, Ismail; Duman, Suayip Burak; Kalabalik, Fahrettin; Eren, Muhammet Can; Demirsoy, Mustafa Sami; Celik, Ozer
Background/Objectives: Segmenting anatomical structures with artificial intelligence poses various challenges owing to the differing structural features of each region or tissue. The aim of this study was to detect the nasolacrimal canal (NLC) on cone-beam computed tomography (CBCT) images using the nnU-Net v2 convolutional neural network (CNN) model and to evaluate the model's performance in automatic segmentation.
Methods: CBCT images of 100 patients were randomly selected from the data archive. The raw data were transferred in DICOM format to the 3D Slicer imaging software (version 4.10.2; MIT, Massachusetts, USA), where the NLC was labeled manually using the polygonal method.
The dataset was split into training, validation, and test sets at a ratio of 8:1:1. The nnU-Net v2 architecture was applied to the training and test datasets to predict and generate appropriate algorithm weight factors. A confusion matrix was used to assess the accuracy and performance of the model, and the Dice coefficient (DC), Intersection over Union (IoU), F1-score, and 95% Hausdorff distance (95% HD) metrics were calculated.
Results: On the test set, the DC, IoU, F1-score, and 95% HD values were found to be 0.8465, 0.7341, 0.8480, and 0.9460, respectively. The receiver operating characteristic (ROC) curve drawn from these data gave an AUC of 0.96.
Conclusions: These results show that the proposed nnU-Net v2 model achieves NLC segmentation on CBCT images with high precision and accuracy. Automated segmentation of the NLC may assist clinicians in choosing the surgical technique for removing lesions, especially those affecting the anterior wall of the maxillary sinus.

Item: Classification of temporomandibular joint osteoarthritis on cone beam computed tomography images using artificial intelligence system (Wiley, 2023)
Eser, Gozde; Duman, Suayip Burak; Bayrakdar, Ibrahim Sevki; Celik, Ozer
Background: The use of artificial intelligence has many advantages, especially in the field of oral and maxillofacial radiology. Early diagnosis of temporomandibular joint osteoarthritis by artificial intelligence may improve prognosis.
Objective: The aim of this study is to classify temporomandibular joint (TMJ) osteoarthritis and to perform TMJ segmentation on cone beam computed tomography (CBCT) sagittal images with artificial intelligence.
Results: The sensitivity, precision, and F1 scores of the model for TMJ osteoarthritis classification are 1, 0.7678, and 0.8686, respectively. The accuracy value for classification is 0.7678.
The prediction accuracies of the classification model are 88% for healthy joints, 70% for flattened joints, 95% for joints with erosion, and 86% for joints with osteophytes. The sensitivity, precision, and F1 score of the YOLOv5 model for TMJ segmentation are 1, 0.9953, and 0.9976, respectively; its AUC is 0.9723 and its accuracy was found to be 0.9953.
Conclusion: With its successful results in TMJ segmentation and osteoarthritis classification, the artificial intelligence model applied in this study can serve as a support method that saves physicians time and effort in diagnosing the disease.

Item: Convolutional Neural Network Performance for Sella Turcica Segmentation and Classification Using CBCT Images (MDPI, 2022)
Duman, Suayip Burak; Syed, Ali Z.; Ozen, Duygu Celik; Bayrakdar, Ibrahim Sevki; Salehi, Hassan S.; Abdelkarim, Ahmed; Celik, Ozer
The present study aims to validate the diagnostic performance and evaluate the reliability of an artificial intelligence system based on the convolutional neural network method for the morphological classification of the sella turcica on CBCT (cone-beam computed tomography) images. In this retrospective study, sella segmentation and classification models (CranioCatch, Eskisehir, Turkiye) were applied to sagittal slices of CBCT images, using a U-Net model implemented in PyTorch and the GoogLeNet Inception V3 algorithm implemented in TensorFlow. The AI models achieved successful results for sella turcica segmentation of CBCT images based on the deep learning models. The sensitivity, precision, and F-measure values for segmentation of the sella turcica in sagittal CBCT slices were 1.0, 1.0, and 1.0, respectively.
The sensitivity, precision, accuracy, and F1-score were 1.0, 0.95, 0.98, and 0.84, respectively, for the flattened sella turcica classification; 0.95, 0.83, 0.92, and 0.88 for the oval classification; and 0.75, 0.94, 0.90, and 0.83 for the round classification. Detecting anatomical landmarks of orthodontic importance, such as the sella point, with artificial intelligence algorithms is predicted to save orthodontists time and to facilitate diagnosis.

Item: Detecting the presence of taurodont teeth on panoramic radiographs using a deep learning-based convolutional neural network algorithm (Springer, 2023)
Duman, Sacide; Yilmaz, Emir Faruk; Eser, Gozde; Celik, Ozer; Bayrakdar, Ibrahim Sevki; Bilgir, Elif; Ferreira Costa, Andre Luiz
Objectives: Artificial intelligence (AI) techniques such as convolutional neural networks (CNNs) are a promising breakthrough that can help clinicians analyze medical imaging, diagnose taurodontism, and make therapeutic decisions. The purpose of this study is to develop and evaluate a CNN-based AI model for diagnosing teeth with taurodontism on panoramic radiography.
Methods: 434 anonymized, mixed-size panoramic radiographs of patients over the age of 13 were used to develop automatic taurodont tooth segmentation models using a U-Net model implemented in PyTorch. The dataset was split into training, validation, and test groups of both normal and masked images. Data augmentation was applied to the training and validation images using vertical flips, horizontal flips, and combined flips. A confusion matrix was used to determine model performance.
Results: Among the 43 test images with 126 labels, there were 109 true positives, 29 false positives, and 17 false negatives. The sensitivity, precision, and F1-score for taurodont tooth segmentation were 0.8650, 0.7898, and 0.8257, respectively.
Conclusions: The CNN's taurodontism predictions were nearly identical to the labeled data, and the system detected taurodont teeth at close to expert-level performance.
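Most of the studies listed above evaluate segmentation with confusion-matrix metrics: accuracy, precision, recall (sensitivity), F1, Dice, and IoU. As an illustrative sketch only, not the authors' code, all of these can be derived from the TP/FP/FN/TN counts of flattened binary masks:

```python
def seg_metrics(pred, gt):
    """Confusion-matrix metrics for flattened binary (0/1) masks."""
    tp = sum(p == 1 and g == 1 for p, g in zip(pred, gt))  # true positives
    fp = sum(p == 1 and g == 0 for p, g in zip(pred, gt))  # false positives
    fn = sum(p == 0 and g == 1 for p, g in zip(pred, gt))  # false negatives
    tn = sum(p == 0 and g == 0 for p, g in zip(pred, gt))  # true negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # a.k.a. sensitivity
    return {
        "accuracy": (tp + tn) / len(pred),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "iou": tp / (tp + fp + fn),
    }

# Toy 8-voxel masks: 3 TP, 1 FP, 1 FN, 3 TN.
pred = [1, 1, 1, 0, 0, 1, 0, 0]
gt   = [1, 1, 0, 0, 1, 1, 0, 0]
m = seg_metrics(pred, gt)
```

Note that for binary masks the Dice coefficient and the F1-score are algebraically identical, which is why the abstracts above report them interchangeably.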
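Several of the nnU-Net studies also report the 95% Hausdorff distance (95% HD), a boundary-disagreement measure that discards the worst 5% of point-to-set distances. A minimal sketch on 2D point sets (the studies work with 3D surface voxels and use optimized toolkits; this is only to show the definition):

```python
import math

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between point sets a and b."""
    def directed(src, dst):
        # For each point in src, distance to its nearest neighbour in dst.
        return [min(math.dist(p, q) for q in dst) for p in src]
    d = sorted(directed(a, b) + directed(b, a))
    # Index of the 95th percentile (simple nearest-rank method).
    k = max(0, math.ceil(0.95 * len(d)) - 1)
    return d[k]
```

A perfect segmentation gives 95% HD = 0, and larger values indicate larger boundary errors, so the hyoid bone study's 95% HD of 1.9998 and the nasolacrimal canal study's 0.9460 are small boundary deviations.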
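The nnU-Net studies above all divide their data into training, validation, and test sets at a ratio of 8:1:1. A hedged sketch of such a split (the function name and seed are illustrative, not taken from the papers):

```python
import random

def split_811(items, seed=42):
    """Shuffle items reproducibly and split them train/val/test at 8:1:1."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed keeps the split reproducible
    n_train = int(0.8 * len(items))
    n_val = int(0.1 * len(items))
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# With 190 patients, as in the hyoid bone study, this yields 152/19/19.
train, val, test = split_811(range(190))
```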
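The taurodont study augments its training and validation images with vertical flips, horizontal flips, and combined flips. On an image represented as a list of pixel rows, those three transforms reduce to simple reversals (a toy sketch, not the authors' pipeline, which operates on radiograph files):

```python
def vflip(img):
    """Vertical flip: reverse the order of the rows."""
    return img[::-1]

def hflip(img):
    """Horizontal flip: reverse each row."""
    return [row[::-1] for row in img]

img = [[1, 2],
       [3, 4]]
# Each labeled image contributes four variants to the augmented set.
augmented = [img, vflip(img), hflip(img), vflip(hflip(img))]
```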











