Evidence-bound clinical decision support with RAG

Date

2025

Journal Title

Journal ISSN

Volume Title

Publisher

Gazi Univ

Access Rights

info:eu-repo/semantics/openAccess

Abstract

Large language models are increasingly consulted for scientific and clinical questions, yet ungrounded answers still appear too often to trust them on their own. We built a retrieval-augmented assistant that keeps generation tied to a curated, versioned corpus and records every step from ingestion to answer. Documents are segmented with a practical, token-aware policy and encoded locally; vectors are stored with provenance so the system can cite or abstain. Queries are embedded, top-k passages are retrieved from a vector store, and a prompt asks the generator to respond only with supported statements or to decline. The components are intentionally swappable: the embedder runs on-premises for privacy, the store supports snapshots for repeatable experiments, and the generator (Gemma/Gemma2) is selected for efficient inference. Beyond the pipeline, we preregister an evaluation plan that measures retrieval quality, answer faithfulness, and coverage, with ablations on chunk size, overlap, and k. All code, defaults, and scripts are released so others can reproduce the setup, compare their own choices, and extend the system to new domains. The goal is clear: reduce hallucination by grounding answers in literature, keep costs and latency predictable on a single-GPU server, and make empirical evaluation routine rather than optional. Experimental evaluation confirmed these design claims: the proposed modular RAG achieved Recall@k = 0.86, F1 = 0.79, and Attribution Accuracy = 0.91, significantly outperforming both Classic RAG and LLM-only baselines (p < 0.05). These results validate the framework's reliability, grounding fidelity, and reproducibility for evidence-based clinical decision support.
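The retrieve-then-generate loop described above (embed the query, fetch top-k passages with provenance, answer only from supported evidence or abstain) can be sketched in a few dozen lines. This is a minimal illustration, not the paper's released code: the hashed bag-of-words embedder stands in for the on-premises encoder, `VectorStore` for the snapshot-capable store, and the `min_score` abstention threshold is an assumed, illustrative parameter.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy deterministic embedding: hash word tokens into a fixed-size,
    L2-normalized vector. A stand-in for a real sentence encoder."""
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    """Minimal in-memory store; each chunk keeps its source id (provenance)."""
    def __init__(self):
        self.records = []  # (vector, chunk_text, source_id)

    def add(self, chunk: str, source_id: str) -> None:
        self.records.append((embed(chunk), chunk, source_id))

    def top_k(self, query: str, k: int = 3):
        q = embed(query)
        scored = [(cosine(q, v), text, src) for v, text, src in self.records]
        scored.sort(key=lambda t: t[0], reverse=True)
        return scored[:k]

def answer(store: VectorStore, query: str, k: int = 3,
           min_score: float = 0.2) -> str:
    """Retrieve top-k passages; abstain when no passage clears the
    similarity threshold, otherwise build a grounded, citing prompt."""
    hits = [h for h in store.top_k(query, k) if h[0] >= min_score]
    if not hits:
        return "I cannot answer from the available evidence."
    context = "\n".join(f"[{src}] {text}" for _, text, src in hits)
    return ("Answer ONLY from the passages below and cite the [source] "
            f"tags; otherwise decline.\n{context}\nQuestion: {query}")
```

In the actual system the prompt string would be sent to the generator (e.g. Gemma), and the source tags allow each produced statement to be traced back to a corpus passage or refused outright.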

Description

Keywords

Retrieval-augmented generation, clinical decision support, hallucination mitigation, information retrieval, explainable AI, large language models

Source

Journal of Polytechnic-Politeknik Dergisi

WoS Q Value

Q4

Scopus Q Value

Volume

Issue

Citation