Abstract:
The healthcare industry requires precise, context-aware medication recommendations to support clinical decision-making and patient self-management. Existing AI systems often rely on generic models that omit critical specifics, such as exact dosages, which undermines clinical trust. To address this, we introduce CliniGuide, a framework that integrates Large Language Models (LLMs) with Knowledge Distillation (KD) and Retrieval-Augmented Generation (RAG).
CliniGuide employs a two-phase architecture to bridge the gap between raw AI outputs and clinical best practices. First, a teacher LLM generates distilled data to fine-tune a smaller, more efficient student model, preserving critical knowledge while reducing computational overhead. Second, RAG retrieves domain-specific information from a vector store and combines it with the student model's reasoning via Chain-of-Thought (CoT) prompt engineering. This ensures that recommendations, including brand names and dosages, are specific, transparent, and aligned with clinical requirements.
Preliminary evaluations indicate that CliniGuide improves the accuracy and clarity of medication suggestions. Quantitative metrics, including BLEU (0.0196), ROUGE-1 (0.2674), ROUGE-2 (0.0519), and ROUGE-L (0.1348), suggest the model captures key concepts, though considerable room remains for textual refinement. Ultimately, the synergy of KD, RAG, and CoT reasoning positions CliniGuide as a meaningful step toward trustworthy, contextually grounded, patient-specific AI recommendation systems.