Abstract:
Assessing medical reports, particularly for lung cancer, is a strenuous and time-consuming process for medical professionals. As the number of patients undergoing medical imaging and detailed documentation grows, so does the number of missed diagnoses and delayed treatments. Current AI-based medical report reading solutions suffer from poor interpretability or a lack of stability when handling multiple types of data inputs, including text and images. This project constructs a full-scale medical report reading system that combines natural language processing (NLP) and convolutional neural networks (CNNs) for rapid lung cancer report analysis, with the aim of improving both the accuracy and the interpretability of the results.
The proposed system will employ a hybrid of NLP for analyzing clinical notes and a CNN for analyzing medical images such as CT scans. It handles large collections of patient history records and pre-processes the input data using text standardization and image augmentation. The hybrid model is trained under a supervised learning framework and is designed for high diagnostic capability while remaining easy to interpret through Explainable AI (XAI) features. By including both text and vision data, this multimodal approach broadens the information available to the model for lung cancer diagnosis.
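The abstract does not include implementation details, so the following is only a minimal NumPy sketch of the late-fusion idea it describes: a toy text branch (mean-pooled embeddings of note tokens), a toy image branch (one convolution plus global average pooling over a stand-in CT slice), and a sigmoid head over the concatenated features. All function names, shapes, and weights here are hypothetical illustrations, not the project's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def text_branch(token_ids, emb, W):
    """Toy NLP branch: mean-pool embeddings of clinical-note tokens."""
    pooled = emb[token_ids].mean(axis=0)          # (d_text,)
    return np.maximum(W @ pooled, 0.0)            # ReLU feature vector

def image_branch(img, kernel, W):
    """Toy CNN branch: one valid-mode convolution + global average pool."""
    kh, kw = kernel.shape
    h, w = img.shape
    fmap = np.array([[np.sum(img[i:i + kh, j:j + kw] * kernel)
                      for j in range(w - kw + 1)]
                     for i in range(h - kh + 1)])
    pooled = np.array([fmap.mean()])              # global average pooling
    return np.maximum(W @ pooled, 0.0)

def fuse_and_classify(t_feat, i_feat, W_out, b_out):
    """Late fusion: concatenate modality features, then a sigmoid head."""
    z = W_out @ np.concatenate([t_feat, i_feat]) + b_out
    return 1.0 / (1.0 + np.exp(-z))               # malignancy probability

# Hypothetical shapes: 8-word vocabulary, 4-dim embeddings, 16x16 scan.
emb = rng.normal(size=(8, 4))
W_t = rng.normal(size=(3, 4))
W_i = rng.normal(size=(3, 1))
W_o = rng.normal(size=(1, 6))
b_o = np.zeros(1)

note = np.array([1, 5, 2])          # token ids of a clinical note
scan = rng.normal(size=(16, 16))    # stand-in for a CT slice

p = fuse_and_classify(text_branch(note, emb, W_t),
                      image_branch(scan, rng.normal(size=(3, 3)), W_i),
                      W_o, b_o)
print(float(p[0]))
```

In a real system each branch would be a trained network (e.g. a transformer encoder and a deep CNN) and the fusion head would be learned jointly under the supervised objective; the sketch only shows how the two modalities' features meet before classification.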