| dc.description.abstract |
Driver drowsiness is a major contributor to road accidents, especially during long or solitary journeys where alertness tends to decline. Most existing detection systems rely on single-modality input, such as visual or physiological cues, and issue only passive alerts without actively engaging the driver. Their dependence on internet connectivity further limits reliability in low-signal environments.
This research presents LucidRoute, a real-time driver fatigue detection system that combines advanced visual analysis with proactive, voice-based interventions. Drowsiness is identified from facial indicators, including Eye Aspect Ratio (EAR), head nod frequency, Mouth Aspect Ratio (MAR), and YOLOv8-based facial recognition. Upon detection, a conversational assistant powered by the Gemini API engages the driver in dialogue or suggests nearby rest stops based on geolocation data, all delivered hands-free.
LucidRoute's hybrid architecture combines on-device detection with cloud-based conversational logic and offline fallback mechanisms. Evaluated with a fine-tuned YOLOv8 model on real-time video input, the system achieved a visual detection accuracy of 94%. Prototype testing confirmed strong usability and effective intervention, positioning LucidRoute as a reliable step forward in context-aware driver support. |
en_US |