Abstract:
In recent years, work on Driver Assist Systems (DAS) and autonomous cars has focused on developing
systems based on computer vision (CV) with the help of deep learning (DL). However, the lack of
availability of these new technologies in older and cheaper cars has made them
inaccessible to a vast majority of the public. At the same time, smartphones have gained considerable
technical capabilities and play an important role in our lives. Computer vision
and deep learning, coupled with smartphones, can help drivers and reduce accidents. This paper is
devoted to the development of a real-time road sign detection system harnessing the power of the You
Only Look Once (YOLOv8) model combined with a Convolutional Neural Network (CNN) as a
hybrid model. Furthermore, a DAS mobile application will be developed to utilize the hybrid
model for real-time road sign detection and to assist drivers.
Grounded in the literature review, this research aims at designing and developing a robust
hybrid model for real-time road sign detection using YOLOv8 and a CNN. The two models will be
trained on datasets created specifically from the road signs of Sri Lanka. The trained
model will be used to detect road signs in the real-time video stream captured by the
DriverPAL React Native DAS mobile application. Existing road sign detection and classification
models must trade off simplicity and adaptability against detection time and performance in order
to be usable in the real world. The YOLOv8 model achieved an mAP50 of 0.866 and an mAP50-95
of 0.704, which are promising results that will be further improved in upcoming iterations
of the project. The classification model, a CNN, was able to achieve 99% accuracy.