Abstract:
Advances in deep learning have enabled object detection methods to reach high levels of accuracy and speed. However, when these models operate in adverse weather conditions, their performance degrades significantly; in the presence of haze in particular, they fail to accurately identify and locate objects. Haze occurs naturally when tiny water droplets are suspended in the atmosphere, and a similar condition can be caused by particles such as dust and smoke. Because these atmospheric particles scatter and absorb light, the images fed into object detection algorithms suffer from low visibility and distortion. Traditional systems try to mitigate this issue with image enhancement techniques, but these add computational overhead and reduce the performance of real-time object detection systems. This research proposes a novel approach for generating hazy images to augment object detection datasets by extracting the lighting and depth information of each image, which effectively replicates real-world hazy scenarios and consequently improves object detection in such environments. The approach is not restricted to any particular object detection dataset, as it does not rely on depth metadata, allowing it to be applied to a wide range of datasets and, therefore, to applications such as surveillance, autonomous driving, and other outdoor computer vision tasks. A Faster R-CNN model trained on a synthetically hazed version of PASCAL VOC generated with the proposed approach obtained 58.6% mAP on the RTTS dataset, a 10.15% improvement over training with the original PASCAL VOC 2012 dataset.
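For context, haze synthesis from depth and lighting information is commonly formulated with the atmospheric scattering model I(x) = J(x) t(x) + A (1 - t(x)), where t(x) = exp(-beta d(x)). The Python sketch below illustrates that formulation under stated assumptions: the function name, the fixed scalar airlight A, and the use of a relative monocular depth estimate are illustrative choices, not necessarily the exact procedure used in this work.

import numpy as np

def add_synthetic_haze(image, depth, beta=1.0, atmospheric_light=0.9):
    # Illustrative sketch of the standard atmospheric scattering model:
    #   I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x))
    # image: float32 RGB array in [0, 1], shape (H, W, 3)
    # depth: float32 relative depth map in [0, 1], shape (H, W)
    #        (e.g. estimated from the image itself, since the target
    #        datasets carry no depth metadata)
    # beta:  scattering coefficient controlling haze density (assumed)
    # atmospheric_light: global airlight A approximating scene lighting (assumed)
    transmission = np.exp(-beta * depth)[..., None]  # t(x), broadcast over channels
    hazy = image * transmission + atmospheric_light * (1.0 - transmission)
    return np.clip(hazy, 0.0, 1.0).astype(np.float32)

Varying beta and the airlight per image yields haze of different densities and tints, which is how such a model can diversify an augmented training set.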