Abstract:
Estimating depth from a single RGB image is a challenging task with applications in fields such as autonomous driving, robotics, 3D modelling, and scene understanding. Although this field has seen a great deal of research in recent years, much of it producing outstanding results in clear daytime conditions, very little work has addressed monocular depth estimation in adverse weather. This gap, together with the inability of current state-of-the-art approaches to produce accurate and consistent results under conditions such as rain, snow, or fog, has been a significant setback for the autonomous driving industry and a key factor keeping most autonomous driving companies focused on expensive sensor-based approaches. This dissertation presents WeatherDepth, a novel, robust monocular depth estimation approach that produces high-resolution, accurate depth maps in adverse weather conditions such as rain, snow, and fog, with the help of transfer learning and the CityscapeWeather dataset, an adverse-weather depth estimation dataset derived from the popular Cityscapes dataset. The proposed approach, WeatherDepth, utilizes an adversarially trained autoencoder-based architecture. Experiments on the CityscapeWeather and vKITTI datasets demonstrate that our approach outperforms state-of-the-art monocular depth estimation systems in generalization to adverse weather conditions.