dc.description.abstract |
Mobile photography continues to grow as an area of interest, yet achieving professional-level camera quality remains a challenge due to hardware limitations. Deep learning-based image processing techniques, such as convolutional neural networks, have been proposed to improve images captured with mobile phones. However, these networks are typically trained on large amounts of paired data and cannot be trained continuously on images captured by mobile phone users, because creating paired image datasets from user-captured images is difficult in practice and may raise privacy concerns. As a solution to this challenge, this research proposes FL-CycleGAN, a novel federated learning-based CycleGAN designed to continuously improve the colors of mobile images using user-captured images in an unpaired manner. Evaluations on the Zurich RAW-to-RGB dataset show that FL-CycleGAN reconstructs the colors of mobile images with an average PSNR of 18.46 and an average SSIM of 0.707, results comparable to state-of-the-art networks trained on paired images. Furthermore, FL-CycleGAN reconstructs high-resolution images of size 3968×2976 in under 0.005 seconds. |
en_US |
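The abstract describes a federated learning-based CycleGAN without detailing the aggregation scheme. The sketch below is a minimal, hypothetical illustration of how generator weights trained on different users' unpaired photos could be combined with plain federated averaging (FedAvg); the TinyGenerator module, client count, and aggregation rule are assumptions for illustration and are not the thesis's actual FL-CycleGAN design.

```python
# Illustrative FedAvg sketch (assumed, not the thesis's actual method):
# average CycleGAN-style generator weights from several simulated clients.
import copy
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder stand-in for a CycleGAN generator (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def federated_average(client_models):
    """Element-wise average of client state_dicts (plain FedAvg)."""
    avg_state = copy.deepcopy(client_models[0].state_dict())
    for key in avg_state:
        stacked = torch.stack(
            [m.state_dict()[key].float() for m in client_models]
        )
        avg_state[key] = stacked.mean(dim=0)
    return avg_state

if __name__ == "__main__":
    # Simulate three clients, each trained locally on unpaired user photos.
    clients = [TinyGenerator() for _ in range(3)]
    global_model = TinyGenerator()
    global_model.load_state_dict(federated_average(clients))
    out = global_model(torch.rand(1, 3, 64, 64))  # color-enhanced RGB output
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

In such a setup, user images never leave the device; only model weights are shared and averaged, which is consistent with the privacy motivation stated in the abstract.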