Abstract:
Low-light image enhancement is a challenging task: variability in lighting conditions often drives the enhancement toward results that are either too dark or unevenly illuminated. Current techniques struggle to balance brightness adjustment, detail preservation, and structural coherence, leading to over-enhancement or under-enhancement, particularly in the challenging case of non-uniform lighting.
This research proposes a novel attention-driven multi-scale GAN (AMGAN), combining a generative adversarial network with a complementary learning sub-network (CLSN) for targeted brightness and color enhancement that preserves detail and texture and yields perceptually coherent results. The CLSN generates an inverse grey map as an attention signal, guiding the distribution of brightness enhancement according to grey values across image channels. The generator adopts a U-Net architecture that extracts multi-scale features in both the spatial and frequency domains, while a dual Markovian discriminator distinguishes real from enhanced content at both the global and local image scales. Together, these components ensure that the generator produces visually and structurally coherent enhancements.
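The inverse-grey attention idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the exact grey-map definition used by the CLSN may differ, and the ITU-R BT.601 luminance weights below are an assumption.

```python
import numpy as np

def inverse_grey_attention(img):
    """Compute an inverse-grey attention map for a low-light image.

    img: float array in [0, 1] with shape (H, W, 3).
    Darker regions receive weights closer to 1, so the enhancement
    network concentrates brightness correction on under-lit areas.
    """
    # Luminance via standard ITU-R BT.601 weights (an assumption;
    # the CLSN's learned grey map may be defined differently).
    grey = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    # Inverse grey map: dark pixels -> high attention.
    return 1.0 - grey

# A uniformly dark image receives near-maximal attention everywhere.
img = np.full((2, 2, 3), 0.1)
att = inverse_grey_attention(img)
print(att[0, 0])  # 0.9
```

Multiplying the generator's residual output by such a map scales the enhancement pixel-wise, leaving already well-lit regions largely untouched.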
The proposed AMGAN method, enhanced with the complementary learning sub-network, achieves promising results in low-light image enhancement. The trained model delivers effective brightness and contrast enhancement, reaching a PSNR of 18.2278 and an SSIM of 0.4594. These results demonstrate AMGAN's potential to achieve the desired brightness with well-preserved detail while handling varied distributions of light.