Abstract:
"Training advanced machine learning (ML) models like Denoising Diffusion Probabilistic Models
(DDPMs) is resource-intensive, limiting their practical use. The uniform application of diffusion
steps, regardless of data complexity, make worse this inefficiency. This study introduces a novel
framework to enhance DDPM training efficiency by dynamically adjusting diffusion steps based
on image entropy, also utilizing DDPMs to tackle class imbalance issue in image datasets.
The proposed solution employs a complexity assessment function C(x) to measure image entropy
and a scaling function S(C(x), t) that adjusts the number of diffusion steps and noise levels
accordingly. This tailors the diffusion process to each image, improving training efficiency.
Additionally, the approach uses the enhanced DDPMs to generate synthetic images for
underrepresented classes, effectively rebalancing the dataset. Together, these advancements
mitigate both the computational demands of DDPM training and class imbalance in image
classification.
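To make the mechanism concrete, the following minimal Python sketch instantiates C(x) as the Shannon entropy of an image's pixel-intensity histogram and S(C(x), t) as a linear ramp over the step budget. Both instantiations are assumptions for illustration; the paper's actual definitions of C and S may differ.

import numpy as np

def complexity(x: np.ndarray, bins: int = 256) -> float:
    # C(x): Shannon entropy (in bits) of the pixel-intensity histogram.
    # Histogram entropy is one plausible instantiation, assumed here.
    hist, _ = np.histogram(x, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def scaled_steps(c: float, t_max: int = 1000,
                 c_min: float = 0.0, c_max: float = 8.0) -> int:
    # S(C(x), t): map entropy to a per-image diffusion-step budget.
    # A linear ramp between a floor and t_max is assumed; c_max = 8
    # is the maximum entropy of a 256-bin histogram (log2 of 256).
    frac = np.clip((c - c_min) / (c_max - c_min), 0.0, 1.0)
    t_floor = t_max // 10  # assumed minimum budget for simple images
    return int(t_floor + frac * (t_max - t_floor))

# A mostly blank image has low entropy and receives fewer steps.
x = np.zeros((28, 28))
x[10:18, 10:18] = 1.0
print(complexity(x), scaled_steps(complexity(x)))

Under this sketch, a near-uniform MNIST-style digit is assigned far fewer diffusion steps than a texture-rich image, which is where the training and sampling savings claimed above would come from.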
Evaluations across multiple datasets, including MNIST, FashionMNIST, KMNIST, and QMNIST,
underscore the effectiveness of this approach. By employing metrics such as the Structural
Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR), alongside assessments
of training and sampling efficiency, the study demonstrates the proposed method's capacity to
significantly reduce computational demands while maintaining or enhancing image generation
quality. Moreover, the practical application of this approach in correcting class imbalance further
validates its utility, achieving balanced datasets that lead to improved classification model
performance across various metrics.
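For reference, the evaluation metrics named above can be computed with scikit-image as sketched below; the paired "real" and "generated" images are stand-in data, not outputs of the paper's pipeline.

import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)

# Stand-in data: a reference image and a noisy "generated" counterpart.
real = rng.random((28, 28))
generated = np.clip(real + 0.05 * rng.standard_normal((28, 28)), 0.0, 1.0)

# data_range matches the [0, 1] pixel scale of the stand-in images.
ssim = structural_similarity(real, generated, data_range=1.0)
psnr = peak_signal_noise_ratio(real, generated, data_range=1.0)
print(f"SSIM: {ssim:.3f}  PSNR: {psnr:.2f} dB")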