NightHaze: Nighttime Image Dehazing via Self-Prior Learning

AAAI 2025
Beibei Lin1*, Yeying Jin1*, Wending Yan2, Wei Ye2, Yuan Yuan2, Robby T. Tan1
1National University of Singapore, 2Huawei International Pte Ltd
*These authors contributed equally.
Qualitative results

Qualitative results from NightEnhance'23, NightDeFog'20, DiT'23, and our method on the real-world dataset. Our method not only suppresses glow but also reveals the detailed textures of night scenes, including those under low light and strong glow.

Abstract

Masked autoencoder (MAE) shows that severe augmentation during training produces robust representations for high-level tasks. This paper brings the MAE-like framework to nighttime image enhancement, demonstrating that severe augmentation during training produces strong network priors that are resilient to real-world night haze degradations.

We propose a novel nighttime image dehazing method with self-prior learning. Our main novelty lies in the design of severe augmentation, which allows our model to learn robust priors. Unlike MAE, which uses masking, we leverage two key challenging factors of nighttime images as augmentation: light effects and noise. During training, we intentionally degrade clear images by blending them with light effects and adding noise, and then train the model to restore the clear images. This enables our model to learn clear background priors. By increasing the noise values until they approach the pixel intensities of the glow- and light-effect-blended images, our augmentation becomes severe, resulting in stronger priors.

While our self-prior learning is considerably effective in suppressing glow and revealing details of background scenes, in some cases undesired artifacts remain, particularly in the form of over-suppression. To address these artifacts, we propose a self-refinement module based on a semi-supervised teacher-student framework. Our NightHaze, especially our MAE-like self-prior learning, shows that models trained with severe augmentation effectively improve the visibility of input haze images, approaching the clarity of clear nighttime images. Extensive experiments demonstrate that our NightHaze achieves state-of-the-art performance, outperforming existing nighttime image dehazing methods by a substantial margin of 15.5% for MUSIQ and 23.5% for ClipIQA.
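The self-refinement module follows a semi-supervised teacher-student scheme. The exact formulation is not spelled out on this page, but a standard mean-teacher setup keeps a teacher as a slow exponential moving average (EMA) of the student, so its pseudo-labels on real haze images are more stable. A minimal sketch of that EMA update, with hypothetical weight dictionaries standing in for network parameters:

```python
import numpy as np

def ema_update(teacher_weights, student_weights, decay=0.999):
    """Mean-teacher EMA update (illustrative sketch, not the paper's exact code).

    The teacher tracks a slow moving average of the student, which is the
    usual way a semi-supervised teacher-student framework produces stable
    pseudo-labels for self-refinement on real haze images.
    """
    return {k: decay * teacher_weights[k] + (1 - decay) * student_weights[k]
            for k in teacher_weights}

# Toy usage: one update moves the teacher slightly toward the student.
teacher = {"w": np.array([1.0, 1.0])}
student = {"w": np.array([0.0, 0.0])}
teacher = ema_update(teacher, student, decay=0.9)  # -> w = [0.9, 0.9]
```

A larger `decay` makes the teacher change more slowly, trading responsiveness for pseudo-label stability.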

Pipeline

Overview of our self-prior learning.

Framework overview

This approach aims to produce strong priors that are resilient to real-world night haze degradations. Given a clear nighttime image, we blend it with light-effect maps collected from real-world haze images and add noise to the blended result. The augmented input is then fed into an encoder-decoder framework to recover the clear background. The loss is a simple L1 loss between the reconstructed image and the clear input. The key to self-prior learning is the severity of the augmentation: by increasing the noise values until they approach the pixel intensities of the glow- and light-effect-blended images, the augmentation becomes severe, resulting in stronger priors.
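The degradation step above can be sketched as follows. This is an illustrative approximation, not the paper's released code: the blending weight `alpha`, the simple linear blend, and the `noise_scale` parameter (which scales Gaussian noise by the blended pixel intensity, so `noise_scale` near 1.0 corresponds to the "severe" regime) are all assumptions for illustration.

```python
import numpy as np

def severe_augment(clear, light_map, alpha=0.5, noise_scale=1.0, rng=None):
    """Degrade a clear nighttime image for self-prior learning (sketch).

    clear, light_map: float arrays in [0, 1] with the same shape.
    alpha: blending weight for the light-effect map (assumed linear blend).
    noise_scale: fraction of the blended pixel intensity used as the noise
    magnitude; values near 1.0 make the augmentation "severe".
    """
    rng = np.random.default_rng() if rng is None else rng
    # Blend the clear image with a light-effect map collected from real haze.
    blended = np.clip((1 - alpha) * clear + alpha * light_map, 0.0, 1.0)
    # Gaussian noise whose magnitude approaches the blended pixel intensities.
    noise = rng.normal(0.0, 1.0, blended.shape) * noise_scale * blended
    return np.clip(blended + noise, 0.0, 1.0)
```

Training would then minimize an L1 loss between the encoder-decoder's output on `severe_augment(clear, light_map)` and the original `clear` image.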

Qualitative and Quantitative Results

Qualitative Results


Quantitative Results


BibTeX

@article{lin2024nighthaze,
  title={NightHaze: Nighttime Image Dehazing via Self-Prior Learning},
  author={Lin, Beibei and Jin, Yeying and Yan, Wending and Ye, Wei and Yuan, Yuan and Tan, Robby T},
  journal={arXiv preprint arXiv:2403.07408},
  year={2024}
}