MRWS: Multi-stage RAW low-light image enhancement with wavelet information and SNR prior
Affiliation:

1. Tianjin University; 2. Tianjin University of Technology

    Abstract:

    For low-light image enhancement, RAW images surpass RGB images because they retain more information, yet their heavy noise and single-channel nature make feature extraction difficult. Existing multi-stage CNN frameworks struggle to capture global features, while single-stage CNN-Transformer fusions often leave residual noise. To overcome these limitations, this paper introduces a multi-stage RAW image enhancement network that combines CNN and Transformer. Tailored to the characteristics of each stage, a CNN-based denoising block that incorporates wavelet information is designed for the denoising stage to strengthen frequency-domain features, and a Transformer-based correction block is designed for the color and white-balance recovery stage, where the white balance is adjusted dynamically with a signal-to-noise-ratio (SNR) map. With this design, our method outperforms other state-of-the-art models on all metrics on the Sony and Fuji subsets of the SID dataset, and achieves the best SSIM on the MCR dataset.
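
    The abstract gives no implementation details, but the two priors it names, wavelet information and the SNR map, can be illustrated with a short sketch. The Python/PyTorch code below is our own assumption of how such components are commonly built (all function names and the box-filter-based SNR construction are hypothetical, not the paper's definitions): a single-level Haar decomposition that exposes frequency sub-bands for a CNN denoising block, and a simple per-pixel SNR prior that can guide white-balance correction.

    import torch
    import torch.nn.functional as F

    def snr_prior_map(raw: torch.Tensor, kernel_size: int = 5) -> torch.Tensor:
        """Estimate a per-pixel SNR prior from a packed low-light RAW tensor.

        raw: (B, C, H, W), e.g. a Bayer image packed into 4 channels.
        A box-filtered copy serves as a cheap noise-free estimate; the SNR is
        its ratio to the absolute residual. This is a common construction for
        SNR maps, not necessarily the one used in the paper.
        """
        denoised = F.avg_pool2d(raw, kernel_size, stride=1, padding=kernel_size // 2)
        noise = (raw - denoised).abs().mean(dim=1, keepdim=True) + 1e-6   # local noise strength
        signal = denoised.abs().mean(dim=1, keepdim=True)                 # local signal strength
        snr = signal / noise
        # Normalise to [0, 1] so the map can gate attention or white-balance gains.
        return snr / (snr.amax(dim=(2, 3), keepdim=True) + 1e-6)

    def haar_dwt(x: torch.Tensor) -> torch.Tensor:
        """Single-level 2D Haar decomposition, returning (B, 4C, H/2, W/2).

        The LL/LH/HL/HH sub-bands expose low- and high-frequency structure that
        a CNN denoising block can consume alongside its spatial features.
        """
        a = x[:, :, 0::2, 0::2]
        b = x[:, :, 0::2, 1::2]
        c = x[:, :, 1::2, 0::2]
        d = x[:, :, 1::2, 1::2]
        ll = (a + b + c + d) / 2
        lh = (-a - b + c + d) / 2
        hl = (-a + b - c + d) / 2
        hh = (a - b - c + d) / 2
        return torch.cat([ll, lh, hl, hh], dim=1)

    if __name__ == "__main__":
        packed_raw = torch.rand(1, 4, 256, 256)   # hypothetical packed Bayer input
        snr = snr_prior_map(packed_raw)           # (1, 1, 256, 256) prior map
        bands = haar_dwt(packed_raw)              # (1, 16, 128, 128) wavelet features
        print(snr.shape, bands.shape)

    In a pipeline of this kind, the SNR map would typically be fed to the Transformer-based correction block as a gating signal and the wavelet sub-bands concatenated with the denoising features; the exact fusion strategy is specific to the paper.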

History
  • Received: December 21, 2024
  • Revised: January 25, 2025
  • Accepted: February 17, 2025