Adaptive-basis decomposition-based low-rank network for efficient non-uniform motion deblurring
Author: CHEN Lei, XIONG Qingbo, ZHANG Wei, LI Runde
Affiliation: 1. School of Software, Henan University, Kaifeng 475004, China; 2. School of Applied Technology, China University of Labor Relations, Beijing 100048, China

    Abstract:

    In this study, we present a unified sparsity-driven framework that significantly improves motion deblurring by integrating two key components: a custom-designed dataset and a low-rank module (LRM). The framework leverages the inherent sparsity of per-pixel blur kernels to improve both deblurring accuracy and model interpretability. First, we propose an adaptive-basis decomposition-based deblurring (ADD) approach, which constructs a tailored training dataset to strengthen the generalization capacity of the deblurring network. ADD adaptively decomposes motion blur into sparse basis elements, effectively handling the complexity of non-uniform blur. Second, we propose the LRM as a plug-and-play module that improves the interpretability of deblurring models by identifying and exploiting the intrinsic sparse features of sharp images. A series of ablation studies substantiates the synergistic benefit of combining the proposed ADD with the LRM for overall deblurring performance. We further demonstrate experimentally that incorporating the LRM into an existing Uformer network yields a substantial improvement in reconstruction quality; this integration produces a sparsity-guided low-rank network (SGLRN). Operating under the overarching principle of sparsity, SGLRN consistently outperforms state-of-the-art methods on multiple standard deblurring benchmarks. Comprehensive quantitative metrics and qualitative visual evaluations provide compelling evidence of its effectiveness. The overall deblurring results are available on Google Drive.
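    To make these two components concrete, the adaptive-basis idea can be illustrated, in a simplified form that is an assumption of this summary rather than the paper's exact formulation, by writing the non-uniformly blurred image $B$ as a per-pixel mixture of a small set of basis kernels $\{b_k\}_{k=1}^{K}$ applied to the sharp image $I$:

    $$B(x, y) \approx \sum_{k=1}^{K} w_k(x, y)\,\bigl(I \ast b_k\bigr)(x, y),$$

    where the mixing weights $w_k(x, y)$ are sparse, so each pixel is explained by only a few basis elements. Likewise, the sketch below shows one way a plug-and-play low-rank module could be inserted into a feature backbone such as Uformer; the class name, the 1x1-convolution parameterization, and the residual placement are illustrative assumptions, not the exact SGLRN design.

    import torch
    import torch.nn as nn

    class LowRankModule(nn.Module):
        """Illustrative plug-and-play low-rank bottleneck (hypothetical sketch).

        Features with C channels are projected onto r << C channels and back,
        encouraging a low-rank (sparse) feature representation while keeping
        the input/output shape unchanged, so the module can be dropped into
        an existing backbone without other modifications.
        """

        def __init__(self, channels: int, rank: int):
            super().__init__()
            self.down = nn.Conv2d(channels, rank, kernel_size=1, bias=False)
            self.up = nn.Conv2d(rank, channels, kernel_size=1, bias=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Residual low-rank correction: the identity path preserves the
            # original features, the low-rank path adds a rank-limited update.
            return x + self.up(self.down(x))

    # Example usage: wrap a 32-channel feature map with a rank-4 module.
    if __name__ == "__main__":
        lrm = LowRankModule(channels=32, rank=4)
        feats = torch.randn(1, 32, 64, 64)
        print(lrm(feats).shape)  # torch.Size([1, 32, 64, 64])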

Citation

CHEN Lei, XIONG Qingbo, ZHANG Wei, LI Runde. Adaptive-basis decomposition-based low-rank network for efficient non-uniform motion deblurring[J]. Optoelectronics Letters, 2025, (1): 43-50.

History
  • Received: December 04, 2023
  • Revised: July 18, 2024
  • Online: December 13, 2024