Abstract:
In this study, we present a unified sparsity-driven framework that significantly improves motion deblurring by integrating two key components: a custom-designed dataset and a low-rank module (LRM). The framework leverages the inherent sparsity of per-pixel blur kernels to improve both deblurring accuracy and model interpretability. First, we propose an adaptive-basis decomposition-based deblurring (ADD) approach, which constructs a tailored training dataset to enhance the generalization capacity of the deblurring network. The ADD framework adaptively decomposes motion blur into sparse basis elements, effectively handling the complexities of non-uniform blur. Second, we propose the LRM as a plug-and-play module that improves the interpretability of deblurring models, primarily by identifying and exploiting the intrinsic sparse features of sharp images. A series of ablation studies substantiates the synergistic benefit of combining the proposed ADD with the LRM for overall deblurring performance. We further demonstrate through rigorous experiments that incorporating the LRM into an existing Uformer network substantially improves reconstruction performance; this integration yields a sparsity-guided low-rank network (SGLRN). Guided by the overarching principle of sparsity, SGLRN consistently outperforms state-of-the-art methods across multiple standard deblurring benchmarks. Comprehensive experimental results, assessed through quantitative metrics and qualitative visual evaluations, provide compelling evidence of its effectiveness. The overall deblurring results are available on Google Drive.