Take a Prior from Other Tasks for Severe Blur Removal

Computer Vision and Image Understanding (CVIU), 2024

Recovering clear structures from severely blurred inputs is challenging due to detail loss and ambiguous semantics. Although segmentation maps can help deblur facial images, their effectiveness is limited in complex natural scenes because they ignore the detailed structures necessary for deblurring. Moreover, segmenting blurry images directly may propagate errors. To alleviate semantic confusion and avoid error propagation, we propose exploiting high-level vision tasks, such as classification, to learn a comprehensive prior for severe blur removal. We design a feature learning strategy based on knowledge distillation that learns priors capturing both global contexts and sharp local structures. To integrate these priors effectively, we introduce a semantic prior embedding layer with multi-level aggregation and semantic attention. We validate our method on natural image deblurring benchmarks by introducing the priors into various models, including UNet and mainstream deblurring baselines, demonstrating its effectiveness and generalization ability. The results show that our plug-and-play semantic priors enable our approach to outperform existing methods on severe blur removal.