April 2025 · Signal Image and Video Processing
Single image super-resolution aims to restore a high-resolution image from its low-resolution counterpart. Recently, many methods have tackled image super-resolution by leveraging local or global features to boost performance. However, these methods rarely combine both feature types and often have large parameter counts. We propose a Lightweight Self-Attention Guidance Network (LSAGNet) to address these issues. We design a simple and efficient dynamic local attention (DLA) module to effectively extract local features. Existing Transformer networks often rely on query-key similarities for feature aggregation; however, using these similarities indiscriminately hinders super-resolution reconstruction, since weakly correlated responses are aggregated alongside the strongly correlated ones. To address this issue, we propose a global self-attention (GSA) mechanism based on a soft-thresholding operation, designed to retain only strongly correlated information. Experimental results demonstrate that the proposed LSAGNet achieves an excellent balance between performance and parameter efficiency while also achieving competitive accuracy compared with state-of-the-art methods.
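As a rough illustration of the soft-thresholding idea behind the GSA mechanism, the sketch below shrinks attention weights below a threshold to zero and renormalises the rest, so that only strongly correlated query-key pairs contribute to feature aggregation. This is a minimal sketch under our own assumptions, not the authors' implementation; the module name SoftThresholdSelfAttention, the fixed threshold tau, and all hyperparameters are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftThresholdSelfAttention(nn.Module):
    """Illustrative global self-attention with a soft threshold on the
    attention map (hypothetical reading of the GSA idea): weakly
    correlated query-key pairs are suppressed so only strong
    correlations drive feature aggregation."""
    def __init__(self, dim, num_heads=4, tau=0.01):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.tau = tau  # assumed fixed threshold; could instead be learned
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, N, C), where N = H * W flattened tokens
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)
        # Standard scaled dot-product similarities, normalised per query.
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        # Soft-thresholding: zero out weak similarities, keep strong ones.
        attn = F.relu(attn - self.tau)
        # Renormalise so each row still sums to one (guard against all-zero rows).
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

In this sketch the threshold trades off sparsity against coverage: a larger tau keeps fewer, more strongly correlated responses, which matches the stated goal of discarding weak correlations while retaining strong ones.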