Fig. 2 (uploaded by Abdul Muqeet). AiRiA CG Team: architecture of FIMDN.

Source publication
Preprint
Full-text available
This paper reviews the AIM 2020 challenge on efficient single image super-resolution with a focus on the proposed solutions and results. The challenge task was to super-resolve an input image with a magnification factor of ×4 based on a set of prior examples of low and corresponding high resolution images. The goal is to devise a network that reduces on...

Contexts in source publication

Context 1
... The architecture used is inspired by wide activation based networks and channel attention networks. The network, as shown in Fig. 20, mainly consists of three blocks: a feature extraction block, a series of wide activation residual blocks, and a set of progressive upsampling blocks (×2). The expansion factor used for the wide activation blocks is six. The depth within the feature extraction and wide activation blocks is 32. The network contains 1.378 million trainable ...
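The stated figures (width 32, expansion factor 6, 1.378M total parameters) can be sanity-checked with simple parameter-count arithmetic. The sketch below assumes one wide-activation residual block is a 3×3 expand convolution followed by a 3×3 reduce convolution, a common WDSR-style layout; the team's exact block may differ.

```python
def conv_params(cin, cout, k):
    # weights plus biases for a single k x k convolution layer
    return cin * cout * k * k + cout

# Assumed wide-activation block: expand 32 -> 32*6 channels with a
# 3x3 conv, apply the activation, then reduce back to 32 channels.
width, expansion = 32, 6
expand = conv_params(width, width * expansion, 3)   # 55,488 params
reduce = conv_params(width * expansion, width, 3)   # 55,328 params
block = expand + reduce                             # ~0.11M per block
```

A handful of such blocks plus the feature extraction and upsampling stages lands in the low millions of parameters, consistent with the reported 1.378M budget.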
Context 2
... are useful for transmitting different frequency information. Meanwhile, gradients can be propagated from the tail of the network to the head. 4) By using adaptive weight factors, the multiple outputs are combined through learnable parameters, which adaptively determine the contribution of each block. The network architecture is shown in Fig. ...
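The adaptive weighted combination described above amounts to a learned weighted sum of the block outputs. A minimal stdlib-only sketch, with plain floats standing in for the trainable scalar parameters:

```python
def combine(outputs, weights):
    # weighted sum of per-block outputs, element-wise;
    # in the real network the weights are learned during training
    return [sum(w * o[i] for w, o in zip(weights, outputs))
            for i in range(len(outputs[0]))]

outputs = [[1.0, 2.0], [3.0, 4.0]]   # two blocks' outputs (flattened)
weights = [0.25, 0.75]               # adaptive weight factors
combine(outputs, weights)            # -> [2.5, 3.5]
```

Because the weights are trained jointly with the rest of the network, blocks that carry more useful frequency content can receive larger contributions automatically.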
Context 3
... The H-ZnCa team proposed a Sparse Prior-based Network for Efficient Image Super-Resolution. As shown in Fig. 24(a), the proposed lightweight model, named SPSR, consists of three components: high-frequency sparse coding generation, feature embedding, and multi-scale feature extraction. Specifically, a convolutional sparse coding module (CSCM) [45,29] is first applied to obtain the high-frequency sparse representation of the input. Then, a feature ...
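Convolutional sparse coding modules of this kind are typically built around iterative soft-thresholding (ISTA-style updates), whose core operation is the L1 proximal step. A 1-D toy sketch of that step, not the team's actual CSCM:

```python
def soft_threshold(x, lam):
    # proximal operator of the L1 norm: shrink each coefficient
    # toward zero by lam, zeroing out small (non-sparse) entries
    return [max(abs(v) - lam, 0.0) * (1 if v > 0 else -1) for v in x]

# small coefficients vanish, large ones shrink -> sparse representation
soft_threshold([0.5, -1.5, 0.1], 1.0)
```

Stacking a few such thresholded convolutional updates yields a sparse code that emphasizes high-frequency structure, which is what the CSCM feeds to the later stages.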
Context 4
... Transform (SFT) layer and two convolutional layers is designed for spatial-wise feature modulation conditioned on the sparse prior representation. To further enhance the abstraction ability of SPSR, a multi-scale feature extraction module (MFEM) with a channel split mechanism is proposed to efficiently utilize hierarchical features. As shown in Fig. 24(b ...
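An SFT layer modulates features with a per-position scale and shift predicted from the conditioning signal (here, the sparse prior). A minimal sketch of the modulation itself, with gamma and beta given directly rather than predicted by the two convolutional layers:

```python
def sft_modulate(features, gamma, beta):
    # spatial feature transform: element-wise scale-and-shift,
    # where gamma/beta would come from the sparse-prior branch
    return [g * f + b for f, g, b in zip(features, gamma, beta)]

out = sft_modulate([1.0, 2.0], [2.0, 0.5], [0.1, 0.2])
```

Because gamma and beta vary spatially, the network can amplify features where the prior indicates strong high-frequency content and suppress them elsewhere.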
Context 5
... The team proposed a lightweight deep iterative SR learning method (ISRResDNet) that solves the SR task using residual denoiser networks [24] as sub-solvers, treating super-resolution as a sequence of denoising steps. It is inspired by powerful image regularization and large-scale optimization techniques used to solve general inverse problems. The proposed iterative SR approach is shown in Fig. 25. The authors unroll ResDNet [24] into K stages, and each stage performs the PGM (proximal gradient method) updates. ...
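The unrolled PGM scheme alternates a gradient step on the data-fidelity term with a proximal step implemented by the learned denoiser. A 1-D toy sketch under simplified assumptions (2× averaging as the downsampling operator, a fixed shrinkage in place of the trained ResDNet):

```python
# A: 2x downsampling by averaging pairs; At: its (scaled) transpose.
def A(x):  return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
def At(y): return [v / 2 for v in y for _ in range(2)]

def denoise(x):
    # stand-in for the learned ResDNet residual denoiser
    return [0.9 * v for v in x]

def isr_pgm(y, K=3, step=1.0):
    x = At(y)                               # initial upsampled estimate
    for _ in range(K):                      # K unrolled stages
        grad = At([r - t for r, t in zip(A(x), y)])
        x = [xi - step * g for xi, g in zip(x, grad)]  # gradient step
        x = denoise(x)                      # proximal step via denoiser
    return x

sr = isr_pgm([2.0], K=3)
```

In the actual method each stage's denoiser is trained end-to-end, so the K stages progressively refine the high-resolution estimate rather than applying a fixed shrinkage.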

Similar publications

Preprint
Full-text available
This paper reviews the NTIRE 2022 challenge on efficient single image super-resolution with a focus on the proposed solutions and results. The task of the challenge was to super-resolve an input image with a magnification factor of ×4 based on pairs of low and corresponding high resolution images. The aim was to design a network for single ima...