Compressed sensing (CS) aims to accurately reconstruct the original signal from under-sampled measurements, which is a typical ill-posed problem. Solving such a problem is challenging and generally requires incorporating suitable priors on the underlying signals. Traditionally, these priors are hand-crafted, and the corresponding approaches are limited in expressive capacity. In this paper, a nonconvex-optimization-inspired multi-scale reconstruction network, abbreviated as iPiano-Net, is developed for block-based CS by unfolding the classic iPiano algorithm. In iPiano-Net, a block-wise inertial gradient descent step is interleaved with an image-level network-induced proximal mapping, so that local block and global content information are exploited alternately. The network-induced proximal operators are adaptively learned in each module, which efficiently characterizes image priors and improves the modeling capacity of iPiano-Net. The learned image-level priors suppress blocky artifacts and noise/corruption while preserving global information. Unlike existing discriminative CS reconstruction models, which are trained for specific measurement ratios, a single effective model is learned that handles CS reconstruction at several measurement ratios, even unseen ones. Experimental results demonstrate that the proposed approach substantially outperforms previous CS methods in terms of Peak Signal-to-Noise Ratio (PSNR) and visual quality, especially at low measurement ratios. Meanwhile, it is robust to noise while maintaining comparable execution speed.
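To make the unfolding concrete, the following is a minimal sketch of one unfolded stage, assuming a block-based CS setup with block size B, a shared block sampling matrix Phi, and a small CNN standing in for the learned image-level proximal operator. All names (IPianoStage, prox_net, alpha, beta) are illustrative, not taken from the paper; the stage follows the classic iPiano update x_{k+1} = prox(x_k - alpha * grad f(x_k) + beta * (x_k - x_{k-1})).

```python
# Illustrative sketch of one iPiano-Net-style stage (assumed names, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class IPianoStage(nn.Module):
    def __init__(self, block_size: int, n_meas: int):
        super().__init__()
        self.B = block_size
        # Learnable step size (alpha) and inertia weight (beta) of the iPiano update
        self.alpha = nn.Parameter(torch.tensor(0.1))
        self.beta = nn.Parameter(torch.tensor(0.5))
        # Small CNN acting as the learned image-level proximal operator
        self.prox_net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x_k, x_prev, y, Phi):
        # x_k, x_prev: (N, 1, H, W) current and previous reconstructions (H, W divisible by B)
        # y:           (N, L, n_meas) block-wise measurements, L = number of blocks
        # Phi:         (n_meas, B*B) sampling matrix shared by all blocks
        N, _, H, W = x_k.shape
        B = self.B
        # Split the image into non-overlapping B x B blocks
        blocks = F.unfold(x_k, kernel_size=B, stride=B)           # (N, B*B, L)
        # Block-wise gradient of the data-fidelity term 0.5 * ||Phi x_b - y_b||^2
        residual = torch.einsum('mk,nkl->nml', Phi, blocks) - y.transpose(1, 2)
        grad = torch.einsum('mk,nml->nkl', Phi, residual)         # (N, B*B, L)
        grad_img = F.fold(grad, output_size=(H, W), kernel_size=B, stride=B)
        # Inertial gradient descent step (iPiano forward step)
        z = x_k - self.alpha * grad_img + self.beta * (x_k - x_prev)
        # Learned image-level proximal mapping, applied as a residual correction
        return z + self.prox_net(z)
```

In a full reconstruction network, several such stages would be stacked and trained end-to-end, with each stage's proximal CNN learning an image-level prior that can suppress blocky artifacts introduced by the block-wise gradient step.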