Abstract: Deep learning-based approaches have achieved remarkable success in single-image super-resolution (SISR). Designing lighter neural networks has therefore become an active research topic, and an efficient network structure is essential for building such networks. The recently proposed SISR framework, the residual local feature network (RLFN) [15], simplified feature aggregation, but its cascaded 3 × 3 convolutions still introduce a large number of parameters, which slows inference. In this paper, we propose a novel convolution with a spatial-channel cheap operation (SCC-Conv) and design an efficient network (SCC-Net) for SISR. Motivated by GhostNet [9], we use a single 1 × 1 convolution to reduce the number of output channels and a 3 × 3 depthwise convolution as a cheap operation to obtain feature maps from a spatial view. Experimental results show that stacking successive ghost blocks degrades performance in this setting. We therefore also use a 3 × 3 convolution to reduce the number of output channels and a 1 × 1 convolution as a cheap operation to obtain feature maps from a channel view. SCC-Net has fewer parameters (296K) and faster inference while maintaining performance comparable to state-of-the-art methods.
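The ghost-style "spatial view" operation described above can be sketched as follows. This is a minimal NumPy illustration of the general idea (a pointwise convolution produces an intrinsic half of the output channels, and a cheap depthwise convolution generates the remainder), not the authors' exact SCC-Conv layer; all function names and shapes here are illustrative assumptions.

```python
import numpy as np

def conv1x1(x, w):
    # Pointwise convolution: x is (C_in, H, W), w is (C_out, C_in).
    return np.tensordot(w, x, axes=([1], [0]))

def depthwise3x3(x, w):
    # Per-channel 3x3 convolution with zero padding: x is (C, H, W), w is (C, 3, 3).
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + 3, j:j + 3] * w[c])
    return out

def ghost_spatial(x, w_point, w_dw):
    # Sketch of the spatial-view cheap operation: a 1x1 conv yields half the
    # output channels; a cheap 3x3 depthwise conv derives the other half.
    intrinsic = conv1x1(x, w_point)        # (C_out // 2, H, W)
    cheap = depthwise3x3(intrinsic, w_dw)  # (C_out // 2, H, W)
    return np.concatenate([intrinsic, cheap], axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 6, 6))         # 8 input channels, 6x6 feature map
w_point = rng.standard_normal((4, 8))      # 1x1 conv: 8 -> 4 channels
w_dw = rng.standard_normal((4, 3, 3))      # depthwise 3x3, one kernel per channel
y = ghost_spatial(x, w_point, w_dw)        # output has 4 + 4 = 8 channels
```

The parameter saving comes from the depthwise half: a full 3 × 3 convolution from 8 to 8 channels would use 8·8·9 weights, whereas the pointwise-plus-depthwise pair above uses 4·8 + 4·9.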