Abstract: Image restoration techniques have developed rapidly in recent years. High-level vision tasks such as style transfer, automatic colorization, and large-mask inpainting rely on deep learning methods to recover specific image attributes. However, because a key reminder of high-level semantics is missing, the quality of image restoration remains low. For instance, when the mask is large enough, conventional deep learning methods cannot imagine that a car belongs on a bridge and fill it in; such completion depends entirely on the capacity for neural imagination, which is precisely what abstract neurons excel at. In this paper, we not only find specific neurons that guide semantic retrieval but also discover additional neurons that serve as indicators. In addition, we propose three principles to ensure that neuron guidance yields reasonable visualization and coherent accuracy. We design a novel network, the Transfer-learning Network, which combines a joint training strategy, multi-modal guided neurons, and a multi-path attention edge algorithm to perform inpainting in a coarse-to-fine manner. This is the first time an extremely large mask (35%–66% of the image) is filled under the guidance of a high-level understanding of the image derived from abstract neuron reflection. Through ablation and combined experiments, the Transfer-learning Network validates that artificial neurons enhance the performance of joint training on multi-task vision problems. Consequently, this joint training framework meets the requirement of refining the background, i.e., removing meaningless noise more sharply and intelligently, bridging low-level and high-level vision tasks.
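The abstract describes filling extremely large masks (35%–66% of the image) in a coarse-to-fine manner. Below is a minimal, generic sketch of that coarse-to-fine paradigm, assuming a PyTorch setting; the names CoarseToFineInpainter, SimpleStage, and random_large_mask are hypothetical illustrations and do not represent the paper's actual Transfer-learning Network, multi-modal guided neurons, or multi-path attention edge algorithm.

```python
# Generic coarse-to-fine inpainting sketch with large random masks (35%-66% coverage).
# Hypothetical illustration only; not the paper's architecture.
import torch
import torch.nn as nn

def random_large_mask(batch, height, width, min_ratio=0.35, max_ratio=0.66):
    """Create rectangular masks covering roughly 35%-66% of the image (1 = hole)."""
    masks = torch.zeros(batch, 1, height, width)
    for i in range(batch):
        ratio = torch.empty(1).uniform_(min_ratio, max_ratio).item()
        mh, mw = int(height * ratio ** 0.5), int(width * ratio ** 0.5)
        top = torch.randint(0, height - mh + 1, (1,)).item()
        left = torch.randint(0, width - mw + 1, (1,)).item()
        masks[i, :, top:top + mh, left:left + mw] = 1.0
    return masks

class SimpleStage(nn.Module):
    """One small convolutional stage; coarse and refine stages share this layout."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class CoarseToFineInpainter(nn.Module):
    """Coarse stage produces a rough fill; refine stage sharpens the composite."""
    def __init__(self):
        super().__init__()
        self.coarse = SimpleStage(in_ch=4)   # masked image + mask channel
        self.refine = SimpleStage(in_ch=4)   # composited image + mask channel

    def forward(self, image, mask):
        masked = image * (1.0 - mask)
        coarse_out = self.coarse(torch.cat([masked, mask], dim=1))
        # Composite: keep known pixels, fill holes with the coarse prediction.
        composited = masked + coarse_out * mask
        refined = self.refine(torch.cat([composited, mask], dim=1))
        return coarse_out, masked + refined * mask

# Usage: reconstruct a batch of images with large random holes.
if __name__ == "__main__":
    images = torch.rand(2, 3, 64, 64)
    masks = random_large_mask(2, 64, 64)
    model = CoarseToFineInpainter()
    coarse, final = model(images, masks)
    print(coarse.shape, final.shape)  # torch.Size([2, 3, 64, 64]) twice
```

The compositing step (keeping known pixels and predicting only the hole region) is a common design choice in coarse-to-fine inpainting, so the refinement stage concentrates its capacity on the masked area.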