Abstract: The advent of high-tech journaling tools makes it possible to manipulate an image in ways that easily evade state-of-the-art image tampering detection approaches. The recent success of deep learning approaches in different recognition tasks inspires us to develop a high-confidence detection framework that can localize manipulated regions in an image. Unlike semantic object segmentation, where all meaningful regions (objects) are segmented, the localization of image manipulation focuses only on possibly tampered regions, which makes the problem even more challenging. To formulate the framework, we employ a hybrid CNN-LSTM model to capture discriminative features between manipulated and non-manipulated regions. One of the key properties of manipulated regions is that they exhibit discriminative features along the boundaries shared with neighboring non-manipulated pixels. Our motivation is to learn this boundary discrepancy, i.e., the spatial structure, between manipulated and non-manipulated regions with a combination of LSTM and convolutional layers. We perform end-to-end training of the network, learning the parameters through back-propagation given ground-truth mask information. The overall framework is capable of detecting different types of image manipulations, including copy-move, removal, and splicing. Our model shows promising results in localizing manipulated regions, as demonstrated through rigorous experimentation on three diverse datasets.
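
The abstract does not spell out the network configuration, so the following is only a minimal, hypothetical PyTorch sketch of the kind of hybrid CNN-LSTM localization model described: a small CNN encodes image patches, an LSTM scans the patch sequence to model spatial dependencies (such as boundary discrepancies between neighboring patches), and convolutional layers decode a per-pixel manipulated/pristine mask trained end-to-end against a ground-truth mask. The class name, patch grid, layer sizes, and input resolution are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch (not the authors' exact architecture) of a hybrid
# CNN-LSTM model for per-pixel manipulation localization, assuming
# 256x256 RGB inputs split into an 8x8 grid of 32x32 patches.
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    def __init__(self, grid=8, patch=32, hidden=256):
        super().__init__()
        self.grid, self.patch = grid, patch
        # CNN encoder: turns each 32x32 patch into a 64-d feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
            nn.AdaptiveAvgPool2d(1),              # 8x8 -> 1x1
            nn.Flatten(),
        )
        # LSTM over the patch sequence captures spatial dependencies
        # between neighboring patches (boundary discrepancy cues).
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        # Decoder upsamples LSTM features back to a full-resolution
        # 2-class (pristine / manipulated) mask.
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * hidden, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=patch, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 2, 1),
        )

    def forward(self, x):                          # x: (B, 3, 256, 256)
        B, (g, p) = x.size(0), (self.grid, self.patch)
        # Split the image into non-overlapping patches: (B*g*g, 3, p, p).
        patches = (x.unfold(2, p, p).unfold(3, p, p)
                     .permute(0, 2, 3, 1, 4, 5)
                     .reshape(B * g * g, 3, p, p))
        feats = self.encoder(patches).view(B, g * g, -1)      # (B, 64, 64)
        seq, _ = self.lstm(feats)                              # (B, 64, 2*hidden)
        grid_feats = seq.permute(0, 2, 1).reshape(B, -1, g, g) # (B, 2*hidden, 8, 8)
        return self.decoder(grid_feats)                        # (B, 2, 256, 256)

# End-to-end training against a ground-truth binary mask (dummy batch shown).
model = HybridCNNLSTM()
criterion = nn.CrossEntropyLoss()
images = torch.randn(2, 3, 256, 256)
masks = torch.randint(0, 2, (2, 256, 256))   # 0 = pristine, 1 = manipulated
loss = criterion(model(images), masks)
loss.backward()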