A serial image copy-move forgery localization scheme with source/target distinguishment
In this paper, we improve the existing parallel deep neural network (DNN) scheme, BusterNet, for image copy-move forgery localization with source/target distinguishment. BusterNet is based on two parallel branches, i.e., Simi-Det and Mani-Det, and suffers from two main drawbacks: (a) it can distinguish source from target only when both branches locate the forged regions correctly; (b) the Simi-Det branch extracts only single-level, low-resolution features, using VGG16 with four pooling layers. To guarantee the identification of source and target regions, we introduce two sub-networks constructed in a serial way: the copy-move similarity detection network (CMSDNet) and the source/target regions distinguishment network (STRDNet).
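At a high level, the serial design means the second sub-network consumes the output of the first instead of running as an independent parallel branch. The sketch below is only a conceptual illustration; the function names and the exact interface between CMSDNet and STRDNet are assumptions, not this repository's actual API.

```python
# Conceptual sketch of the serial pipeline (all names are hypothetical,
# not the actual API of this repository).
def locate_and_distinguish(image, cmsdnet, strdnet):
    # Step 1: CMSDNet localizes the copy-move similar regions
    # (source and target together) in the input image.
    similarity_mask = cmsdnet.predict(image)

    # Step 2: STRDNet is fed the image together with the regions found by
    # CMSDNet and decides which detected region is the source and which
    # is the tampered target.
    source_target_labels = strdnet.predict(image, similarity_mask)

    return similarity_mask, source_target_labels
```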
- Linux
- NVIDIA GPU + CUDA + cuDNN
- Install TensorFlow and dependencies (a quick environment check is sketched after this list)
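Before training, it may help to verify that TensorFlow was installed with GPU support (the check below assumes TensorFlow 2.x):

```python
# Sanity check: confirm TensorFlow is installed and sees the GPU.
import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))  # should list at least one device
```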
- Training: Wu et al. [1] created a synthetic dataset with 100,100 samples. As in [1], the synthetic dataset is split into training, validation, and testing sets with a ratio of 8:1:1. The parameters of all layers are initialized with the Keras defaults. We find that CMSDNet converges after approximately 15 epochs of training. In the first 10 epochs, we use a minibatch gradient descent optimizer with momentum 0.9, an initial learning rate of 1.0e-3, and a minibatch size of 16. When the validation loss plateaus after 10 epochs, we reduce the learning rate to 1.0e-4 for 5 more epochs. STRDNet converges after approximately 10 epochs of training; its optimizer settings are the same as for CMSDNet, except that the learning rate is kept at 1.0e-3 throughout and the minibatch size is set to 64. (A rough Keras sketch of this schedule is given after this list.)
- Testing: To test the generalization ability of our algorithm, three standard datasets, i.e., CASIA v2.0, CoMoFoD, and COVERAGE, are used to evaluate the model trained on the synthetic dataset of Wu et al. [1].
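The training schedule described above can be summarized in Keras roughly as follows. This is only a sketch: `build_cmsdnet()`, `build_strdnet()`, the loss functions, and the data arrays are hypothetical placeholders; only the optimizer type, momentum, learning rates, epoch counts, and batch sizes come from the description above (TensorFlow 2.x Keras API assumed).

```python
# Sketch of the two-phase CMSDNet schedule and the STRDNet schedule.
# Model constructors, losses, and data (x_*, y_*) are hypothetical placeholders.
from tensorflow.keras.optimizers import SGD

cmsdnet = build_cmsdnet()  # hypothetical constructor

# Phase 1: SGD with momentum 0.9, lr = 1.0e-3, minibatch size 16, ~10 epochs.
cmsdnet.compile(optimizer=SGD(learning_rate=1e-3, momentum=0.9),
                loss='binary_crossentropy')  # loss assumed
cmsdnet.fit(x_train, y_train, validation_data=(x_val, y_val),
            epochs=10, batch_size=16)

# Phase 2: once the validation loss plateaus, lower lr to 1.0e-4 for 5 more epochs.
cmsdnet.compile(optimizer=SGD(learning_rate=1e-4, momentum=0.9),
                loss='binary_crossentropy')
cmsdnet.fit(x_train, y_train, validation_data=(x_val, y_val),
            epochs=5, batch_size=16)

# STRDNet: same optimizer settings, but lr fixed at 1.0e-3 and batch size 64.
strdnet = build_strdnet()  # hypothetical constructor
strdnet.compile(optimizer=SGD(learning_rate=1e-3, momentum=0.9),
                loss='categorical_crossentropy')  # loss assumed
strdnet.fit(x_train_s, y_train_s, validation_data=(x_val_s, y_val_s),
            epochs=10, batch_size=64)
```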
- Train CMSDNet by running CMSDNet.py
- Train STRDNet by running STRDNet.py
- Place the test image in the root directory and name it 'test.png'
- Run CMSDNetTest.py to get the result of CMSDNet
- Run STRDNetTest.py to get the result of STRDNet
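For orientation, the steps performed by a test run look roughly like the sketch below. The weight file name, the 256x256 input size, and the normalization are assumptions; the authoritative logic is in CMSDNetTest.py and STRDNetTest.py.

```python
# Rough inference sketch; file names and preprocessing are assumptions.
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

# Load the test image placed in the root directory.
img = Image.open('test.png').convert('RGB').resize((256, 256))
x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)

# Hypothetical weight file saved by CMSDNet.py.
cmsdnet = load_model('CMSDNet.h5')
similarity_mask = cmsdnet.predict(x)[0]  # copy-move similarity localization

# STRDNet then labels the detected regions as source or target;
# see STRDNetTest.py for the actual interface.
```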
- [1] Y. Wu, W. Abd-Almageed, and P. Natarajan, "BusterNet: Detecting copy-move image forgery with source/target localization," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 168–184.