Genetic Algorithm Adversarial Attack Based on Deep Neural Networks
Abstract:
Deep neural networks (DNNs) can achieve good classification and recognition performance, but their recognition accuracy drops sharply when tiny perturbations are added to the training images as an adversarial attack. In the proposed method, a genetic algorithm first searches for the optimal perturbation, and then only a very small number of pixels in the image are modified to generate adversarial examples, which successfully attack three convolutional-neural-network image classifiers, including VGG16. Parameters such as the number of images processed per batch and the number of modified pixels are adjusted in the experiments. The results show that, under a one-pixel attack on the three classification models, 67.92% of the natural images in the CIFAR-10 data set can be perturbed to at least one target class, with an average confidence of 79.57%, and the attack effect improves further as the number of modified pixels increases. In addition, compared with the LSA and FGSM methods, the attack effect is significantly better.
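To make the attack pipeline described in the abstract concrete, the following is a minimal sketch of a genetic-algorithm one-pixel attack on a CIFAR-10 classifier. It is not the paper's implementation: the `predict_proba` interface, the population size, the crossover/mutation operators, and the generation count are all illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a genetic-algorithm one-pixel attack (illustrative only).
# Assumed interface: predict_proba(batch) -> (N, num_classes) class probabilities
# for a batch of 32x32x3 uint8 images; all hyperparameters are placeholder defaults.
import numpy as np

H, W, C = 32, 32, 3  # CIFAR-10 image shape

def apply_perturbation(image, gene):
    """Return a copy of `image` with one pixel replaced by the gene's (x, y, r, g, b)."""
    adv = image.copy()
    x, y = int(gene[0]) % W, int(gene[1]) % H
    adv[y, x] = np.clip(gene[2:], 0, 255).astype(adv.dtype)
    return adv

def fitness(population, image, target_class, predict_proba):
    """Fitness = classifier confidence in the target class for each candidate pixel."""
    batch = np.stack([apply_perturbation(image, g) for g in population])
    return predict_proba(batch)[:, target_class]

def ga_one_pixel_attack(image, target_class, predict_proba,
                        pop_size=50, generations=100, mutation_sigma=16.0):
    rng = np.random.default_rng(0)
    # Random initial population: pixel coordinates plus RGB values.
    population = np.column_stack([
        rng.integers(0, W, pop_size),
        rng.integers(0, H, pop_size),
        rng.integers(0, 256, (pop_size, C)),
    ]).astype(float)

    for _ in range(generations):
        scores = fitness(population, image, target_class, predict_proba)
        order = np.argsort(scores)[::-1]
        elite = population[order[: pop_size // 2]]      # selection: keep the best half
        n_children = pop_size - len(elite)
        parents_a = elite[rng.integers(0, len(elite), n_children)]
        parents_b = elite[rng.integers(0, len(elite), n_children)]
        children = (parents_a + parents_b) / 2.0        # crossover: average two parents
        children += rng.normal(0.0, mutation_sigma, children.shape)  # mutation: Gaussian noise
        population = np.vstack([elite, children])

    best = population[np.argmax(fitness(population, image, target_class, predict_proba))]
    return apply_perturbation(image, best), best
```

In use, `predict_proba` would wrap the softmax output of a trained VGG16 or other CNN classifier, and a multi-pixel variant of the attack simply extends each individual to several (x, y, r, g, b) tuples.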
Authors:
范海菊 (Fan Haiju), 马锦程 (Ma Jincheng), 李名 (Li Ming)
Affiliation:
College of Computer and Information Engineering; Henan Provincial Key Laboratory of Educational Artificial Intelligence and Personalized Learning
Cite this article:
Fan Haiju, Ma Jincheng, Li Ming. Genetic algorithm adversarial attack based on deep neural network[J]. Journal of Henan Normal University (Natural Science Edition), 2025, 53(2): 82-90. DOI: 10.16366/j.cnki.1000-2367.2023.09.21.0003.
Funding:
National Natural Science Foundation of China; Henan Province Science and Technology Research Project; Key Scientific Research Project of Higher Education Institutions of Henan Province
Keywords:
convolutional neural network; genetic algorithm; adversarial attack; image classification; information security
CLC number:
O413