Author(s):
P. Sathish Kumar, K.V.D. Kiran
Email(s):
pandaramsathishkumar@gmail.com
DOI:
10.52711/2321-581X.2023.00002
Address:
P. Sathish Kumar1, K.V.D. Kiran2
1,2Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation,
Vaddeswaram, AP, India.
*Corresponding Author
Published In:
Volume - 14,
Issue - 1,
Year - 2023
ABSTRACT:
Deep neural networks (DNNs) used as machine learning (ML) models are particularly vulnerable to adversarial samples. Such samples are typically crafted by adding low-magnitude noise to genuine samples, so that they remain close to the originals yet deceive the target models. Because adversarial samples often transfer across models, black-box attacks are feasible in a variety of real-world scenarios. The main goal of this project is to implement a white-box adversarial attack using PyTorch and then offer a defense strategy as a countermeasure. We employ a powerful attack, the Momentum Iterative Fast Gradient Sign Method (MI-FGSM), which outperforms the Iterative Fast Gradient Sign Method (I-FGSM) by accumulating a momentum term across iterations; the use of MI-FGSM also greatly enhances the transferability of the generated adversarial samples. A further objective of this project is to combine machine learning algorithms with quantum annealing solvers for the execution of adversarial attacks and defenses. Model-specific countermeasures are then applied depending on whether an attack is detected. Finally, we present experimental findings that validate the developed attack by assessing the robustness of various models as well as the defensive strategies.
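For illustration, below is a minimal PyTorch sketch of MI-FGSM under an L-infinity perturbation budget. This is not the exact code used in the work: the function name mi_fgsm and the parameters eps, steps, and decay are our own labels, and image batches of shape (N, C, H, W) with pixel values in [0, 1] are assumed.

import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8/255, steps=10, decay=1.0):
    # Illustrative sketch of MI-FGSM (momentum-accumulated iterative FGSM).
    x = x.detach()
    alpha = eps / steps                 # per-step size so the total stays within eps
    g = torch.zeros_like(x)             # momentum (velocity) accumulator
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the gradient by its per-sample L1 norm, then accumulate momentum.
        grad = grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        g = decay * g + grad
        # Step along the sign of the velocity, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()

Setting decay to 0 recovers plain I-FGSM, which is precisely the comparison the abstract draws: the momentum term stabilizes the update direction across iterations, which is what improves transferability.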
Cite this article:
P. Sathish Kumar, K.V.D. Kiran. Momentum Iterative Fast Gradient Sign Algorithm for Adversarial Attacks and Defenses. Research Journal of Engineering and Technology. 2023; 14(1):7-4. doi: 10.52711/2321-581X.2023.00002
Cite(Electronic):
P. Sathish Kumar, K.V.D. Kiran. Momentum Iterative Fast Gradient Sign Algorithm for Adversarial Attacks and Defenses. Research Journal of Engineering and Technology. 2023; 14(1):7-4. doi: 10.52711/2321-581X.2023.00002 Available on: https://ijersonline.org/AbstractView.aspx?PID=2023-14-1-2