Background
Type: Conference Paper

Effective Adversarial Attacks on Images by Class Activation Mapping

Year: 2024
Etehadi-Abari M., Naghsh-Nilchi A., Hoseinnezhad R.

Abstract

Adversarial attacks on images, where intentional noise is added to deceive machine learning models, have emerged both as a significant security concern and as a useful tool for enhancing privacy, protecting intellectual property, and supporting creative industries. This paper introduces a novel information-theoretic approach to crafting effective adversarial attacks, focusing on the parts of an image that carry information relevant to the decision-making of typical deep learning models for object detection and classification. Our method, Fisher-CAM, generates class activation maps (CAMs) using Fisher information to identify the most significant regions of the input image. The adversarial noise is crafted by augmenting images based on these regions and iteratively updating the perturbation pattern through gradient computation and momentum updates. Extensive experiments on a subset of the ImageNet dataset demonstrate that our method surpasses state-of-the-art attack performance in both white-box and black-box settings. © 2024 IEEE.
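The abstract describes the attack loop only at a high level (saliency-guided perturbation with gradient and momentum updates). As a rough illustration, the sketch below shows what such a CAM-masked, momentum-based iterative update could look like in PyTorch, in the style of MI-FGSM. The mask `cam_mask`, the function name, and all parameter defaults are assumptions for illustration only; the paper's Fisher-information-based CAM computation is not specified in the abstract and is not reproduced here.

```python
# Hypothetical sketch: momentum-based iterative attack whose update is
# restricted to regions flagged as salient by a class activation map.
# `cam_mask` is a stand-in for the (unspecified) Fisher-CAM saliency map.
import torch
import torch.nn.functional as F

def masked_momentum_attack(model, x, y, cam_mask, eps=8/255, steps=10, mu=1.0):
    """Iteratively perturb x within an L-inf ball of radius eps,
    concentrating the perturbation on high-saliency regions.

    x:        input batch, shape (B, C, H, W), values in [0, 1]
    y:        ground-truth labels, shape (B,)
    cam_mask: saliency mask broadcastable to x, values in [0, 1]
    """
    alpha = eps / steps            # per-step step size
    g = torch.zeros_like(x)        # accumulated (momentum) gradient
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Momentum accumulation with L1-normalized gradient (as in MI-FGSM)
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        # Apply the signed update only where the activation map is salient
        x_adv = x_adv.detach() + alpha * cam_mask * g.sign()
        # Project back into the eps-ball around x and the valid pixel range
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Masking the update with the activation map keeps the perturbation budget concentrated on decision-relevant regions, which is the intuition the abstract attributes to Fisher-CAM; how that map is actually derived from Fisher information is detailed in the paper itself.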