Deep learning-based face verification systems are vulnerable to adversarial attacks, i.e., imperceptible perturbations of input images that cause neural network models to misrecognize images of the same person. Generating adversarial attacks to evaluate the robustness of these systems is crucial for their reliable deployment. However, most effective attack methods focus on classification tasks and operate in the white-box setting, where attackers can access the gradient information and internal architecture of victim models. Such unrealistic scenarios overestimate the adversarial risk. We investigate the potential of crafting adversarial perturbations in the black-box setting, where attackers can observe only the input images and output responses of victim models. We employ a genetic algorithm (GA) to search for adversarial patches that blend well into facial images. The search involves two conflicting objectives: attack performance and reconstruction quality. We consider four GA variants: a GA optimizing a combined fitness function of the two objectives, a GA that favors well-blended adversarial patches, a GA that focuses on attack performance, and a multi-objective GA that optimizes both objectives separately and simultaneously. Our methods demonstrate strong performance in attacking face verification systems under the realistic black-box setting and generate more natural-looking patches than baseline approaches.
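To make the search concrete, below is a minimal sketch (not the authors' implementation) of the first variant: a single-objective GA that evolves patch pixels under a combined fitness of attack performance and reconstruction quality. The black-box query query_similarity, the weight ALPHA, the PSNR normalization, and the selection, crossover, and mutation operators are illustrative assumptions; the patch size, population size, and mutation rate follow Table 1.

import numpy as np

rng = np.random.default_rng(0)

PATCH = 20      # patch is PATCH x PATCH pixels (Table 1)
POP = 80        # population size N (Table 1)
MUT_RATE = 0.5  # mutation rate (Table 1)
ALPHA = 0.5     # hypothetical weight balancing the two objectives


def psnr(clean, patched):
    # Reconstruction quality: PSNR between the clean and patched face image.
    mse = np.mean((clean.astype(np.float64) - patched.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)


def apply_patch(image, patch, y=0, x=0):
    # Paste the patch onto a copy of the image at location (y, x).
    out = image.copy()
    out[y:y + PATCH, x:x + PATCH] = patch
    return out


def combined_fitness(image, patch, query_similarity):
    # Lower similarity means a stronger attack; higher PSNR means a better-blended patch.
    patched = apply_patch(image, patch)
    attack_score = -query_similarity(patched)      # only black-box queries are used
    recons_score = psnr(image, patched) / 50.0     # rough normalization (assumption)
    return attack_score + ALPHA * recons_score


def evolve(image, query_similarity, generations=100):
    # Plain generational GA: truncation selection, uniform crossover, Gaussian mutation.
    pop = rng.integers(0, 256, size=(POP, PATCH, PATCH, 3)).astype(np.float64)
    for _ in range(generations):
        fitness = np.array([combined_fitness(image, p, query_similarity) for p in pop])
        parents = pop[np.argsort(fitness)[::-1][:POP // 2]]   # keep the better half
        children = []
        while len(parents) + len(children) < POP:
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(a.shape) < 0.5                   # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(child.shape) < MUT_RATE        # per-pixel mutation
            child = np.clip(child + mutate * rng.normal(0, 16, child.shape), 0, 255)
            children.append(child)
        pop = np.concatenate([parents, np.stack(children)])
    fitness = np.array([combined_fitness(image, p, query_similarity) for p in pop])
    return pop[int(np.argmax(fitness))]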
Figure 2: Creation and evaluation of an adversarial patch using a Genetic Algorithm (GA).
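The evaluation step in Figure 1 requires only querying the victim model. A hedged sketch of that success check follows, where embed stands for the victim's face-embedding network (a black box to the attacker) and THRESHOLD for its decision threshold; both names and the cosine-similarity decision rule are assumptions for illustration.

import numpy as np

THRESHOLD = 0.3  # hypothetical verification decision threshold

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def attack_succeeds(embed, patched_image, reference_image):
    # The attack succeeds if the patched face no longer matches the reference identity.
    sim = cosine_similarity(embed(patched_image), embed(reference_image))
    return sim < THRESHOLD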
Figure 2: An example Pareto front of NSGA-II. Green points represent successful attacks, while red points represent unsuccessful attacks. The highlighted point in the middle represents the successful attack with the highest \(\mathcal{F}_\text{recons}\).
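A small sketch (with hypothetical data structures) of how that highlighted point can be chosen from the NSGA-II front: among the non-dominated solutions whose attack actually succeeds, keep the one with the highest reconstruction objective \(\mathcal{F}_\text{recons}\).

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Solution:
    f_attack: float   # attack objective (e.g., negative similarity score)
    f_recons: float   # reconstruction objective
    success: bool     # whether the verification model is actually fooled

def pick_from_front(front: List[Solution]) -> Optional[Solution]:
    # Return the successful attack with the highest F_recons, if any exists.
    successful = [s for s in front if s.success]
    return max(successful, key=lambda s: s.f_recons) if successful else None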
Table 1: Patch size \(20 \times 20\), population size \(N = 80\), mutation rate \(0.5\).
Figure 3: Adversarial and PSNR scores of the proposed approaches across \(10,000\) iterations (i.e., generations).
Figure 4: Comparison of result images generated by different algorithms. Green borders indicate successful attacks, and red borders denote failed attacks.
@inproceedings{zz,
author = {Khoa Tran and Linh Ly and Ngoc Hoang Luong},
title = {{Evolutionary Black-box Patch Attacks on Face Verification}},
booktitle = {GECCO '25 Companion: Proceedings of the Genetic and Evolutionary Computation Conference Companion},
address = {Málaga, Spain},
publisher = {{ACM}},
year = {2025}
}