AdvPaint: Protecting Images from Inpainting Manipulation via Adversarial Attention Disruption


Unlike current adversarial approaches (a) and (b), which apply a single perturbation across the entire image,
our method disrupts the Stable Diffusion inpainting model's attention mechanism,
separately optimizing perturbations for the two regions divided by an enlarged bounding box around the object.

Abstract

The outstanding capability of diffusion models in generating high-quality images poses significant threats when misused by adversaries. In particular, we consider malicious adversaries who exploit diffusion models for inpainting tasks, such as replacing a specific region of an image with a celebrity. While existing methods for protecting images from manipulation in diffusion-based generative models have primarily focused on image-to-image and text-to-image tasks, the challenge of preventing unauthorized inpainting has rarely been addressed, often resulting in suboptimal protection performance. To mitigate inpainting abuses, we propose AdvPaint, a novel defensive framework that generates adversarial perturbations to effectively disrupt the adversary's inpainting tasks. AdvPaint targets the self- and cross-attention blocks in a target diffusion inpainting model to disrupt semantic understanding and prompt interactions during image generation. AdvPaint also employs a two-stage perturbation strategy, dividing the perturbation region based on an enlarged bounding box around the object, which enhances robustness across diverse masks of varying shapes and sizes. Our experimental results demonstrate that AdvPaint's perturbations are highly effective in disrupting the adversary's inpainting tasks, outperforming existing methods; AdvPaint attains over a 100-point increase in FID and substantial decreases in precision.
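To make the two-stage, attention-targeted idea concrete, the snippet below is a minimal, self-contained sketch and not the authors' implementation: a toy attention block and a small convolution stand in for the inpainting UNet's attention layers and latent features, and two PGD-style stages optimize perturbations inside and outside an enlarged bounding box. All module names, hyperparameters, and the surrogate model are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyAttentionBlock(nn.Module):
    """Stand-in for one self-/cross-attention block inside an inpainting UNet."""

    def __init__(self, dim=16):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)

    def forward(self, x, context=None):
        context = x if context is None else context
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        scores = q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5
        return scores.softmax(dim=-1) @ v


def enlarged_bbox_mask(h, w, bbox, scale=1.5):
    """Binary mask covering a bounding box enlarged by `scale` around its center."""
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    hw, hh = (x1 - x0) * scale / 2.0, (y1 - y0) * scale / 2.0
    mask = torch.zeros(1, 1, h, w)
    mask[..., int(max(cy - hh, 0)):int(min(cy + hh, h)),
              int(max(cx - hw, 0)):int(min(cx + hw, w))] = 1.0
    return mask


def attack_region(image, region_mask, featurize, attn, clean_out,
                  steps=50, eps=8 / 255, alpha=1 / 255):
    """PGD-style loop: push the attention output away from its clean value,
    perturbing only the pixels selected by `region_mask`."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = (image + delta * region_mask).clamp(0, 1)
        loss = -F.mse_loss(attn(featurize(adv)), clean_out)  # maximize disruption
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (delta * region_mask).detach()


# --- usage sketch on a random image ---
image = torch.rand(1, 3, 64, 64)
conv = nn.Conv2d(3, 16, 3, padding=1)   # toy surrogate for the latent feature extractor
attn = ToyAttentionBlock(dim=16)
for p in list(conv.parameters()) + list(attn.parameters()):
    p.requires_grad_(False)


def featurize(x):
    # image -> token sequence (B, H*W, C), mimicking spatial features fed to attention
    return conv(x).flatten(2).transpose(1, 2)


with torch.no_grad():
    clean_out = attn(featurize(image))

box = enlarged_bbox_mask(64, 64, bbox=(20, 20, 44, 44))               # enlarged box around the object
delta_obj = attack_region(image, box, featurize, attn, clean_out)     # stage 1: object region
delta_bg = attack_region(image, 1 - box, featurize, attn, clean_out)  # stage 2: background region
protected = (image + delta_obj + delta_bg).clamp(0, 1)
```

In this sketch, optimizing the two regions separately mirrors the paper's motivation: masks drawn over the object and masks drawn over the background expose different parts of the image to the inpainting model, so each region gets its own perturbation rather than one perturbation shared across the whole image.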

Results Gallery

TBW