The outstanding capability of diffusion models in generating high-quality images poses significant threats when misused by adversaries. In particular, we consider malicious adversaries who exploit diffusion models for inpainting tasks, such as replacing a specific region with a celebrity. While existing methods for protecting images from manipulation by diffusion-based generative models have primarily focused on image-to-image and text-to-image tasks, the challenge of preventing unauthorized inpainting has rarely been addressed, often resulting in suboptimal protection performance. To mitigate inpainting abuses, we propose AdvPaint, a novel defensive framework that generates adversarial perturbations that effectively disrupt the adversary's inpainting tasks. AdvPaint targets the self- and cross-attention blocks in a target diffusion inpainting model to distract semantic understanding and prompt interactions during image generation. AdvPaint further employs a two-stage perturbation strategy, dividing the perturbation region based on an enlarged bounding box around the object, which enhances robustness across diverse masks of varying shapes and sizes. Our experimental results demonstrate that AdvPaint's perturbations are highly effective in disrupting the adversary's inpainting tasks, outperforming existing methods; AdvPaint attains over a 100-point increase in FID and substantial decreases in precision.
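To make the described optimization concrete, the following is a minimal, hypothetical PyTorch sketch of an attention-targeted perturbation with a two-stage region split based on an enlarged object bounding box. The helper forward_with_attn (assumed to run the inpainting U-Net and return its self-/cross-attention maps), the function names, and the hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch: two-stage, attention-targeted perturbation (illustrative only).
import torch
import torch.nn.functional as F

def enlarge_bbox(bbox, scale, h, w):
    """Enlarge an object bounding box (x0, y0, x1, y1) by `scale`, clipped to the image."""
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    bw, bh = (x1 - x0) * scale, (y1 - y0) * scale
    return (max(0, int(cx - bw / 2)), max(0, int(cy - bh / 2)),
            min(w, int(cx + bw / 2)), min(h, int(cy + bh / 2)))

def two_stage_masks(bbox, scale, h, w):
    """Split the image into an inner region (the enlarged box) and its complement."""
    ex0, ey0, ex1, ey1 = enlarge_bbox(bbox, scale, h, w)
    inner = torch.zeros(1, 1, h, w)
    inner[..., ey0:ey1, ex0:ex1] = 1.0
    return inner, 1.0 - inner

def attention_loss(attn_adv, attn_clean):
    """Negative MSE: minimizing this pushes the perturbed image's self-/cross-attention
    maps away from those of the clean image."""
    loss = 0.0
    for a_adv, a_clean in zip(attn_adv, attn_clean):
        loss = loss - F.mse_loss(a_adv, a_clean)
    return loss

def optimize_perturbation(x, region_mask, forward_with_attn,
                          steps=100, eps=8 / 255, alpha=1 / 255):
    """PGD-style loop restricted to one region mask.
    `forward_with_attn` is an assumed hook that runs the inpainting model and
    returns its self-/cross-attention maps."""
    with torch.no_grad():
        attn_clean = forward_with_attn(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        attn_adv = forward_with_attn(x + delta * region_mask)
        loss = attention_loss(attn_adv, attn_clean)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descent on the loss = larger attention deviation
            delta.clamp_(-eps, eps)             # keep the perturbation within an L-infinity budget
            delta.grad.zero_()
    return (delta * region_mask).detach()
```

In this sketch, the two masks returned by two_stage_masks would be passed to optimize_perturbation in turn, yielding separate perturbations for the object-centered region and the surrounding background, mirroring the two-stage strategy described above.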
AdvPaint effectively degrades the output images under various inpainting manipulations that involve large spatial changes (e.g., removing objects or inserting new objects). State-of-the-art adversarial examples show limitations in protecting input images, as the generated outputs still harmonize with the prompts.
Our proposed method (a) redirects the model's attention to other regions of the image, while (b) focusing it on the newly generated object.
AdvPaint remains robust in diverse inpainting cases where masks vary in size and shape, even when they exceed or overlap with the optimization boundary (shown in red).