The 36th British Machine Vision Conference 2025

AegisRF: Adversarial Perturbations Guided with Sensitivity for Protecting Intellectual Property of Neural Radiance Fields

Woo Jae Kim, Kyu Beom Han, Yoonki Cho, Youngju Na, Junsik Jung, Sooel Son, and Sung-Eui Yoon

Korea Advanced Institute of Science and Technology (KAIST)


[Paper] [Code]

Abstract

As Neural Radiance Fields (NeRFs) have emerged as a powerful tool for 3D scene representation and novel view synthesis, protecting their intellectual property (IP) from unauthorized use is becoming increasingly crucial. In this work, we aim to protect the IP of NeRFs by injecting adversarial perturbations that disrupt their unauthorized applications. However, perturbing the 3D geometry of NeRFs can easily deform the underlying scene structure and thus substantially degrade the rendering quality, which has led existing attempts to avoid geometric perturbations or restrict them to explicit spaces like meshes. To overcome this limitation, we introduce a learnable sensitivity to quantify the spatially varying impact of geometric perturbations on rendering quality. Building upon this, we propose AegisRF, a novel framework that consists of a Perturbation Field, which injects adversarial perturbations into the pre-rendering outputs (color and volume density) of NeRF models to fool an unauthorized downstream target model, and a Sensitivity Field, which learns the sensitivity to adaptively constrain geometric perturbations, preserving rendering quality while disrupting unauthorized use. Our experimental evaluations demonstrate the generalized applicability of AegisRF across diverse downstream tasks and modalities, including multi-view image classification and voxel-based 3D localization, while maintaining high visual fidelity. Code is available via the [Code] link above.


Motivations

(a) Geometric perturbations applied without considering their spatially varying impact on rendering quality cause severe visual degradation.

(b) Our approach mitigates this by measuring the sensitivity of rendering quality to geometric perturbations and adaptively constraining their magnitudes. For example, perturbations are restricted in empty regions (red point), where disruptions would introduce new artifacts, while larger perturbations are allowed in more complex regions (green point), where they are better masked by the existing structural complexity.
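The adaptive constraint above can be illustrated with a minimal sketch. The modulation rule here is an assumption for illustration (scaling each raw density perturbation by one minus a per-point sensitivity in [0, 1]); the paper learns the sensitivity with a dedicated Sensitivity Field rather than fixing it by hand.

```python
import numpy as np

def modulate_perturbation(delta_sigma, sensitivity):
    # Hypothetical modulation: shrink geometric perturbations where the
    # rendering quality is highly sensitive (e.g., empty space), and keep
    # them larger where structural complexity masks the disruption.
    return (1.0 - sensitivity) * delta_sigma

delta = np.array([2.0, 2.0, 2.0, 2.0])  # raw density perturbations (assumed values)
s = np.array([0.9, 0.8, 0.3, 0.1])      # high sensitivity at empty-space points

print(modulate_perturbation(delta, s))  # -> [0.2 0.4 1.4 1.8]
```

The resulting constrained perturbation is small at the high-sensitivity (empty) points and close to the raw magnitude at the low-sensitivity (complex) points.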


Our Approach: AegisRF

Overview of AegisRF. For a 3D point (x, d), the Perturbation Field produces appearance (δc) and geometry (δσ) perturbations, while the Sensitivity Field predicts a sensitivity (s) to adaptively constrain the geometry perturbation (δ̂σ). These perturb the NeRF outputs (c, σ) into perturbed versions (c′, σ′), forming adversarial examples in various data forms that disrupt the targeted unauthorized downstream task (Lpro) while preserving rendering quality (Lnat).
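The pipeline above can be sketched end to end with standard NeRF volume rendering: the pre-rendering outputs are shifted by the perturbations (c′ = c + δc, σ′ = σ + δ̂σ) and then composited along the ray as usual. All numeric values below are hypothetical, and the additive form of the injection is an assumption based on the overview; the actual fields are learned networks conditioned on (x, d).

```python
import numpy as np

def volume_render(colors, sigmas, deltas):
    # Standard NeRF compositing along one ray:
    # alpha_i = 1 - exp(-sigma_i * delta_i), weights = T_i * alpha_i
    alpha = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0)

# Clean per-sample NeRF outputs along a ray (hypothetical values)
c = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.1, 0.9]])
sigma = np.array([0.5, 2.0, 4.0])
deltas = np.array([0.1, 0.1, 0.1])  # inter-sample distances

# Perturbation Field outputs; delta_sigma_hat is already sensitivity-constrained
delta_c = np.array([[0.05, -0.02, 0.0], [0.0, 0.03, -0.01], [0.02, 0.0, 0.0]])
delta_sigma_hat = np.array([0.1, -0.3, 0.5])

clean_rgb = volume_render(c, sigma, deltas)
adv_rgb = volume_render(np.clip(c + delta_c, 0.0, 1.0),
                        np.maximum(sigma + delta_sigma_hat, 0.0),
                        deltas)
print(clean_rgb, adv_rgb)
```

The rendered adversarial pixel (adv_rgb) stays close to the clean one under small perturbations, while training would push it to fool the downstream model (Lpro) subject to the fidelity constraint (Lnat).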


Qualitative Results

Visualizations of rendered images and model predictions on NeRF, NeRFail, Adv-FT (λpro = 1 for 3D localization, λpro = 0.003 for multi-view classification), and our AegisRF. Our AegisRF shows superior rendering quality compared to NeRFail and Adv-FT.

Quantitative Results