SAGA: Spectral Adversarial Geometric Attack on 3D Meshes

ICCV 2023

Tel Aviv University
*Equal contribution
SAGA results. Two of SAGA's adversarial shapes (a cow and a human face) and their reconstructions by an autoencoder (AE). The reconstructions are of shapes from different semantic classes (a hippo and a face of a different person).
A result of our geometric mesh attack. A mesh of a sphere (top left) is perturbed into an adversarial example (top right). While the original mesh is accurately reconstructed by an AE (bottom left), our attack fools the AE and changes the output geometry to a cube (bottom right)!

Abstract

A triangular mesh is one of the most popular 3D data representations. As such, deep neural networks for mesh processing are widely deployed and attracting increasing attention. However, neural networks are prone to adversarial attacks, where carefully crafted inputs impair the model's functionality. Exploring these vulnerabilities is a fundamental factor in the future development of 3D-based applications. Recently, mesh attacks have been studied at the semantic level, where classifiers are misled into producing wrong predictions. Nevertheless, mesh surfaces possess complex geometric attributes beyond their semantic meaning, and their analysis often requires encoding and reconstructing the geometry of the shape.

We propose a novel framework for a geometric adversarial attack on a 3D mesh autoencoder. In this setting, an adversarial input mesh deceives the autoencoder by forcing it to reconstruct a different geometric shape at its output. The malicious input is produced by perturbing a clean shape in the spectral domain. Our method leverages the spectral decomposition of the mesh along with additional mesh-related properties to obtain visually credible results that consider the delicacy of surface distortions. Our code is publicly available.
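The core idea of perturbing a shape in the spectral domain can be illustrated with a minimal sketch: project the mesh's vertex coordinates onto the eigenbasis of its graph Laplacian and modify the low-frequency coefficients. The toy tetrahedron, the choice of combinatorial Laplacian, and the perturbation magnitude below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def graph_laplacian(n_verts, edges):
    """Combinatorial Laplacian L = D - A of the mesh connectivity graph."""
    A = np.zeros((n_verts, n_verts))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

# Toy "mesh": a tetrahedron (4 vertices, all pairs connected by edges).
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

L = graph_laplacian(len(verts), edges)
_, U = np.linalg.eigh(L)       # eigenvectors form the spectral basis (low to high frequency)

coeffs = U.T @ verts           # spectral coefficients of the geometry (one set per x/y/z)
coeffs_adv = coeffs.copy()
coeffs_adv[:2] += 0.05         # perturb only the low-frequency coefficients (illustrative size)

verts_adv = U @ coeffs_adv     # synthesize the perturbed (adversarial) vertex positions
```

Restricting the perturbation to low-frequency coefficients yields smooth, shape-wide deformations rather than high-frequency vertex noise, which is what keeps the adversarial surface visually plausible.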

Video

Method



The proposed attack framework. Attack parameters perturb the spectral coefficients of the source shape to craft an adversarial example. The malicious input (Adversary) misleads the AE to reconstruct the geometry of the target mesh. The perturbation is optimized using a loss function that compares the AE's output with the target shape, and regularizes the adversarial shape to preserve the source's geometric properties.
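The optimization loop described above can be sketched with a stand-in linear "autoencoder" and analytic gradients. Everything here is a placeholder under stated assumptions: the matrix `W` stands in for a trained mesh AE, the perturbation is applied directly rather than through the spectral basis, and `lam` is a generic regularization weight, not the paper's mesh-specific regularizers.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 12                              # stand-in for the (flattened) shape dimension
source = rng.normal(size=n)         # clean source geometry (placeholder)
target = rng.normal(size=n)         # geometry the AE should be fooled into reconstructing
W = rng.normal(size=(n, n)) / n     # stand-in linear "AE": AE(x) = W @ x

delta = np.zeros(n)                 # the optimized perturbation (attack parameters)
lam, lr = 0.1, 0.5                  # regularization weight and step size (placeholders)

for _ in range(500):
    adv = source + delta
    # loss = ||AE(adv) - target||^2 + lam * ||adv - source||^2
    recon_err = W @ adv - target
    grad = 2 * W.T @ recon_err + 2 * lam * delta   # analytic gradient w.r.t. delta
    delta -= lr * grad

adv = source + delta                # final adversarial shape
```

The first loss term drives the AE's output toward the target geometry; the second keeps the adversarial input close to the source, mirroring the trade-off between attack success and visual fidelity described in the caption.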

Evolution

Attack evolution. On the left, we present the perturbations of two source shapes during the progress of the optimization process. On the right, we show the AE's reconstructions of the perturbed shapes. SAGA misleads the AE to reconstruct the target mesh while regularizing the deformation of the source shape.

Transferability

Attack transferability. A source shape (top left) is perturbed by SAGA into an adversarial example (top middle). The adversarial shape passes through three different AEs. The first (top right) is the victim AE used in the attack, with a multilayer perceptron (MLP) architecture (denoted as Victim MLP). The second AE (bottom left) has the same MLP architecture but was trained with a different random weight initialization (denoted as Other MLP). The third (bottom right) is a convolutional AE (denoted as CoMA). All three AEs are misled into changing the geometry of the input shape.

Attack transferability. Another example of SAGA's transferability to other AEs.

Visual Results

We exhibit SAGA's results and compare them to Lang et al.'s point cloud (PC) geometric attack.


Geometric attacks comparison. SAGA's results on the SMAL dataset. Each frame presents a different source-target pair. In each frame, top row, left to right: the clean source mesh, the PC attack's adversarial example, SAGA's adversarial example, and the clean target shape. Bottom row: the reconstructions of the shapes from the top row after passing through the AE. The heatmap encodes the per-vertex curvature distortion values between each adversarial example and the clean source shape, growing from white to red. Lang et al.'s attack severely distorts the source shape. In contrast, our SAGA better preserves the source and achieves the desired target reconstruction.

Geometric attacks comparison. SAGA's results on the CoMA dataset, as described in the previous figure.

Citation

@inproceedings{stolik2023saga,
  author    = {Stolik, Tomer and Lang, Itai and Avidan, Shai},
  title     = {{SAGA: Spectral Adversarial Geometric Attack on 3D Meshes}},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2023},
}

Acknowledgement

This page was inspired by Nerfies: Deformable Neural Radiance Fields. We thank the authors for sharing their source code.