PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields

1Stanford University, 2Adobe Research

PaletteNeRF enables a variety of appearance edits on NeRF models, such as recoloring, photorealistic style transfer, and illumination changes.


Recent advances in neural radiance fields have enabled the high-fidelity 3D reconstruction of complex scenes for novel view synthesis. However, it remains underexplored how the appearance of such representations can be efficiently edited while maintaining photorealism.

In this work, we present PaletteNeRF, a novel method for photorealistic appearance editing of neural radiance fields (NeRF) based on 3D color decomposition. Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases (i.e., 3D segmentations defined by a group of NeRF-type functions) that are shared across the scene. While our palette-based bases are view-independent, we also predict a view-dependent function to capture the color residual (e.g., specular shading). During training, we jointly optimize the basis functions and the color palettes, and we also introduce novel regularizers to encourage the spatial coherence of the decomposition.

Our method allows users to efficiently edit the appearance of the 3D scene by modifying the color palettes. We also extend our framework with compressed semantic features for semantic-aware appearance editing. We demonstrate that our technique is superior to baseline methods both quantitatively and qualitatively for appearance editing of complex real-world scenes.
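The decomposition described above can be sketched numerically: each 3D point's diffuse color is a weighted combination of shared palette colors (plus per-point offsets), with a view-dependent residual added on top. This is a minimal illustrative sketch, not the paper's implementation; in PaletteNeRF the weights, offsets, and residual are predicted by neural networks per point, and the function name here is our own.

```python
import numpy as np

def composite_color(weights, palette, offsets, view_dep):
    """Palette-based color composition for a single 3D point (sketch).

    weights:  (P,) blending weights over P palette bases at this point
    palette:  (P, 3) global RGB palette shared across the scene
    offsets:  (P, 3) per-point color offsets carrying texture detail
    view_dep: (3,) view-dependent color residual (e.g., specular shading)
    """
    # Diffuse part: linear combination of (palette color + offset) bases.
    diffuse = np.sum(weights[:, None] * (palette + offsets), axis=0)
    # Final radiance adds the view-dependent residual.
    return diffuse + view_dep
```

Recoloring then amounts to replacing entries of `palette` after training, which changes the whole scene coherently because the palette is shared across all points.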


Our model supports photorealistic and intuitive recoloring of captured scenes. Here we show our results for forward-facing scenes and 360 scenes. For each scene, we show one reference video and three recoloring results.

Our Results on Forward-Facing Captures

Our Results on 360 Captures

Photorealistic Style Transfer

Given a style image, our model can achieve photorealistic style transfer on the captured scene by optimizing a transformation of the palette functions. Here we show our results.
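The optimization of a palette transformation can be sketched as fitting an affine map on the palette colors against a statistic of the style image. The function below is an illustrative stand-in, not the paper's method: it matches only the mean color of the transformed palette to the mean of sampled style colors via gradient descent, whereas the actual style loss is richer.

```python
import numpy as np

def fit_palette_transform(palette, style_colors, steps=500, lr=0.1):
    """Fit an affine transform (A, b) so the transformed palette's mean
    color matches the mean of colors sampled from a style image.
    A simplified stand-in for the paper's style-transfer objective."""
    A = np.eye(3)
    b = np.zeros(3)
    tgt_mu = style_colors.mean(axis=0)
    for _ in range(steps):
        out = palette @ A.T + b
        # Gradients of 0.5 * ||mean(out) - tgt_mu||^2 w.r.t. A and b.
        diff = out.mean(axis=0) - tgt_mu
        A -= lr * np.outer(diff, palette.mean(axis=0))
        b -= lr * diff
    return A, b
```

Because the palette is shared across the scene, applying the fitted transform to the palette restyles every view consistently without retraining the radiance field.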

Additional Appearance Editing

Our model also supports two types of appearance edits by scaling the outputs: adjusting the specular shading of the scene by scaling the view-dependent colors, and modifying the texture of the captured objects by scaling the color offset functions.
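The two scaling edits can be sketched as scalar multipliers applied to the corresponding terms of the color composition. The parameter names below are our own assumptions, not the paper's API: setting the specular scale to zero removes view-dependent shading, while the offset scale rescales the per-point color offsets that carry texture detail.

```python
import numpy as np

def edited_color(weights, palette, offsets, view_dep,
                 specular_scale=1.0, offset_scale=1.0):
    """Apply the two scaling edits to one point's composited color
    (illustrative sketch; parameter names are assumptions)."""
    # offset_scale flattens or exaggerates texture; specular_scale
    # attenuates or boosts the view-dependent (specular) component.
    diffuse = np.sum(weights[:, None] * (palette + offset_scale * offsets), axis=0)
    return diffuse + specular_scale * view_dep
```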

Results of changing the view-dependent colors

Results of changing the color offset functions

Interactive GUI

We provide a real-time interactive GUI that supports all of our model's appearance edits.