Thanks for the outstanding reviews and feedback! We appreciate that the reviewers find the problem important and challenging (R2,R5), our idea novel (R1,R5) and interesting (R5), the proposed method technically correct (R1,R5) and reasonable (R2,R3), correctly validated (R1,R3), producing good results (R2,R3,R4), and the writing overall clear (R1-R5).
We will include all the requested references (NeRF editing, multi-view editing, and video editing papers), experiments (few-shot NeRF, multi-view consistency, disparity visualization, ablations), and writing suggestions (terminology, paragraph organization) in the revision.
## R1,R4,R5:Foreground-background mask
We use Photoshop to export a mask image of the edited area. Diffusion-based editing models usually require masks as input. We do not use the mask in the background loss.
## R1:"Novel views"
We will refer to them as "training views".
## R1:Similarity measure for background loss
The background loss is the **cosine similarity** between the rendered patch and the patch at the same location in the training view.
## R1:Patch size
Each feature point at the VGG feature layers [1,3,6,8,11,13,15,18,20,22,25,27,29] is treated as a patch.
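As a minimal sketch (assuming torchvision's VGG-16 layer indexing; the helper name is ours for illustration):
```python
import torchvision

# ReLU layers of torchvision's VGG-16 feature extractor (assumed indexing).
LAYERS = [1, 3, 6, 8, 11, 13, 15, 18, 20, 22, 25, 27, 29]
vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()

def extract_patch_features(img):
    # img: (1, 3, H, W) normalized image tensor.
    # Every spatial location of the selected feature maps is treated as one "patch".
    feats, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            feats.append(x)
    return feats
```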
## R1:Balancing foreground-background loss
Both losses use cosine similarity between VGG features. The main difference is that the foreground loss takes the *minimum* cosine loss over all patches in the reference image, while the background loss computes the cosine loss at the same patch location between the rendered view and the training view. Taking the minimum amounts to selecting between foreground and background, so there is no need to balance them.
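A minimal PyTorch-style sketch of the two losses, operating on per-patch VGG features (function and variable names are ours for illustration):
```python
import torch
import torch.nn.functional as F

def background_loss(rendered_feats, train_feats):
    # Cosine loss between features at the *same* spatial location of the
    # rendered view and the training view; inputs are (N, C) per-patch features.
    cos = F.cosine_similarity(rendered_feats, train_feats, dim=-1)
    return (1.0 - cos).mean()

def foreground_loss(rendered_feats, ref_feats):
    # For each rendered patch, take the reference patch with the *minimum*
    # cosine loss (nearest-neighbor feature matching, as in ARF).
    r = F.normalize(rendered_feats, dim=-1)   # (N, C)
    s = F.normalize(ref_feats, dim=-1)        # (M, C)
    cos_loss = 1.0 - r @ s.t()                # (N, M) pairwise cosine losses
    return cos_loss.min(dim=1).values.mean()
```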
## R1:Why are the results in TensoRF worse than the original paper?
TensoRF requires per-scene parameter tuning. We use the LLFF flower setting provided by TensoRF for all our experiments.
## R1,R3:Contribution over-claimed
We will tone down the contribution to forward-facing NeRF scene editing, change the title, and discuss the limitations in the introduction.
## R2:Baseline comparisons
Existing NeRF editing approaches have different goals (e.g., stylization, object pose or color editing, conditional radiance field editing), while our approach enables both local editing (object editing, insertion, removal) and global editing (stylization). None of the existing methods achieves both. We will include additional comparisons where applicable (e.g., inpainting, stylization).
## R2:Few-shot NeRF comparison
We will fix it in the revised paper.
## R2:Limited viewpoint change
All videos on the supplementary website are rendered from novel views (not training views). On L772-L774 we acknowledge that the viewpoint changes are limited (applicable to forward-facing scenes only).
## R2,R5:Foreground rendering quality
The quality of the foreground geometry is limited by the monocular depth estimator.
## R3,R4:Explanation of TV loss and the balancing terms
We use the same TV loss as TensoRF to handle the smaller number of training images in real-world scenes and to avoid getting stuck in local minima. We set the weights of all losses to 1.
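For reference, a minimal sketch of a total-variation regularizer in the spirit of TensoRF's TV loss on its feature planes (the function name and the squared-difference form are our assumptions):
```python
def tv_loss(plane):
    # plane: (C, H, W) feature/density plane; penalize squared differences
    # between neighboring entries along both spatial axes.
    dh = (plane[:, 1:, :] - plane[:, :-1, :]).pow(2).mean()
    dw = (plane[:, :, 1:] - plane[:, :, :-1]).pow(2).mean()
    return dh + dw
```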
## R3:Limitation
Our method works on forward-facing scenes and is constrained by the selected reference view. For areas unseen in the reference view, the edit cannot be applied to the scene (stylization, 360° scenes). In addition, the geometry of inserted objects is limited by the monocular depth estimation we use. We will clarify this.
## R2,R4:Compare to SINE/EditNeRF/NeuMesh
* **EditNeRF** mainly works on conditional radiance fields. Although it supports color editing in real-world scenes, it cannot edit object geometry.
* **NeuMesh** performs object pose editing and texture editing (color editing, without shape editing) from a single image and does not support object insertion, removal, or shape editing.
* **SINE:** the code is not available.
## R4:Technical contribution
Our technical contribution lies in the general pipeline and the loss design, which together enable both global and local NeRF editing.
## R4:Losses are common without enough change
Although the foreground loss has been applied to stylization problems, those works focus only on *global style transfer*. In contrast, our method targets local editing. The combination of foreground and background losses creates a **selection mechanism** for editing the foreground or the background, which distinguishes our method from others.
## R4:Is it trained-from-scratch or pretrained NeRF in RGBD initialization
We take a pre-trained NeRF as input for initialization.
## R4:Disparity alignment
Scale/shift alignment used in existing work such as NSFF globally aligns the *entire* disparity map. For applications such as inpainting, aligning globally leads to erroneous results. We therefore perform scale/shift alignment only on the unchanged background regions. After alignment, we apply Poisson blending for a seamless transition between foreground and background.
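A minimal NumPy sketch of the background-only least-squares fit (function and variable names are ours for illustration):
```python
import numpy as np

def align_disparity(mono_disp, nerf_disp, bg_mask):
    # Fit scale and shift on *background* pixels only (least squares),
    # then apply the fitted mapping to the whole monocular disparity map.
    x = mono_disp[bg_mask].reshape(-1)
    y = nerf_disp[bg_mask].reshape(-1)
    A = np.stack([x, np.ones_like(x)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, y, rcond=None)
    return scale * mono_disp + shift
```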
## R4:How to find most similar patch in foreground (contextual) loss and its formulation
We follow the Nearest Neighbor Feature Matching in ARF. It uses cosine similarity as the patch similarity measure and selects the reference-image patch with the minimum cosine loss.
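In our notation (a sketch of the ARF-style matching, with symbols introduced here for illustration), letting $f_i$ be the VGG feature of the $i$-th rendered patch and $g_j$ that of the $j$-th reference patch, the loss is

$$\mathcal{L}_{\text{fg}} = \frac{1}{N}\sum_{i=1}^{N}\min_{j}\left(1 - \frac{f_i \cdot g_j}{\|f_i\|\,\|g_j\|}\right).$$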
## R4:Why edited Piano video better than input?
As explained on L734-L740, the improvements come from the different losses and the feature spaces they operate in.
The original TensoRF supervises textures with an L2 loss in RGB space, while our background loss uses cosine similarity in the VGG feature space.
## R4:How much training time and iterations required in the two stages
Stage 1 typically trains for 10,000 iterations in less than 1 hour, while stage 2 trains for 3,000 iterations in around 1 hour.
## R4:Why divide the optimization into two stages?
**The two stages conflict.** The first stage modifies the edited area to align with the user edits in both the reference and training views; see Figure 7(c). If we instead optimize all losses jointly from the start, the training views initially contain no user-edited information, so the background loss dominates in the edited area, and this strategy does not lead to the desired editing. We will include this ablation.
## R5:Artifacts in stylization due to incorrect depth.
We only supervise in RGB space for stylization; there is no predicted depth from the reference view (L497-L499). The artifacts in stylization stem from the limited coverage of the chosen reference view (see L774-L777). The visible lines in the living room and bridge car scenes align with the area visible in the reference view.
## R5:Why not using MiDaS depth directly
MiDaS depth cannot recover thin and complex structures. We therefore blend the aligned MiDaS depth with the depth rendered from NeRF.
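One possible way to realize such a blend is mask-based compositing; the exact scheme below (function and mask names) is our assumption for illustration:
```python
import numpy as np

def blend_depth(aligned_midas_depth, nerf_depth, fg_mask):
    # Assumed blend: use the aligned MiDaS depth inside the edited/foreground
    # region and keep the NeRF-rendered depth elsewhere, where it better
    # preserves thin and complex structures.
    return np.where(fg_mask, aligned_midas_depth, nerf_depth)
```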