# Updated Revision
We would like to thank all reviewers again for their constructive suggestions and feedback.
In response to your comments, we have uploaded an updated revision along with a detailed response to each reviewer's questions and concerns. We have revised our paper with the following changes:
## Changes to the abstract
* We move the link to our qualitative results so it is more visible. (Reviewer doFB, V59y, HhWA, U1r6)
## Changes to the introduction
* We provide additional details on the scope of our work, and of similar papers in the visual domain. We also clarify our goals. (Reviewer doFB, HhWA, U1r6)
* We clarify the advantage that a neural acoustic field has over using the dataset in the introduction. (Reviewer HhWA)
* We motivate our work and explain why we want to learn local geometric features to help with generalization. (Reviewer doFB, HhWA, U1r6)
* We clarify in the caption how the visualization in Figure 1 is produced. (Reviewer doFB)
## Changes to the related work
* We provide additional related work that learns to generate the phase-free magnitude STFT of sound. (Reviewer V59y)
## Changes to the methods section
* We have significantly compressed the description of the wave equation background in Section 3.1. (Reviewer V59y)
* We add a paragraph in Section 3.2 describing how audio can be rendered, and why our work and prior work choose not to model phase. (Reviewer doFB, V59y, U1r6)
* We clarify that the grid is learned, and how it is queried, in Section 3.3. (Reviewer HhWA, U1r6)
* We have clarified our notation surrounding $v$, indicating that our network outputs $v_{\text{STFT}}$ for a given time/frequency in Equations 4, 5, and 6 in Section 3.2. (Reviewer HhWA)
* We now explain before Equation 3 that $\theta$ represents head orientation and $k$ represents the ear (left/right), and we note the dimension of every input in Section 3.2. (Reviewer HhWA, U1r6)
* We use distinct notation for the time-domain impulse response function $\phi$, the network that approximates the STFT $\Omega$, and the NAF with the grid $\Omega_\text{grid}$. (Reviewer U1r6)
## Changes to the experiments section
* We clarify that the test set is selected randomly in Section 4.1. (Reviewer U1r6)
* We add additional clarification on our sinusoidal encoding in Section 4.2. (Reviewer doFB)
* We add T60 percentage difference as a metric, as in Image2Reverb, in Table 1. (Reviewer V59y)
* We add a linear decoding experiment on the grid features to the Section 4.4 results. (Reviewer V59y)
* We now note the dimension of every input, use $q$ and $q'$ to represent location, and use $x,y$ to represent the ground axis in Section 4.2. (Reviewer HhWA, U1r6)
* We clarify that our NAF outputs the STFT log-magnitude at a specific time/frequency index. (Reviewer HhWA)
* We add additional details for the NeRF baseline, including the training objective and number of ray samples, in Section 4.4. (Reviewer HhWA)
* We add additional details on the loss used for the cross-modal training in Section 4.4. (Reviewer HhWA)
* We clarify the max-magnitude baseline, and note that Equation 8 is used when a NAF is available, in Section 4.5. (Reviewer U1r6)
## Changes to the appendix
* We provide additional details on how we perform zero-padding in Appendix B. (Reviewer U1r6)
* We add a comparison that trains a network to regress phase + log-magnitude, and one that trains directly on the time-domain waveform. Results are presented in Table A2 and Figure A6 in Appendix E. (Reviewer V59y)
* We add an experiment that ensures a smooth latent space via an L2 penalty in Appendix G. (Reviewer V59y)
* We add a comparison that highlights the storage advantage of our NAF over using the dataset in Appendix D. (Reviewer HhWA)
* We provide a visualization using interpolation applied to the ground-truth training data in Figure A5 in Appendix E. (Reviewer HhWA)
* We provide more detail on how our local feature grid is initialized in Appendix B. (Reviewer HhWA, U1r6)
* We provide qualitative results for interpolating the "ear" variable in Appendix H. (Reviewer U1r6)
## Code for reproducing results
We provide a copy of our code [anonymously here](https://anonymous.4open.science/r/Neural_Acoustic_Fields/).
## Summary of new experiments/visualizations
* We add T60 percentage error in Table 1. (Reviewer V59y)
* We add a linear decoding experiment in Section 4.4. (Reviewer V59y)
* We compare modeling the log-magnitude, the log-magnitude + phase, and the time domain in Table A2 of the Appendix. (Reviewer V59y)
* We visualize the waveform recovered from NAF using Griffin-Lim against a network trained directly in the time domain in Figure A6 of the Appendix. (Reviewer V59y)
* We experiment with regularizing the grid via an L2 penalty; results are in Table A3 of the Appendix. (Reviewer V59y)
* We add a comparison of the storage cost of the methods in Table A1 of the Appendix. (Reviewer HhWA)
* We visualize the loudness maps when using either nearest or linear interpolation in Figure A5 of the Appendix. (Reviewer HhWA)
* We visualize the effect of interpolating the "ear" latent in Figure A7 of the Appendix. (Reviewer U1r6)
We hope our responses have convincingly addressed all reviewers' concerns. We thank all reviewers for their time and effort. Please do not hesitate to let us know of any additional comments or questions regarding the manuscript or the changes.