Dear AC,

We thank all the reviewers for their constructive feedback and insightful comments, and we are encouraged that they generally recognize the novelty and contributions of our work:

* **Thorough and comprehensive empirical analysis (KEXB, TXqE, JCZg, Vc1j):** We conduct extensive ablations of design choices and provide thorough comparisons with prior baselines.
* **Ablation of BFS and novelty of adaptive DFS (KEXB, JCZg):** We present a comprehensive analysis of BFS-style methods and introduce adaptive DFS, which achieves superior performance.
* **Theoretical and empirical contributions of local search (KEXB, TXqE, JCZg):** We propose a theoretically grounded local search method using Langevin MCMC and demonstrate significant performance improvements in challenging decision-making domains.
* **Unified framework for jointly scaling local and global search (KEXB, JCZg, Vc1j):** We propose a unified framework that scales both local and global search, advancing the Pareto frontier for inference scaling in diffusion models.

We thank the reviewers again for their helpful and constructive feedback, and we have carefully updated the paper to incorporate their valuable suggestions. All revisions in the updated PDF are highlighted in **red**. We summarize the revisions and additional experiments as follows:

* **Discussion of concurrent works (TXqE, Vc1j):** We added discussions of concurrent works [1], [2], [3], as well as a clarification of the relationship between our DFS and SoP [4]. *`Page 3`*
* **Ablations for DFS hyperparameters (TXqE, Vc1j):** We added ablations on the backtracking depth and the backtracking schedule for DFS, demonstrating the robustness of our method. *`Figure 9, Page 31`*
* **Comparisons of HPS score and DAS with gradient (JCZg):** We added experiments reporting the HPS score of our inference-scaling method with the ImageReward verifier, showing no signs of reward hacking. We also compare against DAS using a gradient-guided transition kernel and demonstrate that our improved BFS consistently outperforms DAS in global search efficiency. *`Section E.1.1, Page 28`*
* **Comparisons with SoP [4] (TXqE):** We compare both BFS and DFS against SoP, showing that they consistently outperform SoP in global search efficiency. *`Table 2, Page 7; Figure 8, Page 30`*
* **Reporting of wall-clock times (TXqE):** We now report the wall-clock runtime of our method. *`Table 12, Page 34`*

During the discussion period, we were glad to address the reviewers' remaining concerns. Specifically, we clarified our contributions regarding DFS, local search, and the unified framework, and we explained our offline RL experiment results, resolving the concerns of reviewer TXqE. Our additional experiments on DFS hyperparameters resolved the concerns of reviewer Vc1j about the robustness of DFS, and we also clarified our adaptive backtracking and scoring functions.

---

[1] Jain, Vineet, et al. "Diffusion Tree Sampling: Scalable Inference-Time Alignment of Diffusion Models." The Thirty-ninth Annual Conference on Neural Information Processing Systems, 2025.

[2] Lee, Gyubin, et al. "Adaptive Cyclic Diffusion for Inference Scaling." arXiv preprint arXiv:2505.14036 (2025).

[3] Dang, Meihua, et al. "Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling." arXiv preprint arXiv:2507.08390 (2025).

[4] Ma, Nanye, et al. "Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps." arXiv preprint arXiv:2501.09732 (2025).