Thank you for sharing your feedback with us.
...
We respond to each of your comments below.
### W1 & Q1: Dependence on Oracle Programs
While HardTestGen does utilize human-written oracle implementations when available, our method is not limited to such settings. In domains where oracles are unavailable (e.g., synthetically generated problems or proprietary tasks), we propose an alternative, oracle-free approach, ReALGO, based on ALGO [1]. The details of the ReALGO implementation and results are in Appendix A.7 of **the supplementary materials**. We summarize some of the content here.
- ReALGO leverages LLMs to generate a brute-force reference solution by encouraging exhaustive search strategies. It then synthesizes a validator and ten edge-case input generators; for the generated inputs, the corresponding outputs are produced by running the brute-force program (a minimal sketch of this pipeline is given after the table below).
- ReALGO also adds a maximum-length test case to detect time-complexity issues, ensuring coverage of both correctness and efficiency. These 11 test cases are intentionally designed to trigger failures in seemingly correct but flawed programs.
- Empirical results on 165 AtCoder problems (with 50 sample solutions each) demonstrate the effectiveness of this oracle-free strategy. As shown in the table below, HardTestGen achieves a significantly lower false positive rate (17.67%) than AceCoder [2] (32.49%), while also achieving a slightly lower false negative rate (2.19% vs. 2.59%). These findings confirm that HardTestGen remains effective even in the absence of oracle implementations, highlighting the robustness of our approach.
| Method | False Positive Rate (FPR, %) | False Negative Rate (FNR, %) |
|-|-|-|
|AceCoder|32.49 | 2.59 |
|HardTestGen|17.67 | 2.19 |
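For concreteness, here is a minimal, hypothetical sketch of this oracle-free pipeline. It assumes a helper `llm_generate(prompt) -> str` that returns runnable Python source and a sandboxed runner `run_python`; neither name comes from our codebase, and the actual prompts and implementation details are given in Appendix A.7.

```python
import subprocess
import sys
import tempfile

def run_python(src: str, stdin: str = "", timeout: int = 10) -> str:
    """Run a Python source string in a subprocess and return its stdout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(src)
        path = f.name
    result = subprocess.run([sys.executable, path], input=stdin,
                            capture_output=True, text=True, timeout=timeout)
    return result.stdout

def synthesize_oracle_free_tests(problem_statement: str, llm_generate, n_generators: int = 10):
    """ReALGO-style oracle-free test synthesis (sketch; `llm_generate` is a
    hypothetical prompt -> Python-source function standing in for the LLM calls)."""
    # 1) Brute-force reference solution obtained by encouraging exhaustive search.
    brute_force = llm_generate(
        "Write a brute-force Python solution (exhaustive search, ignore efficiency) "
        "reading stdin and writing stdout for:\n" + problem_statement)

    # 2) Input validator: a script that prints 'OK' iff the input satisfies the constraints.
    validator = llm_generate(
        "Write a Python script that reads a candidate test input from stdin and prints "
        "'OK' if and only if it satisfies the constraints of:\n" + problem_statement)

    # 3) Ten edge-case input generators plus one maximum-size generator
    #    (the latter targets time-complexity failures).
    prompts = [f"Write a Python script printing one tricky edge-case input (#{i}) for:\n"
               + problem_statement for i in range(n_generators)]
    prompts.append("Write a Python script printing a maximum-size input for:\n"
                   + problem_statement)

    tests = []
    for gen in map(llm_generate, prompts):
        inp = run_python(gen)
        if run_python(validator, stdin=inp).strip() != "OK":
            continue                                    # drop inputs violating the constraints
        out = run_python(brute_force, stdin=inp)        # outputs come from the brute-force program
        tests.append((inp, out))
    return tests
```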
### W2 & Q2: Compute & Cost Overhead
To clarify, when using GPT-4o for test case synthesis on AtCoder problems, the total end-to-end cost per problem is approximately $0.23 USD. This covers generating (1) the input validator (IV), (2) the output judging function (OJF), and (3) the input generators (IGs). While large-scale generation does incur aggregate cost, the per-problem cost of our pipeline remains modest and is reasonable for research use or curated dataset construction.
Moreover, we believe that the quality of the resulting test cases, which we show significantly improves reward signal quality for reinforcement learning, justifies the modest cost. In fact, using weaker or noisier test cases as reward signals in large-scale RL could result in far greater inefficiency and wasted resources during training.
For instance, recent reinforcement learning runs for LLMs typically require thousands of GPU hours, often exceeding $30,000 USD for a single training run. In comparison, generating test cases with GPT-4o is far cheaper and accounted for less than 5% of the total budget in our experiments.
### W3: Domain Specificity
We thank the reviewer for highlighting the question of domain generalizability. We note that many other algorithmic domains, such as compiler optimization and kernel code generation, also require high-quality, discriminative test cases to ensure correctness and efficiency; HardTestGen is applicable to these domains as well.
HardTestGen is also designed to be adaptable to domains with diverse input structures and correctness notions. In particular, the framework includes a component for synthesizing output judging functions (OJFs) via LLMs, which allows our method to define and enforce correctness even in domains where simple string matching is insufficient. We also note that RL improvements in one domain can transfer to others; for example, SRPO [3] found that RL on math-only problems yields more robust reasoning in other domains such as coding tasks.
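To make this concrete, below is a hypothetical example of what a synthesized OJF can look like. The problem (output *any* subset of the given numbers that sums to a target) and the function itself are invented for illustration, not taken from our pipeline; the point is that correctness is re-derived from the problem's definition rather than by string matching against a reference output.

```python
def judge_output(test_input: str, candidate_output: str, reference_output: str) -> bool:
    """Hypothetical OJF for: 'print any subset of the given distinct numbers whose sum
    equals the target'. The reference output is intentionally unused, since many
    different outputs are correct."""
    lines = test_input.split("\n")
    target = int(lines[0])
    numbers = set(map(int, lines[1].split()))

    chosen = list(map(int, candidate_output.split()))
    # Every chosen number must come from the input, without repetition.
    if len(chosen) != len(set(chosen)) or not set(chosen) <= numbers:
        return False
    # The defining property of a correct answer: the chosen numbers sum to the target.
    return sum(chosen) == target
```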
### W4: Limited Ablations on LLM Choice
In the table below, GPT-4o is used for candidate program generation, while GPT-4o, Claude-4-Sonnet, Kimi-K2, and Qwen3-Coder-Plus are each used for test case generation (HT denotes HardTest; Diff denotes the difficulty level).
| | Diff 1 | | Diff 2 | | Diff 3 | | Diff 4 | |
| ---------------------- | --------- | ------ | --------- | ------ | --------- | ------ | --------- | ------ |
| | Precision | Recall | Precision | Recall | Precision | Recall | Precision | Recall |
| HT w/ GPT-4o | 99.53 | 99.18 | 100.0 | 97.43 | 96.04 | 98.45 | 84.18 | 98.03 |
| HT w/ Claude-4-Sonnet | 99.48 | 99.86 | 100.0 | 95.70 | 98.28 | 99.35 | 93.21 | 96.86 |
| HT w/ Kimi-K2 | 99.41 | 99.87 | 98.30 | 97.01 | 98.06 | 99.13 | 87.11 | 98.04 |
| HT w/ Qwen3-Coder-Plus | 99.47 | 99.14 | 99.62 | 98.88 | 95.20 | 99.13 | 76.83 | 98.82 |
### W5: Clarity of Prompt Engineering
Thank you for the helpful suggestion. We will include brief, representative prompts in the revision.
### W6: Evaluation on Model Training "proprietary models (Qwen2.5 variants) and bespoke RL code (veRL, GRPO)"
We appreciate the reviewer’s interest in reproducibility and open-source accessibility. To clarify, Qwen2.5 is an open-source model released in September 2024 and has been publicly available since; it is not a proprietary model. Similarly, veRL is an open-source reinforcement learning framework, and GRPO is a general-purpose RL algorithm, not a bespoke implementation. Both have been widely adopted in recent RL literature and benefit from strong community support and reproducibility.
While we acknowledge the value of comparing against earlier models such as CodeGen and StarCoder, we chose more recent, higher-performing open-source models to better reflect current capabilities and challenges. Nonetheless, our training data can be used to train earlier baselines if desired. We detailed our training setup in the paper and will release our training scripts and configurations to facilitate reproducibility and encourage further adoption by the community.
<!---
### Q1: How can the method work without Oracle programs?
[Refer to W1]
### Q2: What is the average time/cost per HardTest example?
[Refer to W2]
-->
### Q3: How are ambiguous tests filtered or detected?
For ambiguous tests (e.g., problems with multiple correct solutions), HardTestGen generates special judges for those problems, guaranteeing that correct programs can pass even when their outputs differ from the oracle's.
For potential contradictions, we use the agreement of multiple oracle solutions to filter out contradictory test cases.
Though we cannot guarantee that our test cases are completely free of reward hacking, HardTestGen mitigates this risk by providing more robust reward signals. The lower false positive rate compared to existing test sets reduces the chance that plausible but incorrect programs are reinforced and hack the reward signal.
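A minimal sketch of this consistency filter, assuming hypothetical helpers `run_program(src, stdin)` and `judge(inp, out, ref)` (the actual implementation differs in detail):

```python
def filter_contradictory_tests(tests, oracle_solutions, run_program, judge):
    """Keep only test cases on which multiple independent oracle solutions agree
    (sketch; `run_program` executes a program source on an input, `judge` is the
    special judge / OJF for the problem)."""
    consistent = []
    for inp, ref_out in tests:
        outputs = [run_program(src, stdin=inp) for src in oracle_solutions]
        # A test is kept only if every oracle's output is accepted by the judge;
        # disagreement signals an ambiguous or contradictory test case.
        if all(judge(inp, out, ref_out) for out in outputs):
            consistent.append((inp, ref_out))
    return consistent
```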
### Q4: Can your method be applied to interactive or multi-agent environments?
Yes. HardTestGen can be extended to such scenarios by synthesizing output judging functions (OJFs) that track sequences of inputs and outputs over time, including intermediate states, API calls, or environment transitions.
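As a hypothetical illustration (not part of our current pipeline), such an OJF could score a whole logged interaction trace rather than a single final output, e.g. for a number-guessing game:

```python
from typing import List, Tuple

def judge_interaction_trace(trace: List[Tuple[str, str]], budget: int) -> bool:
    """Hypothetical trace-level OJF for a number-guessing game: `trace` is the logged
    sequence of (agent_query, environment_response) pairs. Correctness is defined over
    the whole episode rather than a single output."""
    if not trace or len(trace) > budget:
        return False                                   # empty episode or too many queries
    for query, response in trace[:-1]:
        # Intermediate steps must follow the protocol: numeric guesses, valid hints.
        if not query.strip().isdigit() or response not in {"higher", "lower"}:
            return False
    _, final_response = trace[-1]
    return final_response.strip() == "correct"         # environment confirms the last guess
```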
### Q5: How is test difficulty calibrated?
We define test difficulty by a case's ability to expose errors or inefficiencies in candidate programs, not by abstract data metrics.
Type 1 test cases are the simplest, as they are small in scale.
Type 2 test cases are harder since they are randomly generated according to the problem's constraints.
Type 3 test cases are intentionally designed to be challenging. For each problem, we analyze typical naive solutions and design "hacking" cases that deliberately trigger their failure modes, such as timeouts or logic errors.
Therefore, difficulty is tied to functional challenge and worst-case behavior, not to statistical complexity. We do not consider a case "hard" unless it reliably challenges program correctness or efficiency.
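As a hypothetical illustration of a Type 3 "hacking" generator (the problem and numbers are invented for this example): for a "does the array contain a duplicate?" task, a naive O(n²) pairwise-comparison solution is correct but only times out on a large, duplicate-free array (any duplicate lets it exit early), so the generator targets exactly that failure mode.

```python
import random

def generate_type3_hacking_case(n_max: int = 200_000, value_max: int = 10**9) -> str:
    """Hypothetical Type 3 generator: a maximum-size, duplicate-free array forces a
    naive O(n^2) pairwise-comparison solution to examine every pair and time out,
    while remaining a perfectly valid input for correct, efficient solutions."""
    values = random.sample(range(1, value_max + 1), n_max)   # maximum size, all distinct
    return f"{n_max}\n{' '.join(map(str, values))}\n"
```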
<!--
### Limitation "The main text underrepresents..."
```
The fragility of the oracle assumption.
```
In the limitations section, we note that we propose an initial approach for synthesizing tests without oracles in Appendix A.7.
```
The lack of human calibration or psychometric validation for synthesized tests.
```
Large-scale human validation for all synthesized tests is expensive and infeasible. To evaluate our synthesized test cases, we obtained gold labels for a subset of problems:
For AtCoder, we run candidate programs on official tests (human-written) that have been previously made available. Note that AtCoder only constitutes a small portion of the 47k problems we have.
For Codeforces, we submit candidate programs to the website to obtain ground-truth verdicts, without accessing the oracle, human-written tests.
-->
### Reference
[1] Zhang, Kexun, et al. "ALGO: Synthesizing Algorithmic Programs with Generated Oracle Verifiers." Advances in Neural Information Processing Systems 36 (2023): 54769-54784.
[2] Zeng, Huaye, et al. "AceCoder: Acing Coder RL via Automated Test-Case Synthesis." arXiv preprint arXiv:2502.01718 (2025).
[3] Zhang, Xiaojiang, et al. "SRPO: A Cross-Domain Implementation of Large-Scale Reinforcement Learning on LLM." arXiv preprint arXiv:2504.14286 (2025).