Thank you for your comments.
Before providing an itemized response, we would like to reiterate the purpose of HardTestGen, which differs significantly from that of many seemingly similar test generation methods and underlies our design choices and this response.
Unlike many test generation methods that aim to improve coverage or detect bugs in a single program under test, **HardTestGen is used to provide reward signals for reinforcement learning,** which means that
- It must be scalable and generalizable for problems described in natural language, because the >30k problems in the RL training set exist as natural language descriptions.
- There is not a single *program under test* or *focal method*. RL training generates hundreds of programs for each coding problem during rollout. Therefore, coverage-based test generation techniques would need to be applied hundreds of times per problem, which is far less tractable.
- The ultimate criteria for the test cases are reward accuracy and training effectiveness. While coverage is an important metric for tests in general, what actually matters in reinforcement learning is that the model is rewarded only for correct programs, so that it trains better (a minimal sketch of this reward computation follows the list).
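For concreteness, here is a minimal sketch of how pre-generated tests are turned into a reward signal during RL rollouts. The function name, test-dictionary format, and time limit are illustrative assumptions, not our exact implementation:

```python
import subprocess

def reward(program_path: str, tests: list[dict], time_limit: float = 2.0) -> float:
    """Binary reward: 1.0 only if the rollout program passes every pre-generated test."""
    for t in tests:
        try:
            run = subprocess.run(
                ["python", program_path],
                input=t["input"],
                capture_output=True,
                text=True,
                timeout=time_limit,
            )
        except subprocess.TimeoutExpired:
            return 0.0  # too slow on this input: no reward
        if run.returncode != 0:
            return 0.0  # runtime error: no reward
        if t.get("judge"):  # optional special judge for problems with multiple valid answers
            ok = t["judge"](t["input"], run.stdout)
        else:               # otherwise compare against the oracle output
            ok = run.stdout.strip() == t["expected"].strip()
        if not ok:
            return 0.0
    return 1.0
```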
Now we will respond to your comments.
### R1: Performance comparison with TrickCatcher
> Can HARDTESTGEN outperform TrickCatcher?
Yes.
We adapt TrickCatcher to our setting by:
- Selecting one oracle solution as the program under test (PUT) and following TrickCatcher's approach to generate the program variants and the input generator.
- Retaining only variants that pass the public test cases (which serve as the "Existing Test Suite" in TrickCatcher Figure 2).
- Evaluating the generated tests on 209 LLM-generated programs, with precision and recall calculated by comparing TrickCatcher's test-case verdicts against the CodeForces Online Judge's verdicts of program correctness (this computation is sketched at the end of this section).
||Precision (%)|Recall (%)|
|-|-|-|
|TrickCatcher|75.76|49.50|
|HardTestGen|96.43|77.88|
As the table suggests, HardTestGen performs better than TrickCatcher in both precision and recall. The gap between the two is as big as 20.67 percentage points for precision and 28.38 percentage points for recall.
We attribute this improvement to our input validator and multiple types of tests, especially type 3 hacking tests.
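For reference, the precision and recall reported above treat each of the 209 programs as one data point; a minimal sketch of the computation (names are illustrative):

```python
def precision_recall(test_verdicts: list[bool], oj_verdicts: list[bool]) -> tuple[float, float]:
    """Per-program accept/reject decisions from the generated tests vs. the CodeForces judge."""
    pairs = list(zip(test_verdicts, oj_verdicts))
    tp = sum(1 for t, o in pairs if t and o)        # both accept
    fp = sum(1 for t, o in pairs if t and not o)    # tests accept, judge rejects (false positive)
    fn = sum(1 for t, o in pairs if o and not t)    # judge accepts, tests reject (false negative)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```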
### R2: Conceptual difference with TrickCatcher
> TrickCatcher operates in a nearly identical setting ... what advantages HARDTESTGEN has over TrickCatcher
Thank you for pointing out this relevant work; we will make sure to cite it in the revision. However, we maintain that there is substantial novelty in our approach, for the following reasons.
**First, the setting is very different.** HardTestGen does not assume access to a program under test, while TrickCatcher does (see TrickCatcher's Figure 2). This difference is meaningful because HardTestGen generates tests for **online reinforcement learning**: the tests are generated a priori, while the programs are generated later from LLM rollouts during the learning process. HardTestGen therefore has to work with less information and adapt to hundreds of diverse programs generated during training without actually seeing them.
**Second, HardTestGen has the following innovations:**
- **generated input validators**. HardTestGen generates not only input generators but also input validators, which take test inputs and check that they satisfy the problem's constraints; TrickCatcher does not. Without the input validator, **12.15% of the generated inputs are invalid** (a sketch of a generated validator and special judge appears after the table below).
- **multiple types of inputs**. HardTestGen generates three different types of tests, while TrickCatcher only generates one input generator. Type 3 tests, the hacking tests, are particularly helpful: the model creates edge cases, tricky cases, and extreme cases that cause seemingly correct programs to fail. As shown in Table 1, test precision falls significantly without type 3 tests. In Appendix A.5 of our supplementary material (pages 36 to 38), we show examples of the different types of tests and how they reduce false positives.
- **special judges**. Problems with multiple correct solutions are common in competitive programming. HardTestGen generates special judges for those problems, guaranteeing that correct programs can pass even when their outputs differ from the oracle's. We observed that 25.39% of all problems require special judge functions. Without these functions, test recall drops significantly, as shown in the table below.
| | Diff 1 Precision | Diff 1 Recall | Diff 2 Precision | Diff 2 Recall | Diff 3 Precision | Diff 3 Recall | Diff 4 Precision | Diff 4 Recall |
| - | - | - | - | - | - | - | - | - |
| HardTests | 99.53 | 99.18 | 100.0 | 97.43 | 96.04 | 98.45 | 84.18 | 98.03 |
| HardTests w/o Special Judge | 99.47 | 90.36 | 100.0 | 86.67 | 95.77 | 84.01 | 89.25 | 81.69 |
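To make the validator and special-judge components concrete, below is a simplified sketch of what an LLM-generated pair might look like for a hypothetical problem ("output any permutation of 1..n"); the actual generated code is problem-specific:

```python
def validate_input(raw_input: str) -> bool:
    """Generated input validator: accept only inputs satisfying the
    (hypothetical) constraint 'a single integer n with 1 <= n <= 2 * 10**5'."""
    tokens = raw_input.split()
    return len(tokens) == 1 and tokens[0].isdigit() and 1 <= int(tokens[0]) <= 2 * 10**5

def special_judge(raw_input: str, contestant_output: str) -> bool:
    """Generated special judge: accept any valid permutation of 1..n,
    not only the oracle solution's particular output."""
    n = int(raw_input.split()[0])
    try:
        values = sorted(int(x) for x in contestant_output.split())
    except ValueError:
        return False
    return values == list(range(1, n + 1))
```

In this sketch, only inputs that pass `validate_input` are kept in the suite, and `special_judge` replaces exact output matching for problems with multiple correct answers.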
**Third, our paper goes beyond test generation and provides unique insights on reinforcement learning.** We elaborate on this in R3 below.
### R3: Findings and resources for LLM post-training
We would like to kindly note that our paper uses test generation as a means to an end: evaluating LLM post-training techniques that rely on such tests.
While TrickCatcher focuses mostly on testing, our paper is largely a learning paper, with half of our results (the entire Section 5) and most of our budget spent on LLM post-training experiments. We trained 7 LLMs to study where and how much test quality matters across 3 different post-training techniques.
Our findings (**Section 5.2**) are:
- For teacher distillation, the number of questions matters more than the correctness of distillation trajectories. Therefore teacher distillation doesn't require high-quality tests.
- For self-distillation and reinforcement learning, it is crucial to have accurate correctness estimations, as models trained with HardTests' higher quality rewards perform much better than those trained with lower-quality test cases from TACO.
As reinforcement learning becomes the major technique for improving LLMs' reasoning and coding ability [1, 2], **our findings provide useful insights** for LLM practitioners in curating data and environments for their models.
In addition, we have generated and released a 47k-problem dataset with high-quality test cases to further facilitate LLM post-training and coding research, which itself is a useful resource for the community.
### R4: Difference with "Who judges the judge"
We appreciate you mentioning this relevant paper. We will cite and discuss it in the revision.
We believe there are significant differences between HardTestGen and the ISSTA paper, many of which we already mentioned in R2 (comparison with TrickCatcher):
- The setting is different. The ISSTA paper focuses on checking the correctness of OJ tests, while we focus on LLM post-training.
- The ISSTA paper focuses on identifying the problem: existing test cases are insufficient for complex algorithmic programming problems. HardTestGen focuses on solving it: using LLMs to make tests stronger so that we can train better LLMs for programming.
- The ISSTA paper relies purely on **hand-written** random generators built with CYaRon, which are hard to create at scale. HardTestGen instead uses LLMs to create input generators and input validators, allowing us to scale up the number of problems with good tests.
- Moreover, HardTestGen generates additional types of inputs that detect inefficient algorithms (sketched below), which proves very effective in improving the precision and recall of the tests.
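As a simplified illustration of how such inputs work (the generator, constraints, and interpreter command below are hypothetical), an extreme-scale input is paired with the problem's time limit so that asymptotically slow solutions time out and are rejected:

```python
import random
import subprocess

def generate_extreme_input(n_max: int = 2 * 10**5) -> str:
    """Generated 'extreme' input: maximal n with adversarially large values."""
    values = (str(random.randint(1, 10**9)) for _ in range(n_max))
    return f"{n_max}\n" + " ".join(values) + "\n"

def within_time_limit(program_path: str, test_input: str, time_limit: float = 2.0) -> bool:
    """A brute-force (e.g. O(n^2)) solution exceeds the limit on the extreme input and fails."""
    try:
        run = subprocess.run(["python", program_path], input=test_input,
                             capture_output=True, text=True, timeout=time_limit)
    except subprocess.TimeoutExpired:
        return False
    return run.returncode == 0
```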
### R5: Necessity of a large dataset
> diversity, redundancy, and necessity of the large test set (47k+ cases) are not discussed
To clarify, 47k+ is not the total number of test cases; it is the total number of problems. Most problems in our dataset have 30 to 50 test cases each.
We argue that both the number of problems and the number of test cases per problem are necessary.
Recent frontier LLMs including OpenAI o1, DeepSeek R1, Kimi K2, and Grok 4 all suggest that future improvements of LLM ability depend on scaling up reinforcement learning.
Large-scale RL requires large amounts of data, which is why we create a large problem set.
Large-scale RL also needs accurate reward (as shown by our entire Section 5), which is why we need many test cases.
To further address your concern and quantitatively measure the redundancy of the tests, we conduct an experiment in which we randomly discard a portion of the test cases and measure the precision of the reduced suite (the procedure is sketched at the end of this section):
|Percentage of Tests Kept (%)|100|90|80|70|60|50|40|30|20|10|
|-|-|-|-|-|-|-|-|-|-|-|
|Precision (%)| 88.86 | 88.74 | 87.29 | 85.42 | 85.25 | 83.96 | 82.92 | 78.44 | 74.41 | 68.80 |
As shown in the table, test precision drops immediately as test cases are discarded, suggesting that very few of our tests are redundant.
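For completeness, the subsampling procedure is straightforward; a sketch is below, where the `evaluate_precision` callable stands in for the precision measurement described in R1:

```python
import random
from typing import Callable

def subsampled_precision(tests: list[dict],
                         evaluate_precision: Callable[[list[dict]], float],
                         keep_ratio: float,
                         seed: int = 0) -> float:
    """Randomly keep `keep_ratio` of a problem's test cases and re-measure
    the precision of the reduced suite against the online-judge verdicts."""
    rng = random.Random(seed)
    kept = rng.sample(tests, max(1, round(len(tests) * keep_ratio)))
    return evaluate_precision(kept)

# Sweeping the keep ratio, as in the table above:
# for ratio in (1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1):
#     print(ratio, subsampled_precision(tests, evaluate_precision, ratio))
```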
### Reference
[1] Guo et al. "DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning." arXiv preprint arXiv:2501.12948 (2025).
[2] Jaech et al. "OpenAI o1 System Card." arXiv preprint arXiv:2412.16720 (2024).