# CaPS-SA reviews
## Reviewer 1
> It would be interesting to show the time of CAPS-SA using one thread, to compare the amount of work it does compared to others. That is, how much of the improvement owes to being more parallelizable and how much to doing less work.
JK: Skip for camera-ready?
TR: I agree.
> As for the analysis, sum L_i is Theta(n log n) on random text and Theta(n^2) on a particular bad case, no? If so, the n log n part is the smaller one. Can you show some realistic texts where CAPS performs poorly? What about repetitive datasets, like collections of similar assembled genomes?
JK: Skip for camera-ready?
TR: I agree.
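For our own reference, a quick worked example of the reviewer's point (standard facts about LCP sums, not results from our paper):

```latex
% Worst case: the unary text T = a^n. Adjacent suffixes in the suffix array
% share prefixes of lengths 0, 1, ..., n-1, so
\sum_i L_i \;=\; \sum_{k=0}^{n-1} k \;=\; \Theta(n^2).
% Random text over a constant-size alphabet: adjacent suffixes share
% \Theta(\log n) characters in expectation, so
\mathbb{E}\Big[\sum_i L_i\Big] \;=\; \Theta(n \log n).
```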
## Reviewer 2
> The authors should give more details on sampling of the global pivots.
JK: I thought the sampling step was already well explained? What do others think?
TR: We not only explain it, but also include pseudo-code.
JK: Follow-up: added a reference to the pseudo-code.
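For internal reference, a minimal, generic sample-sort pivot-selection sketch of the kind the reviewer is asking about (hypothetical names; not the actual CaPS-SA code or its exact sampling procedure):

```cpp
// Generic sample-sort pivot selection for suffixes: with p partitions and
// oversampling factor s, draw p * s random suffix positions, sort the sample
// by suffix order, and keep every s-th sample as a global pivot.
#include <algorithm>
#include <cstddef>
#include <random>
#include <string_view>
#include <vector>

std::vector<std::size_t> select_pivots(std::string_view text,
                                       std::size_t p,  // number of partitions
                                       std::size_t s)  // oversampling factor
{
    std::mt19937_64 rng(42);  // fixed seed for reproducibility
    std::uniform_int_distribution<std::size_t> pos(0, text.size() - 1);

    // Draw p * s sample suffix positions.
    std::vector<std::size_t> samples(p * s);
    for(auto& x : samples)
        x = pos(rng);

    // Sort the samples by the suffixes starting at those positions.
    std::sort(samples.begin(), samples.end(),
              [&](std::size_t a, std::size_t b)
              { return text.substr(a) < text.substr(b); });

    // Keep every s-th sorted sample, yielding p - 1 global pivots.
    std::vector<std::size_t> pivots;
    for(std::size_t i = s; i < samples.size(); i += s)
        pivots.push_back(samples[i]);

    return pivots;
}
```

The p - 1 pivots then delimit the global partitions; a larger oversampling factor s gives more evenly sized partitions.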
> Merge-Sort procedure takes two sorted arrays X and Y of suffixes and their respective LCP-arrays L_X and L_Y and produces the array Z as the merged output for X and Y and the LCP-array L_Z of Z.
> The analysis of the complexity of the used space by CaPS-SA is missing. It should be included.
JK: Add a line or two stating that the required space is 4w|T| bytes, where w = 4 or 8.
TR: Agree.
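Rough numbers we could include with that line (simple arithmetic from the 4w|T| figure above, assuming |T| ≈ 3.1 × 10^9 for the human genome):

```latex
% Space \approx 4w|T| bytes:
4 \cdot 4 \cdot 3.1\times10^{9} \approx 50\ \mathrm{GB} \quad (w = 4),
\qquad
4 \cdot 8 \cdot 3.1\times10^{9} \approx 100\ \mathrm{GB} \quad (w = 8).
```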
> Most of the complexity is described with high probability, a theorem on the worst-case complexity is missing.
JK: The high-probability bound comes from step 4, i.e. merging each final partition. The final partition sizes are all roughly n / p with high probability. In the worst case, one final partition could have size O(n), but that only affects the load balancing across threads, not the total work.
NB: the oversampling factor can be increased arbitrarily to drive the probability of this worst case down exponentially, which is evident from the corresponding theorem (and proof).
TR: The worst-case complexity is essentially proven in the merge paper we cite.
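If we do add a sentence, the standard sample-sort statement we would gesture at has roughly this shape (hedged paraphrase; the exact constants depend on the theorem we cite):

```latex
% With oversampling factor s = \Theta(\log n), every final partition P_j has
% size O(n/p) with high probability, i.e.
\Pr\!\left[\max_j |P_j| > c \cdot n/p\right] \;\le\; n^{-\Omega(1)}
\quad \text{for a suitable constant } c.
```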
> Authors should add a more detailed description of Figure 1 in the text and/or caption (for example they should describe what the colours represent, the colours corresponding to the pivot, etc).
JK: Should address this.
> There are too many references, but nevertheless some important references are missing or inaccurate.
> To the best of my knowledge, the paper that introduces the k-BWT is
> Schindler. A fast block-sorting algorithm for lossless data compression. In DCC, page 469, Washington, DC, USA, 1997. IEEE Computer Society.
> The following papers should also be considered:
>
> G. Liao, L. Ma, G. Zang and L. Tang, "Parallel DC3 Algorithm for Suffix Array Construction on Many-Core Accelerators," 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, Shenzhen, China, 2015, pp. 1155-1158, doi: 10.1109/CCGrid.2015.56.
>
> Bingmann, T., Dinklage, P., Fischer, J., Kurpicz, F., Ohlebusch, E., Sanders, P. (2022). Scalable Text Index Construction. In: Bast, H., Korzen, C., Meyer, U., Penschuck, M. (eds) Algorithms for Big Data. Lecture Notes in Computer Science, vol 13201. Springer, Cham. https://doi.org/10.1007/978-3-031-21534-6_14
>
> Martin Aumüller and Martin Dietzfelbinger. 2015. Optimal Partitioning for Dual-Pivot Quicksort. ACM Trans. Algorithms 12, 2, Article 18 (February 2016), 36 pages. https://doi.org/10.1145/2743020
JK: A few comments on this one:
- I agree that the paper has a lot of references. We should trim some.
- The k-BWT reference suggested by the reviewer is incorrect.
- We should add the other requested references.
## Reviewer 3
> One major oversight of the authors is that they don't even discuss distributed memory parallel methods (Example, Flick and Aluru (2015), Fischer and Kurpicz (2019)) or GPU-based parallel methods. Flick & Aluru (2015) showed that they can generate SA for human genome in less than 5 seconds with 1600 cores. Having said that, there are a few advantages of shared-memory parallel methods over distributed memory methods - for example, the use of less memory, ease of access etc. However, the authors neither discuss the issues with those methods nor demonstrate in the experiments such advantages. I find this a major oversight on part of the authors.
JK: This issue has been blown out of proportion. The paper **clearly** targets a shared-memory setting and employs no specific technique from the distributed-memory literature, nor from GPU-parallel methods. Discussing distributed-memory algorithms is out of place, and comparing against those methods seems too forced.
TR: I'm open to adding a sentence about this with citations, but definitely we cannot include comparison to those methods.
## Reviewer 4
> The authors introduce a new parallel suffix array construction algorithm, that also builds the LCP array. The authors use the principle of a sample sort to construct a suffix table, also using an LCP array to avoid many string comparisons. The approach is interesting, but it requires the production of the LCP array. While the authors clearly state that the suffix array is used in recent bioinformatics tools, I do not recall the use of the LCP array in bioinformatics tools.
JK: The point about missing references for uses of the LCP array is fair. We should look for some use-cases of LCP arrays in bioinformatics tools and cite them.
> The authors evaluate their approach on diverse data (human genomes, long genomes (axolotl), unitigs). I tried the software on a short DNA sequence that was lying on my hard drive (CM016610.1). Whether the file was formatted in FASTA format or was a raw DNA sequence, the program (compiled with g++ 12.2.0) consistently produced a segmentation fault. It would also be beneficial to improve the program’s help message to detail the role of the parameters. Those issues should be fixed before publication.
JK: The issue lies with the fixed subproblem count (~8K or so) in the code. Seg-faults are possible when some sub-subarray has size 0, which can happen for short texts with a high subproblem count. The subproblem count needs to be adjusted based on the text length.
TR: We should document this in the readme.
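A minimal sketch of the kind of guard to document (or add); the function name and usage are hypothetical, not the actual CaPS-SA code:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical guard: cap the subproblem count by the text length so that
// no sub-subarray can end up empty on very short inputs.
std::size_t effective_subproblem_count(std::size_t requested, std::size_t text_len)
{
    return std::max<std::size_t>(1, std::min(requested, text_len));
}
```

E.g., with the current default of ~8K subproblems, a 1.5 kbp input would get 1,500 subproblems instead of 8,192.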
> In the authors’ evaluations, their construction algorithm is indeed faster than state-of-the-art algorithms on 32 threads. However, the difference is generally quite small (around 20 to 30%, more on unitigs, less on the axolotl genome) and requires having the LCP array in memory, which can be a limiting factor compared to DivSufSort which uses up to 3 times less memory on the largest instances.
JK: We discovered after the submission that CaPS-SA was mistakenly run on the axolotl genome with double the RAM it needs. Are we allowed to fix that result now?
TR: Yes, I think we can.
> It is therefore questionable to say that their approach outperforms others. This is the case for time consumption but taken globally, that's a trade-off. I also think that it should be clearly stated that the LCP array must necessarily be constructed and that this step cannot be bypassed since it is required to build the suffix array. These are undoubtedly disadvantages, but the authors should make a balanced evaluation of their approach without embellishing its limits.
JK: What the reviewer means by _embellishing_ is unclear to me.
TR: I do not see this as a disadvantage except from the memory perspective; the program is clearly faster than the other methods. How many cycles are used doesn't matter if the wall-clock time is lower, IMO.
> Although the time gain is limited, I believe that the work is important, well-conducted and that the approach is interesting and deserves publication. The paper offers valuable information for the community that this kind of approach only allows for a limited time gain compared to the parallel version of DivSufSort.
JK: Brutal.
TR: Nothing we need to address here, not really constructive.
> Minor remark:
> - Figure 1 is nice, but I would make a few suggestions to improve it: in “Step 2”, the separation between subarrays and sub-subarrays is not consistent (between sub-subarrays, there is sometimes no separation line, sometimes a thin line, sometimes a thick line). I suggest having consistent and easily visible separations. The choice of colors is also not optimal for black and white printing.
JK: Laxman drew the final figure. Would it be possible for you to address this, @Laxman?