# TRETS Response
We would like to thank all the reviewers for their helpful comments! We have carefully considered them and improved our article based on the reviewers' suggestions. Detailed responses are as follows:
### Reviewer: 1
Many thanks to reviewer 1 for their helpful comments and corrections, which have significantly improved the quality of the article.
> - How long is the runtime to check the dependences? Is it deterministic? Can you handle a reasonable amount of memory accesses per loop?
The runtime of our verification is at most 2 seconds across our benchmarks. It is deterministic for a given program but varies with the complexity of the program. In the worst case, the runtime scales exponentially with the number of memory-access statements, a known issue in static analysis. We have added a table (Table 3) listing the runtime results for each benchmark.
> - Which kinds of independent accesses can the tool prove?
> Does it cover the easy cases (e.g. affine memory accesses) well?
Our tool can generate a Boogie program for analysing arbitrary memory accesses. However, the ability to prove the absence of a dependence depends on Boogie and the underlying satisfiability modulo theories (SMT) backend.
To answer the second question: yes, Boogie handles the simpler cases well, including affine accesses, because they fall within the scope of the SMT solver's Linear Integer Arithmetic (LIA) theory. However, it usually takes longer than a specialised affine analysis package. One optimisation is to remove, beforehand, the assertions that an affine analysis can already discharge, which reduces the runtime. We have clarified both points in Sec. 2.2.
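As a concrete illustration, consider a hypothetical affine pair in which one loop writes `a[2*i]` and another reads `a[2*j+1]` (the indices are chosen by us for illustration; this is a minimal hand-written sketch, not the exact output of our generator):

```boogie
// Minimal hypothetical sketch, simplified from our generated programs:
// prove that a write to a[2*i] never aliases a read from a[2*j+1].
procedure AffineIndependence(N: int)
{
  var i, j: int;
  assume 0 <= i && i < N;   // an arbitrary write iteration
  assume 0 <= j && j < N;   // an arbitrary read iteration
  // A pure linear integer arithmetic (LIA) query: the SMT backend
  // proves that an even index can never equal an odd one.
  assert 2 * i != 2 * j + 1;
}
```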
> At an even higher level, it would be interesting to see how often the code structures examined in the submission where the dynamic approach gives benefits actually occur in real-world code (benchmarks, maybe Github [which lists 292 repositories with code for "hls fpga"])
Thanks a lot for pointing us to this. We have added a paragraph discussing this issue in Sec. 7.1.
> L.185: Boogie has its own "structs" -- what does that mean?
This means that Boogie has its own syntactic constructs; we have changed the wording to "constructs".
> L.252: run-time "event" -- we didn't associate that term with executions of basic blocks, loops etc. at first.
Thanks for pointing it out. We have clarified this in Sec. 3.1.
> Sec. 5.1: The inner-loop with II=3 is already "3-slow", correct? We found that a bit confusing in the context of Leiserson's definition of inserting registers into the datapath.
A loop with II=3 has at least three registers in its hardware cycle, but successive iterations still start sequentially because of potential dependencies between them. This is the case in Fig. 9b; we enable Fig. 9c by proving that these iterations are independent. We have clarified this in Sec. 5.1.
> L.932: Isn't it rather essential to check that an II>1 inner loop actually has available resources to accept the next outer loop iteration? Couldn't you just compute the ResMinII from the modulo scheduling world, and compare that with the average II caused by inter-iteration deps?
The version of Dynamatic we use does not support resource sharing: each operation is mapped to a unique hardware operator, and the operators are connected in dataflow form. ResMinII is therefore always one in our case.
> L.953: Using `C` for the minimum dependence distance is very confusing in light of `C` being the parameter for the C-slow pipelining.
Thanks for pointing it out. We have now introduced a new term $Q$ to avoid confusion.
> Eq. 26: At this point we couldn't follow the notation anymore. Aren't the `j`'s just outer-loop iteration numbers? How do they identify the loop, then? What kind of object are the `e`s? (The E_B and E_I notation made sense to me, but we don't understand E_L(i,j))
$j$ identifies a particular outer-loop execution event, and the $e$'s are still run-time events, now at loop granularity. We raise the abstraction from instructions ($E_I$) and basic blocks ($E_B$) to loops, which yields $E_L$. We have clarified this in Sec. 5.2.
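Concretely (paraphrasing the clarified definition in Sec. 5.2):

$$E_L(j,k) = \text{the run-time event executing the } k\text{-th iteration of the inner loop within the } j\text{-th iteration of the outer loop}.$$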
> L.974: "The dependence constraint above is equivalent to that the minimum dependence distance of the outer loop is less than C" -- according to the examples shown later, the minimum dependence distance should be _greater_ than C.
Sorry for the mistake. We have now corrected it.
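With the symbol $Q$ introduced above, the corrected statement reads, roughly: the dependence constraint is equivalent to requiring

$$Q > C,$$

i.e. the minimum dependence distance of the outer loop must be greater than the slowing factor $C$.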
> Fig. 13: We think you don't state anywhere f(x) and h(x) were assumed for Fig. 9 that lead to C <= 6.
Sorry for the mistake. We have now added them to the caption.
> L.1223: "C_D is the minimum C that passes the Boogie verification" -- should that be the _maximum_ C, because otherwise C_D is always 1?
Sorry for the mistake. We have now corrected it.
> L.1324: "... other HLS tools, such as CIRCT" -- CIRCT is not really an HLS _tool_. It does have the Handshake dialect and experimental lowerings for dynamically scheduled datapaths, though.
Thanks for this. We have now clarified it as CIRCT-HLS, which has recently been made available and is under active development.
> L.1375: ... except matrixadd ... -- Do you mean fft? matrixadd seems to have solid speedups in Fig. 16.
Sorry for the mistake. We have now corrected it.
> L.1389: (3) is a straight copy from the conference paper -- We suppose the justification for C-slow pipelining being good isn't needed here.
Corrected.
> Table 2/3: The wall-clock time columns referenced from the text are missing here.
We have now added them to the tables.
> Table 3: Can you provide the results for the statically-scheduled HLS here as well?
We have now included the results for Vivado HLS in the tables.
> ## Typos
>
> L.38: to directly program_ming_
> L.39: a software program_s_
> L.94: _The_ existing dependence model
> L.251: run-time event_s_
> L.352: denote whether two executions _that_ have dependences
> L.367: generates _a_ data flow hardware
> L.391: as whether _there_ exist_s_ one instruction (or rewrite, the entire sentence is hard to read)
> L.561: b0(e)_)_
> L.566: subgraph_s_ g' is consecutive_ly_
> L.779: such _an_ approach also maintain_s_
> Fig. 9: Bars show call of function `f(s(i), b[i][j])`, but the function is called `g` in the listing (line 6)
> L.1126: 1 <= C <= _256_
> Fig. 13: Increasing C initial_ly_
> shown in _red_ (optimal point is blue)
> L.1230: adds additional one depth (rewrite)
Fixed.
### Reviewer: 2
> There are many language glitches in the paper, particularly mixture of singular and plural tense, missing words, wrong usage of words and some misspellings.
Thanks for pointing it out. We have gone through the draft again and also fixed the errors mentioned by the other reviewers.
> When you are talking about dependences, in chapter 3, does that include data dependencies? And if so, do you make a difference in RaW, WaR and WaW dependencies (as the last two can be removed using single assignment)?
Yes, the dependency model in Sec. 3 includes data dependencies; in practice we do not apply any compiler optimisation beforehand. We agree that some dependencies (WaR and WaW, as the reviewer notes) can be removed by transformations such as single assignment, but here we focus on the general case.
> In fig. 3(b), you state that g(1) is not equal to h(2) but j*j+1 = 2 (if j=1) and j=2 (for j=2) so g(1)=h(2). I don’t understand why you say it isn’t equal.
Sorry, that was a mistake. We have now fixed the figure.
> I am not convinced that the results of the Boogie program are conclusive and can be used as a proof that there are no dependences. It seems to me Boogie just arbitrarily choses memory locations accessed by one loop and checks that these are not used by the second loop. But there is no guarantee that this is the case for ALL memory accesses. So your conclusions that there are no dependences might be wrong which would make the proposed optimisations invalid. How can you ensure independence for all memory locations?
This is a misunderstanding of the Boogie verifier -- we apologise if the original text was unclear on this point, and we have tried to clarify it in Sec. 2.2. The key point is that the results *are* conclusive and constitute a proof of no dependencies, precisely because Boogie *does not* arbitrarily choose memory locations. In more detail:
The Boogie verifier does not 'run' a Boogie program; it generates a set of verification conditions that are discharged by a backend SMT solver. In this article, the Boogie programs in Fig. 5 and Fig. 10 do not pick *a* memory access but rather describe what the memory accesses *could* be. The SMT query then searches for *any* memory access violating the assertions; the absence of such a satisfying assignment constitutes a proof of correctness.
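To make this concrete, consider the following minimal sketch (hypothetical and much simplified relative to Fig. 5; the index expressions are ours for illustration). The variables `i` and `j` are left unconstrained apart from their loop bounds, so they stand for *every* pair of iterations, and the assertion is verified only if *no* assignment to them can violate it:

```boogie
// Hypothetical sketch in the style of Fig. 5 (not our generator's output):
// loop 1 writes a[i] for 0 <= i < N; loop 2 reads a[N + j] for 0 <= j < M.
procedure NoInterLoopDependence(N: int, M: int)
{
  var i, j: int;            // unconstrained: they model ALL iteration pairs
  assume 0 <= i && i < N;   // any write iteration of loop 1
  assume 0 <= j && j < M;   // any read iteration of loop 2
  // Verified only if NO choice of i and j can make the addresses collide,
  // i.e. the SMT solver proves the write and read sets are disjoint.
  assert i != N + j;
}
```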
> In section 3.2 you say “static-of-the-art” instead of “state-of-the-art”
Thanks for pointing it out. It is now fixed.
### Reviewer: 3
> Other than the problem formulation that has been reworded, no technical contribution is made beyond what has already been established by the authors in their prior work. I still don't see how the dependence model that they have formulated, impacts their toolflow, methodology, results or in general, the overall field in any way.
We believe that the general dependence model that we formulate in Sec. 3 brings similar benefits to other "general models" that have been proposed in other domains: (1) it shows how several different problems can be cast as instances of the same general form, thus highlighting their similarities and differences, (2) it may help algorithms developed for one instance to be successfully generalised and transferred to other instances, and (3) it may help identify other interesting instances of the general problem.
Another contribution added in this article is the automatic determination of an area-efficient $C$ for dynamic C-slow pipelining while preserving high performance.
> There are some minor grammatical errors. Correction is required in the references section (16 and 17 are a repetition and their FPL22 work that has been heavily copy-pasted has not been cited).
Thanks for pointing it out. We have gone through the draft again and fixed the errors mentioned by the other reviewers. Repeated references have been removed and our FPL22 work is now cited as [15].
### Reviewer: 4
Many thanks to reviewer 4 for their helpful comments, which have significantly improved the quality of the article.
> If this paper expanded more on how the tool flow checked the related variables (or refer to appropriate related work), it would be of better help to find further similar optimization opportunities.
Thanks for the comments. We have now added more related work in the background section.
> Nevertheless, it would be helpful to give some examples of the whole BB scheduling space and find related work to justify looking only at loops.
Thanks for the comments. The search space of all possible BB schedules is huge, and exploring it exhaustively does not scale. We have shown an example of BB schedules in Fig. 6 and have extended our discussion.
> The article presents the two optimization steps for consecutive and nested loops in a similar yet orthogonal way. A summarizing paragraph highlighting the similarities and differences between these two would help discover further optimization steps after further understanding the core ideas behind the first two.
Thanks for the comments. We have now added a summary in the conclusion section comparing these two use cases, and we provide an outlook on future research directions.
> The experimental evaluation of the C-slow is presented in the text too briefly.
Thanks for the comments. We have now rewritten the results section.
> Text is mainly copied from the previous papers, while the added text has numerous typos.
Thanks for pointing it out. We have gone through the draft again and fixed the errors mentioned by the other reviewers.
> Smaller comments:
>
> In the abstract it says 7.3 speedup, while the introduction says 14.3x speedup. Could these be clarified?
Sorry for the mistake. We have now corrected it.
> Make it more clear what is meant by control flow, data flow, and BBs.
We have now added a definition of these terms in Sec. 3.2.1.
> Look over the use of the word "dependence". Often in text dependency or dependencies should be used instead.
We have now changed all of them to "dependencies".
> In the abstract I find this key idea sentence difficult to read:
> "In particular, we precisely define the static and dynamic analyses possible on the code in terms of their impact on the degree of conservatism inherent in the scheduling decisions made."
> This sentence must be made more clear to clarify how the degree of conservatism in the scheduling decisions is defined and how it is relevant to existing tools that analyse BBs.
Fixed.
> There are two consecutive "in order to"-s at the beginning of the introduction.
Fixed.
> "control flow graph (CFG)" - should be done before other uses of "CFG".
Fixed.
> Consider adding a figure demonstrating "CDFGs" and how the 4 data-memory dependency types can be mapped.
> For readability - define II.
Fixed.
> where (𝑏ℎ, 𝑘) - Define bh for readability.
Fixed.
> "in each depth" - What does that mean? Depths haven't been clearly defined yet.
We have renamed it to "loop depth".
> "constraint as shown in constraint ?? for a given C" - broken reference.
Fixed.
> Possibly an algorithm showing the probabilistic analysis to infer the optimized C or referring to the algorithm in Dynamatic would further help explain finding the most optimal C.
> Just like the loop scheduling unit is described later - although that would need a summarization as well.
We have now added an algorithmic view of our approach.
> Where does the tool EASY fit in Figure 15?
EASY is a code library rather than a tool. We use part of the EASY source code in the code generator during the dependence analysis step shown in the figure.
> "all the lines except matrixadd indicate" - fft instead of matrixadd? However, this mistake is also in the original paper, so I am unsure if I understand this incorrectly - either way this needs clarification.
Sorry for the mistake. We have now corrected it.
> Questions:
>
> With LSQ minimizations do you have any estimations of how big of an impact this could have
This article does not change the number of LSQs. Minimising LSQs would affect both the baselines and our designs; however, it should not affect our observations and conclusions.
> How do the Boogie analysis steps differ between nested loops and consecutive loops?
Both Sec. 3 and Sec. 4 generate a Boogie program for dependence analysis, but for different purposes. The Boogie program in Sec. 3 proves the absence of memory dependencies among $C$ consecutive outer-loop iterations of a nested loop, while the Boogie program in Sec. 4 proves the absence of memory dependencies among a number of sequential loops.
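For instance, the nested-loop check has roughly the following shape (a hypothetical, simplified sketch; `wr` and `rd` stand for the concrete index computations that the generator inlines from the source program, and the bodies given here are illustrative only):

```boogie
// Hypothetical, simplified shape of the Sec. 3 check: any two outer-loop
// iterations kept in flight together by C-slowing (distance below C)
// must access disjoint addresses.
const C: int;
axiom C >= 1;
function wr(j: int): int { j }           // illustrative write index
function rd(j: int): int { j - C - 1 }   // illustrative read index

procedure CSlowLegality(N: int)
{
  var j1, j2: int;
  assume 0 <= j1 && j1 < N;
  assume 0 <= j2 && j2 < N;
  assume j1 < j2 && j2 - j1 < C;   // iterations overlapping under C-slowing
  // Holds here because the dependence distance (C + 1) exceeds C.
  assert wr(j1) != rd(j2);
}
```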
> How difficult would it be to rerun the evaluation with the new Dynamatic's resource sharing?
The version of Dynamatic we use does not support resource sharing, and it would require some engineering effort to update the tool flow to support it. We do not expect resource sharing to change our observations and conclusions.
> "we over-approximate analysing" - What does over-approcimate mean?
Static analysis cannot completely predict run-time behaviour at compile time. Here, "over-approximate" means that we check dependencies between two statements in the source code rather than between two run-time events; a statement pair may therefore be flagged as dependent even if the particular run-time events it generates never conflict. We have clarified this in the draft.
>
> Typos:
> "in order to directly programming"
> "this restrictions"
> "a software programs"
> "schedule produced by conservative analysis"
> "specifications of the dependence constraint"
> "two run-time event"
> "connected by handshake signals between them are independent"
> "when not all the"
> "at most one BB receives at most"
> "order ensures the data to each BB also in strict program order"
> "as whether exists one instruction"
> "denotes whether two basic blocks that may"
> "denote whether the execution of subgraphs 𝑔′ is consecutively"
> "exactly as the program order"
> "such approach also maintain"
> "on the over all throughput"
> "run time"
Fixed.