$$
\newcommand{\pending}{\text{Pending}}
\newcommand{\completed}{\text{Completed}}
\newcommand{\failed}{\text{Failed}}
\newcommand{\cancelled}{\text{Cancelled}}
\newcommand{\running}{\text{Running}}
\newcommand{\paused}{\text{Paused}}
$$
## 1. Introduction
One standard feature of profilers is a tree view in which the execution time of a proc can be broken down into its own execution time plus the execution times of the procs it calls (its _children_ in the tree). Fig. 1 shows an example of such a performance tree, taken from the [RStudio](https://posit.co/) profiler as it runs the program in Listing 1.
<div style="display: flex; justify-content: center; padding: 5px 0 25px 0; border-style: dashed; border-width: 1.5px; border-color: lightgray">
<img src="https://hackmd.io/_uploads/SkuN8rP6p.png"/>
</div>
**Figure 1.** Tree view in the RStudio profiler.
```r=
parent <- function() {
  lapply(1:1000000, function(x) x + 1)
  child1()
  child2()
}

child2 <- function() {
  lapply(1:1000000, function(x) x + 1)
  NULL
}

child1 <- function() {
  child2()
}

parent()
```
**Listing 1.** R program giving rise to the performance tree in Fig. 1.
The main reason the tree view is useful is that it provides a _snapshot of causality_: each root-to-leaf path represents a call chain that actually happened, with the contribution of each individual call, as well as the _cause_ for those calls happening (i.e., who the caller was), clearly mapped out. While this makes performance trees very handy for tracking down issues, the downside is that they can be expensive to build and maintain.
**Flattened performance ranks.** To make our lives easier at first, we would be happy with a proxy for a performance tree that is much simpler and cheaper to build: we call those _flattened performance ranks_, or just performance ranks for short (Figure 2). A performance rank is a simple one-dimensional flattening of the tree in which performance numbers are aggregated across paths and nodes: "exec time w/ children" represents a path aggregation in Figure 2, and those aggregates are further combined across nodes referring to the same proc. Once we place those numbers on the real line, we can rank the procs and see their overall time expenditure, as well as the time expenditure stemming from their actions (i.e., children they started).
This is of course not nearly as informative as a performance tree, but it is more than enough for us to identify that the proc that is issuing slow calls is `parent`, and that the slow computation that may warrant optimization is in `child2`.
<div>
<img src="https://hackmd.io/_uploads/S1lxZUv6a.png" style="padding: 20px 0 40px 0"/>
</div>
**Figure 2.** Performance tree and equivalent flattened rank.
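To make the aggregation concrete, below is a minimal sketch of the flattening itself (the `PerfNode` type and proc names are hypothetical, not the Chroprof API): "exec time w/ children" is computed per path, and entries for nodes naming the same proc are then summed together.
```nim
import std/tables

type
  PerfNode = ref object
    procName: string        # proc this node refers to
    ownTime: float          # own execution time, in seconds
    children: seq[PerfNode]

proc timeWithChildren(n: PerfNode): float =
  ## Path aggregation: a node's own time plus everything below it.
  result = n.ownTime
  for c in n.children:
    result += timeWithChildren(c)

proc flatten(n: PerfNode, rank: var Table[string, float]) =
  ## Node aggregation: nodes naming the same proc share one rank entry.
  rank.mgetOrPut(n.procName, 0.0) += timeWithChildren(n)
  for c in n.children:
    flatten(c, rank)
```
Sorting the resulting table by value yields the rank: one entry per proc, placed on the real line.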
Regardless of whether we build a full-fledged performance tree, a flattened performance rank, or a call graph, the key point is that they all depend on how parent/child relationships are defined, and that is where things start to get murky with Chronos. We discuss some of the issues involved in Section 2 and explain how we have so far addressed them in Section 3. Finally, in Section 4 we provide a real-world example of how performance ranks can be sampled and transformed into time series to follow the execution and debug a real issue.
## 2. Challenges in Building a Performance Tree in Chronos
**Differing semantics.** The meaning of "child" and "parent" in Chronos differs from what one typically expects. Indeed, in Chronos, a Future $f_c$ becomes a child of another Future $f_p$ if and only if:
* $f_p$ is an async Future;
* $f_p$ awaits for $f_c$.
In Chronos, therefore, the _child of_ relationship is just the inverse of the _awaits for_ relationship; i.e., if $f_p$ _awaits for_ $f_c$, then $f_c$ is a child of $f_p$. This is rather different from the caller/callee relationship found in a regular program, and can lead to surprising outcomes if we adopt it as-is.
To see why, consider what would happen if we were to accrue execution times from children into parents as we typically do when building a performance tree for the following program:
```nim=
proc child(): Future[Something] {.async.} =
  result = # perform some very expensive computation
           # and block the event loop for 1h

proc parent1(): Future[Future[Something]] {.async.} =
  result = child()
  discard await result

proc parent2(fosterChild: Future[Something]) {.async.} =
  discard await fosterChild

let fut = waitFor parent1()
waitFor parent2(fut)
```
Since the Future generated by the call to `child` is awaited by both `parent1` (line $7$) and `parent2` (line $10$), we would count the execution time of both as having taken $1\text{h}$ when factoring in their children (Fig. 3), which does not make a lot of sense. Indeed, it may actually throw off a user trying to find the cause of the issue, as they inspect `parent2` only to find that it is not to blame for the performance problem; i.e., it did not _cause_ it. If we aggregate those times over longer paths, things get even more confusing, particularly since awaits form DAGs rather than trees.
<div>
<img src="https://hackmd.io/_uploads/B1IURPwT6.png" style="padding: 20px 0 25px 0"/>
</div>
**Figure 3.** Performance metrics on parent and children.
**Garbage collection.** Suppose we still wanted to aggregate metrics from children into parents using the _awaits for_ relationship above. Execution metrics are tracked per-Future, which means that if we want to accrue child execution time at the parent, we must keep the child metrics around until no other parents can show up. Because Chronos allows awaiting on Futures even after they have finished, however, new parents can be added arbitrarily far into the future.
Indeed, as long as a reference to $f_c$ exists, other async procs can still await on $f_c$ and become its parents, well after $f_c$ has reached a finished state. This leaves us with an uncomfortable problem: either we keep metrics for finished Futures forever, and leak memory, or we rely on [broken finalizers](https://github.com/nim-lang/Nim/issues/4851) to get rid of them, neither of which is a compelling solution.
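To see the problem concretely, consider the minimal sketch below (assuming the `chronos` package is available; `compute` and `lateParent` are hypothetical names). Awaiting a finished Future is perfectly legal in Chronos, so under the _awaits for_ definition, `f` gains a brand-new parent long after it completed:
```nim
import chronos

proc compute(): Future[int] {.async.} =
  return 42

proc lateParent(f: Future[int]) {.async.} =
  # f completed long ago; this await is still legal and returns
  # immediately with the stored value, making us f's "parent".
  discard await f

let f = compute()      # no awaits inside, so f finishes right away
waitFor lateParent(f)  # f gains a new "parent" after the fact
```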
## 3. Parent/Child Relationships in Chroprof
Chroprof tackles the above issues by defining parent/child relationships that are closer to how they are defined in a regular program. For given Futures $f_p$ and $f_c$, we say that $f_c$ is a _child of $f_p$_ if and only if:
1. $f_p$ is a Future;
2. $f_c$ has been created by $f_p$ or, equivalently[^1], $f_c$ was created while $f_p$ was in $\running$ state.
What defines the parent-child relationship, in this case, is _creation_, not whether or not some Future awaited on another.
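A minimal sketch of how this can be tracked follows (the bookkeeping is hypothetical, not the actual Chroprof internals). Because Chronos is single-threaded, at most one Future is in $\running$ state at any instant, and that Future is the parent of anything created meanwhile:
```nim
import std/tables

type FutId = uint64

var running: FutId = 0                    # Future currently in Running state (0 = none)
var parentOf = initTable[FutId, FutId]()  # child id -> parent id

proc onFutureCreated(child: FutId) =
  # Creation defines parenthood: whoever is running right now is the
  # parent of `child`, and this never changes afterwards.
  parentOf[child] = running
```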
The new definition corrects the anomaly we had seen in Figure 3. Indeed, `parent2` is no longer a parent of `child`, since it did not _create_ it, and therefore does not get `child`'s metrics accounted into its "exec time with children".
<div>
<img src="https://hackmd.io/_uploads/ByUPRwvTa.png" style="padding: 20px 0 25px 0"/>
</div>
**Figure 4.** Performance metrics of parent and children in Chroprof.
**Garbage collection.** Our new definition for parent/child means that the parent for a Future is defined at the moment it is created and never changes. This means we can easily track the lifetime of parent and child Futures on the profiler side and garbage collect partial metrics once both are completed, no finalizers needed.
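Extending the hypothetical bookkeeping sketched above, the collection rule becomes a simple predicate; once it holds, the partial metrics for a Future can be dropped immediately:
```nim
import std/sets

var finished = initHashSet[FutId]()

proc canCollect(fut: FutId): bool =
  ## Safe to drop partial metrics: `fut` and its fixed, creation-time
  ## parent have both finished, so no new parent can ever appear.
  let p = parentOf.getOrDefault(fut, 0)
  result = fut in finished and (p == 0 or p in finished)
```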
**Execution overlap.** Finally, this definition [also helps with correcting metrics in execution overlap](https://hackmd.io/eQ_ouNV4QZe0TG334_gkFg#33-Children-and-Overlaps-in-Run-States): since we are tracking that $f_p$ created $f_c$, we know that the first iteration of $f_c$ happened as part of $f_p$'s execution, and can deduct the correct value once $f_c$ completes.
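As a minimal sketch of the deduction (hypothetical accounting, using `std/times` durations rather than whatever the profiler tracks internally):
```nim
import std/times

proc correctedOwnTime(parentSlice, childFirstStep: Duration): Duration =
  ## The child's first iteration ran synchronously inside the parent's
  ## run slice, so it is subtracted from the parent's own time.
  result = parentSlice - childFirstStep
```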
## 4. Performance Ranks in Practice
Performance ranks built on top of the Chroprof parent/child relationship have proven valuable. Figure 5 ([click here if the plot does not load](https://storage.googleapis.com/codex-static2/public/perfplot.html)) shows one such occasion, where [Codex](https://github.com/codex-storage/nim-codex/) was slowing down during an upload/download experiment. The chart is constructed by looking at metrics aggregated in performance ranks and de-accumulating them to provide a sort of "CPU usage" view of the Chronos event loop over 60s time intervals.
It is easy to spot that `taskHandler` is doing something strange, progressively occupying more and more of the event loop until collapse. By spotting the proc immediately below it, one can also infer that `taskHandler` is doing too many `localLookup`s, which turns out to be the case as [one inspects the code for taskHandler](https://github.com/codex-storage/nim-codex/blob/master/codex/blockexchange/engine/engine.nim#L563). By cross-referencing this with logs, [I was able to determine that cancellations were not working](https://github.com/codex-storage/nim-codex/pull/714).
<iframe width="100%" height="500" src="https://storage.googleapis.com/codex-static2/public/perfplot.html" frameborder="0"></iframe>
**Figure 5.** Profiler metrics from a Codex run with a performance bug in `taskHandler`. [Click here if the plot does not load](https://storage.googleapis.com/codex-static2/public/perfplot.html).
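As a rough sketch of how such a view can be derived (the shapes below are hypothetical, not the actual pipeline), one samples the cumulative per-proc totals from the rank every 60s and differences consecutive samples:
```nim
import std/tables

proc deaccumulate(prev, cur: Table[string, float]): Table[string, float] =
  ## Per-interval usage: difference of the cumulative "exec time w/
  ## children" totals between two consecutive samples.
  for name, total in cur:
    result[name] = total - prev.getOrDefault(name, 0.0)
```
Plotting these per-interval values over time, one series per proc, yields the usage view in Figure 5.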
[^1]: Since Chronos is single-threaded, the assumption here is that if $f_c$ is created while $f_p$ was in $\running$ state, then it must be that the code running in $f_p$ created $f_c$ as part of its execution.