# Meeting 2021-05-21
## Timeline
* May 17-June 7: Community bonding period
* June 7-Aug 23: Student coding period
* Sprint (+5 full days)
* June 7-July 16: Coding Phase 1 (11 full days)
* June 14-July 2: AP's exam period (-5 full days)
* July 1-7: HB not very available (off the grid)
* July 16: Phase 1 evals due
* July 17-Aug 23: Coding Phase 2 (11 full days)
* Aug 23: Students submit project along with final evaluation of their mentor(s)
* Aug 30: Mentors submit final evaluations of students
## Capacity
Each GSoC student is expected to work 180 hours in total during the 10 weeks. That averages to 18 hours/week, i.e. 3.6 hours/day. The total work capacity corresponds to **22.5 days if working 8-hour days**.
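The capacity arithmetic above can be sanity-checked in a few lines of R:

```r
## Work-budget arithmetic for the 180-hour GSoC stipend over 10 weeks
total_hours <- 180
weeks <- 10
hours_per_week <- total_hours / weeks  # 18 hours/week
hours_per_day <- hours_per_week / 5    # 3.6 hours/day (5-day week)
full_days <- total_hours / 8           # 22.5 eight-hour days
c(hours_per_week, hours_per_day, full_days)
```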
## Milestones
* All functions implement `useNames=NA/FALSE/TRUE` (preferred order of implementation) using R code
- implement `useNames=NA` first (easy), then `useNames=FALSE` (easy), and lastly `useNames=TRUE` (most work)
- tests will be written along with the implementations; focus on one function at a time, e.g. `sum2()`, add tests for `useNames=NA/FALSE/TRUE`, then continue with the next function
- Tests for `useNames=NA/FALSE/TRUE` available for all functions
- Minimally update the function manual pages and examples
* Identify reverse dependency packages that rely on `useNames=NA` (i.e. test against `useNames=FALSE` and also `useNames=TRUE`). Outcome: What happens if we change the default to `useNames=TRUE`?
* New release on CRAN with `useNames=NA`. This will allow useRs and package maintainers to complain if anything breaks.
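As a rough illustration of the three `useNames` modes, here is a hypothetical pure-R sketch (not the actual matrixStats implementation; `rowSum_sketch()` is a made-up name):

```r
## Hypothetical sketch of useNames = NA/FALSE/TRUE semantics for a row
## summary; the real matrixStats implementations differ in detail.
rowSum_sketch <- function(x, useNames = NA) {
  res <- .rowSums(x, nrow(x), ncol(x))  # fast path; returns an unnamed vector
  if (is.na(useNames)) {
    ## NA: keep the current (legacy) behavior as-is
  } else if (useNames) {
    names(res) <- rownames(x)           # TRUE: name the result
  } else {
    names(res) <- NULL                  # FALSE: explicitly drop names
  }
  res
}

x <- matrix(1:6, nrow = 2, dimnames = list(c("a", "b"), NULL))
names(rowSum_sketch(x, useNames = TRUE))   # "a" "b"
names(rowSum_sketch(x, useNames = FALSE))  # NULL
```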
**Timeline: End of Phase 1 (July 16)**
* Fix bugs reported by end-users and package maintainers
* Change the C code structure such that `validateIndices()` always returns `R_xlen_t*`. Clean up unnecessary macros.
- This will make the next milestone much easier
- We already have all the tests to validate we're not breaking anything
- Outcome: shorter compile times, smaller compiled package/library, fewer exported symbols
* Implement `useNames=NA/FALSE/TRUE` in C code where possible.
- Cleanup work too (see "IMPORTANT" point below)
* `useNames = TRUE` is the new default
* Public announcement, e.g. blog post
- Ideally written by student
- Final delivery for completing GSoC project
* Extra stuff if there's time
**Timeline: End of Phase 2 (Aug 23)**
* Drop support for `useNames = NA`
---
# Meetings 2021-05-?? + more
# matrixStats: Roadmap for 'useNames'
## Other things to work on
### Move coercion code down to C
We have three separate cases:
* `dim. <- as.integer(dim.)` in R code. This can be replaced with `PROTECT(dim = coerceVector(dim, INTSXP));` in C code, and hence reduce code duplication (applies to both row- and col-functions)
* `na.rm <- as.logical(na.rm)`. This statement is not necessary at all with low-level C code, since the statement `narm = asLogicalNoNA(naRm, "na.rm");` also works when `naRm` has a type different from logical. See https://github.com/wch/r-source/blob/79298c499218846d14500255efd622b5021c10ec/src/main/eval.c#L2152 and https://github.com/wch/r-source/blob/f2900da298efe1382277812423b8d242e88603e0/src/main/coerce.c#L1763 for more details.
* **IMPORTANT**: For some functions where the C-level code only handles floating-point matrices, `as.numeric` is used to coerce the data at the R level (example: https://github.com/HenrikBengtsson/matrixStats/blob/3b55d4ebcedc2f127d70b6148b8589be525d78ca/R/rowLogSumExps.R#L42). However, this drops the attributes of the matrix, and hence the dimnames needed for naming. This is solved by replacing the call to `as.numeric` with `PROTECT(lx = coerceVector(lx, REALSXP));` in C code; see https://github.com/AngelPn/matrixStats/blob/f17ed21b98120b57420ce0dfd1c1f865f9ffcc2b/src/rowLogSumExp.c#L19.
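To see why the R-level `as.numeric` call is problematic, compare it with an attribute-preserving coercion. This is a pure-R demonstration; the actual fix is the C-level `coerceVector` call described above:

```r
## as.numeric() drops all attributes, including the dimnames that
## useNames = TRUE needs:
x <- matrix(1:4, nrow = 2, dimnames = list(c("a", "b"), c("A", "B")))
y <- as.numeric(x)
is.null(dim(y))       # TRUE - no longer a matrix, dimnames gone

## storage.mode<- coerces in place while keeping attributes, which is
## what coerceVector(lx, REALSXP) achieves at the C level:
z <- x
storage.mode(z) <- "double"
identical(dimnames(z), dimnames(x))  # TRUE
```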
### Clean up C API
* Macros to handle subsetting of `rows` and `cols` are done for _both integers and doubles_ => 3-by-3 macros + need to pass 2 additional arguments + more ...
```
> nmax <- .Machine$integer.max
> nmax
[1] 2147483647
> log2(.Machine$integer.max)
[1] 31
x <- matrix(0L, nrow = nmax+1, ncol = 1)
length(x) == nmax+1
x <- matrix(0L, nrow = nmax, ncol = nmax)
length(x) == nmax^2
stopifnot(all(dim(x) <= .Machine$integer.max))
=> typeof(dim(x)) == "integer" => R_len_t
rowMedian(X, rows = c(1L, 4L, 5L, 8L))
```
Coercion:
* `int == R_len_t` (zero cost) **IDEAL**
* `int -> R_xlen_t` (requires `int -> long`)
* `dbl -> R_len_t` and `dbl -> R_xlen_t` (always requires coercion)
* `means2()` => indices require `R_xlen_t`, since e.g. `x <- matrix(0L, nrow = nmax+1, ncol = nmax)` gives `length(x) == nmax^2+nmax` => TOO BIG for `R_len_t`
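The key facts above can be checked cheaply, without allocating a huge matrix:

```r
## Matrix dimensions are always stored as integers (R_len_t), even though
## length() of a long vector can exceed .Machine$integer.max (R_xlen_t).
x <- matrix(0L, nrow = 4, ncol = 3)
typeof(dim(x))        # "integer"
.Machine$integer.max  # 2147483647, i.e. 2^31 - 1
stopifnot(all(dim(x) <= .Machine$integer.max))
```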
#### Cons of using `R_len_t` subscripting for matrices:
* Some matrixStats functions work on vectors, which may be longer than `INT_MAX`, requiring us to maintain two sets of index validation, naming, and coercion functions.
* We may be in trouble if the R core team somehow enables matrices with dimensions greater than `INT_MAX`. Our assumption that `R_len_t` is 32-bit may be off-label and hence not necessarily respected in coming releases.
* We may never really get away from coercing to `R_xlen_t`. Note that in low-level code, the column offsets are given in `R_xlen_t` (https://github.com/HenrikBengtsson/matrixStats/blob/3b55d4ebcedc2f127d70b6148b8589be525d78ca/src/rowSums2_lowlevel_template.h#L35). Fundamentally, all matrix indexing is done linearly in column-first order. For matrices with more than `INT_MAX` elements it is hence entirely impossible to avoid coercion from `R_len_t` at some point. For smaller matrices, however, it should be possible to stay within `R_len_t`, but this would require us to write two different versions of the low-level code and dispatch between the cases at runtime.
* `R_len_t` subscripting only gives performance benefits over `R_xlen_t` if the supplied subscripts are integers. Often, useRs disregard the distinction between floats and integers and use floats everywhere [_HB: though, savvy users are after performance and know this; this is one of the objectives for **matrixStats**_]
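The integer-vs-double distinction for subscripts is easy to demonstrate:

```r
## Float literals produce doubles, which always force coercion;
## only integer subscripts could use R_len_t at zero cost.
typeof(c(1, 4, 5, 8))      # "double"  - needs coercion to R_len_t/R_xlen_t
typeof(c(1L, 4L, 5L, 8L))  # "integer" - zero-cost as R_len_t
```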
From these considerations, I consider the "ideal" case of coercion to `R_len_t` not worth the time and effort of making it work reliably. It might not even provide an appreciable performance benefit. To cite Donald Knuth:
> The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.
[_HB: However, that quote is often misused; if you can trim off milliseconds here and there, and there are 100,000s of users who benefit from it, then the amount of time, energy, and CO2 adds up. Having said that, I agree it might be overkill for now, and it's most likely nothing for a GSoC student to work on_]
## But first ...
* Bioc meeting (April 30 @ 17:00-18:30 UTC+02)
* matrixStats article
## Priorities
* Correctness
* Performance
* Avoid breaking existing packages and scripts
## Roadmap and strategy
1. Implement support for useNames=NA (current), FALSE, and TRUE controlled via new argument `useNames` [was: ~~R option `matrixStats.useNames` (defaults to env var `R_MATRIXSTATS_USENAMES`)~~ /HB 2021-06-04]
2. Add package tests for FALSE and TRUE (NA can be handled by revdep checks)
- Use `dimnames(apply())` as the reference for `useNames=TRUE`, plus custom tweaks
3. Run package tests with Valgrind, ASan/UBSan enabled
4. Run `revdepcheck::revdep_check()` on 320+ packages and summarize failure rate with
- `R_MATRIXSTATS=NA`
- `R_MATRIXSTATS=FALSE`
- `R_MATRIXSTATS=TRUE`
- Student implements `useNames` for plain R functions. Run revdep checks as soon as this is done.
- Student implements `useNames` for C functions where it's straightforward. Run revdep checks as new functions are implemented.
5. Simplify C API for `validateIndices()` and `setNames()`
6. Benchmark `useNames = TRUE` vs `useNames = FALSE`
- adopt built-in benchmark reports, cf. <https://github.com/HenrikBengtsson/matrixStats/wiki/Benchmark-reports>
7. Can `useNames = TRUE` become the new default?
- no significant (time & memory) performance loss
- small revdep failure rate
- Are broken revdep packages easy to fix? (serves as a proxy for all unknown user scripts)
- UPDATE 2021-05-20: Probably not; there's a significant overhead from dealing with names. See details below.
8. If we can't make `useNames = TRUE` the default, then
- Introduce argument `useNames = NA` everywhere (100% safe)
- Move to making `useNames = FALSE` the new default (start with functions where `NA` and `FALSE` make no difference)
- For remaining cases, work with revdep packages to make `useNames = FALSE` the new default everywhere
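The idea of using `apply()` names as the test reference could look roughly like this (a sketch; `rowSums2()` with a `useNames` argument is the planned, not yet released, API):

```r
## Sketch: base::apply() provides the reference names for useNames = TRUE.
x <- matrix(1:6, nrow = 2,
            dimnames = list(c("r1", "r2"), c("c1", "c2", "c3")))
ref <- apply(x, 1L, sum)  # named vector; names come from rownames(x)
names(ref)                # "r1" "r2"

## A useNames = TRUE implementation should satisfy, for any function fcn:
##   identical(names(fcn(x, useNames = TRUE)), names(ref))
## e.g., once matrixStats::rowSums2() gains useNames:
##   stopifnot(identical(names(rowSums2(x, useNames = TRUE)), names(ref)))
```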
## Practical issues
### Rev dep checks
Running reverse dependency checks on ~~320~~ 348 packages takes a long time. It's possible to run them on a laptop, but it's tedious and too much to ask of the GSoC student. If they have access to a 16-32 core machine where they can run for 24 hours, they can use that.
* Ask the student to run `revdepcheck::revdep_check()` on a small subset of the packages. This is a very useful skill to learn
* Henrik runs the full ~350 checks in his HPC environment
### Communication with revdep packages
We should encourage the student to interact with revdep packages (via their issue trackers). This is a useful skill; it'll help them get over the intimidation and become part of the R community.
### Markdown
Get the students on board with Markdown. Hopefully this happens naturally.
# Appendix
## Benchmarks
### Benchmark colSum2() and rowSums2() with and without names
There's approximately a 5% overhead from having to deal with row and column names. The microbenchmarking is not perfect and there's some randomness to it, but the below is what you typically see on average.
```r
*** colSums2(<integer 5x3 matrix>) ***
# A tibble: 2 x 13
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
<bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
1 without_names 1.17µs 1.37µs 685251. 0B 27.4 99996 4
2 with_names 1.29µs 1.44µs 669819. 0B 26.8 99996 4
Relative slowdown: 1.05
*** colSums2(<numeric 5x3 matrix>) ***
# A tibble: 2 x 13
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
<bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
1 without_names 1.18µs 1.36µs 678602. 0B 27.1 99996 4
2 with_names 1.28µs 1.5µs 636837. 0B 25.5 99996 4
Relative slowdown: 1.1
*** rowSums2(<integer 5x3 matrix>) ***
# A tibble: 2 x 13
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
<bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
1 without_names 1.21µs 1.43µs 647705. 0B 25.9 99996 4
2 with_names 1.31µs 1.52µs 635773. 0B 25.4 99996 4
Relative slowdown: 1.06
*** rowSums2(<numeric 5x3 matrix>) ***
# A tibble: 2 x 13
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
<bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
1 without_names 1.21µs 1.4µs 669773. 0B 26.8 99996 4
2 with_names 1.31µs 1.55µs 620545. 0B 24.8 99996 4
Relative slowdown: 1.1
```
```r
*** colSums2(<integer 100x30 matrix>) ***
# A tibble: 2 x 13
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
<bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
1 without_names 4.32µs 4.54µs 206186. 1.12KB 8.25 99996 4
2 with_names 4.48µs 4.73µs 197146. 13.16KB 7.89 99996 4
Relative slowdown: 1.04
*** colSums2(<numeric 100x30 matrix>) ***
# A tibble: 2 x 13
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
<bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
1 without_names 4.33µs 4.56µs 208938. 1.12KB 8.36 99996 4
2 with_names 4.46µs 4.74µs 205746. 24.88KB 8.23 99996 4
Relative slowdown: 1.04
*** rowSums2(<integer 100x30 matrix>) ***
# A tibble: 2 x 13
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
<bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
1 without_names 3.75µs 3.97µs 241704. 1.12KB 9.67 99996 4
2 with_names 3.96µs 4.24µs 225136. 13.71KB 9.01 99996 4
Relative slowdown: 1.07
*** rowSums2(<numeric 100x30 matrix>) ***
# A tibble: 2 x 13
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc
<bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl>
1 without_names 4.13µs 4.36µs 219292. 1.12KB 8.77 99996 4
2 with_names 4.29µs 4.53µs 213516. 25.43KB 8.54 99996 4
Relative slowdown: 1.04
```
Code:
```r
library(matrixStats)
set.seed(42)
dim <- c(5, 3)       ## first benchmark run
## dim <- c(100, 30) ## second benchmark run
for (fcn_name in c("colSums2", "rowSums2")) {
  fcn <- get(fcn_name, mode = "function")
  for (mode in c("integer", "numeric")) {
    message(sprintf("*** %s(<%s %dx%d matrix>) ***",
                    fcn_name, mode, dim[1], dim[2]))
    X <- switch(mode,
      integer = sample.int(prod(dim)),
      numeric = rnorm(prod(dim))
    )
    dim(X) <- dim
    dimnames <- list(letters[seq_len(dim[1])], LETTERS[seq_len(dim[2])])
    X_names <- structure(X, dimnames = dimnames)
    gc()
    stats <- bench::mark(
      without_names = fcn(X),
      with_names = fcn(X_names),
      iterations = 100e3,
      relative = FALSE,
      check = FALSE
    )
    print(stats, n_extra = 0L)
    dt <- as.numeric(stats[["median"]])
    cat(sprintf("Relative slowdown: %.3g\n\n", dt[2]/dt[1]))
  }
}
```
### Benchmark rbind:ing many named and unnamed matrices
Results: There's a 20% overhead when rbind:ing 10,000 5x3 matrices if they have row- and column names.
```
# A tibble: 2 x 13
expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc total_time
<bch:expr> <bch:> <bch:> <dbl> <bch:byt> <dbl> <int> <dbl> <bch:tm>
1 without_names 2.34ms 2.81ms 343. 1.14MB 28.2 924 76 2.69s
2 with_names 2.79ms 3.42ms 283. 1.53MB 24.6 920 80 3.25s
Relative slowdown: 1.21
```
FWIW, dropping names before doing this is costly, e.g. `Z2 <- lapply(Y_names, unname)` is 10 times slower and `Y2 <- Y_names; for (kk in seq_along(Y2)) dimnames(Y2[[kk]]) <- NULL; Z2 <- do.call(rbind, Y2)` is 3 times slower.
Code:
```r
library(matrixStats)
set.seed(42)
dim <- c(5, 3)
n <- 10e3

## Two lists of many named and unnamed matrices
X <- array(rnorm(prod(dim)), dim = dim)
dimnames <- list(letters[seq_len(dim[1])], LETTERS[seq_len(dim[2])])
X_names <- structure(X, dimnames = dimnames)
Y <- rep(list(X), times = n)
Y_names <- rep(list(X_names), times = n)
gc()
stats <- bench::mark(
  without_names = do.call(rbind, Y),
  with_names = do.call(rbind, Y_names),
  iterations = 1000,
  relative = FALSE,
  check = FALSE
)
print(stats, n_extra = 0L)
dt <- as.numeric(stats[["median"]])
cat(sprintf("Relative slowdown: %.3g\n\n", dt[2]/dt[1]))
```
# Add useNames argument at R level
```
namesFromRownames <- function(X, rows = NULL) {
  names <- rownames(X)
  if (!is.null(names) && !is.null(rows)) names <- names[rows]
  names
}

namesFromColnames <- function(X, cols = NULL) {
  names <- colnames(X)
  if (!is.null(names) && !is.null(cols)) names <- names[cols]
  names
}

foo <- function(X, rows = NULL, useNames = NA) {
  res <- do_something(X)
  if (!is.na(useNames)) {
    if (useNames) {
      names(res) <- namesFromRownames(X, rows)
    } else {
      names(res) <- NULL
    }
  }
  res
}
```
# rowSums2() example
```
rowSums2 <- function(x, rows = NULL, cols = NULL, na.rm = FALSE,
                     dim. = dim(x), useNames = NA, ...) {
  dim. <- as.integer(dim.)
  na.rm <- as.logical(na.rm)
  has_nas <- TRUE
  res <- .Call(C_rowSums2, x, dim., rows, cols, na.rm, has_nas, TRUE)
  ## Update names attribute?
  if (!is.na(useNames)) {
    if (useNames) {
      names <- rownames(x)
      if (!is.null(names)) {
        if (!is.null(rows)) names <- names[rows]
        names(res) <- names
      }
    } else {
      names(res) <- NULL
    }
  }
  res
}
```