# Group 144 - Homework 2
## Exercise 1
Array A storage order (row-major): A[0][0]-A[0][n-1], A[1][0]-A[1][n-1], ...
Listing 1 read order: A[0][0]-A[0][n-1], A[1][0]-A[1][n-1], ...
Listing 2 read order: A[0][0]-A[n-1][0], A[0][1]-A[n-1][1], ...
1. The read cache miss rate is 1/16: each miss loads a full cache line of 16 elements, and the 15 elements following the missed one are all used in the subsequent iterations.
2. The read cache miss rate is 16/16 (= 100%): every iteration accesses a different row, so a new cache line is loaded each time, and none of its remaining elements are reused before the line is evicted.
3. Listing 1 cannot be improved further, since the array is contiguous memory read from left to right, and every miss loads a full block whose elements are all used afterwards. Listing 2 can be fixed simply by swapping the i- and j-loops, as sketched below.
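A minimal sketch of the fixed Listing 2 (hypothetical; the actual loop body of Listing 2 is not reproduced in this write-up, so a plain element sum is used as a stand-in):
```
// hypothetical stand-in for Listing 2 with the loops swapped:
// i (rows) outside, j (columns) inside, so A is read in storage order
long sum_row_major(int **A, int n) {
    long s = 0;
    for (int i = 0; i < n; i++)        // row index: outer loop
        for (int j = 0; j < n; j++)    // column index: inner loop, contiguous accesses
            s += A[i][j];              // stand-in for Listing 2's loop body
    return s;
}
```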
## Exercise 2
```
int count_number_of_edges(int** A, int n) {
    int e = 0;
    // compute number of edges
    #pragma omp parallel for collapse(2) reduction(+: e)
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            e += A[i][j];
    return e;
}
```
## Exercise 3
```
void compute_outdeg_and_indeg(int** A, int n, int outdeg[], int indeg[]) {
    // compute outdeg and indeg for each graph node
    // out-degree should be stored in outdeg
    // in-degree should be stored in indeg
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        indeg[i] = 0;
        outdeg[i] = 0;
    }
    #pragma omp parallel for collapse(2)
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            // with collapse(2), several threads may update the same
            // indeg[j] or outdeg[i], so the updates must be atomic
            #pragma omp atomic update
            indeg[j] += A[i][j];
            #pragma omp atomic update
            outdeg[i] += A[i][j];
        }
    }
}
```
## Exercise 4
1. a[k] contains the ID of the thread that executed the loop body for i = k. t[i] contains the total number of loop iterations that thread i executed. (A sketch of how such a trace can be recorded follows after the tables.)
2.
| case / i | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 |
| ---------- | - | - | - | - | - | - | - | - | - | - | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- |
| static | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 4 |
| static, 1 | 0 | 1 | 2 | 3 | 4 | 0 | 1 | 2 | 3 | 4 | 0 | 1 | 2 | 3 | 4 | 0 | 1 | 2 | 3 | 4 |
| static, 4 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 4 |
| static, 5 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 3 | 3 | 3 | 3 | 3 |
| dynamic, 1 | 0 | 1 | 2 | 3 | 4 | 0 | 1 | 2 | 3 | 4 | 0 | 1 | 2 | 3 | 4 | 0 | 1 | 2 | 3 | 4 |
| dynamic, 2 | 0 | 0 | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 | 0 | 0 | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 |
| guided, 4 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 4 | 4 | 4 | 4 | 2 | 2 | 2 | 2 | 3 | 3 | 3 | 3 |
| case / t | 0 | 1 | 2 | 3 | 4 |
| ---------- | - | - | - | - | - |
| static | 4 | 4 | 4 | 4 | 4 |
| static, 1 | 4 | 4 | 4 | 4 | 4 |
| static, 4 | 4 | 4 | 4 | 4 | 4 |
| static, 5 | 5 | 5 | 5 | 5 | 0 |
| dynamic, 1 | 4 | 4 | 4 | 4 | 4 |
| dynamic, 2 | 4 | 4 | 4 | 4 | 4 |
| guided, 4 | 4 | 4 | 4 | 4 | 4 |
3.
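For part 1, here is a minimal sketch of how such a trace could be recorded (our own instrumentation; the exercise's original loop is not reproduced here, and `schedule(runtime)` stands in for the schedules listed in the tables):
```
#include <omp.h>

#define N 20   // loop iterations, as in the tables above
#define P 5    // number of threads, as in the tables above

int a[N];      // a[k]: thread that executed iteration k
int t[P];      // t[p]: number of iterations executed by thread p

void record_schedule(void) {
    omp_set_num_threads(P);
    // the schedule is chosen at run time via OMP_SCHEDULE, e.g. OMP_SCHEDULE="dynamic,2"
    #pragma omp parallel for schedule(runtime)
    for (int i = 0; i < N; i++)
        a[i] = omp_get_thread_num();

    // derive t from a after the parallel loop to avoid races
    for (int p = 0; p < P; p++)
        t[p] = 0;
    for (int i = 0; i < N; i++)
        t[a[i]]++;
}
```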
## Exercise 5
1. See the geometric idea below for the calculation of the work function. The work is proportional to the number of started tasks. The numbers in the rectangle (i.e. $s$) represent the number of tasks that require the corresponding entries; each of them can be calculated by multiplying the lengths of its sides, as shown.

The number of started tasks is in $O(n^2 \cdot k^2)$.
2. The implementation is correct but inefficient for two reasons:
(1) The threads have to wait for each other; e.g., for values near the side of the triangle, b0 may finish much faster than b1.
(2) Values get recomputed many times, because intermediate results are discarded immediately.
3. The minimal amount of work ($n \cdot k$) can be achieved with a more memory-heavy approach: store the Pascal triangle in a 2D array of roughly size $n \cdot k$ and fill it row by row (for increasing $n$). Within a row, the additions can run in parallel; since each row depends on the previous one, this is where parallelization ends. A sketch of this approach is given below.
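A minimal sketch of this approach (our own illustration, not part of the submitted code; the function name `binom_rowwise` is made up):
```
#include <stdlib.h>

// sketch: fill the Pascal triangle row by row in an (n+1) x (k+1) table;
// within a row all additions are independent and can run in parallel,
// while the rows themselves are computed sequentially (row r needs row r-1)
long long binom_rowwise(int n, int k) {
    long long (*T)[k + 1] = malloc((n + 1) * sizeof *T);
    for (int j = 0; j <= k; j++)
        T[0][j] = (j == 0);                  // row 0: C(0,0)=1, C(0,j>0)=0
    for (int r = 1; r <= n; r++) {
        T[r][0] = 1;
        int ub = (r < k) ? r : k;            // C(r,j) = 0 for j > r
        #pragma omp parallel for
        for (int j = 1; j <= ub; j++)
            T[r][j] = T[r - 1][j - 1] + T[r - 1][j];
        for (int j = ub + 1; j <= k; j++)
            T[r][j] = 0;
    }
    long long result = T[n][k];
    free(T);
    return result;
}
```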
## Exercise 6
1. Since `sum` is a shared variable, the threads must not write to it without proper synchronization (locks, atomics, or a reduction). The unprotected `sum = 0;` at the beginning of the parallel region is therefore a race condition: some threads may already be in the for loop while another thread writes `0` to `sum`. In addition, `#pragma omp for reduction(+:a)` reduces the wrong variable and should be `#pragma omp for reduction(+:sum)`.
2. The `min` and `max` values are not computed correctly. Similarly to 1., we need to initialize `mini` and `maxi` outside of the `parallel` region.
Here is a version that produces correct results:
```
int sum = 0, min, max;
int mini = 0, maxi = 0;
int i;
#pragma omp parallel private(i)
{
    #pragma omp for reduction(+:sum)
    for (i = 0; i < n; i++) {
        sum += a[i];
    }
    #pragma omp for
    for (i = 1; i < n; i++) {
        #pragma omp critical(MIN)
        if (a[i] < a[mini]) mini = i;
        #pragma omp critical(MAX)
        if (a[i] > a[maxi]) maxi = i;
    }
    #pragma omp single nowait
    {
        max = a[maxi];
        min = a[mini];
    }
}
```
3. This code is a textbook example of serialization: the two `critical` sections are executed on every iteration, so the threads spend most of their time waiting for each other. It would be better either to make `mini` and `maxi` private to each thread and combine the results afterwards, or to declare them as shared arrays with one slot per requested thread. We decided to make `mini` and `maxi` private and have each thread update the shared `min` and `max` once at the end. This is much faster because the only waiting happens once per thread for a cheap operation, while the loop itself runs fully in parallel.
```
int sum = 0, min = a[0], max = a[0];   // seed min/max with a[0], not 0
int mini, maxi;
int i;
#pragma omp parallel private(i, mini, maxi)
{
    #pragma omp for reduction(+:sum)
    for (i = 0; i < n; i++) {
        sum += a[i];
    }
    mini = maxi = 0;   // per-thread candidates
    #pragma omp for
    for (i = 1; i < n; i++) {
        if (a[i] < a[mini]) mini = i;
        if (a[i] > a[maxi]) maxi = i;
    }
    // combine the per-thread results; executed only once per thread
    #pragma omp critical(MIN)
    if (min > a[mini]) min = a[mini];
    #pragma omp critical(MAX)
    if (max < a[maxi]) max = a[maxi];
}
```
## Exercise 7
1. relative speedup:
n = 460
| ID/p | 1 | 2 | 4 | 8 | 16 | 24 | 32 |
| ---- | - | ---- | ----- | --- | --- | --- | --- |
| 0 | 1 | 1.157| 1.315 | 1.320| 1.320| 1.316 | 1.136|
| 1 | 1 | 1.024| 1.024 | 1.017| 0.998| 1.036| 1.027|
| 2 | 1 | 1.273| 1.253 | 1.150| 1.259| 1.259| 0.896|
| 3 | 1 | 1.087| 1.445 | 2.270| 1.751| 0.372| 0.159|
n = 1350
| ID/p | 1 | 2 | 4 | 8 | 16 | 24 | 32 |
| ---- | - | ---- | --- | --- | --- | --- | --- |
| 0 | 1 | 0.990| 0.992| 0.990| 0.986| 0.998| 0.998|
| 1 | 1 | 1.037| 1.033| 1.035| 1.036| 1.032| 1.009|
| 2 | 1 | 0.997| 0.992| 0.993| 0.998| 0.998| 0.989|
| 3 | 1 | 1.658| 2.805| 4.324| 6.760| 3.582| 3.093|
2. absolute speedup (let $T_{seq}$ be the OpenBLAS time with p=1; all values for n=1350):
| ID/p | 1 | 2 | 4 | 8 | 16 | 24 | 32 |
| ---- | - | ---- | --- | --- | --- | --- | --- |
| 0 | 0.027| 0.027| 0.027| 0.027| 0.026| 0.027| 0.027|
| 1 | 0.096| 0.100| 0.100| 0.100| 0.100| 0.099| 0.097|
| 2 | 0.019| 0.019| 0.019| 0.019| 0.019| 0.019| 0.019|
| 3 | 1 | 1.658| 2.805| 4.324| 6.760| 3.582| 3.093|
3. plots for n=1350:
Since the OpenBLAS speedup dominates the charts and hardly anything can be seen of the other algorithms when it is included, we provide three charts, only one of which includes the OpenBLAS data. For OpenBLAS these numbers are the same for absolute and relative speedup anyway, because its time with p=1 was chosen as the serial time.



Weak scaling analysis
Work is fixed: $w = 2 \cdot 1800^3$ operations
1) $$1 = \frac{n^3}{1800^3 \cdot p} \;\Rightarrow\; n = 1800 \cdot p^{1/3}$$ (a spot check of this formula follows after item 5)
2)
| p  | n    |
| -- | ---- |
| 1  | 1800 |
| 2  | 2267 |
| 4  | 2857 |
| 8  | 3600 |
| 16 | 4535 |
| 24 | 5192 |
| 32 | 5714 |
3)

4)

5)
Both algorithms 1 and 3 appear to be weakly scalable to a good approximation, although it is more clearly visible for algorithm 1.
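As a spot check of the table in 2) (our own arithmetic, rounded as in the table):
$$n(4) = 1800 \cdot 4^{1/3} \approx 2857, \qquad n(8) = 1800 \cdot 8^{1/3} = 3600.$$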
## Exercise 8
1. pseudo code + complexity:
```
function calc_transitive_closure(A, n):
    ext = I + A    # extended A
    m = 1
    do:
        old = ext
        ext = ext * ext
        m *= 2
    while old != ext && m < n
    return ext
```
$T \in O(\log(n) \cdot T_{mult}) = O(\log(n) \cdot n^3 / p)$
2. The algorithm is not work-optimal: it performs $O(\log(n))$ matrix multiplications of $O(n^3)$ operations each, i.e. $O(n^3 \log(n))$ work in total, while the sequential complexity is in $O(n^3)$.
3. mmm_power C implementation
Our first attempt crashed with a segmentation fault: `old` was allocated as one flat block of $n \cdot n$ elements but then accessed through `base_t**` row pointers (`old[i][j]`) that were never set. The version below allocates the row pointers explicitly.
```
#include "parmmm.h"
#include "stdlib.h"
#include "utils.h"
// destructive (inplace) matrix power computation
void mmm_power(base_t **a, int n) {
int i, j;
#pragma omp parallel for
for(i=0; i<n; i++) // a = a + I
a[i][i] = 1;
int m = 1;
base_t **old = (base_t**)malloc(sizeof(base_t[n][n]));
int diff;
do{
#pragma omp parallel for collapse(2)
for(i=0; i<n; i++)
for(j=0; j<n; j++)
old[i][j] = a[i][j];
mmm_3j(old, old, a, n);
m *= 2;
//check difference
diff = 0;
#pragma omp parallel for collapse(2)
for(i=0; i<n; i++)
for(j=0; j<n; j++)
if(old[i][j] != a[i][j])
diff=1;
}while(diff>0 && m<n);
free(old);
}
```