# Computing Pi with prefetch
contributed by <`ierosodin`>, <`oiz5201618`>
###### tags: `sysprog-hw` `prefetch`
# π calculation
>> Please describe your motivation and references in detail, and explain how you designed the experiments [name=jserv]
# Papers
Sources:
* [When Prefetching Works, When It Doesn’t, and Why](http://www.cc.gatech.edu/~hyesoon/lee_taco12.pdf)
* [Data Cache Prefetching Using a Global History Buffer](http://www.eecg.toronto.edu/~steffan/carg/readings/ghb.pdf)
* [Streaming Prefetch](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.5924&rep=rep1&type=pdf)
* [Adaptive prefetching using global history buffer in multicore processors](http://link.springer.com/article/10.1007/s11227-014-1088-y)
> Papers really do spawn more papers = = [name=阿Hua]
## Motivation
Many hardware- and software-based prefetch mechanisms already exist, yet there is no thorough, complete account of how and when prefetching should be used. The paper therefore investigates:
* What are the limitations and overheads of software prefetching?
* What are the limitations and overheads of hardware prefetching?
* When is it beneficial to use software and/or hardware prefetching?
>> Which paper? Please label it explicitly and provide a hyperlink [name=jserv]
## Notes from the papers (to be organized after reading)
### When Prefetching Works, When It Doesn’t, and Why
> This paper uses many unexplained terms (GHB prefetcher, streaming prefetcher, stride prefetcher, ...) that require reading other papers to understand [name=阿Hua]
<div class="ui-view-area">
We show that when software prefetching targets short array streams, irregular memory address patterns, and L1 cache miss reduction, there is an overall positive impact with code examples.
</div>
<hr>
<div class="ui-view-area">
We also observe that software prefetching
can interfere with the training of the hardware prefetcher, resulting in strong negative
effects, especially when using software prefetch instructions for a part of streams.
</div>
<hr>
![](https://i.imgur.com/NXea47z.png)
<div class="ui-view-area">
As Table I indicates, array and some RDS data structure
accesses can be easily predicted, and thereby prefetched, by software. However,
data structures like hashing are much harder to prefetch effectively
</div>
<hr>
stride of an array => the distance in bytes between the starting addresses of consecutive array elements; for a densely packed array this equals the size of one element
### Streaming Prefetch
<div class="ui-view-area">
Aside from reducing the number of cache misses through code and cache optimizations, there are two major
approaches to reducing the average miss penalty:
<ul>
<li><b>hiding memory latency with multi-level caches</b></li>
<li><b>hiding cache misses with prefetching.</b></li>
</ul>
</div>
<hr>
<div class="ui-view-area">
Clearly, data prefetching has not yet emerged as a widespread commercial optimization. The problem is to understand why the approaches to prefetching that have been proposed or implemented up to now are not satisfactory. There are basically two aspects to distinguish: accuracy of prediction and hardware support for prefetching.
</div><hr>
<div class="ui-view-area">
In addition to the significant compiler overhead of
software prefetching, hardware support is still necessary (prefetch on miss, prefetch buffers ) and the numerous
additional instructions corresponding to prefetch requests and the associated address computations can have a
non-negligible impact on instruction cache performance
</div>
<hr>
### Two prefetching approaches
<div class="ui-view-area">
<b>SW</b>
compiler issues prefetch instructions.
(problems with extra instruction overhead)
<b>HW</b>
hardware decides which memory addresses to prefetch based on past accesses or future instructions.
(problems with lateness, inaccurate addresses, lengthening the critical path)
* Hardware prefetching mechanisms are generally categorized into ==sequential, stride and context methods==.
* Sequential prefetching mechanisms are simple and efficient; they exploit spatial locality and work well for ==simple data structures like arrays==.
* Stride-based methods ==monitor miss addresses and detect constant strides== from loop structures. The idea is to ==build a table==, record miss addresses, and compare successive addresses to find constant strides.
* Context-based methods, also called Markov predictors, use a set of past values for prefetching. They can capture ==linked list and pointer== chasing activities.
</div>
---
# Implementing a cache latency test
Design an experiment that uses rdtsc to measure cycle counts while reading a contiguous block of memory, to observe cache behavior for AVX loads before and after prefetching.
## ASM
Inline asm in C uses AT&T syntax, so the source and destination operands of each instruction are in the reverse order of Intel syntax.
### _mm256_load_pd() assembly
```
movaps:Move Aligned Packed Single-Precision Floating-Point From xmm/mem to mem/xmm
movapd:Move Aligned Packed Double-Precision Floating-Point From xmm/mem to mem/xmm
vmovaps:Move Aligned Packed Single-Precision Floating-Point From ymm/mem to mem/ymm
vmovapd:Move Aligned Packed Double-Precision Floating-Point From ymm/mem to mem/ymm
```
```clike=
double a[4] __attribute__ ((aligned (32))) = {1.0, 2.0, 3.0, 4.0};
asm (
"vmovapd %0, %%ymm0\n\t" //move a to ymm0
:: "m" (a)
);
```
## RDTSC Overhead count
```clike=
uint64_t overhead_count()
{
    uint32_t upper, lower, temp1, temp2;
    uint64_t overhead;
    asm volatile (
        "mfence\n\t"         // memory fence
        "rdtsc\n\t"          // get cpu cycle count (edx:eax)
        "mov %%edx, %2\n\t"  // save high 32 bits
        "mov %%eax, %3\n\t"  // save low 32 bits
        "mfence\n\t"         // memory fence
        "mfence\n\t"
        "rdtsc\n\t"
        "sub %3, %%eax\n\t"  // subtract low halves first
        "sbb %2, %%edx"      // then high halves, with the borrow
        : "=a" (lower),      // a for eax
          "=d" (upper),      // d for edx
          "=&r" (temp1),     // early clobber: written before the asm finishes
          "=&r" (temp2)
    );
    overhead = ((uint64_t)upper << 32) | lower;
    return overhead;
}
```
## Statistical treatment of the data
* 67% confidence interval
* third quartile
## Experimental results
Before adding software prefetch, repeatedly using rand() to pick random array elements and load them into ymm0, the cache-miss rate did not increase noticeably (about 2~3%)
> It seems this did not take the hardware prefetcher into account [name=ierosodin]
# Trying different ways to compute pi
## Geometric Constructions
<div class="ui-view-area">
Newton's approximation of Pi is approximated by<br>
<img src="https://i.imgur.com/Ny4A9EH.gif"/><br>
The detailed mathematical derivation can be found in <a href="http://egyptonline.tripod.com/newton.htm">Newton's approximation of Pi</a><br><br>
Beeler then applied some mathematical transformations to turn these formulas into<br>
<img src="https://i.imgur.com/SIFGXSu.png"/><br><br>
which Dik T. Winter finally wrote as C code
</div>
<hr>
```clike=
int r[2801] = {0};      // r[2800] must start at 0
int i, k, b, d, c = 0;  // c carries the remainder between digit groups
for (i = 0; i < 2800; i++) {
    r[i] = 2000;
}
for (k = 2800; k > 0; k -= 14) {
    d = 0;
    i = k;
    for (;;) {
        d += r[i] * 10000;
        b = 2 * i - 1;
        r[i] = d % b;
        d /= b;
        i--;
        if (i == 0) break;
        d *= i;
    }
    printf("%.4d", c + d / 10000);  // emit four digits of pi per pass
    c = d % 10000;
}
```
### 效能測試
```
Performance counter stats for './a.out' (100 runs):
501 cache-misses # 3.679 % of all cache refs ( +- 10.43% )
13,620 cache-references ( +- 0.76% )
16,997,255 cycles ( +- 0.05% )
0.010747486 seconds time elapsed ( +- 2.41% )
```
>> Remember to update the GitHub link!
>> [color=red][name=課程助教]
## Spigot Algorithm for Pi
<div class="ui-view-area">
An algorithm which generates digits of a quantity one at a time ==without using or requiring previously computed digits==
</div>
<hr>
# Reference
* [When Prefetching Works, When It Doesn’t, and Why](http://www.cc.gatech.edu/~hyesoon/lee_taco12.pdf)
* [stride of an array](https://en.wikipedia.org/wiki/Stride_of_an_array)
* [Data Cache Prefetching Using a Global History Buffer](http://www.eecg.toronto.edu/~steffan/carg/readings/ghb.pdf)
* [Streaming Prefetch](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.5924&rep=rep1&type=pdf)
* [Stride Prefetch](http://www.ics.uci.edu/~amrm/slides/amrm_structure/pta/tsld049.htm)
* [Adaptive prefetching using global history buffer in multicore processors](http://link.springer.com/article/10.1007/s11227-014-1088-y)
* [cache latency measurement](http://stackoverflow.com/questions/21369381/measuring-cache-latencies)
* [asm in c](http://sp1.wikidot.com/gnuinlineassembly)
* [mov for SSE SIMD](http://stackoverflow.com/questions/8671438/how-do-you-move-128-bit-values-between-xmm-registers)
* [MOVAPS & VMOVAPS](http://www.felixcloutier.com/x86/MOVAPS.html)
* [x86 assembly/sse](https://en.wikibooks.org/wiki/X86_Assembly/SSE)
* [printf int64_t in c](http://stackoverflow.com/questions/9225567/how-to-print-a-int64-t-type-in-c)
---
* [pi calculation algorithm](https://en.wikipedia.org/wiki/Category:Pi_algorithms)
* [pi from Dik T. Winter](https://crypto.stanford.edu/pbc/notes/pi/code.html)