
2024q1 Homework1 (lab0)

Presenter: popo8712
Project presentation video
contributed by < cheezad >

Reviewed by ChenFuhuangKye

  1. You can annotate git commit information during the development process.

Development Environment

$ gcc --version
gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0

$ lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         39 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  12
  On-line CPU(s) list:   0-11
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
    CPU family:          6
    Model:               158
    Thread(s) per core:  2
    Core(s) per socket:  6
    Socket(s):           1
    Stepping:            10
    CPU max MHz:         4500,0000
    CPU min MHz:         800,0000
    BogoMIPS:            5199.98

Development Process

lab0-c

q_new()

The original code did not check whether malloc succeeded after malloc(sizeof(struct list_head)); after the instructor's reminder, I added the check.

struct list_head *q = malloc(sizeof(struct list_head));
+ if (!q)
+     return NULL;
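For context, a minimal sketch of the whole function with the check in place, assuming the INIT_LIST_HEAD() helper from list.h:

/* Minimal sketch of q_new() with the NULL check in place. */
struct list_head *q_new()
{
    struct list_head *q = malloc(sizeof(struct list_head));
    if (!q)
        return NULL;
    INIT_LIST_HEAD(q); /* empty queue: prev and next point back to q */
    return q;
}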

Understanding the data structures in lab0

This is the first time I have worked with this kind of program, so I did a bit of organizing.

No need to emphasize "the first time": in information technology there is a "first time" every day, and if you never ran into such things you probably would not be in this field.

list_head : a doubly linked list node
prev : points to the previous node
next : points to the next node
If the list is empty, the head points to itself.

struct list_head {
    struct list_head *prev;
    struct list_head *next;
};

Avoid unnecessary item indentation (i.e. * and -); write in clear, precise, and fluent Chinese.

element_t : the structure that holds each element of the queue
value : the value stored in the node
list : the doubly linked list node

typedef struct {
    char *value;
    struct list_head list;
} element_t;

queue_contex_t
q : points to the head of the queue
chain : links the heads of different queues together
size : the length of the queue
id : the index of the queue

typedef struct {
    struct list_head *q;
    struct list_head chain;
    int size;
    int id;
} queue_contex_t;

Linux Kernel API

Using these ready-made routines makes my own code more concise.

Avoid unnecessary item indentation (i.e. * and -); write in clear, precise, and fluent Chinese.

範例:list_splice

  • The nodes of list are attached right after head (at the front of head's list)
  • The result is that head is followed first by the nodes of list, and then by head's own original nodes

Improve your Chinese expression.

static inline void list_splice(struct list_head *list, struct list_head *head)
{
    struct list_head *head_first = head->next;
    struct list_head *list_first = list->next;
    struct list_head *list_last = list->prev;

    if (list_empty(list))
        return;

    head->next = list_first;
    list_first->prev = head;

    list_last->next = head_first;
    head_first->prev = list_last;
}
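As a hedged usage sketch, a q_merge()-style routine could use list_splice to move every node of one queue onto the front of another in O(1); merge_two_queues and its parameter names are hypothetical, and the kernel API also provides list_splice_init, which combines the two steps below.

/* Hypothetical sketch: splice all nodes of `second` right after the
 * head of `first`, then leave `second` as an empty list again. */
void merge_two_queues(struct list_head *first, struct list_head *second)
{
    if (!first || !second)
        return;
    list_splice(second, first); /* second's nodes now follow first's head */
    INIT_LIST_HEAD(second);     /* reset second so it is a valid empty list */
}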

Integrating the web server

After running the web command inside qtest, visiting http://<myip>:9999/new in a browser (before changing any web-related code) gives the following result:

$ ./qtest 
cmd> web
listen on port 9999, fd is 3
cmd> l = []
cmd> Unknown command 'favicon.ico'

Before web is started, pressing Tab auto-completes the rest of a command, but once the web server is running this feature disappears.


lib/list_sort.c Study

Research Paper

Golin, M., Sedgewick, R. (1993) Queue-mergesort

Given a queue Q of lists:

  1. Pick out the first two lists
  2. Merge them together
  3. Put the result at the end (tail) of the queue
  4. Repeat until there is only one list left in the queue
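Before walking through the diagrams below, here is a minimal sketch of these steps in C. It only illustrates queue-mergesort on a plain singly linked list of sorted runs; struct node, struct run and merge() are names made up for the sketch, not lab0-c or lib/list_sort.c code.

#include <stddef.h>

struct node {               /* one element of a sorted list */
    int value;
    struct node *next;
};

struct run {                /* one sorted list waiting in the queue */
    struct node *list;
    struct run *next;
};

/* Merge two sorted lists of nodes into one sorted list. */
static struct node *merge(struct node *a, struct node *b)
{
    struct node head = {0, NULL}, *tail = &head;
    while (a && b) {
        if (a->value <= b->value) {
            tail->next = a;
            a = a->next;
        } else {
            tail->next = b;
            b = b->next;
        }
        tail = tail->next;
    }
    tail->next = a ? a : b;
    return head.next;
}

/* Queue-mergesort: dequeue the first two runs, merge them, enqueue the
 * result at the tail, and repeat until a single run remains. */
static struct node *queue_mergesort(struct run *head, struct run *tail)
{
    while (head && head->next) {
        struct run *first = head, *second = head->next;
        head = second->next;    /* dequeue the first two runs */
        first->list = merge(first->list, second->list);
        first->next = NULL;
        if (!head)              /* the queue held exactly two runs */
            head = first;
        else
            tail->next = first; /* enqueue the merged run at the tail */
        tail = first;
        /* in real code the now-unused `second` descriptor would be freed */
    }
    return head ? head->list : NULL;
}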

(Graphviz diagram: queue Q holding five single-element lists D, A, B, C, F; Head points to the first list and Tail to the last.)





Pick out the first two lists and merge them together:







(Graphviz diagram: the first two lists D and A are merged into the sorted two-element list A→D, which for the moment sits at the front of the queue ahead of B, C, F.)





Move the merged list to the end of the queue:







(Graphviz diagram: the merged list A→D has been moved to the tail of the queue; the queue now holds B, C, F, A→D.)





Repeat until there is only one list left in the queue







(Graphviz diagram: after repeated merges only one list remains in the queue, the fully sorted A→B→C→D→F.)





To prove that queue-mergesort is an optimal mergesort, we check that the number of comparisons it performs in the worst case is no more than that performed by any other mergesort.
Define the weight of a binary merge tree T to be w(T), the sum of the weights of all its internal nodes, where each internal node's weight is the number of leaves below it.
Merge tree T for queue-mergesort run on 11 elements:







(Graphviz diagram: merge tree T for 11 elements; the internal node weights are 11 at the root, then 7 and 4, then 4, 3, 2, 2, then 2, 2, 2, and each of the 11 leaves has weight 1.)





After adding up the weight of merge tree T's internal nodes, w(T) = 39.
Queue-mergesort is an optimal mergesort: w(T) ≤ w(T′) for all merge trees T′ with n leaves.
Queue-mergesort and top-down mergesort are both optimal mergesorts, but bottom-up mergesort is not. However, with a simple modification we can make bottom-up mergesort optimal: at the end of a pass, when there is an odd number of lists, we do not leave the last list by itself but merge it with the new list that was just created. The merge tree created this way becomes optimal.

Bottom-up Mergesort: A Detailed Analysis

Bottom-up mergesort, like the top-down variant, follows the divide-and-conquer paradigm. However, bottom-up uses a "power of two" splitting strategy instead of the "half and half" strategy of the top-down variant, which makes it possible to write it iteratively rather than recursively. For example, with n = 11, top-down splits the input into 6 + 5, while bottom-up ends up merging a sorted run of 8 with a run of 3.

  • Top-down mergesort and bottom-up mergesort perform exactly the same number of comparisons in the base case (when n is a power of two).
  • Top-down has the best worst-case behavior among all possible splitting strategies.

dudect Paper and Random Number

〈Dude, is my code constant time?〉

Timing attacks are a broad class of side-channel attacks that measure the execution time of a cryptographic implementation in order to infer secrets.
People use tools to detect such timing leakage. Traditionally these tools rely on hardware details that CPU manufacturers publish little information about, and that may change with an update.
dudect is a tool that collects timing measurements and checks whether a piece of code runs in constant time. It uses statistical analysis of execution timing, based on leakage detection tests, to determine whether the code runs in constant time.
The steps of their experiment are as follows:

  1. Collect data using cycle counters or timers.
  2. Post-processing is applied, like cropping out the measurements that are interrupted by the OS.
  3. A statistical test is performed to try to disprove the null hypothesis.

Student's t-test / Welch's t-test

The t-test is a statistical test used to determine whether the difference between two groups is statistically significant. It is normally used to test a null hypothesis, and it requires two independent, random samples drawn from normal distributions.
The difference between Student's t-test and Welch's t-test:

  • Student's t-test assumes equal population variances (and, in its simplest form, equal sample sizes)
  • Welch's t-test can be used when the two groups do not have equal variances or equal sample sizes

Welch's t-test

General Form

$$\text{test statistic} = \frac{\text{estimate} - \text{hypothesized value}}{\text{standard error}}$$

$$t = \frac{(\bar{x}_A - \bar{x}_B) - (\mu_A - \mu_B)}{\sqrt{\dfrac{s_A^2}{n_A} + \dfrac{s_B^2}{n_B}}}$$

$\bar{x}$ : sample mean of a group
$\mu_A - \mu_B$ : the difference we hypothesize between the true population means (normally zero)
$s$ : sample standard deviation
$n$ : sample size

Source: Welch's t-test
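A minimal two-pass sketch of the statistic above in C; a tool like dudect would keep running statistics instead of storing every sample, but the formula is the same. The function name and the assumption that both samples hold at least two measurements are mine.

#include <math.h>
#include <stddef.h>

/* Welch's t statistic for samples a[0..na-1] and b[0..nb-1], with a
 * hypothesized mean difference of zero. Requires na >= 2 and nb >= 2. */
static double welch_t(const double *a, size_t na, const double *b, size_t nb)
{
    double mean_a = 0.0, mean_b = 0.0, var_a = 0.0, var_b = 0.0;
    for (size_t i = 0; i < na; i++)
        mean_a += a[i];
    for (size_t i = 0; i < nb; i++)
        mean_b += b[i];
    mean_a /= (double) na;
    mean_b /= (double) nb;
    for (size_t i = 0; i < na; i++)
        var_a += (a[i] - mean_a) * (a[i] - mean_a);
    for (size_t i = 0; i < nb; i++)
        var_b += (b[i] - mean_b) * (b[i] - mean_b);
    var_a /= (double) (na - 1); /* sample variance s_A^2 */
    var_b /= (double) (nb - 1); /* sample variance s_B^2 */
    return (mean_a - mean_b) / sqrt(var_a / na + var_b / nb);
}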

Processing data

Setting the thresholds for cropping measurements in dudect.c:

static void prepare_percentiles(dudect_ctx_t *ctx) {
  qsort(ctx->exec_times, ctx->config->number_measurements, sizeof(int64_t), (int (*)(const void *, const void *))cmp);
  for (size_t i = 0; i < DUDECT_NUMBER_PERCENTILES; i++) {
    ctx->percentiles[i] = percentile(
        ctx->exec_times, 1 - (pow(0.5, 10 * (double)(i + 1) / DUDECT_NUMBER_PERCENTILES)),
        ctx->config->number_measurements);
  }
}

This part of the code uses the function percentile

static int64_t percentile(int64_t *a_sorted, double which, size_t size) {
  size_t array_position = (size_t)((double)size * (double)which);
  assert(array_position < size);
  return a_sorted[array_position];
}

I couldn't understand why 1 - (pow(0.5, 10 * (double)(i + 1) / DUDECT_NUMBER_PERCENTILES)) is passed as the second parameter of percentile.
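For what it is worth, plugging in numbers (assuming DUDECT_NUMBER_PERCENTILES is 100, as in the upstream dudect header) gives 1 − 0.5^0.1 ≈ 0.067 for i = 0, 1 − 0.5^5 ≈ 0.969 for i = 49, and 1 − 0.5^10 ≈ 0.999 for i = 99, so the cut-off percentiles crowd toward the upper tail of the sorted execution times.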

〈Dude, is my code constant time?〉paper summary
dudect combines the idea of timing attacks with the need to evaluate whether code runs in constant time on a specific platform. Existing tools for detecting variable-time code have shortcomings, so the authors use leakage detection techniques and present dudect as a compact tool that is easy to use and easy to maintain. The paper also emphasizes how its methodology differs from prior solutions, in particular that it does not require building a model of the hardware's behavior.

You are ignoring too much key information; for example, how constant time is determined is related to the hardware instructions.

Code improvement

After comparing the original source code on GitHub with the code in lab0-c, I found that the program fails trace-17-complexity because it is missing int64_t *percentiles; percentiles here is used to remove the outlier measurements.

In both headings and body text, there should be a whitespace character between Chinese and English characters (this helps typesetting and text search).

Random number

Fisher-Yates Shuffle

Durstenfeld_shuffle
The Fisher-Yates shuffle is an algorithm that randomizes a given sequence.
The steps of the algorithm (a C sketch follows the list):

  1. Get the number of cells in the sequence, n
  2. Loop with i from n - 1 down to 1
  3. In each iteration, randomly pick an index j from 0 to i and swap the j-th cell with the i-th cell
  4. Repeat until i equals 1
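A minimal C sketch of the steps above; rand() % (i + 1) is used for brevity even though it is slightly biased when (i + 1) does not evenly divide the range of rand().

#include <stdlib.h>

/* Fisher-Yates (Durstenfeld) shuffle of an int array of length n. */
static void shuffle(int *a, size_t n)
{
    if (n < 2)
        return;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t) rand() % (i + 1); /* pick an index in [0, i] */
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}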

Question: How do we know that the shuffled sequence is uniformly distributed? How do we know it is random?
Answer: If the outcomes are uniformly distributed, every outcome occurs with equal frequency and independently, so we can regard the process as random. As for verifying uniform distribution, we can use Pearson's chi-squared test to test the null hypothesis.

Pearson's chi-squared test

Formula for Chi-Square

$$\chi_c^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}$$

$\chi_c^2$ : chi-squared statistic
$O_i$ : observed value
$E_i$ : expected value
$c$ : degrees of freedom

The chi-squared statistic measures how well a model matches actually observed data. It compares the size of the discrepancies between the expected results and the actual results. In this case, it compares the number of times each sequence appears with the count expected under a uniform distribution.
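As a sketch, the statistic can be computed directly from the observed count of each possible outcome; observed[] and expected are illustrative names, and in the uniform case expected is simply the total number of samples divided by the number of outcomes.

#include <stddef.h>

/* Pearson's chi-squared statistic for observed counts against a single
 * common expected count (the uniform-distribution case). */
static double chi_squared(const long *observed, size_t n, double expected)
{
    double chi2 = 0.0;
    for (size_t i = 0; i < n; i++) {
        double diff = (double) observed[i] - expected;
        chi2 += diff * diff / expected;
    }
    return chi2;
}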

Null Hypothesis

The null hypothesis is used to check whether there is any relationship between the two sets of data being analyzed. If the null hypothesis holds, it means the observed data are affected by chance alone.

Degrees of Freedom

Degrees of freedom are the number of independent values that are free to vary: they tell you how many items can be chosen at random before constraints must be put in place. In a data set, the first values can be chosen freely, but when the data must meet a specific sum or mean, the remaining values are constrained to satisfy that requirement.

When it comes to deciding whether the data set accepts or rejects the null hypothesis, we set a significance level (⍺): the probability of rejecting the null hypothesis when it is actually true. Commonly, ⍺ is set to 0.05.

We can look up the critical chi-squared value for the given degrees of freedom and significance level. Comparing $\chi_c^2$ against that critical value tells us whether the null hypothesis is accepted or rejected.

Randomness

Understand Entropy

Entropy is the measurement of uncertainty or disorder of a system.
Shannon Entropy measures the uncertainty of a probability distribution.
Shannon's entropy formula:

$$H(\alpha) := \sum_{i=1}^{n} p_i \log \frac{1}{p_i}$$
H(⍺) can be understood as "the amount of information" in a random message ⍺.
We compare the ideal entropy with the entropy computed from our data, then calculate its chi-squared value and check whether it is consistent with the null hypothesis. When the test fails to reject the null hypothesis, it does not mean "H0 is true"; it only means "there is no evidence that H0 is false". (Here H0 means the distribution is uniform, and H1 means it is not.)
Resource: Shannon's Entropy, Understanding random generators
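A plain floating-point sketch of the formula above, computed from occurrence counts; it only illustrates the definition and is unrelated to the fixed-point code in lab0-c.

#include <math.h>
#include <stddef.h>

/* Shannon entropy (in bits) of the distribution given by counts[0..n-1]. */
static double shannon_entropy(const unsigned long *counts, size_t n)
{
    unsigned long total = 0;
    for (size_t i = 0; i < n; i++)
        total += counts[i];
    if (!total)
        return 0.0;
    double h = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (!counts[i])
            continue; /* a zero-probability symbol contributes nothing */
        double p = (double) counts[i] / (double) total;
        h += p * log2(1.0 / p);
    }
    return h;
}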

ttt

Monte Carlo tree search (MCTS)

Monte Carlo tree search is a heuristic search algorithm used in AI for playing games. Heuristic means that it can proceed toward a solution by trial and error, or by rules that are only loosely defined. MCTS can be stopped at any step, and it can return the most promising move found so far from a partially constructed game tree. During the game, MCTS keeps track of the win count and the total simulation count of each step, then applies the Upper Confidence Bound (UCB) formula to each node. UCB is the function used to decide which step should be picked next. At first, the UCB value of an unvisited node is infinite, but as simulations progress its value drops, leading the search to pick other possible options. This way, it balances exploration and exploitation.
The formula of UCB is as below

$$\frac{w_i}{n_i} + c\sqrt{\frac{\ln t}{n_i}}$$

$w_i$ : number of wins in simulations through the i-th move at that node
$n_i$ : number of simulations through the i-th move at that node
$c$ : exploration parameter, normally $\sqrt{2}$
$t$ : total number of simulations after the i-th move, run by the parent node
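The formula can be written as a small helper like the one below; treating an unvisited node as +∞ matches the "infinite at first" behavior described above, and c = √2 is the common choice. The function name is mine, not the lab0-c agents code.

#include <math.h>

/* Upper Confidence Bound of a node: win rate plus the exploration term. */
static double ucb(double wins, double visits, double total_simulations)
{
    if (visits == 0)
        return INFINITY;  /* unvisited nodes get explored first */
    double c = sqrt(2.0); /* exploration parameter */
    return wins / visits + c * sqrt(log(total_simulations) / visits);
}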

In MCTS there are four basic steps:

  1. selection : starting from the root, repeatedly pick the child with the largest UCB value until a leaf node is reached
  2. expansion : add extra child nodes to the tree
  3. simulation : run a random playout to find out who wins
  4. backpropagation : take the value from the simulation and update the nodes above it

In the simulation step, random moves are made, which means it needs a random number generator that is sufficiently random; otherwise the search would suffer from uneven exploration/exploitation or from paths that are never discovered.

Here I will provide an example

  1. Selection: since it is the start of the game, it selects the root node, which has not made any move yet.
    (screenshot)
  2. Expansion: it expands a node.
    (screenshot)
  3. Simulation: MCTS simulates until there is a winner.
    (screenshot)
  4. Backpropagation: it updates the values of its parent nodes
  • updates the parent node
    (screenshot)
  • updates the root node
    (screenshot)
  5. Repeat until the designated number of simulation iterations is completed.

The pictures above are screenshots from this demo page; for more iterations, please visit the page.

Zobrist Hashing

Zobrist hashing is a hash function that represents a board game position as a hash value. It prevents the same board position from being analyzed more than once. For example, while MCTS is simulating, it can check in a transposition table whether a position has already been evaluated, instead of re-analyzing the board position every time.

How it works:
Zobrist hashing first generates keys, then uses the keys to hash. For every available board cell and every possible game piece, a random number is generated as the key. Note that it is possible for two different positions to get the same hash value, but this is very unlikely, so the advantages outweigh the disadvantages. For example, in tic-tac-toe there are two players and 9 positions, so there are 2 × 9 = 18 keys in total. The hash is calculated with the XOR operation: the previously generated key of the piece at a given position is combined into the hash through XOR. To undo the previous move, simply XOR the same key again.
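A sketch for tic-tac-toe following the description above: 2 players × 9 cells gives 18 random keys, and the board hash is the XOR of the keys of all occupied cells. rand() is used only for illustration; the actual key generation in lab0-c may differ.

#include <stdint.h>
#include <stdlib.h>

#define N_CELLS 9
#define N_PLAYERS 2

static uint64_t zobrist_key[N_CELLS][N_PLAYERS];

/* Generate one random 64-bit key per (cell, player) pair. */
static void zobrist_init(void)
{
    for (int cell = 0; cell < N_CELLS; cell++)
        for (int player = 0; player < N_PLAYERS; player++)
            zobrist_key[cell][player] =
                ((uint64_t) rand() << 32) ^ (uint64_t) rand();
}

/* Applying a move and undoing it are the same operation: XOR the key. */
static uint64_t zobrist_update(uint64_t hash, int cell, int player)
{
    return hash ^ zobrist_key[cell][player];
}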

Youtube: Transposition Tables and Zobrist Keys

Implementing a timer that refreshes in real time

Ideas

Use signal/setjmp/longjmp to keep calling the function while ttt is still in play.

  • signal() with SIGALRM can be used to create preemptive scheduling.
  • setjmp/longjmp doesn't seem ideal in this situation.

I wrote a program using signal() that prints out the time every second:

#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

time_t rawtime;
struct tm *timeinfo;

/* SIGALRM handler: print the current time and re-arm the alarm. */
void display_message(int s)
{
    time(&rawtime);
    timeinfo = localtime(&rawtime);
    printf("Current local time and date: %s", asctime(timeinfo));
    signal(SIGALRM, display_message); /* re-install the handler */
    alarm(1);                         /* fire again in one second */
}

int main(void)
{
    signal(SIGALRM, display_message);
    alarm(1);
    while (1)
        pause(); /* sleep until a signal arrives instead of busy-waiting */
    return 0;
}

Get the current time
Refreshing program real time (Last answer)
SIGALRM to print time
Signal Geeks
Jserv concurrency
preempt_sched GitHub

Problems I encountered

When I added agents/mcts.c to lab0-c, I only added agents/mcts.o under OBJS in the Makefile. Without @mkdir -p .$(AGENTS_DIR) under the %.o: %.c rule, the hidden directory that holds the dependency files (.agents/) is never created, leading to this error:

cheezad@cheezad-ubuntu:~/linux2024/lab0-c$ make
  CC	agents/mcts.o
agents/mcts.c:150:1: fatal error: opening dependency file .agents/mcts.o.d: No such file or directory
  150 | }
      | ^
compilation terminated.
make: *** [Makefile:58: agents/mcts.o] Error 1

For fixed-point numbers, how many bits should be fractional bits and how many should be integer bits?
I am currently working on ttt, changing shannon_entropy.c and log2_lshift16.h to fixed point, but I am not sure whether the split between fraction bits and integer bits should be self-defined or is predefined.
<Solution>
Check how precise shannon_entropy.c needs to be and decide the number of bits accordingly.
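As a sketch of the trade-off, a Q16.16 layout (16 integer bits, 16 fractional bits) matches the << 16 scaling that the name log2_lshift16.h suggests; the type and helper names below are made up for illustration and are not the lab0-c API.

#include <stdint.h>

/* Q16.16 fixed point: value = raw / 2^16. More fractional bits give more
 * precision; more integer bits give a larger representable range. */
typedef int32_t fix16_t;
#define FIX16_FRAC_BITS 16
#define FIX16_ONE (1 << FIX16_FRAC_BITS)

static inline fix16_t fix16_from_int(int x)
{
    return (fix16_t) (x << FIX16_FRAC_BITS);
}

/* Multiply in a wider type, then shift back to keep 16 fractional bits. */
static inline fix16_t fix16_mul(fix16_t a, fix16_t b)
{
    return (fix16_t) (((int64_t) a * b) >> FIX16_FRAC_BITS);
}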


After two weeks of trying to understand shannon_entropy.c and log2_lshift16.h, I still don't really understand how the mathematical part works.