Assignee: popo8712
Project walkthrough video
contributed by < cheezad >
ChenFuhuangKye
git commit information: the original code did not check whether `malloc(sizeof(struct list_head))` succeeded; after the instructor's reminder, a check was added.
This is my first time working with this type of code, so I did some tidying up.
> There is no need to stress "the first time": in information technology, every day brings a "first time"; if you had never run into one, you would probably not be in this field.
- `list_head`: doubly linked list
  - `prev`: points to the previous node
  - `next`: points to the next node
- If the list is empty, `head` points to itself.
> Avoid unnecessary list indentation (i.e. `*` and `-`), and write in clear, precise, and fluent prose.
- `element_t`: holds the value at a position
  - `value`: the value of this node
  - `list`: the doubly linked list node
- `queue_contex_t`
  - `q`: points to the head of the queue
  - `chain`: links the heads of the different queues together
  - `size`: length of the queue
  - `id`: ID of the queue
Using this already-written code makes my own code more concise.
Example: `list_splice` attaches `list` in front of `head`'s nodes: after `head`, `list`'s nodes come first, followed by `head`'s own nodes.
> Improve your wording.
After running the `web` command inside `qtest`, enter http://<myip>:9999/new in a browser. Before changing any web-related code, the result is as follows.
Before `web` is started, pressing tab auto-completes the rest of a command, but once `web` is running, this feature disappears.
lib/list_sort.c
Study

In a queue Q of lists:
1. Pick out the first two lists and merge them together.
2. Move the merged list to the end of the queue.
3. Repeat until there is only one list left in the queue.
Prove that queue-mergesort is an optimal mergesort by checking that the number of comparisons performed in the worst case is no more than that performed by any other mergesort.
Define the weight of a binary tree T to be w(T), the sum of the weights of all internal nodes of the tree.
Merge tree T for queue-mergesort run on 11 elements:
After adding up the weights of merge tree T's internal nodes, w(T) = 39.
Queue-mergesort is an optimal mergesort, meaning w(T) <= w(T') for all merge trees T' with n leaves.
Queue-mergesort and top-down mergesort are both optimal mergesorts, but bottom-up mergesort isn't. However, with a simple modification, we can make bottom-up mergesort optimal: at the end of a pass, if there is an odd number of lists, we do not leave the last list by itself but merge it with the new list that was just created. By doing so, the merge tree becomes optimal.
Bottom-up mergesort, like the top-down variant, follows the divide-and-conquer paradigm. However, bottom-up uses a "power of two" splitting strategy instead of the "half and half" strategy of the top-down variant, making it possible to implement it without recursion.
Timing attacks are a broad class of side-channel attacks that measure the execution time of a cryptographic implementation in order to infer secrets.
People use tools to collect timing information. Traditionally these tools rely on hardware details that CPU manufacturers publish little information about, and that may change with an update.
dudect is a tool that collects timing information and checks whether a piece of code runs in constant time. It uses statistical analysis of execution timing, based on leakage detection tests, to determine whether the code runs in constant time.
The steps of their experiment are as follows:
A t-test is a statistical test used to determine whether the difference between two groups is statistically significant. It is normally used when testing a null hypothesis, and it requires two independent, random samples with normal distributions.
The difference between Student's t-test and Welch's t-test:
Welch's t-test

General form:

$$t = \frac{\bar{x}_1 - \bar{x}_2 - \Delta_0}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}$$

- $\bar{x}$: mean of a group
- $\Delta_0$: the difference we hypothesize between the true population means (normally zero)
- $s$: standard deviation
- $n$: sample size

Source: Welch's t-test
This part of the code uses the function `percentile`. I couldn't understand why `1 - (pow(0.5, 10 * (double)(i + 1) / DUDECT_NUMBER_PERCENTILES))` is the second parameter of `percentile`.
#### Notes on "Dude, is my code constant time?"
dudect combines the idea of timing attacks with the need to evaluate whether code runs in constant time on a given platform. Existing tools for detecting variable-time code have drawbacks, so the authors use leakage detection techniques to present dudect as a compact tool that is easy to use and easy to maintain. The paper also highlights how the approach differs from prior solutions, in particular that no model of the hardware's behavior needs to be built.
> You have skipped too much key information; for example, deciding constant-time behavior is tied to hardware instructions.
After comparing the original code on GitHub with the code in lab0-c, I found that the program fails trace-17-complexity because it is missing `int64_t *percentiles`; the role of `percentiles` here is to discard outlier values.
> In both headings and body text, leave a space between Chinese and English characters (it helps typesetting and text search).
Fisher-Yates Shuffle is an algorithm that randomizes a given sequence.
Steps of this algorithm:
1. Run `i` from `n - 1` down to `1` in a for loop.
2. In each iteration, pick a random index between `0` and `i`, and swap that cell with the `i`-th cell.
3. Stop after the iteration where `i` equals `1`.
Question: How do we know that this sequence is uniformly distributed? How do we know it is random?
Answer: If the outcomes are uniformly distributed, every outcome occurs with equal frequency and independently, so we know it is random. As for verifying uniform distribution, we can use Pearson's chi-squared test to test the null hypothesis.
Formula for chi-square:

$$\chi_c^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}$$

- $\chi_c^2$: chi-square statistic
- $O_i$: observed value
- $E_i$: expected value
- $c$: degrees of freedom
The chi-square statistic measures how a model compares to actual observed data: it compares the size of the discrepancies between the expected results and the actual results. In this case, it compares the number of times each sequence appears against the uniformly distributed expectation.
The null hypothesis is used to check whether there is any relationship between the two sets of data being analyzed. If the null hypothesis holds, it means the differences in the data are due to chance alone.
Degrees of freedom are the number of values that can vary independently: in a data set, some values can be chosen at random, but once the data must meet a specific sum or mean, the remaining values are constrained to meet that requirement.
When verifying whether the data set accepts or rejects the null hypothesis, we set a significance level (⍺), the probability of rejecting the null hypothesis when it is actually true; commonly ⍺ is set to 0.05.
We can look up the chi-squared critical value matching the degrees of freedom and ⍺. With that value, we can evaluate whether $\chi_c^2$ accepts or rejects the null hypothesis.
Entropy is the measurement of the uncertainty or disorder of a system.
Shannon entropy measures the uncertainty of a probability distribution.
Shannon's entropy formula:

$$H(\alpha) = -\sum_i p_i \log_2 p_i$$

$H(\alpha)$ can be understood as "the amount of information" in a random message $\alpha$.
We use the ideal entropy to compare against the entropy from our data, then calculate its chi-square value and check whether it fits the null hypothesis. Note that when the test fails to reject the null hypothesis, it doesn't mean "H0 is true"; it only means "there is no evidence showing that H0 is false". (H0 here means the data is uniformly distributed, H1 means it isn't.)
Resource: Shannon's Entropy, Understanding random generators
Monte Carlo tree search (MCTS) is a heuristic search algorithm used in AI to play games. Heuristic means that it can proceed toward a solution by trial and error, or by rules that are only loosely defined. MCTS can be stopped at any step, and it can return the best move found so far based on a partially constructed game tree. During the game, MCTS keeps track of the winning count and the total game count of each step, then applies the Upper Confidence Bound (UCB) algorithm to each node. UCB is a function used to determine which step should be picked next. At first, the value of the UCB is infinite, but as time progresses the value drops, leading the algorithm to pick other possible options. This way, it balances exploration against exploitation.
The formula of UCB is as below:

$$UCB_i = \frac{w_i}{n_i} + c\sqrt{\frac{\ln t}{n_i}}$$

- $w_i$: number of wins simulated over the i-th move at that node
- $n_i$: number of simulations after the i-th move at that node
- $c$: exploration parameter, normally $\sqrt{2}$
- $t$: total number of simulations done after the i-th move
In MCTS there are four basic steps: selection, expansion, simulation, and backpropagation.
In simulation, random moves are made, which means it needs a random number generator that is random enough; otherwise it would lead to uneven exploration/exploitation or undiscovered paths.
Here I will provide an example
The pictures above are screenshots from this demo page, for more iterations, please visit this page.
Zobrist hashing is a hash function that represents a board game position as a hash value. It prevents the same board position from being analyzed more than once. For example, when MCTS is simulating, it can check whether a position has already been played using a transposition table, without re-analyzing the board position each time.
How it works:
Zobrist hashing first generates keys, then uses the keys to hash. For every available board cell and every possible game piece, a random number is generated as the key. Note that it is possible for two positions to end up with the same value, but it is very unlikely, so the advantages outweigh the disadvantages. For example, in tic-tac-toe there are two players and 9 positions, meaning there are 2 × 9 = 18 keys in total. The hash is calculated with the `xor` operation: the pre-generated number of the piece at a certain position is combined into the hash through `xor`. To undo the previous move, simply `xor` the value again.
Youtube: Transposition Tables and Zobrist Keys
To use signal/setjmp/longjmp to keep calling the function while ttt is still in play.
Use `SIGALRM` to create preemptive scheduling. I wrote a program using signal that prints out the time every second:
- Get the current time
- Refresh the program in real time (last answer)
- Use `SIGALRM` to print the time
Signal Geeks
Jserv concurrency
preempt_sched GitHub
When I added agents/mcts.c to lab0-c, I only added agents/mcts.o under `OBJS` in the Makefile. Not specifying `@mkdir -p .$(AGENTS_DIR)` under the `%.o: %.c` rule prevents the Makefile from finding agents/mcts.c, leading to this error:
In fixed-point numbers, how many bits should be fractional bits and how many should be integer bits?
I am currently working on ttt, changing shannon_entropy.c and log2_lshift16.h to fixed point, but I am not sure whether the numbers of fraction and integer bits should be self-defined or predefined.
<Solution>
Check how precise shannon_entropy.c needs to be and decide the bits accordingly.
After two weeks of trying to understand shannon_entropy.c and log2_lshift16.h, I still don't really understand how the mathematical part works.