contributed by < Jerejere0808 >
Mind the guidelines: the hyperlink registered at https://hackmd.io/@sysprog/linux2023-homework1 must be a "permanent URL" (see 用固定網址發布筆記), i.e. of the form https://hackmd.io/@itsme/XXXX, published publicly, without the trailing /edit.
$ gcc --version
gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11370H @ 3.30GHz
CPU family: 6
Model: 140
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
BogoMIPS: 6604.80
struct list_head *q_new()
{
struct list_head *head =
(struct list_head *) malloc(sizeof(struct list_head));
    if (!head)
        return NULL;
INIT_LIST_HEAD(head);
return head;
}
There is no need to paste the complete code; list only the key parts and discuss them thoroughly.
Here q_release_element() can be used to free each node, but head itself has to be freed separately.
void q_free(struct list_head *l)
{
element_t *entry, *safe;
if (!l || list_empty(l)) {
free(l);
return;
}
list_for_each_entry_safe (entry, safe, l, list) {
q_release_element(entry);
}
free(l);
return;
}
Mind the writing guideline: separate Chinese and English text with a single space.
Because this is a circular doubly-linked list, both the head and the tail can be reached in constant time, so the following four functions should all run in constant time. When qtest runs trace-17-complexity.cmd, they pass most of the time, but occasionally a "Probably not constant time" error appears; the cause is not yet clear.
The most recent runs have all passed, though.
bool q_insert_head(struct list_head *head, char *s)
{
if (!head)
return false;
element_t *new_ele = (element_t *) malloc(sizeof(element_t));
if (!new_ele)
return false;
new_ele->value = (char *) malloc((strlen(s) + 1) * sizeof(char));
if (!new_ele->value) {
free(new_ele);
return false;
}
strncpy(new_ele->value, s, (strlen(s) + 1));
list_add(&new_ele->list, head);
return true;
}
bool q_insert_tail(struct list_head *head, char *s)
{
if (!head)
return false;
element_t *new_ele = (element_t *) malloc(sizeof(element_t));
if (!new_ele)
return false;
new_ele->value = (char *) malloc((strlen(s) + 1) * sizeof(char));
if (!new_ele->value) {
free(new_ele);
return false;
}
strncpy(new_ele->value, s, (strlen(s) + 1));
list_add_tail(&new_ele->list, head);
return true;
}
element_t *q_remove_head(struct list_head *head, char *sp, size_t bufsize)
{
if (!head || list_empty(head))
return NULL;
element_t *remove_ele = list_first_entry(head, element_t, list);
list_del(head->next);
if (sp) {
strncpy(sp, remove_ele->value, bufsize - 1);
sp[bufsize - 1] = '\0';
}
return remove_ele;
}
element_t *q_remove_tail(struct list_head *head, char *sp, size_t bufsize)
{
if (!head || list_empty(head))
return NULL;
element_t *remove_ele = list_last_entry(head, element_t, list);
list_del(head->prev);
if (sp) {
strncpy(sp, remove_ele->value, bufsize - 1);
sp[bufsize - 1] = '\0';
}
    return remove_ele;
}
int q_size(struct list_head *head)
{
if (!head)
return 0;
int len = 0;
struct list_head *li;
list_for_each (li, head)
len++;
return len;
}
Note the 資訊科技詞彙翻譯 (information technology terminology translation) guidelines.
The instructor mentioned in class that, for a doubly-linked list, the fast/slow-pointer approach can be replaced by two pointers walking inward from the head and the tail until they meet. The complexity class is the same, but the number of nodes traversed drops from 1.5x the list length to 1x the list length.
Referring to laneser's homework, two pointers iterate inward from the head and the tail of the queue, and the middle node is identified by the point where they meet (or are about to meet).
bool q_delete_mid(struct list_head *head)
{
if (!head || list_empty(head))
return false;
struct list_head *l = head->next;
struct list_head *r = head->prev;
    while (1) {
        if (l == r) {
            list_del(l); // detach from the list before freeing
            q_release_element(list_entry(l, element_t, list));
            break;
        } else if (l->next == r) { // even length: delete the latter of the two middle nodes
            list_del(r); // detach from the list before freeing
            q_release_element(list_entry(r, element_t, list));
            break;
        }
l = l->next;
r = r->prev;
}
return true;
}
Implement merge sort iteratively, following Merge Sort 與它的變化, splitting the work into two independent parts: split and merge. The split phase breaks the list into single nodes; instead of using a stack, list_for_each_safe() and INIT_LIST_HEAD() traverse the list and detach each node into an array, which is more intuitive and easier to follow. The merge phase merges adjacent segments, because when the input is already mostly sorted, merging the head segment with the tail segment would require more comparisons.
I also read lab0 Linux 核心的鏈結串列排序, which states that "top down does partition first and then merge, but partition itself is cache-unfriendly: cache contents keep moving in and out (being constantly updated), leading to cache thrashing." I don't fully understand this: what is the relation between partitioning and data moving in and out of the cache? Is it because partitioning touches all the data, but afterwards only one half is processed first, so the other half goes unused and has to be evicted? I am still looking into this.
Don't spend effort on "searching"; review what you already know and use the tools mentioned in the course to analyze it.
jserv
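One way to analyze this directly is with perf, one of the tools covered in the course; for example, counting cache references and misses while running a sort-heavy trace, once with a top-down implementation and once with a bottom-up one, would show whether the partition phase really increases cache misses:
$ perf stat -e cache-references,cache-misses ./qtest -f traces/trace-14-perf.cmd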
#define list_size 1000000
struct list_head *mergeTwoLists(struct list_head *list1,
struct list_head *list2)
{
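    /* list1 and list2 are each circular doubly-linked lists without a
     * dedicated head node; the first (smallest) node of the merged circular
     * list is returned. */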
struct list_head *head;
char *value1 = list_entry(list1, element_t, list)->value;
char *value2 = list_entry(list2, element_t, list)->value;
if (strcmp(value1, value2) <= 0) {
head = list1;
list1 = list1->next;
} else {
head = list2;
list2 = list2->next;
}
list_del_init(head);
while (list1->next != head && list2->next != head) {
value1 = list_entry(list1, element_t, list)->value;
value2 = list_entry(list2, element_t, list)->value;
if (strcmp(value1, value2) <= 0) {
list1 = list1->next;
list_move_tail(list1->prev, head);
} else {
list2 = list2->next;
list_move_tail(list2->prev, head);
}
}
if (list2->next != head) {
struct list_head *head_last = head->prev;
struct list_head *list_last = list2->prev;
head->prev = list_last;
list_last->next = head;
list2->prev = head_last;
head_last->next = list2;
} else {
struct list_head *head_last = head->prev;
struct list_head *list_last = list1->prev;
head->prev = list_last;
list_last->next = head;
list1->prev = head_last;
head_last->next = list1;
}
return head;
}
mergeTwoLists first takes the smaller of the two first nodes as the first node of the merged list, then repeatedly compares the heads of the two lists, appends the smaller one to the merged list, and advances the pointer of the chosen list (e.g. list1 = list1->next). I could not figure out how to tell when one list had been completely merged, so I referred to kdnvt's approach: when only the last node remains, advancing the pointer leaves it on the same node (e.g. list1 == list1->next), and once that node has been merged its next pointer points to the merged list's head (because it is the most recently appended node), so (list1->next != head) tells whether that list has been fully consumed. A very clever trick.
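A small trace of that termination check, for the case where only node A remains in list1:
/* Before the iteration:  list1 == A and A->next == A (a single-node circular list).
 * The loop body picks A: list1 = list1->next;               // still A
 *                        list_move_tail(list1->prev, head); // A is appended in front of head
 * Now A->next == head, so (list1->next != head) becomes false, the loop exits,
 * and everything still left in list2 is spliced in afterwards. */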
void q_sort(struct list_head *head)
{
if (!head || list_empty(head))
return;
if (list_is_singular(head))
return;
struct list_head *lists[list_size];
struct list_head *cur, *safe;
int count = 0, n = q_size(head);
list_for_each_safe (cur, safe, head) {
INIT_LIST_HEAD(lists[count++] = cur);
}
for (int interval = 1; interval < n; interval *= 2) {
for (int i = 0; i + interval < n; i += interval * 2) {
lists[i] = mergeTwoLists(lists[i], lists[i + interval]);
}
}
list_add_tail(head, lists[0]);
return;
}
Because this q_sort uses a fixed-size array to hold the nodes, I expected it might run out of space. Sure enough, it failed trace-14-perf.cmd with "Segmentation fault occurred. You dereferenced a NULL or invalid pointer / Aborted". I then lazily tried changing list_size to 3000000, which made things worse: every trace involving sort failed. So I ran valgrind to see what was going on; the result is below:
It looked like the array was too large and exceeded the maximum stack size, so I used massif to observe the memory allocation.
$ valgrind --tool=massif --stacks=yes ./qtest
cmd> new
cmd> it RAND 10000
cmd> sort
cmd> quit
ms_print massif.out.xxxx
massif-visualizer massif.out.xxxx
The figure below shows the case of #define list_size 1000000 (which passes most traces, but not the largest inputs). Stack memory usage is between 8 MB and 9 MB, and most of it goes to struct list_head *lists[list_size]: a struct list_head pointer is 8 bytes, and 8 * 1000000 = 8 MB.
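The 8 MB figure matches the default stack size limit on Linux, which can be checked with ulimit (the value is in KB; on a typical Ubuntu setup it prints 8192, i.e. 8 MB):
$ ulimit -s
8192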
The figure below shows the case of #define list_size 3000000 (which fails every trace). The program is interrupted before the array is actually used, so the stack usage looks small at first glance; in fact execution aborts during the early part of the run, before any sorting happens, compared with the case above.
The conclusion here is that the method must not need extra memory that grows as O(n) with the amount of data, or the stack will not be large enough. Following the indirect-pointer technique for modifying next pointers in 你所不知道的 C 語言: linked list 和非連續記憶體, and referring to kdnvt's approach, an indirect pointer records the first list of each merge, and the second list is found through the pointers themselves, so no extra memory is needed to record every node's position.
void q_sort(struct list_head *head)
{
if (!head || list_empty(head))
return;
if (list_is_singular(head))
return;
struct list_head *node, *safe;
list_for_each_safe (node, safe, head) {
node->prev = node;
}
struct list_head *first = head->next;
INIT_LIST_HEAD(head);
while (first->prev->next != head) {
struct list_head **cur;
struct list_head *next_l, *nnext_l;
cur = &first;
next_l = (*cur)->prev->next;
nnext_l = next_l->prev->next;
while (*cur != head && next_l != head) {
(*cur)->prev->next = (*cur);
next_l->prev->next = next_l;
*cur = mergeTwoLists(*cur, next_l);
cur = &((*cur)->prev->next);
*cur = nnext_l;
next_l = (*cur)->prev->next;
nnext_l = next_l->prev->next;
}
}
list_add_tail(head, first);
}
The new sort no longer needs extra memory proportional to the amount of data.
Finally got 100 points!!
However, when I checked each trace individually with valgrind -q --leak-check=full, traces/trace-14-perf.cmd failed, and all the errors were "blocks are still reachable", meaning that at program exit some memory was not freed but was still reachable through pointers. Checking which function allocated them showed they all came from test_malloc, i.e. the element_t structures and their value strings were not released properly because the program was interrupted. The time limit is set via time_limit in exception_setup() in harness.c; increasing it makes the test pass.
root@LAPTOP-PFT5O577:~/linux2023/lab0-c# valgrind -q --leak-check=full ./qtest -f traces/trace-14-perf.cmd
#Test performance of insert_tail, reverse, and sort
l = []
ERROR: Time limit exceeded. Either you are in an infinite loop, or your code is too inefficient
ERROR: Insertion of dolphin failed (1 failures total)
l = [dolphin … ]
ERROR: Time limit exceeded. Either you are in an infinite loop, or your code is too inefficient
ERROR: Insertion of gerbil failed (2 failures total)
l = [dolphin … ]
l = [gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil … ]
ERROR: Time limit exceeded. Either you are in an infinite loop, or your code is too inefficient
l = []
Freeing queue
ERROR: Freed queue, but 1797802 blocks are still allocated
==9272== 20,520,623 bytes in 436,609 blocks are still reachable in loss record 1 of 4
==9272==    at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==9272==    by 0x10F74B: test_malloc (harness.c:133)
==9272==    by 0x10FCBB: q_insert_tail (queue.c:77)
==9272==    by 0x10C8F9: do_it (qtest.c:322)
==9272==    by 0x10E435: interpret_cmda (console.c:181)
==9272==    by 0x10E9EA: interpret_cmd (console.c:201)
==9272==    by 0x10EDEB: cmd_select (console.c:610)
==9272==    by 0x10F6D7: run_console (console.c:705)
==9272==    by 0x10D4C8: main (qtest.c:1353)
==9272==
==9272== 22,189,968 bytes in 462,291 blocks are still reachable in loss record 2 of 4
==9272==    at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==9272==    by 0x10F74B: test_malloc (harness.c:133)
==9272==    by 0x10FC15: q_insert_head (queue.c:58)
==9272==    by 0x10CC04: do_ih (qtest.c:233)
==9272==    by 0x10E435: interpret_cmda (console.c:181)
==9272==    by 0x10E9EA: interpret_cmd (console.c:201)
==9272==    by 0x10EDEB: cmd_select (console.c:610)
==9272==    by 0x10F6D7: run_console (console.c:705)
==9272==    by 0x10D4C8: main (qtest.c:1353)
==9272==
==9272== 27,943,040 bytes in 436,610 blocks are still reachable in loss record 3 of 4
==9272==    at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==9272==    by 0x10F74B: test_malloc (harness.c:133)
==9272==    by 0x10FCA2: q_insert_tail (queue.c:74)
==9272==    by 0x10C8F9: do_it (qtest.c:322)
==9272==    by 0x10E435: interpret_cmda (console.c:181)
==9272==    by 0x10E9EA: interpret_cmd (console.c:201)
==9272==    by 0x10EDEB: cmd_select (console.c:610)
==9272==    by 0x10F6D7: run_console (console.c:705)
==9272==    by 0x10D4C8: main (qtest.c:1353)
==9272==
==9272== 29,586,688 bytes in 462,292 blocks are still reachable in loss record 4 of 4
==9272==    at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==9272==    by 0x10F74B: test_malloc (harness.c:133)
==9272==    by 0x10FBFC: q_insert_head (queue.c:55)
==9272==    by 0x10CC04: do_ih (qtest.c:233)
==9272==    by 0x10E435: interpret_cmda (console.c:181)
==9272==    by 0x10E9EA: interpret_cmd (console.c:201)
==9272==    by 0x10EDEB: cmd_select (console.c:610)
==9272==    by 0x10F6D7: run_console (console.c:705)
==9272==    by 0x10D4C8: main (qtest.c:1353)
==9272==
Freeing queue
Result with time_limit = 1
root@LAPTOP-PFT5O577:~/linux2023/lab0-c# valgrind -q --leak-check=full ./qtest -f traces/trace-14-perf.cmd
#Test performance of insert_tail, reverse, and sort
l = []
l = [dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin … ]
l = [dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin … ]
l = [gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil gerbil … ]
l = [dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin dolphin … ]
Freeing queue
Result with time_limit = 5
However, I have not yet determined why running under valgrind makes it exceed the time limit.
lib/list_sort.c
Originally I wanted to drop the whole file into the project, but it needs headers located in /usr/include/linux; trying to include them produces errors such as cannot open source file "linux/bug.h" (as shown below), because those files are not on the system, so for now I used another approach. I slightly modified list_sort.c and put it directly into qtest.c (I originally wanted to put it in queue.c and expose it to qtest.c through queue.h, but it seemed queue.h was not allowed to be modified).
The public interface queue.h must not be modified in the first place; that is the rule!
jserv
Do not present textual information as images; respect the reading rights of visually impaired readers.
#define likely(x) __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)
struct list_head *merge(struct list_head *a, struct list_head *b)
{
struct list_head *head, **tail = &head;
for (;;) {
char *value1 = list_entry(a, element_t, list)->value;
char *value2 = list_entry(b, element_t, list)->value;
if (strcmp(value1, value2) <= 0) {
*tail = a;
tail = &a->next;
a = a->next;
if (!a) {
*tail = b;
break;
}
} else {
*tail = b;
tail = &b->next;
b = b->next;
if (!b) {
*tail = a;
break;
}
}
}
return head;
}
static void merge_final(struct list_head *head,
struct list_head *a,
struct list_head *b)
{
struct list_head *tail = head;
// int count = 0;
for (;;) {
char *value1 = list_entry(a, element_t, list)->value;
char *value2 = list_entry(b, element_t, list)->value;
if (strcmp(value1, value2) <= 0) {
tail->next = a;
a->prev = tail;
tail = a;
a = a->next;
if (!a)
break;
} else {
tail->next = b;
b->prev = tail;
tail = b;
b = b->next;
if (!b) {
b = a;
break;
}
}
}
tail->next = b;
do {
b->prev = tail;
tail = b;
b = b->next;
} while (b);
/* And the final links to make a circular doubly-linked list */
tail->next = head;
head->prev = tail;
}
void q_kernelSort(struct list_head *head)
{
struct list_head *list = head->next, *pending = NULL;
size_t count = 0;
if (list == head->prev)
return;
head->prev->next = NULL;
do {
size_t bits;
struct list_head **tail = &pending;
/* Find the least-significant clear bit in count */
for (bits = count; bits & 1; bits >>= 1)
tail = &(*tail)->prev;
/* Do the indicated merge */
if (likely(bits)) {
struct list_head *a = *tail, *b = a->prev;
a = merge(b, a);
/* Install the merged result in place of the inputs */
a->prev = b->prev;
*tail = a;
}
/* Move one element from input list to pending */
list->prev = pending;
pending = list;
list = list->next;
pending->next = NULL;
count++;
} while (list);
/* End of input; merge together all the pending lists. */
list = pending;
pending = pending->prev;
for (;;) {
struct list_head *next = pending->prev;
if (!next)
break;
list = merge(pending, list);
pending = next;
}
/* The final merge, rebuilding prev links */
merge_final(head, pending, list);
}
A new command kernelSort is then added via ADD_COMMAND, so that experiments can compare the two different sorts:
bool do_kernelSort(int argc, char *argv[]){}
ADD_COMMAND(kernelSort, "kernelSort queue in ascending order", "");
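A minimal body for do_kernelSort can mirror the existing do_sort in qtest.c. The sketch below is only an approximation: report(), current->q, exception_setup() / exception_cancel() and q_show() are assumed to be the same helpers that do_sort already uses, and should be adapted to the actual code.
static bool do_kernelSort(int argc, char *argv[])
{
    if (argc != 1) {
        report(1, "%s takes no arguments", argv[0]);
        return false;
    }
    if (!current || !current->q) {
        report(3, "Warning: Calling kernelSort on null queue");
        return false;
    }
    if (exception_setup(true))    /* guard against crashes / time limit, as do_sort does */
        q_kernelSort(current->q); /* the sort ported from lib/list_sort.c */
    exception_cancel();
    q_show(3);                    /* print the queue after sorting */
    return true;
}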
With the kernel sort, the task is also completed without allocating extra memory proportional to the amount of data.
Here the time command is used to measure sort and kernelSort on different amounts of data:
it RAND 50000
sort: Delta time = 0.084
kernelSort: Delta time = 0.050
it RAND 100000
sort: Delta time = 0.017
kernelSort: Delta time = 0.015
it RAND 1000000
sort: Delta time = 1.246
kernelSort: Delta time = 0.832
static inline void swap(struct list_head *n1, struct list_head *n2)
{
if (n1 == n2)
return;
struct list_head *n1_prev = n1->prev;
struct list_head *n2_prev = n2->prev;
if (n2->prev != n1)
list_move(n1, n2_prev);
list_move(n2, n1_prev);
}
void q_shuffle(struct list_head *head)
{
if (!head || list_empty(head) || list_is_singular(head))
return;
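    /* Fisher-Yates shuffle: each round swaps a node picked uniformly at random
     * from the first len nodes with the last node of that range, then shrinks
     * the range by one, so the shuffled suffix grows from the tail. */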
struct list_head *last = head->prev;
int len = q_size(head);
while (len) {
int k = rand() % len;
struct list_head *cur = head->next;
while (k--)
cur = cur->next;
swap(cur, last);
last = cur;
last = last->prev;
len--;
}
return;
}
Using the randomness test script provided in lab0 亂數 + 論文閱讀, with queue = [1 2 3 4] and 10000 shuffles, the statistics are as follows:
Expectation: 416
Observation: {'1234': 426, '1243': 401, '1324': 412, '1342': 386, '1423': 459, '1432': 409, '2134': 441, '2143': 435, '2314': 399, '2341': 409, '2413': 447, '2431': 398, '3124': 407, '3142': 374, '3214': 423, '3241': 401, '3412': 405, '3421': 426, '4123': 417, '4132': 424, '4213': 438, '4231': 414, '4312': 416, '4321': 433}
chi square sum: 21.466346153846153
H0 (null hypothesis): every shuffle outcome occurs with the same probability, i.e. the results follow a uniform distribution (which maximizes entropy).
H1 (alternative hypothesis): at least one shuffle outcome occurs with a different probability.
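With 24 possible permutations of [1 2 3 4] and 10000 shuffles, the expected count of each permutation is 10000 / 24 ≈ 416.67, and the statistic is

$$
\chi^2 = \sum_{i=1}^{24} \frac{(O_i - E_i)^2}{E_i} \approx 21.47
$$

With 24 − 1 = 23 degrees of freedom and a significance level of 0.05, the critical value is about 35.17. Since 21.47 < 35.17, H0 cannot be rejected, which is consistent with the shuffle results following a uniform distribution.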
The project already integrates the web server, but it uses a use_linenoise variable to record whether linenoise() should be used, and once do_web() has been executed, linenoise() can no longer be used. Following the instructor's hint and laneser's approach, line_edit() uses select to monitor whether stdin_fd and web_fd have data ready, so keyboard input and requests from the web can be handled at the same time.
To add web_fd to the readset used in line_edit(), it has to be passed down through the following three functions:
char *linenoise(const char *prompt, int web_fd)
static int line_raw(char *buf, size_t buflen, const char *prompt, int web_fd)
static int line_edit(int stdin_fd,
int stdout_fd,
char *buf,
size_t buflen,
const char *prompt,
int web_fd)
As long as web_fd != 0, an HTTP server has been created, so select is used to monitor it; the select timeout parameter is set to NULL so that select blocks until a message arrives. When a message arrives on web_fd, its content is processed and copied into buf so that linenoise() can return it, and the connected client socket is closed immediately; input on stdin_fd is handled the same way as before, reading one character at a time.
if (web_fd) {
fd_set set;
FD_ZERO(&set);
FD_SET(web_fd, &set);
FD_SET(stdin_fd, &set);
int rv = select(web_fd + 1, &set, NULL, NULL, NULL);
switch (rv) {
case -1:
perror("select"); /* an error occurred */
continue;
case 0:
printf("timeout occurred\n"); /* a timeout occurred */
continue;
default:
if (FD_ISSET(web_fd, &set)) {
FD_CLR(web_fd, &set);
struct sockaddr_in clientaddr;
socklen_t clientlen = sizeof(clientaddr);
int web_connfd = accept(
web_fd, (struct sockaddr *) &clientaddr, &clientlen);
// buf = strweb_recv(web_connfd, &clientaddr);
strncpy(buf, web_recv(web_connfd, &clientaddr), buflen);
char *buffer =
"HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\n";
web_send(web_connfd, buffer);
close(web_connfd);
return strlen(buf);
} else if (FD_ISSET(stdin_fd, &set)) {
nread = read(l.ifd, &c, 1);
if (nread <= 0)
return l.len;
}
break;
}
} else {
nread = read(l.ifd, &c, 1);
if (nread <= 0)
return l.len;
}
With this, everything except reading directly from a file can be handled by linenoise(), which returns the cmdline to be parsed so that the corresponding queue operation can be executed.
bool run_console(char *infile_name)
{
if (!push_file(infile_name)) {
report(1, "ERROR: Could not open source file '%s'", infile_name);
return false;
}
if (!has_infile) {
char *cmdline;
while ((cmdline = linenoise(prompt, web_fd))) {
interpret_cmd(cmdline);
line_history_add(cmdline); /* Add to the history. */
line_history_save(HISTORY_FILE); /* Save the history on disk. */
line_free(cmdline);
while (buf_stack && buf_stack->fd != STDIN_FILENO)
cmd_select(0, NULL, NULL, NULL, NULL);
has_infile = false;
}
} else {
while (!cmd_done())
cmd_select(0, NULL, NULL, NULL, NULL);
}
return err_cnt == 0;
}
Mind the writing guideline: separate Chinese and English text with a single space.
dudect feeds two different classes of input, fixed data and random data, observes the distribution of execution times for each class, and compares them.
Measuring execution time: fixed and random inputs are measured many times. The paper notes that, because the execution environment may change over time, the class of each individual measurement is chosen at random rather than running all of one class and then the other.
Post-processing of the measurements:
Operating-system or hardware interrupts during execution can affect the program under test, so execution times above a certain threshold are excluded from the statistical analysis.
Statistical analysis:
Null hypothesis (H0): the execution-time distributions of the two input classes are identical. If this null hypothesis can be rejected, the program does not run in constant time. Welch's t-test uses the sample means to judge whether two distributions are sufficiently close; it applies when we have two independent random samples drawn from two populations that are both normally distributed with unknown and unequal variances.
Redo the mathematical formula in LaTeX.
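The statistic can be written in LaTeX as

$$
t' = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\dfrac{s_1^2}{N_1} + \dfrac{s_2^2}{N_2}}}
$$

where $\bar{X}_i$, $s_i^2$ and $N_i$ are the sample mean, sample variance and sample size of the fixed-input and random-input timing measurements, respectively.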
The data here are the execution times. As the formula shows, when the mean execution time of the random data differs greatly from that of the fixed data, t' grows accordingly. Taking the degrees of freedom and the significance level into account gives a critical value; if t' exceeds it, H0 is rejected, which means the execution-time distributions of the random and fixed data are not the same, i.e. the execution time is probably not constant.
Mind the writing guideline: separate Chinese and English text with a single space.