Decision Tree

What the model looks like


How it works

Split the data on its features into different partitions so that each split yields the maximum information gain (IG). Every time a branch is added, the formula below is applied again; requiring a minimum information gain per split can be used to avoid overfitting.
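As a concrete sketch of this minimum-gain rule (assuming scikit-learn; the dataset and threshold value are illustrative, not part of the note above), scikit-learn's `min_impurity_decrease` parameter only allows a split when it reduces the weighted impurity by at least the given amount:

```python
# A minimal sketch, assuming scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

tree = DecisionTreeClassifier(
    criterion="entropy",         # use entropy as the information measure
    min_impurity_decrease=0.01,  # illustrative threshold, not a recommended value
    random_state=0,
)
tree.fit(X, y)
```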

What counts as high information

Suppose the three circles below show the class distributions that result from splitting on three different features. The result on the left clearly needs more information to describe its class makeup than the result on the right. In other words, splitting on the left feature provides relatively little information, while the right feature provides more, so the right feature is chosen first as the decision condition.

(Figure: three circles showing class distributions after splitting on three different features.)

Information gain

$$IG(D_p, f) = I(D_p) - \sum_{j=1}^{m} \frac{N_j}{N_p} I(D_j)$$

$IG(D_p, f)$ = the information gained by splitting parent node $D_p$ on feature $f$
$I(D_p)$ = the original information, before the split
$\sum_{j=1}^{m} \frac{N_j}{N_p} I(D_j)$ = the information after the split
$m$ = the number of child nodes the split produces
$\frac{N_j}{N_p}$ = the number of samples in child node $j$ divided by the total number of samples in the parent
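Translated directly into code, the formula might look like the following sketch (`impurity` stands for whichever information function is chosen in the next section; the function name is mine, not standard):

```python
from typing import Callable, Sequence

def information_gain(parent: Sequence,
                     children: Sequence[Sequence],
                     impurity: Callable[[Sequence], float]) -> float:
    """IG(Dp, f) = I(Dp) - sum_j (Nj / Np) * I(Dj)."""
    n_p = len(parent)  # Np: number of samples at the parent node
    # Weighted information of the child nodes after the split
    after = sum(len(child) / n_p * impurity(child) for child in children)
    return impurity(parent) - after
```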

Computing information gain

Since we want the information gained to be as large as possible, the information remaining after the split should be as small as possible.

Two common ways to measure the information after a split are entropy and Gini impurity.

The entropy information function

$$I_H(t) = -\sum_{i=1}^{c} p(i|t) \log_2 p(i|t)$$
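As a sketch in code (assuming `labels` is the list of class labels of the samples at node $t$):

```python
import math
from collections import Counter

def entropy(labels):
    """I_H(t) = -sum_i p(i|t) * log2(p(i|t))."""
    n = len(labels)
    # p(i|t): fraction of samples at the node belonging to class i;
    # classes with zero count never appear in the Counter, so log2(0) is avoided
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())
```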

The Gini impurity information function

$$I_G(t) = \sum_{i=1}^{c} p(i|t)\left(1 - p(i|t)\right) = 1 - \sum_{i=1}^{c} p(i|t)^2$$
$p(i|t)$ = the proportion of samples at node $t$ belonging to class $i$ (the probability of being correct if you guessed class $i$ for every sample)
$c$ = the number of classes at the branch
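The Gini impurity is just as short in code (same assumption as the entropy sketch above: `labels` holds the class labels at the node):

```python
from collections import Counter

def gini(labels):
    """I_G(t) = 1 - sum_i p(i|t)^2."""
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())
```

Both measures are 0 for a pure node; for an evenly mixed two-class node, entropy reaches 1 while Gini impurity reaches 0.5.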

Example


Suppose there are 80 samples: 40 belong to class 1 and 40 to class 2, and we compare two candidate splits. Split A divides the data into two groups of 40 each: the left group contains 30 class-1 samples and 10 class-2 samples, and the right group contains 10 class-1 samples and 30 class-2 samples. Split B divides the data 60/20: the left group contains 20 class-1 samples and 40 class-2 samples, while the right group contains only 20 class-1 samples.

The information of the original data, using the entropy function

$$I_H(t) = -\sum_{i=1}^{c} p(i|t) \log_2 p(i|t)$$
$$I(D_p) = -\left(0.5 \log_2(0.5) + 0.5 \log_2(0.5)\right) = 1$$

At the start, when the class distribution is (40, 40): guessing class 1 for every sample is correct with probability 0.5, which gives the term $0.5 \log_2(0.5)$, and guessing class 2 for every sample is likewise correct with probability 0.5, giving the other $0.5 \log_2(0.5)$. Since there are only two classes, $c = 2$.

Split A:

$$I(D_L) = -\left(\frac{3}{4} \log_2\frac{3}{4} + \frac{1}{4} \log_2\frac{1}{4}\right) \approx 0.81$$

For the left node, the distribution is (30, 10): guessing class 1 for every sample is correct with probability $\frac{3}{4}$, giving the term $\frac{3}{4} \log_2\frac{3}{4}$, while guessing class 2 is correct with probability $\frac{1}{4}$, giving $\frac{1}{4} \log_2\frac{1}{4}$.
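The arithmetic so far can be checked with the `entropy` sketch defined earlier (the lists below encode the (40, 40) parent and the (30, 10) left node of split A):

```python
parent = [1] * 40 + [2] * 40   # class distribution (40, 40)
left_A = [1] * 30 + [2] * 10   # left node of split A: (30, 10)

print(entropy(parent))  # 1.0
print(entropy(left_A))  # 0.8112781244591328, i.e. about 0.81
```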

Split A: