
Probability & Statistics for Machine Learning & Data Science(Week 3 - Sampling and Point estimation)

tags: coursera Linear Algebra math

Week 3 - Lesson 1 - Population and Sample

Population and Sample

Course link

Population and Sample


A population is the entire set of items we want to study, while a sample is the smaller subset that we actually observe.

For example, suppose you are hired to study the average height on an island of 10,000 people. It is not realistic to ask every single person, so you can probably only ask a few of them.

In this example, the 10,000 people are the population, denoted $N$, and the people you ask are the sample, denoted $n$, which can be anywhere from 1 to 9,999.

Random Sampling


To use a smaller-scale example, suppose the island has only 10 people and we want to draw conclusions from a sample of 4. There are two ways to pick them:

  1. Randomly pick 4 people
  2. Sort everyone by height and pick the first 4

Which is better?

The answer is 1: randomly picking 4 people is the better approach, because sorting first and taking the first 4 systematically biases the sample toward one end of the height range, as the sketch below illustrates.
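
A minimal sketch of the two approaches, using made-up heights (the actual island heights are not given in the course):

```python
import numpy as np

# Hypothetical heights (cm) for the 10 islanders -- illustrative values only
heights = np.array([152, 154, 155, 158, 160, 161, 163, 165, 166, 166])

rng = np.random.default_rng(seed=0)

# Approach 1: simple random sample of 4 people (without replacement)
random_sample = rng.choice(heights, size=4, replace=False)

# Approach 2: sort by height and take the first 4 (systematically the shortest people)
biased_sample = np.sort(heights)[:4]

print("population mean :", heights.mean())
print("random sample   :", random_sample.mean())
print("sorted-first-4  :", biased_sample.mean())
```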

Independent Sample


Suppose we are unhappy with the 4 people from the first random draw and want to draw again. There are two ways to do it:

  1. Exclude the 4 people from the first draw and sample from the remaining people
  2. Draw a new random sample from the entire population

Which is better?

The answer is 2. With approach 1, the second sample depends on the first one, which is bad for the experiment: the samples are no longer independent. A sketch of the independent version is shown below.
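
A minimal sketch of approach 2, where each draw comes from the full population, again using hypothetical heights:

```python
import numpy as np

heights = np.array([152, 154, 155, 158, 160, 161, 163, 165, 166, 166])  # hypothetical
rng = np.random.default_rng(seed=1)

# Each draw is taken from the FULL population, so the second draw
# does not depend on who was picked in the first draw.
first_sample = rng.choice(heights, size=4, replace=False)
second_sample = rng.choice(heights, size=4, replace=False)

print("first draw :", first_sample)
print("second draw:", second_sample)
```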

Identically Distributed Samples


During sampling we must also make sure that the samples all come from the same (identical) distribution.

Population and Sample in Machine Learning


In machine learning, no matter how large the dataset is, the data we work with is essentially a sample, not the entire population.

What matters is that the dataset is representative: a representative sample has the same distribution as the population.

Recap


A recap slide: it simply restates the concepts and notation for population and sample.

Sample Mean

Course link

Population and Sample Mean


Same example as before, with a population of 10 people.

In this case, finding the average height of the entire island is not hard; just compute it directly and you get

$\mu = 160$

Population and Sample Mean


For some reason, perhaps we cannot reach all 10 people, so we can only find 6 of them and compute the average, which turns out to be 160.97. Because this comes from a subset of only 6 people, it is called the sample mean and written as

$\bar{x} = 160.97$. This is our first estimate, written $\bar{x}_1$.

The next day we find another 6 people and compute the average again. This second estimate is

$\bar{x}_2 = 156$

Both sample means use a sample size of 6, but the first clearly gives the better result, and it is not hard to see that this depends on which values were sampled: the first sample was fairly random, while the second was concentrated on shorter people.

Population and Sample Mean


Now suppose we only have 2 samples ($n = 2$), giving the sample mean $\bar{x}_3$. Clearly $\bar{x}_1$ is the better estimate, because the first sample has 6 people while the third has only 2: the larger the sample size, the better the estimate.
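
A minimal sketch of this effect, assuming a made-up set of 10 heights whose population mean is exactly 160:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical island: 10 heights (cm), population mean exactly 160
heights = np.array([150, 153, 156, 158, 160, 161, 163, 165, 166, 168])
mu = heights.mean()

# Sample means for different sample sizes: larger n tends to land closer to mu
for n in (2, 6):
    sample = rng.choice(heights, size=n, replace=False)
    print(f"n={n}: sample mean = {sample.mean():.2f}  (population mean = {mu:.1f})")
```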

Sample Proportion

Course link

Proportion


Same example, same population of 10 people. Now suppose each person owns exactly one vehicle: a car, a motorcycle, or a bicycle. What proportion of the population owns a bicycle?

From the example, we simply put the number of bicycle owners in the numerator and the total population in the denominator, which gives 40% (assuming each person owns exactly one vehicle). This metric is called the population proportion and is denoted $p$.

Proportion


This slide gives the formula for the population proportion: the number of items with the characteristic of interest divided by the population size $N$.

Sample Proportion


If we cannot survey all 10 people and can only randomly ask 6 of them, the denominator becomes 6 and the numerator is whatever we are counting; in this example it is bicycles, so 2, giving $2/6 = 33.3\%$. Note that this is the sample proportion, written $\hat{p}$, and it is an estimate of the population proportion.
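
A minimal sketch of both quantities, with a hypothetical vehicle assignment (4 bicycle owners out of 10, matching the 40% in the example):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical vehicles for the 10 islanders (one vehicle each)
vehicles = np.array(["bike", "bike", "bike", "bike",
                     "car", "car", "car",
                     "motorcycle", "motorcycle", "motorcycle"])

# Population proportion p: bicycle owners / N
p = np.mean(vehicles == "bike")

# Sample proportion p_hat: bicycle owners in a random sample of 6, divided by 6
sample = rng.choice(vehicles, size=6, replace=False)
p_hat = np.mean(sample == "bike")

print(f"p = {p:.2f}, p_hat = {p_hat:.2f}")
```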

Sample Variance

Course link

Sample Variance


As covered in an earlier lesson, the variance measures how far the data points deviate from the mean.

In the figure, the upper dataset is more concentrated, so its variance is smaller; the lower dataset is more spread out, so its variance is larger. Both are computed over the entire dataset (the population size), and the mean used is also computed over the entire dataset (the population mean).

The question: how do we estimate the population variance from a sample?

Variance Estimation


Take a three-sided die whose faces are 1, 2, and 3. Its mean is clearly 2:

$\mu = \frac{1 + 2 + 3}{3} = 2$

Its variance is the average of the squared differences between the data points and the mean:

$\sigma^2 = \frac{(1-2)^2 + (2-2)^2 + (3-2)^2}{3} = \frac{2}{3}$

Note that these are population quantities, i.e., the population mean and the population variance, computed from the entire dataset.

Variance Estimation


Now let's try estimating it instead. Each time we roll the die twice, so $n = 2$, and we get the dataset shown in the slide. The calculation goes like this:

  1. Compute the mean $\bar{x}$ of each sample
  2. Compute the variance of each sample (dividing by $n$)
  3. Average the variances over all samples

What? The final answer is 0.333, i.e., $\frac{1}{3}$, which does not match the value computed from the full population, so something must have gone wrong.

Variance Estimation


The key point is that when estimating the population variance from a sample, the denominator should not be $n$ but $n - 1$.

Variance Estimation


This slide shows the formula for the population variance computed from the whole dataset, $\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2$, and the sample variance used to estimate it from a sample, $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$. A small simulation of the three-sided die follows.
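
A minimal simulation of the three-sided die, checking that dividing by $n$ under-estimates the population variance (giving roughly $1/3$ here) while dividing by $n-1$ recovers $2/3$:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Fair three-sided die: faces 1, 2, 3; population variance = 2/3
faces = np.array([1, 2, 3])

n = 2              # rolls per sample
trials = 200_000   # number of samples

samples = rng.choice(faces, size=(trials, n))

# Divide by n (ddof=0): biased, averages out to about 1/3 here
biased = samples.var(axis=1, ddof=0).mean()

# Divide by n-1 (ddof=1): unbiased, averages out to about 2/3
unbiased = samples.var(axis=1, ddof=1).mean()

print(f"divide by n   : {biased:.3f}   (population variance = {2/3:.3f})")
print(f"divide by n-1 : {unbiased:.3f}")
```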

Variance Estimation

image

This slide gives an example with a four-sided die. The process is the same, so there is not much to add: compute the mean, then compute the variance.

What we get here is the population variance.
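
As a worked version of the arithmetic for a fair four-sided die (these numbers follow directly from the definitions above):

$$
\mu = \frac{1+2+3+4}{4} = 2.5,
\qquad
\sigma^2 = \frac{(1-2.5)^2 + (2-2.5)^2 + (3-2.5)^2 + (4-2.5)^2}{4} = \frac{2.25 + 0.25 + 0.25 + 2.25}{4} = 1.25
$$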

Variance Estimation

image

Next, in the same way, if the sample variance is computed with the sample size as the denominator (here $n = 2$), the estimate comes out wrong.

Variance Estimation

image

If the denominator is changed to $n - 1$, the result comes out right.

Law of Large Numbers

Course link

Law of Large Numbers

image

The law of large numbers: the more data we have, the more accurate the estimate.

Take a four-sided die with faces 1, 2, 3, and 4; its mean is 2.5.

If we roll it only twice, the bottom-left figure shows all possible combinations, the middle figure shows all the possible sample means, and the right figure shows the overall mean of all those sample means.

Law of Large Numbers

image

This slide uses an example to show why more data points bring us closer to the mean. In the figure on the right, the x-axis is the number of samples, the y-axis is the mean of all the samples drawn so far, and the horizontal line is the population mean.

As you can see, the more samples we take, the closer the sample mean gets to the population mean.
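
A minimal sketch of that running mean, simulating the four-sided die (population mean 2.5):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Fair four-sided die: faces 1..4, population mean = 2.5
rolls = rng.integers(1, 5, size=10_000)

# Running sample mean after each additional roll
running_mean = np.cumsum(rolls) / np.arange(1, rolls.size + 1)

for n in (10, 100, 1_000, 10_000):
    print(f"after {n:>6} rolls: sample mean = {running_mean[n - 1]:.3f}")
```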

Law of Large Numbers

image

So, according to the law of large numbers, suppose:

  • $n$ is the number of samples
  • $X_i$ is the $i$-th observation

As $n$ approaches infinity, the sample mean gets closer and closer to the expected value of $X$, i.e., the mean of $X$, which is the population mean:

$\bar{X}_n \to E[X] = \mu \quad \text{as } n \to \infty$

Law of Large Numbers

image

Of course, this holds under a few conditions:

  • The sample must be drawn at random from the population
  • The sample must be large enough, otherwise the mean will not be accurate
  • The observations in the sample must be independent of one another

Central Limit Theorem - Discrete Random Variable

Course link

This lesson covers the central limit theorem: if we plot the mean of each sample, the resulting distribution eventually looks Gaussian.

Central Limit Theorem(CLT) - Example 1

image
image

image

image

This is the coin-flipping example used throughout the course, where $X$ is the number of heads and heads and tails each have probability one half. The corresponding probability distributions are shown on the right of the slide, from top to bottom for flipping 1 to 4 coins.

Central Limit Theorem(CLT) - Example 1

image

By the time we flip ten coins, the distribution already looks distinctly Gaussian.

Central Limit Theorem(CLT) - Example 1

image

Laying the charts out side by side, we can see that as the number of coins increases, the distribution indeed becomes more and more Gaussian.

Central Limit Theorem(CLT) - Example 1

image

If we flip the coin $n$ times and the probability of heads is $p$, then the mean is $\mu = np$, which can also be written as $nP(H)$, where $P(H)$ is the probability of heads.

The variance is

$\sigma^2 = np(1-p) = nP(H)P(T)$

Central Limit Theorem(CLT) - Example 1

image

If we flip the coin once, i.e., $n = 1$, the mean is $1 \times 0.5 = 0.5$ and the variance is $np(1-p) = 1 \times 0.5 \times 0.5 = 0.25$.

Central Limit Theorem(CLT) - Example 1

image

Running the same calculation for every case in the example gives the mean and variance of each sample.

Central Limit Theorem(CLT) - Example 1

image

As long as the number of flips is large enough, we get a normal distribution with mean $np$ and standard deviation $\sqrt{np(1-p)}$.
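
A minimal simulation of the coin example, checking the number of heads in $n = 10$ flips against the $np$ and $\sqrt{np(1-p)}$ formulas (the histogram of counts is already close to a bell shape):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n, p = 10, 0.5       # 10 flips of a fair coin
trials = 100_000

# X = number of heads in n flips, repeated many times
heads = rng.binomial(n, p, size=trials)

print("empirical mean:", round(heads.mean(), 3), " theory np =", n * p)
print("empirical std :", round(heads.std(), 3),  " theory sqrt(np(1-p)) =", np.sqrt(n * p * (1 - p)))

# Crude text histogram of the head counts
probs = np.bincount(heads, minlength=n + 1) / trials
for k, prob in enumerate(probs):
    print(f"{k:2d} | {'#' * int(prob * 200)}")
```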

Central Limit Theorem - Continuous Random Variable

Course link

Uniform Distribution: Motivation

image

This time the central limit theorem is explained for a continuous random variable. The example is again the familiar call-center waiting time, where $X$ is the waiting time. The agent answers within 0 to 15 minutes, written $X \sim U(0, 15)$. This is a uniform distribution, meaning every point in time is equally likely; in other words, the PDF (probability density function) is constant.

Central Limit Theorem(CLT) - Example 2

image

The average waiting time is the total waiting time over all calls divided by the number of calls. If the number of calls is $n$, the average waiting time can be written as

$Y_n = \frac{1}{n}\sum_{i=1}^{n} X_i$

Central Limit Theorem(CLT) - Example 2

image

With $n = 1$, we get a distribution like the one on the right of the slide, which looks very flat (uniform).

Central Limit Theorem(CLT) - Example 2

image

With $n = 2$ it already looks a bit triangular, and it is symmetric around 7.5 (the range is 0 to 15 minutes, so the mean is 7.5 minutes).

Central Limit Theorem(CLT) - Example 2

image
image

image

Next come $n = 3$, $n = 4$, and $n = 5$. It is not hard to see that the distribution looks more and more like a bell: as $n$ grows, it gets closer and closer to a Gaussian distribution.

Central Limit Theorem(CLT) - Example 2

image

Let's compute the mean and standard deviation of $Y_n$. Start with the mean: it is the expected value of the average of the $n$ variables $X_i$, i.e.

$E[Y_n] = E\left[\frac{1}{n}\sum_{i=1}^{n} X_i\right]$

By linearity of expectation, this equals

$\frac{1}{n}\sum_{i=1}^{n} E[X_i]$,

i.e., the sum of the individual expected values divided by $n$. Because the $X_i$ all have the same distribution, this is simply

$\frac{1}{n} \cdot n \cdot E[X] = E[X] = 7.5$

This makes sense: the waiting time has not changed, it is still 0 to 15 minutes, so the mean is 7.5.

For the variance, the constant $\frac{1}{n}$ also comes out of the variance, but squared. In addition, because the variables are independent, the variance of the sum,

$\mathrm{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)$,

turns into the sum of the variances,

$\frac{1}{n^2}\sum_{i=1}^{n} \mathrm{Var}(X_i)$

Since the $X_i$ are identically distributed, the sum is just $n$ copies of $\mathrm{Var}(X)$, which we can pull out to get

$\frac{1}{n^2} \cdot n \cdot \mathrm{Var}(X) = \frac{\mathrm{Var}(X)}{n}$

So in the end the variance is the population variance divided by $n$. Note that turning the variance of a sum into a sum of variances is not true in general; it holds here only because the variables are independent.
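
A minimal simulation check of these two results for the $U(0, 15)$ waiting time, where $E[X] = 7.5$ and $\mathrm{Var}(X) = 15^2/12 = 18.75$:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n = 5              # calls averaged per sample
trials = 200_000   # number of simulated samples

# X ~ U(0, 15): E[X] = 7.5, Var(X) = 15^2 / 12 = 18.75
x = rng.uniform(0, 15, size=(trials, n))
y = x.mean(axis=1)          # Y_n, the average waiting time over n calls

print("E[Y_n]  :", round(y.mean(), 3), " theory:", 7.5)
print("Var(Y_n):", round(y.var(), 3),  " theory:", 15**2 / 12 / n)
```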

Central Limit Theorem(CLT) - Example 2

image

Now we use the mean and variance we just derived to draw the corresponding Gaussian curve. As $n$ increases, the Gaussian curve fits the data distribution better and better, and the curve also becomes narrower and narrower. This makes sense: the larger $n$ is, the smaller the variance, so the data become more concentrated.

From about $n = 3$ onward, the Gaussian already roughly matches the data distribution here, but that is not typical; usually the match only becomes good around $n = 30$ or more. In the end it depends on the original data distribution.

Central Limit Theorem(CLT) - Example 2

image

To summarize: when we take the average $Y_n$ of $n$ independent, identically distributed random variables, its mean equals the population mean, i.e., $\mu_{Y_n} = \mu$, and its variance is the population variance divided by $n$, i.e., $\sigma_{Y_n}^2 = \frac{\sigma^2}{n}$

This means the mean stays the same, but the variance shrinks as $n$ grows. That also makes sense: the larger $n$ is, the closer we get to the population mean, and as the data concentrate, the variance gets smaller.

Central Limit Theorem(CLT) - Example 2

image

The central limit theorem states that as $n$ goes to infinity, the standardized mean follows the standard normal distribution. Keep in mind, though, that $n$ generally needs to be at least 30 before the distribution looks normal.

There is also a version of the central limit theorem stated in terms of the sum rather than the average.

Central Limit Theorem(CLT) - Example 2

image

We can say that as $n$ approaches infinity, the sum of $n$ independent, identically distributed random variables, minus $n$ times the population mean $E[X]$, and then divided by the square root of $n$ times the population standard deviation, also follows the standard normal distribution. This is the central limit theorem.
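
Written out (with $S_n = X_1 + \dots + X_n$ and $\sigma$ the population standard deviation, notation introduced here for clarity), the average form and the sum form of the standardized statement are:

$$
\frac{Y_n - \mu}{\sigma / \sqrt{n}} \;\longrightarrow\; \mathcal{N}(0, 1)
\qquad\text{and}\qquad
\frac{S_n - n\mu}{\sigma \sqrt{n}} \;\longrightarrow\; \mathcal{N}(0, 1)
\qquad\text{as } n \to \infty
$$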