# Introduction to Artificial Intelligence
###### tags: `課堂筆記`
# Lecture 1
### What is AI
- A term coined in 1956, covering four approaches:
    - ==Defined by thought processes==
        - Thinking Humanly
        - Thinking Rationally
    - ==Defined by behavior==
        - Acting Humanly
        - Acting Rationally

-----------------

## Four Categories of AI

### **Acting Humanly** (the Turing test approach) requires
* NLP
* Knowledge representation
* Automated reasoning
* Machine Learning
* Computer vision
* Robotics

### **Thinking Humanly** (the cognitive modeling approach) means the machine must
* determine how humans think
    * through introspection
    * through psychological experiments
    * through brain imaging

==Cognitive science== is the field that combines computer models from AI with testable theories and experimental data about the human mind.

### **Thinking Rationally** (the "laws of thought" approach)
By 1965 there were programs that could, in principle, solve any solvable problem stated in logical notation, but two obstacles remain:
```
It is hard to express informal, non-logicist knowledge in formal logical notation.
There is a big difference between "solving it in principle" and "solving it in practice".
```

### **Acting Rationally** (the rational agent approach)
* **Rational agent**: A rational agent could be anything that makes decisions — a person, firm, machine, or software. In other words, a decision maker.
This approach has two advantages:
```
It is more general than the "laws of thought" approach.
It is more amenable to scientific development (it can stand up to testing).
```

## Foundations of AI
* Disciplines that need to be considered:
    * Philosophy (do we reach conclusions that match reality?)
    * Mathematics (can it be computed?)
    * Economics (is it cost-effective? how do we maximize payoff?)
    * Neuroscience (does it think the way a brain does?)
    * Psychology (how do humans think?)
    * Computer Engineering (how do we build computers fast enough?)
    * Control Theory and Cybernetics (how do we control it?)
    * Linguistics (how does language relate to thought?)

## History of AI
### Award winners

| Year | Event(s) |
| ---- | -------- |
| 1969 | Marvin Minsky / Turing Award |
| 1971 | John McCarthy / Turing Award |
| 1975 | Allen Newell / Herbert Simon / symbolic model / Turing Award |
| 1994 | Ed Feigenbaum / Raj Reddy / expert systems / Turing Award |
| 2011 | Judea Pearl / probabilistic reasoning / Turing Award |
| 2019 | Yoshua Bengio / Geoffrey Hinton / Yann LeCun / deep learning / Turing Award |

### Events

| Year | Event(s) |
| ---- | -------- |
| 1943 | Warren McCulloch and Walter Pitts showed that any computable function can be computed by some network of connected neurons |
| 1949 | Donald Hebb demonstrated a simple updating rule for modifying the connection strengths between neurons |
| 1950 | Marvin Minsky and Dean Edmonds built the first neural network computer |
| 1952 | Arthur Samuel wrote a checkers program that could play at a strong amateur level |
| 1956 | McCarthy and colleagues organized a two-month workshop at Dartmouth |
| 1952–1969 | GPS (General Problem Solver), Newell and Simon's first "thinking humanly" approach |
| 1976 | Newell and Simon stated that **an intelligent system must operate by manipulating data structures composed of symbols** (the physical symbol system hypothesis) |
| 1959 | Herbert Gelernter constructed the Geometry Theorem Prover, which was able to prove theorems |
| 1958 | McCarthy defined the language **LISP** |
| 1966–1973 | "The spirit is willing but the flesh is weak" (famously back-translated as "the vodka is good but the meat is rotten") illustrated the limits of machine translation at the time |
| 1969–1986 | Expert systems appeared; the **DENDRAL program** was developed to tackle harder, knowledge-intensive problems |
| 1969–1986 | Feigenbaum and others at Stanford developed MYCIN to diagnose blood infections /// domain knowledge |
| 1969–1986 | In the 1980s, the AI industry boomed from millions to billions of dollars |
| **AI winter** | [failures](https://zh.wikipedia.org/zh-tw/%E4%BA%BA%E5%B7%A5%E6%99%BA%E6%85%A7%E4%BD%8E%E8%B0%B7) — many companies failed to deliver what they had claimed AI could do |
| mid 1980s | Several different groups reinvented the **back-propagation learning algorithm**; the **Hidden Markov Model** came to dominate automatic speech recognition (ASR) |
| 1987–present | Probability rather than Boolean logic, and machine learning rather than hand-coding, gained ground |
| 1988 | Judea Pearl developed **Bayesian networks**, which were hugely successful |
| 1988 | Rich Sutton → reinforcement learning ← Markov decision processes |
| 2001–now | Big data: the Internet makes huge amounts of data available → AI becomes far more useful |
| 2011–now | Deep learning, convolutional neural networks /// AlphaGo |
* Scalability: the fact that a program can find a solution in principle does not mean that it contains any of the mechanisms needed to find it in practice.
* The main reasons AI became powerful: better mathematical models and probabilistic reasoning.
* Benefit of AI: it does a lot of repetitive work for humans.
* Drawback of AI: it can weaken cybersecurity and increase unemployment.

# Lecture 2
## Agents and Environments
* Basically, an agent is anything that interacts with its environment; it perceives through **sensors** and acts through **actuators**.
* An example from the lecturer:
:::success
A vacuum cleaner perceives through its sensors which spots need cleaning, and its actuators drive it there to clean them.
:::
:::warning
**Rational agent**: an agent that makes the ==right choice==.
**What counts as right?** If what it does is what a person would consider satisfactory, it is the right choice.
A better definition: **a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.**
In short: its choice should be the best one possible.
For the vacuum cleaner, we could measure its performance by, for example:
1. how much dirt it picks up at a time
2. how clean the floor ends up
:::
* If poor design means the agent cannot reach the best outcome on its own, the agent lacks autonomy.
* A rational agent should be autonomous — ==able to learn on its own==.

## Task Environments
### PEAS — specifying the task environment
Take a taxi driver as an example:
* P = Performance Measure
    * a safe, legal, comfortable trip
* E = Environment
    * roads, traffic, pedestrians, customers
* A = Actuators
    * brakes, accelerator, horn
* S = Sensors
    * cameras, GPS

### Properties of task environments (list + explanation)

| Property | Meaning |
| -------- | ------- |
| Fully observable vs. partially observable | the sensors see every detail of the state vs. only part of it |
| Single-agent vs. multi-agent | only one agent is acting vs. several agents, each one's outcome affected by the others' actions |
| Episodic vs. sequential | the previous decision does not affect the next one vs. earlier decisions affect later ones |
| Static vs. dynamic | the environment does not change while the agent is deliberating vs. it does |
| Discrete vs. continuous | states, time, percepts, and actions take discrete values vs. continuous ones |
| **Known** vs. **unknown** | the agent knows the "rules" (the outcomes of its actions) vs. it has to learn them |

### The Structure of Agents / Agent Programs
* agent = architecture + program
:::success
Pseudocode for a table-driven agent:
:::
```javascript!
let percepts = [];          // the percept sequence observed so far, initially empty
const table = {             // a table of actions indexed by percept sequences, initially fully specified
  '["A,Dirty"]': 'Suck',
  '["A,Clean"]': 'Right',
};

function Table_Driven_Agent(percept) {
  percepts.push(percept);                          // append the newest percept to the sequence
  const action = table[JSON.stringify(percepts)];  // look up the action for the whole sequence
  return action;
}
```
The program stores every percept it observes in `percepts` (its memory), keeps it up to date, and then looks up what to do based on the entire sequence.
:::danger
A Simple Reflex Agent:
:::
```javascript!
function Reflex_Vacuum_Agent([location, status]) {   // percept = [location, status]
  if (status === 'Dirty') return 'Suck';
  else if (location === 'A') return 'Right';
  else if (location === 'B') return 'Left';
}
```
or, more generally:
```javascript!
const rules = { /* condition–action rules */ };

function Simple_Reflex_Agent(percept) {
  const state = Interpret_Input(percept);   // interpret the percept into a state description
  const rule = Rule_Match(state, rules);    // find the rule whose condition matches this state
  const action = rule.action;
  return action;
}
```
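To make the reflex-agent loop concrete, here is a minimal runnable Python sketch (not from the lecture) that drives the vacuum agent above in a hypothetical two-square world; the world layout, the starting dirt, and the step count are made up for illustration.
```python!
# A minimal sketch (assumptions: a made-up two-square world with locations 'A' and 'B',
# both dirty at the start, agent starting at 'A').

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

world = {'A': 'Dirty', 'B': 'Dirty'}   # hypothetical environment state
location = 'A'

for step in range(4):
    action = reflex_vacuum_agent((location, world[location]))
    print(step, location, world[location], '->', action)
    if action == 'Suck':
        world[location] = 'Clean'      # the actuator changes the environment
    elif action == 'Right':
        location = 'B'
    elif action == 'Left':
        location = 'A'
```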
## Model-based Reflex Agents
* This agent keeps an ==internal state== that is updated from the percept history. "The agent keeps track of the part of the world it can't see now": it records what it has perceived so far and uses that to decide what to do.
:::warning
Original text: *The agent keeps track of the part of the world it can't see now. The agent should maintain some sort of internal state that depends on the percept history.*
:::
* A simple pseudocode sketch follows; loosely, you can think of a model-based agent as carrying forward a running state, a bit like DP.
```javascript!
let state = {};     // the agent's current conception of the world state
let model = {};     // how the next state depends on the current state and the action
const rules = {};   // condition–action rules
let action = null;  // the most recent action, initially none

function Model_Based_Reflex_Agent(percept) {
  state = Update_State(state, action, percept, model);  // work out the current situation
  const rule = Rule_Match(state, rules);                // find the matching rule
  action = rule.action;                                 // take the corresponding action
  return action;
}
```

## Goal-Based Agents
* Knowing only the current state is sometimes not enough to decide on an action, so the agent is also given a **goal** to produce better results.
* Compared with a purely model-based agent, a goal-based agent makes it easier to reason about the final outcome we want.
:::info
Structurally, a model-based agent and a goal-based agent are built almost the same way; the only difference is that a goal is supplied. Pseudocode:
:::
```javascript!
let state = {};     // the agent's current conception of the world state
let model = {};     // how the next state depends on the current state and the action
const rules = {};   // condition–action rules
let action = null;  // the most recent action, initially none
const goal = 'goal';

function Goal_Based_Agent(percept) {
  state = Update_State(state, action, percept, model, goal); // work out the current situation w.r.t. the goal
  const rule = Rule_Match(state, rules);                     // find the matching rule
  action = rule.action;                                      // take the corresponding action
  return action;
}
```

## Utility-Based Agents
* A goal alone may not give the best result: it can be too coarse to let the agent make the rational choice in the current situation, whereas a utility-based agent can still decide rationally, for example:
    * when there are several different goals and only one can be pursued.
* A rational utility-based agent chooses according to the expected effect (utility) of its actions — compared with the previous two agents, it adds one more optimization criterion. Pseudocode:
```javascript!
let state = {};     // the agent's current conception of the world state
let model = {};     // how the next state depends on the current state and the action
const rules = {};   // condition–action rules
let action = null;  // the most recent action, initially none
const goal = 'goal';

function Utility_Based_Agent(percept) {
  state = Update_State(state, action, percept, model, goal); // work out the current situation w.r.t. the goal
  const rule = Rule_Match(state, rules);                     // find the matching rule
  action = Max_Reward(rule.actions);                         // note: actions (plural) — among everything the
                                                             // rule allows, pick the best action right now
  return action;
}
```

## Learning Agents
* Learning element: improves the program over time.
* Performance element: decides on the external actions.
* Problem generator: suggests new, informative actions to try.

# Lecture 3
:::danger
This lecture focuses on one kind of goal-based agent, the **problem-solving agent**. It also defines how to evaluate a search algorithm (agent):
:::

-----------------
1. Completeness: when a solution exists, will it be found?
2. Optimality: will the best solution be found?
3. Time complexity: how long does it take to find a solution?
4. Space complexity: how much memory is needed?
-------------------

## Uninformed Search
* Also called **blind search**.
* It uses no information beyond the problem definition; search that does use extra guidance is called informed or ==heuristic== search.
* Uninformed search (e.g. BFS) can often find the optimal solution, but slowly.
:::warning
Examples:
```
Classic BFS is an uninformed search.
DFS is also an uninformed search.
```
:::
* Because uninformed search can be very expensive, several compromises are used:
:::danger
Compromises:
1. Depth-Limited Search — essentially DFS with a fixed depth limit; nodes beyond that depth are not expanded.
2. Iterative Deepening Search — exploits DFS's low memory usage while behaving like BFS: run depth-limited search with limit 0, 1, 2, … until the goal is found. Re-expanding the shallow levels does not cost much, because most of the work is at the deepest level.
3. Bidirectional search — search from both the start and the goal, hoping the two frontiers meet in the middle.
4. Best-first Search — when choosing the next node to expand, pick the one with the best evaluation function f(n). The greedy flavour (greedy best-first search), like all greedy methods, is usually not globally optimal.
:::

## A* Search
1. The best-known best-first search.
2. It expands nodes according to:
```
g(n) = the cost to reach node n ("cost to reach the node"), i.e. the sum of edge costs so far
h(n) = the estimated (heuristic) cost from n to the goal
f(n) = g(n) + h(n) = the estimated cost of the cheapest solution through n
```
> h(n) should be an admissible heuristic: it never overestimates the true remaining cost.
> Another key property (consistency): h(n) ≤ c(n, a, n') + h(n').
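As a concrete illustration of f(n) = g(n) + h(n), here is a minimal A* sketch in Python (not from the lecture); the grid, the unit step costs, and the Manhattan-distance heuristic are assumptions made up for this example.
```python!
# A minimal A* sketch (assumptions: a made-up 4-connected grid, unit step costs,
# and Manhattan distance as an admissible heuristic h(n)).
import heapq

def a_star(start, goal, neighbors, h):
    frontier = [(h(start), 0, start, [start])]        # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)    # expand the node with smallest f
        if node == goal:
            return path, g
        for nxt, step_cost in neighbors(node):
            g2 = g + step_cost                        # g(n') = g(n) + c(n, a, n')
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))  # f = g + h
    return None, float('inf')

# Toy usage on a 3x3 grid with one blocked cell.
blocked = {(1, 1)}
def neighbors(p):
    x, y = p
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        q = (x + dx, y + dy)
        if 0 <= q[0] < 3 and 0 <= q[1] < 3 and q not in blocked:
            yield q, 1
goal = (2, 2)
h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance, admissible here
print(a_star((0, 0), goal, neighbors, h))
```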
# Lecture 4
## Local Search Algorithms and Optimization
> **Local search**
>> usually examines only the neighbors of the current state
>> very low memory cost (constant)
>> a good fit for optimization problems, which provide an ==objective function==

:::info
Several local search methods follow.
:::

## Hill Climbing Search
* A very intuitive search; it stops when it reaches a peak (a ==local maximum==).
* Pseudocode:
```javascript!
function Hill_Climbing(problem) {
  let current = Make_Node(problem.initial_state);
  while (true) {
    const neighbor = Highest_Valued_Successor(current);  // best neighbor of the current node
    if (neighbor.value <= current.value) {
      return current.state;                              // local maximum reached
    }
    current = neighbor;                                  // move to the better neighbor
  }
}
```
Common variations of hill climbing:
> Stochastic hill climbing
>> chooses at random among the uphill moves (steeper moves can be given higher probability)
>
> First-choice hill climbing
>> generates successors at random and takes the first one that is better than the current state
>
> Random-restart hill climbing
>> runs hill climbing from a series of randomly generated initial states until a solution is found

## Simulated Annealing
* Annealing, in metallurgy and materials engineering, is a heat treatment that changes a material's microstructure and hence mechanical properties such as hardness and strength: the metal is heated above its recrystallization temperature, held there for a while, and then cooled slowly.
* Pseudocode:
```python!
import math, random
from itertools import count

# problem : supplies the initial state and a neighborhood structure
# schedule: maps the time step t to a temperature T
def simulated_annealing(problem, schedule):
    current = Make_Node(problem.initial_state)
    for t in count():                                    # t = 0, 1, 2, ...
        T = schedule(t)                                  # the temperature at this time step
        if T == 0:
            return current                               # frozen: return the current node
        next_node = random.choice(current.neighbors)     # pick a random neighbor
        delta_E = next_node.value - current.value        # ΔE: difference between the neighbor and current
        if delta_E > 0:
            current = next_node                          # an improvement is always accepted
        elif random.random() < math.exp(delta_E / T):    # otherwise accept with probability e^(ΔE/T)
            current = next_node
```
In other words, it is a probability-based hill climbing: if the next state is better it is always accepted; if it is worse, it is still accepted with some probability that depends on the temperature.

## Local Beam Search
* This algorithm keeps k states instead of a single one (==unlike simulated annealing==).
* It starts from k randomly generated states; if none is a goal, it generates the successors of all k states and keeps the k best, repeating until a goal is found.
* Pseudocode:
```python!
def local_beam_search(problem, k):
    states = [problem.random_state() for _ in range(k)]        # start with k random states
    while True:
        goals = [s for s in states if problem.is_goal(s)]
        if goals:
            return goals[0]                                     # return a goal state if we have one
        successors = [s2 for s in states for s2 in problem.successors(s)]
        states = sorted(successors, key=problem.value, reverse=True)[:k]  # keep the k best successors
```

## Genetic Algorithms
Covered in the homework — skipped.

## Newton's Method
> Here the question is how to choose the next step in a continuous space: the gradient ∇f(x) gives a direction, but how do we take a good step? Newton's method uses second-order information; see the figure below.
![](https://i.imgur.com/j0KJEX9.png)

# Lecture 5
## Acting Under Uncertainty
This lecture turns to probabilistic models: in many situations the agent cannot know the current state exactly, so probability has to be used.
* A purely logical approach often fails to give a good answer — the rules are left dangling — and the reasons a rule set ends up poorly designed include:
    * Laziness: there are too many factors to list.
    * Theoretical ignorance: the domain knowledge is incomplete.
    * Practical ignorance: even when the general rules are known, we may lack the facts about this particular case.
* A probabilistic model lets us work with this ignorance.
* For making actual choices, the guiding formula is:
```
Decision theory = probability theory + utility theory
actual choice   = probabilistic model + utility model
i.e. combine what is likely with what is preferred to reach a decision
```

## Product Rule
![](https://i.imgur.com/2LYzduT.png)

## Inclusion–Exclusion Principle
![](https://i.imgur.com/qKg3WlZ.png)

## Bayes' Rule
![](https://i.imgur.com/u16OeC3.png)
* It is useful because when three of these probabilities are easy to estimate and the fourth is not, Bayes' rule lets us compute the fourth.
* Original text: *Bayes' rule is useful in practice because there are many cases where we do have good probability estimates for these three numbers and need to compute the fourth.*

# Lecture 6
## Bayesian Networks
* A Bayesian network is a directed graph.
* Each node is a random variable — continuous or discrete.
* There are no directed cycles, so it is a DAG (Directed Acyclic Graph).
* Each node X has an associated conditional probability P(X | Parents(X)), for each X in G.
![](https://i.imgur.com/l3FK8Nv.png)

An example of such a network:
:::success
You have an ```alarm``` that is triggered by **burglary** (with high probability) and by **earthquake** (with low probability).
You also have two neighbors, **John** and **Mary**, who promised to call you when they hear the alarm.
**John** has sharp ears: when he calls, it is likely the ==alarm really rang== (high probability), though he sometimes mistakes the ==phone ringing== for it (low probability).
**Mary** has poor hearing and mostly fails to call.
:::
:::info
Let's compute the probability of a burglary under the various calling scenarios; the figure below shows all the relationships.
:::
![](https://i.imgur.com/jwi6YBx.png)
* We can compute any particular situation, for example the following
![](https://i.imgur.com/dgQfmKC.png)
is the probability that John calls, Mary calls, the alarm rings, there is no burglary, and there is no earthquake.
> How do we build this expression?
> First expand it with the product (chain) rule, then drop the conditioning variables that are irrelevant,
> because P(a|b) = P(a) and P(b|a) = P(b) ==when a and b are independent==.
> ![](https://i.imgur.com/3f4hXaU.png)
>> For instance, MaryCalls is unrelated to the other variables; it depends directly only on Alarm.

* The CPT (Conditional Probability Table) is the table of these probabilities.
* A Bayesian network contains no ==redundant== probability entries.

### Chain Rule
![](https://i.imgur.com/lFtaJXV.png)
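To make the factorization concrete, here is a small Python check (not from the lecture) of the joint entry discussed above, P(j, m, a, ¬b, ¬e) = P(j|a)·P(m|a)·P(a|¬b,¬e)·P(¬b)·P(¬e). The CPT values below are the usual textbook numbers for the alarm network, assumed here rather than read from the figures.
```python!
# A small sketch: evaluating P(j, m, a, ¬b, ¬e) for the alarm network via the chain rule.
# The CPT values are the standard textbook numbers (an assumption, not taken from the figures above).
P_b      = 0.001                 # P(Burglary)
P_e      = 0.002                 # P(Earthquake)
P_a_nbne = 0.001                 # P(Alarm | ¬b, ¬e)
P_j_a    = 0.90                  # P(JohnCalls | a)
P_m_a    = 0.70                  # P(MaryCalls | a)

joint = P_j_a * P_m_a * P_a_nbne * (1 - P_b) * (1 - P_e)
print(joint)                     # ≈ 0.000628: John and Mary both call, the alarm rings,
                                 # but there is neither a burglary nor an earthquake
```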
### Continuing the example — what if we make it more complicated?
* What if, when an earthquake occurs, John and Mary assume the alarm was set off by the earthquake and therefore do not call? Then the graph acquires more dependencies, as in the figure below:
![](https://i.imgur.com/7QpDEIG.png)
The changes are:
:::info
1. MaryCalls: ==no parents==
2. JohnCalls: ==if Mary calls, the alarm most likely rang, so John will most likely call too==
3. Alarm: if both have called, the alarm almost certainly rang; if only one called, that caller becomes the node's parent
4. Burglary: if we know the state of the alarm, we essentially know whether a burglary happened
![](https://i.imgur.com/I7A55sh.png)
5. Earthquake: if the alarm rang, an earthquake is also more likely
:::
* Usually the variables at a node can be continuous (recall the continuous vs. discrete distinction above), but it is hard to assign probabilities to continuous values directly, so one approach is **discretization**:
    * suppose {temperature} ∈ ℝ; we can split it into
        * < 0
        * 0–100
        * \> 100
    * and it is much easier to assign probabilities to these three cases.
* The most common alternative is to define **standard families of probability density functions**, functions specified by a finite set of parameters.
    * Example: the Gaussian (normal) distribution
    * ![](https://i.imgur.com/ECYsgas.png)
    * Its parameters are μ (the mean) and σ (the standard deviation).
* If a network contains both discrete and continuous variables, it is called a **hybrid Bayesian network**.
    * For such cases we can use a **linear Gaussian distribution**:
    * the mean μ varies linearly with the value of the continuous parent, while σ stays fixed.

## The middle of Chapter 13 is mostly equations — read it yourself

## The Variable Elimination Algorithm
* Naive enumeration over the network repeats a lot of work when there are many variables, which hurts performance; variable elimination is an algorithm that avoids this.
* Read the formula below from right to left:
![](https://i.imgur.com/blX8fZA.png)
* Each intermediate result (factor) is stored once and reused; factors that depend on other variables are only combined when needed. (Back to the burglary example.)
![](https://i.imgur.com/G1i6CCd.png)
* Each f above is a factor, i.e. a matrix; because J and M each have only two probability entries given A, f4 and f5 can be written as
![](https://i.imgur.com/DKCndps.png)
* f3 is a 2×2×2 factor; written as two 2×2 slices (one for A = a, one for A = ¬a) it is

$$
f_3(A{=}a) =
\begin{pmatrix}
P(a\mid b,e) & P(a\mid b,\neg e)\\
P(a\mid \neg b,e) & P(a\mid \neg b,\neg e)
\end{pmatrix},
\qquad
f_3(A{=}\neg a) =
\begin{pmatrix}
P(\neg a\mid b,e) & P(\neg a\mid b,\neg e)\\
P(\neg a\mid \neg b,e) & P(\neg a\mid \neg b,\neg e)
\end{pmatrix}
$$

and in the slides' notation it looks like ![](https://i.imgur.com/qtN6jzD.png)
* First combine the last three factors (multiply them and sum out A), giving a 2×2 factor f6:
![](https://i.imgur.com/z90YZw4.png)
so the expression becomes
![](https://i.imgur.com/A3edjJy.png)
Then combine the front part (summing out E), which produces f7:
![](https://i.imgur.com/XPPE9Io.png)
The original formula finally becomes
![](https://i.imgur.com/YyrucH9.png)
which yields the final table
![](https://i.imgur.com/LIb3I9f.png)
:::danger
Factors that are not needed for the current computation are simply left untouched.
:::

## Approximate Inference in Bayesian Nets — Monte Carlo algorithms
* Exact inference in a large network is very hard to keep track of, so approximate inference is an important part of the picture.
* Here we use a **randomized sampling algorithm**, also called a Monte Carlo algorithm, whose accuracy depends on the number of samples you give it.
* These methods come in two main families:
    * ==Direct sampling==
    * ==Markov chain sampling==

### Direct Sampling
* The primitive element in any sampling algorithm is the generation of samples from a known probability distribution.
    * For example, a coin flip: P(coin) = [0.5, 0.5].
* The idea of direct sampling:
    * sample every node, following the topological order of the network;
    * each value is sampled from a distribution conditioned on the values already assigned to the variable's parents.
```python!
# bn: a Bayesian network specifying the joint distribution P(X1, ..., Xn)
def prior_sample(bn):
    x = [None] * bn.n
    for i, Xi in enumerate(bn.variables):              # variables listed in topological order
        x[i] = random_sample(P(Xi | parents(Xi)))      # condition on the parents' already-sampled values
    return x
```
Let's trace what the algorithm does:
:::danger
Suppose the network has X = [Cloudy, Sprinkler, Rain, WetGrass] with the relationships:
P(Cloudy) = [0.5, 0.5] → sampled ==True==
P(Sprinkler | Cloudy = true) = [0.1, 0.9] → sampled ==False==
P(Rain | Cloudy = true) = [0.8, 0.2] → sampled ==True==
P(WetGrass | Sprinkler = false, Rain = true) = [0.9, 0.1] → sampled ==True==
So the function returns X = [true, false, true, true], sampled according to these probabilities.
:::
:::warning
First, let S_PS(x1, ..., xn) be the probability that a specific event is generated by the PRIOR-SAMPLE algorithm. Just looking at the sampling process, we have
![](https://i.imgur.com/JS0tbKD.png)
because each sampling step depends only on the parent values.
:::
![](https://i.imgur.com/2Rv14t2.png)
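Here is a minimal runnable sketch (not from the lecture) of PRIOR-SAMPLE on the Cloudy/Sprinkler/Rain/WetGrass network traced above; the CPT entries not quoted in the trace are filled in with the usual textbook values, which is an assumption.
```python!
# A minimal sketch of direct (prior) sampling on the sprinkler network.
# CPT values: those quoted above, completed with the standard textbook numbers (an assumption).
import random

def prior_sample():
    cloudy    = random.random() < 0.5
    sprinkler = random.random() < (0.1 if cloudy else 0.5)
    rain      = random.random() < (0.8 if cloudy else 0.2)
    p_wet     = {(True, True): 0.99, (True, False): 0.90,
                 (False, True): 0.90, (False, False): 0.0}[(sprinkler, rain)]
    wet_grass = random.random() < p_wet
    return cloudy, sprinkler, rain, wet_grass

# Estimate P(Rain = true) from N samples; it should converge to 0.5*0.8 + 0.5*0.2 = 0.5.
N = 100_000
count = sum(1 for _ in range(N) if prior_sample()[2])
print(count / N)
```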
* Rejection sampling in Bayesian networks
    * Rejection sampling (RS) is a way to sample when the target distribution is hard to sample from directly —
    * namely, to estimate P(X | e). Pseudocode:
```python!
from collections import defaultdict

# X : the query variable
# e : observed values of the evidence variables E
# bn: the Bayesian network
# N : the total number of samples to generate
def rejection_sampling(X, e, bn, N):
    counts = defaultdict(int)               # counts[x] = consistent samples in which X = x
    for _ in range(N):
        sample = prior_sample(bn)           # generate one sample from the prior (see above)
        if consistent_with(sample, e):      # keep only samples that agree with the evidence e
            counts[sample[X]] += 1
    return normalize(counts)                # the estimate of P(X | e)
```
* The biggest problem with rejection sampling is that it needs far too many samples!
* The estimate does converge to the correct answer, but generating and discarding sample after sample takes too much time.

## Inference by Markov Chain Simulation
* MCMC (Markov Chain Monte Carlo) differs from the previous algorithm:
    * it does not generate each sample from scratch;
    * it generates the next sample by modifying the previous one.
* MCMC — **Gibbs sampling**
    * starts with an arbitrary state;
    * generates the next state by randomly sampling a value for one of the non-evidence variables Xi;
    * the variable is resampled conditioned on its **Markov blanket**;
    * Markov blanket = the variable's parents, its children, and its children's other parents.

Pseudocode for Gibbs sampling:
```python!
from collections import defaultdict

# X : the query variable;  e : the evidence;  bn : the network;  N : the number of iterations
def gibbs_ask(X, e, bn, N):
    counts = defaultdict(int)                 # counts for each value of X, initially 0
    Z = bn.nonevidence_variables(e)           # the non-evidence variables
    x = initial_state(bn, e)                  # current state: evidence fixed, Z initialized arbitrarily
    for _ in range(N):
        for Zi in Z:
            x[Zi] = sample_from(P(Zi | markov_blanket(Zi, x)))  # resample Zi given its Markov blanket
            counts[x[X]] += 1
    return normalize(counts)                  # the estimate of P(X | e)
```

# Couldn't keep reading from here — what follows are only answers to the exam questions

* Inference over time: methods for finding the next state by working from the previous states.
```
These all live under the transition and sensor models.
```

# Past Exam Questions

## Difference between the Markov (transition) model and the sensor model
* Sensor model
    * ![](https://i.imgur.com/mj6BSGa.png)
* Markov (transition) model
    * ![](https://i.imgur.com/FwDjt4W.png)

## Kalman Filters
![](https://i.imgur.com/yXlLKyF.png)

## When doing inference over time, we usually make... (Q8)
* 1. the Markov assumption
* 2. the assumption of a stationary process

1. Markov assumption:
    * the current state depends on only a finite, fixed number of previous states.
    * Simplest case: a first-order Markov process,
    * i.e. the current state is determined only by the previous state (not by all earlier ones).
    * One issue with this approach: how far back should we look,
    * i.e. how many previous states should the process condition on?
2. Stationary process:
    * a process of change that is governed by laws that do not themselves change over time.
    * The conditional distributions are assumed to be the same at every time step, so a single probability table suffices.

## In a Hidden Markov Model (Q9)
* To solve the ==filtering problem== and the ==smoothing problem==, what is the main idea of the forward–backward algorithm that largely reduces the time complexity?
> Terms first:
>> Filtering:
>>> the task of computing the belief state —
>>> i.e. the distribution over the current state, given all the evidence so far;
>>> a filtering algorithm should update its estimate at every step, so it never has to revisit the whole history;
>>> in other words, it can be written as a recursive formula: ![](https://i.imgur.com/0l9KWmt.png)
>>
>> Smoothing:
>>> Smoothing is the process of computing the distribution over past states given evidence up to the present.
* The main idea of the forward–backward algorithm is to define a recursive function that takes only a constant amount of time per step, reducing the time complexity from O(t²) (recomputing at every step) to O(t) (saving and reusing the state).
* Main idea: dynamic programming — remember what has already been computed (possible because the quantities are defined recursively).
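To illustrate the filtering recursion, here is a minimal forward-algorithm sketch in Python (not from the lecture) for a two-state HMM; the transition and sensor matrices and the evidence sequence are made-up example numbers.
```python!
# A minimal sketch of the forward (filtering) recursion for a 2-state HMM.
# The transition/sensor numbers and evidence below are made-up example values.

transition = [[0.7, 0.3],   # P(X_t = j | X_{t-1} = i)
              [0.3, 0.7]]
sensor     = [[0.9, 0.1],   # P(e | X = i) for evidence values e in {0, 1}
              [0.2, 0.8]]

def normalize(v):
    s = sum(v)
    return [p / s for p in v]

def forward(f_prev, evidence):
    """One filtering step: f_{1:t+1} = α · P(e_{t+1}|X_{t+1}) · Σ_x P(X_{t+1}|x) · f_{1:t}(x)."""
    predicted = [sum(f_prev[i] * transition[i][j] for i in range(2)) for j in range(2)]
    return normalize([sensor[j][evidence] * predicted[j] for j in range(2)])

f = [0.5, 0.5]                 # prior belief state
for e in [1, 1, 0, 1]:         # a made-up evidence sequence
    f = forward(f, e)
    print(f)                   # belief state after each observation — constant work per step
```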
## What is overfitting? (Q10-1)
* In statistics, overfitting means matching a particular data set too closely or precisely, so that the model fails to fit additional data or to predict future observations well.
* Overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit additional data or predict future observations reliably."

## We may suffer from overfitting when learning a decision tree. What strategies can we apply to reduce it? (Q10-2)
* Decision tree pruning is one way to combat overfitting.
* Given a tree produced by Decision_Tree_Learning, look at a node whose children are all leaves. If the test at that node appears to pick up only noise (is irrelevant), remove the node and replace it with a leaf, and repeat this process until no such node remains.

## What is the kernel trick in SVMs?
* SVMs construct a linear separating hyperplane, but they can effectively embed the data into a higher-dimensional space by using the ==kernel trick==.
* ==Kernel trick==: when the classes cannot be separated by a linear classifier in the original space, a non-linear mapping into a higher-dimensional space can make them easier to separate there.
* Essentially, the kernel trick supplies a kernel function that computes the inner products the data (vectors) would have after being projected into that higher-dimensional space, without computing the projection explicitly.

## The idea of ensemble learning
* The idea of ensemble learning methods is to select a collection, or ensemble, of hypotheses from the hypothesis space and combine their predictions.
* In short, it is a grab bag of hypotheses — read the details yourself.

## The idea of **Random Forests**
* The random forest model is a form of decision tree bagging in which we take extra steps to make the ensemble of K trees more diverse.
* The key idea is to randomly vary the attribute choices: at each split point in constructing a tree, we select a random subset of the attributes and then compute which of those gives the highest information gain.
* A random forest combines hundreds of decision trees and achieves better predictions than any single tree alone (see the sketch below).
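As a practical footnote (not part of the notes), here is a minimal scikit-learn sketch showing bagging plus random attribute selection in action; the synthetic dataset and the hyperparameter values are arbitrary choices for illustration.
```python!
# A minimal random forest sketch using scikit-learn on a synthetic dataset (an assumption,
# not part of the course material).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(
    n_estimators=300,      # number of trees in the ensemble (bagging)
    max_features="sqrt",   # random subset of attributes considered at each split point
    random_state=0,
)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))   # accuracy of the ensemble on held-out data
```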