---
title: Introduction to Machine Learning - Part I
tags: 2021 Fall - Introduction to Machine Learning
GA: G-77TT93X4N1
---
# Introduction to Machine Learning (機器學習導論) - Part I
---
## Instructor
* Instructor: Te-Sheng Lin (林得勝)
* Office: SA242, ext. 56422
* Email: tslin@math.nctu.edu.tw
* Office hours: by appointment
---
## Course information
<!--
* Online meeting at butter - [https://app.butter.us/teshenglin/2021machinelearning](https://app.butter.us/teshenglin/2021machinelearning)
---
-->
### Textbook:
> There is no required textbook.
#### Reference:
* Lecture notes
1. [Stanford CS229 - Lecture notes1](http://cs229.stanford.edu/notes2020fall/notes2020fall/cs229-notes1.pdf)
2. [Stanford CS229 - Lecture notes2](http://cs229.stanford.edu/notes2020fall/notes2020fall/cs229-notes2.pdf)
---
### Grading Policy
1. In class (5%)
* (2%) Either be on duty for one lecture or share your lecture notes.
* (3%) Other in-class activities.
2. Assignments (15%)
---
## Course calendar:
| Monday | Topic | Thursday | Topic |
|------|---|------|--------|
| 9/13 | Course introduction - Lin | 9/16 | Lin |
| 9/20 | **Holiday** | 9/23 | Lin, [Youtube - MLE](https://youtu.be/6NVx1Pd2DKc) |
| 9/27 | Lin | 9/30 | Lin |
| 10/4 | Lin | 10/7 | Lin |
| 10/11 | **Holiday** | 10/14 | Lin |
---
## Assignments
* Assignment 0 - Self-practice on `python` (no hand-in)
* [Learn X in Y minutes, X=python](https://learnxinyminutes.com/docs/python/)
* CS229 - Python Review Code - [pdf](http://cs229.stanford.edu/notes2021spring/notes2021spring/python-review-code.pdf), [code in ipynb](http://cs229.stanford.edu/notes2021spring/notes2021spring/cs229-python-review-code.ipynb)
* Remark on Colab
* Mount your drive - [Google Colab 實用奧步篇 ( 連結硬碟、繪圖中文顯示問題 )](https://ithelp.ithome.com.tw/articles/10234373)
    * The location of your Google Drive
        * `/content/drive/MyDrive/Colab Notebooks`
    * The location of pre-installed packages
        * `/usr/local/lib/python3.7/dist-packages`
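The mounting step itself is short; a minimal sketch (it only works inside a Colab notebook, where the `google.colab` package is pre-installed, and it will prompt you to authorize access):

```python
# Runs only inside Google Colab; authorize when prompted.
from google.colab import drive

drive.mount('/content/drive')

# After mounting, your notebooks appear under:
#   /content/drive/MyDrive/Colab Notebooks
```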
* Assignment 1
> Extract house size and price data for your hometown, and use linear regression to find the best linear model.
>
> You should
> * Specify and describe your data properly
> * Explain how the data was extracted
> * Submit an `ipynb` file to E3
* Colab ipynb
* [LMS_GD](https://colab.research.google.com/drive/1MylGpZs7wkodcYGyWnyQ5ONYByOFx7DX?usp=sharing)
* [房價分析](https://nbviewer.jupyter.org/github/teshenglin/Intro_machine_mearning/blob/main/房價分析.ipynb)
* [Pandas - Data frame](https://hackmd.io/@teshenglin/r16ok9uXK)
* Examples
* [實價登錄房價分析.ipynb](https://nbviewer.jupyter.org/github/teshenglin/Intro_machine_mearning/blob/main/house_price_analyze.ipynb)
    * [How to obtain data and read it into Python - hackmd](https://hackmd.io/@DiamondSheep/SkGIx41VK)
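As a starting point for Assignment 1, here is a minimal least-squares sketch; the size/price numbers are made up for illustration and should be replaced by the data you extract:

```python
import numpy as np

# Hypothetical toy data: house sizes and prices (units are up to you);
# replace these arrays with the data extracted for your hometown.
size = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
price = np.array([400.0, 480.0, 610.0, 690.0, 790.0])

# Design matrix with an intercept column: price ≈ w0 + w1 * size
X = np.column_stack([np.ones_like(size), size])

# Least-squares solution of the normal equations
w, *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"intercept = {w[0]:.2f}, slope = {w[1]:.2f}")
```

The same fit can also be obtained with the gradient-descent (LMS) update covered in the lecture notebook; `lstsq` is just the closed-form check.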
* Assignment 2
> You should submit a single PDF file to E3.
1. Show that
$$
\int^{\infty}_{-\infty} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)\,\mathrm{d}x = 1
$$
2. Given a sample $\{x_i\}^N_{i=1}$ drawn from the normal distribution $N(\mu, \sigma)$, show that the critical point of the likelihood function found in class is indeed a global maximum.
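For problem 1, one common route (a sketch, not necessarily the intended solution) is to substitute $t=(x-\mu)/\sigma$ to reduce the integral to the standard Gaussian, and then square it and switch to polar coordinates:

```latex
I = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}
      e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,\mathrm{d}x
  \;\overset{t=(x-\mu)/\sigma}{=}\;
  \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\,\mathrm{d}t,
\qquad
I^2 = \frac{1}{2\pi}\int_{\mathbb{R}^2} e^{-(s^2+t^2)/2}\,\mathrm{d}s\,\mathrm{d}t
    = \frac{1}{2\pi}\int_0^{2\pi}\!\!\int_0^{\infty} e^{-r^2/2}\, r\,\mathrm{d}r\,\mathrm{d}\theta
    = 1,
```

and since $I>0$, it follows that $I=1$.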
* Assignment 3
1. (ipynb) Explore the data (or find real data from some source) and find a way to visualize it so that it exhibits a two-dimensional curve-like structure.
> You should
> * Specify and describe properly your data
> * Explain how the data is obtained and extracted
>
> In particular, make sure the explanation is clear enough that the graph and data can be reproduced by your coursemates.
2. (PDF) For the binary classification problem, take the cost function to be the mean squared error and derive the corresponding gradient descent rule.
3. (PDF) Consider the cross-entropy loss for a two-class classification problem where the features lie in $\mathbb{R}^2$. Derive the corresponding gradient descent rule.
* Remark: $x^i\in\mathbb{R}^2$, $y^i\in\mathbb{R}^2$, $h(x^i)\in\mathbb{R}^2$.
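For the computational side of problems 2-3, the sketch below runs plain gradient descent for a sigmoid model on synthetic 2-D data (the clusters and all parameter values are made up for illustration); the update uses the standard cross-entropy gradient $X^T(h-y)/N$, which is the kind of formula your derivation should produce:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic 2-D toy data: two Gaussian clusters, labels 0 and 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)),
               rng.normal(2.0, 1.0, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
Xb = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column

# Gradient descent on the cross-entropy loss:
# with h = sigmoid(Xb @ w), the gradient is Xb.T @ (h - y) / N.
w = np.zeros(3)
lr = 0.1
for _ in range(2000):
    h = sigmoid(Xb @ w)
    w -= lr * Xb.T @ (h - y) / len(y)

accuracy = np.mean((sigmoid(Xb @ w) > 0.5) == y)
print(f"training accuracy = {accuracy:.2f}")
```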
* Assignment 4
1. Show that
$$
\int_{\mathbb{R}^2} \frac{1}{2\pi|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right)\,\mathrm{d}x = 1,
$$
where $\Sigma$ is a $2\times 2$ symmetric positive definite matrix.
2. Let $f\in C^2(\mathbb{R})$ satisfy $f''(x)\ge 0$ for all $x\in \mathbb{R}$. Show that $f$ is convex.
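For problem 2, one possible outline (a sketch, not the only route) applies Taylor's theorem with Lagrange remainder about the point $z=\lambda x+(1-\lambda)y$ for $\lambda\in[0,1]$:

```latex
% Expand about z = \lambda x + (1-\lambda) y; \xi_1, \xi_2 lie between the points.
f(x) = f(z) + f'(z)(x-z) + \tfrac{1}{2} f''(\xi_1)(x-z)^2 \;\ge\; f(z) + f'(z)(x-z),
\qquad
f(y) \;\ge\; f(z) + f'(z)(y-z).
```

Multiplying by $\lambda$ and $1-\lambda$ and adding, the first-order terms cancel because $\lambda(x-z)+(1-\lambda)(y-z)=0$, giving $\lambda f(x)+(1-\lambda)f(y)\ge f(\lambda x+(1-\lambda)y)$.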
* Assignment 5
1. Find a function $f(x,y)$ such that $(x_0, y_0)$ is a critical point, i.e., $\nabla f(x_0,y_0)=0$, and moreover $\partial^2_x f(x_0, y_0)<0$ and $\partial^2_y f(x_0, y_0)<0$, yet $f$ does not attain a local maximum at $(x_0, y_0)$.
---
<!--
2. Find a function $f:\mathbb{R}\to\mathbb{R}$ such that $f$ has a unique critical point at $x=0$ and $f''(0)<0$, but $f$ does not attain a global maximum at $x=0$.
### References
* [playground.tensorflow.org](https://playground.tensorflow.org)
#### Lecture 1
#### Lecture 2 - p1.-p.10(sec.2) (9/23, 9/27)
#### Lecture 3 - p.10(sec.3)-p.19(sec.7) (9/29, 9/30)
#### Lecture 4 - sec.6, 8, 9 (10/4)
#### Lecture 5 partIV (10/7, 10/14)
---
#### References2 - 剖析深度學習
* [(1):為什麼Normal Distribution這麼好用?](https://www.ycc.idv.tw/deep-dl_1.html)
* [(2):你知道Cross Entropy和KL Divergence代表什麼意義嗎?談機器學習裡的資訊理論](https://www.ycc.idv.tw/deep-dl_2.html)
* [(3):MLE、MAP差在哪?談機器學習裡的兩大統計觀點](https://www.ycc.idv.tw/deep-dl_3.html)
* [(4):Sigmoid, Softmax怎麼來?為什麼要用MSE和Cross Entropy?談廣義線性模型](https://www.ycc.idv.tw/deep-dl_4.html)
-->