# GCP 2024
[toc]
---
## Week1 09/10
[Learning resource 1](https://b23.tv/7mjcAiu)
Create an account and a project

---
## Week2 09/17 Mid-Autumn Festival (no class)
---
## Week3 09/24
What benefits can cloud technology bring?
> 1. Pay-per-use pricing
> 2. High reliability
> 3. Low latency
> 4. Avoids data leaking out (to other regions or countries)
The cloud offers three major categories of service: compute, storage, and networking
---
Comparing cloud services by abstraction level and degree of management

<-- less control, but you can focus purely on a specific task -- more control, but more details to handle -->
#### Firebase
Abstraction: highest
Management: fully managed
Traits: suited to rapid development and deployment; mainly for mobile apps and the web.
#### Cloud Functions
Abstraction: high
Management: fully managed
Traits: serverless architecture; just write the function code and run it. You focus only on delivering the feature, with no concern for the OS or load.
#### App Engine
Abstraction: medium-high
Management: fully managed
Traits: suited to auto-scaling applications; focus on the code rather than the infrastructure.
#### Cloud Run
Abstraction: medium
Management: moderately managed
Traits: runs containerized applications and offers more flexible deployment options.
#### Kubernetes Engine
Abstraction: lower
Management: moderately customizable
Traits: purpose-built for container orchestration; flexible and scalable.
#### Compute Engine
Abstraction: lowest
Management: highly customizable
Traits: virtual machines, with the greatest control and room for customization.
---
One account can contain multiple projects (Projects), and each project can hold its own resources (Resources)

---
A paid account allows further management (grouping into Teams)

---
### GCP data storage

#### Cloud SQL
Type: relational database (SQL database)
Use: structured data storage for traditional applications.
Traits: supports MySQL, PostgreSQL, SQL Server, and more.
#### Cloud Spanner
Type: distributed relational database (distributed SQL database)
Use: large-scale applications with a global footprint.
Traits: high availability, strong consistency, automatic scaling.
#### BigQuery
Type: data warehouse
Use: real-time analytics and big-data processing.
Traits: supports SQL queries; suited to batch analysis of large datasets.
#### Bigtable
Type: NoSQL database (wide-column)
Use: time-series data, IoT data, high-throughput applications.
Traits: high performance; suited to low-latency workloads.
#### Cloud Storage
Type: object storage
Use: unstructured data such as media files and backups.
Traits: highly scalable; suited to long-term storage.
#### Datastore / Firestore
Type: NoSQL document database
Use: real-time data sync for mobile and web applications.
Traits: real-time, with offline-mode support.
---
### Cloud computing:

IaaS (Infrastructure as a Service)
Virtual machines supporting different OSes; RAM, CPU, disk, etc. can be customized on demand
---
### Creating a VM (Google Compute Engine, GCE)
Billable resources:
CPU, memory (billed only while the machine is on)
Hardware, i.e. disk (billed whether the machine is on or off)
Zones add reliability (redundancy); every Region has at least 3 Zones

<!-- to catch up: 10:56~ ? -->

---
Three ways to connect to a VM
1. SSH
2. Cloud Shell
3. SDK
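For the SDK route, a minimal sketch with the gcloud CLI (the instance name and zone here are placeholders):
```sh
# SSH into a VM through the gcloud CLI; name and zone are assumptions
gcloud compute ssh myvm --zone=asia-east1-b
```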
---
## Week4 10/01
[Important] When to use which service

[Important]
After SSHing into the VM:
1. Update the package index: `sudo apt update`
2. Install the web server: `sudo apt install apache2 -y`
3. Check the server status: `sudo systemctl status apache2`

4. Install netstat: `sudo apt install net-tools`
5. Confirm port 80 is LISTENing: `sudo netstat -tunlp | grep 80`
>tcp6 0 0 :::80 :::* LISTEN 2496/apache2
6. Browse to the page


7. Create a page (first `cd /var/www/html`): `sudo bash -c 'echo "hi" > hi.htm'`
> `sudo echo "hi" > hi.htm` fails with insufficient permission: `>` splits the command, so the redirection runs as the unprivileged shell; wrap the whole thing in `bash -c ''` [Important]
8. Browse to the page to check

9. Dynamically write the internal IP address into a page: `sudo bash -c 'echo "$(hostname -I)" > test.htm'`

To look up the internal IP address, use `ifconfig` or `ip addr show`

---
Have the web server ready as soon as the VM boots
1. Create a new VM
Under Advanced > Automation, add:
```sh
#! /bin/bash
apt update
apt -y install apache2
cat <<EOF > /var/www/html/index.html
<html><body><p>Linux startup script added directly. $(hostname -I) </p></body></html>
EOF
```

2. After the machine starts, open the page directly

---
Creating an image
1. Build an image from an existing VM (in the stopped state)

Create a VM from the image [Create instance]


The web server is accessible

---
### Changing a VM's resource allocation
1. Shut down the existing VM (state: stopped)
2. Click


3. Modify the configuration
---
### Pinning the public IP
The (public) IP changes every time the machine boots, which is unsuitable for hosting a server
1. Stop the VM and open the edit page
2. 


Finally, [SAVE]
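The same can also be done from the CLI; a hedged sketch (VM name, region, and address name are assumptions):
```sh
# Reserve a static external IP in the VM's region
gcloud compute addresses create my-static-ip --region=asia-east1
# Swap the VM's ephemeral access config for the reserved address
gcloud compute instances delete-access-config myvm --access-config-name="External NAT"
gcloud compute instances add-access-config myvm --access-config-name="External NAT" \
  --address=$(gcloud compute addresses describe my-static-ip --region=asia-east1 --format='value(address)')
```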
Also, to make it easier for users to connect, a Domain Name can be used ([register here](https://dynv6.com/))

Access via the Domain Name

---
## Week5 10/08
### Billing Alert




### Uploading my web page to the cloud
0. Create the local HTML file
1. Local HTML file -> Cloud Storage
2. Cloud Storage -> GCE (website) (mind the permissions)
> Whenever resource A accesses resource B, check that the Service Account has sufficient permissions
0. Create the local HTML file


Saving produces a folder and an HTML file (as below)

---
1. Local HTML file -> Cloud Storage
Create a bucket


Upload the page to the bucket

> Upload accepts files or folders
---
Alternatively, use Cloud Shell (a standalone machine separate from GCE, with 5 GB of storage)


`gsutil ls gs://` lists the buckets

Copy to another test bucket (gs://guangjhe-web2-bucket/)
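A minimal sketch of that copy from Cloud Shell (assuming both buckets already exist):
```sh
# List buckets, then copy the site from the first bucket into the test bucket
gsutil ls gs://
gsutil cp -r gs://guangjhe-web-bucket/* gs://guangjhe-web2-bucket/
```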


---
2. Cloud Storage -> GCE (website)
First create a VM that serves the web, using the default permissions

Copy the files to the VM
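A sketch of that copy, run on the VM (assuming the default service account can read the bucket):
```sh
# Pull the bucket contents into the VM's home directory
gsutil cp -r gs://guangjhe-web-bucket ~/
```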

Set up the web server
>sudo apt update
sudo apt install apache2 -y
sudo cp -r ~/guangjhe-web-bucket /var/www/html
Access the page at 104.199.224.63/guangjhe-web-bucket/"hello world".html

---
#### Extra: downloading files from the VM to the local machine
Method 1. GCP -> Cloud Shell -> local
This can be done directly through the UI

Or copy to a bucket first, but the service account lacks permission (`gsutil cp test.txt gs://guangjhe-web-bucket`)

Fixing the service account's insufficient permissions
>(Setting IAM: no effect) IAM (Identity Access Management)


Stop the VM, edit it, change the service account, then restart

Copy to the bucket again
gsutil cp test.txt gs://guangjhe-web-bucket
> Still insufficient permission (will try another approach next week)
Using the default service account, configure:

After changing permissions, the cached credentials must be deleted
>sudo rm -rf ~/.gsutil
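A hedged CLI sketch of the stop, change service account/scopes, restart sequence (VM name, zone, and scope are assumptions):
```sh
gcloud compute instances stop myweb --zone=asia-east1-b
# Grant read/write access to Cloud Storage via access scopes
gcloud compute instances set-service-account myweb --zone=asia-east1-b --scopes=storage-rw
gcloud compute instances start myweb --zone=asia-east1-b
```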

### SSHing from one GCE VM to another (connecting as root)
Create two VMs
web1 (10.140.0.6)
web2 (10.140.0.7)
web1 can ping web2

---
On both web1 and web2:
Set the root password
sudo passwd root
> 12345678

Try to connect
web1:
ssh root@10.140.0.7

> Permission denied: Ubuntu's sshd does not allow root login
On web2:
Edit the sshd config and add PermitRootLogin yes
root@myweb2:/home/linkim0914# `vim /etc/ssh/sshd_config`
>sudo apt install vim -y
sudo systemctl restart sshd
Try to connect again
web1:
ssh root@10.140.0.7
> Permission denied: web2 still needs web1's public key

Write the contents of web1's id_rsa.pub into web2's authorized_keys
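A minimal sketch of that key exchange (run as root on each side; the pasted key line is a placeholder):
```sh
# On web1: generate the key pair and print the public key
ssh-keygen -t rsa
cat /root/.ssh/id_rsa.pub
# On web2: append the printed key to root's authorized_keys
mkdir -p /root/.ssh
echo "<contents of web1's id_rsa.pub>" >> /root/.ssh/authorized_keys
# Back on web1:
ssh root@10.140.0.7
```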
Connected successfully

---
## Week6 10/15
### Accessing a DB through the web
1. Create the VMs
Create the WWW server
>Fill Automation with:
```
#! /bin/bash
apt update
apt -y install apache2
cat <<EOF > /var/www/html/index.html
<html><body><p>Linux startup script added directly. $(hostname -I) </p></body></html>
EOF
```
> Firewall: Allow HTTP traffic
> ubuntu 20.04

Create the DB (mydb)
> Firewall: nothing needed
> ubuntu 22.04
> 
Install the DB on mydb [reference](https://blog.tarswork.com/post/mariadb-install-record)
> 
> 
> sudo apt install net-tools
[Allow remote login] So the www server can reach the db conveniently, change the bind IP: `sudo vim /etc/mysql/mariadb.conf.d/50-server.cnf`

Change it to 0.0.0.0
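A one-liner sketch of that edit (assuming the stock 50-server.cnf layout):
```sh
# Rewrite the bind address so remote hosts can connect, then restart MariaDB
sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/mariadb.conf.d/50-server.cnf
sudo systemctl restart mariadb
```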

[Allow root login] By default the DB does not allow remote root login
First connect to the DB locally as root: `mysql -h 127.0.0.1 -u root -p`
`GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '12345678' WITH GRANT OPTION;`
> `%` means "from anywhere"
Reload the privilege tables
`FLUSH PRIVILEGES;`

---
The www server (10.140.0.8) connects to the db (10.140.0.10)
www: `sudo apt install mysql-client`
mysql -h 10.140.0.10 -u root -p

---
Adding content to the db
Environment setup [reference](https://docs.ossii.com.tw/books/ubuntu-server-2004-apache-mariadb-php)

[Install the PHP packages](https://docs.ossii.com.tw/books/ubuntu-server-2004-apache-mariadb-php/page/php-81)

test.php
```
<?php
$servername = "10.140.0.10";
$username = "root";
$password = "12345678";
$dbname = "testdb";
$conn = new mysqli($servername, $username, $password, $dbname);
if ($conn->connect_error) {
    die("connection failed:" . $conn->connect_error);
}
$sql = "select name, phone from addrbook";
$result = $conn->query($sql);
if ($result->num_rows > 0) {
    while ($row = $result->fetch_assoc()) {
        echo "name:" . $row["name"] . " phone:" . $row["phone"] . "<br>";
    }
} else {
    echo "0 result";
}
?>
```

---
### A quick trick for creating VMs
Pasting the circled part straight into Cloud Shell creates the VM without going through the GUI

Even handier: put it in a script to create several VMs at once, as sketched below
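A hedged sketch of such a script (names, zone, and machine type are assumptions):
```sh
# Create three web VMs in one loop instead of clicking through the GUI
for i in 1 2 3; do
  gcloud compute instances create "www$i" \
    --zone=asia-east1-b --machine-type=e2-micro
done
```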

---
## Week7 10/22
### Moving to a paid billing account once the free trial expires
[reference](https://www.shanyemangfu.com/gcp-300-old-account.html)
---
### Firewall
In GCP, different subnets under the same VPC can communicate with each other; firewall rules are scoped to the VPC
Created a rule named allow-3306 that lets 10.140.0.0/16 connect to the db, and updated the firewall settings on the db
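The equivalent rule from the CLI, as a sketch (the network name is an assumption):
```sh
gcloud compute firewall-rules create allow-3306 \
  --network=default --direction=INGRESS --action=ALLOW \
  --rules=tcp:3306 --source-ranges=10.140.0.0/16
```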


Create www2
> asia-east2 (Hong Kong); www and db are both in Taiwan

In the default VPC's firewall settings, the default rules follow the naming scheme network-allow|deny-service. [Important] Rules are matched by priority, smallest value first (the console defaults to 1000 and the built-in rules sit at 65534); if nothing matches, the traffic is blocked

The default-allow-internal rule matches everything (tcp and udp, ports 0-65535), which is why last week's allow-3306 rule still could not block specific connections
> This rule makes network operations more flexible, but lowers security
### Creating a VPC
In the default VPC, VMs can reach each other directly even when they sit in different Zones (Regions); conversely, VMs in the same Zone but different VPCs cannot communicate directly (VPC peering can bridge them, [Important] but the IP ranges must not overlap)
Create the VPC

Subnet creation mode
- [ ] Custom: the IP range for each of the 42 zones must be set by hand
- [X] Automatic: IP ranges for all 42 zones are assigned automatically

>[Disable] the myvpc-allow-custom rule: it matches every IP, and enabling it would let all machines reach one another
Everything else uses the default settings

Create a second VPC, myvpc-2, with Subnet creation mode: Custom
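A CLI sketch of the two networks just described:
```sh
# myvpc with automatic subnets, myvpc-2 with custom (empty) subnets
gcloud compute networks create myvpc --subnet-mode=auto
gcloud compute networks create myvpc-2 --subnet-mode=custom
```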

> No firewall configured
Create a firewall rule for myvpc-2



---
#### Experiment: ping between custom VPCs
Create vm1 (10.140.0.2): myvm-myvpc, N1, ubuntu, myvpc, asia-east1-a
Create vm2 (192.168.1.2): myvm-myvpc2, N1, ubuntu, myvpc2, asia-east1-a

Even in the same region, they cannot ping each other

### VPC peering (must be created in both directions)
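A sketch of the two directions (network names follow the experiment below and are assumptions):
```sh
# Peering only carries traffic once both sides exist
gcloud compute networks peerings create peer-1to2 \
  --network=myvpc --peer-network=myvpc2
gcloud compute networks peerings create peer-2to1 \
  --network=myvpc2 --peer-network=myvpc
```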




Ping OK

#### Experiment: SSH between custom VPCs

vm1 runs ssh-keygen and copies its public key to vm2; it can then connect to vm2
---
#### Setting up a www server
1. vm2: install Python: `sudo apt install python3`
2. vm2: create a page: `echo hi > hi.htm`
3. vm2: start the server: `python3 -m http.server 9000`
In another terminal on vm2, fetch the page: `curl http://127.0.0.1:9000/hi.htm`

vm1 cannot reach it, because vm2's port 9000 is not open to vm1; a firewall rule must be added (see the sketch below)
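A sketch of such a rule (the network name and source range are assumptions based on the experiment's addresses):
```sh
# Open vm2's port 9000 to vm1's subnet
gcloud compute firewall-rules create allow-9000 \
  --network=myvpc2 --direction=INGRESS --action=ALLOW \
  --rules=tcp:9000 --source-ranges=10.140.0.0/20
```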


Connect via vm2's external IP

<!-- 11/5 midterm -->
#### Changing the SSH port
Edit /etc/ssh/sshd_config
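A hedged sketch of the change (port 2222 is an example; a matching GCP firewall rule for that port is also needed):
```sh
# Switch sshd to port 2222 and restart it
sudo sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
sudo systemctl restart sshd
# Then connect with an explicit port
ssh -p 2222 user@EXTERNAL_IP
```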

#### Exercise
Run the HTTP server on port 8888 in the VM located in myvpc2, such that only the VM in myvpc1 can browse the webpage; no other machine can.

## Week8 10/29
### Cloud SQL (serverless)
[SQL](https://console.cloud.google.com/sql/instances?referrer=search&authuser=1&hl=en&project=myfirstproject-437300)
Choose MySQL

Edition preset:
Production: higher performance
- [X] Development: lower
Use MySQL 8.0
Enable the API: [Cloud SQL Admin API](https://console.cloud.google.com/apis/api/sqladmin.googleapis.com/metrics?project=myfirstproject-437300&authuser=1&cloudshell=true&hl=en)
Cloud Shell> `sudo apt install mysql-client` installs the MySQL client
Cloud Shell> `gcloud sql connect mydb --user=root --quiet`
Cloud Shell> `mysql -h 35.229.175.129 -u root -p` connects to MySQL

250 GB is provided by default, and capacity grows automatically when space runs low (a cloud benefit)
After switching to a private IP for better security, VMs in the default VPC can still connect to the SQL instance
`sudo apt install mysql-client`
`mysql -h 10.87.176.3 -u root -p`

Add data in the db
```
/* show the existing databases */
show databases;
/* create a database */
create database testdb;
/* use the database */
use testdb;
/* create a table */
create table addrbook(name varchar(50) not null, phone char(10));
/* insert rows */
insert into addrbook(name, phone) values ("tom", "0912123456");
insert into addrbook(name, phone) values ("mary", "0912123567");
/* select rows */
select name,phone from addrbook;
```
First install the packages on the VM
sudo apt install apache2 php libapache2-mod-php php-mysql
sudo systemctl restart apache2
Add the webpage on the VM
sudo vim /var/www/html/test.php
```
<?php
$servername = "10.87.176.3";
$username = "root";
$password = "12345678";
$dbname = "testdb";
$conn = new mysqli($servername, $username, $password, $dbname);
if ($conn->connect_error) {
    die("connection failed: " . $conn->connect_error);
} else {
    echo "connect OK!" . "<br>";
}
$sql = "select name,phone from addrbook";
$result = $conn->query($sql);
if ($result->num_rows > 0) {
    while ($row = $result->fetch_assoc()) {
        echo "name: " . $row["name"] . "\tphone: " . $row["phone"] . "<br>";
    }
} else {
    echo "0 record";
}
?>
```

### Load Balancer

>In myvpc (region: Taiwan), external requests hitting the load balancer are spread across 3 www servers, which in turn fetch data from the db
There are two kinds of LB:
unmanaged (create the servers by hand first, then attach the LB)
managed (create an instance template first; the group scales out/in with traffic)
Create the instance group

Create the [LB](https://console.cloud.google.com/net-services/loadbalancing/list/loadBalancers?authuser=1&hl=en&project=myfirstproject-437300)




---
#### LB load test
Cloud Shell> siege -c 200 -r 1000 35.229.184.248
Simulates 200 concurrent users, each making 1000 requests to http://lbIP/
Direct web access (top) vs through the LB (bottom)

---
## Week9 11/05 Midterm exam: LB
Task: have the LB spread traffic across two HTTP servers in myVPC1, with every HTTP server able to reach the DB in myVPC2
> You need to create two vpc networks, i.e. myvpc1 and myvpc2. In myvpc1 (any zone), create two VMs with http server. In myvpc2, create DB (vm or cloud sql). make http servers connect to DB. Also add one load balancer. If a customer connect to LB, LB will dispatch the traffic to the backend http server.
Steps:
<!--Initial setup:
Create a new project
Enable the Compute Engine API-->
1. Create myVPC1 and myVPC2 (same zone) and create the instance group
2. Configure the firewall (and VPC peering) so the two VPCs can communicate (3306)
3. Create the Cloud SQL DB and enable the [Cloud SQL Admin API](https://console.cloud.google.com/apis/api/sqladmin.googleapis.com/metrics?project=myfirstproject-437300&authuser=1&cloudshell=true&hl=en). With only a private IP, other VPCs could not connect to the DB (cause unknown), so I built the DB myself instead
4. Create a VM for the db (Ubuntu 22.04, installed following these [steps](https://blog.tarswork.com/post/mariadb-install-record)). [Mind the version] 20.04 raises errors
After connecting to the DB, create the following data
```
show databases;
create database testdb;
use testdb;
create table addrbook(name varchar(50) not null, phone char(10));
insert into addrbook(name, phone) values ("tom", "0912123456");
insert into addrbook(name, phone) values ("mary", "0912123567");
select name,phone from addrbook;
```
5. Configure privileges to allow external root login
6. In /etc/mysql/mariadb.conf.d/50-server.cnf change bind-address from 127.0.0.1 to 0.0.0.0, then in mysql grant privileges with GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '12345678' WITH GRANT OPTION; and reload them with FLUSH PRIVILEGES;
7. Create VM1 (HttpServer) and VM2 (HttpServer) through Automation (which installs every package needed to reach the db; the DB's IP must be written in beforehand), bringing the HTTP servers up directly and querying the DB through a PHP script,
```shell=
#!/bin/bash
apt update
apt -y install apache2 php libapache2-mod-php php-mysql mysql-client
cat <<EOF > /var/www/html/index.php
<html>
<body>
<p>Linux startup script added directly. <?php echo gethostname(); ?></p>
<?php
\$servername = "the DB address goes here";
\$username = "root";
\$password = "12345678";
\$dbname = "mydb";
\$conn = new mysqli(\$servername, \$username, \$password, \$dbname);
if (\$conn->connect_error) {
    die("Connection failed: " . \$conn->connect_error);
} else {
    echo "Connect OK!" . "<br>";
}
\$sql = "SELECT name, phone FROM addrbook";
\$result = \$conn->query(\$sql);
if (\$result->num_rows > 0) {
    while (\$row = \$result->fetch_assoc()) {
        echo "Name: " . \$row["name"] . " - Phone: " . \$row["phone"] . "<br>";
    }
} else {
    echo "0 records found";
}
\$conn->close();
?>
</body>
</html>
EOF
```
8. Create the Load Balancer, bind the instance group, then browse to the external IP: http://35.234.3.100/index.php

---
## Week10 11/12
### Unmanaged Load Balancer w/ private IP
Create 2 VMs in vpc1, put them into an instance group, and bind it to the load balancer

Then set the firewall rules in vpc1: rule 1, tagged `http2`, range `10.0.0.0/8` [the default] (the same as the proxy-only IP range [note]), allowing TCP:80; rule 2, tagged `hc`, ranges `35.191.0.0/16 and 130.211.0.0/22` (fixed, well-known ranges), allowing TCP:80
> Check my proxy-only IP range; it is set when the LB is created
Disable the VMs' external IPs, remove all the firewall rules, and instead tag the VMs with the http2 and hc tags just created
The LB can then reach them over internal IP
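A sketch of rule 2 as a CLI command (the network name is an assumption; the ranges are Google's documented health-check sources):
```sh
gcloud compute firewall-rules create hc \
  --network=myvpc --direction=INGRESS --action=ALLOW \
  --rules=tcp:80 --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=hc
```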

---
### Managed Load Balancer
> Compared with unmanaged, adds auto-scaling
1. Create the instance template
> Essentially the same as creating an instance
2. Create the instance group
>Location
>>Single zone
- [x] Multiple zones: VMs can be spread across zones for higher reliability
The minimum and maximum machine counts and the scale-out conditions can be configured

Set up the health check (different from the LB's health check: this one goes beyond "the machine exists" to **ensure the service is actually being served**)
Test
Drive up CPU load with `cat /dev/zero > /dev/null`
`htop` 

When CPU load stays high (>60%), it scales up automatically

When the command stops and CPU load falls (<60%), it scales down automatically
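The same autoscaling policy as a CLI sketch (group name, zone, and replica bounds are assumptions; 60% matches the threshold above):
```sh
gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone=asia-east1-b \
  --min-num-replicas=1 --max-num-replicas=3 \
  --target-cpu-utilization=0.6
```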

Check the load in the GCP web console

## Week11 11/19
### Cloud Router
When a VM has no External IP (for security) but still needs to reach the Internet (e.g. for software updates), it can go out through Network Address Translation (NAT); the Internet cannot reach the VM back through the NAT
1. Create a VM with no public IP
> 
> default VPC
> asia-east1
2. Create the Cloud NAT
> 
> asia-east1
The VM can then ping the outside world directly
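A CLI sketch of those two steps (router and NAT config names are assumptions):
```sh
# A Cloud Router in the VM's region, then a NAT config covering all subnets
gcloud compute routers create nat-router --network=default --region=asia-east1
gcloud compute routers nats create nat-config --router=nat-router --region=asia-east1 \
  --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```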

---
### Monitoring & Alerting
Notify automatically when CPU, memory, hard disk, traffic load, etc. exceed a threshold
Create a VM (www server)
> asia-east1
> Allow HTTP traffic
> automation
```
#! /bin/bash
apt update
apt -y install apache2
cat <<EOF > /var/www/html/index.html
<html><body><p>Linux startup script added directly. $(hostname -I) </p></body></html>
EOF
```
Before any extra software is installed, memory, disk, etc. cannot be observed

Enable the APIs
https://console.cloud.google.com/apis/enableflow?apiid=compute.googleapis.com,monitoring.googleapis.com,logging.googleapis.com&project=mygcp-436602
Replace mygcp-436602 with the matching Project id
Click to install the Ops Agent; more metrics then become visible (but someone still has to look at them, so Alerting needs to be configured as well)
#### Alerting
Edit Notification channels

Create an Email notification channel

Configure the Ops Agent
On the VM, enter:
```
# Configures Ops Agent to collect telemetry from the app and restart Ops Agent.
set -e
# Create a back up of the existing file so existing configurations are not lost.
sudo cp /etc/google-cloud-ops-agent/config.yaml /etc/google-cloud-ops-agent/config.yaml.bak
# Configure the Ops Agent.
sudo tee /etc/google-cloud-ops-agent/config.yaml > /dev/null << EOF
metrics:
  receivers:
    apache:
      type: apache
  service:
    pipelines:
      apache:
        receivers:
          - apache
logging:
  receivers:
    apache_access:
      type: apache_access
    apache_error:
      type: apache_error
  service:
    pipelines:
      apache:
        receivers:
          - apache_access
          - apache_error
EOF
sudo service google-cloud-ops-agent restart
sleep 60
```
Monitor traffic load
Traffic test: `timeout 120 bash -c -- 'while true; do curl localhost; sleep $((RANDOM % 4)) ; done'`
Keeps curling localhost for 120 seconds, pausing a random 0-3 seconds between requests

Monitor CPU

E-mail alert


---
### How machine location matters
Keep using the VM created earlier (VM-tw)
Create a VM located in the US (VM-us)
> us-central1-b
> Allow HTTP traffic
> automation
```
#! /bin/bash
apt update
apt -y install apache2
cat <<EOF > /var/www/html/index.html
<html><body><p>Linux startup script added directly. $(hostname -I) </p></body></html>
EOF
```
Compare with `curl -o /dev/null -s -w 'Total: %{time_total}\n' http://IP`

---
Have the LB serve HTTPS in front of the www server (HTTP only)
Keep using VM-tw and add it to an unmanaged group
Create the LB

Set the frontend to HTTPS

Set up the certificate
First create a Domain Name at https://dynv6.com/

Create the LB first, then set the IP at https://dynv6.com/ (use the LB's IP)
Before the certificate takes effect you will see

It can take up to 24 hours

---
### Cloud CDN
Content Delivery Network (CDN)
Copies of the data are placed at endpoints in different regions; a request first asks the nearest endpoint, falls back to a neighbor if that one has no copy, and only goes to the distant origin server when none do
Use the VM-us from before
Add it to a new instance group and create an unmanaged group

The frontend uses HTTP
On the backend, tick the CDN box

Test
Use `curl -o /dev/null -s -w 'Total: %{time_total}\n' http://IP`
vm-us: 34.57.60.100
lb-cdn: 34.160.205.122
The CDN is faster

## Week12 11/26
### Managed LB without External IP
>The requirement is that the VMs have no external IP. Steps: Instance Template -> Instance Group (Managed) -> Load Balancer.
Hints: 1) In the instance template, you need to write some scripts to install the apache server 2) Firewall rules for the load balancer to check the health of the www servers.
Cloud NAT and the firewall rules must be created first
Cloud NAT lets the VMs install Apache2
> 
Set up the HealthCheck (port 80; mind the source IP range), HTTP, and SSH firewall rules
> 
To delete the instance groups, the autoscaling must be deleted (not merely turned off)
>
---
1. Instance Template
>Allow HTTP traffic


2. Instance Group
>
3. Test from one of the resulting (no external IP) VMs
>
4. LB
>
The LB can reach VMs that have no public IP
>
---
### Cloud Armor
> An advanced firewall, available only to paid accounts

Backend security policy -> LB

More information: [reference](https://mile.cloud/zh/resources/blog/ddos-protection-cloud-armor-ip-load-balancing_565), [reference 2](https://medium.com/@kellenjohn175/how-to-guides-gcp-security-%E4%BB%A5cloud-armor-%E5%BC%B7%E5%8C%96-beyondcorp-%E5%AE%89%E5%85%A8%E6%A8%A1%E5%9E%8B-c277d5e0cb15)
> Priced per rule entry, so entries can be merged with || or && (see the sketch below)
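A hedged sketch of one merged rule (the policy name and the expression are illustrative only):
```sh
gcloud compute security-policies create my-policy
# Two conditions folded into one billed rule entry with ||
gcloud compute security-policies rules create 1000 \
  --security-policy=my-policy \
  --expression="origin.region_code == 'XX' || origin.region_code == 'YY'" \
  --action=deny-403
```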
---
### Cross-region LB (a GCP specialty)
Scenario:
Instance Group1:
>VM-TW1
VM-TW2
Instance Group2:
>VM-US1
VM-US2
LB w/ Domain Name:
>Instance Group1
Instance Group2
Client -> example.com -> VM-TW1, VM-TW2
Client -> example.com/US -> VM-US1, VM-US2
---
Create 4 www servers, split between TW and US
> myvpc
```sh
#! /bin/bash
apt update
apt -y install apache2
cat <<EOF > /var/www/html/index.html
<html><body><p>Linux startup script added directly. $(hostname -I) vm-tw1</p></body></html>
EOF
```

Create the two Instance Groups
Create the LB and connect it to the 2 BackEnds (Instance Groups)




Make sure pages exist under the `/us/*` path (the path rule is sketched below)
>
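A sketch of the path rule on the LB's URL map (map and backend-service names are assumptions):
```sh
# Send /us/* to the US backend; everything else falls through to TW
gcloud compute url-maps add-path-matcher my-lb-map \
  --path-matcher-name=us-paths \
  --default-service=be-tw \
  --new-hosts='*' \
  --path-rules="/us/*=be-us"
```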

---
### Internal LB (using Cloud DNS, Cloud NAT)

## Week13 12/03
### Cloud Run functions
#### Default Example
Create it


Write the code in a GCP-supported language (Python 3.12); it is wrapped into a Docker image automatically before being run. If the language you want is not supported, you can build your own Docker service instead

Interact via POST; the Python result is returned
#### Example2 ([reference](https://towardsdatascience.com/machine-learning-model-as-a-serverless-endpoint-using-google-cloud-function-a5ad1080a59e), [code](https://github.com/saedhussain/gcp_serverless_ml))
Store Iris_model.pkl, the file produced by `/home/s111010558/test_iris/gcp_serverless_ml/Iris_model/create_iris_model.py`, in a Bucket

Copy the contents of the two files from `https://github.com/saedhussain/gcp_serverless_ml/tree/main/Iris_http_cloud_func` into the newly created Cloud Function

 The contents need to be adapted
 Deploy
---
#### Example3: event handling ([reference](https://medium.com/google-cloud/move-large-files-from-gcs-bucket-using-cloud-function-232852b10a4c))
Prepare two Buckets


[Event type details](https://cloud.google.com/functions/docs/calling/storage)

requirements.txt:
```
functions-framework==3.*
google-cloud-storage
google-cloud
```

```python
import functions_framework
from google.cloud import storage
from google.cloud.storage import Blob

# Triggered by a change in a storage bucket
@functions_framework.cloud_event
def hello_gcs(cloud_event):
    data = cloud_event.data
    event_id = cloud_event["id"]
    event_type = cloud_event["type"]
    bucket = data["bucket"]
    name = data["name"]
    metageneration = data["metageneration"]
    timeCreated = data["timeCreated"]
    updated = data["updated"]
    print("=" * 30)
    print(f"Event ID: {event_id}")
    print(f"Event type: {event_type}")
    print(f"Bucket: {bucket}")
    print(f"File: {name}")
    print(f"Metageneration: {metageneration}")
    print(f"Created: {timeCreated}")
    print(f"Updated: {updated}")
    print(f"Processing file: {name}.")
    storage_client = storage.Client(project='inspiring-bee-435202-a4')
    source_bucket = storage_client.get_bucket('bkt-gjlin-ori')
    destination_bucket = storage_client.get_bucket('bkt-gjlin-lar')
    blobs = list(source_bucket.list_blobs(prefix=''))
    print(blobs)
    for blob in blobs:
        if blob.size > 1000000 and blob.name == name:
            source_blob = source_bucket.blob(blob.name)
            new_blob = source_bucket.copy_blob(source_blob, destination_bucket, blob.name)
            blob.delete(if_generation_match=None)
            print(f'File moved from {source_blob} to {new_blob}')
        else:
            print("File size is below 1MB")
```
Upload two files to bkt-gjlin-ori, one <1 MB and one >1 MB; any file larger than 1 MB is automatically moved to bkt-gjlin-lar


---
#### Example4: creating a database using the CLI ([reference](https://sagadevan.medium.com/connecting-to-cloud-sql-with-cloud-functions-using-cli-c6bc1c47e5a7))
Create the instance (DB)
Cloud Shell
```shell
gcloud sql instances create mydb --database-version=MYSQL_5_7 --cpu=2 --memory=4GB --root-password=admin1234 --assign-ip --zone=us-central1-a --availability-type=zonal --no-backup
```

Create the database
```shell
gcloud sql databases create demo-db --instance=mydb
```

Access the DB (pwd: admin1234)
`gcloud sql connect mydb --user=root`

---
## Week14 12/10
### Cloud Function
Continuing from last week: connect to the DB from Cloud Shell and work on it (create the database)
Create the data
`Cloud Shell> mysql>`
```sql
use testdb;
CREATE TABLE info (
    id INT NOT NULL AUTO_INCREMENT,
    firstname VARCHAR(20),
    lastname VARCHAR(20),
    age VARCHAR(3),
    collegename VARCHAR(150),
    PRIMARY KEY (id)
);
```

#### Create a Cloud Function that handles a POST request and writes to the DB
~/cf_mysql/main.py
```python
import sqlalchemy

# connection name we noted earlier
connection_name = "projectID:region:name"  # ex: "mygcp-436602:us-central1:mydb"
# database settings; change these to match your setup
db_name = "testdb"
db_user = "root"
db_password = "admin1234"
driver_name = 'mysql+pymysql'
query_string = dict({"unix_socket": "/cloudsql/{}".format(connection_name)})

def writeToSql(request):
    # You can change this to match your personal details
    stmt = sqlalchemy.text("INSERT INTO info (firstname, lastname, age, collegename) values ('Sagadevan', 'Kounder', '21', 'XYZ College')")
    db = sqlalchemy.create_engine(
        sqlalchemy.engine.url.URL(
            drivername=driver_name,
            username=db_user,
            password=db_password,
            database=db_name,
            query=query_string,
        ),
        pool_size=5,
        max_overflow=2,
        pool_timeout=30,
        pool_recycle=1800
    )
    try:
        with db.connect() as conn:
            conn.execute(stmt)
            print("Insert successful")
    except Exception as e:
        print("Some exception occurred: " + str(e))
        return 'Error: {}'.format(str(e))
    return 'ok'
```
~/cf_mysql/requirements.txt
```python
SQLAlchemy==1.3.12
PyMySQL==0.9.3
```
#### Deploying the Cloud Function from Cloud Shell:
`gcloud functions deploy writeToSql --entry-point writeToSql --runtime python310 --trigger-http --allow-unauthenticated --no-gen2 --source .`
> The trailing `--source .` means the **current directory** is searched for `main.py` and the `writeToSql` function inside it
Before testing the Cloud Function, inspect the table: mysql> `select * from info;` is empty
Under Cloud Function > TESTING, copy the Trigger url

Fetching it (`curl`) invokes the Cloud Function
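A sketch of that test (the URL shape shown is the usual gen1 form; substitute the copied trigger URL):
```sh
# Invoke the function, then re-run `select * from info;` to see the new row
curl https://us-central1-PROJECT_ID.cloudfunctions.net/writeToSql
```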

---
### Google App Engine, GAE (the original serverless service)
Compute resources offered by GCP:
>Compute Engine
Cloud Functions
App Engine
Cloud Run
Kubernetes Engine
Cloud Shell>
```sh
cd
mkdir -p test-gae
cd test-gae
mkdir -p test-flask
cd test-flask
touch app.yaml main.py requirements.txt
```
Fill in each file's contents
app.yaml
```yaml
runtime: python39
service: default
```
main.py
```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World! 2024/7/22"

if __name__ == "__main__":
    app.run(debug=True)
```
requirements.txt
```python
flask
```
---
Deploy from Cloud Shell in `~/test-gae/test-flask/`: `gcloud app deploy`

Access the service by opening `https://myflask-dot-mygcp-436602.de.r.appspot.com` in a browser

---
#### Deploying the IRIS Prediction to GAE
The code below has not been tested yet
Cloud Shell>
```sh
cd
mkdir -p test-gae
cd test-gae
mkdir -p test-iris
cd test-iris
touch app.yaml client.py main.py requirements.txt train_model.py
pip install -r requirements.txt
python train_model.py # produces model.pkl
python main.py # test locally to make sure the code works
python client.py # test locally to make sure the code works
```
app.yaml
```yaml
runtime: python312
service: iris-predict
```
client.py
```python
import requests

# Change the feature values you want to test
url = 'http://127.0.0.1:8080/api'  # local test before deploying
# url = 'https://iris-predict-dot-mygcp-436602.de.r.appspot.com/api'  # switch to this once the local test passes
feature = [[5.8, 4.0, 1.2, 0.2]]
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}
r = requests.post(url, json={'feature': feature})
print(labels[r.json()])
```
main.py
```python
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
# Load the model
model = pickle.load(open('model.pkl', 'rb'))
labels = {
    0: "versicolor",
    1: "setosa",
    2: "virginica"
}

@app.route("/", methods=["GET"])
def index():
    """Basic HTML response."""
    body = (
        "<html>"
        "<body style='padding: 10px;'>"
        "<h1>Welcome to my Flask API</h1>"
        "</body>"
        "</html>"
    )
    return body

@app.route('/api', methods=['POST'])
def predict():
    # Get the data from the POST request.
    data = request.get_json(force=True)
    predict = model.predict(data['feature'])
    return jsonify(predict[0].tolist())

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=8080)
```
requirements.txt
```python
scikit-learn
flask
```
Build the model
train_model.py
```python
import pickle
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn import tree

# simple demo for training and saving a model
iris = datasets.load_iris()
x = iris.data
y = iris.target
# labels for the iris dataset
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.25)
classifier = tree.DecisionTreeClassifier()
classifier.fit(x_train, y_train)
predictions = classifier.predict(x_test)
# export the model
model_name = 'model.pkl'
print("finished training and dump the model as {0}".format(model_name))
pickle.dump(classifier, open(model_name, 'wb'))
```
Deploy from Cloud Shell in `~/test-gae/test-iris/`: `gcloud app deploy`
Access the service by pointing client.py at `https://iris-predict-dot-mygcp-436602.de.r.appspot.com/api` (the app only defines the `/` and `/api` routes)
### Cloud Run
#### Docker refresher
`sudo docker images` checks how many images you have
`sudo docker run hello-world:latest` downloads the hello-world image (latest tag) from DockerHub and runs it.
`sudo docker ps -a` shows all containers, including exited ones
---
##### apache2 server
sudo docker pull [ubuntu/apache2](https://hub.docker.com/r/ubuntu/apache2)
sudo docker run -d -p 8080:80 ubuntu/apache2
> `-d` is for daemon: run in the background
---
<!--
##### Docker File
Cloud Shell>
`mkdir -p ~/test-dockerfile`
`cd ~/test-dockerfile`
`vim Dockerfile`
```sh
FROM centos:centos7
RUN yum update && yum -y install httpd
EXPOSE 80
ADD index.html /var/www/html
CMD ['apachectl', '-DFOREGROUND']
```
`echo "hi test" > index.html`
`docker build -t mywww:1.0 .`
-->
## Week15 12/17
Prerequisites:
```
s719113@cloudshell:~/test-dockerfile (sinuous-city-444915-u1)$ ls
Dockerfile index.html
s719113@cloudshell:~/test-dockerfile (sinuous-city-444915-u1)$ cat Dockerfile
FROM ubuntu/apache2:latest
ADD index.html /var/www/html
s719113@cloudshell:~/test-dockerfile (sinuous-city-444915-u1)$ cat index.html
hi test
```
Build the service locally with Docker
`docker build -t mywww:1.0 .`
> 
`docker run -d -p 8080:80 mywww:1.0`
> 
> 
### Artifact Registry

Create it with the default settings (named mydocker)
`docker build -t asia-east1-docker.pkg.dev/sinuous-city-444915-u1/mydocker/mywww:1.0 .`
`docker push asia-east1-docker.pkg.dev/sinuous-city-444915-u1/mydocker/mywww:1.0`
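If the push is rejected for auth reasons, Docker usually needs to be wired to the registry host first (a one-time setup):
```sh
# Let docker push authenticate against Artifact Registry in asia-east1
gcloud auth configure-docker asia-east1-docker.pkg.dev
```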
> 
Deploy to the cloud

Choose Allow unauthenticated invocations, Container port: 80. Leave everything else at the defaults
> 

#### Example: IRIS
docker build -t myiris:1.0 .
docker run -d -p 8080:8080 myiris:1.0
docker build -t asia-east1-docker.pkg.dev/sinuous-city-444915-u1/test-iris/myiris:1.0 .

---
Prerequisites
s719113@cloudshell:~/test-iris-docker (sinuous-city-444915-u1)$ ls
client2.py client.py Dockerfile main.py model.pkl requirements.txt train_model.py
#### train_model.py
```python
# -*- coding: utf-8 -*-
import pickle
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn import tree

# simple demo for training and saving a model
iris = datasets.load_iris()
x = iris.data
y = iris.target
# labels for the iris dataset
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.25)
classifier = tree.DecisionTreeClassifier()
classifier.fit(x_train, y_train)
predictions = classifier.predict(x_test)
# export the model
model_name = 'model.pkl'
print("finished training and dump the model as {0}".format(model_name))
pickle.dump(classifier, open(model_name, 'wb'))
```
First run `python train_model.py` to produce the model
#### main.py
``` python
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
# Load the model
model = pickle.load(open('model.pkl', 'rb'))
labels = {
    0: "versicolor",
    1: "setosa",
    2: "virginica"
}

@app.route("/", methods=["GET"])
def index():
    """Basic HTML response."""
    body = (
        "<html>"
        "<body style='padding: 10px;'>"
        "<h1>Welcome to my Flask API</h1>"
        "</body>"
        "</html>"
    )
    return body

@app.route('/api', methods=['POST'])
def predict():
    # Get the data from the POST request.
    data = request.get_json(force=True)
    predict = model.predict(data['feature'])
    return jsonify(predict[0].tolist())

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=8080)
```
client.py
```python
# -*- coding: utf-8 -*-
import requests

# Change the feature values you want to test
url = 'http://127.0.0.1:8080/api'
feature = [[5.8, 4.0, 1.2, 0.2]]
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}
r = requests.post(url, json={'feature': feature})
print(labels[r.json()])
```
Run main.py and client.py locally to test
Once the tests pass, deploy to the cloud
First create an Artifact Registry named test-iris
Dockerfile
```
FROM python:3.9
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 8080
CMD ["python", "main.py"]
requirements.txt
```
scikit-learn
flask
```
`docker build -t asia-east1-docker.pkg.dev/your_project_id/test-iris/myiris:1.0 .`
`docker push asia-east1-docker.pkg.dev/your_project_id/test-iris/myiris:1.0`
Then deploy to the cloud, using port 8080
Finally, test against the URL produced by the deployment (used in client2.py)
client2.py
```python
# -*- coding: utf-8 -*-
import requests

# Change the feature values you want to test
url = 'https://myiris-690297497796.asia-east1.run.app/api'
feature = [[5.8, 4.0, 1.2, 0.2]]
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}
r = requests.post(url, json={'feature': feature})
print(labels[r.json()])
```

---
### Terraform
[More information](https://devops-with-alex.com/day-4-terraform-install/)

Install it in Cloud Shell (convenient for testing)
>s719113@cloudshell:~ (sinuous-city-444915-u1)$ terraform -v
Terraform v1.10.2
on linux_amd64
Create a Service account


Create a key

Upload the key
#### Creating a Bucket
terraform apply (the full workflow is sketched below)
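The usual workflow around that command, run in the directory holding the .tf files:
```sh
terraform init    # download the google provider
terraform plan    # preview what will be created
terraform apply   # create the resources
```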

#### Creating a VM (Compute Engine)
main.tf
```
resource "google_compute_instance" "example" {
name = "example-instance"
machine_type = "e2-micro"
zone = "asia-east1-b"
boot_disk {
initialize_params {
image = "projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240726"
}
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
}
```
provider.tf
```
##################################################################################
# CONFIGURATION
##################################################################################
terraform {
  # minimum terraform version
  required_version = ">=1.0"

  required_providers {
    # minimum provider version
    google = {
      source  = "hashicorp/google"
      version = ">= 4.40.0"
    }
  }
}

##################################################################################
# PROVIDERS
##################################################################################
provider "google" {
  # your project name
  credentials = file("mySA.json")
  project     = "sinuous-city-444915-u1"
}
```

#### Getting information back
Return the IP to the local machine
main.tf
```
resource "google_compute_instance" "example" {
name = "example-instance"
machine_type = "e2-micro"
zone = "asia-east1-b"
boot_disk {
initialize_params {
image = "projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240726"
}
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
# 成功案例,執行電腦本機路徑
provisioner "local-exec" {
command = "echo ${google_compute_instance.example.network_interface[0].network_ip} > ./ip_address_local_exec.txt"
}
# # 失敗案例,傳送到虛擬電腦本機
# provisioner "file" {
# content = google_compute_instance.example.network_interface[0].network_ip
# destination = "/tmp/ip_address_file.txt"
# }
# # 失敗案例,無法連線到遠端
# provisioner "remote-exec" {
# inline = [
# "echo ${google_compute_instance.example.network_interface[0].network_ip} > /tmp/ip_address_remote_exec.txt"
# ]
# }
}
```


#### Using variables
A more flexible style
main.tf
```
locals {
  allow_ips = ["0.0.0.0/0", ]
}

resource "google_sql_database_instance" "instance" {
  name                = var.db_name
  database_version    = "MYSQL_5_7"
  deletion_protection = false

  settings {
    tier      = "db-f1-micro" # basic hardware tier
    disk_size = "10"

    ip_configuration {
      dynamic "authorized_networks" {
        for_each = local.allow_ips
        iterator = allow_ips
        content {
          name  = "allow-${allow_ips.key}"
          value = allow_ips.value
        }
      }
    }
  }
}

resource "google_sql_database" "this" {
  name     = var.db_name
  instance = google_sql_database_instance.instance.name
}

resource "google_sql_user" "users" {
  name     = "root"
  instance = google_sql_database_instance.instance.name
  password = "12345678"
}

output "db_ip" {
  value = google_sql_database_instance.instance.public_ip_address
}
```
variables.tf
```
variable "GCP_PROJECT" {
description = "GCP Project ID"
type = string
credentials = file("mySA.json")
project = "mygcp-436602"
}
variable "GCP_REGION" {
type = string
default = "asia-east1"
}
variable "db_name" {
type = string
default = "mydb2"
}
```
provider.tf
```
##################################################################################
# CONFIGURATION
##################################################################################
terraform {
  required_version = ">=1.0"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 4.40.0"
    }
  }
}

provider "google" {
  credentials = file("mySA.json")
  project     = var.GCP_PROJECT
  region      = var.GCP_REGION
  # zone      = var.zone
}
```