# 2024-12-17
## Docker
* Start by writing a Dockerfile
**Dockerfile**
```dockerfile=
# Ubuntu-maintained Apache image from Docker Hub
FROM ubuntu/apache2:latest
# Put the page into Apache's document root
ADD index.html /var/www/html
```
**index.html**
```
hihi 1217
```
**Build the image**
`docker build -t mywww:1.0 .`
* Run it and test
```bash
$ docker run -d -p 8080:80 mywww:1.0
9ee75ea0866cd2bd4bdab83e18bd479a0a6546f499584a9f872dc5d5c3de1142
$ docker ps -a
CONTAINER ID   IMAGE       COMMAND                CREATED         STATUS         PORTS                  NAMES
9ee75ea0866c   mywww:1.0   "apache2-foreground"   4 seconds ago   Up 3 seconds   0.0.0.0:8080->80/tcp   flamboyant_driscoll
$ curl 127.0.0.1:8080
hihi 1217
```
* Create a repository (Artifact Registry)
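The repository can be created in the Cloud console; a CLI sketch, assuming a repository named `mydocker` in `asia-east1` to match the image path used below:
```bash
# Create a Docker-format Artifact Registry repository named "mydocker"
gcloud artifacts repositories create mydocker \
    --repository-format=docker \
    --location=asia-east1

# Let the local docker client authenticate to the registry before pushing
gcloud auth configure-docker asia-east1-docker.pkg.dev
```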

* Build (tag) the image with the repository path
`docker build -t asia-east1-docker.pkg.dev/lofty-entropy-436602-n8/mydocker/mywww:1.0 .`

* Push it into the repository
`docker push asia-east1-docker.pkg.dev/lofty-entropy-436602-n8/mydocker/mywww:1.0`
* Check the upload result
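One way to check from the CLI (the Artifact Registry page in the console shows the same thing):
```bash
gcloud artifacts docker images list asia-east1-docker.pkg.dev/lofty-entropy-436602-n8/mydocker
```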

* Deploy (Cloud Run)

**Remember to change the container port to 80**
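A deployment sketch from the CLI, assuming a Cloud Run service named `mywww` (the console works the same way; the key point is setting the container port to 80, since the Apache image listens on 80 rather than Cloud Run's default 8080):
```bash
gcloud run deploy mywww \
    --image=asia-east1-docker.pkg.dev/lofty-entropy-436602-n8/mydocker/mywww:1.0 \
    --region=asia-east1 \
    --port=80 \
    --allow-unauthenticated   # assumption: the service should be publicly reachable
```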

### IRIS
* Create the folder and files
```
$ touch requirements.txt client.py client2.py Dockerfile main.py train_model.py
```
* client2.py
```python=
# -*- coding: utf-8 -*-
import requests

# Change the feature values to test different samples
url = 'http://127.0.0.1:8080/api'
feature = [[5.8, 4.0, 1.2, 0.2]]
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}
r = requests.post(url, json={'feature': feature})
print(labels[r.json()])
```
* Dockerfile
```dockerfile=
FROM python:3.9
WORKDIR /app
# Copy the app code and the trained model.pkl into the image
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 8080
CMD ["python", "main.py"]
```
* main.py
```python=
import pickle

from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the model
model = pickle.load(open('model.pkl', 'rb'))
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}


@app.route("/", methods=["GET"])
def index():
    """Basic HTML response."""
    body = (
        "<html>"
        "<body style='padding: 10px;'>"
        "<h1>Welcome to my Flask API</h1>"
        "</body>"
        "</html>"
    )
    return body


@app.route('/api', methods=['POST'])
def predict():
    # Get the data from the POST request.
    data = request.get_json(force=True)
    prediction = model.predict(data['feature'])
    return jsonify(prediction[0].tolist())


if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=8080)
```
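For reference, the `/api` endpoint can also be exercised with curl while main.py is running locally; the response is the raw class index that client2.py then maps to a label:
```bash
curl -X POST http://127.0.0.1:8080/api \
    -H "Content-Type: application/json" \
    -d '{"feature": [[5.8, 4.0, 1.2, 0.2]]}'
# Expected response: 0  (which client2.py maps to "setosa")
```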
* requirements.txt
```
scikit-learn
flask
```
* train_model.py
```python=
# -*- coding: utf-8 -*-
import pickle

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn import tree

# Simple demo for training and saving a model
iris = datasets.load_iris()
x = iris.data
y = iris.target

# Labels for the iris dataset
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.25)
classifier = tree.DecisionTreeClassifier()
classifier.fit(x_train, y_train)
predictions = classifier.predict(x_test)

# Export the model
model_name = 'model.pkl'
print("finished training and dump the model as {0}".format(model_name))
pickle.dump(classifier, open(model_name, 'wb'))
```
* Test before uploading (run main.py in one terminal, the client in another)
```bash=
$ python train_model.py   # produces model.pkl first
$ python main.py          # start the API and keep it running
$ python client2.py       # expects: setosa
```
* Build the image
`docker build -t myiris:1.0 .`
* Test again, this time against the container
```bash=
$ docker run -d -p 8080:8080 myiris:1.0
$ python client2.py
```
* Upload (push to Artifact Registry)
```bash=
$ docker build -t asia-east1-docker.pkg.dev/lofty-entropy-436602-n8/mydocker/myiris:1.0 .
$ docker push asia-east1-docker.pkg.dev/lofty-entropy-436602-n8/mydocker/myiris:1.0
```
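The URL in client.py below implies a Cloud Run service named `myiris` in `asia-east1`; a deployment sketch from the CLI (the console works too; here the container port stays 8080):
```bash
gcloud run deploy myiris \
    --image=asia-east1-docker.pkg.dev/lofty-entropy-436602-n8/mydocker/myiris:1.0 \
    --region=asia-east1 \
    --port=8080 \
    --allow-unauthenticated   # assumption: the API should be publicly reachable
```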
* client.py
```python=
# -*- coding: utf-8 -*-
import requests

# Change the feature values to test different samples
url = 'https://myiris-43249429618.asia-east1.run.app/api'  # URL of the Cloud Run service after deployment
feature = [[5.8, 4.0, 1.2, 0.2]]
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}
r = requests.post(url, json={'feature': feature})
print(labels[r.json()])
```
* Finally, running client.py against the deployed service returns the same result

## Terraform
* Initialize: init
* Format the files: fmt
* Validate the configuration: validate
* Dry run: plan
* Apply: apply
* Tear down: destroy

Reference: https://devops-with-alex.com/day-4-terraform-install/
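A typical run of these commands in the working directory:
```bash
terraform init       # download providers, initialize the working directory
terraform fmt        # format the .tf files
terraform validate   # check that the configuration is valid
terraform plan       # dry run: show what would be created/changed
terraform apply      # actually create/update the resources
terraform destroy    # tear everything down
```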
* Install
[terraform install](https://devops-with-alex.com/day-4-terraform-install/#:~:text=Linux%20%E5%A5%97%E4%BB%B6%E5%AE%89%E8%A3%9D%20(Ubuntu%20/%20Debian%C2%A0%E7%AF%84%E4%BE%8B)%EF%BC%9A)
* Permissions (service account key for Terraform)
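The provider blocks below read credentials from a service account key file (mySA.json / key.json). A sketch for producing such a key, assuming a service account named `terraform-sa` and the broad `roles/editor` role as an example (a narrower role set is preferable in practice):
```bash
gcloud iam service-accounts create terraform-sa

gcloud projects add-iam-policy-binding lofty-entropy-436602-n8 \
    --member="serviceAccount:terraform-sa@lofty-entropy-436602-n8.iam.gserviceaccount.com" \
    --role="roles/editor"   # example role; scope down as needed

gcloud iam service-accounts keys create key.json \
    --iam-account=terraform-sa@lofty-entropy-436602-n8.iam.gserviceaccount.com
```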




### Example 1 - Create a bucket
* main.tf
```tf=
provider "google" {
credentials = "${file("mySA.json")}"
project = "lofty-entropy-436602-n8"
region = "asia-east1"
}
resource "google_storage_bucket" "quick-start-gcs" {
name = "lsx-gcs-bucket"
location = "asia-east1"
force_destroy = true
```
* validate, plan


* apply
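After apply, one possible way to confirm the bucket exists from the CLI:
```bash
terraform apply
# The bucket defined above should now exist
gsutil ls -b gs://lsx-gcs-bucket
```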

### Create a VM
* main.tf
```tf=
resource "google_compute_instance" "example2" {
name = "example2-instance"
machine_type = "e2-micro"
zone = "asia-east1-b"
boot_disk {
initialize_params {
image = "projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240726"
}
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
}
```
* provider.tf
```tf=
##################################################################################
# CONFIGURATION
##################################################################################
terraform {
  # Minimum Terraform version
  required_version = ">=1.0"

  required_providers {
    # Minimum provider version
    google = {
      source  = "hashicorp/google"
      version = ">= 4.40.0"
    }
  }
}

##################################################################################
# PROVIDERS
##################################################################################
provider "google" {
  # Your project
  credentials = file("key.json")
  project     = "lofty-entropy-436602-n8"
}
```
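After apply, one way to confirm the instance was created:
```bash
terraform apply
gcloud compute instances list --zones=asia-east1-b
```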

### Create a VM and return its IP
* provider.tf is the same as in the previous example
* main.tf
```tf=
resource "google_compute_instance" "example" {
name = "example-instance"
machine_type = "e2-micro"
zone = "asia-east1-b"
boot_disk {
initialize_params {
image = "projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240726"
}
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
# 成功案例,執行電腦本機路徑
provisioner "local-exec" {
command = "echo ${google_compute_instance.example.network_interface[0].network_ip} > ./ip_address_local_exec.txt"
}
# # 失敗案例,傳送到虛擬電腦本機
# provisioner "file" {
# content = google_compute_instance.example.network_interface[0].network_ip
# destination = "/tmp/ip_address_file.txt"
# }
# # 失敗案例,無法連線到遠端
# provisioner "remote-exec" {
# inline = [
# "echo ${google_compute_instance.example.network_interface[0].network_ip} > /tmp/ip_address_remote_exec.txt"
# ]
# }
}
```

### Parameterization (create a database)
**To change what gets deployed, just edit the corresponding values in variables.tf (an override example is shown after the variable file below)**
* main.tf
```tf=
locals {
  allow_ips = ["0.0.0.0/0", ]
}

resource "google_sql_database_instance" "instance" {
  name                = var.db_name
  database_version    = "MYSQL_5_7"
  deletion_protection = false

  settings {
    tier      = "db-f1-micro" # smallest machine tier
    disk_size = "10"

    ip_configuration {
      dynamic "authorized_networks" {
        for_each = local.allow_ips
        iterator = allow_ips
        content {
          name  = "allow-${allow_ips.key}"
          value = allow_ips.value
        }
      }
    }
  }
}

resource "google_sql_database" "this" {
  name     = var.db_name
  instance = google_sql_database_instance.instance.name
}

resource "google_sql_user" "users" {
  name     = "root"
  instance = google_sql_database_instance.instance.name
  password = "12345678"
}

output "db_ip" {
  value = google_sql_database_instance.instance.public_ip_address
}
```
* provider.tf
```tf=
##################################################################################
# CONFIGURATION
##################################################################################
terraform {
  required_version = ">=1.0"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 4.40.0"
    }
  }
}

provider "google" {
  credentials = file("key.json") # key
  project     = var.GCP_PROJECT
  region      = var.GCP_REGION
}
```
* variables.tf
```tf=
variable "GCP_PROJECT" {
description = "GCP Project ID"
type = string
default = "lofty-entropy-436602-n8"
}
variable "GCP_REGION" {
type = string
default = "asia-east1"
}
variable "db_name" {
type = string
default = "mydb2"
}
```
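Variables can also be overridden at apply time instead of editing variables.tf; a quick sketch (the database name used here is just an illustration):
```bash
terraform apply -var="db_name=mydb3"   # hypothetical override value
# The database's public IP is exposed through the "db_ip" output and can be re-read with:
terraform output db_ip
```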
