# 12/11 Progress Log

## 簡筠方

Combined XGBoost, Random Forest, and LightGBM, integrating them with stacking:

1. Run a grid of hyperparameter combinations for each model
2. Pick the combination with the highest F1 score
3. Retrain once with those settings and combine the models with a stacking ensemble

![image](https://hackmd.io/_uploads/ryUQJlwE1g.png)
![image](https://hackmd.io/_uploads/SJ8VJlDVkl.png)
![image](https://hackmd.io/_uploads/HyEHkew4yl.png)
![image](https://hackmd.io/_uploads/H1ASklPV1l.png)

## 游婷安

Used GridSearchCV to tune the RidgeClassifier and logistic regression hyperparameters.

RidgeClassifier (parameters varied: alpha, tol):

|         | Version 1 (initial) | Version 2 | Best version |
| ------- | ------------------- | --------- | ------------ |
| Alpha   | 1                   | 5         | 7            |
| Mean F1 | 0.705907            | 0.710186  | 0.711025     |

* Conclusion: alpha=7 performs best; tol has almost no effect (its impact varies with the solver).

Initial:
![image](https://hackmd.io/_uploads/By_QvmD41l.png =60%x)
Best:
![image](https://hackmd.io/_uploads/BJODP7PEke.png =60%x)

Logistic regression (parameters varied: penalty, C, solver, max_iter):

|            | Version 1 | Version 2 | Version 3 | Version 4 | Best version |
| ---------- | --------- | --------- | --------- | --------- | ------------ |
| L1 penalty | o         | o         | x         | x         | o            |
| L2 penalty | x         | x         | o         | o         | x            |
| C          | 0.3       | 0.5       | 2         | 1         | 1            |
| Mean F1    | 0.672381  | 0.711371  | 0.717254  | 0.719187  | 0.724945     |

* Conclusion: the L1 penalty works better; each penalty has solvers that suit it better (so the solver comparison carries little information); a larger max_iter helps, but beyond 500 it makes essentially no difference.

Initial:
![image](https://hackmd.io/_uploads/HJTd3mD41g.png =60%x)
Best:
![image](https://hackmd.io/_uploads/HyO927PNJe.png =60%x)

Difficulties:
1. Not sure why the F1 scores computed by GridSearchCV are all lower than those from classification_report, so the "best" LogisticRegression parameters it finds do not give the best F1 when re-measured with classification_report. (A likely cause: GridSearchCV reports the mean F1 over cross-validation folds, while classification_report scores a single data split, so the two numbers are not directly comparable.)
2. Ran out of time, and the server later became unavailable.

## lala

* Adjusted the number of training iterations
* Trying to replace the TensorFlow functions (currently stuck)

## 廖奕皓

CNN:
1. Preprocess the dataset with PyTorch (TF-IDF)
2. Set up the text Dataset and DataLoader
3. Build the model (initialization and tensor dimensions)
4. Apply the loss function and optimizer
5. Predict and compute the accuracy

Issues:
1. Could not get into the server, so the datasets (train and test) are unavailable; the code will be revised after testing.
2.
I don't yet understand `forward()`.

```python
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer

df = pd.read_csv("train.csv")
texts = df["text"].astype(str).values
labels = df["target"].values

# Split the dataset
X_train, X_val, y_train, y_val = train_test_split(texts, labels, test_size=0.2, random_state=42)

# Vectorize the text (bag-of-words)
vectorizer = CountVectorizer(max_features=10000)  # keep at most 10,000 features
# (to switch to TF-IDF, use TfidfVectorizer instead)
X_train_vec = vectorizer.fit_transform(X_train).toarray()
X_val_vec = vectorizer.transform(X_val).toarray()

# The CNN needs a Dataset wrapper first
class TextDataset(Dataset):
    def __init__(self, texts, labels):
        self.texts = torch.tensor(texts, dtype=torch.float32)  # convert to PyTorch tensors
        self.labels = torch.tensor(labels, dtype=torch.long)

    # Dataset length, which determines how many samples are iterated over
    def __len__(self):
        return len(self.texts)

    # Return the idx-th text/label pair
    def __getitem__(self, idx):
        return self.texts[idx], self.labels[idx]

# Create the DataLoaders (they batch and feed the Datasets)
train_dataset = TextDataset(X_train_vec, y_train)
val_dataset = TextDataset(X_val_vec, y_val)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)  # batch_size: samples per step; shuffle avoids order bias
val_loader = DataLoader(val_dataset, batch_size=32, shuffle=False)

import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    # Model initialization
    def __init__(self, input_dim):
        super(TextCNN, self).__init__()
        self.conv1 = nn.Conv1d(in_channels=1, out_channels=128, kernel_size=5)
        self.pool = nn.MaxPool1d(kernel_size=2)
        self.fc1 = nn.Linear(128 * ((input_dim - 5 + 1) // 2), 64)
        self.fc2 = nn.Linear(64, 2)  # binary-classification output

    # Forward pass
    def forward(self, x):
        x = x.unsqueeze(1)          # add a channel dimension
        x = F.relu(self.conv1(x))
        x = self.pool(x)
        x = x.view(x.size(0), -1)   # flatten
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Create the model
input_dim = X_train_vec.shape[1]
model = TextCNN(input_dim=input_dim)  # pass in the feature dimension

import torch.optim as optim

# Define the loss function and the optimizer (which adjusts the weights)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training loop (move everything to CUDA (GPU) when available)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

for epoch in range(5):  # train for 5 epochs
    model.train()
    total_loss = 0
    for texts, labels in train_loader:
        texts, labels = texts.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(texts)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(f"Epoch {epoch+1}, Loss: {total_loss:.4f}")

from sklearn.metrics import accuracy_score

model.eval()
all_preds, all_labels = [], []
with torch.no_grad():
    for texts, labels in val_loader:
        texts, labels = texts.to(device), labels.to(device)
        outputs = model(texts)
        preds = torch.argmax(outputs, dim=1).cpu().numpy()
        all_preds.extend(preds)
        all_labels.extend(labels.cpu().numpy())

# Compute accuracy
accuracy = accuracy_score(all_labels, all_preds)
print(f"Validation Accuracy: {accuracy:.2%}")

# Prepare the test data
test = pd.read_csv("test.csv")
test_texts = test["text"].astype(str).values
test_vec = vectorizer.transform(test_texts).toarray()
test_dataset = TextDataset(test_vec, labels=[0] * len(test_vec))  # dummy labels: the test set is unlabeled
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)

# Predict
model.eval()
test_preds = []
with torch.no_grad():
    for texts, _ in test_loader:
        texts = texts.to(device)
        outputs = model(texts)
        preds = torch.argmax(outputs, dim=1).cpu().numpy()
        test_preds.extend(preds)

# Save the results to CSV
test["predicted_target"] = test_preds
test[["id", "predicted_target"]].to_csv("predictions.csv", index=False)
```