# ADL final project
### Recommendation systems
#### Aim: Find a combination of 1. and 2. that achieves the best accuracy.
There are two broad types of Recommender systems:
1. Content-based systems: these systems try to match users with items based on the items' content (genre, color, etc.) and the users' profiles (likes, dislikes, demographic information, etc.). For example, YouTube might suggest cooking videos to me because I am a chef and/or because I have watched many baking videos in the past, thereby exploiting the information it has about a video's content and my profile.
2. Collaborative filtering: these systems rely on the assumption that similar users like similar items. Similarity measures between users and/or items are used to make recommendations.
### Recommended Recommendation repos
1. https://github.com/microsoft/recommenders
2. https://github.com/HarshdeepGupta/recommender_pytorch
3. https://github.com/SebastianRokholt/Hybrid-Recommender-System
### Used features
#### Users
user_id
gender
occupation_titles
interests
recreation_names
#### Courses
course_id
course_name
course_price
teacher_id
teacher_intro
groups
sub_groups
topics
course_published_at_local
description
will_learn
required_tools
recommended_background
target_group
### Data stats
**1. subgroups**
About 1.18% of users did not purchase any course.
Top 10 frequent subgroups
| Support | itemsets | name |
| -------- | -------- | - |
|0.235940 | (51) | 更多職場技能 |
|0.226724 | (59) | 職場溝通 |
|0.176565 | (7) | 求職 |
|0.171195 | (71) | 社會科學 |
|0.166808 | (59, 71) | 職場溝通,社會科學 |
|0.166113 | (3) | 平面設計 |
|0.158372 | (66) | 數位行銷 |
|0.154679 | (72) | 社群行銷 |
|0.152223 | (1) | 更多生活品味 |
|0.150257 | (50) | 個人品牌經營 |
Top frequent courses and course pairs (a sketch of how such support values can be computed follows this table)
| Support | itemsets | name |
| -------- | -------- | - |
|0.1437 | 5fc5ee1b08b74a6e3723abd2 | ҉唐鳳҉數位溝通社:就這樣把你增幅 |
|0.103 | 5f7c210b1de7982fb413a3e9 | Today at Apple:和設計師馮宇拆解商業 LOGO 案例 |
|0.0998 | 6030c9cd99e14cc2401e66b9 | 2021 驅動知識生態系論壇|Hahow 好學校 |
|0.0857 | 5f7c209762ad22756c7a1c74 | Today at Apple:和攝影師 Ada Lin 用 iPhone 學習專業商品攝影 |
|0.0734 | 60cb0a440dabda80019d5f7c | 遠距工作力:溝通協作到自我管理 |
|0.0693 | 5f7c212262ad2203e77a1cc9 | Today at Apple:和攝影師 Paddy 用 iPhone 拍出商業空間形象照 |
|0.0661 | 5ef099ab678184065fd4d426 | Seagate 講堂 平面設計師顏伯駿教你做履歷 |
|0.0521 | 5f7c212262ad2203e77a1cc9, 5f7c209762ad22756c7a1c74 | |
|0.0487 | 5f7c210b1de7982fb413a3e9, 5f7c209762ad22756c7a1c74 | |
|0.0477 | 6059aee039f2512548c187c6 | Notion 實戰課程:打造專屬數位工作術 |
|0.0412 | 5f7c210b1de7982fb413a3e9, 5f7c212262ad2203e77a1cc9 | |
|0.0408 | 5fc5ee1b08b74a6e3723abd2, 5f7c210b1de7982fb413a3e9 | |
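The support values above are frequent-itemset statistics. As a hedged sketch (the mlxtend-based pipeline below is an assumption about how such tables can be produced, not a confirmed description of our pipeline), starting from each user's list of purchased IDs:
```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori

# each transaction is one user's purchased subgroup (or course) IDs
transactions = [["51", "59"], ["59", "71"], ["51"]]  # toy example

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)

# itemsets whose support exceeds the threshold, sorted like the tables above
freq = apriori(onehot, min_support=0.01, use_colnames=True)
print(freq.sort_values("support", ascending=False).head(10))
```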
### Evaluation on validation set
```python
import argparse
from functools import partial
import ml_metrics.average_precision as ap
import pandas as pd
map_50 = partial(ap.mapk, k=50)
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--prediction", type=str)
    parser.add_argument("--truth", type=str)
    parser.add_argument("--col", type=str, default="subgroup", choices=["subgroup", "course_id"])
    args = parser.parse_args()

    pred = pd.read_csv(args.prediction)
    truth = pd.read_csv(args.truth)

    truth_items = []
    pred_items = []
    for item in pred[args.col]:
        pred_items.append(item.split(" "))
    for item in truth[args.col]:
        # an empty ground-truth cell is read by pandas as NaN (a float)
        if isinstance(item, float):
            truth_items.append([])
        else:
            truth_items.append(item.split(" "))

    score = map_50(actual=truth_items, predicted=pred_items)
    print(score)
```
```requirements
ml_metrics
pandas
```
### Data preprocessing
**1. User data**
**2. Missing value handling**
Taking the IDs in val_seen as an example, val_seen has 7748 rows of data:
gender missing: 297
occupation missing: 3548
interest missing: 3
recreation_names missing: 3185
No user is missing every feature, though some users are left with only one feature.
We need to think about how to handle the missing values (fill them all with the same value? KNN imputation? first drop rows that miss too many features?). A quick way to obtain these counts is sketched below.
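As a minimal sketch (assuming the user table is loaded as a pandas DataFrame named `users` with the columns listed under "Used features"; the path and placeholder token are illustrative):
```python
import pandas as pd

users = pd.read_csv("users.csv")  # hypothetical path to the user table

# count missing values per feature
print(users[["gender", "occupation_titles", "interests", "recreation_names"]]
      .isna().sum())

# simplest baseline: fill every missing value with one shared placeholder token
users_filled = users.fillna("unknown")
```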
### Models / Methodologies
1. Cosine similarities
By computing user-user cosine similarities, we can identify groups of similar users.
(a) Based on purchase records
user-item matrix (taking the training users as an example):

From this matrix we can directly calculate similarities among users. Notice that in practice the user-item matrix is sparse, and the resulting similarity matrix is sparse as well.

We also computed several other matrices, such as occupation similarities and preference similarities, but their recommendation results were not good.
For the recommendation itself, we use the similarity of purchase records to find the users most similar to each target user, accumulate the other courses they have purchased, and take the top 50 as the recommended courses. The drawback: many users have similar purchase records but bought only one course.
Result:
val = 0.0112
(b) Based on both occupation and purchase records
We therefore blend in an occupation matrix to further differentiate the similarity between users. The similarity score we use is:

$$\text{score} = 1 \times \text{purchase similarity} + 0.2 \times \text{occupation similarity}$$

The result: validation = 0.0130 (a sketch of the whole procedure follows).
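A minimal sketch of this neighborhood approach, assuming `purchase` and `occupation` are SciPy CSR matrices whose rows are aligned to the same user ordering (the function name and neighbor handling are illustrative, not the exact implementation):
```python
import numpy as np
import scipy.sparse as sp
from sklearn.metrics.pairwise import cosine_similarity

def recommend_top50(purchase: sp.csr_matrix, occupation: sp.csr_matrix) -> np.ndarray:
    # weighted user-user similarity: 1 * purchase + 0.2 * occupation
    sim = cosine_similarity(purchase, dense_output=False) \
        + 0.2 * cosine_similarity(occupation, dense_output=False)
    sim.setdiag(0)  # a user should not count as their own neighbor

    # accumulate similar users' purchases, weighted by similarity
    scores = np.asarray((sim @ purchase).todense())
    return np.argsort(-scores, axis=1)[:, :50]  # top-50 course indices per user
```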
2. Collaborative filtering
(a) Algorithms: ALS / Bayesian Personalized Ranking
Both implementations come from https://github.com/benfred/implicit
(b) Preference matrix: purchase records / content similarity
In addition to the raw purchase records, we also computed the probability that each course appears in the purchase records and used it to weight the matrix; the weighted output looks as follows (a training sketch follows the dump).
```text
pairwise sparse output:
(0, 48) 0.005942670706129112
(0, 593) 0.07399207643905809
(1, 560) 0.6945933348870229
(2, 74) 0.013633185737590274
(2, 581) 0.03379165695642026
(2, 589) 0.0749242600792352
(3, 416) 0.12363085527848913
(4, 360) 0.007107900256350507
(5, 424) 0.23386157072944522
(5, 425) 0.46038219529245955
(5, 426) 0.2555348403635643
(5, 533) 1.0000000000001572
(5, 560) 0.6945933348870229
(6, 425) 0.46038219529245955
(6, 462) 0.06804940573292902
(7, 246) 0.028897692845490434
(7, 533) 1.0000000000001572
(8, 606) 0.009438359356793284
(9, 652) 0.11745513866231579
(10, 424) 0.23386157072944522
(10, 502) 0.482288510836619
(10, 533) 1.0000000000001572
(10, 624) 0.1747844325332139
(11, 314) 0.12106735026800208
(11, 372) 0.01829410393847582
: :
(59719, 630) 0.015497553017944492
(59720, 657) 0.012584479142391027
(59721, 644) 0.1769983686786348
(59722, 595) 0.04218130971801424
(59723, 638) 0.12537869960382125
(59724, 87) 0.018760195758564375
(59724, 580) 0.06024236774644572
(59725, 613) 0.12712654392915354
(59725, 638) 0.12537869960382125
(59726, 409) 0.013749708692612413
(59727, 599) 0.3323234677231445
(59728, 613) 0.12712654392915354
(59729, 652) 0.11745513866231579
(59730, 353) 0.01829410393847582
(59731, 632) 0.07422512234910236
(59732, 112) 0.053367513400139545
(59732, 531) 0.1789792589140114
(59732, 583) 0.032509904451176734
(59733, 426) 0.2555348403635643
(59733, 599) 0.3323234677231445
(59734, 661) 0.006292239571195531
(59735, 500) 0.596364483803325
(59735, 501) 0.7170822652063037
(59735, 502) 0.482288510836619
(59736, 589) 0.0749242600792352
```
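As a minimal sketch of the factorization step, assume the weighted preference matrix printed above is available as a SciPy CSR matrix `user_items` (users × courses). The path and hyperparameters are illustrative, and the calls follow the implicit >= 0.5 API:
```python
import scipy.sparse as sp
from implicit.als import AlternatingLeastSquares
from implicit.bpr import BayesianPersonalizedRanking

user_items = sp.load_npz("preference.npz")  # hypothetical path to the matrix above

model = AlternatingLeastSquares(factors=64, regularization=0.01, iterations=15)
# model = BayesianPersonalizedRanking(factors=64)  # the "Bayesian" alternative
model.fit(user_items)

# top-50 course indices and scores for user 0
ids, scores = model.recommend(0, user_items[0], N=50, filter_already_liked_items=False)
```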
(c) Results on seen courses
| | ALS | Bayesian |
| -------- | -------- | -------- |
| Validation | 0.04750 | 0.04432 |
| Kaggle | 0.03370 | 0.03227 |
3. Content-based filtering
Modules:
gensim
bert-key
word2vec models pretrained on Chinese Wikipedia
(a) For courses, we take the target group and subgroups into account.
```python
from tqdm import tqdm

# course_data and the bert-key extractor kw_extractor are initialized earlier
course_key_dict = {}
i = 0
for course in tqdm(course_data['target_group']):
    i += 1
    # first we analyze the target-group text; missing entries were filled with 0
    if course == 0:
        course_key_dict[i] = []
    else:
        try:
            # longer descriptions get two keywords, short ones only one
            if len(course) > 10:
                keybert_list = kw_extractor.generate_keywords(course, top_k=2, rank_methods="mmr")[0]
            else:
                keybert_list = kw_extractor.generate_keywords(course, top_k=1, rank_methods="mmr")[0]
            course_key_dict[i] = keybert_list
        except Exception:
            # extraction can fail on unusual text; fall back to no keywords
            course_key_dict[i] = []
```
For each user we can then compute a score for every course, based on distances in the word-vector space.
```python
import pandas as pd
from tqdm import tqdm

# data, purchasing_dict_course, and num_to_id come from the earlier steps
tests = pd.read_csv(file_test)
content_based_dict = {}
for user_id in tqdm(tests['user_id']):
    # data[user_id] holds the word-vector distances from this user to every course
    c_list = data[user_id]
    for i in range(len(c_list)):
        try:
            # closer courses get higher scores; guard against zero distance
            text_score = 1 / c_list[i]
        except ZeroDivisionError:
            text_score = 1000.0
        # weight the content score by how often the course is purchased
        c_list[i] = text_score * purchasing_dict_course[i]
    # take the indices of the 50 highest-scoring courses
    res = sorted(range(len(c_list)), key=lambda sub: c_list[sub])[-50:]
    id_list = num_to_id(res)
    content_based_dict[user_id] = id_list
```
(b) Users
For a user's content, we extract the subgroup data from the subgroups the user chose.
(c) Calculating the score (or preference)
We use a gensim word2vec model to calculate the score between a user and a course.
```python
import gensim

model = gensim.models.KeyedVectors.load_word2vec_format(
    'y_360W_cbow_2D_300dim_2020v1.bin', unicode_errors='ignore', binary=True)
```
The model is downloaded from http://nlp.tmu.edu.tw/word2vec/index.html
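A minimal sketch of the scoring step, assuming the course and user keywords extracted above are lists of words and the gensim 4.x API (the helper name and fallback value are illustrative):
```python
def user_course_score(user_keywords, course_keywords):
    # cosine similarity between the mean vectors of the two keyword sets
    words_u = [w for w in user_keywords if w in model.key_to_index]
    words_c = [w for w in course_keywords if w in model.key_to_index]
    if not words_u or not words_c:
        return 0.0  # no overlap with the vocabulary: neutral score
    return float(model.n_similarity(words_u, words_c))
```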
(d) Results on seen courses
| | +ALS | +Bayesian | Pure content-based |
| -------- | -------- | -------- | -------- |
| Validation | 0.04704 | 0.04602 | 0.01102 |
| Kaggle | 0.03506 | 0.03237 | X |
4. Naive Bayes
In the data Hahow provides, four fields describe a user: gender, occupation, interests, and recreation. With this information we can estimate the conditional probabilities p(course|gender), p(course|occupation), ..., as well as p(subgroup|gender), p(subgroup|occupation), ...
Since occupation, interests, and recreation can each contain more than one entry, I estimate the conditional probability with the geometric mean whenever multiple values appear. Because some users' interests never appear among the course subgroups, I additionally collected conditional probabilities at the course-group level as a fallback.
We can then rank by these conditional probabilities and use the ranking as the recommendation.
```
p(course|user) = p(course|gender) * \
                 p(course|occupation_titles) * \
                 p(course|interests) * \
                 p(course|group) * \
                 p(course|recreation_names) * \
                 p(course)
p(subgroup|user) = p(subgroup|gender) * \
                   p(subgroup|occupation_titles) * \
                   p(subgroup|interests) * \
                   p(subgroup|group) * \
                   p(subgroup|recreation_names) * \
                   p(subgroup)
```
In the actual implementation the prior terms p(course) and p(subgroup) were left out, so the ranking relies on the conditional terms only.
$$p(\text{course}\mid\text{interests}) = \left(\prod_{i=1}^{n} p(\text{course}\mid\text{interest}_i)\right)^{1/n}$$
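A minimal sketch of this scoring rule, assuming the conditional probabilities have been tabulated into dicts keyed by feature name and then by (value, course); the names and the smoothing constant are illustrative:
```python
import math

FEATURES = ["gender", "occupation_titles", "interests", "group", "recreation_names"]

def naive_bayes_score(course, user, p_course_given, eps=1e-9):
    # multiply conditional terms; multi-valued features contribute a geometric mean
    score = 1.0
    for feat in FEATURES:
        values = user.get(feat) or []
        if not isinstance(values, list):
            values = [values]
        if not values:
            continue  # skip missing features instead of zeroing the score
        table = p_course_given[feat]
        logs = [math.log(table.get((v, course), eps)) for v in values]
        score *= math.exp(sum(logs) / len(logs))  # geometric mean over n values
    return score
```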
| | Seen course | Seen subgroup | Unseen course | Unseen subgroup |
| -------- | -------- | -------- | -------- | -------- |
| val | 0.0621 | 0.2268 | 0.0687 | 0.1801 |
| Kaggle | 0.0374 | 0.21316 | 0.05227 | 0.17749 |
From the validation and Kaggle results we can see that seen course is the worst setting: Naive Bayes only recommends courses the user has already seen, so the results there are poor.
5. Two-Tower Method
For the deep-learning part we tried a two-tower model which, as the name suggests, consists of two models: a user feature encoder and a course feature encoder, as shown in the figure below. The basic idea is to exploit deep models' ability to encode individual features, projecting user features and course features into the same embedding space, with the expectation that if a course should be purchased by a user, their relevance is relatively high. This relevance can be computed in various ways, e.g. cosine similarity; in this work we use the dot product as the similarity metric.

For the user encoder, we convert characteristics such as interests and occupation into one-hot vectors, pass them through two linear layers, and output a 256-dimensional embedding.

For the course encoder, since a course comes with a large amount of text, we start from the pretrained bert-base-chinese: the descriptions are first tokenized with BERT's tokenizer, then fed through the pretrained BERT and two linear layers, likewise outputting a 256-dimensional vector. A sketch of both towers follows.
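A minimal sketch of the two towers under these descriptions (the hidden size and the use of the [CLS] vector are assumptions not stated above):
```python
import torch.nn as nn
from transformers import BertModel

class UserEncoder(nn.Module):
    """One-hot user features -> two linear layers -> 256-d embedding."""
    def __init__(self, feat_dim, hidden_dim=512, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class CourseEncoder(nn.Module):
    """Tokenized description -> pretrained BERT -> two linear layers -> 256-d."""
    def __init__(self, hidden_dim=512, out_dim=256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.head = nn.Sequential(
            nn.Linear(self.bert.config.hidden_size, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.last_hidden_state[:, 0])  # [CLS] token vector
```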

As for the loss function: in this data setting each training pair (a user and its corresponding course) comes with no negative samples, so training directly would let the model trivially maximize the training score without learning anything useful. We therefore adopt in-batch negative sampling: within a batch, every course except the one paired with a given user is treated as a negative sample. Arranging the whole batch as a matrix, the similarity scores can then be computed with a single matrix multiplication, which speeds up the computation.


```python
import torch
import torch.nn as nn

class customLoss(nn.Module):
    """In-batch negative sampling: the i-th user's positive course is column i."""
    def __init__(self, prob_lookUp, device):
        super().__init__()
        # prob_lookUp maps a course label to its purchase probability (for logQ correction)
        self.prob_lookUp = prob_lookUp
        self.device = device

    def forward(self, courseEmb, userEmb, label):
        # (batch x batch) dot-product similarities: row = user, column = course
        mat = torch.matmul(userEmb, courseEmb.transpose(0, 1))
        # logQ correction: subtract each candidate course's popularity column-wise
        # (a row-wise shift would cancel out inside the softmax)
        probTensor = torch.tensor([self.prob_lookUp[l] for l in label]).to(self.device)
        mat = mat - probTensor[None, :]
        # the positives sit on the diagonal, so the target class for row i is i
        true_labels = torch.arange(len(label)).to(self.device)
        return nn.CrossEntropyLoss()(mat, true_labels)
```
For testing, we first pass all 700-odd courses through the course encoder, converting them into 256-dimensional vectors that we cache. Then, when each user's features pass through the user encoder, we search the cached course embeddings for the group of courses with the smallest L2 distance and take them as that user's candidate courses. A retrieval sketch follows.
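A minimal sketch of that retrieval step (the names are illustrative; `course_embs` is the cached 700-odd × 256 matrix):
```python
import torch

@torch.no_grad()
def candidate_courses(user_emb, course_embs, k=50):
    # L2 distance from one 256-d user embedding to every cached course embedding
    dists = torch.cdist(user_emb[None, :], course_embs)[0]
    return torch.topk(dists, k, largest=False).indices  # indices of the k nearest courses
```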
As for the results, we ran into the problem that the loss could not decrease effectively, which led to poor performance on the validation set. With the current results, this method does not beat the Naive Bayes method.
| | Seen course | Unseen course |
| -------- | -------- | -------- |
| val | 0.0215 | 0.0197 |