###### tags: `Deep learning`

# Federated learning

## FL Classification

#### 1. Horizontal Federated Learning
Ex: Multiple medical institutions can combine the models of several hospitals to obtain a more accurate model without sharing patients' personal data.

#### 2. Vertical Federated Learning
Ex: A medical institution's data is mainly medical records, while a government agency's data is mainly citizen records, yet the two may serve the same group of users.

## Basic FL
* The setting is as follows [1]: there are N clients, and each holds a local dataset Di and a local model wi.

![](https://i.imgur.com/9M8Pi57.png)

### Process flowchart
* Step 1-2. The server takes the first client that connects as the source of the initial parameters.

![](https://i.imgur.com/OvjZ9Cm.png)

* Step 3. Wait for other clients to connect (at least 2 clients).
* Step 4. All clients download the global initial parameters.
    - [ ] w denotes the global weights.

![](https://i.imgur.com/hoGmaWS.png)

* Step 5. All clients start training; the first client to finish uploads its local parameters to the server.
    - [ ] Each client's weights are updated locally, where r is the learning rate.

![](https://i.imgur.com/2saDXDX.png)

    - [ ] The local loss function can be written as fj(w, xj, yj), where the i-th client has j data samples with input vector xj and output scalar yj.

![](https://i.imgur.com/BKqEcZl.png)

* Step 6. The server waits for all clients to upload their parameters.
    - [ ] The global model aggregation is shown below.

![](https://i.imgur.com/0tT8feu.png)
![](https://i.imgur.com/CV9m45M.png)

* Step 7. The server performs the aggregation and broadcasts the result to all clients.

![](https://i.imgur.com/xc8jVgw.png)

* Step 8. Each client updates its local model based on the aggregated model.
* Step 9. Repeat Steps 5-8 until global training is completed.
    - [ ] The final goal is to find the optimal w that minimizes all the local loss functions. The training process iterates between (3b) and (3a) until convergence, and each node obtains the optimal model w, as shown below.

![](https://i.imgur.com/S6HlUap.png)
![](https://i.imgur.com/sD1YNes.png)

The handshake of FedAvg is described below:

![](https://i.imgur.com/AV9KKmg.png)

### Local model

## Personalized FL
### Goal
The global model of Basic FL does not fit every client. Personalized FL tackles the problem of heterogeneous devices by finding a suitable local model for each client.

## Convex & Non-convex
### Definition
Take any two points in a set and connect them with a line segment; if the segment is entirely contained in the set, the set is convex.

![](https://i.imgur.com/v1okY2p.png)
![](https://i.imgur.com/TSMjdes.png)

* Convex
    1. Relatively simple: any local optimum is also the global optimum.
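The chord-based convexity definition above can be checked numerically. A minimal Python sketch, where the test functions, sample points, and tolerance are all illustrative:

```python
import numpy as np

def is_convex_along_chord(f, x, y, n=101):
    """Check f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) on a grid of t in [0, 1]."""
    ts = np.linspace(0.0, 1.0, n)
    lhs = np.array([f(t * x + (1 - t) * y) for t in ts])  # curve along the chord
    rhs = ts * f(x) + (1 - ts) * f(y)                     # the chord itself
    return bool(np.all(lhs <= rhs + 1e-9))

f_convex = lambda x: x ** 2            # convex: every chord lies above the curve
f_nonconvex = lambda x: np.sin(3 * x)  # non-convex on [-2, 2]

print(is_convex_along_chord(f_convex, -2.0, 2.0))     # True
print(is_convex_along_chord(f_nonconvex, -2.0, 2.0))  # False
```

This only samples one chord, so it can certify non-convexity but cannot prove convexity; it just illustrates the definition.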
    2. Methods: e.g., the Greedy Algorithm or Gradient Descent; the local optimum obtained at convergence is the global optimum.
* Non-convex
    1. Very difficult: there may be countless local optima, and algorithms that find the global optimum typically have exponential complexity (NP-hard).
    2. Monte Carlo multi-start: drop a random point, search its neighborhood (which can be assumed convex) for a local optimum, then drop another random point and find another local optimum; repeat until a stopping condition is met. If there are 10,000 local optima, at least 10,000 random starts are needed.

## Questions [5]

![](https://i.imgur.com/9x2p1WS.png)

## Reference
1. Personalized Federated Learning for ECG Classification Based on Feature Alignment
2. https://www.jiqizhixin.com/articles/2020-09-23-7
3. Ditto personalized FL: https://blog.csdn.net/weixin_42534493/article/details/118556509
4. Convex & Non-convex: https://www.itread01.com/content/1543908005.html
5. Ditto: Fair and Robust Federated Learning Through Personalization
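As a recap of the Basic FL section, the FedAvg loop (Steps 1-9) can be sketched in a few lines of NumPy. The linear model, data, and hyperparameters below are illustrative only, not the setup from [1]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: each client locally generates samples of y = 2*x.
clients = []
for _ in range(3):
    x = rng.normal(size=20)
    clients.append((x, 2.0 * x))

def local_update(w, x, y, lr=0.1, epochs=5):
    """Step 5: each client runs gradient descent on its own squared loss."""
    for _ in range(epochs):
        grad = np.mean(2 * (w * x - y) * x)
        w -= lr * grad
    return w

w_global = 0.0                   # Steps 1-4: server initializes and broadcasts w
for _ in range(20):              # Step 9: repeat until training completes
    local_ws = [local_update(w_global, x, y) for x, y in clients]  # Step 5
    sizes = [len(x) for x, _ in clients]
    # Steps 6-8: FedAvg aggregation, weighted by each client's sample count,
    # then the result is broadcast back as the new global model.
    w_global = sum(n * w for n, w in zip(sizes, local_ws)) / sum(sizes)

print(w_global)  # converges toward 2.0, the coefficient used to generate the data
```

With i.i.d. clients like these the weighted average converges to the true coefficient; with heterogeneous clients the single global model may fit no one well, which is the motivation for the Personalized FL section above.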