# Tensor Compression

###### tags: `tinyML`
###### members: 劉育任 林畊廷

### Time: 7/12/2021 - 7/19/2021

## 1. Successes Last Week
* Double-checked that the CSR indices size is correct and figured out the reason for the earlier discrepancy
* Implemented the COO format
* Varied the weight-pruning threshold and checked 1) the number of zeros, 2) the accuracy, and 3) the CSR/COO compressed data size (ratio)
* Studied the weight-pruning method of this paper: https://people.cs.nctu.edu.tw/~ttyeh/course/2020_Fall/IOC5009/file/week4-sparse-tensor-core.pdf

## 2. Progress and Problems Last Week
* CSR stores three arrays: `data`, `indices`, and `indptr` (`data` holds the non-zero values, `indices` holds their positions, and `indptr` holds the cumulative count of non-zeros per row).
* We use scipy to build the compressed formats; in COO, three arrays hold the non-zero entries: row, column, and data.
* ![](https://i.imgur.com/BUIGKnL.png)
* Pruning method: layer by layer, sort the elements of each vector by absolute value, then set at most k elements to zero. For each subsequent layer we decrease k and prune at most k elements again, repeating this so the accuracy is preserved.

## 3. Not Resolved Problems
* CSR/COO dictionary vs. non-zero data size (KB): we need to understand the CSR/COO overhead.
* How is the structural sparsity mentioned in the sparse tensor core paper generated?

## 4. Goals for next week
* Why does the CSR/COO compression ratio improve so much faster than the fraction of zero data grows?
    * Because CSR/COO stores two arrays (data and indices) for every non-zero entry, so when the model contains more zeros the compression gain is larger than expected.
* Find weight-pruning methods to prune the model and compare the CSR/COO compression ratios (e.g., implement structural sparsity).
* Analyze the accuracy of structural sparsity.
* Survey existing weight-pruning methods: https://arxiv.org/pdf/1710.09282.pdf

---

### Time: 7/19/2021 - 7/26/2021

## 1. Successes Last Week
* Understood the CSR/COO overhead
* Surveyed existing weight-pruning methods
* Understood how the structural sparsity mentioned in the sparse tensor core paper is generated

## 2. Progress and Problems Last Week
* ![](https://i.imgur.com/JuAUpRu.png)
* Weight-pruning methods:
    1. Pruning Filters for Efficient ConvNets: use the sum of the absolute values of the filter weights, ∑|F_ij|, as the pruning criterion and remove the less important filters.
    2. Structured Pruning of Deep Convolutional Neural Networks
* When is COO better than CSR?
    * When the matrix has more columns than rows, COO compresses the model more efficiently than CSR.
* Why doesn't the size of the CSR `indptr` array change with the amount of zero data in the matrix?
    * `indptr` grows with the number of rows in the matrix, not with the number of non-zeros.
* Why do CSR and COO have the same number of data and index entries for the same pruning parameter?
    * Because the same model has the same number of non-zero entries.

## 3. Not Resolved Problems
* Find weight-pruning methods to prune the model and compare the CSR/COO compression ratios.

## 4. Goals for next week
* Implement a method similar to vector sparse pruning.
* Survey existing weight-pruning methods: go through this paper, implement the pruning methods it describes, and compare the compression ratio and accuracy of CSR and COO under each method.
* Produce a table like the one above comparing CSR/COO compression ratio, accuracy, and non-zero data ratio. https://arxiv.org/pdf/1710.09282.pdf

---

### Time: 7/27/2021 - 8/1/2021

## 1. Successes Last Week
* Implemented different weight-pruning methods

## 2. Progress and Problems Last Week
* Implemented weight pruning and compared the overhead and accuracy of each method
* ![](https://i.imgur.com/eiPxVJr.png)

## 3. Not Resolved Problems

## 4. Goals for next week
* Implement CSB, DIA, and ELL sparse-matrix compression, and compare them with CSR and COO under the different pruning methods.
* Implement a way to compress the non-zero data itself:
    * idx (16 bit) | base table index (8 bit) | stride (8 bit), e.g. 15 | 00000001 | 0.001
    * base table 2

### Time: 8/2/2021 - 8/8/2021

## 1. Successes Last Week
* Implemented different weight-pruning methods

## 2. Progress and Problems Last Week
* Implemented DIA, ELL, and the non-zero compression method mentioned last week
* ![](https://i.imgur.com/GuXGIkR.png)

## 3. Not Resolved Problems
* Implementing CSB: we could not find documentation on this format yet.

## 4. Goals for next week
* Implement CSB
* idx (16 bit) | base table index (8 bit) | stride (8 bit)
* Organize the results in Excel

### Time: 8/9/2021 - 8/15/2021

## 1. Successes Last Week
* Organized the result tables
* Implemented CSB
* Implemented the idx (16 bit) / base table index (8 bit) / stride (8 bit) encoding, as sketched below
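For reference, a minimal sketch in Python of how the idx (16 bit) / base table index (8 bit) / stride (8 bit) encoding could be packed. The reconstruction rule `base_table[base_idx] + stride_code * STRIDE_STEP`, the step size, and the function names are assumptions for illustration only, not the exact rule used in our implementation.

```python
import numpy as np

# Hypothetical packing of one non-zero entry into a 32-bit word:
#   [ idx : 16 bits ][ base-table index : 8 bits ][ stride code : 8 bits ]
# Assumption: the stored value is approximated as
#   base_table[base_idx] + stride_code * STRIDE_STEP
STRIDE_STEP = 0.001            # assumed quantization step for the stride field

def encode_nonzeros(values, flat_indices, base_table):
    """Encode non-zero values as (idx, base_idx, stride_code) 32-bit words.
    base_table is a small 1-D numpy array of representative base values."""
    words = []
    for idx, v in zip(flat_indices, values):
        # pick the closest base value, then quantize the residual as the stride
        base_idx = int(np.argmin(np.abs(base_table - v)))
        stride_code = int(round((v - base_table[base_idx]) / STRIDE_STEP))
        stride_code = max(0, min(255, stride_code))          # clamp to 8 bits
        words.append((idx & 0xFFFF) << 16 | (base_idx & 0xFF) << 8 | (stride_code & 0xFF))
    return np.asarray(words, dtype=np.uint32)

def decode_word(word, base_table):
    """Recover (flat index, approximate value) from one packed word."""
    idx = int(word) >> 16
    base_idx = (int(word) >> 8) & 0xFF
    stride_code = int(word) & 0xFF
    return idx, base_table[base_idx] + stride_code * STRIDE_STEP
```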
## 2. Progress and Problems Last Week
* Table of the overhead of the various compression methods: https://docs.google.com/spreadsheets/d/19JVwWNLF9U-XmpxtPiDS26kkNeXdBUXXHIArFpUsyNo/edit?usp=sharing

## 3. Not Resolved Problems

## 4. Goals for next week
* Check CSB (育任) http://supertech.csail.mit.edu/papers/csb.pdf
* How to handle FP16 and INT8 with the new method (育任)
* Build a sparse tensor (COO, CSF) (育任)
* Find compilers that support sparse-matrix formats (畊廷) [Sparse Tensor format in framework](https://hackmd.io/@ktlin/BJaRp0TgY)
* Understand how mixed precision is applied (畊廷) [Mixed Precision Training](https://hackmd.io/@ktlin/SyTY8b2xK)
* How to compress (store) a multi-dimensional sparse tensor (similar to CSF) (畊廷)

### Time: 8/16/2021 - 8/22/2021

## 1. Successes Last Week
* Checked CSB
* Handled FP16 with the new method
* Built a sparse tensor

## 2. Progress and Problems Last Week
* Implemented CSF
* Changed the stride field to FP16

## 3. Not Resolved Problems
* Variable-length idx and data: if the matrix is small we do not need 16 bits for the index, but what should we do when the matrix is large enough to need more than 16 bits?
* How to shrink the encoding bit-width of the non-zero data: if a value does not need 32 bits, how do we know it can be converted from 32 bits to 16 bits? And what if the value really does need 32 bits?

## 4. Goals for next week
* Compress different sparse-matrix datasets (taken from papers), compare the compression ratios of the different formats, and record the results in the Google spreadsheet (育任) https://hackmd.io/@nctu-cas-lab/ByKrKDAJt
* How the TACO and TVM compilers recognize sparse data compression formats (畊廷) https://hackmd.io/@ktlin/S1Sv1bFZK

### Time: 8/16/2021 - 8/22/2021

## 1. Successes Last Week

## 2. Progress and Problems Last Week

## 3. Not Resolved Problems

## 4. Goals for next week
* Find different models to compress (育任)
* Compare accuracy and compression ratio before and after pruning (育任)
* Compare which kinds of data each format suits best, plus their pros and cons (畊廷) https://hackmd.io/@ktlin/r1sKIlCWK
* What format conversion actually does; at runtime it handles two things:
    - deciding the format spec
    - the access method

### Time: 8/30/2021 - 9/05/2021

## 1. Successes Last Week
* Found different models to prune and compress

## 2. Progress and Problems Last Week
* https://docs.google.com/spreadsheets/d/19JVwWNLF9U-XmpxtPiDS26kkNeXdBUXXHIArFpUsyNo/edit#gid=715999071

## 3. Not Resolved Problems
* Compare accuracy before and after pruning

## 4. Goals for next week
* Compare accuracy before and after pruning (育任)
* Why is CSR compression worse than COO here? (育任)
* Summarize which kinds of matrices each compression method suits (育任)
* Why is conversion between different formats needed? (畊廷)
* The purpose of automatic conversion (畊廷)

---

### Time: 9/06/2021 - 9/12/2021

## 1. Successes Last Week
* Compared the accuracy of various models before and after pruning
* Checked the compression difference between CSR and COO

## 2. Progress and Problems Last Week
* https://docs.google.com/spreadsheets/d/19JVwWNLF9U-XmpxtPiDS26kkNeXdBUXXHIArFpUsyNo/edit#gid=715999071
* Because the 2-D matrices inside the compressed 4-D matrix are very small, COO ends up more efficient than CSR after compression.
* Which matrices each compression method suits:
    - COO is commonly used for reading and writing sparse matrices from files
    - CSR is commonly used for sparse-matrix computation after the data has been loaded
    - DIA suits structured matrices
    - The advantage of ELL compression is that it is fast

## 3. Not Resolved Problems

## 4. Goals for next week
* Compare accuracy before and after pruning (育任)
* Why is CSR compression worse than COO here? (育任)
* Summarize which kinds of matrices each compression method suits (育任)
* Why is conversion between different formats needed? (畊廷)
    - To speed up both data import and matrix computation
    - COO is more efficient for importing data, because adding non-zeros is cheap (it supports append or random insertion)
    - CSR is faster for SpMV
* The purpose of automatic conversion (畊廷)
    - When a library does not support a given format, accessing the data would otherwise require hand-written code; the proposed approach generates that code for us once we supply certain information about the format.

### Time: 9/13/2021 - 9/20/2021

## 1. Successes Last Week

## 2. Progress and Problems Last Week

## 3. Not Resolved Problems

## 4. Goals for next week
- CSF implement (畊廷) - done https://hackmd.io/@ktlin/HymO-kCmY
- Find a 65536 × 65536 matrix
    - get part of the matrix from consph
- Find a 4096 × 4096 matrix
    - compress the whole matrix with CSR, COO, and DIA
    - block-wise compression
    - save in a txt file

### Time: 9/27/2021 - 10/04/2021

## 1. Successes Last Week
- CSR policy [Result](https://docs.google.com/spreadsheets/d/1UWAjg39N8DAqq5NO_U1mTp4ILxIasQBubDWt9lIHo6w/edit?usp=sharing)
    - Compared with plain COO, CSR, and DIA, lowering the number of bits used to store indices uses less memory; however, the matrix sizes of the datasets we have tested so far are all kept within the range that our chosen bit-width can represent.
    - I added the CSR policy, but it does not perform well; presumably either the blocking scheme is at fault, or the memory saved by splitting into small blocks is less than the extra memory needed for the many additional blocks. (A sketch of the block-wise, narrow-index idea follows below.)
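A minimal sketch of the block-wise compression with narrowed index bit-widths, assuming 256 × 256 tiles so that local row/column indices fit in 8 bits. `BLOCK`, `block_coo_compress`, and the COO-per-tile layout are illustrative assumptions, not the exact policy implemented above.

```python
import numpy as np

BLOCK = 256   # with 256x256 tiles, local row/col indices fit in 8 bits

def block_coo_compress(dense):
    """Split a 2-D matrix into BLOCK x BLOCK tiles and store each non-empty
    tile as COO with uint8 local indices (a sketch of the 8-bit variant)."""
    blocks = {}
    n_rows, n_cols = dense.shape
    for br in range(0, n_rows, BLOCK):
        for bc in range(0, n_cols, BLOCK):
            tile = dense[br:br + BLOCK, bc:bc + BLOCK]
            rows, cols = np.nonzero(tile)
            if rows.size == 0:
                continue   # empty tiles cost nothing beyond the dict entry
            blocks[(br // BLOCK, bc // BLOCK)] = (
                rows.astype(np.uint8),              # 8-bit local row index
                cols.astype(np.uint8),              # 8-bit local column index
                tile[rows, cols].astype(np.float32),
            )
    return blocks

def compressed_bytes(blocks):
    """Rough size estimate: per non-zero, 1 + 1 bytes of index + 4 bytes of data."""
    return sum(r.nbytes + c.nbytes + d.nbytes for r, c, d in blocks.values())
```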
## 2. Progress and Problems Last Week
- Survey of tensor formats ![](https://i.imgur.com/4l0AuQh.png)

## 3. Not Resolved Problems

## 4. Goals for next week
- Find usable datasets
- Implement our method using 8-bit indices and compare it with the other formats: https://docs.google.com/spreadsheets/d/19JVwWNLF9U-XmpxtPiDS26kkNeXdBUXXHIArFpUsyNo/edit#gid=376338285

### Time: 10/05/2021 - 10/12/2021

## 1. Successes Last Week
- https://docs.google.com/spreadsheets/d/19JVwWNLF9U-XmpxtPiDS26kkNeXdBUXXHIArFpUsyNo/edit#gid=1918013700 (育任)
- In the CSR format, even at 100% sparsity the indptr (= pos array) is still stored
- Charts (compression ratio = compressed size / original size):
    - 4096 × 4096 ![](https://i.imgur.com/cL13hNw.png)
    - 8192 × 8192 ![](https://i.imgur.com/bN4Mtvo.png)
    - 16384 × 16384 ![](https://i.imgur.com/lxvmbCm.png)
    - MNIST ![](https://i.imgur.com/b2eBv3s.png)

## 2. Progress and Problems Last Week

## 3. Not Resolved Problems

## 4. Goals for next week
- Measure compression results for different sparsity levels across the different formats

### Time: 10/13/2021 - 10/17/2021

## 1. Successes Last Week
- Block-wise compression (matrix size 4096 × 4096, compression ratio = compressed size / original size)
    - 4 bit ![](https://i.imgur.com/0UK6nb0.png) ![](https://i.imgur.com/02jKuZr.png)
    - 8 bit ![](https://i.imgur.com/YwkHs2Z.png) ![](https://i.imgur.com/56k2zQ4.png)
- CSB format
    - Three arrays (row, col, data) store the information within each block
    - indptr stores the number of non-zero entries in each block
    - CSB's compression efficiency is the same as CSR's
    - (A sketch of this layout appears at the end of these notes.)

## 2. Progress and Problems Last Week

## 3. Not Resolved Problems

## 4. Goals for next week

### Time: 10/18/2021 - 10/24/2021

## 1. Successes Last Week
* https://hackmd.io/@wtdf_NG5TQqy0k4CP2sZlg/HJQslkmUF
* https://hackmd.io/@ktlin/H1vRB__SF
* https://hackmd.io/@ktlin/B1NsbGnVK

### Time: 11/08/2021 - 11/15/2021

## 1. Successes Last Week
* https://hackmd.io/@ktlin/Bk5Q-VKwK

## 4. Goals for next week
* Use 256 × 256, 512 × 512, and 1024 × 1024 matrices to measure, at different sparsity levels, the time taken by CSB parallel, CSB sequential, and CSB compression

### Time: 11/16/2021 - 11/21/2021

## 1. Successes Last Week
* ![](https://i.imgur.com/MJMf2UQ.png)

### Time: 12/06/2021 - 12/12/2021

## 1. Successes Last Week
* https://docs.google.com/spreadsheets/d/1x9WOmDUupV4TVnZAu9Owez-Gb34MKp3_6UsFVtT8NkQ/edit#gid=0

### Time: 12/13/2021 - 12/19/2021

## 1. Successes Last Week
* ![](https://i.imgur.com/CmqISZI.png)
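Closing reference: a minimal sketch of the CSB-style layout described in the 10/13 - 10/17 entry (per-block row/col/data arrays plus a cumulative per-block non-zero count, analogous to CSR's indptr but over blocks). The block size `BETA` and the function name are assumptions for illustration, not the exact implementation used in the experiments.

```python
import numpy as np

BETA = 16   # assumed block size; the experiments above used other sizes as well

def csb_compress(dense):
    """Store a matrix in a CSB-like layout: for every BETA x BETA block keep
    local row/col/data entries, plus blk_ptr with the running non-zero count
    per block."""
    n_rows, n_cols = dense.shape
    rows, cols, data, blk_ptr = [], [], [], [0]
    for br in range(0, n_rows, BETA):
        for bc in range(0, n_cols, BETA):
            tile = dense[br:br + BETA, bc:bc + BETA]
            r, c = np.nonzero(tile)
            rows.append(r.astype(np.uint16))      # row index local to the block
            cols.append(c.astype(np.uint16))      # column index local to the block
            data.append(tile[r, c])
            blk_ptr.append(blk_ptr[-1] + r.size)  # cumulative non-zeros per block
    return (np.concatenate(rows), np.concatenate(cols),
            np.concatenate(data), np.asarray(blk_ptr))
```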