# SimpleHutch/Hutch++/HutchBar AMM experiment

## data settings from the literature

1. [A Note on Random Sampling for Matrix Multiplication](https://arxiv.org/pdf/1811.11237.pdf)
    - random matrices generated from the standard uniform distribution, size $100\times2000$
    - block (coarser partition) chosen by Enhanced pairwise partition
    - AMM sampler size = 1000 and 3000
    - number of repeated experiments = 50000
2. [Importance sampling for a Monte Carlo matrix multiplication algorithm, with application to information retrieval](https://epubs.siam.org/doi/pdf/10.1137/10080659X)
    - practical data: term-by-document matrix for the Reuters Transcribed Subset data set, size $201\times5601$; term-by-document matrix from Wikipedia web pages, size $203688\times198853$
3. [Approximate Weighted C R Coded Matrix Multiplication](https://arxiv.org/pdf/2011.09709.pdf)
    - random matrices A, B with L = 260, N = 9600, M = 280, drawn from a non-uniform distribution
4. [OverSketch: Approximate Matrix Multiplication for the Cloud](https://ieeexplore.ieee.org/iel7/8610059/8621858/08622139.pdf?casa_token=bXHQw5YdykQAAAAA:W297arGkkX1EzSd9PZBkAMmyK1hkhgrTCKnfC5zvWPn-KDlkUbUHWEKWndw9IQ8pjmTqHRHavVc9)
    - AWS Lambda cloud system
5. [A practical streaming approximate matrix multiplication algorithm](https://www.sciencedirect.com/science/article/pii/S1319157818306396)
    - Uniform random: $2000\times10000$ and $1000\times10000$
    - Low rank matrices
    - Noisy low rank matrices
    - Drift

## experiments

Matrix factor size: $100\times10000$

**Block chosen by Enhanced pairwise partition** (column-wise sampling probability estimated by Simple Hutch with 10 matvecs)

sample number of AMM = 100, number of repeated tests = 100. (A code sketch of this sampling scheme is given at the end of this section.)

block_sz = 5 (#group = 10000/5)
![](https://i.imgur.com/D76ZAZ9.png)
block_sz = 10 (#group = 10000/10)
![](https://i.imgur.com/5gSb2qI.png)
block_sz = 20 (#group = 10000/20)
![](https://i.imgur.com/g2tGcAp.png)
block_sz = 50 (#group = 10000/50)
![](https://i.imgur.com/KyAoolK.png)
block_sz = 100 (#group = 10000/100)
![](https://i.imgur.com/771kyY4.png)

**Block chosen by Enhanced pairwise partition** (column-wise sampling probability estimated by Simple Hutch with 1 matvec)

block_sz = 5 (#group = 10000/5)
![](https://i.imgur.com/5hRDhgz.png)
block_sz = 100 (#group = 10000/100)
![](https://i.imgur.com/paFHrYn.png)

**Block chosen by natural sequence**

block_sz = 5 (#group = 10000/5)
![](https://i.imgur.com/f0kmqwg.png)
block_sz = 10 (#group = 10000/10)
![](https://i.imgur.com/QR71h7R.png)
block_sz = 20 (#group = 10000/20)
![](https://i.imgur.com/mbzLLSB.png)
block_sz = 50 (#group = 10000/50)
![](https://i.imgur.com/PZXhH5I.png)
block_sz = 100 (#group = 10000/100)
![](https://i.imgur.com/oQ9rFU3.png)

**Non-Uniform Data**

Block chosen by Enhanced pairwise partition, block_sz = 5
![](https://i.imgur.com/igAIDc7.png)
Block chosen by Enhanced pairwise partition, block_sz = 100
![](https://i.imgur.com/tvN1vpB.png)
Block chosen by random sequence, block_sz = 5
![](https://i.imgur.com/8wYoPtz.png)
Block chosen by random sequence, block_sz = 100
![](https://i.imgur.com/BAneRLU.png)

**Modified P_bar method**

block_sz = 20

Block chosen by natural sequence:
![](https://i.imgur.com/DajT4TB.png)
Block chosen by Enhanced pairwise partition:
![](https://i.imgur.com/LpiLABN.png)

**Time comparison**

![](https://i.imgur.com/9OBtWSn.png)
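For reference, the following is a minimal NumPy sketch of the sampling scheme used above: per-column scores estimated with a Hutchinson-type diagonal estimator (one possible reading of "Simple Hutch"), aggregated over blocks, and used as block-sampling probabilities in an unbiased block AMM. The function names, the natural-sequence partition (instead of the Enhanced pairwise partition), and the probe count are illustrative assumptions, not the code behind the plots.

```python
import numpy as np

def hutch_column_scores(A, B, n_probes=10, rng=None):
    """Estimate per-column scores ~ ||A[:, i]|| * ||B[i, :]|| via a
    Hutchinson-type diagonal estimator of diag(A^T A) and diag(B B^T).
    Each probe costs a few mat-vecs with A and B (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    dA = np.zeros(n)  # running estimate of squared column norms of A
    dB = np.zeros(n)  # running estimate of squared row norms of B
    for _ in range(n_probes):
        g = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        dA += g * (A.T @ (A @ g))             # g * (A^T A g)
        dB += g * (B @ (B.T @ g))             # g * (B B^T g)
    dA = np.maximum(dA / n_probes, 0.0)
    dB = np.maximum(dB / n_probes, 0.0)
    return np.sqrt(dA * dB)

def block_amm(A, B, scores, block_sz=5, n_samples=100, rng=None):
    """Unbiased block AMM: partition columns into contiguous blocks
    (natural sequence), sample blocks with probability proportional to
    the summed column scores, and rescale each sampled block product."""
    rng = np.random.default_rng(rng)
    n = A.shape[1]
    blocks = np.arange(n).reshape(n // block_sz, block_sz)
    p = scores.reshape(n // block_sz, block_sz).sum(axis=1)
    p /= p.sum()
    C_hat = np.zeros((A.shape[0], B.shape[1]))
    for k in rng.choice(len(blocks), size=n_samples, p=p):
        idx = blocks[k]
        C_hat += (A[:, idx] @ B[idx, :]) / (n_samples * p[k])
    return C_hat

# Example on the 100 x 10000 setting used above.
rng = np.random.default_rng(0)
A = rng.random((100, 10000))
B = rng.random((10000, 100))
scores = hutch_column_scores(A, B, n_probes=10, rng=1)
C_hat = block_amm(A, B, scores, block_sz=5, n_samples=100, rng=2)
rel_err = np.linalg.norm(A @ B - C_hat) / np.linalg.norm(A @ B)
print(f"relative error: {rel_err:.4f}")
```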
## experiments 2

**control the number of truly sampled columns**

### On uniform sampled data

![](https://i.imgur.com/acqxosW.png)
![](https://i.imgur.com/qZnI2UF.png)
![](https://i.imgur.com/TJ6a4Qr.png)
![](https://i.imgur.com/cs5rFDV.png)

### Low Rank Matrices

![](https://i.imgur.com/7ZXwqw8.png)
![](https://i.imgur.com/RXFz9Vs.png)
![](https://i.imgur.com/sPUaN14.png)
![](https://i.imgur.com/oITkDjT.png)
![](https://i.imgur.com/KEmIaaM.png)
![](https://i.imgur.com/o1EAGhS.png)

### Covariance Matrices

A is a low-rank matrix:
![](https://i.imgur.com/XMlZMR0.png)
![](https://i.imgur.com/1AghSDG.png)

A is uniformly sampled:
![](https://i.imgur.com/EeB8z5J.png)
![](https://i.imgur.com/GhO6PS2.png)

### experiment with block size = 1

#matvec = 1

uniform sampled data
![](https://i.imgur.com/SRrqNGk.png)
low rank case
![](https://i.imgur.com/iN7PDeN.png)
low rank case: fixed relative error
![](https://i.imgur.com/uKCVRGU.png)

### exponential decaying data experiment

![](https://i.imgur.com/uJ5FPA1.png)
![](https://i.imgur.com/XR2yVfE.png)
![](https://i.imgur.com/zJrO032.png)
![](https://i.imgur.com/lSqHYNT.png)
![](https://i.imgur.com/FTOdbZe.png)
![](https://i.imgur.com/rYbuVmi.png)

case 2:
![](https://i.imgur.com/DqIUkks.png)
![](https://i.imgur.com/lFwWi33.png)
![](https://i.imgur.com/VmSF1cT.png)

### linear decaying data experiment

![](https://i.imgur.com/Uwqq3Dt.png)
![](https://i.imgur.com/i8EX2yH.png)
![](https://i.imgur.com/uKALHOV.png)

### drift experiment

![](https://i.imgur.com/CBz4EVG.png)
![](https://i.imgur.com/W7dafhF.png)
![](https://i.imgur.com/3rd3LT9.png)

## comparison with uniform sampling

Experiments comparing the performance of Hutch-estimated sampling, biased p_bar sampling, and uniform sampling AMM, showing where and when Hutch-estimated sampling performs better.

![](https://i.imgur.com/7HCzsBh.png)
![](https://i.imgur.com/xkO6TLu.png)

Performance: Hutch AMM performs best in all cases. Only when the number of mat-vecs is very low is uniform sampling better than biased p_bar.

![](https://i.imgur.com/VJAOc9y.png)

After increasing the block size (which decreases the number of AMM samples for a fixed total number of sampled columns), all three methods perform worse. Hutch AMM still performs best, followed by biased p_bar, then the uniform method.

![](https://i.imgur.com/bJESVGZ.png)
![](https://i.imgur.com/7wp0roe.png)
![](https://i.imgur.com/6biBvGI.png)
![](https://i.imgur.com/cSsbOoH.png)

The performance in the linear increasing and drift cases agrees with the exponential case.

![](https://i.imgur.com/gE4gMkp.png)
![](https://i.imgur.com/ezMROLc.png)

On this approximately uniform data, Hutch AMM is worse than the uniform method only when the number of mat-vecs is small.

### runtime visualization

![](https://i.imgur.com/Xlyhof5.png)

For a fair comparison, we predefine the subgroup indices and compare the runtime of Hutch AMM and standard block AMM; a timing sketch follows the figure below.

![](https://i.imgur.com/SBebKgb.png)
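As a rough illustration of that timing comparison, here is a minimal sketch that times a weighted block AMM against a standard (uniformly sampled) block AMM on the same predefined subgroup indices. The data model, sizes, and names are illustrative assumptions, and for brevity the column scores are computed from exact norms rather than from the Hutch estimate.

```python
import time
import numpy as np

def block_amm(A, B, blocks, p, n_samples, rng):
    """One block-AMM pass with a fixed block partition and block probabilities."""
    C_hat = np.zeros((A.shape[0], B.shape[1]))
    for k in rng.choice(len(blocks), size=n_samples, p=p):
        idx = blocks[k]
        C_hat += (A[:, idx] @ B[idx, :]) / (n_samples * p[k])
    return C_hat

rng = np.random.default_rng(1)
m, n, q, block_sz, n_samples = 100, 10000, 100, 100, 100
A = rng.standard_normal((m, n)) * np.exp(-np.arange(n) / 1000.0)  # exponentially decaying columns
B = rng.standard_normal((n, q))
C_exact = A @ B

# Predefine the subgroup (block) indices so both methods share the partition cost.
blocks = np.arange(n).reshape(n // block_sz, block_sz)

# Standard block AMM: uniform block probabilities.
p_uniform = np.full(len(blocks), 1.0 / len(blocks))

# Weighted block AMM: probabilities from column scores (exact norms here;
# in the experiments these come from the Hutch estimate).
scores = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
p_weighted = scores.reshape(len(blocks), block_sz).sum(axis=1)
p_weighted /= p_weighted.sum()

for name, p in [("standard block AMM", p_uniform), ("weighted (Hutch-style) AMM", p_weighted)]:
    t0 = time.perf_counter()
    C_hat = block_amm(A, B, blocks, p, n_samples, rng)
    elapsed = time.perf_counter() - t0
    err = np.linalg.norm(C_exact - C_hat) / np.linalg.norm(C_exact)
    print(f"{name}: {elapsed:.4f} s, relative error {err:.4f}")
```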
### experiment: summarized line plots

![](https://i.imgur.com/eqDmWCy.png)
![](https://i.imgur.com/WYvGgUm.png)
![](https://i.imgur.com/lDctLob.png)
![](https://i.imgur.com/ZkBbGsi.png)
![](https://i.imgur.com/FLiqmDT.png)
![](https://i.imgur.com/XinQbHo.png)
![](https://i.imgur.com/fTGvgqM.png)
![](https://i.imgur.com/2hbXrFM.png)

### line plots with average error

![](https://i.imgur.com/vBAT6Li.png)
![](https://i.imgur.com/4lZDWYl.png)
![](https://i.imgur.com/43qC80v.png)
![](https://i.imgur.com/9IvmlUF.png)

### experiment: runtime visualization fixed

![](https://i.imgur.com/mTx58Ac.png)

size 100 $\times$ 10000, exponential decreasing, h=10
![](https://i.imgur.com/KwubUNy.png)
size 200 $\times$ 10000
![](https://i.imgur.com/SRBo6dg.png)
size 50 $\times$ 10000
![](https://i.imgur.com/IK1n6wn.png)
size 100 $\times$ 20000
![](https://i.imgur.com/3NDsjrb.png)

### extended visualization

![](https://i.imgur.com/wNukU8S.png)
![](https://i.imgur.com/1i1XKMv.png)
![](https://i.imgur.com/tHbRlt6.png)

### time measurements for different shapes

$100\times 10000$
![](https://i.imgur.com/dV3RO6W.jpg)
$100\times 20000$
![](https://i.imgur.com/d5LNaVc.jpg)
$100\times 50000$
![](https://i.imgur.com/bZCvjYG.jpg)
$100\times 100000$
![](https://i.imgur.com/4QVa22u.jpg)
$200\times 10000$
![](https://i.imgur.com/VPUswZe.jpg)
$200\times 20000$
![](https://i.imgur.com/IRZjEeQ.jpg)

**Exponential Decreasing**

| s | time (p-value) | accuracy (p-value) |
| -------- | -------- | -------- |
| s = 10 | 1.0e-12 * 0.0000 | 0.2843 |
| s = 50 | 1.0e-12 * 0.5310 | 0.0351 |
| s = 100 | 1.0e-12 * 0.0000 | 0.0559 |
| s = 200 | 1.0e-12 * 0.0000 | 0.9606 |
| s = 500 | 1.0e-12 * 0.0111 | 0.6939 |

**Linear Increasing**

| s | time (p-value) | accuracy (p-value) |
| -------- | -------- | -------- |
| s = 10 | 1.0e-12 * 0.0000 | 0.4947 |
| s = 50 | 1.0e-12 * 0.3964 | 0.0240 |
| s = 100 | 1.0e-12 * 0.0000 | 0.8398 |
| s = 200 | 1.0e-12 * 0.0000 | 0.5426 |
| s = 500 | 1.0e-12 * 0.0000 | 0.1633 |

**Random**

| s | time (p-value) | accuracy (p-value) |
| -------- | -------- | -------- |
| s = 10 | 1.0e-15 * 0.0000 | 0.3622 |
| s = 50 | 1.0e-15 * 0.0000 | 0.1193 |
| s = 100 | 1.0e-15 * 0.0000 | 0.2183 |
| s = 200 | 1.0e-15 * 0.0000 | 0.8061 |
| s = 500 | 1.0e-15 * 0.1886 | 0.1228 |

**Exponential Decreasing, #tests = 100, fixed matrices (seed = 1), block_sz = 100, h = 10:**

| s | time (p-value) | accuracy (p-value) |
| -------- | -------- | -------- |
| s = 10 | 1.0e-11 * 0.0000 | 0.1275 |
| s = 100 | 1.0e-11 * 0.7748 | 0.9286 |
| s = 500 | 1.0e-11 * 0.0013 | 0.5993 |

**block_sz = 100, h = 50:**

| s | time (p-value) | accuracy (p-value) |
| -------- | -------- | -------- |
| s = 10 | 0.1612 | 0.6885 |
| s = 100 | 0.0655 | 0.5168 |
| s = 500 | 0.1793 | 0.8550 |

### Practical application

- [tf-idf matrices multiplication for cosine similarity](https://medium.com/@rantav/large-scale-matrix-multiplication-with-pyspark-or-how-to-match-two-large-datasets-of-company-1be4b1b2871e)
- [Text datasets in matlab format](http://www.cad.zju.edu.cn/home/dengcai/Data/TextData.html)
- [Distributed dot product using Apache spark](https://georgheiler.com/2021/08/06/scalable-sparse-matrix-multiplication/)
- [cosine similarity movie recommendation](https://towardsdatascience.com/using-cosine-similarity-to-build-a-movie-recommendation-system-ae7f20842599)
- [Wiki People Dataset](https://www.kaggle.com/code/meuge672/tf-idf-and-knn-in-people-wikipedia-dataset/data)
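As a sketch of how the column-sampling AMM above could be used in this cosine-similarity setting, the snippet below approximates a document-document cosine-similarity matrix from a tf-idf matrix by sampling term columns. The random matrix merely stands in for the linked text datasets, and the function name and sampling rule are illustrative, not a prescription for those pipelines.

```python
import numpy as np

def approx_cosine_similarity(X, n_samples=500, rng=None):
    """Approximate the document-document cosine-similarity matrix of a
    tf-idf matrix X (documents x terms) by sampling term columns, AMM-style."""
    rng = np.random.default_rng(rng)
    # Row-normalize so that Xn @ Xn.T is the exact cosine-similarity matrix.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Sample terms with probability proportional to their squared column norms.
    w = np.sum(Xn**2, axis=0)
    p = w / w.sum()
    S_hat = np.zeros((X.shape[0], X.shape[0]))
    for j in rng.choice(X.shape[1], size=n_samples, p=p):
        x = Xn[:, j]
        S_hat += np.outer(x, x) / (n_samples * p[j])
    return S_hat

# Hypothetical nonnegative, mostly sparse matrix standing in for a real tf-idf matrix.
rng = np.random.default_rng(0)
X = rng.random((200, 5000)) * (rng.random((200, 5000)) < 0.05)
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
S_exact = Xn @ Xn.T
S_hat = approx_cosine_similarity(X, n_samples=500, rng=1)
print(np.linalg.norm(S_exact - S_hat) / np.linalg.norm(S_exact))
```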