# HW Records

Projects / homeworks completed during my master's degree.

* 3 Categories
  * DSP & Multimedia
  * ML & AI
  * HDL

---

## Year 1 — Digital Signal Processing

https://github.com/kliuhalo/DSP_HW

- Course content: moving signals between the time domain and the frequency domain, and the discrete vs. continuous nature of signals
  - Continuous -> Discrete (by sampling)
  - Fourier transforms (FT, FFT), filters (all-pass, low-pass, high-pass, ...), noise-reduction algorithms, ...
  - Processing and analysing digital signals (discrete-time signals such as audio, video, images, sensor data, etc.)
  - Basic audio processing with standard tools
- Languages: MATLAB, Python

### HW1

- **Write a MATLAB program to generate a discrete-time exponential signal. Use this function to plot the exponential x[n] = (0.9)^n over the range n = 0, 1, 2, …, 20.**
- **Given the difference equation y[n] − 1.8 cos(π/16) y[n−1] + 0.81 y[n−2] = x[n] + (1/2) x[n−1], generate and plot its impulse response h[n]**
  - (a) using the recursion y[n] = 1.8 cos(π/16) y[n−1] − 0.81 y[n−2] + x[n] + (1/2) x[n−1]
  - (b) using the `filter` function. Plot h[n] in the range −10 ≤ n ≤ 100.

### HW2

- **Download an audio signal file with a sampling rate of 16 kHz from the course Web site and process the signal as follows.**
  - a. Change the sampling rate of the audio signal to 12 kHz.
- fs is the sample rate: each original sample spans 1/16000 s, so the number of samples times the sample period gives the duration of the wav.
- Between nearest-neighbor interpolation and linear (bilinear) interpolation I chose the latter, which turns the samples into a continuous function.
- Resample with a new sample period of 1/12000 s; the last sample falls at the wav duration, so wav_duration × 12000 points are taken in total.
- Without normalization at the end, the output clips audibly.

### HW4

- **Download the two audio signal files with a sampling rate of 16 kHz (one clean, one noisy) from the course Web site and process the signals as follows.**
  - a. Show the spectrogram of the two audio signals.
  - b. Remove the noise: analyze the noise spectrum from the two audio files and design a filter to remove it.
- Used a MATLAB Butterworth band-pass filter to keep the band containing the fundamental frequency (f0) and the formants (f1 to f4) of the telephone-like speech.
- Some quieter low-frequency noise remains, because the filter cannot remove it completely at low frequencies.
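The homework itself was done in MATLAB; below is a minimal Python/SciPy sketch of the same kind of band-pass denoising. The file names and band edges are placeholders, not the actual assignment values.

```
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

fs, noisy = wavfile.read("noisy_16k.wav")      # placeholder file name
if noisy.ndim > 1:
    noisy = noisy[:, 0]                        # keep one channel if the file is stereo
noisy = noisy.astype(np.float64)

# 4th-order Butterworth band-pass keeping roughly the voice band (f0 and lower formants)
b, a = butter(4, [300, 3400], btype="bandpass", fs=fs)
clean = filtfilt(b, a, noisy)

# normalise so the result does not clip when written back as 16-bit PCM
clean = clean / np.max(np.abs(clean))
wavfile.write("denoised_16k.wav", fs, (clean * 32767).astype(np.int16))
```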
---

## Year 1 — Digital Music Signal Analysis

* Audio software development with the JUCE Framework
  * JUCE is the most widely used framework for audio application and plug-in development. It is an open-source C++ codebase that can be used to create standalone software on Windows, macOS, Linux, iOS and Android, as well as VST, VST3, AU, AUv3, AAX and LV2 plug-ins.
* A synthesizer is built from many components — filter, mixer, ADSR envelope, and so on — and each component has its own job.
* Reverb: simulates how sound behaves in different spaces; the echo in a church sounds very different from the acoustics of a small room.
* Language: C++ (OOP)
* Matrix operations and impulse-based testing of the underlying digital-signal behaviour
* GUI

### Homework

https://hackmd.io/@Kaykkk/SkfWOSE4F

#### HW1

Change the sine-wave synthesizer to output square, triangle and sawtooth waves.

#### HW2

- Customized Slider
- Draw the waveform and spectrum on screen
- Add a drop-down menu to control the output waveform

## Project

https://github.com/kliuhalo/Reverb/tree/main

### Implementation

![Screenshot 2023-11-20 18:56](https://hackmd.io/_uploads/ByNuoh_V6.png)

- in_delay: delay applied to the incoming signal
- distrib: 8×2 matrix that simulates the sound spreading out in 8 directions
- allpass_comb: models how the sound changes when it hits a wall (**needs careful design**)
- feedbackmatrix: after hitting a wall, each path splits into 8 directions again; its eigenvalues must not exceed 1 for the system to stay stable
- delayfilters: model the attenuation of sound travelling through air (**needs careful design**)
- fbdelaylines: each direction takes its own amount of time before hitting a wall again (**needs careful design**)

# Files

## Matrix

- class Matrix
  - provides matrix arithmetic functions
  - overloads `operator *`
  - provides `inverse`, `muly`, `diag`, etc.

## IIR

Everything below inherits from the base class **Module** and overrides the pure virtual function update:
`virtual float* update(float* input) = 0;`

- delay // a delay line implemented with a queue, up to 8 delay lines
- delay2
  - ![Screenshot 2023-12-18 16:07](https://hackmd.io/_uploads/r1z3TdpI6.png)

```
// Multi-channel delay line: each of the inputDim channels (inputDim <= 8) has its own
// circular buffer, and update() returns the samples delayed by numDelaySamples.
template <int inputDim>
class Delay2 : public Module<inputDim> {
private:
    float* arr[8];                 // one circular buffer per channel
    int pos[8]{ 0 };               // current read/write position per channel
    int len[8];                    // buffer length (delay in samples) per channel
    float outputBuffer[8]{ 0 };    // delayed samples returned by update()

public:
    Delay2(const int numDelaySamples[]) {
        for (int i = 0; i < inputDim; i++) {
            arr[i] = new float[numDelaySamples[i]]{ 0 };
            len[i] = numDelaySamples[i];
        }
    }
    Delay2(int numDelaySamples) {
        for (int i = 0; i < inputDim; i++) {
            arr[i] = new float[numDelaySamples]{ 0 };
            len[i] = numDelaySamples;
        }
    }
    ~Delay2() {
        for (int i = 0; i < inputDim; i++) delete[] arr[i];
    }
    float* update(float* input) override {
        for (int i = 0; i < inputDim; i++) {
            outputBuffer[i] = arr[i][pos[i]];   // read the oldest sample
            arr[i][pos[i]] = input[i];          // overwrite it with the newest one
            pos[i]++;
            if (pos[i] == len[i]) pos[i] = 0;   // wrap around the circular buffer
        }
        return outputBuffer;
    }
};
```

- Comb
  - input -> [allpass filter + delay (param: 100 samples)]
- allpass
  - implemented with the Matrix file:
    **y[n] = r·r·x[n] − 2r·cos(theta)·x[n−1] + x[n−2] + 2r·cos(theta)·y[n−1] − r·r·y[n−2]**
  - 8 channels
- Lowpass // the delay filters use a one-pole low-pass
  - the cutoff frequency a is mapped to the filter coefficient with `this->a = exp(-(1.f/44100)*6.283*a);`
  - the larger the resulting coefficient a, the more high frequencies are removed; as a → 1 almost no input passes through and the energy dies out
  - output = input * (1 − a) + feedback * a
- Matrix
  - distrib // 2×8 matrix, simulates the sound spreading out in 8 directions
  - outdistribute matrix // 8×2 matrix, mixes back down to 2 channels
  - feedback matrix
    - M = V·D·V^(−1), an 8×8 matrix
- Added a DC blocker
  - the signal sometimes carries a DC component that has to be filtered out
  - y[n] = x[n] − x[n−1] + R·y[n−1]
- Findings on stability:
  - a sufficient condition for no channel to blow up is that the sum of each row does not exceed 1 — requiring every eigenvalue to be < 1 is not enough
  - only the short delays need the V·D·V^(−1) construction to maintain the reverb length
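To illustrate the two stability checks mentioned above, here is a small NumPy sketch; the 8×8 matrix below is made up for illustration and is not the matrix used in the plug-in.

```
import numpy as np

# Hypothetical feedback matrix M = V D V^-1 with eigenvalues below 1 by construction
rng = np.random.default_rng(0)
V = np.linalg.qr(rng.standard_normal((8, 8)))[0]   # an orthogonal basis
D = np.diag(rng.uniform(0.6, 0.95, size=8))        # eigenvalue magnitudes < 1
M = V @ D @ np.linalg.inv(V)

eig_mags = np.abs(np.linalg.eigvals(M))            # eigenvalue criterion
row_sums = np.abs(M).sum(axis=1)                   # per-row sum criterion described above

print("max |eigenvalue|:", eig_mags.max())
print("max row sum     :", row_sums.max())         # can exceed 1 even when all |eig| < 1
```

Bounding every row sum of absolute values by 1 bounds the matrix infinity norm, which in turn bounds the spectral radius, so the row-sum check is the stronger of the two conditions.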
**PluginProcessor**

- PluginProcessor handles the audio and MIDI signals; the audio-processing algorithms also run here.
- juce::Synthesiser is a base class; we inherit from it to build our own synthesizer.

```
1. To build a synthesizer we first create a juce::SynthesiserSound and a juce::SynthesiserVoice. The Sound describes the timbres the synthesizer can produce, and the Voice actually produces the sound.
2. Give the synthesizer one or more sounds and voices with addVoice() and addSound().
3. The DAW repeatedly calls renderNextBlock() to produce audio. Any MIDI events are parsed for note on/off messages, which are used to start and stop the Voices.
4. Before renderNextBlock(), always call setCurrentPlaybackSampleRate() so the synthesizer knows the current sample rate and can pitch its output correctly.
5. prepareToPlay(): everything the DAW needs done before playback starts.
6. processBlock(): runs the audio-processing algorithm.
```

**PluginEditor**

```
- Responsible for the GUI; it communicates with the synthesizer through the PluginProcessor passed into it.
- Inherits from juce::Component, the base class of every UI element.
```

![Screenshot 2023-11-20 19:40](https://hackmd.io/_uploads/SySRrTuE6.png)

**Parameters:**

> - decay (0~1): the values in the matrix
> - damping (0~22026): the cutoff frequency
>   - SetDamping: `delayFilters.setA(_damping);`
> - Modulation depth (0~1): strength of the APF
>   - `allpass.Set( R );`
> - addImpulse button
> - chooseButton: choose an impulse-response file
> - channel (-1~7): GUI for testing the impulse response
>   - -1: normal reverb
>   - 0~7: for testing — pick a single track from the delay filter into fbdelayfilter, bypassing the feedback loop, to check whether the amplitude response of the allpass and the feedback matrix exceeds 1
> - cutoff: range a = 0~100000 (Hz)
> - Added a tester that can plot the z-transform and the impulse response (phase + amplitude)

- Feeding an impulse in (all 8 input channels set to 1):
  - z-transform of the whole reverb's impulse response
    ![Screenshot 2023-11-20 19:17](https://hackmd.io/_uploads/S1XPeT_ET.png)
  - pole locations (allpass parameters r = 0.8, theta = 1)
    ![Screenshot 2023-11-20 19:18](https://hackmd.io/_uploads/BJY5gp_4a.png)
  - comb filter — gray: values exceed the range representable by float
    ![Screenshot 2023-11-20 19:19](https://hackmd.io/_uploads/HkwplpdVT.png)
- Each channel has to be tested separately to find parameter settings whose amplitude response (dB) never exceeds 1.
  - For example, channel 0 with r = 0.5, theta = 0.7
    ![Screenshot 2023-11-20 19:23](https://hackmd.io/_uploads/HJo6-a_4p.png)

```
- Results:
  1. The smaller theta is, the more the allpass affects low frequencies — the phase of the allpass's z-transform starts dropping at a lower frequency.
  2. The larger R is, the closer the allpass's zeros get to the unit circle.
- Inference:
  1. The allpass filter affects the phase above a certain frequency; theta sets that frequency and r sets how strongly it acts.
  2. There is probably no fixed recipe for tuning the allpass so that the reverb stays stable — you just tune it until it is stable.
```

**SynthVoice**

- juce::SynthesiserVoice requires implementing the following functions:

```
canPlaySound()
startNote()
stopNote()
renderNextBlock()
pitchWheelMoved()
controllerMoved()
```

For example:

```
bool SynthVoice::canPlaySound (juce::SynthesiserSound* sound)
{
    return dynamic_cast<SynthSound*>(sound) != nullptr;
}
```

**SynthSound.h**

- Determines how the synthesizer produces sound; the number of voices is how many notes the synthesizer can play at the same time.

---

# Multimedia and Deep Learning

## Year 1 — Data Mining

- Use statistics, machine learning and databases to find patterns in large amounts of data.
- Discover valuable information in a database, turn it into an understandable structure, and use it for decision making.

**HW:** https://github.com/kliuhalo/Datamining/tree/main

**Kaggle Contest: Jigsaw Rate Severity of Toxic Comments**

- Task: score how toxic each sentence is (the more toxic, the higher the score); the competition asks for scores on about 7,500 sentences.
  Note: only the relative ranking of toxicity matters — there is no absolute score.

![Screenshot 2023-12-18 21:23](https://hackmd.io/_uploads/HJKx_a6LT.png)

- Test data: the sentences to be scored. Output: a toxicity score per sentence.
- Evaluation: the predicted toxicity scores are used to rank each comment pair. If the ranking matches the annotators' ranking, the pair scores 1; otherwise it scores 0.
- Dataset: no training data is provided, so we collected data from related past competitions:
  - Toxic data
  - Ruddit data
  - Toxic CLEAN data
- Our model: pretrained BERT + fc + ReLU / Toxic-BERT

![Screenshot 2023-12-18 21:42](https://hackmd.io/_uploads/SJTNn6pI6.png)

Mid Report
https://docs.google.com/presentation/d/1Xv31x1vNFThwJMPYIZZLWBW-bY3ioGRJ4D_Dx24hGgs/edit#slide=id.g107f0ceedfe_0_48

- Preprocessed the validation data with the BERT auto tokenizer.
- Experimented with both the BERT and HateBERT models.
- Used separate optimizers for the BERT layers and the linear layer.
- Connected TensorBoard to visualize the training process.
- Collected related toxic-comment data.

Final Report
https://docs.google.com/presentation/d/1ef8O9Qog79Lu9b1X9Tc76XjgYSB3qNQN6aNJq89iXNw/edit#slide=id.gd1b53d9468_22_17

### HW1 Association Analysis

**Dataset**

- Dataset 1: selected from kaggle.com / UCI
- Dataset 2: generated with the IBM Quest Synthetic Data Generator
  - https://sourceforge.net/projects/ibmquestdatagen/
  - generate several different datasets

**Frequent itemset mining**

- Implement the Apriori algorithm and apply it to these datasets (a minimal sketch follows this list)
  - Hash tree (optional)
- FP-growth
- Generate association rules
- Testing data
- Compare the results
- Answer which rules have high/low support and high/low confidence
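A minimal Apriori sketch in Python; the toy baskets and the min_support value are illustrative, not the competition datasets.

```
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (frozensets) with support >= min_support."""
    n = len(transactions)
    transactions = [set(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # L1: frequent single items
    items = {i for t in transactions for i in t}
    current = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    frequent = {s: support(s) for s in current}

    k = 2
    while current:
        # join step: merge frequent (k-1)-itemsets; prune step: all (k-1)-subsets must be frequent
        candidates = set()
        for a, b in combinations(current, 2):
            union = a | b
            if len(union) == k and all(frozenset(sub) in frequent
                                       for sub in combinations(union, k - 1)):
                candidates.add(union)
        current = [c for c in candidates if support(c) >= min_support]
        frequent.update({c: support(c) for c in current})
        k += 1
    return frequent

# toy usage
baskets = [["milk", "bread"], ["milk", "diapers", "beer"],
           ["bread", "diapers"], ["milk", "bread", "diapers"]]
print(apriori(baskets, min_support=0.5))
```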
**Algorithm**

1. FP-Growth
2. Apriori

### HW2 Classification Model

**Flow**

- Step 1: Design a set of rules to classify data, e.g., classify students with good performance.
  - Design k features/attributes for your problem first.
  - Use "absolutely right" rules to generate positive and negative data (number of data = M).
- Step 2: Use the data generated in Step 1 to construct the classification model.
  - A decision tree is the basic requirement; you can add more classification models.
- Step 3: Compare the rules in the decision tree from Step 2 with the rules used to generate the "right" data.
- Step 4: Discuss anything.

### Conclusions

- A rule-based dataset suits a decision tree or random forest.
- If the rules are too complex, the decision tree may fail to learn the absolutely right rule.
- An oblique decision tree is computationally more expensive; an ordinary decision tree does not learn a rule that combines two attributes — each node tests one independent feature at a time.
- When the rules are based on a binary tree, a decision tree is the best fit.
- Naive Bayes is not explainable and performed particularly badly here; it is better suited to independent attributes.

### HW3

**1. Implement HITS and PageRank to calculate authority, hub and PageRank values**

- HITS
  - New authority: sum of the hub scores of its parents
  - New hub: sum of the authority scores of its children
  - Normalize
- PageRank
  - the core algorithm of the Google search engine

**2. Implement SimRank to calculate the pair-wise similarity of nodes (choose any parameter C you like)**

- SimRank
  - key idea: two objects are considered similar if they are referenced by similar objects

**3. Find a way (e.g., add/delete some links) to increase the hub, authority, and PageRank of Node 1 in the first 3 graphs respectively.**

1. Result analysis and discussion
   - HITS
     - A good hub points to many other pages; a good authority is linked by many hubs.
     - Used less often in search engines than the other two.
   - PageRank
     - Assumption: important pages tend to receive more references.
     - Measures only importance, while SimRank can measure similarity.
   - SimRank
     - A node is most similar to itself.
     - Time: O(k·|E|²)
     - Space: O(|V|²)
2. Computation performance analysis
3. Discussion
   - More limitations of link-analysis algorithms
   - Practical issues when implementing these algorithms on the real Web (time cost)
   - The effect of the "C" parameter in SimRank
   - Design a new link-based similarity measurement
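Of the three algorithms above, PageRank is the easiest to sketch; below is a minimal power-iteration version. The toy graph and damping factor are illustrative, and HITS/SimRank follow a similar iterate-and-normalize pattern.

```
import numpy as np

def pagerank(adj, d=0.85, tol=1e-8, max_iter=100):
    """Power-iteration PageRank on an adjacency dict {node: [outgoing neighbours]}."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    # column-stochastic transition matrix; dangling nodes link uniformly to everyone
    M = np.zeros((n, n))
    for u, outs in adj.items():
        if outs:
            for v in outs:
                M[idx[v], idx[u]] = 1.0 / len(outs)
        else:
            M[:, idx[u]] = 1.0 / n
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * M @ r
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return dict(zip(nodes, r))

# toy graph, not one of the assignment graphs
print(pagerank({1: [2, 3], 2: [3], 3: [1], 4: [3]}))
```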
---

## Year 1 — Multimedia Analysis

https://github.com/kliuhalo/MM_analyze

### HW2 Video Shot-Change Detection

- Three videos; since their shot changes behave differently, each video needs a different algorithm.
- Video 1: news footage
  - In news footage nearly all scene changes are hard cuts, and hard cuts are usually easy to detect.
  - [1] Convert to grayscale with CV2, then use CV2's chi-square distance between the histograms of neighbouring frames.
  - [2] Split each frame into its R, G, B channels, compute the Euclidean distance over the 256 histogram bins dimension by dimension, and sum.
- Video 2: besides hard cuts, gradual scene changes have to be handled.
  - Intuitively, features carry important information about frame-to-frame change.
  - Use SIFT (Scale-Invariant Feature Transform) for feature matching.
  - Combine the number of matched features with the histogram difference, smooth, then decide on hard cuts.
  - At a hard cut the histogram difference shows a large jump.
  - Compute the slope of the histogram difference and of the feature-match count.
  - At the moment of a cut the histogram slope shoots up and then immediately turns negative.
  - At a hard cut the slope of the feature-match count is usually positive.
- Video 3: many transitions use camera motion.
  - Same approach as Video 2.
  - With a loose threshold many camera-motion frames are included; with a strict threshold camera motion is not treated as a transition.

### HW4 GMM-Based Color Image Segmentation

- Build a Gaussian Mixture Model to perform color image segmentation.
- Decide for every pixel whether it belongs to the field or not.
- Data: two frames from a soccer broadcast and their corresponding field masks.
- Accuracy:
  - A GMM trained on the first image predicts the first image with the highest accuracy.
  - A GMM trained on both images predicts the first image fairly well,
  - but predicts the second image worse.
- Result: 2 or 3 GMM components work best.
- With a high n_components the accuracy sometimes jumps unexpectedly high; this instability means accuracy alone is not a reliable way to judge the model.

### HW5 Music Genre Classification

- 10 music genres, 50 clips per genre, each clip 30 seconds long.
- Validation: 5-fold cross-validation.
- Features (a rough librosa sketch follows this section):
  - step 1: Mel-frequency cepstral coefficients (MFCCs) — each MFCC frame represents the audio energy over a short window, cepstrally analyzed on the mel spectrum.
  - step 2: convert to the deltas between neighbouring MFCC frames.
  - step 3: chromagram of the STFT with n_chroma = 12.
  - step 4: beat features: [mfcc_delta, chroma_stft].
  - step 5: reshape into a 170-dimensional array.
- Alternative feature: decompose the time-domain audio into harmonic and percussive components.
- Models compared: LogisticRegression, MLPClassifier, SVM, KMeans, KNN.
  - Judged by the average accuracy over the five folds.
- Result: accuracies are all fairly poor, mostly below 0.6.
- Building my own model:
  - Three NIN blocks followed by global average pooling into ten classes.
  - Result:
    - Trained for 100 epochs; the average over five-fold cross-validation is about 0.725.
    - The final model reaches above 75%, with a BCE loss of about 0.11.
    - Much better than the various sklearn ML algorithms tried initially.
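The HW5 feature pipeline roughly follows the standard librosa recipe; the sketch below reconstructs it under that assumption (parameter values such as n_mfcc and the final aggregation are guesses, not the exact steps behind the 170-dimensional vector).

```
import numpy as np
import librosa

def extract_features(path):
    """Beat-synchronous MFCC-delta + chroma features for one 30-second clip."""
    y, sr = librosa.load(path, duration=30.0)
    # step 1-2: MFCCs and their frame-to-frame deltas
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    mfcc_delta = librosa.feature.delta(mfcc)
    # step 3: chromagram of the STFT
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, n_chroma=12)
    # step 4: stack and aggregate between beat frames
    t = min(mfcc_delta.shape[1], chroma.shape[1])
    feats = np.vstack([mfcc_delta[:, :t], chroma[:, :t]])
    tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
    beat_feats = librosa.util.sync(feats, beats)
    # step 5: flatten to a fixed-length vector for the sklearn classifiers
    return beat_feats.mean(axis=1)
```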
---

## Year 1 — Smart Sensing and Mobile Computing

* Model backbones from classic papers and their implementation

### Week 4: Experimentation

- #### Implementation
  1. Implement your Convolutional Neural Network.
  2. Train your neural network on a GPU.
  3. Try different hyperparameters (e.g. batch_size, epoch); you need to produce three different results.
  4. Write up your experimental results in Google Docs or Word and upload them to Moodle, including:
     - the Convolutional Neural Network code
     - difficulties you encountered
     - different results obtained by adjusting hyperparameters, loss or transforms (at least 3 results)
- Conclusions
  - With fewer epochs the model converges faster.
  - A batch size that is too large suggests the model is underfitting.

### Week 9: Data Processing

- Get the dataset: `!wget https://download.pytorch.org/tutorial/hymenoptera_data.zip`
- Transform functions
  ![](https://hackmd.io/_uploads/HJLT-nc-a.png)
- Implementation
  - Task 1: Implement these transform functions and show the results; you can use the API or do it yourself. Finally, paste the image and your code into the .docx file.
  - Task 2: Apply the transform functions to the hymenoptera_data dataset and run the model. Observe the accuracy and compare it with the baseline accuracy. Then paste the accuracy and write your thoughts in the .docx file.
- Sample Colab notebook:
  - https://colab.research.google.com/drive/1U0DSM3scMH-tHGjy0d8XEO-da4Kq2m7p?usp=sharing

### Week 12: Region Proposal Network

- Dataset:
  - https://www.cis.upenn.edu/~jshi/ped_html/
  - 170 images with 345 labeled pedestrians
- Model:
  ![](https://hackmd.io/_uploads/B1Qpt5qb6.png)
- #### Implementation
  - Implementation 1
    - Load the pretrained VGG16 to extract image features
    - Cut off the last five layers
      <div> <img src="https://i.imgur.com/25H4XIv.png" width="500"/> </div>
  - Implementation 2
    - Write the RPN network
  - Implementation 3
    - Build the data flow in test_RPN:
      image batch -> VGG16 -> feature map -> RPN -> (anchor_locs, cls_scores, objectness_score)
    - Print the shape of each output tensor
- Colab notebook
  - https://colab.research.google.com/drive/1YydjJC6FOFEcyV3_uE3s0pu3U7AC3up-#scrollTo=eEfYQTjPuN7l
- Reference
  https://github.com/sorg20/RPN/blob/master/rpn.ipynb

### Week 13

- Dataset & model same as Week 12
- #### Implementation
  - Implementation 1
    - Implement IOU_threshold
      ![](https://hackmd.io/_uploads/ry7iJn5ZT.png)
  - Implementation 2
    - Calculate dy, dx, dw, dh
      ![](https://hackmd.io/_uploads/SywBl2q-p.png)
  - Implementation 3
    - Train the model (at least 4 epochs) and show the bounding-box predictions on data_loader_val.
    - e.g.
      - epochs: 4
      - validate(model, data_loader_val)
        <img src="https://i.imgur.com/sIDtfD0.png" alt="drawing" width="200"/>
- Colab notebook
  - https://colab.research.google.com/drive/1mqU_OPbmtnDjLMbBcOEpP8-OP9kddb-Y?usp=sharing

### Network in Network (NIN)

- Final: present one backbone and apply it in an implementation of our own.
- Concept
  ![](https://hackmd.io/_uploads/HyHqbTkGp.png)
- Colab notebook (implemented with Keras and TF)
  - https://colab.research.google.com/drive/1OTfRDNPvuPVVj_nmSenJxEPHdGl5vF8M#scrollTo=m2erAJzV2aL5

---

## Year 1 — Image Processing, Computer Vision and Deep Learning

---

## Year 2 — Introduction to Artificial Intelligence

**Homework** https://github.com/kliuhalo/AI_hw5_GMM_2022

### HW2 Genetic Algorithm

https://github.com/kliuhalo/AI_HW2_Genetic_algo

**Problem**
Job Assignment Problem — N tasks, N agents, and a cost matrix

**Input**

- in JSON
- input[i][j]: the cost of assigning task i to agent j

**Algorithm**

- Brute force
- Genetic algorithm

**Result**

- Brute force
  - enumerate permutations and compute each cost
- Genetic algorithm
  - see the homework report
- Comparison
  ![](https://hackmd.io/_uploads/BJP49HTA2.png)

### HW4 Nonparametric Regression Algorithms

https://github.com/kliuhalo/AI_HW4_knn_lw

**Problem**
Linear regression problem

**Input**

- data1.npz — simple regression dataset
  - the x, y coordinates of each point (x is the independent variable, y the dependent variable)
  - x shape: (1000,)
  - y shape: (1000,)
- data2.npz — multiple regression dataset
  - the x0, x1, y coordinates of each point (x0 and x1 are two independent variables, y is the dependent variable)
  - x shape: (1000, 2)
  - y shape: (1000,)

**Algorithm** (no direct library calls for the regression itself)

- **k-nearest-neighbors linear regression**
- **Locally weighted regression** (see the sketch after this section)
- Other methods: **polynomial regression**
- matplotlib may be used for visualization and NumPy for computation

**Result**

- See the report for details.
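As an illustration of the locally weighted regression mentioned above, here is a minimal NumPy sketch; the bandwidth tau and the toy data are illustrative, not the homework's data1.npz.

```
import numpy as np

def lwr_predict(X, y, x_query, tau=0.5):
    """Locally weighted linear regression: one weighted least-squares fit per query point."""
    X = np.asarray(X, dtype=float).reshape(len(X), -1)   # (n, d)
    Xb = np.hstack([np.ones((len(X), 1)), X])            # add a bias column
    xq = np.hstack([1.0, np.ravel(x_query)])
    # Gaussian kernel weights centred on the query point
    w = np.exp(-np.sum((X - np.ravel(x_query)) ** 2, axis=1) / (2 * tau ** 2))
    W = np.diag(w)
    theta = np.linalg.pinv(Xb.T @ W @ Xb) @ Xb.T @ W @ np.asarray(y, dtype=float)
    return xq @ theta

# toy 1-D data similar in shape to data1.npz
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.1 * np.random.randn(200)
print(lwr_predict(x, y, x_query=3.0, tau=0.3))
```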
### HW5 Gaussian Mixture Model

**Problem**

- Cluster the data with a Gaussian Mixture Model (GMM).

**Algorithm**

- EM algorithm
- Least squares + GMM

**See the report for details**: https://cubic-cycle-30f.notion.site/AI_hw5-6db77f58b0b44293b37cb5c6c6604116?pvs=4

## Year 1 — Digital IC Design

https://github.com/kliuhalo/IC_2022

- Flow
  - [RTL code] -> Pre-sim -> Synthesis -> [Gate-level netlist]
- Homework flow
  - step 1: Functional simulation (ModelSim)
  - step 2: Synthesis (Quartus)
  - step 3: Gate-level simulation (ModelSim)
- RTL coding
  - Hardware Description Language (HDL): Verilog
  - Simulation tool: ModelSim
- Synthesis = translation (into a Boolean representation) + optimization + mapping
  - the process of logic synthesis
  - Synthesis tool: Quartus provides the environment and tools for designers to produce circuits that satisfy
    - performance
    - area
    - testability
  - Input: HDL source code / Output: target technology

### HW1

- Combinational circuits
  - Half adder
  - Full adder
  - 1-bit ALU
  - 8-bit ALU

### HW2 Traffic Light System (TLS)

- **Traffic light controller**
- Sequential circuit
- Moore machine
- The always blocks are split into three parts:
  - State register
  - Next-state logic
  - Output logic

### HW3 Encoder & Decoder

- A problem from the IC/CAD Contest (Integrated Circuit Computer-Aided Design software implementation contest)
- **Implement an LZ77 encoder and decoder**
  - ![](https://hackmd.io/_uploads/S1Tuwwrgp.png)
  - ![](https://hackmd.io/_uploads/H16qvDrea.png)
- Search buffer: holds the string seen so far
- Lookahead buffer: holds the incoming string waiting to be encoded
- Find the substring in the search buffer that matches the lookahead buffer
- Encoder output: [offset, match_len, char_nxt]
  - offset: how far back in the search buffer the match starts
  - match_len: length of the longest match
  - char_nxt: the next character pushed in
- Decoder: for each encoded code produced by the encoder, one clock pushes one token forward into the search buffer
- States: Idle, Input, Calculate, Finish
- Tips
  - Keep different signals in different always blocks as much as possible.
  - If something can be a continuous `assign`, just use `assign`.
  - A `for` loop in Verilog unrolls into hardware — it is not a time dimension.

### HW4 Edge-Based Line Average Interpolation

- **Decompress an image**
- The decompressed result is scored with a Python script
- ![](https://hackmd.io/_uploads/S1qLDPBeT.png)
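For reference, here is a small NumPy sketch of the generic edge-based line average idea named in the HW4 title — interpolating a missing row along the direction of least difference between its neighbours. It is a software model under that assumption, not the graded hardware specification.

```
import numpy as np

def ela_upsample_rows(img):
    """Insert one interpolated row between every pair of rows using edge-based line averaging."""
    h, w = img.shape
    out = np.zeros((2 * h - 1, w), dtype=np.float32)
    out[::2] = img                                     # keep the original rows
    for i in range(h - 1):
        top = img[i].astype(np.float32)
        bot = img[i + 1].astype(np.float32)
        for j in range(w):
            jl, jr = max(j - 1, 0), min(j + 1, w - 1)  # clamp at the image border
            # three candidate edge directions: 45 degrees, vertical, 135 degrees
            diffs = [abs(top[jl] - bot[jr]), abs(top[j] - bot[j]), abs(top[jr] - bot[jl])]
            pairs = [(top[jl], bot[jr]), (top[j], bot[j]), (top[jr], bot[jl])]
            k = int(np.argmin(diffs))
            out[2 * i + 1, j] = (pairs[k][0] + pairs[k][1]) / 2   # average along the chosen edge
    return out

# toy usage on a random 8-bit image
print(ela_upsample_rows(np.random.randint(0, 256, (4, 6), dtype=np.uint8)).shape)
```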