# Convolution: Theory Review and Its Application
###### tags: `Tutorial`
* Theory Review
* Signal
* LTI system
* Impulse response
* Convolution theorem
* FIR
* Implementation
* circular buffer
* JUCE
* convolution vs. cross-correlation
* Application
* Reverberation / Cabinet
* Sampler
* HRTF
----
## 1. Theory Review
**TextBook:**
[Discrete-Time Signal Processing](https://www.amazon.com/Discrete-Time-Signal-Processing-3rd-Prentice-Hall/dp/0131988425)
by Alan Oppenheim and Ronald Schafer
Chapter 2
### 1.1 Signal Representation
* impulse / unit sample function or, simply, impulse
<img src="https://i.imgur.com/J1yfo1C.png" alt="drawing" width="200"/>
* Any discrete-time signal can be represented as a sum of delayed and scaled impulses (written out below).
<img src="https://i.imgur.com/11ZK2PA.png" alt="drawing" width="250"/>
### 1.2 LTI system
* LTI System: **L**inear **T**ime-**I**nvariant System
* A linear system satisfies two properties:
* Additivity
<img src="https://i.imgur.com/8I5t0CX.png" alt="drawing" width="350"/>
* Homogeneity
<img src="https://i.imgur.com/iLUcO93.png" alt="drawing" width="185"/>
* Combining two properties => **superposition property**
<img src="https://i.imgur.com/od3yAgS.png" alt="drawing" width="355"/>
* Time-Invariant System
* A system is time invariant if a shift of the input signal results in a corresponding shift of the output signal (see the summary after this list).
* An LTI system is completely characterized by its **impulse response**.
* Knowing the impulse response is sufficient to completely predict what the system will output for any possible input.
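The defining properties in equation form (standard definitions, for a system $y[n] = T\{x[n]\}$, matching the figures above):
$$
\begin{aligned}
\text{Additivity:}\quad & T\{x_1[n] + x_2[n]\} = T\{x_1[n]\} + T\{x_2[n]\} \\
\text{Homogeneity:}\quad & T\{a\,x[n]\} = a\,T\{x[n]\} \\
\text{Time invariance:}\quad & \text{if } x[n] \mapsto y[n], \text{ then } x[n-n_0] \mapsto y[n-n_0]
\end{aligned}
$$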
### 1.3 Impulse Response (IR)
* System **response** to the **impulse**
<img src="https://i.imgur.com/NusZ1H2.png" alt="drawing" width="250"/>
* Convolution
<img src="https://i.imgur.com/WW4gG7m.png" alt="drawing" width="230"/>
* We can compute the output of an LTI system by convolving the input with its impulse response (written out below).
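In equation form, the convolution sum (the same relation as in the figure above):
$$
y[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n-k] = (x * h)[n]
$$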
### 1.4 Convolution Theorem
* Convolution in time domain equals multiplication in frequency domain
* <img src="https://i.imgur.com/9fC3mhp.png" alt="drawing" width="200"/>
* <img src="https://i.imgur.com/za2B186.png" alt="drawing" width="150"/>
* *H* is the frequency domain **transfer function**
* read proof in 3.4.7
* Time Complexity (1D)
* time: <img src="https://i.imgur.com/Adypug3.png" alt="drawing" width="70"/>
* freq: <img src="https://i.imgur.com/4x9fYVl.png" alt="drawing" width="95"/>
* https://github.com/fkodom/fft-conv-pytorch
<img src="https://github.com/fkodom/fft-conv-pytorch/raw/master/doc/benchmark.png" alt="drawing" width="1500"/>
* Faster than direct convolution for large kernels.
* Much slower than direct convolution for small kernels.
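A minimal sketch of direct (time-domain) convolution, to make the cost visible: two nested loops over the signal and the kernel give $O(N_x \cdot N_h)$ operations, whereas the FFT route costs $O\big((N_x+N_h)\log(N_x+N_h)\big)$. The function name and types below are illustrative, not from any library.
```cpp
#include <cstddef>
#include <vector>

// Direct linear convolution: y[n] = sum_k h[k] * x[n - k].
// Cost is O(len(x) * len(h)), which is why FFT-based convolution
// wins once the impulse response gets long.
std::vector<float> convolve (const std::vector<float>& x,
                             const std::vector<float>& h)
{
    std::vector<float> y (x.size() + h.size() - 1, 0.0f);

    for (std::size_t n = 0; n < x.size(); ++n)      // scatter each input sample
        for (std::size_t k = 0; k < h.size(); ++k)  // across the whole kernel
            y[n + k] += x[n] * h[k];

    return y;
}
```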
### 1.5 FIR (Finite Impulse Response) Filter
* A finite impulse response (FIR) filter is a filter whose impulse response is of finite duration.
* For a causal discrete-time FIR filter of order N, each value of the output sequence is a weighted sum of the N + 1 most recent input values (see the difference equation after this list).

* [sinc filter](https://hackmd.io/@v10vZJlnRcKyhTtriMUrsQ/Sye8FPeG5)
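The weighted-sum description above, written as the standard difference equation (the coefficients $b_i$ are the values of the finite impulse response):
$$
y[n] = \sum_{i=0}^{N} b_i\, x[n-i] = b_0 x[n] + b_1 x[n-1] + \cdots + b_N x[n-N]
$$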
## 2. Implementation
### 2.1 Circular Buffer
* or circular queue
* FIFO (First-In-First-Out)
* Suitable for data streams
<img src="https://upload.wikimedia.org/wikipedia/commons/f/fd/Circular_Buffer_Animation.gif" alt="drawing" width="400"/>
* When the receptive field is larger than the buffer size
<img src="https://i.imgur.com/1zwG8xW.png" alt="drawing" width="300"/>
* The buffer therefore only needs to be as large as the receptive field (the filter length); see the sketch below.
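A minimal sketch (class and variable names are illustrative, not JUCE APIs) of an FIR filter whose delay line is a circular buffer sized to the receptive field:
```cpp
#include <cstddef>
#include <vector>

// Illustrative FIR filter with a circular delay line.
// The delay line has exactly as many slots as the filter has taps
// (i.e. buffer size == receptive field).
class FIRCircular
{
public:
    explicit FIRCircular (std::vector<float> coefficients)
        : h (std::move (coefficients)), delay (h.size(), 0.0f) {}

    float processSample (float x)
    {
        delay[writeIndex] = x;                        // overwrite the oldest sample with x[n]

        float y = 0.0f;
        std::size_t readIndex = writeIndex;
        for (std::size_t k = 0; k < h.size(); ++k)    // y[n] = sum_k h[k] * x[n - k]
        {
            y += h[k] * delay[readIndex];
            readIndex = (readIndex == 0) ? delay.size() - 1 : readIndex - 1;
        }

        writeIndex = (writeIndex + 1) % delay.size(); // advance the write position
        return y;
    }

private:
    std::vector<float> h;       // impulse response (filter taps)
    std::vector<float> delay;   // circular delay line
    std::size_t writeIndex = 0;
};
```
Each call writes the newest sample over the oldest one and reads backwards with wrap-around, so the delay line never needs to be shifted.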
### 2.2 Convolution in JUCE
[dsp::Convolution](https://docs.juce.com/master/classdsp_1_1Convolution.html#details)
[dsp::FIR](https://docs.juce.com/master/classdsp_1_1FIR_1_1Filter.html#details)
```cpp=
void AudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
    //...
    juce::dsp::ProcessSpec spec;
    spec.sampleRate       = sampleRate;
    spec.maximumBlockSize = static_cast<juce::uint32> (samplesPerBlock);
    spec.numChannels      = getTotalNumOutputChannels();

    convIn.reset();

    // convInBuffer is a member juce::AudioBuffer<float> already filled with the IR.
    convIn.loadImpulseResponse (std::move (convInBuffer),
                                spec.sampleRate,               // sample rate of the IR data
                                juce::dsp::Convolution::Stereo::yes,
                                juce::dsp::Convolution::Trim::no,
                                juce::dsp::Convolution::Normalise::no);
    convIn.prepare (spec);
}

void AudioProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                   juce::MidiBuffer& midiMessages)
{
    //...
    // Wrap the I/O buffer and convolve it in place with the loaded IR.
    juce::dsp::AudioBlock<float> block (buffer);
    juce::dsp::ProcessContextReplacing<float> context (block);
    convIn.process (context);
}
```
### 2.3 Convolution vs. Cross-correlation
* [Correlation vs Convolution Filtering](https://medium.com/@aybukeyalcinerr/correlation-vs-convolution-filtering-2711d8bb3666)

**Warning: conv in PyTorch is cross-correlation**
> [Why are PyTorch “convolutions” implemented as cross-correlations?](https://discuss.pytorch.org/t/why-are-pytorch-convolutions-implemented-as-cross-correlations/115010/2)
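The only difference between the two operations is whether the kernel is flipped (standard 1-D definitions):
$$
\text{convolution: } y[n] = \sum_{k} h[k]\, x[n-k]
\qquad
\text{cross-correlation: } y[n] = \sum_{k} h[k]\, x[n+k]
$$
Cross-correlation is convolution with a time-reversed kernel; for learned CNN kernels the flip is irrelevant, since the network simply learns the flipped weights.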
## 3. Application
### 3.1 Effect: Reverberation / Cabinet
Convolving a dry signal with a **room impulse response (RIR)** applies that room's reverberation; the same idea with a speaker-cabinet IR gives cabinet simulation.
* [How to Record a Lion in a Concert Hall](https://tomroelandts.com/articles/how-to-record-a-lion-in-a-concert-hall)
* [Youtube: Creating Impulse Responses](https://youtu.be/1egKAtC16e8?t=53)
* [Altiverb](https://www.audioease.com/altiverb/)
### 3.2 Instrument: Sampler
* x[n]: MIDI signal
* h[n]: recorded samples (see the sketch after this list)
* Differentiable Renderer / Neural Synthesis
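One way to read the two bullets above (an interpretation, not a quote from the textbook): treat the note-ons as a scaled impulse train and playback as its convolution with the recorded sample $h[n]$:
$$
x[n] = \sum_i a_i\, \delta[n - n_i] \quad\Rightarrow\quad y[n] = (x * h)[n] = \sum_i a_i\, h[n - n_i]
$$
where $n_i$ is the onset time and $a_i$ the velocity of the $i$-th note.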
### 3.3 Spatial Audio: HRTF
* **H**ead **R**elated **T**ransfer **F**unction is a set of filters/IRs that characterize how an ear receives a sound from a point in space.
* ITD (interaural time difference): direction
* ILD (interaural level difference): direction
* pinnae: elevation (up/down)
* Binaural acoustic experiences
* Use **headphones** to reproduce a 3D spatial listening experience
* One pair of filters (left ear / right ear) per source position
<img src="https://i.imgur.com/8pUluMO.png" alt="drawing" width="350"/>
* [Dummy head recording](https://en.wikipedia.org/wiki/Dummy_head_recording)
* [Youtube: Binaural Audio Recording](https://www.youtube.com/watch?v=vGt9DjCnnt0&ab_channel=Markabe)
* [DEMO](https://www.youtube.com/watch?v=c6SDKfHCDm8&ab_channel=WildCat)
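In equation form (the notation is illustrative): for a source at direction $(\theta, \phi)$, the same mono signal $x[n]$ is convolved with that direction's left-ear and right-ear impulse responses to get the two headphone feeds:
$$
y_L[n] = \big(x * h_L^{(\theta,\phi)}\big)[n], \qquad y_R[n] = \big(x * h_R^{(\theta,\phi)}\big)[n]
$$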
---
## Reference
https://tomroelandts.com/articles/impulse-response