# 1. General Intuition of Audio DSP

**Transfer Function:**

$$T(s)=\frac{a_3s^3+a_2s^2+a_1s+a_0}{b_3s^3+b_2s^2+b_1s+b_0}$$

$$TestAudioClip = AudioClip \circledast TransferFunction$$

The coefficients $a_0, a_1, a_2, a_3, b_0, b_1, b_2, b_3$ will be used to create a test audio file.

- The test audio clip will be constructed by convolving an audio clip with the transfer function. Every filter has an associated transfer function and a response curve.
- **Is all of this audio DSP related functionality done through external services?**

# 2. Preparation for Test Flow QnA

Questions:

- While asking the user about the problems they are having because of their hearing disability, we have three options:
  - YES
  - SOMETIMES
  - NO
- As we discussed, do we need to map those to certain numerical values to calculate the initial gain?
- Gender is mapped to a numerical value to calculate gain; are any other parameters mapped to numerical values?

# 3. Choosing a Device

- Before actually jumping into the audio test screen, the user chooses a device, the device OS, an audio output device (headphones, earbuds), and the audio output device type.
- Do we need to map those to certain numerical values **to calculate the initial gain**? If so, we might need the Excel/CSV file containing the record of devices and the thresholds to be used.
- Calibration test after the user fills in the details of a device. (If it's not an approved device, do we need to do a calibration test with the combination of mobile device and audio output device?)

# 4. About the Transition of Screens in the Audio Test Screen

## From the presentation slide

- Given details about **age and sex**, we can find the **initial parameters** with which the audio is to be built.
- Based on the user's feedback about the words they have heard, we calculate the **parameters** to be used for the second sound. With those parameters we build a second audio clip to be played.
- After the user finishes answering all the questions, we provide them with a graph of their hearing ability, i.e. the frequency response.

**i.e.**

- Given age, sex -> initial parameters
- Initial parameters + (audio) -> audio clip to be used in the second test
- Phases of the test: what are the 2 phases of the test about?

## Questions from this section

- **1. Is the audio clip from which we construct the test audio clip always the same?**
  > What we thought was: there will be a single audio clip and we just modify it with different gain, frequencies, etc. So, are we storing different audio files on the backend along with the correct answer? We'll store the audio files in cloud storage.
- **2. What parameters about an audio clip do we need to store on the backend?**
  > As we have discussed, an audio clip has parameters such as gain, frequencies, etc. While the user is presented with a sound clip, we can store the data as below (a hedged model sketch follows these questions):

```
TestAudioSamples (Table)
- audio_file -> audio file itself
- correct_answer -> correct answer
- options -> options to be shown

AudioTest (Table)
- audio_sample : FK (TestAudioSamples)
- no_of_frequencies
- individual_frequencies
- gain
- ..... (any other??)
- answer (option user has selected)

Test (Table)
- Will have many AudioTest objects in relation.
```

- **3. What parameters do we need to store for results and graphs?**
  > We need to show the frequency response graph at the end, based upon the options the user chooses when indicating the word they have heard.
  > Do options have a weightage, i.e. do we map options to certain values to derive the frequency response graph? (A sketch of one possible mapping follows these questions.)
  > If not, what extra parameters do we need to store other than the information above in the AudioTest table?
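As a concrete reading of the pseudocode schema above, here is a hedged Django-style sketch. A Django backend is an assumption, not something settled in these notes; field names mirror the pseudocode, while types, options, and storage settings are placeholders.

```python
# Hedged sketch of the tables above as Django models (Django backend is an
# assumption, not something settled in these notes). Types and options are
# placeholders chosen for illustration.
from django.db import models


class TestAudioSample(models.Model):
    audio_file = models.FileField(upload_to="test_audio/")   # the audio file itself (cloud storage)
    correct_answer = models.CharField(max_length=64)          # the correct answer
    options = models.JSONField()                              # options to be shown


class Test(models.Model):
    created_at = models.DateTimeField(auto_now_add=True)      # one Test has many AudioTest rows


class AudioTest(models.Model):
    test = models.ForeignKey(Test, on_delete=models.CASCADE, related_name="audio_tests")
    audio_sample = models.ForeignKey(TestAudioSample, on_delete=models.PROTECT)
    no_of_frequencies = models.PositiveIntegerField()
    individual_frequencies = models.JSONField()               # list of frequencies in the clip
    gain = models.FloatField()
    answer = models.CharField(max_length=64, blank=True)      # option the user selected
```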
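On the weightage question just above: purely as a discussion aid, a minimal Python sketch of one possible option-weightage-to-curve mapping is below. The dataclass, the 5 dB-per-weightage-step penalty, and the one-representative-frequency-per-test simplification are all illustrative assumptions, not agreed behaviour.

```python
# Hypothetical sketch only: derive (frequency, estimated threshold) points for
# the frequency response graph from per-test results. The weightage -> dB
# mapping and the single representative frequency per test are assumptions.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class TestResult:
    frequency_hz: float        # representative frequency of the test clip (simplification)
    gain_db: float             # gain the clip was played at
    selected_weightage: int    # weightage of the option the user selected
    correct_weightage: int     # weightage of the correct option


def response_curve_points(results: List[TestResult]) -> List[Tuple[float, float]]:
    """The further the selected weightage is from the correct one, the higher
    the estimated hearing threshold at that frequency (placeholder rule)."""
    points = []
    for r in results:
        penalty_db = 5.0 * abs(r.correct_weightage - r.selected_weightage)  # placeholder scale
        points.append((r.frequency_hz, r.gain_db + penalty_db))
    return sorted(points)


# e.g. the Nancy example below: Ear (3) selected, Her (5) correct, gain 0.12 dB
# response_curve_points([TestResult(1000.0, 0.12, 3, 5)]) -> [(1000.0, 10.12)]
```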
**Note**

- By **audio clip** we mean the original audio file to which we apply a few presets to construct the test audio clip.
- By **test audio clip** we mean the audio clip that is presented to the user to hear.

**Example: Different audio file provided to the user**

Suppose Jane is male, 28 y/o. So:

- (User's demographics) (Male, 28) -> (Audio Parameters) (gain=0.78, no_of_frequencies=12, ...)
- (Audio Parameters) (gain=0.78, no_of_frequencies=12, ...) + (audio clip) (1.wav) = Test Audio
- We now provide the user with the Test Audio and the options. Suppose the options are (OP1, OP2, OP3) with weightages of (2, 3, 4), and among them OP2 is the correct option.
- If Jane chooses OP2, which is correct, we now have to calculate the audio parameters for the second test again, i.e.:
  - We again calculate the audio parameters for the second test from the option selected by the user.
  - (Weightage of the chosen option) -> Audio Parameters (gain=0.72, no_of_frequencies=24, ...)
  - Audio Parameters (gain=0.72, no_of_frequencies=24, ...) + (audio clip) (1.wav) => audio clip for the second test
  - In this test, **is the audio clip the same as in the initial test, i.e. (1.wav)?**
- This process repeats a few times. (**Are there any other conditions that might cause this flow to deviate, with other random audio samples or the same audio samples?**)

**Example: Estimation of the frequency response curve**

Suppose Nancy is female, 56 y/o.

- At first she was provided with a test audio clip with the parameters below:
  - no. of frequencies: 24
  - gain: 0.12 dB
  - ...
- The options provided during that test were:
  - Ear (weightage: 3)
  - Hear (weightage: 4)
  - Her (weightage: 5) (correct answer)
- Among those options Nancy selected Ear, with weightage 3.
- The flow continues and we have the user's test data curated as below.
- After the user finishes the test, at some point (after the user buys a test report) we need to send the report to the user. Do we have an email template, so that we can figure out what we need to store in the report details database table?

| Audio Sample | Selected Option | Option Weightage | Audio Parameters           |
| ------------ | --------------- | ---------------- | -------------------------- |
| 1.wav        | Ear             | 3                | gain=0.12dB, no_of_freq=24 |
| 1.wav        | <Option>        | <Weightage>      | gain=XdB, no_of_freq=Y     |

- **Is there any other data we need to store in the database except the above?**
- **Is there any other service or third-party API that takes the data and provides us with the response curve figure or data points for the curve?**
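Back on the section 1 question about whether all of the audio DSP work has to go through external services: the `audio clip + transfer function -> test audio clip` step can at least be prototyped locally. A minimal sketch is below, assuming SciPy, a mono WAV input, and placeholder coefficient values and file names; it illustrates the filtering step only, not the agreed pipeline.

```python
# Minimal local sketch (not the agreed pipeline): construct a test audio clip
# by filtering an original clip with the third-order transfer function T(s)
# from section 1. Coefficients, file names and the mono assumption are
# placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import bilinear, lfilter

# Hypothetical s-domain coefficients: numerator a3..a0, denominator b3..b0.
num = [0.5, 1.0, 2.0, 1.0]
den = [1.0, 3.0, 3.0, 1.0]   # (s + 1)^3, a stable example denominator

rate, clip = wavfile.read("1.wav")        # assumes a mono original audio clip
clip = clip.astype(np.float64)

# Discretise T(s) at the clip's sample rate, then filter. Time-domain
# filtering here plays the role of the convolution in section 1.
bz, az = bilinear(num, den, fs=rate)
test_audio = lfilter(bz, az, clip)

# Normalise and write out the test audio clip that is presented to the user.
peak = np.max(np.abs(test_audio))
if peak > 0:
    test_audio = test_audio / peak
wavfile.write("test_audio.wav", rate, test_audio.astype(np.float32))
```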