# RawNet3 - Synthesized speech detector
## Introduction
This repository is a fork of the [voxceleb_trainer](https://github.com/clovaai/voxceleb_trainer/) repo, which is a framework for training the RawNet3 [3] model for the speaker recognition task on the ASVspoof21 dataset. We have further developed this framework to formulate a binary classification problem that addresses the task of deepfake speech detection.
## Overview of the Architecture
The training setup consists of training the RawNet3 model with the AAM-Softmax loss, with the aim of increasing the distance between speakers; the model outputs a feature vector of size 1024.
On top of these features we jointly train a fully connected classification layer with BCE loss to map the embeddings to real & fake labels. These two parts have been decoupled, which also facilitates separate pre-training and fine-tuning. This architecture is defined in [JointModelTrainer.py](JointModelTrainer.py).
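The decoupled design can be pictured with the following minimal sketch. Class names, margin/scale values and the unweighted loss sum are illustrative assumptions; the actual implementation lives in [JointModelTrainer.py](JointModelTrainer.py), and RawNet3 itself is treated as a given backbone here.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AAMSoftmax(nn.Module):
    """Simplified additive angular margin (AAM) softmax head."""
    def __init__(self, emb_dim, n_speakers, margin=0.2, scale=30.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(n_speakers, emb_dim))
        nn.init.xavier_normal_(self.weight)
        self.margin, self.scale = margin, scale

    def forward(self, emb, speaker_labels):
        # cosine similarity between L2-normalized embeddings and class centres
        cosine = F.linear(F.normalize(emb), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # add the angular margin to the target class only
        one_hot = F.one_hot(speaker_labels, cosine.size(1)).float()
        logits = self.scale * torch.cos(theta + self.margin * one_hot)
        return F.cross_entropy(logits, speaker_labels)


class JointModel(nn.Module):
    """Embedding extractor plus a binary real/fake classifier on top."""
    def __init__(self, embedding_net, n_speakers, emb_dim=1024):
        super().__init__()
        self.embedding_net = embedding_net            # e.g. a RawNet3 backbone
        self.speaker_head = AAMSoftmax(emb_dim, n_speakers)
        self.classifier = nn.Linear(emb_dim, 1)       # real/fake FC layer

    def forward(self, wav, speaker_labels, spoof_labels):
        emb = self.embedding_net(wav)                 # (batch, emb_dim)
        speaker_loss = self.speaker_head(emb, speaker_labels)
        logits = self.classifier(emb).squeeze(1)
        spoof_loss = F.binary_cross_entropy_with_logits(logits, spoof_labels.float())
        return speaker_loss + spoof_loss, torch.sigmoid(logits)
```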
This training setup is implemented in the [train.py](train.py) file, which loads the data loaders for the 3 stages of the process (training, testing, validation), sets up the environment based on the parameters declared in our [configuration file](config.py), and trains the model while validating the result every `test_interval`. We have implemented an early stopping mechanism that stops the training and saves the best model when the validation accuracy does not improve for 5 epochs. We evaluate our test split on the best model, compute statistics (accuracy, F1 score, confusion matrix, EER), and eventually visualize the training progress.
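The early stopping rule described above boils down to a small helper like the one below. This is only a sketch with illustrative names; the actual logic lives in train.py.
```python
class EarlyStopping:
    """Stop when validation accuracy has not improved for `patience` checks."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best_acc = float("-inf")
        self.bad_checks = 0

    def step(self, val_acc):
        """Return True when training should stop."""
        if val_acc > self.best_acc:
            self.best_acc = val_acc
            self.bad_checks = 0       # improvement: the caller saves the best model here
            return False
        self.bad_checks += 1
        return self.bad_checks >= self.patience
```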
## Datasets
| Name | Label | Dataset URL | Additional Details | Size | Sampling rate | Training | Evaluation |
|---------------------------|:------------------:|------------------------------------------------------------------------------------------------------|--------------------------------------------------|:-------------:|---------------|:--------:|:----------:|
| **This American Life podcast** | human | [link](https://www.kaggle.com/datasets/shuyangli94/this-american-life-podcast-transcriptsalignments) | single speaker, <br>augmented (background music) | 12841 | 44.1k | NO | YES |
| **Mozilla Common Voice** | human | [link](https://commonvoice.mozilla.org/) | multi speaker, varied style | 835 | 32k | YES | YES |
| **LJSpeech** | human | [link](https://keithito.com/LJ-Speech-Dataset/) | single speaker | 24288 | 22.05k | YES | YES |
| **CDAF** | human<br>generated | [link](https://arxiv.org/abs/2207.12308) | Chinese, <br>augmented (all sorts) | 10944<br>3285 | 16k | NO | YES |
| **Wavefake** | generated | [link](https://github.com/RUB-SysSec/WaveFake) | 8 different generator models | 114146 | 22.05k | YES | YES |
| **Amazon TTS** | generated | [link](https://aws.amazon.com/polly/) | | 6605 | 16k | YES | YES |
| **Azure TTS** | generated | [link](https://azure.microsoft.com/hu-hu/products/cognitive-services/text-to-speech/) | | 4763 | 16k | YES | YES |
| **Google TTS** | generated | [link](https://cloud.google.com/text-to-speech) | | 4106 | 24k | YES | YES |
| **Coqui TTS** | generated | [link](https://coqui.ai/) | | 67 | 44.1k | NO | YES |
| **Elevenlabs TTS** | generated | [link](https://elevenlabs.io/) | | 264 | 44.1k | NO | YES |
| **ASVspoof2021** | human<br>generated | [link](https://www.asvspoof.org/) | | 7315<br>19411 | 16k | NO | YES |
| **miscellaneous** | generated | [link](https://youtube.com/shorts/D3RiivI9l_U?feature=share) | Hungarian, <br>augmented (background music) | 12 | 48k | YES | YES |
| **BSD** | text | [link](https://youtube.com/shorts/D3RiivI9l_U?feature=share) | business phone dialogs | ~1700 sentences | - | - | - |
*Explanation*: For the audio datasets, the size column gives the number of 4-second samples.
### Generating synthesized speech through APIs
In `tools/tts_generator.py` we implemented methods to process the BSD dataset by dividing it into dialogs and concatenating all monologues from a single speaker. These sentences are then used to generate audio with Coqui & Elevenlabs. This is a well-suited formulation, since it simulates a setting similar to what a call center would normally encounter. The rest of the generation was carried out on the LJSpeech transcript.
The generation process aims to produce audio using as many voices as possible from each API. There is also a credit-cleanup method that uses up all the remaining credits by running a binary search on the text dataset to find a sentence with the required character count.
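The binary-search idea behind the credit cleanup can be sketched as below, assuming the remaining API credits translate to a character budget. The function name is illustrative; the actual implementation is in `tools/tts_generator.py`.
```python
import bisect

def pick_sentence_by_length(sentences, target_chars):
    """Return the sentence whose character count is closest to `target_chars`."""
    ordered = sorted(sentences, key=len)              # length-sorted text dataset
    lengths = [len(s) for s in ordered]
    i = bisect.bisect_left(lengths, target_chars)     # binary search on the lengths
    candidates = [c for c in (i - 1, i) if 0 <= c < len(ordered)]
    return min((ordered[c] for c in candidates),
               key=lambda s: abs(len(s) - target_chars))
```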
### Processing
The main processing steps can be found in the [tts.py](tts.py) file (a sketch of the segmentation step follows the list). These steps consist of:
1) removing silent parts and known artifacts
2) converting the audio into mono
3) chopping the audio into 4s segments (smaller segments are either concatenated together, if they originate from the same speaker, or omitted)
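A minimal sketch of steps 2 and 3, assuming soundfile is used for I/O (the real logic lives in tts.py):
```python
import soundfile as sf

def chop_to_segments(path, segment_seconds=4.0):
    """Down-mix one audio file to mono and cut it into fixed-length segments.

    The trailing remainder is returned separately so the caller can either
    concatenate it with other clips from the same speaker or drop it.
    """
    audio, sr = sf.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)                    # step 2: convert to mono
    seg_len = int(segment_seconds * sr)
    n_full = len(audio) // seg_len
    segments = [audio[i * seg_len:(i + 1) * seg_len] for i in range(n_full)]
    remainder = audio[n_full * seg_len:]
    return segments, remainder, sr
```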
### Dataset structure
```
dataset
* text
* human
* generated
** split (contains the chopped up segments)
** unsplit (contains the original segments)
*** [dataset name]
**** dataset files
* data list files (e.g., train_list.txt, test_list.txt, ...)
```
The preprocessed dataset is now archived and stored on our S3.
The datasets are provided to the model in the form of data lists with the following structure:
```
<id> <relative path to file>
```
For each stage of the training we created train (89%), test (10%), and validation (1%) splits, and for each split an associated list is generated with tts.py.
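A minimal sketch of how such split lists could be produced. The repo's lists are generated by tts.py; the validation list name and the `out_dir` parameter are illustrative.
```python
import random

def write_split_lists(samples, out_dir="."):
    """Write train/test/val lists in the `<id> <relative path>` format above.

    `samples` is an iterable of (id, relative_path) pairs; the 89/10/1 ratios
    match the splits described above.
    """
    samples = list(samples)
    random.Random(0).shuffle(samples)                 # deterministic shuffle
    n = len(samples)
    n_train, n_test = int(0.89 * n), int(0.10 * n)
    splits = {
        "train_list.txt": samples[:n_train],
        "test_list.txt": samples[n_train:n_train + n_test],
        "val_list.txt": samples[n_train + n_test:],
    }
    for name, rows in splits.items():
        with open(f"{out_dir}/{name}", "w") as f:
            f.writelines(f"{sample_id} {rel_path}\n" for sample_id, rel_path in rows)
```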
### Data loading
Data loaders have been implemented in [DatasetLoader.py](DatasetLoader.py) for each split (a simplified sketch follows the list). They take care of
- loading in the wav files,
- fusing the data sources (equally subsampling from each of them),
- normalizing the volume across the dataset,
- resampling the audio files to the configured sample rate,
- padding the smaller segments,
- augmenting the data.
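The sketch below shows a simplified single-split dataset built from a data list. Names are illustrative, the `<id>` field is assumed to encode the binary label, and the data fusion and augmentation steps are omitted; the actual loaders live in DatasetLoader.py.
```python
import torch
import torchaudio

class WavListDataset(torch.utils.data.Dataset):
    def __init__(self, list_path, root, sample_rate=16000, seconds=4.0):
        with open(list_path) as f:
            self.items = [line.split() for line in f if line.strip()]
        self.root, self.sr = root, sample_rate
        self.n_samples = int(sample_rate * seconds)

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        label, rel_path = self.items[idx]
        wav, sr = torchaudio.load(f"{self.root}/{rel_path}")
        wav = wav.mean(dim=0)                                      # down-mix to mono
        if sr != self.sr:                                          # resample
            wav = torchaudio.functional.resample(wav, sr, self.sr)
        wav = wav / (wav.abs().max() + 1e-9)                       # volume normalization
        if wav.numel() < self.n_samples:                           # pad short segments
            wav = torch.nn.functional.pad(wav, (0, self.n_samples - wav.numel()))
        return wav[:self.n_samples], int(label)
```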
### Data augmentation
The following forms of augmentation are used (a minimal sketch of two of them follows the list):
- adding Gaussian noise N(0, 0.01)
- adding reverb
- adding background elements [noise, speech, music]
- applying a bandpass filter
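A sketch of the first and last augmentations, assuming NumPy/SciPy. N(0, 0.01) is read here as a variance of 0.01 (standard deviation 0.1), and the band-pass cutoff frequencies are illustrative; the repo's own values live in its augmentation code.
```python
import numpy as np
from scipy.signal import butter, sosfilt

def add_gaussian_noise(wav, std=0.1):
    """Add zero-mean Gaussian noise with the given standard deviation."""
    return wav + np.random.normal(0.0, std, size=wav.shape)

def bandpass(wav, sr, low_hz=300.0, high_hz=3400.0, order=4):
    """Apply a Butterworth band-pass filter between low_hz and high_hz."""
    sos = butter(order, [low_hz, high_hz], btype="band", fs=sr, output="sos")
    return sosfilt(sos, wav)
```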
To be able to use the augmentation, the required resources first have to be downloaded: `python dataprep.py --augment`. Then configure the necessary paths in `config.py`.
A known issue is that the code hangs when it tries to extract musan.zip. The workaround is either to wait a sufficient amount of time and kill the script, or to extract the archive manually (see the sketch below).
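Manual extraction can be done along these lines; the archive and output paths are assumptions and depend on where dataprep.py downloaded the file on your setup.
```python
import zipfile

# Adjust both paths to match your dataprep output directory.
with zipfile.ZipFile("data/musan.zip") as archive:
    archive.extractall("data/")
```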
## Training parameters
`config.py` contains all training parameters. They have mostly been inherited from [4], [1], the voxceleb_trainer repository, the RawNet3 repository and `configs/RawNet3_AAM.yaml`, but some parameters were also fine-tuned. The values listed there are empirically the best for this task.
## How to train
- install the requirements
- download & extract the augmentation resources
- download/create the dataset
- generate the train-test-val lists
- set up config.py with the respective paths
- `python train.py <runID>`
The run ID can be an arbitrary string that identifies the current run. It makes sense to use descriptive IDs (e.g., highLR-100epoch). The progress of the training can be tracked on the console, and two log files are created under `experiments/results`, in the `details` (dt_<runID>.txt) and `scores` (sc_<runID>.txt) directories. The former contains the per-epoch stats (accuracy, losses), the latter the per-test-interval metrics (train and validation accuracy, F1, confusion matrix, final test accuracy). Based on these log files, the progress of the training is visualized under the `visualizations` folder associated with the given `runID`.
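For example, for a hypothetical run ID the log files can be located like this (the run ID itself is made up):
```python
from pathlib import Path

run_id = "highLR-100epoch"                              # hypothetical run ID
results = Path("experiments/results")
detail_log = results / "details" / f"dt_{run_id}.txt"   # per-epoch accuracy / losses
score_log = results / "scores" / f"sc_{run_id}.txt"     # per-test-interval metrics
print(detail_log.exists(), score_log.exists())
```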
### Citation
Please cite [1] if you make use of the code. Please see [here](References.md) for the full list of methods used in this trainer.
[1] _In defence of metric learning for speaker recognition_
```
@inproceedings{chung2020in,
title={In defence of metric learning for speaker recognition},
author={Chung, Joon Son and Huh, Jaesung and Mun, Seongkyu and Lee, Minjae and Heo, Hee Soo and Choe, Soyeon and Ham, Chiheon and Jung, Sunghwan and Lee, Bong-Jin and Han, Icksang},
booktitle={Proc. Interspeech},
year={2020}
}
```
[2] _The ins and outs of speaker recognition: lessons from VoxSRC 2020_
```
@inproceedings{kwon2021ins,
title={The ins and outs of speaker recognition: lessons from {VoxSRC} 2020},
author={Kwon, Yoohwan and Heo, Hee Soo and Lee, Bong-Jin and Chung, Joon Son},
booktitle={Proc. ICASSP},
year={2021}
}
```
[3] _Pushing the limits of raw waveform speaker recognition_
```
@inproceedings{jung2022pushing,
title={Pushing the limits of raw waveform speaker recognition},
author={Jung, Jee-weon and Kim, You Jin and Heo, Hee-Soo and Lee, Bong-Jin and Kwon, Youngki and Chung, Joon Son},
booktitle={Proc. Interspeech},
year={2022}
}
```
[4] _End-to-end anti-spoofing with RawNet2_
```
@inproceedings{tak2021endtoend,
title={End-to-end anti-spoofing with {RawNet2}},
author={Tak, Hemlata and Patino, Jose and Todisco, Massimiliano and Nautsch, Andreas and Evans, Nicholas and Larcher, Anthony},
booktitle={Proc. ICASSP},
year={2021}
}
```
### License
```
Copyright (c) 2023-present TC&C Kft.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
```