# Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech ([ICML 2021](https://arxiv.org/abs/2106.06103))

###### tags: `Yang`

## 1. Introduction

Text-to-Speech (TTS) system pipelines have been **simplified to two-stage generative modeling**, apart from text preprocessing such as text normalization and phonemization. The first stage **produces intermediate speech representations** such as mel-spectrograms or linguistic features from the preprocessed text, and the second stage **generates raw waveforms conditioned on the intermediate representations**. Models for each stage of the pipeline have been developed independently.

Despite the progress of parallel TTS systems, two-stage pipelines remain problematic **because they require sequential training or fine-tuning for high-quality production, wherein later-stage models are trained on the generated samples of earlier-stage models**. In addition, their dependency on predefined intermediate features precludes applying learned hidden representations to obtain further improvements in performance.

In this work, we present a parallel end-to-end TTS method that generates more natural-sounding audio than current two-stage models. Using a variational autoencoder (VAE), **we connect the two modules of TTS systems through latent variables to enable efficient end-to-end learning**. To improve the expressive power of our method so that high-quality speech waveforms can be synthesized, **we apply normalizing flows to our conditional prior distribution and adversarial training in the waveform domain**. To tackle the one-to-many problem, we also **propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text**.

## 2. Method

The proposed method is mostly described in the first three subsections:

- A conditional VAE formulation
- Alignment estimation derived from variational inference
- Adversarial training for improving synthesis quality

From now on, we will refer to our method as **Variational Inference with adversarial learning for end-to-end Text-to-Speech (VITS)**.

![](https://i.imgur.com/44YPLsw.jpg)

### 2.1.1 Variational Inference

VITS can be expressed as a conditional VAE with the objective of maximizing the variational lower bound, also called the evidence lower bound (ELBO), of the intractable marginal log-likelihood of the data $\log p_{\theta}(x \mid c)$:

$$
\log p_{\theta}(x \mid c) \geq \mathbb{E}_{q_{\phi}(z \mid x)}\left[\log p_{\theta}(x \mid z)-\log \frac{q_{\phi}(z \mid x)}{p_{\theta}(z \mid c)}\right]
$$

### 2.1.2 Reconstruction Loss

As the target data point in the reconstruction loss, we use a mel-spectrogram instead of a raw waveform, denoted by ${x}_{mel}$. We upsample the latent variables ${z}$ to the waveform domain $\hat{y}$ through a decoder and transform $\hat{y}$ to the mel-spectrogram domain $\hat{x}_{mel}$. Then the $L_1$ loss between the predicted and target mel-spectrograms is used as the reconstruction loss:

$$
L_{recon} = \|{x}_{mel} - \hat{x}_{mel}\|_{1}
$$

We define the reconstruction loss in the mel-spectrogram domain to **improve the perceptual quality by using a mel scale that approximates the response of the human auditory system**. Note that the mel-spectrogram estimation from a raw waveform does not require trainable parameters, as it only uses the STFT and a linear projection onto the mel scale.
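Since the mel-spectrogram transform is just a fixed STFT plus a mel filterbank, $L_{recon}$ can be computed directly from the two waveforms. Below is a minimal PyTorch sketch of this loss; the use of `torchaudio.transforms.MelSpectrogram` and the hyperparameter values are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torchaudio

# Fixed, parameter-free mel transform (STFT + mel filterbank); the
# hyperparameters below are illustrative assumptions, not the paper's.
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050, n_fft=1024, hop_length=256, n_mels=80
)

def reconstruction_loss(y: torch.Tensor, y_hat: torch.Tensor) -> torch.Tensor:
    """L1 distance between mel-spectrograms of the target waveform y and
    the decoder output y_hat; gradients flow through the fixed transform."""
    x_mel = mel_transform(y)
    x_mel_hat = mel_transform(y_hat)
    return torch.nn.functional.l1_loss(x_mel_hat, x_mel)
```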
### 2.1.3 KL-Divergence

The input condition of the prior encoder, $c$, is composed of phonemes $c_{text}$ extracted from the text and an alignment $A$ between the phonemes and the latent variables. The alignment is a hard monotonic attention matrix of dimensions $|c_{text}| \times |z|$, representing how long each input phoneme expands to be time-aligned with the target speech.

In our problem setting, we aim to provide higher-resolution information to the posterior encoder. We therefore use the linear-scale spectrogram of the target speech, $x_{lin}$, as input rather than the mel-spectrogram. Note that the modified input does not violate the properties of variational inference. The KL divergence is then:

$$
\begin{aligned}
L_{kl} &= \log q_{\phi}\left(z \mid x_{lin}\right)-\log p_{\theta}\left(z \mid c_{text}, A\right), \\
z &\sim q_{\phi}\left(z \mid x_{lin}\right)=N\left(z ; \mu_{\phi}\left(x_{lin}\right), \sigma_{\phi}\left(x_{lin}\right)\right)
\end{aligned}
$$

A factorized normal distribution is used to parameterize our prior and posterior encoders. **We found that increasing the expressiveness of the prior distribution is important for generating realistic samples.** We therefore apply a normalizing flow $f_{\theta}$, which allows an invertible transformation of a simple distribution into a more complex one following the change-of-variables rule, on top of the factorized normal prior distribution:

$$
\begin{aligned}
p_{\theta}(z \mid c) &=N\left(f_{\theta}(z) ; \mu_{\theta}(c), \sigma_{\theta}(c)\right)\left|\operatorname{det} \frac{\partial f_{\theta}(z)}{\partial z}\right|, \\
c &=\left[c_{text}, A\right]
\end{aligned}
$$

### 2.2 Alignment Estimation

### 2.2.1 Monotonic Alignment Search

To estimate the alignment $A$ between input text and target speech, we adopt Monotonic Alignment Search (MAS), a method that searches for the alignment maximizing the likelihood of data parameterized by a normalizing flow $f$:

$$
\begin{aligned}
A &=\underset{\hat{A}}{\arg \max } \log p\left(x \mid c_{text}, \hat{A}\right) \\
&=\underset{\hat{A}}{\arg \max } \log N\left(f(x) ; \mu\left(c_{text}, \hat{A}\right), \sigma\left(c_{text}, \hat{A}\right)\right)
\end{aligned}
$$

where the candidate alignments are restricted to be monotonic and non-skipping, reflecting the fact that humans read text in order without skipping any words. To find the optimal alignment, Kim et al. (2020) use dynamic programming.

Applying MAS directly in our setting is difficult because our objective is the ELBO, not the exact log-likelihood. We therefore redefine MAS to find the alignment that maximizes the ELBO, which reduces to finding the alignment that maximizes the log-likelihood of the latent variables $z$:

$$
\begin{aligned}
A &= \underset{\hat{A}}{\arg \max } \log p_{\theta}\left(x_{mel} \mid z\right)-\log \frac{q_{\phi}\left(z \mid x_{lin}\right)}{p_{\theta}\left(z \mid c_{text}, \hat{A}\right)} \\
&=\underset{\hat{A}}{\arg \max } \log p_{\theta}\left(z \mid c_{text}, \hat{A}\right) \\
&=\underset{\hat{A}}{\arg \max } \log N\left(f_{\theta}(z) ; \mu_{\theta}\left(c_{text}, \hat{A}\right), \sigma_{\theta}\left(c_{text}, \hat{A}\right)\right)
\end{aligned}
$$

Because the redefined objective has the same form as the original one, the original MAS search can be reused without modification, simply swapping in the log-likelihood of the latent variables $z$; a sketch of the dynamic program follows.
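MAS itself is a Viterbi-style dynamic program over a precomputed log-likelihood matrix. The NumPy sketch below illustrates the search (the paper's actual implementation follows Glow-TTS and is not shown here); `log_lik[i, j]` is assumed to hold the log-likelihood of latent frame $j$ under the prior statistics of phoneme $i$.

```python
import numpy as np

def monotonic_alignment_search(log_lik: np.ndarray) -> np.ndarray:
    """Search for the monotonic, non-skipping alignment maximizing the
    summed log-likelihood, via dynamic programming.

    log_lik: [T_text, T_mel] array; log_lik[i, j] scores assigning latent
    frame j to phoneme i. Returns a 0/1 alignment matrix of the same shape.
    """
    T_text, T_mel = log_lik.shape
    Q = np.full((T_text, T_mel), -np.inf)  # best cumulative score ending at (i, j)

    # Forward pass: at frame j, either repeat phoneme i or advance from i-1.
    Q[0, 0] = log_lik[0, 0]
    for j in range(1, T_mel):
        for i in range(min(j + 1, T_text)):  # phoneme i needs at least i+1 frames
            stay = Q[i, j - 1]
            move = Q[i - 1, j - 1] if i > 0 else -np.inf
            Q[i, j] = max(stay, move) + log_lik[i, j]

    # Backtrack from the last phoneme at the last frame to recover the argmax.
    A = np.zeros((T_text, T_mel), dtype=np.int64)
    i = T_text - 1
    for j in range(T_mel - 1, -1, -1):
        A[i, j] = 1
        if i > 0 and Q[i - 1, j - 1] >= Q[i, j - 1]:
            i -= 1
    return A
```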
### 2.2.2 Duration Prediction From Text

To generate human-like rhythms of speech, we design a stochastic duration predictor whose samples follow the duration distribution of the given phonemes. The stochastic duration predictor is a flow-based generative model that is typically trained via maximum likelihood estimation. The direct application of maximum likelihood estimation, **however, is difficult because the duration of each input phoneme is**:

- a discrete integer, which needs to be dequantized before using continuous normalizing flows
- a scalar, which prevents high-dimensional transformation due to invertibility

We apply variational dequantization and variational data augmentation to solve these problems. Specifically, **we introduce two random variables $u$ and $v$, which have the same time resolution and dimension as the duration sequence $d$, for variational dequantization and variational data augmentation, respectively**. We restrict the support of $u$ to $[0, 1)$ so that the difference $d-u$ becomes a sequence of positive real numbers, and we concatenate $v$ and $d$ channel-wise to make a higher-dimensional latent representation. We sample the two variables from an approximate posterior distribution $q_{\phi}(u, v \mid d, c_{text})$. The resulting objective is a variational lower bound of the log-likelihood of the phoneme duration:

$$
\log p_{\theta}\left(d \mid c_{text}\right) \geq \mathbb{E}_{q_{\phi}\left(u, v \mid d, c_{text}\right)}\left[\log \frac{p_{\theta}\left(d-u, v \mid c_{text}\right)}{q_{\phi}\left(u, v \mid d, c_{text}\right)}\right]
$$

The training loss $L_{dur}$ is then the negative of this variational lower bound.

![](https://mllab.asuscomm.com:12950/hackmd/uploads/upload_6a84f4711e39ee87eecf1e52643c5e72.png)

### 2.3 Adversarial Training

To adopt adversarial training in our learning system, we add a discriminator $D$ that distinguishes between the output generated by the decoder $G$ and the ground-truth waveform $y$. In this work, we use two types of losses that have been successfully applied in speech synthesis: the least-squares loss function for adversarial training, and an additional feature-matching loss for training the generator:

$$
\begin{aligned}
L_{adv}(D)&=\mathbb{E}_{(y, z)}\left[(D(y)-1)^{2}+(D(G(z)))^{2}\right] \\
L_{adv}(G)&=\mathbb{E}_{z}\left[(D(G(z))-1)^{2}\right] \\
L_{fm}(G)&=\mathbb{E}_{(y, z)}\left[\sum_{l=1}^{T} \frac{1}{N_{l}}\left\|D^{l}(y)-D^{l}(G(z))\right\|_{1}\right]
\end{aligned}
$$

where $T$ denotes the total number of layers in the discriminator and $D^l$ outputs the feature map of the $l$-th discriminator layer with $N_l$ features. Notably, the feature-matching loss can be seen as a reconstruction loss measured in the hidden layers of the discriminator, suggested as an alternative to the element-wise reconstruction loss of VAEs.

### 2.4 Final Loss

Combining VAE and GAN training, the total loss for training our conditional VAE can be expressed as follows:

$$
L_{vae}= L_{recon} +L_{kl} +L_{dur} + L_{adv}(G)+L_{fm}(G)
$$
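For reference, here is a minimal PyTorch sketch of the adversarial and feature-matching losses from Section 2.3; the helper names and tensor layouts are illustrative assumptions, not the paper's code. The two generator terms are the ones that enter $L_{vae}$ above.

```python
import torch

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # L_adv(D): least-squares GAN loss pushing real scores toward 1
    # and generated scores toward 0.
    return torch.mean((d_real - 1) ** 2) + torch.mean(d_fake ** 2)

def generator_adv_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # L_adv(G): least-squares GAN loss pushing generated scores toward 1.
    return torch.mean((d_fake - 1) ** 2)

def feature_matching_loss(feats_real, feats_fake) -> torch.Tensor:
    # L_fm(G): per-layer L1 distance between discriminator feature maps of
    # real and generated audio; torch.mean over each map gives the 1/N_l factor.
    loss = 0.0
    for f_real, f_fake in zip(feats_real, feats_fake):
        loss = loss + torch.mean(torch.abs(f_real.detach() - f_fake))
    return loss
```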
## 3. Results

### 3.1 Speech Synthesis Quality

We conducted crowd-sourced mean opinion score (**MOS**) tests to evaluate synthesis quality. Raters listened to randomly selected audio samples and rated their naturalness on a 5-point scale from 1 to 5.

![](https://mllab.asuscomm.com:12950/hackmd/uploads/upload_51be5d141677a9334fd56d97da02cc25.png)

We also conducted an ablation study to demonstrate the effectiveness of our design choices, including the normalizing flow in the prior encoder and the linear-scale spectrogram input to the posterior encoder.

![](https://mllab.asuscomm.com:12950/hackmd/uploads/upload_27a619594258b291733dc843d1e455f6.png)

![](https://mllab.asuscomm.com:12950/hackmd/uploads/upload_761ce229556b4440e254d4f1689d6512.png =300x)

### 3.2 Speech Variation

We verified how varied the utterance lengths produced by the stochastic duration predictor are.

![](https://mllab.asuscomm.com:12950/hackmd/uploads/upload_a8a351c8bf801ad13a86a7a325255506.png)

All samples here were generated from the sentence **“How much variation is there?”**. The figure shows histograms of the lengths of 100 utterances generated by each model. While Glow-TTS generates only fixed-length utterances due to its deterministic duration predictor, samples from our model follow a length distribution similar to that of Tacotron 2. Figure 2b shows the lengths of 100 utterances generated with each of five speaker identities from our model in the multi-speaker setting, implying that the model learns speaker-dependent phoneme durations.

### 3.3 Synthesis Speed

We compared the synthesis speed of our model with that of a parallel two-stage TTS system, Glow-TTS combined with HiFi-GAN. We measured the synchronized elapsed time for the entire process of generating raw waveforms from phoneme sequences, using 100 sentences randomly selected from the test set of the LJ Speech dataset.

![](https://mllab.asuscomm.com:12950/hackmd/uploads/upload_769949ca8cef077f6b3a74a46b1308a6.png)

## 4. Conclusion

- In this work, we proposed a parallel TTS system, VITS, that can learn and generate in an end-to-end manner.
- We further introduced a stochastic duration predictor to express diverse rhythms of speech.
- The resulting system synthesizes natural-sounding speech waveforms directly from text, without going through predefined intermediate speech representations.
- Our experimental results show that our method outperforms two-stage TTS systems and achieves quality close to that of human speech.
