###### tags: `PaperReview`

[Paper Link](https://arxiv.org/pdf/2303.01037)

# Google USM: Scaling Automatic Speech Recognition Beyond 100 Languages

> Yu Zhang, Wei Han, James Qin, Yongqiang Wang, Ankur Bapna, Zhehuai Chen, Nanxin Chen, Bo Li, Vera Axelrod, Gary Wang, Zhong Meng, Ke Hu, Andrew Rosenberg, Rohit Prabhavalkar, Daniel S. Park, Parisa Haghani, Jason Riesa, Ginger Perng, Hagen Soltau, Trevor Strohman, Bhuvana Ramabhadran, Tara Sainath, Pedro Moreno, Chung-Cheng Chiu, Johan Schalkwyk, Françoise Beaufays, Yonghui Wu

## Introduction

- A recent direction in speech recognition is to use self-supervised learning to develop a high-quality "universal model".
- The fundamental challenge in scaling speech technologies is obtaining enough data to train the model.
- While transcribed speech may be scarce, untranscribed speech and text data are practically unlimited.
- Recent studies show that large models can utilize large datasets more effectively than smaller ones.

### Their approach

- **2B-parameter Conformer** models are built using these datasets through the following steps:
    - **Unsupervised pre-training**: BEST-RQ (**BE**RT-based **S**peech pre-**T**raining with **R**andom-projection **Q**uantizer) is applied to YT-NTL-U.
    - **MOST** (**M**ulti-**O**bjective **S**upervised pre-**T**raining): The model can optionally be further prepared by a multi-objective supervised pre-training pipeline that utilizes all three kinds of datasets: YT-NTL-U, Pub-U, Web-NTL, and Pub-S. **A weighted sum of the BEST-RQ masked language loss and the text-injection loss is optimized during training.**
    - **Supervised ASR training**: Produce generic ASR models trained with CTC, RNN-T, and Listen, Attend, and Spell (LAS) transducers for downstream tasks.

![](https://hackmd.io/_uploads/HJQkNMEw3.png)

- Two types of models are produced in this pipeline: pre-trained models that can be fine-tuned on downstream tasks, and generic ASR models for which no downstream fine-tuning is assumed. The generic ASR models are trained with **chunk-wise attention**.
- The pre-trained models are denoted **USM and USM-M**, where **-M** indicates that MOST has been used to prepare the model.
- The models are evaluated on two benchmarks:
    - **ASR** (Automatic Speech Recognition): Evaluated on SpeechStew and FLEURS. Results on CORAAL are also reported.
    - **AST** (Automatic Speech Translation): Evaluated on CoVoST.

![](https://hackmd.io/_uploads/BJuOQfVwh.png)

### Key Findings

- **SoTA results on downstream multilingual speech tasks**

![](https://hackmd.io/_uploads/Hkd1YGND2.png)

- **BEST-RQ is a scalable speech representation learner**
- **MOST (BEST-RQ + text-injection) is a scalable speech and text representation learner**
- **Representations from MOST (BEST-RQ + text-injection) can quickly adapt to new domains**
- **Chunk-wise attention for robust long-form speech recognition**

## Method

### Model Architecture: **Conformer**

- A Conformer with relative attention is used as the encoder.
- BEST-RQ pre-training is applied exclusively to the encoder, while the other forms of training train the entire task network as a whole.

![](https://hackmd.io/_uploads/r1tXqz4vh.png)

### Pre-training: **BEST-RQ**

![](https://hackmd.io/_uploads/S1Q8cfNw2.png)

- BEST-RQ employs a BERT-like training task where the model tries to predict masked speech features.
- For a given number of quantization targets $c$, random "codebook" vectors $v_0, \dots, v_{c-1}$ are chosen in an embedding space.
- The discrete label of a speech feature is obtained by first **projecting the feature** into the embedding space with a **randomly initialized, frozen projection matrix** and then **finding the closest codebook vector by cosine similarity**.
- The index of that codebook vector is the label of the speech feature.
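As a concrete illustration, here is a minimal PyTorch sketch of this random-projection quantizer. The dimensions, the single codebook, and the class interface are illustrative assumptions, not the paper's exact configuration (the paper uses BEST-RQ's default quantization parameters and, as described next, multiple codebooks).

```python
import torch
import torch.nn.functional as F

class RandomProjectionQuantizer:
    """Frozen random projection + random codebook, as described above.

    All sizes here are illustrative assumptions.
    """

    def __init__(self, feat_dim=128, embed_dim=16, num_targets=8192, seed=0):
        g = torch.Generator().manual_seed(seed)
        # Both tensors are randomly initialized once and never trained.
        self.projection = torch.randn(feat_dim, embed_dim, generator=g)
        self.codebook = torch.randn(num_targets, embed_dim, generator=g)

    def __call__(self, features):
        # features: (batch, time, feat_dim) log-mel frames.
        z = F.normalize(features @ self.projection, dim=-1)  # (B, T, embed_dim)
        cb = F.normalize(self.codebook, dim=-1)
        # Cosine similarity to every codebook vector; the argmax index
        # is the discrete label for the masked-prediction loss.
        return (z @ cb.T).argmax(dim=-1)                     # (B, T)

quantizer = RandomProjectionQuantizer()
labels = quantizer(torch.randn(2, 100, 128))  # BERT-style prediction targets
```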
### Multi-softmax

- Instead of a single codebook, using **multiple codebooks** can improve BEST-RQ training.
- Specifically, **N softmax layers** are used to produce **N probability predictions** from the output of the encoder, which are compared against **N independent quantization targets** obtained from the masked speech features.
- This **improves the stability and convergence** of the model.

### Self-training: Noisy Student Training (NST)

- First, **train a teacher model** with augmentation on a supervised set.
- Use that teacher to **generate transcripts for unlabeled audio data**.
- A heuristic filtering method based on the ratio between the number of words and the audio length is used to **filter the pseudo-labeled data**.
- Mix the pseudo-labeled data with supervised data to train a student model.

### Chunk-wise Attention for Long-form ASR

- Using global attention to attend to the entire audio is **impractical**, so **local self-attention** is widely used.
- Stacking many local self-attention layers creates a **significant receptive-field mismatch** between training and inference.
- This problem is henceforth called the "**long-form (performance) degradation**" problem.

![](https://hackmd.io/_uploads/ryXbsXVwn.png)

- They propose a simple modification where the attention is **restricted to audio chunks**; a mask sketch follows this list.
- Dividing the audio into **8-second chunks** gives the **best recognition-quality vs. computational-cost trade-off**.
- Chunk-wise attention is more flexible than block processing performed at the **input feature level**, which would **limit every encoder layer** to the frames of the current chunk; chunking only the attention allows the other encoder layers to **process contextual frames beyond the current chunk**.
- They **only chunk the attention state** and allow the decoder to access the **entire encoder output**, unlike **Whisper**, which **segments the audio** into 30-second chunks and uses a **heuristic process** to carry the decoder states over.
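As a rough illustration of restricting attention to chunks, the sketch below builds a block-diagonal self-attention mask. This is a simplification under stated assumptions: one shared mask applied inside each attention layer, an assumed encoder frame rate, and illustrative sizes; it is not the paper's implementation.

```python
import torch

def chunkwise_attention_mask(num_frames, chunk_frames):
    """Block-diagonal mask: frame i may attend to frame j only if both
    fall in the same fixed-size chunk. True = attention allowed."""
    chunk_id = torch.arange(num_frames) // chunk_frames
    return chunk_id[:, None] == chunk_id[None, :]   # (num_frames, num_frames)

# E.g. with 40 ms encoder frames (an assumption), an 8-second chunk is 200 frames.
mask = chunkwise_attention_mask(num_frames=1000, chunk_frames=200)

# Applied inside attention as usual: disallowed positions get -inf before softmax.
scores = torch.randn(1000, 1000)
attn = scores.masked_fill(~mask, float("-inf")).softmax(dim=-1)
```

Because only the attention is masked this way, convolution and other encoder layers can still see frames across chunk boundaries, which is the flexibility the bullet above describes.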
### Multi-Objective Supervised Pre-training: BEST-RQ + text-injection

![](https://hackmd.io/_uploads/S1eEgNED3.png)

- The model is trained **jointly** on unlabeled speech, unlabeled text, and paired speech-text data.
- The training loss is based on the **text-injection loss**, including duration modeling and consistency regularization, to which a **weighted BEST-RQ loss** for the encoder is added.
- MOST yields two benefits:
    - Training with paired speech and text data results in learning **speech representations that are better aligned with text**.
    - Training simultaneously on unlabeled text, in a model that learns speech and text representations jointly, **improves the robustness of the learned representations**.
- The key architectural components of their approach are:
    - **A speech-only encoder**: A convolutional sub-sampling feature encoder and a single Conformer layer, initialized from the BEST-RQ pre-trained checkpoint and randomly, respectively.
    - **A text-only encoder**: An embedding layer, an upsampler, and a Conformer layer block, all initialized randomly.
    - **A shared Conformer encoder**: Initialized from the BEST-RQ speech encoder.
    - **The BEST-RQ speech softmax layers**: Initialized from the BEST-RQ checkpoint.
    - **A decoder unit**: Initialized randomly.
- The main idea of text-injection is to produce joint, co-aligned embeddings of speech and text as sequences in the same embedding space.
- The model is presented with three types of data, each with a different loss:
    - **Unlabeled speech data** passes through the shared encoder and the BEST-RQ softmax layers to contribute the **BEST-RQ loss**.
    - **Paired speech-text data** serves multiple functions:
        - The labeled speech flows through the speech encoder, the shared encoder, and the decoder unit, and contributes the standard ASR loss computed against the paired text.
        - The text of the paired data also passes through the text encoder. The **encoded text sequence is used to compute a consistency loss** against the encoded speech sequence. This loss trains **only the text encoder**; the speech encoder weights are frozen for this particular forward pass.
    - **Unlabeled text data** contributes a reconstruction loss, constructed by passing the text through the text encoder and masking chunks of the resulting feature sequence. The loss is computed against the original text.
- MOST proceeds in two stages: first the model trains solely on paired data for 20k steps to obtain stable decoder alignments, then the duration upsampler is trained and the unlabeled-text loss is activated.
- During fine-tuning for ASR, the **feature encoder** of the ASR model is initialized with the **speech feature encoder**, the **Conformer block** with the **shared Conformer encoder**, and a **randomly initialized** task-specific transducer is added.

### Residual Adaptation with a Frozen Encoder

- Fine-tuning the pre-trained USM individually for various tasks and domains becomes prohibitively expensive.
- Instead, the entire USM is frozen and **two parallel adapters are added to each Conformer block**, amounting to only **2%** of the original pre-trained USM's parameters; a sketch follows below.
- Training the adapters rather than fine-tuning the entire model can reduce over-fitting, especially when training data is limited.
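A minimal PyTorch sketch of the residual-adapter idea, under assumptions: a bottleneck MLP adapter (the bottleneck size and the generic block interface are illustrative, chosen so the adapters stay a small fraction of the frozen model's parameters). The paper attaches two parallel adapters per Conformer block; one is shown here for brevity.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Bottleneck adapter added around a frozen block's output."""

    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # start as identity: adapter adds 0
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(self.norm(x))))

class AdaptedBlock(nn.Module):
    """Wraps a frozen Conformer block with a trainable adapter."""

    def __init__(self, frozen_block, dim):
        super().__init__()
        self.block = frozen_block
        for p in self.block.parameters():
            p.requires_grad = False      # the pre-trained USM stays frozen
        self.adapter = ResidualAdapter(dim)

    def forward(self, x):
        return self.adapter(self.block(x))

# Stand-in for a Conformer block, just to show the wiring.
block = nn.Sequential(nn.Linear(1536, 1536), nn.ReLU())
adapted = AdaptedBlock(block, dim=1536)
out = adapted(torch.randn(2, 10, 1536))
```

Zero-initializing the adapter's up-projection means training starts from the frozen model's behavior exactly, which is one common way such adapters are kept stable.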
### Training Details

**Data processing** (a front-end sketch follows this section):

- Audio is uniformly sampled to **16 kHz**.
- The audio is then featurized into **128-dimensional log-mel filterbank coefficients**.
- **Graphemes** are used to tokenize the text for **FLEURS**, while **word-piece models (WPMs)** are used for **all other tasks**.

**BEST-RQ**:

- **Default masking and quantization parameters** of BEST-RQ.
- A **16-codebook** multi-softmax loss is used to stabilize training and improve performance.
- **No EMA** (Exponential Moving Average) is used during pre-training.

![](https://hackmd.io/_uploads/H1ny1BEwh.png)

**MOST**:

- Uses **4k sentence-piece models** (SPMs).
- Uses a single **1536-dimensional Conformer layer** as the speech encoder and the **Conformer-2B** encoder as the shared encoder.
- Un-transcribed speech, unspoken text, and transcribed speech are mixed in each batch with fixed batch sizes of 4096, 8192, and 1024, respectively.

**Supervised training**:

- **Two separate optimizers** are used for the encoder parameters and the decoder parameters of the network.
- For USM-CTC and USM-LAS, the model is trained for **100k steps with batch size 2048**. For in-domain experiments, the checkpoint is selected based on **development-set performance**.

**Training large models**:

- The GShard framework with the GSPMD backend is used to train the large models on TPUs.
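A sketch of the data-processing front end described above, assuming torchaudio. The paper specifies only the 16 kHz sample rate and the 128 mel bins; the window and hop sizes here are common defaults, not the paper's values.

```python
import torch
import torchaudio

def featurize(waveform, sample_rate):
    """Resample to 16 kHz and compute 128-dim log-mel filterbank features."""
    if sample_rate != 16000:
        waveform = torchaudio.transforms.Resample(sample_rate, 16000)(waveform)
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=16000,
        n_fft=400,        # 25 ms window (assumed)
        hop_length=160,   # 10 ms hop (assumed)
        n_mels=128,
    )(waveform)
    return torch.log(mel + 1e-6)  # log compression of the filterbank energies

feats = featurize(torch.randn(1, 48000), sample_rate=48000)  # (1, 128, frames)
```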
## Datasets

- The USM was trained using three types of datasets:
    - **Unpaired audio**:
        - **YT-NTL-U**: 12M hours of unlabeled multilingual YouTube-based audio in 51 languages.
        - **Pub-U**: 429k hours of unlabeled speech in 52 languages based on public datasets.
    - **Unpaired text**:
        - **Web-NTL**: A multilingual text-only corpus with 28B sentences spanning over 1140 languages.
    - **Paired ASR data**:
        - **YT-SUP+**: 90k hours of labeled multilingual data covering 73 languages, plus 100k hours of en-US data pseudo-labeled via *Noisy Student Training (NST)* from YT-NTL-U.
        - **Pub-S**: 10k hours of labeled multi-domain en-US data and 10k hours of labeled multilingual public data covering 102 languages.

## Key Results

### Robust Speech Recognition for the Massively Multilingual Task

- Results are in the upper part of the table below.

![](https://hackmd.io/_uploads/S1Z5GrNP2.png)

### Massively Multilingual Results Beyond 100 Languages

- Results are in the lower part of the table above.
- While generic speech models can be powerful, **performance is still maximized by in-domain fine-tuning**.

### MOST Produces Robust Representations that Generalize to New Domains

- By **adding only 2%** to the total number of parameters, the MOST representation model (USM-M-adapter) performs only **slightly worse** than the fine-tuning baselines, still showing competitive performance on downstream ASR and AST tasks.

### Pushing the Quality of AST on Unseen Languages

- Tail languages often **do not have paired transcriptions** for supervised learning; these are referred to as "**unseen languages**".
- First, **build USM-LAS-Adapter models** and **train them on FLEURS data**.
- Then, using the **USM-LAS-Adapter as a teacher**, **transcribe the unlabeled data** in the unseen languages of the FLEURS benchmark.
- This yields improvements of **more than 30%** for several languages.
- Training adapter models on small datasets and using them for pseudo-labeling appears to be a **promising route** for scaling up the set of languages USMs can transcribe.

![](https://hackmd.io/_uploads/SkRGSS4wh.png)

### USMs are Strong AST Models

![](https://hackmd.io/_uploads/HkvQ8r4wn.png)

## Analysis and Ablations

### Multi-Softmax Loss for BEST-RQ

![](https://hackmd.io/_uploads/S1IUIBEP3.png)

### Model and Language Scaling

- **Scaling up the model size** and **increasing the language coverage** of the pre-training dataset **greatly benefit** the performance of the USMs.

### BEST-RQ is a Scalable Self-supervised Learner

![](https://hackmd.io/_uploads/ByQbPrVv2.png)

### Chunk-wise Attention for Robust Long-Form Speech Recognition

![](https://hackmd.io/_uploads/rkCrDBVw3.png)

- The figure above depicts the long-form performance degradation issue described earlier.
- With a deeper 48-layer model of roughly the same parameter count, the **larger receptive-field mismatch results in higher test WERs** as the training step increases.

![](https://hackmd.io/_uploads/ry76wBEw2.png)

- The table above compares **chunk-wise attention models with an 8-second chunk size** (CW-8s in Table 7) against **local self-attention models that use 128 context frames** in each Conformer layer (LSA-128).

### TPU Serving Capacity of USM-CTC Models

![](https://hackmd.io/_uploads/Hk8IurVD2.png)

bf16 = brain float 16; RTF = real-time factor

- The 2B-parameter USM-CTC model is only **3.9x slower** than the 100M-parameter streaming model.
- This shows that **USM-CTC can be used as an efficient offline transcriber** on TPUs (or GPUs).

## Discussion and Conclusion

- **Unlabeled versus weakly labeled data**:
    - Collaborating with native speakers to identify unsupervised data in hundreds of tail languages can be an effective route to improving recognition performance on low-resource languages.
- **In-domain data is best**:
    - A robust ASR system across many domains can be built by utilizing a large amount of unsupervised data and a small amount of labeled data.
    - The most effective way to optimize performance for a given domain is to fine-tune the model on in-domain data.
- **CTC vs. RNN-T vs. LAS**:
    - The best transducer depends on the downstream task.
    - A large pre-trained model with a frozen encoder allows experimenters to test different transducers quickly and select the optimal one for their purpose.
