# Text-to-Speech (Yang)

- [Pay Better Attention to Attention: Head Selection in Multilingual and Multi-Domain Sequence Modeling](/a15kCqOPRoWiyJGRj187xA)
- [CONTENTVEC: An Improved Self-Supervised Speech Representation by Disentangling Speakers - ICLR2022](/sO0fiC1LTYCDMFHrG4M1eA)
- [DiffusER: Diffusion via Edit-based Reconstruction 12/19](/auDhM0Y_T7e-4Ao79WEg4Q)
- [A3T: Alignment-Aware Acoustic and Text Pretraining for Speech Synthesis and Editing - ICML 2022](/exW3_xEbRk6KBxgoypqORw)
- [PnG BERT / Dynamically Adaptive Machine Speech Chain Inference for TTS in Noisy Environment](/xKPAVBAlSM2S9pHQbT3d4g)
- [MultiSpeech / ADAPTIVE TEXT TO SPEECH FOR CUSTOM VOICE](/JWFvuvzYTxGOBYNl4p9kRg)
- [Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech](/WPhSgdvKS8ucpOWX-RNuzg)
- [Fused Acoustic and Text Encoding for Multimodal Bilingual Pretraining and Speech Translation](/FtKB1owcRqaTd2TwU-qxWw)
- [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language - ICML2022](/Et8sX320QI2DYfX_3qWs0Q)
{"metaMigratedAt":"2023-06-17T19:53:50.567Z","metaMigratedFrom":"Content","title":"Text-to-Speech (Yang)","breaks":true,"description":"Pay Better Attention to Attention: Head Selection in Multilingual and Multi-Domain Sequence Modeling","contributors":"[{\"id\":\"8008890c-8864-4761-8a67-cd9ab7a27e6d\",\"add\":1203,\"del\":143}]"}
Expand menu