May 4, 2020 · In this paper, we introduce a local recurrent neural network (Local-RNN) into the Transformer to make full use of the advantages of both RNN and ...
This is inspired by the Transformer network [21], where self-attention plays a vital role in modeling global dependencies. We also use the Local-RNN [22] ...
Abstract: Although the Transformer-based neural end-to-end TTS model has demonstrated extreme effectiveness in capturing long-term dependencies and achieved state-of-the-art ...
This paper introduces a local recurrent neural network (Local-RNN) into the Transformer to make full use of the advantages of both RNN and Transformer while ...
Improving End-to-End Speech Synthesis with Local Recurrent Neural Network Enhanced Transformer · A Comprehensive Survey on Applications of Transformers for Deep ...
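These results all point at the same architectural idea: a recurrent network models short-range order inside small local windows, while self-attention models long-range dependency across the whole sequence. Below is a minimal sketch of one such layer in PyTorch; the class names, the GRU cell, the window size, and the non-overlapping chunking are assumptions of this sketch, not the paper's exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalRNN(nn.Module):
    """A shared RNN run independently over fixed-size chunks of the sequence,
    so recurrence captures local order while attention handles global context."""
    def __init__(self, d_model: int, window: int):
        super().__init__()
        self.window = window
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, time, d_model)
        b, t, d = x.shape
        pad = (-t) % self.window                          # pad time axis to a multiple of window
        x = F.pad(x, (0, 0, 0, pad))
        w = x.shape[1] // self.window
        out, _ = self.rnn(x.reshape(b * w, self.window, d))
        return out.reshape(b, w * self.window, d)[:, :t]  # drop the padding again

class LocalRNNTransformerBlock(nn.Module):
    """One block: local recurrence, then global multi-head self-attention,
    each wrapped in a residual connection and layer norm."""
    def __init__(self, d_model: int = 256, n_heads: int = 4, window: int = 8):
        super().__init__()
        self.local = LocalRNN(d_model, window)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.norm1(x + self.local(x))                 # short-range structure via the RNN
        a, _ = self.attn(x, x, x, need_weights=False)
        return self.norm2(x + a)                          # long-range structure via attention

Chunking bounds the RNN's sequential cost by the window size, so the layer stays parallelizable, while the attention step restores a global receptive field.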
Jul 12, 2019 · In this paper, we propose the R-Transformer, which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoiding their ...
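The R-Transformer snippet describes the same recipe, though as that paper is usually presented, its local RNN runs over a sliding window ending at each position rather than over disjoint chunks, so every output summarizes the few inputs immediately preceding it. A sketch of that variant (the function name, GRU cell, and window size are this sketch's assumptions):

import torch
import torch.nn as nn

def sliding_local_rnn(x: torch.Tensor, rnn: nn.GRU, window: int) -> torch.Tensor:
    """For each position t, summarize the `window` inputs ending at t with a
    shared RNN (sliding-window local recurrence; a sketch, not the paper's code)."""
    b, t, d = x.shape
    pad = x.new_zeros(b, window - 1, d)        # left-pad so early positions see full windows
    xp = torch.cat([pad, x], dim=1)
    win = xp.unfold(1, window, 1)              # (b, t, d, window): one window per position
    win = win.permute(0, 1, 3, 2).reshape(b * t, window, d)
    out, _ = rnn(win)                          # run the RNN inside every window in parallel
    return out[:, -1].reshape(b, t, d)         # keep each window's final hidden state

# Usage: local recurrence feeding a later global attention layer.
rnn = nn.GRU(64, 64, batch_first=True)
x = torch.randn(2, 20, 64)
h = sliding_local_rnn(x, rnn, window=5)        # shape (2, 20, 64)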
Leveraging such improvements, speech synthesis using a Transformer network has been reported to generate human-like speech audio. However, such ...
Sep 15, 2019 · End-to-end ASR has been advanced by improving sequence-to-sequence (S2S) architectures, e.g., pyramid networks [2], [3], connectionist ...
We apply it to the speech Transformer, accelerating the convergence speed of the model during training and improving the final speech recognition accuracy.