Direct Text to Speech Translation System Using Acoustic Units
Journal article, IEEE Signal Processing Letters, year: 2023


Abstract

This letter proposes a direct text-to-speech translation system based on discrete acoustic units. The framework takes text in different source languages as input and generates speech in the target language without requiring text transcriptions in that language. Motivated by the success of acoustic units in previous work on direct speech-to-speech translation, we use the same pipeline to extract the acoustic units: a speech encoder combined with a clustering algorithm. Once the units are obtained, an encoder-decoder architecture is trained to predict them, and a vocoder then generates speech from the units. Our approach for direct text-to-speech translation was tested on the new CVSS corpus, with two different text mBART models used for initialisation. The proposed systems achieve competitive performance for most of the language pairs evaluated. Moreover, results show a remarkable improvement when the proposed architecture is initialised with a model pre-trained on more languages.
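As a rough illustration of the unit-extraction step described above, the sketch below encodes target-language speech with a pretrained speech encoder and clusters the frame-level features with k-means to obtain discrete acoustic units. This is a minimal sketch under assumptions: the choice of HuBERT as encoder, the layer index, the number of clusters, and the file names are illustrative, not the letter's exact configuration.

```python
# Hypothetical sketch of discrete acoustic-unit extraction:
# pretrained speech encoder (HuBERT here, an assumption) + k-means clustering.
import torch
import torchaudio
from sklearn.cluster import MiniBatchKMeans

bundle = torchaudio.pipelines.HUBERT_BASE
encoder = bundle.get_model().eval()

def encode(wav_path: str, layer: int = 6) -> torch.Tensor:
    """Return frame-level continuous features from the chosen encoder layer."""
    waveform, sr = torchaudio.load(wav_path)
    waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)
    with torch.no_grad():
        features, _ = encoder.extract_features(waveform)
    return features[layer - 1].squeeze(0)  # shape: (frames, dim)

# Fit k-means on features pooled over target-language speech
# (file names and cluster count are placeholders for illustration).
train_feats = torch.cat([encode(p) for p in ["tgt_utt1.wav", "tgt_utt2.wav"]])
kmeans = MiniBatchKMeans(n_clusters=100, batch_size=1024)
kmeans.fit(train_feats.numpy())

def to_units(wav_path: str) -> list[int]:
    """Map an utterance to its discrete acoustic-unit sequence."""
    return kmeans.predict(encode(wav_path).numpy()).tolist()
```

In this reading of the pipeline, the resulting unit sequences would serve as training targets for the mBART-initialised text encoder and unit decoder, and a unit-based vocoder would convert the predicted units back into a waveform.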
Main file: 2309.07478.pdf (723.93 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04212239, version 1 (20-09-2023)

Identifiers

Cite

Victoria Mingote, Pablo Gimeno, Luis Vicente, Sameer Khurana, Antoine Laurent, et al. Direct Text to Speech Translation System Using Acoustic Units. IEEE Signal Processing Letters, 2023, 30, pp. 1262-1266. ⟨10.13039/501100011033⟩. ⟨hal-04212239⟩

