ZeroST: Zero-Shot Speech Translation - LIUM - Language and Speech Technology Team
Conference paper, 2024

ZeroST: Zero-Shot Speech Translation

Abstract

Our work introduces the Zero-Shot Speech Translation (ZeroST) framework, leveraging the synergistic potential of pre-trained multilingual speech and text foundation models. Inspired by recent advances in multimodal foundation models, ZeroST utilizes a Query Transformer (Q-Former) to seamlessly connect a speech foundation model, such as Whisper or Massively Multilingual Speech (MMS), with a text translation model like No-Language-Left-Behind (NLLB). Our proposed learning framework enables the model to perform speech-to-text translation in a zero-shot manner, bypassing the need for explicit supervision from expensive-to-collect speech-text translation pairs during training. Our extensive experiments, notably on the Europarl-ST benchmark, demonstrate that ZeroST achieves results comparable to those of a strong cascaded translation system and significantly outperforms baseline models. This promising approach paves the way for future research in zero-shot speech translation.
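The abstract describes wiring a Q-Former between a frozen speech foundation model and a frozen text translation model. The sketch below is only a rough illustration of that wiring, not the authors' implementation: the Q-Former is approximated with a standard Transformer decoder over learned queries, the checkpoints (openai/whisper-small, facebook/nllb-200-distilled-600M), hidden dimensions, and query count are assumptions, and the zero-shot training objective from the paper is not shown.

```python
# Illustrative sketch (assumptions noted above): a Q-Former-style bridge maps
# frozen speech-encoder states into the embedding space of a frozen NLLB model.
import torch
import torch.nn as nn
from transformers import WhisperModel, AutoModelForSeq2SeqLM


class QFormerBridge(nn.Module):
    """Learned queries cross-attend to speech-encoder states and emit a
    fixed-length sequence projected into the translation model's embedding space."""

    def __init__(self, num_queries=64, speech_dim=768, text_dim=1024,
                 num_layers=4, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, speech_dim) * 0.02)
        layer = nn.TransformerDecoderLayer(
            d_model=speech_dim, nhead=num_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.proj = nn.Linear(speech_dim, text_dim)

    def forward(self, speech_states):                  # (B, T, speech_dim)
        batch = speech_states.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        out = self.decoder(tgt=q, memory=speech_states)
        return self.proj(out)                          # (B, num_queries, text_dim)


# Frozen foundation models (checkpoints chosen for illustration only).
speech_encoder = WhisperModel.from_pretrained("openai/whisper-small").encoder.eval()
nllb = AutoModelForSeq2SeqLM.from_pretrained(
    "facebook/nllb-200-distilled-600M").eval()
for p in list(speech_encoder.parameters()) + list(nllb.parameters()):
    p.requires_grad = False

bridge = QFormerBridge(speech_dim=speech_encoder.config.d_model,
                       text_dim=nllb.config.d_model)

# Forward pass: speech features -> Q-Former queries -> NLLB decoder.
feats = torch.randn(1, 80, 3000)                       # dummy log-Mel features
with torch.no_grad():
    enc = speech_encoder(feats).last_hidden_state      # (1, T, d_speech)
soft_prompt = bridge(enc)                              # (1, 64, d_text)
out = nllb(inputs_embeds=soft_prompt,
           decoder_input_ids=torch.tensor(
               [[nllb.config.decoder_start_token_id]]))
print(out.logits.shape)                                # (1, 1, vocab_size)
```

In this reading, only the bridge parameters would be trained, which is what lets the frozen translation model decode from speech-derived embeddings without ever seeing paired speech-translation data.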
Main file
TR2024-122.pdf (428.92 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04692601, version 1 (10-09-2024)

Identifiers

  • HAL Id: hal-04692601, version 1

Cite

Sameer Khurana, Chiori Hori, Antoine Laurent, Gordon Wichern, Jonathan Le Roux. ZeroST: Zero-Shot Speech Translation. Interspeech 2024, Sep 2024, Kos Island, Greece. ⟨hal-04692601⟩