Conference Paper, Year: 2024

Cross-Lingual Transfer Learning for Low-Resource Speech Translation

Abstract

This paper presents a novel three-step transfer learning framework for enhancing cross-lingual transfer from high-resource to low-resource languages in the downstream task of Automatic Speech Translation. The approach integrates a semantic knowledge-distillation step into the existing two-step cross-lingual transfer learning framework XLS-R. This extra step aims to encode semantic knowledge into the multilingual speech encoder pre-trained via Self-Supervised Learning on unlabeled speech. The proposed three-step cross-lingual transfer learning framework addresses the large cross-lingual transfer gap (TRFGap) observed in the XLS-R framework between high-resource and low-resource languages. We validate our proposal through extensive experiments and comparisons on the CoVoST-2 benchmark, showing significant improvements in translation performance, especially for low-resource languages, and a notable reduction in the TRFGap.
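The abstract describes the added step as distilling semantic knowledge into the SSL-pre-trained speech encoder before fine-tuning for translation. Below is a minimal sketch of what such a distillation objective can look like, assuming a frozen multilingual text encoder as the teacher and a cosine-distance loss pulling pooled speech embeddings toward the teacher's sentence embeddings. The module names, dimensions, mean-pooling choice, and loss are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a semantic knowledge-distillation step (assumed setup, not the
# authors' exact implementation): a speech encoder's pooled output is trained
# to match a frozen text encoder's sentence embedding for the same utterance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechEncoderWithPooling(nn.Module):
    """Stand-in for a pre-trained SSL speech encoder (e.g. XLS-R) plus a
    projection mapping pooled frame features into the text-embedding space."""
    def __init__(self, feat_dim=1024, embed_dim=768):
        super().__init__()
        self.frame_encoder = nn.Linear(80, feat_dim)   # placeholder for the real encoder
        self.proj = nn.Linear(feat_dim, embed_dim)

    def forward(self, fbank):                          # fbank: (B, T, 80)
        frames = self.frame_encoder(fbank)             # (B, T, feat_dim)
        pooled = frames.mean(dim=1)                    # mean-pool over time
        return self.proj(pooled)                       # (B, embed_dim)

def distillation_loss(speech_emb, text_emb):
    """Cosine-distance loss: the 'semantic knowledge' distilled from the
    frozen text encoder into the speech encoder."""
    return (1.0 - F.cosine_similarity(speech_emb, text_emb, dim=-1)).mean()

# Toy training step with random tensors standing in for a real batch.
speech_encoder = SpeechEncoderWithPooling()
optimizer = torch.optim.AdamW(speech_encoder.parameters(), lr=1e-4)

fbank = torch.randn(4, 200, 80)        # batch of filterbank features
with torch.no_grad():                  # the teacher (text encoder) stays frozen
    text_emb = torch.randn(4, 768)     # placeholder for teacher sentence embeddings

loss = distillation_loss(speech_encoder(fbank), text_emb)
loss.backward()
optimizer.step()
```

After this distillation step, the speech encoder would be fine-tuned on labeled speech-translation data, which corresponds to the final step of the three-step framework described above.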

Dates and versions

hal-04432308, version 1 (01-02-2024)
hal-04432308, version 2 (07-02-2024)

Identifiers

  • HAL Id: hal-04432308, version 2

Cite

Sameer Khurana, Nauman Dawalatabad, Antoine Laurent, Luis Vicente, Pablo Gimeno, et al. Cross-Lingual Transfer Learning for Low-Resource Speech Translation. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr 2024, Seoul, South Korea. ⟨hal-04432308v2⟩