Conference Papers Year : 2022

ASR-Generated Text for Language Model Pre-training Applied to Speech Tasks


Abstract

We aim to improve spoken language modeling (LM) using a very large amount of automatically transcribed speech. We leverage the INA (French National Audiovisual Institute) collection and obtain 19 GB of text after applying ASR to 350,000 hours of diverse TV shows. From this, spoken language models are trained either by fine-tuning an existing LM (FlauBERT) or by training an LM from scratch. The new models (FlauBERT-Oral) are shared with the community and evaluated on three downstream tasks: spoken language understanding, classification of TV shows, and syntactic parsing of speech. Results show that FlauBERT-Oral can be beneficial compared to the initial FlauBERT version, demonstrating that, despite its inherently noisy nature, ASR-generated text can be used to build spoken language models.
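Since FlauBERT-Oral keeps the standard FlauBERT architecture, continued masked-language-model pre-training on ASR transcripts can be approximated with off-the-shelf tooling. The sketch below is not the authors' code: it assumes the public `flaubert/flaubert_base_cased` checkpoint as a starting point, and the transcript file name and all hyperparameters are illustrative placeholders, not values from the paper.

```python
# Minimal sketch (not the authors' pipeline): continued masked-LM pre-training
# of FlauBERT on ASR-generated transcripts with Hugging Face transformers.
# "asr_transcripts.txt" and all hyperparameters below are assumptions.
from transformers import (
    FlaubertTokenizer,
    FlaubertWithLMHeadModel,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertWithLMHeadModel.from_pretrained("flaubert/flaubert_base_cased")

# One ASR transcript per line; the path is a placeholder.
raw = load_dataset("text", data_files={"train": "asr_transcripts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking of 15% of tokens, as in standard MLM pre-training.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

args = TrainingArguments(
    output_dir="flaubert-oral-sketch",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()
```

Training from scratch, as also explored in the paper, would instead initialize the model from a fresh configuration and a tokenizer retrained on the ASR text; the fine-tuning route shown above only continues pre-training from existing weights.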
Main file: 2207.01893.pdf (183.55 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03770506 , version 1 (06-09-2022)


Cite

Valentin Pelloin, Franck Dary, Nicolas Hervé, Benoît Favre, Nathalie Camelin, et al. ASR-Generated Text for Language Model Pre-training Applied to Speech Tasks. Interspeech 2022, Sep 2022, Incheon, South Korea. ⟨hal-03770506⟩