Conference paper · Year: 2020

Robust Training of Vector Quantized Bottleneck Models

Abstract

In this paper we demonstrate methods for reliable and efficient training of discrete representations using Vector-Quantized Variational Auto-Encoder models (VQ-VAEs). Discrete latent variable models have been shown to learn nontrivial representations of speech, applicable to unsupervised voice conversion and reaching state-of-the-art performance on unit discovery tasks. For unsupervised representation learning, they have become viable alternatives to continuous latent variable models such as the Variational Auto-Encoder (VAE). However, training deep discrete variable models is challenging, due to the inherent non-differentiability of the discretization operation. In this paper we focus on VQ-VAE, a state-of-the-art discrete bottleneck model shown to perform on par with its continuous counterparts. It quantizes encoder outputs with on-line k-means clustering. We show that the codebook learning can suffer from poor initialization and non-stationarity of the clustered encoder outputs. We demonstrate that these issues can be successfully overcome by increasing the learning rate for the codebook and by periodic data-dependent codeword re-initialization. As a result, we achieve more robust training across different tasks and significantly increase the usage of latent codewords, even for large codebooks. This has practical benefits, for instance, in unsupervised representation learning, where large codebooks may lead to disentanglement of latent representations.
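The abstract mentions three ingredients: a VQ bottleneck trained with on-line k-means-style codebook updates, a larger learning rate for the codebook, and periodic data-dependent re-initialization of unused codewords. Below is a minimal PyTorch sketch of these ideas, not the authors' implementation; the class and method names (`VQBottleneck`, `reinit_dead_codes`), the `usage_threshold` parameter, and the learning-rate values are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQBottleneck(nn.Module):
    """Minimal vector-quantized bottleneck (sketch, not the paper's code).

    Encoder outputs are assigned to their nearest codeword; gradients flow
    through the quantization via the straight-through estimator.
    """

    def __init__(self, num_codes=512, dim=64, commitment_cost=0.25):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim) * 0.01)
        self.commitment_cost = commitment_cost
        # Running (decayed) count of how often each codeword is selected,
        # used to detect "dead" codes for data-dependent re-initialization.
        self.register_buffer("usage", torch.zeros(num_codes))

    def forward(self, z_e):                        # z_e: (batch, dim) encoder outputs
        # Nearest-neighbour assignment (the on-line k-means assignment step).
        dists = torch.cdist(z_e, self.codebook)    # (batch, num_codes)
        codes = dists.argmin(dim=1)                # (batch,)
        z_q = self.codebook[codes]                 # quantized vectors

        # Track codeword usage for later re-initialization.
        with torch.no_grad():
            self.usage.mul_(0.99).scatter_add_(
                0, codes, torch.ones_like(codes, dtype=self.usage.dtype))

        # Codebook loss moves codewords toward encoder outputs; commitment
        # loss keeps encoder outputs close to their assigned codewords.
        codebook_loss = F.mse_loss(z_q, z_e.detach())
        commit_loss = F.mse_loss(z_e, z_q.detach())
        loss = codebook_loss + self.commitment_cost * commit_loss

        # Straight-through estimator: copy gradients from z_q to z_e.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, codes, loss

    @torch.no_grad()
    def reinit_dead_codes(self, z_e, usage_threshold=1.0):
        """Replace rarely used codewords with randomly chosen encoder
        outputs from the current batch (data-dependent re-initialization)."""
        dead = (self.usage < usage_threshold).nonzero(as_tuple=True)[0]
        if dead.numel() == 0:
            return
        idx = torch.randint(0, z_e.size(0), (dead.numel(),), device=z_e.device)
        self.codebook[dead] = z_e[idx]
        self.usage[dead] = usage_threshold


# Example: give the codebook a larger learning rate than the rest of the model
# (values are illustrative; `encoder` is a stand-in module).
encoder = nn.Linear(39, 64)
vq = VQBottleneck(num_codes=512, dim=64)
optimizer = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": 1e-4},
    {"params": vq.parameters(), "lr": 1e-3},  # e.g. 10x larger for the codebook
])
```

In this sketch, `reinit_dead_codes` would be called periodically during training with a recent batch of encoder outputs, so that codewords left unused after a poor initialization are pulled back into the region of space the encoder actually occupies.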
Main file: robust_vq_arxiv.pdf (449.62 KB)
Origin: files produced by the author(s)

Dates and versions

hal-02912027, version 1 (05-08-2020)

Identifiers

  • HAL Id: hal-02912027, version 1

Cite

Adrian Łańcucki, Jan Chorowski, Guillaume Sanchez, Ricard Marxer, Nanxin Chen, et al. Robust Training of Vector Quantized Bottleneck Models. IJCNN 2020, Jul 2020, Glasgow, United Kingdom. ⟨hal-02912027⟩
