AN EXPLAINABLE PROXY MODEL FOR MULTILABEL AUDIO SEGMENTATION
Conference paper, Year: 2024


Abstract

Audio signal segmentation is a key task for automatic audio indexing. It consists of detecting the boundaries of class-homogeneous segments in the signal. In many applications, explainable AI is a vital requirement for transparent decision-making with machine learning. In this paper, we propose an explainable multilabel segmentation model that solves speech activity detection (SAD), music detection (MD), noise detection (ND), and overlapped speech detection (OSD) simultaneously. This proxy model uses non-negative matrix factorization (NMF) to map the embeddings used for segmentation to the frequency domain. Experiments conducted on two datasets show performance similar to that of the pre-trained black-box model while providing strong explainability. Specifically, the frequency bins used for the decision can be easily identified at both the segment level (local explanations) and the global level (class prototypes).
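
As an informal illustration of the NMF idea described above, the minimal sketch below decomposes a magnitude spectrogram V ≈ WH with scikit-learn and projects per-frame activations back to frequency bins, yielding segment-level and prototype-like explanations. This is not the authors' proxy model: the synthetic signal, the pseudo-labels, the library choices (NumPy, SciPy, scikit-learn), and all parameter values are assumptions made only for illustration.

# Minimal sketch, assuming a plain NMF of the magnitude spectrogram rather than
# the paper's learned proxy model. All signals, labels, and parameters below are
# illustrative assumptions.
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import NMF

sr = 16000
t = np.arange(0, 4.0, 1.0 / sr)
rng = np.random.default_rng(0)
# Synthetic "audio": a 440 Hz tone in the first two seconds, broadband noise afterwards.
x = np.where(t < 2.0, np.sin(2 * np.pi * 440.0 * t), 0.0)
x = x + np.where(t >= 2.0, 0.5 * rng.standard_normal(t.shape), 0.0)

freqs, frame_times, Z = stft(x, fs=sr, nperseg=512)
V = np.abs(Z)                        # magnitude spectrogram, shape (n_freq_bins, n_frames)

# V ~= W @ H: W holds spectral templates (n_freq_bins x K), H their per-frame activations (K x n_frames).
nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(V)             # frequency dictionary
H = nmf.components_                  # activations over time

# Local explanation (illustrative): which frequency bins support the decision at frame t0?
t0 = 10
local_spectrum = W @ H[:, t0]
top_bins = freqs[np.argsort(local_spectrum)[-5:]]
print(f"Frame {t0}: most influential bins ~ {np.round(top_bins, 1)} Hz")

# Global "class prototype" (illustrative): average the activations over frames carrying a
# (pseudo-)label, then project back to the frequency axis through W.
tone_frames = frame_times < 2.0      # pretend these frames are labelled "tone"
prototype = W @ H[:, tone_frames].mean(axis=1)
print(f"Tone prototype peaks at ~ {freqs[np.argmax(prototype)]:.1f} Hz")

In this toy setting the tone prototype peaks near 440 Hz, which mirrors how the paper reads explanations directly off the frequency axis instead of an opaque embedding space.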
Main file
ICASSP2024_nmf.pdf (678.15 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04393946, version 1 (16-01-2024)

Identifiers

  • HAL Id: hal-04393946, version 1

Cite

Théo Mariotte, Antonio Almudévar, Marie Tahon, Alfonso Ortega. AN EXPLAINABLE PROXY MODEL FOR MULTILABEL AUDIO SEGMENTATION. International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, Apr 2024, Seoul, South Korea. ⟨hal-04393946⟩
