Self-Supervised Models for Phoneme Recognition: Applications in Children's Speech for Reading Learning

Conference paper, 2024

Abstract

Child speech recognition remains an underdeveloped research area due to the scarcity of data (especially for non-English languages) and the specific difficulties of the task. Having explored various architectures for child speech recognition in previous work, in this article we turn to recent self-supervised models. We first compare wav2vec 2.0, HuBERT and WavLM models adapted to phoneme recognition in French child speech, then continue our experiments with the best of them, WavLM base+. We further adapt it by unfreezing its transformer blocks during fine-tuning on child speech, which greatly improves its performance and makes it significantly outperform our baseline, a Transformer+CTC model. Finally, we study in detail the behaviour of these two models under the real conditions of our application, and show that WavLM base+ is more robust to varied reading tasks and noise levels.
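As an illustration, below is a minimal sketch (not the authors' code) of the kind of fine-tuning setup the abstract describes: a pre-trained WavLM base+ backbone with a CTC head for phoneme recognition, where the convolutional feature encoder stays frozen while the transformer blocks are left trainable ("unfrozen"). It uses the Hugging Face transformers API; the checkpoint name is the public WavLM base+ release, and the phoneme vocabulary size is an assumption for the example.

import torch
from transformers import WavLMForCTC

NUM_PHONEME_TOKENS = 40  # hypothetical: French phoneme inventory + CTC blank

# Load the pre-trained WavLM base+ backbone and attach a randomly
# initialised CTC head sized for the phoneme vocabulary.
model = WavLMForCTC.from_pretrained(
    "microsoft/wavlm-base-plus",
    vocab_size=NUM_PHONEME_TOKENS,
    ctc_loss_reduction="mean",
)

# Keep the CNN feature encoder frozen; its low-level acoustic features
# transfer well, so only the transformer blocks (and the CTC head)
# are updated during fine-tuning on child speech.
model.freeze_feature_encoder()
for param in model.wavlm.encoder.parameters():
    param.requires_grad = True  # "unfrozen" transformer blocks

# Dummy forward pass: one second of 16 kHz audio with a phoneme target
# sequence; the returned loss is the CTC loss used for fine-tuning.
waveform = torch.randn(1, 16000)
labels = torch.randint(1, NUM_PHONEME_TOKENS, (1, 12))
loss = model(input_values=waveform, labels=labels).loss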
Main file: Interspeech_2024-6.pdf (141.86 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04694927, version 1 (11-09-2024)

Identifiers

HAL Id: hal-04694927
DOI: 10.21437/Interspeech.2024-1095

Cite

Lucas Block Medin, Thomas Pellegrini, Lucile Gelin. Self-Supervised Models for Phoneme Recognition: Applications in Children's Speech for Reading Learning. 25th Interspeech Conference (Interspeech 2024), Sep 2024, Kos, Greece. pp. 5168-5172. ⟨10.21437/Interspeech.2024-1095⟩. ⟨hal-04694927⟩