Conference paper, 2024

Multimodal Integration in Audio-Visual Speech Recognition: How Far Are We From Human-Level Robustness?

Abstract

This paper introduces a novel evaluation framework, inspired by methods from human psychophysics, to systematically assess the robustness of multimodal integration in audio-visual speech recognition (AVSR) models relative to human abilities. We present preliminary results on AV-HuBERT [Shi et al., 2022a,b] suggesting that multimodal integration in state-of-the-art (SOTA) AVSR models remains mediocre compared to human performance, and we discuss avenues for improvement.
Main file
86_Multimodal_Integration_in_A.pdf (358.64 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04801000, version 1 (24-11-2024)

Identifiers

  • HAL Id: hal-04801000, version 1

Cite

Marianne Schweitzer, Anna Montagnini, Abdellah Fourtassi, Thomas Schatz. Multimodal Integration in Audio-Visual Speech Recognition: How Far Are We From Human-Level Robustness? NeurIPS 2024 Workshop on Behavioral Machine Learning, Dec 2024, Vancouver (BC), Canada. ⟨hal-04801000⟩
