Audio-visual deep learning methods for musical instrument classification and separation

  1. SLIZOVSKAIA, OLGA
Supervised by:
  1. Glòria Haro Ortega Supervisor

Defending university: Universitat Pompeu Fabra

Date of defense: 21 October 2020

Jury:
  1. Xavier Giro Nieto President
  2. Xavier Serra Casals Secretary
  3. Estefanía Cano López Rapporteur

Type: Thesis

Teseo: 634820 DIALNET

Abstract

In music perception, the information we receive through the visual and auditory systems is often complementary. Moreover, visual perception plays an important role in the overall experience of attending a music performance. This motivates machine learning methods that combine audio and visual information for automatic music analysis. This thesis addresses two research problems in the context of music performance videos: instrument classification and source separation. For each task, a multimodal approach is developed using deep learning techniques to train an encoded representation of each modality. For source separation, we also study two approaches conditioned on instrument labels and examine how these two additional sources of information affect separation performance compared with a conventional model. Another important aspect of this work is the exploration of different fusion methods that allow for better integration of information from the associated modalities.
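To make the idea of label-conditioned separation concrete, the sketch below shows one common way of injecting an instrument label into an audio network: a feature-wise affine (FiLM-style) modulation of intermediate spectrogram features. This is a minimal, generic illustration and not the thesis's exact architecture; the class name, embedding size, and feature dimensions are assumptions made for the example.

```python
import torch
import torch.nn as nn


class LabelConditioning(nn.Module):
    """FiLM-style conditioning: an instrument label produces a per-channel
    scale (gamma) and shift (beta) applied to intermediate audio features.
    Illustrative sketch only; sizes are arbitrary assumptions."""

    def __init__(self, num_instruments: int, num_channels: int, embed_dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(num_instruments, embed_dim)
        self.to_gamma = nn.Linear(embed_dim, num_channels)  # per-channel scale
        self.to_beta = nn.Linear(embed_dim, num_channels)   # per-channel shift

    def forward(self, features: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, freq, time) feature map of a spectrogram encoder
        # label:    (batch,) integer instrument class indices
        e = self.embed(label)                           # (batch, embed_dim)
        gamma = self.to_gamma(e)[:, :, None, None]      # broadcast over freq and time
        beta = self.to_beta(e)[:, :, None, None]
        return gamma * features + beta


if __name__ == "__main__":
    # Minimal usage example with random data (13 instrument classes assumed).
    cond = LabelConditioning(num_instruments=13, num_channels=16)
    feats = torch.randn(2, 16, 256, 128)   # hypothetical encoder features
    labels = torch.tensor([3, 7])          # instrument indices for the two clips
    out = cond(feats, labels)
    print(out.shape)                       # torch.Size([2, 16, 256, 128])
```

The same mechanism extends naturally to multimodal fusion: instead of an embedded class label, the conditioning vector can be an embedding computed from the video stream, so that visual information modulates the audio separation network.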