Audio-visual deep learning methods for musical instrument classification and separation

Author:
  1. Slizovskaia, Olga
Supervised by:
  1. Glòria Haro Ortega (Director)

Defence university: Universitat Pompeu Fabra

Defence date: 21 October 2020

Committee:
  1. Xavier Giro Nieto (Chair)
  2. Xavier Serra Casals (Secretary)
  3. Estefanía Cano López (Committee member)

Type: Thesis

Teseo: 634820

Abstract

In music perception, the information we receive from the visual and auditory systems is often complementary. Moreover, visual perception plays an important role in the overall experience of attending a music performance. This motivates machine learning methods that combine audio and visual information for automatic music analysis. This thesis addresses two research problems in the context of music performance videos: instrument classification and source separation. For each task, a multimodal approach is developed using deep learning techniques that learn an encoded representation for each modality. For source separation, we also study two approaches conditioned on instrument labels and examine the influence that two extra sources of information have on separation performance compared with a conventional model. Another important aspect of this work is the exploration of different fusion methods that allow better integration of information from the associated domains.
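
As one possible illustration of the label conditioning mentioned in the abstract, the sketch below applies FiLM-style feature modulation, in which an instrument label is mapped to per-channel scale and shift factors for audio feature maps. It is a minimal PyTorch sketch under assumed shapes and names (LabelConditioning, num_labels, num_channels are illustrative), not the exact architecture developed in the thesis.

    # Minimal sketch of label-conditioned feature modulation (FiLM-style), assuming PyTorch.
    # Module names, tensor sizes, and the conditioning strategy are illustrative assumptions,
    # not the thesis's exact architecture.
    import torch
    import torch.nn as nn


    class LabelConditioning(nn.Module):
        """Maps an instrument label to per-channel scale and shift factors."""

        def __init__(self, num_labels: int, num_channels: int):
            super().__init__()
            self.gamma = nn.Linear(num_labels, num_channels)  # multiplicative term
            self.beta = nn.Linear(num_labels, num_channels)   # additive term

        def forward(self, features: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
            # features: (batch, channels, freq, time) audio feature maps
            # label:    (batch, num_labels) one-hot or soft instrument labels
            gamma = self.gamma(label).unsqueeze(-1).unsqueeze(-1)
            beta = self.beta(label).unsqueeze(-1).unsqueeze(-1)
            return gamma * features + beta


    # Toy usage: condition 16-channel spectrogram features on one of 7 instruments.
    features = torch.randn(2, 16, 128, 64)  # (batch, channels, freq, time)
    labels = nn.functional.one_hot(torch.tensor([3, 5]), num_classes=7).float()
    conditioned = LabelConditioning(num_labels=7, num_channels=16)(features, labels)
    print(conditioned.shape)  # torch.Size([2, 16, 128, 64])

Modulating intermediate features in this way lets a single separation network adapt its behaviour to the target instrument without training a separate model per class, which is the general motivation for conditioning separation on side information such as labels.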