Hi, I'm Matthias. Welcome to my website and blog!

I'm a lecturer and researcher in the field of music informatics. I currently work as a Royal Academy of Engineering Research Fellow with the Centre for Digital Music at Queen Mary, University of London (see my Queen Mary web page).

Past workplaces include the Internet music platform Last.fm, where I worked as a Research Fellow, the Japanese research centre AIST in Tsukuba, and, as a research student, the Centre for Digital Music. Find more info on my biography page.

My main research interest (and the subject of my PhD thesis) has been the automatic transcription of chords from audio, but I've also done work on segmentation, harpsichord tuning estimation and, recently, lyrics-to-audio alignment. Please do have a look at my publications website to learn more about my work, ask Google Scholar directly, or visit my Software site if you're more interested in just using it.

Conference Paper, Publication »

[16 Feb 2015]
Publication authored by Tian Cheng, Simon Dixon and Matthias Mauch. We investigate piano acoustics and compare the theoretical temporal decay of individual partials to recordings of real-world piano notes from the RWC Music Database. We first describe the theory behind double decay and beats, known phenomena caused by the interaction between strings and soundboard. Then we fit the decay of the first 30 partials to a standard linear model and two physically motivated non-linear models that take into account the coupling of strings and soundboard. We show that the use of non-linear models provides a better fit to the data. We use these estimated decay rates to parameterise the characteristic decay response (decay rates across frequency) of the piano under investigation. The results also show…
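
To give a flavour of the simplest of the three models, here is a minimal sketch (my own illustration, not the paper's code or data) of fitting a linear decay in dB to one partial's amplitude envelope by least squares; the envelope below is synthetic and the function name linear_decay_fit is made up for this example.

```python
# A minimal sketch: least-squares fit of a linear (in dB) decay model
# to a single partial's amplitude envelope. Synthetic data, for illustration only.
import numpy as np

def linear_decay_fit(times_s, amplitudes_db):
    """Fit a_db(t) = a0 + r * t; returns (start level in dB, decay rate in dB/s)."""
    A = np.column_stack([np.ones_like(times_s), times_s])
    coeffs, *_ = np.linalg.lstsq(A, amplitudes_db, rcond=None)
    return coeffs[0], coeffs[1]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)                     # one second of frames
a_db = -10.0 - 60.0 * t + rng.normal(0.0, 0.5, t.size)  # ~60 dB/s decay plus noise
a0, rate = linear_decay_fit(t, a_db)
print(f"fitted start level {a0:.1f} dB, decay rate {rate:.1f} dB/s")
```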

Conference Paper, Publication »

[16 Jan 2015]
http://cdn1.editmysite.com/uploads/3/2/1/8/32182799/background-images/1927180276.png     Publication authored by Rachel Bittner and Justin Salamon and Mike Tierney and Matthias Mauch and Chris Cannam and Juan Bello. We introduce MedleyDB: a dataset of annotated, royalty-free multitrack recordings. The dataset was primarily developed to support research on melody extraction, addressing important shortcomings of existing collections. For each song we provide melody f0 annotations as well as instrument activations for evaluating automatic instrument recognition. The dataset is also useful for research on tasks that require access to the individual tracks of a song such as source separation and automatic mixing. In this paper we provide a detailed description of MedleyDB, including curation, annotation, and musical content. To gain insight into the new challenges presented by the dataset, we run a set of experiments using a state-of…
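
As a toy example of how melody f0 annotations like these can be used (a sketch of mine, not MedleyDB's own tooling), here is a raw pitch accuracy measure: the proportion of voiced reference frames whose estimate lies within half a semitone. It assumes unvoiced frames are marked as 0 Hz; the function name raw_pitch_accuracy and the frame values are invented for illustration.

```python
# Toy sketch: raw pitch accuracy between a reference f0 annotation and an estimate.
import numpy as np

def raw_pitch_accuracy(ref_hz, est_hz, tolerance_cents=50.0):
    ref_hz = np.asarray(ref_hz, dtype=float)
    est_hz = np.asarray(est_hz, dtype=float)
    voiced = ref_hz > 0                      # unvoiced frames assumed annotated as 0 Hz
    both = voiced & (est_hz > 0)
    cents_error = np.zeros_like(ref_hz)
    cents_error[both] = 1200.0 * np.abs(np.log2(est_hz[both] / ref_hz[both]))
    correct = both & (cents_error <= tolerance_cents)
    return correct.sum() / max(voiced.sum(), 1)

ref = [0.0, 220.0, 220.0, 246.9, 261.6]
est = [0.0, 221.0, 0.0,   247.5, 523.3]      # one missed frame, one octave error
print(raw_pitch_accuracy(ref, est))          # 0.5
```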

Done and Liked, Featured, Other, Publication »

[11 Jan 2015]
Publication authored by Matthias Mauch. I’ve finally found a new home for the series of 5 blog posts called the “Anatomy of the Charts”, which I wrote in 2011 during my stay as a Research Fellow at Last.fm. At the time I’d started thinking about the evolution of music, and Last.fm provided the chance to analyse some really exciting data: more than 15,000 recordings from 50 years of the UK charts. There was little scope for properly scientific work then, but Last.fm gave me the freedom to write about this on their blog, and my team lead at the time, Mark Levy, helped as well. This resulted in the series of 5 blog posts investigating different musical dimensions in a data-driven but (I hope) entertaining way. Sadly, as time went on, some image and text on La…

Conference Paper, Publication »

[9 Jan 2015]
Publication authored by Tian Cheng, Simon Dixon and Matthias Mauch.
Recently, we have witnessed an increasing use of the source-filter model in music analysis, achieved by integrating the source-filter model into a non-negative matrix factorisation (NMF) framework or into statistical models. The combination of the source-filter model and the NMF framework reduces the number of free parameters needed and makes the model easier to extend. This paper compares four extended source-filter models: the source-filter-decay (SFD) model, the NMF with time-frequency activations (NMF-ARMA) model, the multi-excitation (ME) model and the source-filter model based on β-divergence (SFbeta model). The first two models represent the time-varying spectra by adding a loss filter and a tim…
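
For readers unfamiliar with the NMF backbone that these source-filter models extend, here is a bare-bones sketch (my own illustration, not any of the four models compared in the paper): plain NMF with multiplicative updates minimising the Euclidean distance between a magnitude spectrogram V and its factorisation W H.

```python
# Plain NMF via Lee & Seung multiplicative updates (Euclidean cost), for illustration.
import numpy as np

def nmf(V, n_components, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n_bins, n_frames = V.shape
    W = rng.random((n_bins, n_components)) + eps      # spectral templates
    H = rng.random((n_components, n_frames)) + eps    # time-varying activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((513, 100))       # stand-in for a magnitude spectrogram
W, H = nmf(V, n_components=8)
print(np.linalg.norm(V - W @ H))                      # reconstruction error after fitting
```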

Conference Paper, Publication »

[8 Jan 2015]
Publication authored by Christof Weiß, Matthias Mauch and Simon Dixon.
We propose a novel set of chroma-based audio features inspired by pitch class set theory and show their utility for style analysis of classical music by using them to classify recordings into historical periods. Musicologists have long studied how composers’ styles develop and influence each other, but usually based on manual analyses of the score or, more recently, automatic analyses of symbolic data, both largely independent of timbre. Here, we investigate whether such musical style analyses can be realised using audio features. Based on chroma, our features describe the use of intervals and triads on multiple time scales. To test the efficacy of this approach we use a 1600-track balanced corpus that covers the Baroque, Cl…
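
To illustrate the basic idea of describing interval content from chroma (an assumption-laden sketch of mine, not the features proposed in the paper), one can summarise a 12-bin chroma vector by how strongly each interval class is represented among simultaneously active pitch classes; the function name interval_profile is invented for this example.

```python
# Illustrative interval-class profile from a single 12-bin chroma vector.
import numpy as np

def interval_profile(chroma):
    """Return a 6-dim vector of co-activation strengths for interval classes 1..6 semitones."""
    chroma = np.asarray(chroma, dtype=float)
    profile = np.zeros(6)
    for ic in range(1, 7):
        shifted = np.roll(chroma, -ic)          # pitch classes ic semitones above
        profile[ic - 1] = np.sum(chroma * shifted)
    return profile

# C major triad (C, E, G) as an idealised chroma vector.
chroma = np.zeros(12)
chroma[[0, 4, 7]] = 1.0
print(interval_profile(chroma))  # minor third, major third and fourth/fifth classes light up
```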

Journal Paper, Publication »

[1 Jan 2015]
Publication authored by Peter Foster, Matthias Mauch and Simon Dixon. We propose string compressibility as a descriptor of temporal structure in audio, for the purpose of determining musical similarity. Our descriptors are based on computing track-wise compression rates of quantised audio features, using multiple temporal resolutions and quantisation granularities. To verify that our descriptors capture musically relevant information, we incorporate them into similarity rating prediction and song year prediction tasks. We base our evaluation on a dataset of 15,500 track excerpts of Western popular music, for which we obtain 7,800 web-sourced pairwise similarity ratings. To assess the agreement among similarity ratings, we perform an evaluation under controlled condit…
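
Here is a small sketch of the general idea (my own simplification, not the paper's exact pipeline or parameters): quantise a per-frame feature sequence, serialise it to bytes, and take the zlib compression rate as a crude descriptor of how repetitive the sequence is.

```python
# Compression rate of a quantised feature sequence as a temporal-structure descriptor.
import zlib
import numpy as np

def compression_rate(features, n_levels=16):
    """Compressed size divided by raw size for a (frames x dims) feature array."""
    features = np.asarray(features, dtype=float)
    lo, hi = features.min(), features.max()
    scaled = (features - lo) / (hi - lo + 1e-12)
    symbols = np.clip((scaled * n_levels).astype(np.uint8), 0, n_levels - 1)
    raw = symbols.tobytes()
    return len(zlib.compress(raw)) / len(raw)

rng = np.random.default_rng(0)
repetitive = np.tile(rng.random((4, 12)), (25, 1))   # highly repetitive sequence
random_seq = rng.random((100, 12))                   # no temporal structure
print(compression_rate(repetitive), compression_rate(random_seq))  # lower vs higher rate
```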

from me to you »

[10 Dec 2014]
I’m delighted to be able to give a seminar talk as part of the Statistics Seminar Series in the School of Mathematical Sciences at Queen Mary. It takes place on Thursday 11 December 2014, gathering at 16:00 for a 16:30 start (location: Mathematics Seminar Room (203) on Level 2 of the Mathematics Building). You can find the abstract here. In essence, it’s an overview of much of my research since 2007, with some emphasis on the recent evolutionary work. I think the talk is open to everyone, so if you want to get up to speed with what I do, do come along.

Seen and Liked »

[3 Nov 2014]
I’m happy that Segmentino, our Vamp plugin implementation of my music segmentation algorithm, is being used at the Beeb. BBC R&D did some research to find efficient ways of editing a piece of music down to 30 seconds, as Chris Baume describes in this blog post. Their tests suggest that the more sophisticated automatic methods (including the one using Segmentino) perform similarly to manual edits. The research was carried out, under Chris’s supervision, by Adib Mehrabi. Incidentally, Segmentino has recently been used in other research too, in Wang’s cool paper about automatic segmentation of full-length concert videos, as seen at ISMIR last week.

Seen and Liked »

[25 Oct 2014]
Looking at the ISMIR 2014 programme I discovered that Emilio Molina and colleagues have tested our pYIN pitch tracker against a range of other methods (SWIPE, YIN, MELODIA, Boersma/Lei Wang, …). Here’s a link to their paper, entitled “The importance of F0 tracking in Query-by-Singing-Humming”. Their results suggest that pYIN is among the best trackers on clean data, and the most robust against noise and distortion. Nice.

Done and Liked »

[19 Oct 2014]
On Friday I had the pleasure of giving a talk about my research at the institute for musicology at the Tokyo University of the Arts, just next to beautiful Ueno Park. Pat Savage, who is studying for a PhD in comparative musicology there, organised the talk and also helped with translations when necessary. It seemed to me that, while the study of musicology there is often still deeply traditional, some students are starting to use computer tools to aid their work (I was especially pleased to hear that Sonic Visualiser was among them). I also had a nice time after my talk chatting to some students in the cafeteria, and then discussing future projects with Pat. After the business had been dealt with, I walked over to Akihabara through the busy market streets of Ueno. To avoid the main rush hour on the train back to Tsukuba I stuck around for a while and grabbed a beer in a tiny bar called BeerS, where I got a warm welcome from the bartender and ended up chatting for hours to businessmen who import flow measurement devices from Germany (and who excitedly showed me pictures of their party in Balver Höhle of all places!). They can be lovely folks, them Japanese…