Using musical relationships between chord labels in Automatic Chord Extraction tasks

Abstract

Recent research on Automatic Chord Extraction (ACE) has focused on the improvement of models based on machine learning. However, most models still fail to take into account the prior knowledge underlying the labeling alphabets (chord labels). Furthermore, recent works have shown that ACE performances have reached a glass ceiling. Therefore, this prompts the need to focus on other aspects of the task, such as the introduction of musical knowledge in the representation, the improvement of the models towards more complex chord alphabets and the development of more adapted evaluation methods.

In this paper, we propose to exploit specific properties and relationships between chord labels in order to improve the learning of statistical ACE models. Hence, we analyze the interdependence of the representations of chords and their associated distances, the precision of the chord alphabets, and the impact of performing alphabet reduction before or after training the model. Furthermore, we propose new training losses based on musical theory. We show that these improve the results of ACE systems based on Convolutional Neural Networks. By analyzing our results, we uncover a set of related insights on ACE tasks based on statistical models, and also formalize the musical meaning of some classification errors.

Tristan Carsault, Jérôme Nika, Philippe Esling
Ircam, CNRS, Sorbonne Université, UMR 9912 STMS / L3i Lab, University of La Rochelle
{carsault, jnika, esling}@ircam.fr

1 Introduction

Automatic Chord Extraction (ACE) is a topic that has been widely studied by the Music Information Retrieval (MIR) community. However, recent results seem to indicate that the rate of improvement of ACE performance has diminished over the past years [20].

Recently, a part of the MIR community pointed out the need to rethink the experimental methodologies. Indeed, current evaluation methods do not account for the intrinsic relationships between different chords [10]. Our work builds on these questions and aims to give some insights on the impact of introducing musical relationships between chord labels in the development of ACE methods.

Most ACE systems are built on the idea of extracting features from the raw audio signal and then using these features to construct a chord classifier [3]. The two major families of approaches that can be found in previous research are rule-based and statistical models. On one hand, the rule-based models rely on music-theoretic rules to extract information from the precomputed features. Although this approach is theoretically sound, it usually remains brittle to perturbations in the spectral distributions from which the features were extracted. On the other hand, statistical models rely on the optimization of a loss function over an annotated dataset. However, the generalization capabilities of these models are highly correlated to the size and completeness of their training set. Furthermore, most training methods see musical chords as independent labels and do not take into account the inherent relations between chords.

In this paper, we aim to target this gap by introducing musical information directly in the training process of statistical models. To do so, we propose to use prior knowledge underlying the labeling alphabets in order to account for the inherent relationships between chords directly inside the loss function of learning methods. Due to the complexity of the ACE task and the wealth of models available, we choose to rely on a single Convolutional Neural Network (CNN) architecture, which provides the current best results in ACE [19]. First, we study the impact of chord alphabets and their relationships by introducing a specific hierarchy of alphabets. We show that some of the reductions proposed by previous research might be inadequate for learning algorithms. We also show that relying on more finely defined and extensive alphabets allows us to gain more interesting insights on the errors made by ACE systems, even though their accuracy is only marginally better or worse. Then, we introduce two novel chord distances based on musical relationships found in the Tonnetz-space or directly between chord components through their categorical differences. These distances can be used to define novel loss functions for learning algorithms. We show that these new loss functions improve ACE results with CNNs. Finally, we perform an extensive analysis of our approach and extract insights on the methodology required for ACE. To do so, we develop a specifically-tailored analyzer that focuses on the functional relations between chords to distinguish strong and weak errors. This analyzer is intended to be used in future ACE research to develop a finer understanding of the reasons behind the success or failure of ACE systems.

2 Related Works

Automatic Chord Extraction (ACE) is defined as the task of labeling each segment of an audio signal using an alphabet of musical chords. In this task, chords are seen as the concomitant or successive combination of different notes played by one or many instruments.

2.1 Considerations on the ACE task

Whereas most MIR tasks have benefited continuously from the recent advances in deep learning, the ACE field seems to have reached a glass ceiling. In 2015, Humphrey and Bello [10] highlighted the need to rethink the whole ACE methodology by giving four insights on the task.

First, several songs from the reference annotated chord datasets (Isophonics, RWC-Pop, McGill Billboard) are not always tuned to 440 Hz, and their tuning may vary by up to a quarter-tone. This leads to multiple misclassifications on adjacent semitones. Moreover, chord labels are not always well suited to describe every song in these datasets.

Second, the chord labels are related, and some subsets of them have hierarchical organizations. Therefore, the one-to-K assessment, where all errors are weighted equally, appears widely inadequate. For instance, the misclassification of a C:maj as an A:min or a C#:maj is considered equally wrong. However, C:maj and A:min share two pitches, whereas C:maj and C#:maj have totally different pitch vectors.

Third, the very definition of the ACE task is also not entirely clear. Indeed, there is a frequent confusion between two different tasks. First, the literal recognition of a local audio segment using a chord label and its precise extensions, and, second, the transcription of an underlying harmony, taking into account the functional aspect of the chords and the long-term structure of the song. Finally, the labeling process involves the subjectivity of the annotators. For instance, even for expert annotators, it is hard to agree on possible chord inversions.

Therefore, this prompts the need to focus on other aspects such as the introduction of musical knowledge in the representation of chords, the improvement of the models towards more complex chord alphabets and the development of more adapted evaluation methods.

2.2 Workflow of ACE systems

Due to the complexity of the task, ACE systems are usually divided into four main modules performing feature extraction, pre-filtering, pattern matching and post-filtering [3].

First, the pre-filtering usually applies low-pass filters or harmonic-percussive source separation methods to the raw signal [26, 12]. This optional step removes noise or other percussive information that is irrelevant for the chord extraction task. Then, the audio signal is transformed into a time-frequency representation such as the Short-Time Fourier Transform (STFT) or the Constant-Q Transform (CQT), which provides logarithmically-scaled frequency bins. These representations are sometimes summarized in a pitch class vector called a chromagram. Then, successive time frames of the spectral transform are averaged in context windows. This smooths the extracted features and accounts for the fact that chords are longer-scale events. It has been shown that this can be done efficiently by feeding STFT context windows to a CNN in order to obtain a clean chromagram [14].

Then, these extracted features are classified by relying on either a rule-based chord template system or a statistical model. Rule-based methods give fast results and a decent level of accuracy [21]. With these methods, the extracted features are classified using a fixed dictionary of chord profiles [2] or a collection of decision trees [12]. However, these methods are usually brittle to perturbations in the input spectral distribution and do not generalize well.

Statistical models aim to extract the relations between precomputed features and chord labels from a training dataset in which each temporal frame is associated with a label. The optimization of these models is then performed by using gradient descent algorithms to find an adequate configuration of their parameters. Several probabilistic models have obtained good performances in ACE, such as multivariate Gaussian Mixture Models [4] and convolutional [13, 9] or recurrent [25, 1] Neural Networks.

Finally, post-filtering is applied to smooth out the classified time frames. This is usually based on a study of the transition probabilities between chords by a Hidden Markov Model (HMM) optimized with the Viterbi algorithm [17] or with Conditional Random Fields [15].

2.3 Convolutional Neural Network

A Convolutional Neural Network (CNN) is a statistical model composed of layers of artificial neurons that transform the input by repeatedly applying convolution and pooling operations. A convolutional layer is characterized by a set of convolution kernels that are applied in parallel to the inputs to produce a set of output feature maps. The convolution kernels are defined as three-dimensional tensors $K \in \mathbb{R}^{N \times m \times n}$, where $N$ is the number of kernels, $m$ the height and $n$ the width of each kernel. If we note the input matrix $X \in \mathbb{R}^{h \times w}$, then the output feature maps are defined by $O_k = X \ast K_k$ for every kernel $k \in \{1, \ldots, N\}$, where $\ast$ is a 2D discrete convolution operation

$(X \ast K_k)[i,j] = \sum_{u=0}^{m-1} \sum_{v=0}^{n-1} X[i+u, j+v] \, K_k[u,v]$   (1)

for $0 \leq i < h - m + 1$ and $0 \leq j < w - n + 1$.

As this convolutional layer significantly increases the dimensionality of the input data, a pooling layer is used to reduce the size of the feature maps. The pooling operation reduces the maps by computing the local mean or maximum of sliding context windows across the maps. Therefore, the overall structure of a CNN usually consists of alternating convolution, activation and pooling layers. Finally, in order to perform classification, this architecture is typically followed by one or more fully-connected layers. Thus, the last layer produces a probability vector of the same size as the chord alphabet. As we rely on the architecture defined by [9], we redirect interested readers to this paper for more information.
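As a concrete illustration, the feature-map computation of Eq. (1) and a max-pooling step can be sketched in a few lines of NumPy (a didactic sketch of the operations, not the actual model used in the experiments):

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2D discrete convolution of an input x with a kernel k (Eq. 1),
    implemented as cross-correlation, as is standard in deep learning."""
    m, n = k.shape
    h, w = x.shape[0] - m + 1, x.shape[1] - n + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + m, j:j + n] * k)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, reducing each dimension by `size`."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

Stacking $N$ such kernels yields the $N$ output feature maps of a convolutional layer.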

3 Our proposal

3.1 Definition of alphabets

Chord annotations from reference datasets are very precise and include extra notes (in parentheses) and basses (after the slash) [6]. With this notation, we would obtain over a thousand chord classes with very sparse distributions. However, we do not use these extra notes and basses in our classification. Therefore, we remove this information

$\text{root}{:}\text{quality}(\text{extensions})/\text{bass} \;\rightarrow\; \text{root}{:}\text{quality}$   (2)

Even with this reduction, the number of chord qualities (e.g. maj7, min, dim) is extensive and we usually do not aim for such a degree of precision. Thus, we propose three alphabets named $A_0$, $A_1$ and $A_2$ with a controlled number of chord qualities. The level of precision of the three alphabets increases gradually (see Figure 1). In order to reduce the number of chord qualities, each one is mapped to a parent class when it exists, otherwise to the no-chord class $N$.

Figure 1: Hierarchy of the chord alphabets (blue: $A_0$, orange: $A_1$, green: $A_2$)

The first alphabet $A_0$ contains all the major and minor chords, which defines a total of 25 classes

$A_0 = \{N\} \cup \{P \times \{\text{maj}, \text{min}\}\}$   (3)

where $P$ represents the 12 pitch classes.

Here, we consider the interest of working with chord alphabets larger than $A_0$. Therefore, we propose an alphabet $A_1$ containing all chords present in the harmonization of the major scale (the usual notation of harmony in jazz music). This corresponds to the orange chord qualities and their parents in Figure 1. The chord qualities without heritage are included in the no-chord class $N$, leading to 73 classes

$A_1 = \{N\} \cup \{P \times \{\text{maj}, \text{min}, \text{dim}, \text{maj7}, \text{min7}, \text{7}\}\}$   (4)

Finally, the alphabet $A_2$ is inspired by the large vocabulary alphabet proposed by [19]. This most complete chord alphabet contains 14 chord qualities and 169 classes

$A_2 = \{N\} \cup \{P \times \{\text{maj}, \text{min}, \text{dim}, \text{aug}, \text{maj6}, \text{min6}, \text{maj7}, \text{minmaj7}, \text{min7}, \text{7}, \text{dim7}, \text{hdim7}, \text{sus2}, \text{sus4}\}\}$   (5)
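To make the reduction concrete, mapping an annotation onto one of the alphabets can be sketched as follows (the parent table below is a hypothetical, partial reading of the hierarchy of Figure 1, for illustration only):

```python
import re

A0_QUALITIES = {"maj", "min"}
A1_QUALITIES = {"maj", "min", "dim", "maj7", "min7", "7"}

# Hypothetical parent classes for a few qualities (illustrative, not exhaustive).
PARENT = {"maj7": "maj", "7": "maj", "maj6": "maj",
          "min7": "min", "min6": "min", "minmaj7": "min", "dim7": "dim"}

def strip_extensions(label):
    """Drop extra notes '(...)' and the bass '/...' from an annotation (Eq. 2)."""
    return re.sub(r"\(.*?\)|/.*", "", label)

def reduce_chord(label, qualities, parent=PARENT):
    """Map 'root:quality' onto a target alphabet: climb to a parent class when
    one exists, otherwise fall back to the no-chord class 'N'."""
    if label == "N":
        return "N"
    root, _, quality = label.partition(":")
    quality = quality or "maj"
    while quality not in qualities:
        if quality not in parent:
            return "N"
        quality = parent[quality]
    return f"{root}:{quality}"
```

For instance, `reduce_chord("G:7", A0_QUALITIES)` falls back to `"G:maj"`, while a quality with no parent in the table falls back to `"N"`.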

3.2 Definition of chord distances

In most CNN approaches, the model does not take into account the nature of each class when computing their differences. Therefore, this distance, which we call the categorical distance $D_0$, is the binary indicator

$D_0(c_i, c_j) = \begin{cases} 0 & \text{if } c_i = c_j \\ 1 & \text{otherwise} \end{cases}$   (6)

However, we want here to include the relationships between chords directly in our model. For instance, a C:maj7 is closer to an A:min7 than to a C#:maj7. Therefore, we introduce more refined distances that can be used to define the loss function for learning.
Here, we introduce two novel distances that rely on the representation of chords in a harmonic space or in a pitch space to provide a finer description of the chord labels. However, any other distance that measures similarities between chords could be studied [18, 8].

3.2.1 Tonnetz distance

A Tonnetz-space is a geometric representation of the tonal space based on harmonic relationships between chords. We chose a Tonnetz-space generated by three transformations of the major and minor triads [5], each changing only one of the three notes of the chord: the relative transformation (transforms a chord into its relative major / minor), the parallel transformation (same root but major instead of minor, or conversely), and the leading-tone exchange (in a major chord the root moves down by a semitone, in a minor chord the fifth moves up by a semitone). Representing chords in this space has already shown promising results for classification on the alphabet $A_0$ [11].

We define the cost of a path between two chords as the sum of the costs of its successive transformations, each transformation being associated with the same cost. Furthermore, an extra cost is added if the chords have been reduced beforehand in order to fit the alphabet $A_0$. Then, our distance is

$D_1(c_i, c_j) = \min \mathcal{P}_{c_i \to c_j}$   (7)

with $\mathcal{P}_{c_i \to c_j}$ the set of all possible path costs from $c_i$ to $c_j$ using a combination of the three transformations.
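Since each transformation carries the same cost, this distance between two triads can be sketched as a breadth-first search over the relative / parallel / leading-tone moves (a sketch on $A_0$ only; chords are modeled as (root pitch class, is-major) pairs, and the extra reduction cost is omitted):

```python
from collections import deque

def neighbors(chord):
    """Triads reachable in one relative (R), parallel (P) or
    leading-tone (L) transformation."""
    root, major = chord
    if major:
        return [(root, False),             # P: C:maj -> C:min
                ((root + 9) % 12, False),  # R: C:maj -> A:min
                ((root + 4) % 12, False)]  # L: C:maj -> E:min
    return [(root, True),                  # P
            ((root + 3) % 12, True),       # R: A:min -> C:maj
            ((root + 8) % 12, True)]       # L: E:min -> C:maj

def tonnetz_distance(c1, c2):
    """Minimal number of R/P/L transformations linking two triads."""
    seen, queue = {c1}, deque([(c1, 0)])
    while queue:
        chord, cost = queue.popleft()
        if chord == c2:
            return cost
        for nxt in neighbors(chord):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, cost + 1))
```

For instance, C:maj is one transformation away from A:min but several away from C#:maj, matching the intuition behind the distance.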

3.2.2 Euclidean distance on pitch class vectors

In some works, pitch class vectors are used as an intermediate representation for ACE tasks [16]. Here, we use these pitch class profiles to compute the distances between chords according to their harmonic content.

Each chord from the dictionary is associated with a 12-dimensional binary pitch vector, with 1 if the pitch is present in the chord and 0 otherwise (for instance, C:maj7 becomes $(1,0,0,0,1,0,0,1,0,0,0,1)$). The distance between two chords is defined as the Euclidean distance between their binary pitch vectors

$D_2(c_i, c_j) = \lVert v_{c_i} - v_{c_j} \rVert_2$   (8)

where $v_c$ denotes the binary pitch vector of chord $c$.

Hence, this distance accounts for the number of pitches that are shared by two chords.
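This distance can be sketched directly from interval templates (the template table below is a small illustrative subset, not the full dictionary):

```python
import math

NOTE_TO_PC = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
              "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

# Illustrative interval templates (semitones above the root).
TEMPLATES = {"maj": (0, 4, 7), "min": (0, 3, 7),
             "maj7": (0, 4, 7, 11), "min7": (0, 3, 7, 10), "7": (0, 4, 7, 10)}

def pitch_vector(label):
    """Binary 12-dimensional pitch class vector of a chord label."""
    root, _, quality = label.partition(":")
    vec = [0] * 12
    for interval in TEMPLATES[quality]:
        vec[(NOTE_TO_PC[root] + interval) % 12] = 1
    return vec

def euclidean_chord_distance(a, b):
    """Euclidean distance between the binary pitch vectors of two chords (Eq. 8)."""
    va, vb = pitch_vector(a), pitch_vector(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(va, vb)))
```

On the earlier example, C:maj7 and A:min7 differ by two pitches (distance $\sqrt{2}$) while C:maj7 and C#:maj7 differ by six (distance $\sqrt{6}$), so the former pair is indeed closer.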

The $D_0$, $D_1$ or $D_2$ distance is used to define the loss function for training the CNN classification model.

3.3 Introducing the relations between chords

To train the model with our distances, we first reduce the original labels from the Isophonics dataset (http://isophonics.net/content/reference-annotations-beatles) so that they fit one of our three alphabets $A_0$, $A_1$, $A_2$. Then, we denote by $y$ the one-hot vector where each bin corresponds to a chord label in the chosen alphabet $A_i$. The output of the model, noted $\hat{y}$, is a vector of probabilities over all the chords in a given alphabet $A_i$. In the case of $D_0$, we train the model with a loss function that simply compares $\hat{y}$ to the original label $y$. However, for our proposed distances ($D_1$ and $D_2$), we introduce a similarity matrix $M$ that associates each couple of chords with a similarity ratio

$M_{i,j} = \frac{1}{D(c_i, c_j) + K}$   (9)

where $K$ is an arbitrary constant to avoid division by zero. The matrix $M$ is symmetric and we normalize it by its maximum value to obtain $\bar{M}$. Afterwards, we define a new ground truth $\bar{y}$, the matrix product of the normalized matrix $\bar{M}$ and the original label $y$

$\bar{y} = \bar{M} y$   (10)

Finally, the loss function for $D_1$ and $D_2$ is defined by comparing this new ground truth $\bar{y}$ with the output $\hat{y}$. Hence, this loss function can be seen as performing a weighted multi-label classification.
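Equations (9) and (10) can be sketched as follows (here with the categorical distance for readability; any of the distances can be plugged in):

```python
import numpy as np

def similarity_matrix(labels, dist, K=1.0):
    """M[i, j] = 1 / (D(c_i, c_j) + K) (Eq. 9), normalized by its maximum."""
    n = len(labels)
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = 1.0 / (dist(labels[i], labels[j]) + K)
    return M / M.max()

def weighted_target(M_norm, y):
    """New ground truth: product of the normalized matrix and the one-hot label (Eq. 10)."""
    return M_norm @ y

# Toy example: three labels with the categorical distance.
labels = ["C:maj", "C:min", "A:min"]
d0 = lambda a, b: 0.0 if a == b else 1.0
M = similarity_matrix(labels, d0)
target = weighted_target(M, np.array([1.0, 0.0, 0.0]))  # soft target for 'C:maj'
```

The one-hot label is thus spread over similar chords, which is why the resulting loss behaves like a weighted multi-label classification.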

4 Experiments

4.1 Dataset

We perform our experiments on the Beatles dataset, as it provides the highest confidence regarding the ground truth annotations [7]. This dataset is composed of 180 songs annotated by hand. For each song, we compute the CQT using a window size of 4096 samples and a hop size of 2048. The transform is mapped to a scale of 3 bins per semitone over 6 octaves, ranging from C1 to C7. We augment the available data by performing all transpositions from -6 to +6 semitones and modifying the labels accordingly. Finally, to evaluate our models, we split the data into training (60%), validation (20%) and test (20%) sets.
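With 3 bins per semitone, the transposition augmentation amounts to shifting the CQT bins and renaming the chord root (a simplified sketch; `np.roll` wraps bins around the edges of the 6-octave range, which real transposition would not):

```python
import numpy as np

BINS_PER_SEMITONE = 3  # CQT resolution used in the experiments

def transpose_example(cqt, root_pc, n_semitones):
    """Shift a CQT matrix (bins x frames) by n semitones and transpose the
    chord root (a pitch class in 0..11) accordingly."""
    shifted = np.roll(cqt, n_semitones * BINS_PER_SEMITONE, axis=0)
    return shifted, (root_pc + n_semitones) % 12
```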

4.2 Models

We use the same CNN model for all test configurations, but change the size of the last layer to fit the size of the selected chord alphabet. We apply batch normalization and Gaussian noise addition on the input layer. The architecture of the CNN consists of three convolutional layers followed by two fully-connected layers. The architecture is very similar to the first CNN that was proposed for the ACE task [9]. However, we add dropout between the convolutional layers to prevent over-fitting.

For training, we use the ADAM optimizer for a total of 1000 epochs. We reduce the learning rate if the validation loss has not improved for 50 iterations. Early stopping is applied if the validation loss has not improved for 200 iterations, and we keep the model with the best validation accuracy. For each configuration, we perform a 5-fold cross-validation by repeating a random split of the dataset.

5 Results and discussion

The aim of this paper is not to obtain the best classification scores (which would involve pre- or post-filtering methods), but to study the impact of different musical relationships (as detailed in the previous section) on the classification results. Therefore, we ran 9 instances of the CNN model, corresponding to all combinations of the 3 alphabets $A_0$, $A_1$, $A_2$ and the 3 distances $D_0$, $D_1$, $D_2$, to compare their results from both a quantitative and a qualitative point of view. We analyzed the results using the mir_eval library [22] to compute classification scores, and a Python ACE Analyzer that we developed to reveal the musical meaning of classification errors and, therefore, understand their qualities.

5.1 Quantitative analysis: MIREX evaluation

Regarding the MIREX evaluation, the efficiency of ACE models is assessed through classification scores over different alphabets [22]. The MIREX alphabets for evaluation have a gradation of complexity from Major/Minor to Tetrads. In our case, for the evaluation on a specific alphabet, we apply a reduction from our training alphabet to the MIREX evaluation alphabet. Here, we evaluate on three alphabets: Major/Minor, Sevenths, and Tetrads. These correspond roughly to our three alphabets ($A_0$ for Major/Minor, $A_1$ for Sevenths, $A_2$ for Tetrads).

5.1.1 MIREX Major/minor

Figure 2 depicts the average classification scores over all frames of our test dataset for the different distances and alphabets. We can see that the introduction of the $D_1$ or $D_2$ distance improves the classification compared to $D_0$. With these distances, and even without pre- or post-filtering, we obtain classification scores that are superior to those of similar works (75.9% for a CNN with post-filtering but an extended dataset in [10], versus 76.3% for our best configuration). Second, the impact of working first on large alphabets ($A_1$ and $A_2$) and then reducing to Maj/Min for the test is negligible on Maj/Min (only from a quantitative point of view, see 5.2).

5.1.2 MIREX Sevenths

With more complex alphabets, the classification score is lower than for MIREX Maj/Min. This result is not surprising, since we observe this behavior in all ACE systems. Moreover, the models give similar results, and we cannot observe a particular trend between the alphabet reductions or the different distances. The same result is observed for the evaluation on MIREX Tetrads. Nonetheless, the MIREX evaluation uses a binary score to compare chords. Because of this approach, the quality of the classification errors cannot be evaluated.

Figure 2: Results of the 5 folds: evaluation on MIREX Maj/Min (all models reduced to the Maj/Min alphabet).
Figure 3: Results of the 5 folds: evaluation on MIREX Sevenths (all models reduced to the Sevenths alphabet).

5.2 Qualitative analysis: understanding the errors

In this section, we propose to analyze ACE results from a qualitative point of view. The aim here is not to introduce new alphabets or distances in the models, but to introduce a new type of evaluation of the results. Our goal is twofold: to understand what causes the errors in the first place, and to distinguish “weak” from “strong” errors with a functional approach.

In tonal music, the harmonic functions qualify the roles and tonal significances of chords, and the possible equivalences between them within a sequence [23, 24]. Therefore, we developed an ACE Analyzer including two modules discovering formal musical relationships between the target chords and the chords predicted by ACE models. Both modules are generic and independent of the classification model, and are available online (http://repmus.ircam.fr/dyci2/ace_analyzer).

5.2.1 Substitution rules

The first module detects the errors corresponding to hierarchical relationships or usual chord substitution rules: using a chord in place of another in a chord progression (usually, substituted chords have two pitches in common with the triad that they replace).
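As an example of such a rule, the relative major / minor substitution can be detected in a few lines (a sketch of one of the many rules implemented in the analyzer; chords are modeled as (root pitch class, quality) pairs):

```python
def is_relative_substitution(target, predicted):
    """True when the predicted triad is the relative minor of a major target
    (e.g. A:min for C:maj) or the relative major of a minor target."""
    (t_root, t_qual), (p_root, p_qual) = target, predicted
    if t_qual == "maj" and p_qual == "min":
        return (p_root - t_root) % 12 == 9   # relative minor: 9 semitones up
    if t_qual == "min" and p_qual == "maj":
        return (p_root - t_root) % 12 == 3   # relative major: 3 semitones up
    return False
```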

Table 1 presents: Tot., the total fraction of errors that can be explained by the whole set of substitution rules we implemented, and Maj and min, the errors included in the correct triad (e.g. C:maj instead of C:maj7, C:min7 instead of C:min). Table 2 presents the percentages of errors corresponding to widely used substitution rules: rel. M and rel. m, relative major and minor; T subs. 2, tonic substitution different from rel. M or rel. m (e.g. E:min7 instead of C:maj7); and the percentages of errors mM and Mm (same root but major instead of minor, or conversely). The tables only show the most represented categories, but other substitutions (not discussed here) were also analyzed: tritone substitution, substitute dominant, and equivalence of dim7 chords modulo inversions.

Model Tot. Maj min
$A_0$ $D_0$ 34.93 _ _
$A_0$ $D_1$ 36.12 _ _
$A_0$ $D_2$ 35.37 _ _
$A_1$ $D_0$ 52.40 23.82 4.37
$A_1$ $D_1$ 57.67 28.31 5.37
$A_1$ $D_2$ 55.17 25.70 4.21
$A_2$ $D_0$ 55.28 26.51 4.29
$A_2$ $D_1$ 60.47 31.61 6.16
$A_2$ $D_2$ 55.45 25.74 4.78

Table 1: Left: total percentage of errors corresponding to inclusions or chord substitution rules; right: percentage of errors with inclusion in the correct triad (% of the total number of errors).
Model rel. M rel. m T subs. 2 mM Mm
$A_0$ $D_0$ 4.19 5.15 2.37 7.26 12.9
$A_0$ $D_1$ 4.40 5.20 2.47 7.66 13.4
$A_0$ $D_2$ 5.13 4.87 2.26 8.89 10.89
$A_1$ $D_0$ 2.63 3.93 1.53 4.46 8.83
$A_1$ $D_1$ 3.05 3.36 1.58 5.53 7.52
$A_1$ $D_2$ 3.02 4.00 1.62 5.84 8.07
$A_2$ $D_0$ 2.54 4.15 1.51 4.96 8.54
$A_2$ $D_1$ 2.79 2.97 1.54 5.29 7.46
$A_2$ $D_2$ 3.11 4.26 1.63 5.34 7.59
Table 2: Left: percentage of errors corresponding to usual chord substitution rules; right: percentage of errors “major instead of minor” or conversely (% of the total number of errors).

First, Tot. in Table 1 shows that a large fraction of errors can be explained by usual substitution rules. This percentage reaches 60.47%, which means that numerous classification errors nevertheless give useful indications, since they mistake a chord for another chord with an equivalent function. For instance, Table 2 shows that a significant amount of errors (up to 10%) are relative major / minor substitutions. Besides, for the three distances, the percentage in Tot. (Table 1) increases with the size of the alphabet: larger alphabets seem to imply weaker errors (a higher amount of equivalent harmonic functions).

We can also note that numerous errors (between 28.19% and 37.77%) correspond to inclusions in major or minor chords (Maj and min, Table 1) for $A_1$ and $A_2$. In the framework of the discussion about recognition and transcription mentioned in the introduction, this result questions the relevance of considering exhaustive extensions when the goal is to extract and formalize an underlying harmony.

Finally, for $A_0$, $A_1$, and $A_2$, using $D_2$ instead of $D_0$ increases the fraction of errors attributed to the categories in the left part of Table 2 (and in almost all the configurations when using $D_1$). This shows a qualitative improvement, since all these operations are considered as valid chord substitutions. On the other hand, the impact on the (quite high) percentages in the right part of Table 2 is not clear. We can assume that temporal smoothing could be one of the keys to handling the mM and Mm errors.

5.2.2 Harmonic degrees

The second module of our ACE Analyzer focuses on harmonic degrees. First, by using the key annotations of the dataset in addition to the chord annotations, this module determines, when possible, the roman numerals characterizing the harmonic degrees of the predicted chord and of the target chord (e.g. in C, if a chord is an extension of C, I; if it is an extension of D:min, ii; etc.; conversely, in C, a chord that is an extension of C# does not correspond to any degree). Then, it counts the errors corresponding to substitutions of harmonic degrees (e.g. in C, A:min instead of C corresponds to I↔vi). This section presents an analysis of the results using this second module. First, it determines whether the target chord is diatonic (i.e. belongs to the harmony of the key), as presented in Table 3. If this is the case, the notion of an incorrect degree for the predicted chord is relevant, and the percentages of errors corresponding to substitutions of degrees are computed (Table 4).
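The degree computation can be sketched by comparing the chord root to the tonic of an (assumed major) key and checking the triad quality of the harmonized scale:

```python
# Harmonized major scale: (semitones above the tonic, triad quality) -> degree.
MAJOR_SCALE_DEGREES = {
    (0, "maj"): "I", (2, "min"): "ii", (4, "min"): "iii", (5, "maj"): "IV",
    (7, "maj"): "V", (9, "min"): "vi", (11, "dim"): "vii",
}

def degree(key_pc, chord_root_pc, chord_quality):
    """Roman numeral of a chord in a major key, or None if non-diatonic."""
    offset = (chord_root_pc - key_pc) % 12
    return MAJOR_SCALE_DEGREES.get((offset, chord_quality))
```

An error can then be labeled by the pair of degrees of the target and predicted chords (e.g. I↔vi), or counted as non-diatonic when either degree is undefined.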

Model Non-diat. targ. Non-diat. pred.
$A_0$ $D_0$ 37.96 28.41
$A_0$ $D_1$ 44.39 15.82
$A_0$ $D_2$ 45.87 17.60
$A_1$ $D_0$ 38.05 21.26
$A_1$ $D_1$ 37.94 20.63
$A_1$ $D_2$ 38.77 20.23
$A_2$ $D_0$ 37.13 30.01
$A_2$ $D_1$ 36.99 28.41
$A_2$ $D_2$ 37.96 28.24
Table 3: Errors occurring when the target is non-diatonic (% of the total number of errors), and non-diatonic prediction errors (% of the subset of errors on diatonic targets).
Model I↔IV I↔V IV↔V I↔vi IV↔ii I↔iii
$A_0$ $D_0$ 17.41 14.04 4.54 4.22 5.41 2.13
$A_0$ $D_1$ 17.02 13.67 3.33 4.08 6.51 3.49
$A_0$ $D_2$ 16.16 13.60 3.08 5.65 6.25 3.66
$A_1$ $D_0$ 17.53 13.72 3.67 5.25 4.65 3.50
$A_1$ $D_1$ 15.88 13.82 3.48 4.95 6.26 3.46
$A_1$ $D_2$ 16.73 13.45 3.36 4.70 5.75 2.97
$A_2$ $D_0$ 16.90 13.51 3.68 4.45 5.06 3.32
$A_2$ $D_1$ 16.81 13.60 3.85 4.57 5.37 3.59
$A_2$ $D_2$ 16.78 12.96 3.84 5.19 7.01 3.45
Table 4: Errors corresponding to degree substitutions (% of the subset of errors on diatonic targets).

A first interesting fact presented in Table 3 is that 36.99% to 45.87% of the errors occur when the target chord is non-diatonic. It also shows, for the three alphabets, that using $D_1$ or $D_2$ instead of $D_0$ makes the fraction of non-diatonic predictions decrease (Table 3, particularly with $D_1$), which means that the errors are more likely to stay in the correct key. Surprisingly, high percentages of errors are associated with the substitutions I↔V (up to 14.04%), I↔IV (up to 17.41%), and IV↔V (up to 4.54%) in Table 4. These are not usual substitutions, and the chord pairs IV↔V and I↔IV have respectively 0 and 1 pitch in common. In most of the cases, these percentages tend to decrease on alphabets $A_1$ or $A_2$ and when using musical distances (particularly $D_1$). Conversely, this increases the amount of errors in the right part of Table 4, which contains usual substitutions: once again, we observe that the more precise the musical representations are, the more the harmonic functions tend to be correct.

6 Conclusion

We presented a novel approach introducing the musical prior knowledge underlying the labeling alphabets into ACE statistical models. To this end, we applied reductions to different chord alphabets and used different distances to train the same type of model. Then, we conducted a quantitative and qualitative analysis of the classification results.

First, we conclude that training the model using distances reflecting the relationships between chords improves the results both quantitatively (classification scores) and qualitatively (in terms of harmonic functions). Second, it appears that working first on large alphabets and reducing the chords during the test phase does not significantly improve the classification scores, but provides a qualitative improvement in the type of errors. Finally, ACE could be improved by moving away from its binary classification paradigm. Indeed, MIREX evaluations focus on the nature of chords, but a large amount of errors can be explained by inclusions or usual substitution rules. Our evaluation method therefore provides an interesting notion of the musical quality of the errors, and encourages the adoption of a functional approach, or even the introduction of a notion of equivalence classes. It could be applied to the ACE problem both downstream and upstream: in the classification process as well as in the methodology for labeling the datasets.

7 Acknowledgments

The authors would like to thank the master’s students who contributed to the implementation: Alexis Font, Grégoire Locqueville, Octave Roulleau-Thery, and Téo Sanchez. This work was supported by the DYCI2 project ANR-14-CE24-0002-01 funded by the French National Research Agency (ANR), the MAKIMOno project 17-CE38-0015-01 funded by the French ANR and the Canadian Natural Sciences and Engineering Research Council (STPG 507004-17), and the ACTOR Partnership funded by the Canadian Social Sciences and Humanities Research Council (895-2018-1023).

References

  • [1] N. Boulanger-Lewandowski, Y. Bengio, and P. Vincent (2013) Audio chord recognition with recurrent neural networks.. In International Symposium on Music Information Retrieval, pp. 335–340. Cited by: §2.2.
  • [2] C. Cannam, E. Benetos, M. Mauch, M. E. P. Davies, S. Dixon, C. Landone, K. Noland, and D. Stowell (2015) MIREX 2015: vamp plugins from the centre for digital music. In Proceedings of the Music Information Retrieval Evaluation eXchange (MIREX). Cited by: §2.2.
  • [3] T. Cho, R. J. Weiss, and J. P. Bello (2010) Exploring common variations in state of the art chord recognition systems. In Proceedings of the Sound and Music Computing Conference (SMC), pp. 1–8. Cited by: §1, §2.2.
  • [4] T. Cho (2014) Improved techniques for automatic chord recognition from music audio signals. Ph.D. Thesis, New York University. Cited by: §2.2.
  • [5] R. Cohn (1997) Neo-Riemannian operations, parsimonious trichords, and their "Tonnetz" representations. Journal of Music Theory 41 (1), pp. 1–66. Cited by: §3.2.1.
  • [6] C. Harte, M. B. Sandler, S. A. Abdallah, and E. Gómez (2005) Symbolic representation of musical chords: a proposed syntax for text annotations.. In International Symposium on Music Information Retrieval, Vol. 5, pp. 66–71. Cited by: §3.1.
  • [7] C. Harte (2010) Towards automatic extraction of harmony information from music signals. Ph.D. Thesis. Cited by: §4.1.
  • [8] C-Z. A. Huang, D. Duvenaud, and K. Z. Gajos (2016) Chordripple: recommending chords to help novice composers go beyond the ordinary. In Proceedings of the 21st International Conference on Intelligent User Interfaces, pp. 241–250. Cited by: §3.2.
  • [9] E. J. Humphrey and J. P. Bello (2012) Rethinking automatic chord recognition with convolutional neural networks. In Machine Learning and Applications (ICMLA), 2012 11th International Conference on, Vol. 2, pp. 357–362. Cited by: §2.2, §2.3, §4.2.
  • [10] E. J. Humphrey and J. P. Bello (2015) Four timely insights on automatic chord estimation.. In International Symposium on Music Information Retrieval, pp. 673–679. Cited by: §1, §2.1, §5.1.1.
  • [11] E. J. Humphrey, T. Cho, and J. P. Bello (2012) Learning a robust tonnetz-space transform for automatic chord recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pp. 453–456. Cited by: §3.2.1.
  • [12] J. Jiang, W. Li, and Y. Wu (2017) Extended abstract for MIREX 2017 submission: chord recognition using random forest model. MIREX evaluation results. Cited by: §2.2, §2.2.
  • [13] F. Korzeniowski and G. Widmer (2016) A fully convolutional deep auditory model for musical chord recognition. In Machine Learning for Signal Processing (MLSP), 2016 IEEE 26th International Workshop on, pp. 1–6. Cited by: §2.2.
  • [14] F. Korzeniowski and G. Widmer (2016) Feature learning for chord recognition: the deep chroma extractor. arXiv preprint arXiv:1612.05065. Cited by: §2.2.
  • [15] J. Lafferty, A. McCallum, and F. C. N. Pereira (2001) Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning (ICML). Cited by: §2.2.
  • [16] K. Lee (2006) Automatic chord recognition from audio using enhanced pitch class profile. In International Computer Music Conference. Cited by: §3.2.2.
  • [17] H-L. Lou (1995) Implementing the viterbi algorithm. IEEE Signal Processing Magazine 12 (5), pp. 42–52. Cited by: §2.2.
  • [18] S. Madjiheurem, L. Qu, and C. Walder (2016) Chord2Vec: learning musical chord embeddings. In Proceedings of the Constructive Machine Learning Workshop at the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Cited by: §3.2.
  • [19] B. McFee and J. P. Bello (2017) Structured training for large-vocabulary chord recognition. In Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR 2017). Cited by: §1, §3.1.
  • [20] M. McVicar, R. Santos-Rodríguez, Y. Ni, and T. D. Bie (2014) Automatic chord estimation from audio: a review of the state of the art. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP) 22 (2), pp. 556–575. Cited by: §1.
  • [21] L. Oudre, Y. Grenier, and C. Févotte (2009) Template-based chord recognition: influence of the chord types. In International Symposium on Music Information Retrieval, pp. 153–158. Cited by: §2.2.
  • [22] C. Raffel, B. McFee, E. J. Humphrey, J. Salamon, O. Nieto, D. Liang, and D. P. W. Ellis (2014) mir_eval: a transparent implementation of common MIR metrics. In Proceedings of the 15th International Society for Music Information Retrieval Conference (ISMIR). Cited by: §5.1, §5.
  • [23] A. Rehding (2003) Hugo Riemann and the birth of modern musical thought. Vol. 11, Cambridge University Press. Cited by: §5.2.
  • [24] A. Schoenberg and L. Stein (1969) Structural functions of harmony. WW Norton & Company. Cited by: §5.2.
  • [25] Y. Wu, X. Feng, and W. Li (2017) MIREX 2017 submission: automatic audio chord recognition with MIDI-trained deep feature and BLSTM-CRF sequence decoding model. MIREX evaluation results. Cited by: §2.2.
  • [26] X. Zhou and A. Lerch (2015) Chord detection using deep learning. In Proceedings of the 16th International Society for Music Information Retrieval Conference (ISMIR), Vol. 53. Cited by: §2.2.