
Naive Dictionary On Musical Corpora:

From Knowledge Representation To Pattern Recognition


Qiuyi Wu1,*, Ernest Fokoué1

1 School of Mathematical Science, Rochester Institute of Technology, Rochester, New York, USA


* wu.qiuyi@mail.rit.edu

Abstract

In this paper, we propose and develop the novel idea of treating music sheets as literary documents in the traditional text-analytics parlance, so as to fully benefit from the vast amount of research already existing in statistical text mining and topic modelling. We specifically introduce the idea of representing any given piece of music as a collection of "musical words" of various lengths, which we codename "muselets". Given the novelty, and therefore the extreme difficulty, of properly forming a complete version of a dictionary of muselets, the present paper focuses on a simpler, albeit naive, version of the ultimate dictionary, which we refer to as a Naive Dictionary because all of its words are of the same length. We specifically construct a naive dictionary featuring a corpus made up of African American, Chinese, Japanese and Arabic music, on which we perform both topic modelling and pattern recognition. Although some of the results based on the Naive Dictionary are reasonably good, we anticipate phenomenal predictive performances once we get around to building a full-scale, complete version of our intended dictionary of muselets.

1 Introduction

Music and text are similar in that both can be regarded as carriers of information and deliverers of emotion. People get daily information from reading newspapers, magazines, blogs, etc., and they can also keep diaries or personal journals to reflect on daily life, let out pent-up emotions, and record ideas and experiences. Composers express their feelings through music, as another version of language, with different combinations of notes, diverse tempi (in musical terminology, tempo, "time" in Italian, is the speed or pace of a given piece), and dynamics levels (in music, dynamics indicate how loud or quiet the music is).

This paper explores various aspects of statistical machine learning methods for music mining, with a concentration on music pieces from Jazz legends like Charlie Parker and Miles Davis. We attempt to create a Naive Dictionary analogous to a language lexicon. That is to say, when people hear a piece of music, they are hearing the audio rendering of an essay written with "musical words", or "muselets". The target of this research work is to create a homomorphism between music and literature. Instead of decomposing music sheets into collections of single notes, we attempt a direct, seamless adaptation of canonical topic modeling on words in order to "topic model" music fragments.

One of the most challenging components is to define the basic unit of the information from which one can formulate a soundtrack as a document. Specifically, if a music soundtrack were to be viewed as a document made up of sentences and phrases, with sentences defined as a collection of words (adjectives, verbs, adverbs and pronouns), several topics would be fascinating to explore:

  • What would be the grammatical structure in music?

  • What would constitute the jazz lexicon or dictionary from which words are drawn?

We take as an assumption that all music is storytelling. It is then plausible to imagine every piece of music as a collection of words and phrases of variable lengths, with adverbs, adjectives, nouns and pronouns.

The construction of the mapping is non-trivial and requires a deep understanding of music theory. Here, several great musicians offer insights, from their own perspectives, on the complexity of representing the input space, namely, creating a mapping from a music sheet to a collection of music "words" or "phrases":

  • "These are extremely profound questions that you are asking here. I think I’m interested in trying. But you have opened up a whole lot of bigger questions with this than you could possibly imagine." (Dr. Jonathan Kruger, personal communication with Dr. Ernest Fokoue, November 24, 2018).

  • "Your music idea is fabulous but are you sure that nothing exists? Do you know "band in a box? It is a software in which you put a sequence of chords and you get an improvisation ’à la manière de’. You choose amongst many musicians so they probably have the dictionary to play as Miles, Coltrane, Herbie, etc." (Dr. Evans Gouno, personal communication with Dr. Ernest Fokoue, November 05, 2018).

  • Rebecca Ann Finnangan Kemp mentioned the building blocks of music when discussing the idea of musical words. (personal communication with Dr. Ernest Fokoue, November 20, 2018).

The concept of notes is equivalent to that of the alphabet, and the analogy can be extended as below:

  • a literature word is a mixture of the 26 letters of the alphabet

  • a music word is a mixture of the 12 musical notes

Since notes are fundamental, one can reasonably consider an input space directly isomorphic to the 12 notes.

2 Related Work

Text:  letter | word   | topic  | document | corpus
Music: note   | notes* | melody | song     | album

  • * a series of notes in one bar can be regarded as a "word"

Table 1: Comparison between Text and Music in Topic Modeling
Figure 1: Piece of Music Melody

Compared with the role of text in topic modeling, as shown in Table 1, we treat a series of notes as a "word" (also called a "term"), since a single note does not hold enough information for us to interpret; specifically, we treat the notes in one bar (in musical notation, a bar, or measure, is a segment of time corresponding to a specific number of beats, in which each beat is represented by a particular note value and the boundaries of the bar are indicated by vertical bar lines) as one "term". Melody (a melody is formed by consecutive notes that the listener automatically perceives as a connected series) plays the role of "topic", and the melodic materials give the shape and personality of the music piece. "Melody" is also referred to as "key-profile" by Hu and Saul [2009a], a concept based on the key-finding algorithm of Krumhansl and Schmuckler [1990] and the empirical work of Krumhansl and Kessler [1982]. The whole song is regarded as the "document" of text mining, and a collection of songs, called an album in music, can be regarded as the "corpus".

Figure 2: Circle of Fifths (left) and Key-profiles (right)

Specifically, "key-profile" is chromatic scale showed geometrically in Figure 2 Circle of Fifths plot containing 12 pitch classes in total with major key and minor key respectively, thus there are totally 24 key-profiles, each of which is a 12-dimensional vector. The vector in the earliest model in Longuet-Higgins and Steedman [1971] uses indicator with value of 0 and 1 to simply determine the key of a monophonic piece. E.g. C major key-profile:

As shown in the figures below, Krumhansl and Schmuckler [1990] judge the key in a more robust way: the elements of the vector indicate the stability of each pitch class relative to the key. Melodies in the same key-profile have similar sets of notes, and each key-profile is a distribution over notes.

Figure 3 shows the pitch-class distribution of the C major Piano Sonata No. 1, K. 279/189d (Mozart, Wolfgang Amadeus) obtained with the K-S key-finding algorithm, and we can see that all natural notes, C, D, E, F, G, A, B, have higher probability of occurring than the other notes. Figure 4 shows the pitch-class distribution of the Invention No. 2 in C minor, BWV 773 (Bach, Johann Sebastian), and again we can see the specific notes typical of C minor with higher probability: C, D, D♯, F, G, G♯, and A♯.

Figure 3: C major key-profile
Figure 4: C minor key-profile

Different scales usually evoke different emotions. Generally, major scales arouse buoyant and upbeat feelings, while minor scales create a dismal and dim atmosphere. Details of the emotion and mood effects of musical keys are presented in a later section.

3 Representation

We mainly study symbolic music in mxl format in this research work. The data are collected from MuseScore (https://musescore.org/en) and contain music pieces from different musicians and genres. Specifically, we collect pieces from 3 different music genres, i.e. Chinese songs, Japanese songs, and Arabic songs. For Jazz music, we collect works from 7 different musicians, i.e. Duke Ellington, Miles Davis, John Coltrane, Charlie Parker, Louis Armstrong, Bill Evans, and Thelonious Monk.

  • Convert mxl files to xml files

  • Use mxl files to extract notes in each measure

  • Create matrices based on the extracted notes

Figure 5: Transforming Notes from Music Sheets to Matrices

Based on the concept of duration (the length of time a pitch/tone is sounded), and since the total duration within each measure is fixed, we can create Measure-Note matrices. In the Measure-Note matrices, we use the letters {C, D, E, F, G, A, B} to denote the notes from "Do" to "Si", "flat" and "sharp" to denote ♭ and ♯, and "O" to denote a rest (a rest is an interval of silence in a piece of music).
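As an illustration, this per-measure extraction step can be sketched with the music21 Python library; the paper does not name its tooling, so this library choice and the file name are assumptions:

```python
# A minimal sketch of extracting per-measure note tokens from an .mxl
# file with music21 (assumed tooling; "song.mxl" is a placeholder).
from music21 import converter

score = converter.parse("song.mxl")            # music21 reads .mxl directly
part = score.parts[0]                          # take the melody part
measures = []
for measure in part.getElementsByClass("Measure"):
    tokens = []
    for el in measure.notesAndRests:
        if el.isRest:
            tokens.append("O")                 # "O" denotes a rest
        elif el.isNote:
            tokens.append(el.pitch.name)       # e.g. "C", "E-" (flat), "F#"
    measures.append(tokens)
print(measures[:4])                            # first four measures
```

Each inner list then becomes one row of the Measure-Note matrix.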

As demonstrated above, for the Jazz part we mainly study works from 7 Jazz musicians (Duke Ellington, Miles Davis, John Coltrane, Charlie Parker, Louis Armstrong, Bill Evans, Thelonious Monk), and for the comparison with other music genres we focus on Chinese, Japanese, and Arabic music. We therefore create two different albums based on the Measure-Note matrices generated in the previous step, and use two different ways to represent each album.

3.1 Note-Based Representation

Figure 6: Music Key

Based on the 12 keys (5 black keys + 7 white keys) in Figure 6, we make a note-based representation according to the pitch classes in Table 2: forsaking the order of notes, we describe each measure of a song as a 12-dimensional binary vector x = (x_1, ..., x_12), where x_i = 1 if pitch class i is active in the measure and x_i = 0 otherwise (Table 3).

Pitch Class | Tonal Counterparts | Solfège
1  | C, B♯  | do
2  | C♯, D♭ |
3  | D      | re
4  | D♯, E♭ |
5  | E, F♭  | mi
6  | F, E♯  | fa
7  | F♯, G♭ |
8  | G      | sol
9  | G♯, A♭ |
10 | A      | la
11 | A♯, B♭ |
12 | B, C♭  | ti
Table 2: Pitch Class
Document | Pitch Class             | Genre
China 1  | 0 0 0 0 1 0 1 0 0 0 0 1 | China
China 2  | 0 0 0 0 1 0 1 0 0 0 0 0 | China
China 3  | 0 0 0 0 0 0 1 0 0 0 0 1 | China
China 7  | 0 1 0 0 1 0 1 0 0 0 0 1 | China
China 8  | 0 0 0 0 1 0 1 0 0 0 0 1 | China
Japan 1  | 1 0 1 1 0 0 1 0 0 0 0 0 | Japan
Japan 2  | 1 0 0 0 0 0 0 1 0 0 0 0 | Japan
Table 3: Notes collection from 4 Music Genres
  • Document: song names, tantamount to documents in text mining

  • Pitch Class: a binary vector whose elements indicate whether certain notes are on, tantamount to words in text mining

  • Genre: the label, containing Chinese songs, Japanese songs, and Arabic songs, to compare with Jazz songs later

  • The dimension of this data frame is

We create the document-term matrix (DTM), whose cells reflect the frequency of each term in each document. The rows of the DTM represent documents and the columns represent terms in the corpus; cell (i, j) contains the number of times term j appeared in document i.

Document | 000000000000 | 000000000100 | 000000010100
Arab 5   | 15 |  6 | 20
Arab 7   |  0 |  5 |  5
China 6  |  1 | 12 |  0
China 7  | 13 |  0 |  1
Japan 4  |  8 |  4 |  1
Japan 5  |  0 |  0 |  0
USA 4    |  2 |  1 |  0
Table 4: Document Term Matrix
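A DTM of this kind can be assembled, for example, with scikit-learn's CountVectorizer; in the following sketch the songs and their measure tokens are toy placeholders, not the paper's data:

```python
# Building a document-term matrix from pitch-class "words":
# each song is one string of space-separated 12-bit measure tokens.
from sklearn.feature_extraction.text import CountVectorizer

songs = [
    "000010100001 000010100001 000000100001",   # toy "China"-like song
    "101100100000 100000010000 000010100001",   # toy "Japan"-like song
]
vectorizer = CountVectorizer(token_pattern=r"[01]{12}")
dtm = vectorizer.fit_transform(songs)
print(vectorizer.get_feature_names_out())       # the distinct terms
print(dtm.toarray())                            # term counts per document
```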

3.2 Measure-Based Representation

Document  | Notes           | Musician
Charlie 1 | B O O O O O O O | Charlie
Charlie 1 | B B A A G G G F | Charlie
Charlie 1 | E F G B G G A O | Charlie
Charlie 7 | E E E E G G C O | Charlie
Charlie 8 | F O O O O O O O | Charlie
Duke 1    | C C C G G G G G | Duke
Duke 1    | F F F A A A B B | Duke
Table 5: Notes collection from 7 musicians
  • Document: song names, tantamount to documents in text mining

  • Notes: a series of notes in one measure, tantamount to words in text mining

  • Musician: the composer, tantamount to the label for later analysis

  • The dimension of this data frame is

We again create the document-term matrix (DTM), whose cells reflect the frequency of each term in each document: the rows represent documents, the columns represent terms in the corpus, and cell (i, j) contains the number of times term j appeared in document i. The last column of the data frame is the label: Duke, Miles, John, Charlie, Louis, Bill, Monk.

Document  | O O O O O O O O | B D B B D D E E | C A A B D C A O
Miles 6   | 40 | 0 | 0
Louis 2   | 32 | 0 | 0
Sonny 3   | 26 | 0 | 0
Miles 2   | 25 | 0 | 0
Duke 4    |  0 | 9 | 0
Sonny 4   | 14 | 0 | 0
Charlie 9 |  0 | 0 | 8
Table 6: Document Term Matrix

We can also take a close look at the most frequent terms in the whole album, i.e. the terms appearing more than 20 times:

Term
O O O O O O O O
C C C C C C C C
A A A A O O O O
B B B B B B B B
B B B B B B B B
D D D D D D D D
G G G G G G G G
A A A A A A A A
Table 7: Most Frequent Terms

4 Pattern Recognition

We take the topic-proportion matrix as input and feed it to machine learning techniques for classification. We conduct the supervised analysis via 5 models, with k-fold cross-validation:

  • K Nearest Neighbors

  • Multi-class Support Vector Machine

  • Random Forest

  • Neural Networks with PCA Analysis

  • Penalized Discriminant Analysis

  for r = 1 to 3 do
     for k = 1 to 10 do
        Split the dataset D into 10 chunks D_1, ..., D_10 so that D = D_1 ∪ ... ∪ D_10
        Form the validation subset D_k
        Extract the train set D \ D_k
        Build the estimator f̂ using D \ D_k
        Compute the predictions f̂(x) for each x in D_k
        Calculate the error e_k on D_k
     end for
     Compute the cross-validated error ē = (1/10) Σ_k e_k
     Find the model with the lowest prediction error
  end for
Algorithm 1 Supervised Analysis: 10-fold cross-validation with 3 times resampling
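This resampling protocol can be sketched with scikit-learn as follows; the features and labels are synthetic stand-ins, and the random forest stands in for any of the five models:

```python
# 10-fold cross-validation repeated 3 times, mirroring Algorithm 1.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 10))                 # toy topic-proportion features
y = rng.integers(0, 2, size=120)          # toy genre labels

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(RandomForestClassifier(random_state=1), X, y, cv=cv)
print("mean error rate:", 1 - scores.mean())   # averaged over 30 fits
```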

4.1 K-Nearest Neighbors

kNN predicts the class of a song by finding the k most similar songs, where similarity is measured here by the Euclidean distance between two song vectors. The classes (labels) here are the 7 musicians: Duke, Miles, John, Charlie, Louis, Bill, Monk.

  Choose the value of k
  Let x* be a new point
  for i = 1 to n do
     Compute the distance d(x*, x_i)
  end for
  Rank all the distances in increasing order
  Form the neighborhood N_k(x*) of the k nearest points
  Predict the response ŷ(x*) as the majority class among {y_i : x_i ∈ N_k(x*)}
Algorithm 2 k-Nearest Neighbors
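A minimal scikit-learn version of this classifier, on synthetic stand-in data (in practice k would be tuned by the cross-validation above):

```python
# kNN on toy data; Euclidean distance is the scikit-learn default.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((70, 12))                  # toy song vectors
y = rng.integers(0, 7, size=70)           # toy labels for the 7 musicians
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_tr, y_tr)
print(knn.score(X_te, y_te))              # test-set accuracy
```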

4.2 Support Vector Machine

The task of the Support Vector Machine (SVM) is to find the optimal hyperplane that separates the observations in such a way that the margin is as large as possible; that is to say, the distance to the nearest sample patterns (the support vectors) should be as large as possible. The SVM is originally designed as a binary classifier, and since in this case there are more than two classes, we use a multi-class SVM. Specifically, we transform the single multi-class task into multiple binary classification tasks: we train one binary SVM per class and maximize the margin from each class to the remaining ones. We choose the linear kernel (Eq. 1) due to its excellent performance on the high-dimensional, very sparse data typical of text mining.

K(x_i, x_j) = x_i^T x_j        (1)

  for c = 1 to C do
     Given the training data {(x_i, y_i)}, set y_i = +1 for class c and y_i = -1 otherwise
     Find the function f_c(x) = w_c^T x + b_c that achieves
        min (1/2) ||w_c||^2
        subject to y_i (w_c^T x_i + b_c) ≥ 1 for all i
  end for
  Get the prediction ŷ(x) = argmax_c f_c(x)
Algorithm 3 Multi-class Support Vector Machine
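A sketch of the one-vs-rest linear SVM with scikit-learn, again on synthetic data; LinearSVC trains one binary margin per class, matching the scheme of Algorithm 3:

```python
# One-vs-rest linear SVM on toy data.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.random((70, 12))                  # toy song vectors
y = rng.integers(0, 7, size=70)           # toy musician labels

svm = LinearSVC(C=1.0)                    # linear kernel, one-vs-rest scheme
svm.fit(X, y)
print(svm.coef_.shape)                    # (7 classes, 12 features)
```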

4.3 Random Forest

Random Forest (RF) is an ensemble learning method that improves on the performance of a single tree. Compared with tree bagging, the only difference in random forests is that each candidate tree is grown on a random subset of the features, called "feature bagging", to correct the overfitting tendency of trees. If some features were much stronger predictors than the others, those features would be selected in many of the trees across the forest, making the trees correlated.

  for b = 1 to B do
     Draw a bootstrap sample D_b with replacement from the sample D
     Draw a subset of m variables without replacement from the p available variables
     Prune the unselected variables from the sample so that D_b has dimension n × m
     Build a tree (base learner) T_b based on D_b
  end for
  Output the result based on the mode of the classes predicted by T_1, ..., T_B
Algorithm 4 Random Forest
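The feature bagging described above corresponds to the max_features argument of scikit-learn's random forest (which draws the feature subset at each split); a small sketch on synthetic data:

```python
# Random forest with feature bagging: each split considers only a
# random subset of sqrt(p) of the p features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((70, 12))                   # toy song vectors
y = rng.integers(0, 7, size=70)            # toy musician labels

rf = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                            random_state=1)
rf.fit(X, y)
print(rf.feature_importances_.round(2))    # relative feature importances
```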

4.4 Neural Network with PCA Analysis

Principal Component Analysis (PCA), one of the most common dimension reduction methods, can help improve classification results. The Neural Network with PCA method described by Ripley [2007] first runs a principal component analysis on the data and then uses the components as inputs to the neural network model. Because the variance of each predictor is used in the PCA, every predictor must take more than one value; any predictor with only a single value is removed before the analysis. New data for prediction are also transformed with the PCA before being fed to the network.

  Given data X, compute the covariance matrix estimate S
  for j = 1 to q do
     Obtain eigenvalue λ_j and eigenvector v_j of S
     Obtain the principal component scores z_j = X v_j
  end for
  Get the q-dimensional input vector z = (z_1, ..., z_q) after the PCA
  for each node h in the hidden layer do
     Compute the linear combination a_h = w_h^T z + b_h
     Pass a_h through the nonlinear activation function σ
  end for
  Combine the hidden-layer outputs σ(a_h) with coefficients to get the output scores
  Pass the scores through another activation function to the output layer
Algorithm 5 Neural Network with PCA Analysis
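One way to realize this in scikit-learn is a pipeline that first drops constant predictors, then projects onto principal components, then fits a small network; the sizes below are arbitrary assumptions:

```python
# PCA + neural network pipeline on toy data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import VarianceThreshold
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((70, 12))                   # toy song vectors
y = rng.integers(0, 7, size=70)            # toy musician labels

model = make_pipeline(
    VarianceThreshold(0.0),                # remove single-valued predictors
    PCA(n_components=5),                   # keep 5 principal components
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=1),
)
model.fit(X, y)                            # new data are PCA-transformed
print(model.predict(X[:5]))                # automatically at predict time
```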

4.5 Penalized Discriminant Analysis

Linear Discriminant Analysis (LDA) is a common tool for classification and dimension reduction. However, LDA can be too flexible in the presence of many highly correlated predictor variables. Hastie et al. [1995] proposed Penalized Discriminant Analysis (PDA) to avoid the overfitting behavior of LDA: essentially, a penalty term Ω is added to the within-class covariance matrix Σ_W.

  Given data {(x_i, y_i)} with K classes
  Compute the penalized within-class covariance matrix Σ_W + λΩ
  Compute the between-class covariance matrix Σ_B
  Find the discriminant directions v that maximize the ratio
     v^T Σ_B v / v^T (Σ_W + λΩ) v
Algorithm 6 Penalized Discriminant Analysis
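scikit-learn does not ship Hastie et al.'s exact PDA, but LDA with a shrinkage penalty on the within-class covariance is a close analogue; a hedged sketch on toy data:

```python
# Shrinkage LDA: the within-class covariance estimate is regularized,
# in the spirit of PDA's penalty term (not Hastie et al.'s exact method).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.random((70, 12))                   # toy song vectors
y = rng.integers(0, 7, size=70)            # toy musician labels

pda_like = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
pda_like.fit(X, y)
print(pda_like.score(X, y))                # training accuracy
```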

5 Topic Modeling

5.1 Intuition Behind Model

Similar to the work of Blei [2012] in text mining, Figure 7 illustrates the intuition behind our model in musical terms. We assume an album, as a collection of songs, is a mixture of different topics (melodies). These topics are distributions over series of notes (left part of the figure). In each song, the notes in every measure are chosen according to the topic assignments (colorful tokens), while the topic assignments themselves are drawn from the document-topic distribution.

Figure 7: Intuition behind Music Mining

5.2 Model

[Graphical model: plate diagram of the smoothed LDA model for sheet music, with hidden chord assignments z, observed notes u, and plates of sizes L, N, M and K.]

θ ~ Dirichlet(α),  β_k ~ Dirichlet(η)        (2)
z_n ~ Multinomial(θ),  u_n | z_n ~ Multinomial(β_{z_n})        (3)

Notation

  • u: notes (observed)

  • z: chord per measure (hidden)

  • θ: chord proportions for a song (hidden)

  • α: parameter controlling the chord proportions

  • β: key-profiles

  • η: parameter controlling the key-profiles

5.3 Generative Process

  1. Draw the chord proportions θ ~ Dir(α)

  2. For each harmony k ∈ {1, ..., K}:

    • Draw the key-profile β_k ~ Dir(η)

  3. For each measure w_n (the notes in the nth measure) in the song:

    • Draw the harmony z_n ~ Mult(θ)

    • Draw each pitch u in the nth measure from u ~ Mult(β_{z_n})

Terms for a single song:

p(θ | α) = Dir(θ; α)        (4)
p(z_n | θ) = θ_{z_n}        (5)
p(u | z_n, β) = β_{z_n, u}        (6)
p(u, z, θ | α, β) = p(θ | α) ∏_{n=1}^{N} p(z_n | θ) p(u_n | z_n, β)        (7)

Joint distribution for the whole album:

p(u, z, θ, β | α, η) = ∏_{k=1}^{K} p(β_k | η) ∏_{d=1}^{M} p(θ_d | α) ∏_{n=1}^{N_d} p(z_{d,n} | θ_d) p(u_{d,n} | z_{d,n}, β)        (8)
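To make the generative story concrete, here is a toy numpy simulation of one song; all sizes and hyperparameter values are arbitrary assumptions:

```python
# Simulating one song from the smoothed LDA generative process.
import numpy as np

rng = np.random.default_rng(0)
K, V, N = 4, 12, 16          # harmonies, pitch classes, measures per song

beta = rng.dirichlet(np.full(V, 0.5), size=K)   # key-profile per harmony
theta = rng.dirichlet(np.full(K, 0.5))          # chord proportions, one song

song = []
for n in range(N):
    z = rng.choice(K, p=theta)                  # harmony of measure n
    u = rng.choice(V, p=beta[z])                # pitch drawn from beta_z
    song.append(u)
print(song)                                     # pitch-class indices
```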

Summary

  • Assume there are M documents in the corpus.

  • The topic distribution of each document is a Multinomial distribution with parameter θ and its conjugate Dirichlet prior with parameter α.

  • The word distribution of each topic is a Multinomial distribution with parameter β and its conjugate Dirichlet prior with parameter η.

  • For each word of a given document, we first select a topic z from the per-document topic distribution θ, then select a word under this topic from the per-topic word distribution β_z.

  • Repeat for the M documents. For the M documents there are M independent Dirichlet-Multinomial distributions, and for the K topics there are K independent Dirichlet-Multinomial distributions.

5.4 Estimation

The per-document posterior is

p(θ, z | u, α, β) = p(θ, z, u | α, β) / p(u | α, β)        (9)

Here we use Variational EM (VEM) instead of the EM algorithm for approximate posterior inference, because the posterior needed in the E-step is intractable to compute.

Figure 8: Variational EM Graphical Model

Blei et al. [2003] proposed using a variational distribution (Eq. 10) to approximate the posterior (Eq. 11). That is to say, by removing certain connections in the graphical model of Figure 8, we obtain a tractable family yielding lower bounds on the log-likelihood.

q(β, θ, z | λ, γ, φ) = ∏_{k} q(β_k | λ_k) ∏_{d} q(θ_d | γ_d) ∏_{n} q(z_{d,n} | φ_{d,n})        (10)
p(β, θ, z | u, α, η)        (11)

With this simplified posterior family, we aim to minimize the KL distance (Kullback-Leibler divergence) between the variational distribution and the posterior, to obtain the optimal values of the variational parameters γ, φ, and λ (Eq. 13); this is equivalent to maximizing the lower bound (Eq. 15).

KL(q || p) = E_q[log q(β, θ, z)] − E_q[log p(β, θ, z | u, α, η)]        (12)
(λ*, γ*, φ*) = argmin_{λ, γ, φ} KL(q(β, θ, z | λ, γ, φ) || p(β, θ, z | u, α, η))        (13)
log p(u | α, η) = L(λ, γ, φ; α, η) + KL(q || p)        (14)
L(λ, γ, φ; α, η) = E_q[log p(β, θ, z, u | α, η)] − E_q[log q(β, θ, z)]        (15)
  repeat
     E-step:
     Fix the model parameters α, η. Initialize φ_{n,k} = 1/K and γ_k = α_k + N/K
     for n = 1 to N do
        for k = 1 to K do
           φ_{n,k} ∝ β_{k, u_n} exp(Ψ(γ_k))
        end for
        Normalize φ_n to sum to 1
     end for
     γ = α + Σ_{n=1}^{N} φ_n
     λ_k = η + Σ_d Σ_n φ_{d,n,k} u_{d,n}
     M-step:
     Fix the variational parameters γ, φ, λ
     Maximize the lower bound L with respect to the model parameters α, η
  until convergence
Algorithm 7 Variational EM for Smoothed LDA in Sheet Music
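In practice one can rely on an existing variational implementation; scikit-learn's LatentDirichletAllocation runs batch variational Bayes, which is close in spirit to Algorithm 7 (the counts below are toy placeholders):

```python
# Fitting LDA by variational inference on a toy document-term matrix.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
dtm = rng.integers(0, 5, size=(30, 50))    # 30 songs, 50 terms

lda = LatentDirichletAllocation(n_components=10, learning_method="batch",
                                max_iter=50, random_state=1)
theta = lda.fit_transform(dtm)             # per-song topic proportions
print(theta.shape)                         # (30, 10)
```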

6 Implementation

In this section we implement the pattern recognition and topic modeling methods with the two representations (note-based and measure-based) demonstrated previously, and evaluate the performance of the two representations in diverse scenarios.

6.1 Pattern Recognition

6.1.1 Note-Based Model

Figure 9: Pattern Recognition on Jazz and Chinese Music
Figure 10: Pattern Recognition on Jazz and Japanese Music
Figure 11: Pattern Recognition on Jazz and Arabic Music

6.1.2 Measure-Based Model

Figure 12: Pattern Recognition on Different Jazz Musicians

6.1.3 Comments and Conclusion

For the note-based model, we can see that the five supervised machine learning techniques can all classify the different music genres with error rates of no more than 35%. In addition, random forest, k-nearest neighbors, and neural networks with PCA perform much better than the other two methods. Among the three comparisons (Jazz vs. Chinese music, Jazz vs. Japanese music, Jazz vs. Arabic music), Jazz vs. Chinese gives better results than the other two, with random forest reaching an error rate below 0.1. For recognition between Jazz and Chinese songs, random forest is the best, with the lowest error rate and variance. For recognition between Jazz and Japanese songs, k-nearest neighbors, neural networks and random forest have comparatively low error rates, but the performance of k-nearest neighbors has the smaller variance. For the comparison between Jazz and Arabic songs, neural networks and random forest have comparatively low error rates, but both with large variance.

For the measure-based model, the confusion matrix on the training set shows that the model accuracy is very high for all techniques except k-nearest neighbors. However, on the test set all the models fail to provide good results, the lowest error rate being 0.4, from random forest. This scenario evidently suffers from overfitting, and further investigation is necessary if we want to use this representation.

6.2 Topic Modeling

6.2.1 Perplexity

In topic modeling, the number of topics is crucial for the model to achieve its optimal performance. Perplexity is one way to measure the predictive ability of a probability model, and having the optimal number of topics helps reach the best result with minimum computational time. The perplexity of a corpus of M documents is computed as in Equation (16):

perplexity(D) = exp{ − Σ_{d=1}^{M} log p(w_d) / Σ_{d=1}^{M} N_d }        (16)
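With scikit-learn's implementation, held-out perplexity can be computed directly, giving a simple sweep over candidate topic numbers; the counts below are toy placeholders:

```python
# Choosing the number of topics by held-out perplexity.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
dtm_train = rng.integers(0, 5, size=(40, 50))   # toy training counts
dtm_test = rng.integers(0, 5, size=(10, 50))    # toy held-out counts

for k in (2, 4, 8, 16, 32):
    lda = LatentDirichletAllocation(n_components=k, random_state=1)
    lda.fit(dtm_train)
    print(k, round(lda.perplexity(dtm_test), 1))   # lower is better
```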

Apart from this common approach, there are other methods for finding the optimal number of topics. The ldatuning package (in R) provides 4 metrics for selecting the number of topics of an LDA model, all computed at once.

Table 8 shows the 4 different evaluation metrics; the extremum of each metric indicates the optimal number of topics.

  • Minimum:

    • Arun2010 [Arun et al., 2010]

    • CaoJuan2009 [Cao et al., 2009]

  • Maximum:

    • Deveaud2014 [Deveaud et al., 2014]

    • Griffiths2004 [Griffiths and Steyvers, 2004]

Topics Number Griffiths2004 CaoJuan2009 Arun2010 Deveaud2014
2 -7454.086 0.11290217 13.856421 1.8604276
4 -6821.928 0.07120480 8.508257 1.7877936
6 -6516.431 0.06146701 5.613616 1.7126743
8 -6322.309 0.05740186 3.728195 1.6422201
10 -6184.650 0.05336498 2.404497 1.5998098
16 -6112.754 0.06507096 1.328469 1.3594688
20 -6101.264 0.07099931 1.512142 1.2242214
26 -6129.508 0.09352393 1.856783 1.0760613
30 -6121.120 0.10582645 2.545512 0.9585189
36 -6177.121 0.12330036 4.078891 0.8530592
40 -6183.168 0.14128330 5.226102 0.7767756
46 -6224.206 0.15072742 5.372056 0.7119278
50 -6253.992 0.16448002 6.637710 0.6719547
60 -6352.595 0.20606817 7.769699 0.5844223
72 -6325.653 0.25947947 9.892807 0.4742397
80 -6393.940 0.26968788 10.187645 0.4463054
Table 8: Values of the Different Metrics
Figure 13: Evaluating LDA Models

From these metrics we can conclude that the optimal number of topics is around 8 to 12. In this scenario, the metric Deveaud2014 is not as informative as the other three.

6.2.2 Discussion

Figure 14 shows the top 10 tokens of the topics from the two scenarios.

For the measure-based scenario, we can see that some topics consist purely of natural keys (e.g. Topic 1 and Topic 5), while other topics are very complicated, with many sharps and flats in their notes (e.g. Topic 3 and Topic 6).

For the note-based scenario, each token is a 12-dimensional vector indicating which of the pitch classes are "on" in a certain measure. Some topics contain many active notes: in Topic 2, for example, some tokens have as many as 7 active pitches. Other topics are very quiet, with only a few active notes: in Topic 4 most pitches are mute, and tokens have at most 3 active pitches.

Figure 14: Top 10 Tokens in Selected Topic in Two Scenarios

Figure 15 shows the per-topic per-word probabilities of the measure-based scenario. Some topics appear very complicated, with most of their terms containing flat or sharp notes (Topic 3, Topic 4); some topics are very simple (Topic 8); and some topics contain many terms with the same probability (Topic 2, Topic 4).

Figure 15: Topic Terms Distribution from Measure-Based Scenario

Figure 16 shows the per-topic per-word probabilities of the note-based scenario. Topic 4 and Topic 2 have certain distinctive terms, while the terms of Topic 9 have fairly similar probabilities. Further investigation involving the musicians is needed to better interpret these results.

Figure 16: Topic Terms Distribution from Note-Based Scenario

Lastly, we draw chord diagrams to examine potential relationships between the topics learned from the topic models and the targeted subjects.
In Figure 17, we can see that:

  • American songs (Jazz music in this case) are particularly dominant in Topic 9.

  • Arabic songs contribute mostly to Topic 3, which has various terms equally distributed (see Figure 16).

  • Most Chinese songs attribute to Topic 4 and Topic 5, whose most probable terms form the G major or E minor scale.

  • Japanese songs seem to contribute similarly to every topic.

In Figure 18, we can see that:

  • The musicians John Coltrane, Sonny Rollins and Louis Armstrong show clear preferences towards certain topics.

  • The other musicians do not show a clear bias towards any specific topic.

Figure 17: Chord Diagram for Music Genres
Figure 18: Chord Diagram for Jazz Music

7 Conclusion

7.1 Summary

In this paper we create two different representations for symbolic music and transform the notes of music sheets into matrices for statistical analysis and data mining. Specifically, each song can be regarded as a text body consisting of different musical words. One way to represent these musical words is to segment the song into parts based on the duration of each measure; the words of a song are then the series of notes in each measure. Another way to represent the musical words is to restructure the notes of each segment according to the fixed 12-dimensional pitch class. Both representations have been employed in pattern recognition and topic modeling techniques, respectively to detect music genres from the collected songs and to uncover potential connections between musicians and latent topics.

The predictive performance in pattern recognition for the note-based representation turns out to be very good, with an 88% accuracy rate in the optimal scenario. We explored several aspects of music genres and musicians to reveal the hidden associations between the different elements. Some genres carry very strong characteristics which make them easy to detect. The Jazz musicians John Coltrane, Sonny Rollins and Louis Armstrong show particular preferences towards certain topics. All these features are employed in the models to help better understand the world of music.

7.2 Future Work

Music mining is a giant research field, and what we have done is merely the tip of the iceberg. Looking back at the initial motivation that triggered us to embark on this research work: why does music from diverse cultures have such a powerful inherent capacity to evoke so many different feelings and emotions? How to ultimately replace human intelligence with statistical algorithms for melody interpretation remains to be discovered.

Several potential studies we would love to continue exploring in the foreseeable future:

  • Facilitate the transformation between audio music and symbolic music via machine learning techniques.

  • Deepen the understanding of musical lexicon and grammatical structure and create the dictionary in a mathematical way.

  • How to derive representations for smooth recognition of Jazz by statistical learning methods?

  • Apart from notes, can we embed other inherent musical structure such as cadence, tempo to better interpret the musical words?

  • Explore improvisation key learning (how many keys did the giants of jazz tend to play in, and what are those keys?).

  • Explore musical harmonies and their connection with elements of mood.

Acknowledgments

We would like to show our gratitude to Dr. Jonathan Kruger, Dr. Evans Gouno, Mrs. Rebecca Ann Finnangan Kemp, and Dr. David Guidice for sharing their pearls of wisdom with us during our personal communications on the music lexicon.

Special thanks go to the musicians: Lizhu Lu from the Eastman School of Music, Gankun Zhang from the Brandon University School of Music, Dr. Carl Atkins from the Department of Performance Arts & Visual Culture, and Professor Kwaku Kwaakye Obeng from Brown University, for their constant encouragement and technical support in music theory.

Qiuyi Wu thanks the RIT Research & Creativity Reimbursement Program for partially sponsoring this work so that it could be presented at the Joint Statistical Meetings (JSM) this year in Vancouver. She appreciates the support of the International Conference on Advances in Interdisciplinary Statistics and Combinatorics (AISC) for the NC Young Researcher Award this year. She also thanks the 7th Annual Conference of the Upstate New York Chapters of the American Statistical Association (UP-STAT) for recognizing this work with the Gold Medal for Best Student Research Award this year.

References

  • Arun et al. [2010] Rajkumar Arun, Venkatasubramaniyan Suresh, CE Veni Madhavan, and MN Narasimha Murthy. On finding the natural number of topics with latent Dirichlet allocation: Some observations. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 391–402. Springer, 2010.
  • Blei [2012] David M Blei. Probabilistic topic models. Communications of the ACM, 55(4):77–84, 2012.
  • Blei et al. [2003] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022, 2003.
  • Cao et al. [2009] Juan Cao, Tian Xia, Jintao Li, Yongdong Zhang, and Sheng Tang. A density-based method for adaptive LDA model selection. Neurocomputing, 72(7-9):1775–1781, 2009.
  • Deerwester et al. [1990] Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407, 1990.
  • Deva [1999] Dharma Deva. Underlying socio-cultural aspects and aesthetic principles that determine musical theory and practice in the musical traditions of China and Japan. Renaissance Artists and Writers Association, 1999.
  • Deveaud et al. [2014] Romain Deveaud, Eric SanJuan, and Patrice Bellot. Accurate and effective latent concept modeling for ad hoc information retrieval. Document numérique, 17(1):61–84, 2014.
  • Devroye et al. [2013] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31. Springer Science & Business Media, 2013.
  • Eerola and Toiviainen [2004] Tuomas Eerola and Petri Toiviainen. MIDI Toolbox: MATLAB tools for music research. 2004.
  • Gouno [n.d.] Evans Gouno. Personal communication.
  • Griffiths and Steyvers [2004] Thomas L Griffiths and Mark Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228–5235, 2004.
  • Hastie et al. [1995] Trevor Hastie, Andreas Buja, and Robert Tibshirani. Penalized discriminant analysis. The Annals of Statistics, pages 73–102, 1995.
  • Hofmann [1999] Thomas Hofmann. Probabilistic latent semantic analysis. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 289–296. Morgan Kaufmann Publishers Inc., 1999.
  • Hu [2009] Diane J Hu. Latent Dirichlet allocation for text, images, and music. University of California, San Diego, 2009.
  • Hu and Saul [2009a] Diane J Hu and Lawrence K Saul. A probabilistic topic model for unsupervised learning of musical key-profiles, 2009a.
  • Hu and Saul [2009b] Diane J Hu and Lawrence K Saul. A probabilistic topic model for music analysis. In Proc. of NIPS, volume 9. Citeseer, 2009b.
  • Kemp [n.d.] Rebecca Ann Finnangan Kemp. Personal communication.
  • Kruger [n.d.] Jonathan Kruger. Personal communication.
  • Krumhansl and Kessler [1982] Carol L Krumhansl and Edward J Kessler. Tracing the dynamic changes in perceived tonal organization in a spatial representation of musical keys. Psychological Review, 89(4):334–368, 1982. doi: 10.1037//0033-295x.89.4.334.
  • Krumhansl and Schmuckler [1990] Carol L Krumhansl and Mark Schmuckler. A key-finding algorithm based on tonal hierarchies. Cognitive Foundations of Musical Pitch, pages 77–110, 1990.
  • Le Cun et al. [1990] Yann Le Cun, Ofer Matan, Bernhard Boser, John S Denker, Don Henderson, Richard E Howard, Wayne Hubbard, LD Jackel, and Henry S Baird. Handwritten zip code recognition with multilayer networks. In Proceedings of the 10th International Conference on Pattern Recognition, volume 2, pages 35–40. IEEE, 1990.
  • Longuet-Higgins and Steedman [1971] H Christopher Longuet-Higgins and Mark J Steedman. On interpreting Bach. Machine Intelligence, 6:221–241, 1971.
  • Mcauliffe and Blei [2008] Jon D Mcauliffe and David M Blei. Supervised topic models. In Advances in Neural Information Processing Systems, pages 121–128, 2008.
  • Ripley [2007] Brian D Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, 2007.
  • Silge [2018] Julia Silge. The game is afoot! Topic modeling of Sherlock Holmes stories, 2018.
  • Temperley et al. [2007] David Temperley et al. Music and Probability. MIT Press, 2007.
  • Toiviainen and Eerola [2016] P. Toiviainen and T. Eerola. MIDI Toolbox 1.1. https://github.com/miditoolbox/, 2016.
