Mind and Computation Talk, Monday, Dec. 3

November 28, 2007

Musicat: a Model of Musical Expectation

Eric Nichols
Computer Science Department

Monday, Dec. 3, 2007, 4:00-5:30    
Ballantine Hall 005

In the field of music cognition, melodic expectation has been a topic of much interest in the past several decades. Several theoretical and computer models have been developed by experts in the field, but I contend that these models differ in significant ways from the way humans generate expectations. For instance, Margulis’ (2005) model, a synthesis of many major prior models, represents the state of the art: given an input piece of music, it simulates the process of listening to the music, one beat at a time, computing an expectation for each beat as a probability distribution over all possible successive notes. This distribution is based primarily on the shape of the melody in the preceding few notes and the harmonic context; a small set of formulas deterministically computes the probabilities. Absent from the model is flexible perception of the input; there is no room for creativity in the generated expectations.
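The beat-by-beat expectation idea described above can be sketched as a tiny program. Note that this is only an illustrative toy, not the Margulis model's actual formulas: it uses a single hypothetical proximity weighting (small intervals from the last pitch are more expected), whereas the real model combines several factors, including melodic shape and harmonic context.

```python
import math

def expectation_distribution(recent_pitches, candidates):
    """Toy melodic-expectation sketch: weight each candidate next pitch
    by its proximity to the most recently heard pitch (smaller intervals
    are more expected), then normalize the weights into a probability
    distribution over the candidates.  Pitches are MIDI note numbers.
    This is an illustrative stand-in for a deterministic expectation
    formula, not the actual model discussed in the talk."""
    last = recent_pitches[-1]
    # Exponential decay with interval size: hypothetical weighting choice.
    weights = {p: math.exp(-abs(p - last) / 2.0) for p in candidates}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

# After hearing C4 (60) then D4 (62), D4 itself is the closest candidate,
# so it receives the highest probability in this toy scheme.
dist = expectation_distribution([60, 62], [60, 62, 64, 67])
```

The point of the sketch is the output type: whatever the internal formulas are, the model's expectation at each beat is a normalized probability distribution over possible next notes.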

The new model discussed in this talk, called Musicat (after Copycat and Metacat), extends Hofstadter and Mitchell’s Copycat architecture for use in a musical domain. Whereas Copycat and related programs operated on essentially static inputs, Musicat will take into account the temporal nature of music. Musicat will model the dynamic experience of listening as music unfolds through time, where representations and expectations are formed and quickly revised as new notes are perceived and previously perceived notes fade into memory.

In this talk I will first describe an in-progress pilot study of melodic expectation. I recruited music students and recorded their improvised, sung responses to two-note melodic fragments. I will share preliminary results from my analysis of these data and discuss how they might influence the development of Musicat. Second, I will describe the overall architecture of Musicat and share some ideas for novel elements I hope to include in the program that were not present in earlier FARG models.


The Mind and Computation Talk Series is a forum for Indiana University Computer Science and Informatics students and faculty, as well as visitors to IU, who work in the areas of artificial intelligence and cognitive science to make presentations about their research. The talks are open to the general IU public and will be announced to the mailing lists of Computer Science, Informatics, and Cognitive Science. Students in the Cognitive Science PhD Program or in the Joint PhD Program in Cognitive Science and Computer Science or Informatics may satisfy the program’s public colloquium requirement by giving a talk in this series.


One comment

  1. When one hears an unfinished musical phrase, one’s obvious expectation is to hear a basic ‘cadence.’ If someone is asked to complete an unfinished melody, they will, unless they are a skilled composer, find their way back to the I chord melodically as soon as possible, because that is the natural resolution.
