Experiments in music

From IMC wiki

Music Lessons

Sam: turn-taking - measurements of turn-taking; self-repair (preference, same as in dialogue) - pedagogical - goal-directed. Construction units tend to be phrases.

Mismatch in musical stuff versus vagueness of linguistic stuff

Analogously, Trevor Marchand's work on craft.

What makes them respond with language or music?

Individuality and interpretation of written music (grade 8 students)

Conditions - score/gestural score (divided score/shared score)

Compare with jazz/improvisation?


Julian: predicting repair from n-gram models - where are people likely to need to repair?

How do people represent music? Predict where breakdowns would happen (timing slows down where structure is unpredictable).

The likelihood of a repair is lower than that of its correct continuation (this relies on the sentence without the reparandum being grammatical).
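One way to make the n-gram idea concrete (a toy sketch only - the corpus, smoothing, and note alphabet are illustrative assumptions, not anything from the actual project): train bigram counts over note sequences and treat high-surprisal transitions as the points where breakdown or repair is predicted.

```python
from collections import Counter
import math

def train_bigrams(sequences):
    """Count bigram and left-context frequencies over note sequences."""
    bigrams, contexts = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            bigrams[(a, b)] += 1
            contexts[a] += 1
    return bigrams, contexts

def surprisal(seq, bigrams, contexts, vocab_size, alpha=1.0):
    """Per-transition surprisal (-log2 P) with add-alpha smoothing.
    High surprisal marks points where the structure is unpredictable,
    hence where slowdown or repair would be predicted."""
    out = []
    for a, b in zip(seq, seq[1:]):
        p = (bigrams[(a, b)] + alpha) / (contexts[a] + alpha * vocab_size)
        out.append(-math.log2(p))
    return out

# Toy corpus of pitch sequences (purely illustrative)
corpus = [list("CDEFG"), list("CDEFG"), list("CDEC")]
bi, ctx = train_bigrams(corpus)
vocab = {n for seq in corpus for n in seq}
scores = surprisal(list("CDEC"), bi, ctx, len(vocab))
# The E->C transition is rarer in the corpus than E->F,
# so the final transition gets the highest surprisal.
```

The same machinery transfers directly to timing data: compare the surprisal profile against measured inter-onset intervals to test the "slowdown where structure is unpredictable" prediction.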


Are polyrhythms glommed together (routinisation) or kept apart? Would you predict different responses to interruptions in each case? If so, what?
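The two hypotheses can be stated over the onset structure itself (a minimal sketch; the 3-against-4 example is an assumption for illustration): if the streams are glommed into one routine, the merged onset list is the stored unit; if kept apart, each pulse stream is represented separately and they coincide only at cycle boundaries.

```python
from fractions import Fraction

def onsets(n_beats, cycle=1):
    """Onset times of n evenly spaced beats per cycle, as exact fractions."""
    return [Fraction(i, n_beats) * cycle for i in range(n_beats)]

def polyrhythm(a, b):
    """Merge two pulse streams; return (all onsets, shared onsets).
    'Glommed' storage = the merged list as one unit;
    'kept apart' storage = the two streams plus their alignment points."""
    sa, sb = set(onsets(a)), set(onsets(b))
    return sorted(sa | sb), sorted(sa & sb)

merged, shared = polyrhythm(3, 4)
# 3-against-4 yields 6 distinct onsets per cycle,
# coinciding only at the downbeat.
```

An interruption mid-cycle should then be recoverable from either stream under the kept-apart account, but only relative to the composite pattern under the routinised account.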

Is there a way to get at this using language? Or something that we could get undergraduates to be able to do?

Do processing restrictions in language carry over to music - can you hang on to underspecified things?

Perceptual tasks: a polymetric passage that changes into a single metre. Processing should be facilitated if that metre is one of the components and the components are processed separately, but not otherwise.

Many meters hypothesis?

(Putting "polyrhythms" into Google Scholar turns up a load of experimental work that might be relevant.)

Domain generality

Same representation - strong

Same kind of representation (same form) - weak

Auditory streaming (sine tones); standard - detection of patterns in the constituent parts - tonal/pitch/rhythmic?

1. Strong: two cognitive operations share exactly the same processing/representation (i.e., they share the same implementation of the cognitive/neural resources, creating an actual bottleneck)

2. Weak: two cognitive operations use separate processing/representations but they are subject to the same computational constraints (e.g., a neural network trained on different inputs)
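The weak reading can be illustrated with a toy sketch (an assumption for illustration, not any specific proposal from the discussion): one learning mechanism, instantiated separately per domain, so the representations are distinct but the computational constraints are identical.

```python
from collections import Counter

class TransitionLearner:
    """One learning mechanism; each instance builds its own
    domain-specific representation from whatever input it gets."""
    def __init__(self):
        self.counts = Counter()

    def expose(self, seq):
        for a, b in zip(seq, seq[1:]):
            self.counts[(a, b)] += 1

    def familiarity(self, seq):
        """Summed transition counts: higher for sequences that match
        the learned statistics, whatever the domain."""
        return sum(self.counts[(a, b)] for a, b in zip(seq, seq[1:]))

# Same architecture, different domains: tones vs syllables
tones = TransitionLearner()
tones.expose(["A4", "C5", "E5", "A4", "C5", "E5"])
syllables = TransitionLearner()
syllables.expose(["ba", "di", "gu", "ba", "di", "gu"])
# Separate representations (no shared bottleneck), but both learners
# are subject to the same computational constraints.
```

On the strong reading, by contrast, there would be a single shared instance, and simultaneous musical and linguistic input would compete for it.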

There is a related idea of architectural innateness proposed by Elman et al., in their book Rethinking Innateness


And here are Conway and Christiansen on multimodal statistical learning (the conclusion is that it is not strongly domain general, though possibly weakly so): http://psych.cornell.edu/sites/default/files/cc-Psych-Science-2006.pdf

On a separate note, here are some references related to the polyrhythm and multiple-meter vs single-percept discussion:

1. This is a really interesting paper by Bruno Repp (the crown prince of auditory rhythm research) which uses exactly the streaming method that I was talking about yesterday. The following line caught my eye:

"In this task, perceptual integration was disadvantageous, but apparently could not be avoided."


2. A paper by Jeff Pressing on the performance of polyrhythms, in which the two component rhythms are interpreted as figure and ground:


3. For some more general background: a classic paper by Jeff Pressing on linear stochastic timing models: http://psycnet.apa.org/psycinfo/1999-11924-003

4. For starting to think about interaction, Pecenka and Keller on temporal prediction and synchronisation in music performance http://pkpublications.weebly.com/uploads/1/1/8/3/11835433/pecenkakeller_ebr2011.pdf

5. For an example from tonality, Schmuckler and Krumhansl on bitonality in Petrushka, suggesting that listeners form a single percept of a bitonal passage of music: http://music.psych.cornell.edu/articles/tonality/PetroushkaChord.pdf