In the Solo Response Project, I recorded my own responses, self-report and psychophysiological, to a couple dozen pieces of music every day for most of a month, generating a data set that lets me compare experiences as captured through these measurement systems. The data set has mostly been used behind the scenes to tune signal processing and statistics, but given how I reacted to these stimuli, there is plenty to learn about the music as well.
On the project website, there is now a complete set of stimulus-wise posts sharing plots of how I responded to these pieces of music as they played and over successive listenings. Each post includes a recording of the stimulus (more or less) and figures for each of:
Continuous felt emotion ratings,
Surface electromyography of the face (zygomaticus and corrugator) and of the upper trapezius,
Heart rate and respiration rate,
Skin conductance and finger temperature.
The text doesn’t explain much, but those familiar with any of these signals will find it interesting to see how a single participant’s responses can vary over time. Some highlights from the amalgam above (left to right, top to bottom):
The familiar subito fortissimo [100s] and continued thundering in O Fortuna from Carmina Burana is so effective that my skin conductance kept peaking through that final section. (At least on those days when GSR was being picked up at all.)
Some instances of respiratory phase alignment were unbelievably strong, for example to Thieving Boy by Cleo Laine [85s].
Evidence that I still can’t help but smile at the way Charles Trenet pronounces the wordplay in “Boum!” (“flic-flac-flic-flic” [60s]).
Self-reported felt emotional responses can change from listening to listening, particularly to complex stimuli like Beethoven’s String Quartet No. 14 in C-sharp minor.
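The respiratory phase alignment in the second highlight can be quantified with circular statistics. Here is a minimal sketch of the idea (my own illustration, not the project's actual pipeline), assuming inhalation onset times have already been extracted from each listening's respiration signal:

```python
import numpy as np

def respiratory_phase(onsets, t):
    """Phase in [0, 2*pi) of the breath cycle containing time t,
    given a sorted sequence of inhalation onset times (seconds)."""
    onsets = np.asarray(onsets)
    i = np.searchsorted(onsets, t, side="right") - 1
    if i < 0 or i + 1 >= len(onsets):
        return np.nan  # t falls outside the recorded cycles
    start, end = onsets[i], onsets[i + 1]
    return 2 * np.pi * (t - start) / (end - start)

def mean_resultant_length(phases):
    """Circular concentration of a set of phases: 1.0 means
    perfectly aligned, near 0 means spread around the cycle."""
    phases = np.asarray([p for p in phases if not np.isnan(p)])
    return np.abs(np.mean(np.exp(1j * phases)))

# Toy data: inhalation onsets from three listenings of one stimulus
listenings = [
    [0.0, 4.1, 8.0, 12.2, 16.1],
    [0.5, 4.3, 8.2, 12.0, 16.3],
    [0.2, 4.0, 8.3, 12.1, 16.0],
]
phases = [respiratory_phase(on, 10.0) for on in listenings]
alignment = mean_resultant_length(phases)
```

A mean resultant length near 1 at a given stimulus time would indicate the kind of phase alignment described above; computed across the whole stimulus, it traces where breathing locks to the music.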
With great relief, I can say that the Activity Analysis paper has been accepted for publication in Music Perception! I don’t know exactly when it will come out, but here are the essential components:
Activity Analysis and Coordination in Continuous Responses to Music
by Finn Upham and Stephen McAdams
Music affects us physically and emotionally. Determining when changes in these reactions tend to manifest themselves can help us understand how and why. Activity Analysis quantifies alignment of response events across listeners and listenings through continuous responses to musical works. Its coordination tests allow us to determine if there is enough inter-response coherence to merit linking their summary time series to the musical event structure and to identify moments of exceptional alignment in response events. In this paper, we apply Activity Analysis to continuous ratings from several music experiments, using this wealth of data to compare its performance with that of statistics used in previous studies. We compare the Coordination Scores and nonparametric measures of local activity coordination to other coherence measures, including those derived from correlations and Cronbach’s α. Activity Analysis reveals the variation in coordination of participants’ responses for different musical works, picks out moments of coordination in response to different interpretations of the same music, and demonstrates that responses along the two dimensions in continuous 2D rating tasks can be independent.
Besides vast improvements in terms of writing style and the like, this version also includes a quick comparison of ratings to two performances of the same piece. Here is the figure related to that analysis.
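The core idea behind Activity Analysis, counting coincident response events across listeners and testing the counts against a permutation null, can be sketched roughly as follows. This is my own loose illustration, not the published algorithm: the increase threshold and the circular-shift null are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def activity(ratings, threshold=0.0):
    """Binary 'increase events': 1 where a continuous rating rises
    by more than `threshold` from one sample to the next."""
    return (np.diff(ratings, axis=1) > threshold).astype(int)

def coordination_test(events, n_perm=1000):
    """Compare the peak moment-to-moment activity count against a
    null in which each response's event train is circularly shifted,
    preserving its own rhythm but breaking alignment with others."""
    observed = events.sum(axis=0).max()
    null = np.empty(n_perm)
    for k in range(n_perm):
        shifted = np.vstack([np.roll(row, rng.integers(row.size))
                             for row in events])
        null[k] = shifted.sum(axis=0).max()
    return observed, (null >= observed).mean()  # peak count, p-value

# Toy data: 20 simulated 'listeners' whose ratings all rise near
# sample 50, plus independent noise
t = np.arange(100)
ratings = np.vstack([np.tanh((t - 50) / 2) + 0.1 * rng.standard_normal(100)
                     for _ in range(20)])
peak, p = coordination_test(activity(ratings))
```

With a shared rise in the ratings, the peak coincident-increase count far exceeds what circular shifts produce, which is the sense in which a moment of response activity counts as coordinated.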
Music listeners often fall into quiet breathing, and yet music has been shown to influence when individual listeners inhale. Here is an explanation of how deviations from quiet breathing can be measured in the respiratory sequence, and tests of how these deviations can depend on the musical work.
Defining Quiet Breath
When we are at rest and not preparing to act or thinking about acting, our bodies generally fall into the state of quiet breathing:
Short inspiration, ~1 s
Short elastic expiration, ~2.2 s
Stable periodic cycle
Quiet breathing is efficient and discreet, a respiratory sequence that does not require attention or conscious control. Compared to breathing behaviour during physical actions, the regularity of quiet breathing suggests that it should be relatively easy to model.
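One simple way to operationalise a deviation from this regular cycle, a sketch of my own rather than the dissertation's actual method, is to flag breath cycles whose duration strays from the typical cycle length; the 25% tolerance below is an arbitrary assumption:

```python
import numpy as np

def flag_deviant_cycles(onsets, rel_tol=0.25):
    """Flag breath cycles whose duration deviates from the median
    cycle length by more than `rel_tol` (fraction of the median).
    `onsets`: sorted inhalation-onset times in seconds."""
    durations = np.diff(np.asarray(onsets))
    typical = np.median(durations)
    deviant = np.abs(durations - typical) > rel_tol * typical
    return durations, deviant

# Toy sequence: steady ~3.2 s quiet-breathing cycles (inspiration
# ~1 s + expiration ~2.2 s) with one stretched breath in the middle
onsets = [0.0, 3.2, 6.4, 9.5, 12.7, 18.0, 21.2, 24.4]
durations, deviant = flag_deviant_cycles(onsets)
```

Running this on the toy onsets flags only the one stretched cycle; applied to real respiration, the timing of such flagged cycles relative to the music is what can then be tested across works.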
(This post is derived from a poster presented at the Making Time in Music conference, hosted by the Faculty of Music of Oxford University, Sept 14–16th, 2016.)
Our breath marks time for the entirety of our lives. Whether a period of 2 seconds or 20, we know roughly how it will continue or be adjusted to new demands, and this need for fresh air imposes an inescapable rhythm just beyond what is readily heard as metrical. We use breath to communicate with speech and affective displays, but we also monitor each others’ breathing and use this information to coordinate interactions: breathing in anti-phase when in dialogue, or together when synchronising actions. Obviously, musical activities such as singing and playing wind instruments involve exhalations and the particular physical constraints of our respiratory system. Other components of breath are used to prepare and set the timing of actions. For example, the inhalation at the beginning of a piece defines tempo and intensity for many solo performers and small ensembles, and some types of musicians are extremely practiced at picking up all that is needed to play in synch from one careful gasp.

We might consider breath to be auxiliary to the actions of music making, just a means to the sound, but this biological system may play a fundamental role in our understanding of music and musical time. There is growing evidence that listening to music can engage our respiratory system, drawing us into a specific physical division of time. This coordination is not so strict as breathing with the heard performers, but rather a subtle alignment of phase at specific moments in a particular piece. For this to occur, even intermittently, our respiratory system must be engaged in the work of understanding what we hear. Voluntarily or unconsciously, breathing informs synchrony on the scale of milliseconds, seconds, and minutes, and this phasic and adaptive system promises to be powerful in defining musical time both physically and metaphorically.
I’d been keeping quiet about my thesis research prior to running experiments, but an update is now long overdue.
My dissertation is on the measurement of changes in respiration during music listening: capturing when and how a (seated, attentive) listener’s breathing changes. It involves messy measurements from multiple data sets, musical stimuli of many genres, and lots of heavy non-parametric statistics, and it makes me smile every time I work on it. Considering the respiratory cycle is not terribly difficult to track passively, the more information we can gather via this discreet signal, the better, right? There are many potential uses, many different types of information we might glean from this signal, but I am hesitant to write out these possibilities before I’ve completed a few more tests.