Before I forget everything, let me get down the setup details of the experiment I ran last summer (2012). Besides selecting the 25 pieces and working out where I was going to run the experiment, there were a lot of other relevant details. The following descriptions are for the purpose of documenting the experiment's methodology; I hope anyone interested in employing these methods will seek higher authorities for instruction in best practices.
Experiment setup
Stephen McAdams was kind enough to let me borrow some CIRMMT equipment (Thought Technology's ProComp Infiniti and a pile of sensors) and occupy some of his lab space for a month. Though a little casual by some standards, I made myself a cubicle out of spare sound absorption panels in a large room that was usually unoccupied while I was recording. To get data from the ProComp in a useful format, Bennet Smith helped sort out some of his old scripts that conveniently time-stamped the physiological sensor data and packaged it as UDP messages. That left me with getting some system set up to run the experiment, provide a behavioural response interface, and save the recorded responses in a reliable fashion.
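In spirit, the receiving end of that arrangement looked something like the minimal Python sketch below. The port number and message layout here are assumptions for illustration, not the actual format of those scripts:

```python
import socket

# Minimal sketch of a listener for the time-stamped sensor messages.
# The port and message layout are assumptions for illustration; the
# actual format produced by the scripts may have differed.
UDP_PORT = 9999

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", UDP_PORT))

while True:
    packet, _addr = sock.recvfrom(4096)
    # e.g. b"1338566401.231 0.42 511.0 ..." -> a timestamp followed by
    # one reading per sensor channel
    fields = packet.decode().split()
    timestamp, readings = float(fields[0]), [float(f) for f in fields[1:]]
    print(timestamp, readings)
```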
User interface and behavioural signal collection

I settled on Pure Data for creating an interface. As both the experimenter and the subject, I included the experiment controls and the response interface in the same principal window. The interface had the basics, like a button to start the experiment, a slider for volume control (which I purposely tried to avoid using), and an "emergency" stop button that I never had to use.
The stimulus order was randomly set with the click of the start-experiment button (though I had to hit it twice to properly seed the random number generator). The upcoming stimulus information was shown right away so I wouldn't be as startled by the music. Clicking Next would create a file for response data and begin the recording of all the signals. The playMusic subpatch waited 10 seconds after the Next button was clicked before presenting the stimulus. Once the recording finished, the response data file was saved and closed, and the next stimulus's information was displayed. The data from all the sensors were printed in a rather complicated fashion that gave me confidence I wasn't missing anything, but it did result in a fair bit of duplication that was a headache when transforming the data into analysis-friendly formats. Instead of automating the stimulus presentation, I chose to give myself time between tracks to note any important problems or reflections on the experience. And when an experiment runs close to two hours, any subject needs a break in there to stretch and wake up a little. While setting up for a session, I used the subpatch windows to make sure the data was streaming in from the ProComp and to confirm that the stimulus order had been randomised. They were all tucked away before the recordings got underway.
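For the record, the control flow of a session amounted to something like the following sketch, rendered in Python rather than Pure Data; the file naming and the playback call are hypothetical:

```python
import random
import time

stimuli = ["track_%02d.wav" % i for i in range(1, 26)]  # the 25 pieces

# The Pd patch seeded on the start-experiment click (twice, as it turned
# out); here the system time does the seeding.
random.seed()
order = random.sample(stimuli, len(stimuli))

for stimulus in order:
    print("Next up:", stimulus)           # upcoming stimulus info, shown early
    input("Click Next (press Enter)...")  # experimenter advances the session
    with open(stimulus + ".responses.txt", "w") as log:
        # recording of all the signals would begin here, writing rows to `log`
        time.sleep(10)                    # the 10 s pause before the music
        # play_music(stimulus)            # hypothetical playback call
        # ... record until the track ends ...
    # leaving the `with` block saves and closes the response file
```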
The response interface itself had two inputs. The grey square marks the reporting area for 2D emotion, valence × arousal. After a few sessions, I made some adjustments to the interface: making this square larger, removing the halfway mark for the arousal dimension, and setting the automatic starting position to mid-valence and zero-arousal (see the red marker) instead of the centre. I included the slider on the side to test whether I found it impactful to consider how I was relating to the music. Sometimes when I listen to a performance, I really feel things from the perspective of the "expresser"; other times my feelings are more like those of a person receiving some expression of sentiment or witnessing a performance. While this question of perspective rose to prominence in response to some stimuli over the course of the experiment, for the most part I ignored it, as it was not as dynamic as my felt emotions (perhaps a post-stimulus report would have been sufficient).
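As an illustration of how pointer positions in the square could map to ratings, here is a minimal sketch assuming valence in [-1, 1] and arousal in [0, 1]; the actual patch's ranges and pixel geometry may have differed:

```python
def pointer_to_rating(x_px, y_px, square_px=400):
    """Map a pointer position in the grey square to (valence, arousal).

    Assumes a square_px-wide area with valence on the horizontal axis in
    [-1, 1] and arousal on the vertical axis in [0, 1], with y measured
    from the top of the square as in screen coordinates.
    """
    valence = 2.0 * x_px / square_px - 1.0  # left edge -1, right edge +1
    arousal = 1.0 - y_px / square_px        # bottom edge 0, top edge 1
    return valence, arousal

# The adjusted starting position: mid-valence, zero-arousal (the red marker).
print(pointer_to_rating(200, 400))  # -> (0.0, 0.0)
```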
During every session, I recorded in a text file a bunch of information about the day (how sleepy I was, how hungry, how bad my allergies were, any factors I could think of which might affect the responses), and used it to keep track of things as the experiment progressed. I took note of when a stimulus felt like it was taking on a new meaning, if I sneezed or yawned during a piece (cursed allergies), if I adjusted any of the sensors or put on a sweater (cursed air conditioning), and if I felt particularly distracted from the task at hand. I also added any ideas about how the experiment was going and intuitions as to what might be going on.
Physiological signal collection

I collected seven continuous physiological signals in these experiments, all of which should be visible in the pictures. The ProComp Infiniti unit is the box-like thing I'm holding, which transforms the optical signals from the sensors into messages sent over USB at 256 Hz. On my other hand are three sensors: skin conductance sensors (also called galvanic skin response) on my index and ring fingers measuring hand sweat, the BVP sensor (a photoplethysmographic infrared sensor) on my middle finger, and the temperature sensor tucked against the same finger in the BVP velcro cuff. I used three surface electromyography (sEMG) sensors. On my face, one is positioned to measure contractions of the corrugator supercilii, the electrodes placed a bit above my eyebrow; another is positioned over the zygomaticus, to capture the contractions which make a smile. The last sEMG sensor was applied to the back of my neck, measuring contractions of some part of the trapezius which seems to be involved when I nod my head. The second physiological sensor photo shows the electrode positions more clearly, with the black ground electrode placed on a bump of my spinal column. I'm a pretty enthusiastic head nodder, and this seemed like a simpler way of capturing that behaviour than integrating a third response measurement system.
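To summarise the recording setup in one place, here is a hypothetical channel map; the letters and orderings are illustrative, not the actual ProComp channel assignments:

```python
# A hypothetical channel map for the seven continuous signals; the letters
# and orderings are illustrative, not the actual ProComp channel numbers.
CHANNELS = {
    "A": "sEMG corrugator supercilii",   # brow, electrodes above the eyebrow
    "B": "sEMG zygomaticus",             # the contractions which make a smile
    "C": "sEMG trapezius (neck)",        # head nodding
    "D": "skin conductance (GSR)",       # index and ring fingers
    "E": "BVP photoplethysmograph",      # middle finger
    "F": "skin temperature",             # tucked into the BVP cuff
    "G": "respiration (stretch gauge)",  # ribcage expansion
}
SAMPLE_RATE_HZ = 256  # rate of the messages sent over USB
```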

The last sensor I employed was the respiration band, positioned visibly in both pictures. It used a stretch gauge to follow the expansion and contraction of my ribcage as I breathed. This sensor was often the most finicky: I tended to attach it too tightly, and it could take some time for me to realise it was uncomfortable.
For all of these physiological sensors, I tried to position them the same way each time, but I will freely admit that I am no expert with these technologies. The flexibility of the experiment interface at least allowed me to verify that all the sensors were behaving properly before each session. There were a couple of mishaps which resulted in mid-session readjustments: the electrodes at the back of my neck were really close to the hairline and once detached from my skin to float on the hairs; the respiration band was sometimes a bother, as was the tightness of the finger cuffs on a couple of days. Most of these issues can be handled along the way to analysis through data cleaning and normalisation.
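As a sketch of the sort of normalisation I mean, per-session z-scoring looks something like the following; in practice, artefact removal guided by the session notes would come first:

```python
import numpy as np

def normalise_session(signal):
    """Z-score one session's signal so between-session differences in
    sensor placement or band tightness don't dominate the analysis.

    A minimal sketch; artefact removal (sneezes, mid-session sensor
    readjustments, noted in the session log) would precede this step.
    """
    signal = np.asarray(signal, dtype=float)
    return (signal - signal.mean()) / signal.std()
```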
In a usual session, it would take me 15 minutes to get set up to record (more initially, less as June wore on) and anywhere from 95 minutes to two full hours to get through the stimuli. By the end, I was usually quite tired; the task took a lot of focus, and following the range of the music was emotionally draining. In 27 days, I recorded 24 response sessions, most in the early afternoon or evening. During that time I hardly listened to music outside of the sessions, and I avoided extra exposure to the stimuli in particular. By the end of it, I didn't hate the music, but I was more than happy to stop.