I’ve begun writing what I hope will be a regular series of short essays on performance and cognition, intended for non-neuroscience, non-music cognition readers. These should be treated as exercises rather than properly-formed treatments of issues dear to my heart. I don’t like writing, but I need the practice, and sharing is a good motivator for editing. Given those caveats, here is the first. Sorry for the second person.
Note: MOTL stands for More On That Later, signalling a topic I hope to return to another day.
****
Embodied Perception, Mirror Neurons, and Empathy
Different parts of the brain are crucially involved in different cognitive functions. Sensory systems have key locations: auditory on the sides, vision at the back, and somatosensory arching between the ears. While thought doesn’t simply happen in discrete locations (MOTL), the sequences of neural activity (electrical and chemical) that support action, reaction, thought, and sensation concentrate in these areas, depending on what is going on.
When we make an action, like when we say a word, the motor commands which run to the muscles in our face, throat, and abdomen are shadowed by efferent copies of the action sent to our sensory processing areas. The auditory cortex is readied to hear the acoustic consequences of speech; the somatosensory cortex is prepared for our eventual lip and tongue positions (proprioception) and for all the transient changes in how these sensitive areas come into tactile contact. The anticipated sensory consequences of our actions are then compared to the inputs collected from our sensory organs, such as the vibration of vocalisation through bone and the acoustic reflections which reach our eardrums. If the input matches the expectation well enough, our brain suppresses that expected information from our attention so we can stay focused on the next thing. If it doesn’t match, our attention is drawn to the error, and sometimes this interferes with continuing the action. One hypothesized cause of stuttering is the mistaken flagging of errors in the heard consequences of speech, sometimes alleviated by making the sound of one’s own voice even more foreign. You’ve surely heard the confused and broken speech of people using microphones with live feedback for the first time. Changes in timbre and delays are very distracting, but with some experience with a particular set-up, our brains figure out new models and flow is possible again (MOTL).
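If it helps to see that predict-compare-suppress loop laid out concretely, here is a toy sketch of the idea. It is an illustration only, not a model of real neural computation; the feature names, numbers, and tolerance are all invented for the example.

```python
# Toy sketch of the "compare prediction to feedback" idea above.
# Illustration only: the features, values, and tolerance are made up.

def forward_model(motor_command):
    """Predict the sensory consequences of a motor command (the efferent copy)."""
    predictions = {
        "say_ba": {"loudness": 0.6, "lip_closure": 0.9},
        "say_oo": {"loudness": 0.5, "lip_closure": 0.2},
    }
    return predictions[motor_command]

def compare(predicted, actual, tolerance=0.15):
    """Suppress feedback that matches the prediction; flag whatever doesn't."""
    errors = {}
    for feature, expected in predicted.items():
        if abs(actual[feature] - expected) > tolerance:
            errors[feature] = actual[feature] - expected
    return errors  # an empty dict means nothing demands our attention

# Normal speech: feedback roughly matches the prediction, nothing is flagged.
print(compare(forward_model("say_ba"), {"loudness": 0.55, "lip_closure": 0.92}))

# Odd or delayed microphone feedback: loudness is way off, the error is flagged
# and (in the essay's terms) can interfere with continuing the action.
print(compare(forward_model("say_ba"), {"loudness": 0.10, "lip_closure": 0.92}))
```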
Imagined actions appear to trigger these efferent copies of the consequences in our sensory systems, and the connection goes both ways: sensory input related to motor action can evoke shadows of our own actions in the motor system (typically in the premotor cortex). Musicians, or even amateurs with only a few weeks’ training, show similar patterns of hemodynamic responses in the motor system when thinking of, performing, and hearing performances involving actions they’ve practiced. Work on speech also provides evidence of common neural resources evoked in imagined articulation and heard syllables: if you think of saying “ba” and then hear someone say “ba” half a second later, the spike of activity in your brain to the heard “ba” will be slightly attenuated, because some of the same neurones are involved in both and they won’t have had time to recover fully.
This all points to perception-action networks in the brain: bi-directional mappings between actions, our own or others’, and the perception of those actions. But it isn’t possible to tease apart how these shadows of action and perception connect via trans-cranial experiments on human subjects, because such measurements are too vague in time and/or space. In the early nineties, a monkey with electrodes placed directly into its premotor cortex saw a researcher pick up something, and neurones fired which had otherwise been active when the monkey itself picked up the same object. Rather than firing at the sight of a hand, or a fruit, or the extension of the arm, these mirror neurones fired both for the action and for the perception of grasping, meaning their activity reflects the common action or goal of the agent, whether self or other. The implications of the same neurones firing to two related stimuli are not at all obvious, but this discovery got people very excited about meaning, modelling, and theory of mind, generating many naive and unrealistic theories (MOTL, maybe). Mirror neurones have since been measured in many spots in animal brains and are presumed to be crucial for the perception-action networks described above.
This relates to the initial perceptual layer of the experiences I was describing in the last letter: motor imagery active during music listening. How do we get from motor simulation to emotion? There is much I’ve yet to learn about affective neuroscience, but I can share a few key pieces. Emotions, given their important consequences for our actions and behaviour, are mostly manifest through our lower brain regions, the stuff we share with lizards. However, there are plenty of connections between these older regions and the neocortex, where our big brain advantage resides (presumably). We assume a lot of socially oriented capacities live in these higher cortical layers of the brain, connecting down into the system which controls fight or flight, rewards learning, and modulates the inclination to approach elements in our environment. Behaviourally, humans report affective empathy: experiencing some degree of happiness when seeing others happy, and a little bit of discomfort when witnessing physical harm to others. Does this mean observing the emotive actions of others triggers a shadow of emotion the same way it triggers a shadow of, say, proprioception? Well, maybe, but it really isn’t that simple. It is one thing to recognize how someone feels (or at least to deduce a reasonable approximation with the information at hand) and another to feel it with them. Since the feelings of others come with consequences for ourselves (e.g., the glee of an adversary), there is an important intermediate step of self-referential processing (in the cortical midline structures) where the brain works out how the present data relates to our memories, goals, and the like. Some stimuli get fast-tracked to the midbrain, prompting action before introspection kicks in (it’s a snake!). It is likely that information about the mind-states of others gets passed around to many systems in parallel, with multivalent consequences for our own felt states arriving at different delays after the first assessment.
A question for performance, and in particular for storytelling, is what happens to this self-referential processing when we follow something fictional (practically or explicitly). Do we have control over how we relate to, empathise with, or simulate the experience of the protagonist? When observing a performance by a group, how do we navigate the impressions and expressions of many distinct minds? This I will have to get back to another time.
I really should add references to this.