Multi-tasking in auditory cortex
We recently published a study in the Journal of Neuroscience on the effects of inactivating auditory cortex on hearing, as well as an accompanying article discussing the optogenetic work within. This post explains how the effects of cortical inactivation shine a light on the mechanisms of hearing and what that means for our understanding of the brain and intelligent systems more broadly.
A quick primer on auditory neuroscience
Hearing is a pretty amazing sensory system, because it can build an understanding of auditory scenes from just two signals - the sound waves arriving at your left and right ears. This allows us to identify multiple sound sources in our environment and where each source is located.
Sound identity and location are independent features, in that a person can move while talking and you can still recognize they're the same sound source, despite their change in position. This is one example of a broader ability known as perceptual invariance and reflects the brain's ability to extract features of sound such as location, pitch and timbre. As auditory neuroscientists, we're interested in how neural networks extract these features and use them to build our perception of auditory scenes.
One theory, inspired by work in the visual system, is that the brain contains dedicated regions within the cerebral cortex that process sound identity and sound location separately. This theory is sometimes called the 'what' vs 'where' hypothesis (see cartoon below), and predicts that damage to specific brain regions - for example, after a stroke - would result in selective impairments in either the ability to localize or identify sound sources. Likewise, we would expect the activity of neurons in certain regions to be sensitive to sound location but not identity, while the reverse should be true in other brain areas.
The 'what vs. where hypothesis' reflects a broader view of the brain as a series of functionally specialized regions, each with a highly localized role, for example in processing reward or representing our position in the environment. This perspective can be contrasted with theories of distributed processing, in which many neurons across large areas of the brain play a role in multiple aspects of cognition. According to the distributed view, neurons are more like generalists that do many things rather than acting as dedicated components with narrowly defined functions.
Whether the brain is best understood as functionally specialized or distributed is still very much debated, and most neuroscientists fall somewhere on a spectrum between the two viewpoints. Indeed, it's probable that some brain structures are better understood in relation to a single function, while others play many diverse roles (or perhaps even a role in everything we do). In the auditory system, however, and particularly in the study of auditory cortex, we need more empirical data to drive our understanding forward.
In past work, we've found that the activity of auditory cortical neurons supports the idea of distributed processing: Many neurons are sensitive to both the location and identity of sounds, and often we find neurons that are also modulated by cognitive factors, such as whether the animal is actively listening and how the animal is planning to respond to sounds when performing behavioral tasks. These neurons are found across auditory cortex, and aren't limited to specific cortical fields, as would be predicted by the 'what vs. where' hypothesis.
We're not the only ones to observe these patterns, which have been seen in a variety of species, with many different types of sounds and behaviors, and by researchers across the world. This suggests that distributed processing may be important to our ability to analyse and understand sounds.
Mixed selectivity and cortical inactivation
The pattern of activity in which neurons are sensitive to multiple sensory and cognitive dimensions is sometimes called mixed selectivity. In the past decade it has become clear that mixed selectivity is widespread in many brain regions and, by creating high dimensional representations, has the potential to provide much more flexibility than groups of functionally specialized units. Indeed, as Stefano Fusi, Earl K. Miller and Mattia Rigotti point out in a highly influential paper, mixed selectivity may be vital in enabling complex behavior and cognition.
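To see why, here's a minimal sketch of the dimensionality argument (a toy construction written to illustrate the logic, not analysis code from any of these studies): with only "pure" units, no linear readout can produce a response that depends on the conjunction of two task variables (an XOR), but adding a single nonlinearly mixed unit makes the problem solvable.

```python
import numpy as np

# Two binary task variables, e.g. sound identity ("what") and location ("where")
what = np.array([0, 0, 1, 1])
where = np.array([0, 1, 0, 1])

# "Pure" units each encode one variable; the "mixed" population adds a
# single unit that responds nonlinearly to the conjunction of both
pure = np.column_stack([what, where]).astype(float)
mixed = np.column_stack([what, where, what * where]).astype(float)

def linear_readout_solves(features, labels):
    """True if a linear readout (with bias) can reproduce `labels` exactly,
    checked by least squares on this tiny noiseless problem."""
    A = np.column_stack([features, np.ones(len(features))])
    w, *_ = np.linalg.lstsq(A, labels, rcond=None)
    return np.allclose(A @ w, labels)

xor = what ^ where  # a response depending on the conjunction of both variables
print("pure units suffice: ", linear_readout_solves(pure, xor))   # False
print("mixed units suffice:", linear_readout_solves(mixed, xor))  # True
```

The same logic scales up: nonlinear mixed selectivity raises the dimensionality of the population response, so a downstream linear readout can implement many more input-output mappings.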
The majority of evidence for mixed selectivity comes from recordings in which we measure the association between neuronal activity and sensory (or cognitive) variables. However, to go further and understand how mixed selectivity contributes to behavior, we need to conduct intervention studies in which we remove the neurons showing such patterns of activity and observe the effects on behavior. This was the goal of our current study.
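For intuition, here's a toy sketch of that kind of association analysis (synthetic data and a simple regression invented for illustration, not the analysis pipeline from our paper): a neuron whose firing depends on stimulus identity, location and their interaction shows non-zero weights on several regressors, which is what "sensitive to multiple dimensions" means operationally.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trials: two binary stimulus variables (vowel identity, location)
n = 200
identity = rng.integers(0, 2, n)
location = rng.integers(0, 2, n)

# A hypothetical mixed-selectivity neuron: firing rate depends on both
# variables (and their interaction), with Poisson noise on spike counts
rate = 5 + 3 * identity + 2 * location + 4 * identity * location
spikes = rng.poisson(rate)

# Regress spike counts on both variables and their interaction;
# non-zero weights on more than one regressor indicate mixed selectivity
X = np.column_stack([np.ones(n), identity, location, identity * location])
beta, *_ = np.linalg.lstsq(X, spikes, rcond=None)
print(dict(zip(["baseline", "identity", "location", "interaction"],
               np.round(beta, 2))))
```

In real data this kind of model would be fit across many stimuli and behavioral conditions, with appropriate statistics for deciding which weights are reliable.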
Our research team at UCL has developed techniques to reversibly inactivate neurons in behaving animals, most recently optogenetics (discussed here), but also cortical cooling, in work led by Katherine Wood. Cooling does exactly what it says on the tin: reducing the temperature of neurons below 20°C prevents the cells from firing. Cooling has a long history in neuroscience and, although less flexible than optogenetics, is capable of fully reversible inactivation of large cortical areas (a major challenge in animals with large brains!).
We tested the effects of auditory cortical inactivation on the ability of ferrets to perform multiple listening tasks. Here, ferrets are a great choice because they're able to learn a number of complex tasks that are directly relevant to hearing research in humans. Specifically, we tested the effects of cortical inactivation on the ability of ferrets to discriminate speech sounds (vowels) in clean and noisy listening conditions.
We also trained ferrets in a second task to localize sounds at one of seven possible positions, while ignoring the identity of the sounds (ensured by presenting only noise bursts). The figure below comes from the paper and illustrates the different tasks that every ferret learned.
Selective hearing impairments
The area of auditory cortex that we inactivated contains neurons with mixed selectivity for vowel identity and sound location, which led us to predict that cooling or optogenetic interventions should impair both the ability to discriminate vowels and the ability to localize sounds. We did indeed find that performance in both sound localization and vowel discrimination in noise was worse during cortical inactivation. Our main finding was thus consistent with the idea that a brain area in which mixed selectivity is observed contributes to behaviors with very different demands, and so might form at least part of a general purpose system (see the next section for more discussion).
Interestingly, we didn't see any effect on the ability to discriminate vowel identity in clean conditions. This "null result" shows that the inactivation procedure did not impair the general ability of animals to move or concentrate, which is actually a useful piece of information. Why animals could still discriminate vowel identity without a large area of auditory cortex is unclear, though it's surprising what one can do without cortex. We believe that our results could arise if brain areas before the cortex (the brainstem, thalamus and striatum) can compute the identity of sounds in quiet conditions and coordinate responses. Alternatively, the relative simplicity of the task (what we call 'low dimensionality') might enable redundant systems to fill in for the missing neurons in ways that would not be possible in more complex tasks.
We also looked at an aspect of hearing known as spatial release from masking, which we experience when competing sound sources are separated in space and listening becomes easier. We experience this in everyday situations: in a restaurant, for example, it's harder to hold a conversation if all the customers are seated in the same area. In our study, ferrets performed the vowel discrimination task better when the competing noise was presented at a separate location from the vowels. Intriguingly, this benefit of spatial separation remained (and actually got larger!) during cortical inactivation. This indicates that spatial release from masking isn't computed in cortex, but also suggests that spatial separation of competing sounds compensates for the lack of cortical function during inactivation. In turn, this supports the idea that auditory cortex plays a key role in separating overlapping sounds - something we've suspected for a long time but never had the evidence (until now) to show.
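As a rough intuition for how separation helps, here's a toy Python sketch of the better-ear component of spatial release from masking (the sound levels and head-shadow value are illustrative assumptions, not measurements from our study, and the model ignores binaural unmasking entirely):

```python
import numpy as np

def ear_levels(azimuth_deg, level_db, head_shadow_db=6.0):
    """Toy acoustics: the head attenuates a lateral source at the far ear.
    Attenuation scales with sin(azimuth); the 6 dB maximum is illustrative.
    Returns (left, right) levels in dB for a source at `azimuth_deg`
    (positive azimuths to the right)."""
    shadow = head_shadow_db * np.sin(np.radians(abs(azimuth_deg)))
    near, far = level_db, level_db - shadow
    return (far, near) if azimuth_deg >= 0 else (near, far)

def better_ear_snr(target_az, masker_az, target_db=60.0, masker_db=60.0):
    """Signal-to-noise ratio (dB) at whichever ear hears the target best."""
    t_left, t_right = ear_levels(target_az, target_db)
    m_left, m_right = ear_levels(masker_az, masker_db)
    return max(t_left - m_left, t_right - m_right)

colocated = better_ear_snr(target_az=0, masker_az=0)   # noise from the front
separated = better_ear_snr(target_az=0, masker_az=90)  # noise moved to the side
print(f"better-ear spatial release: {separated - colocated:.1f} dB")
```

Moving the masker to the side improves the signal-to-noise ratio at the ear furthest from the noise, and a listener (or ferret) can exploit whichever ear is better.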
General purpose hearing
Humans and other animals have evolved the ability to identify and exploit relevant features of their environment to solve problems flexibly. A hallmark of hearing is thus not just the ability to perform one, or even a few, tasks, but rather a general purpose system that can adapt to ongoing demands that may arise either externally (e.g. recognizing new words) or internally (e.g. adapting to age-related hearing loss). Our research, both in the current study and in past findings, indicates that auditory cortex plays a role in many different aspects of hearing.
In the current study, we trained ferrets in two different tasks (sound localization and vowel discrimination in a variety of noise conditions), but in future it will be important to extend this work to a broader range of behaviors. This will include more naturalistic situations in which traditional trial-based approaches are replaced with less structured tests that sample the higher dimensional space of behaviors the brain has evolved to deal with. These higher dimensional studies may be critical for understanding the value of mixed selectivity and of coding strategies that leverage the computational capacity of neural populations rather than individual cells. Theoretical work increasingly suggests that our insights into brain function are limited by the behavioral contexts in which we conduct neuroscience research, so these steps may be vital in raising the ceiling on our current understanding of the auditory system.
Similar principles apply to the study of both biological and artificial intelligence, and it is notable that AI research is moving in a similar direction, with groups investigating how neural networks learn to solve multiple tasks with potentially competing demands. Our future work and the development of real-world hearing in artificial systems are thus both likely to benefit from comparisons between natural and engineered systems that make sense of sound.
Epilogue
Huge thanks again to Katarina Poole, Katherine Wood and Jenny Bizley for their work to get this project over the line, and to the Wellcome Trust and BBSRC for funding. Stay tuned for future posts discussing more of our research and work at the interface between artificial and biological intelligence. In the meantime, if you want to know more or have further questions, please feel free to contact me, Stephen Town.