In the spring of 2021, the Buschman lab published two studies. The first, “Shared mechanisms underlie the control of working memory and attention”, was published in Nature in March 2021. Led by Matthew Panichello, a former PNI graduate student and current post-doctoral fellow at Stanford, this study investigates how the brain manipulates objects held in memory. Matthew Panichello and Tim Buschman demonstrate that this process is similar to the attentional manipulation of objects perceived in the physical world. The second, “Rotational dynamics reduce interference between sensory and memory representations”, was published in Nature Neuroscience in April 2021. Led by Alexandra Libby, a graduate student at PNI, this study investigates how the brain can efficiently represent the memory of the past and the perception of the present at the same time. Although they address different cognitive processes, both studies examine how the brain transforms information.
The first study investigates how the brain manipulates information in memory. Even when performing a relatively simple task, like cooking, the brain is continuously transforming information. Imagine you are baking a cake and want to take out the ingredients that are in the fridge (see Fig. 1A). You might already have the entire list of ingredients memorized, and from it you could mentally select the ingredients that are in the fridge. Alternatively, you could read through the list and filter for ingredients that are stored in the fridge. The two strategies have the same end result: “take the eggs and milk out of the fridge”, but they require the brain to transform information in different ways. One of the central questions raised by Panichello and Buschman is how the manipulation of mental objects in working memory is similar to perceiving objects in the environment.
For their study, Panichello and Buschman trained monkeys to report a color they had previously seen, with two variations of the task. In the first version, two colors were briefly shown in the upper and lower part of a screen. After a delay, the monkeys were then told whether they had to report the color at the top or bottom (memory condition). The monkeys had to hold the two colors in working memory during the delay and then select the correct color once the position was revealed (see Fig. 1B). In the second version of the task, the monkeys were first told whether the relevant color was at the top or the bottom of the screen and were then shown the colors. In this case, they had to attend to a specific location on the screen (attention condition).
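The logic of the two task conditions can be sketched as a small trial generator. This is only an illustrative reconstruction, not the lab's actual task code; the color palette, event names, and function names are all hypothetical.

```python
import random

COLORS = ["red", "green", "blue", "yellow"]  # illustrative palette, not the study's stimuli

def make_trial(condition):
    """Build one hypothetical trial of the color-report task.

    condition: 'memory'    -> colors shown first, location cue after a delay
               'attention' -> location cue shown first, then the colors
    """
    colors = {"top": random.choice(COLORS), "bottom": random.choice(COLORS)}
    cue = random.choice(["top", "bottom"])  # which location must be reported
    if condition == "memory":
        # Both colors must be held in working memory until the cue arrives.
        events = [("show_colors", colors), ("delay", None), ("cue", cue)]
    elif condition == "attention":
        # The cue comes first, so attention can be directed before the colors appear.
        events = [("cue", cue), ("show_colors", colors)]
    else:
        raise ValueError("condition must be 'memory' or 'attention'")
    return {"events": events, "correct_answer": colors[cue]}

trial = make_trial("memory")
```

In both conditions the correct answer is the same (the color at the cued location); only the order of events, and therefore the computation the brain must perform, differs.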
The authors recorded neurons in visual, parietal, and prefrontal cortices of the monkeys while they performed this task. When monkeys had to hold in mind the colors before seeing the attentional cue (memory condition), the prefrontal cortex was the first to represent whether the color to report was at the top or the bottom. Critically, the same population of neurons also represented the attended location when the attentional cue was given before the colors (attention condition) (see Fig. 2A). This means that the prefrontal cortex is a general controller of information, selecting top or bottom either in the mind or seen in the world. Just like attending to one half of the screen ‘boosts’ the representation of that color compared to the unattended one, selecting from memory one of the colors did the same thing across all the regions (see Fig. 2B).
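The ‘boosting’ described above can be caricatured as gain modulation: the cued item's population response is amplified while the uncued one is attenuated. The rates and gain values below are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical population firing rates for the colors at the two locations.
rates = {"top": np.array([0.9, 0.1]), "bottom": np.array([0.2, 0.8])}

def select(rates, cued, gain=1.5, suppress=0.7):
    """Boost the cued item's representation and attenuate the other one.

    A minimal gain-modulation sketch; the gain/suppression factors are
    arbitrary illustrative numbers.
    """
    return {loc: r * (gain if loc == cued else suppress)
            for loc, r in rates.items()}

boosted = select(rates, "top")
# After selection, the cued item's representation dominates the population.
stronger = boosted["top"].sum() > boosted["bottom"].sum()
```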
In the memory condition, neurons separately represent the colors at the top and bottom of the screen during the delay (see Fig. 3A). When the attentional cue is given, the representation of the selected color rotates in the prefrontal cortex. In this new representation, selected colors from different locations are aligned (see Fig. 3B). In the attention condition, the colors to report from different locations are also aligned. “This is where the lines between working memory and decision-making are blurring, the brain represents the information as an action plan: at the end of the trial, report this color”, explains Matthew Panichello. “It’s also a question of preventing interference, you want to move one item out of the way and keep the other in an appropriate format to be used later”.
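The geometry of this rotation can be illustrated in a toy two-dimensional state space. In this sketch (which is an assumption-laden caricature, not the paper's analysis), one axis codes the top item and the other the bottom item; after the cue, the selected item is rotated onto a shared ‘report’ axis, so the downstream readout is the same whichever location it came from.

```python
import numpy as np

def rotate(v, theta):
    """Rotate a 2-D vector by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ v

top_item = np.array([1.0, 0.0])     # memory of the top color (toy encoding)
bottom_item = np.array([0.0, 1.0])  # memory of the bottom color (toy encoding)

# After the cue, the selected item is rotated onto a common 'report' axis.
report_from_top = rotate(top_item, np.pi / 4)        # top item selected
report_from_bottom = rotate(bottom_item, -np.pi / 4)  # bottom item selected

# The two selected representations end up aligned, independent of origin.
aligned = np.allclose(report_from_top, report_from_bottom)
```

Aligning the selected item into one common format is what lets a single readout act on it regardless of where it was originally stored.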
The second article will tell us more about how the brain prevents interference. How does the brain maintain multiple types of information – sometimes even, as we’ll see, contradictory – without getting mixed up? Alexandra Libby and Tim Buschman wanted to understand this at the single neuron level by looking at the relationship between sensory information and short-term memory.
The authors created an unsupervised sequence learning task in which mice listened to sequences of chords over four days (see Fig. 4A). The first two chords were highly predictive of the third, and the fourth chord was always the same (ABCD and XYC*D) (see Fig. 4B). In rare cases, the third chord was switched (ABC*D or XYCD, in 20% of cases), creating an unexpected event (see Fig. 4C).
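The sequence structure above can be sketched as a small generator. This is a hypothetical reconstruction for illustration; the chord labels follow the article, but the function and its parameters are invented.

```python
import random

def make_sequence(swap_prob=0.2):
    """Generate one hypothetical chord sequence from the task.

    Expected sequences are A-B-C-D and X-Y-C*-D; with probability
    swap_prob (20% in the study) the third chord is swapped, yielding
    the unexpected sequences A-B-C*-D or X-Y-C-D.
    """
    first, second, expected_third = random.choice(
        [("A", "B", "C"), ("X", "Y", "C*")]
    )
    swapped = random.random() < swap_prob
    # On swap trials the third chord is the one predicted by the OTHER context.
    third = {"C": "C*", "C*": "C"}[expected_third] if swapped else expected_third
    return [first, second, third, "D"], swapped

sequence, was_swapped = make_sequence()
```

Because the first two chords almost always predict the third, the mice can form an association between them, which is exactly what the rare swap trials probe.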
They recorded individual neurons in the auditory cortex, a low-level sensory area, while the mice became familiar with the sequences over several days. “When we look at neurons’ population activity, we often have a ‘black box’ view of the brain. We wanted to open the box and understand what the individual neurons are doing,” explained Alexandra Libby. As the mice heard the chords, neurons in the auditory cortex were activated. These patterns of neural activation create the sensory representation of the chords in the brain. The authors observed that, as time passed, the sensory representation of the expected stimulus (C or C*) converged with that of the initial chord that usually precedes it (A for C, and X for C*). They believe that this is how the auditory cortex creates the association between the chords (see Fig. 5A).
The convergence of the sensory representations is potentially problematic: if memory and perception shared the same format, the information about the first chord would be overwritten the moment an unexpected chord is heard. However, the authors found that the memory of the chords is maintained in a different format than the sensory representation (see Fig. 5B). This means that the sensory representation and the memory representation are independent and cannot interfere with each other.
Libby and Buschman went on to show that two main types of neurons create these independent representations: stable and switching (see Fig. 5C). The stable neurons did not change their preference during the task. If a stable neuron was more active for A than X (yellow trace in Fig. 5C), then it maintained this higher activity throughout the sequence. In contrast, the switching neurons inverted their preference after the first chord was heard (blue trace in Fig. 5C). The authors show that in this way, the brain can represent the sensory perception of the present and the memory of the past without interference. This method of creating independent representations required fewer neurons and less energy to transition between states.
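A two-neuron toy model shows why the stable/switching mix keeps memory and perception from interfering. In this sketch (invented for illustration, with each neuron reduced to a signed preference), the population's memory-epoch code ends up orthogonal to its sensory-epoch code.

```python
import numpy as np

def population(context, epoch):
    """Toy two-neuron code for context A (+1) vs X (-1).

    The stable neuron keeps its preference from the sensory epoch into
    the memory epoch; the switching neuron inverts it.
    """
    sign = 1.0 if context == "A" else -1.0
    stable = sign                                       # same preference in both epochs
    switching = sign if epoch == "sensory" else -sign   # flips once the chord is past
    return np.array([stable, switching])

sensory_A = population("A", "sensory")  # response while hearing A
memory_A = population("A", "memory")    # code while remembering A

# The dot product is zero: the memory of A lives in a direction orthogonal
# to the sensory code, so it cannot be overwritten by the incoming chord.
orthogonal = float(np.dot(sensory_A, memory_A)) == 0.0
```

Because rotating into an orthogonal direction reuses the same neurons rather than recruiting a dedicated memory population, this scheme is cheap, which matches the article's point about fewer neurons and less energy.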
Together, these studies show how the brain efficiently transforms and stores information for future use. By delving into the dynamics of these transformations and the geometry of their representations, the authors uncovered fundamental principles of neural processing. These principles could extend to other domains of cognition and help the field move toward a more complete understanding of brain computation.
by Caroline Jahn