New research from the Cohen lab sheds light on our capacity for multitasking

Why can humans sometimes effortlessly perform multiple tasks simultaneously and sometimes not? For example, when sharing a meal with a friend, you can eat, talk, listen, and even breathe all at the same time. However, if you were to try to write down your grocery list for the week while simultaneously performing complex mental arithmetic, you would likely find it too challenging. New research from the lab of Princeton Neuroscience Institute Professor Jonathan Cohen, published earlier this year in Nature Physics, uses artificial neural networks and advanced mathematical techniques from physics to explain this apparent contradiction.

The study, titled "Topological limits to the parallel processing capability of network architectures" and jointly led by Senior Research Scientist Giovanni Petri of the ISI Foundation in Turin, Italy, and PNI graduate student Sebastian Musslick, provides a fresh perspective on this old question. Historically, cognitive scientists believed that our limited capacity for multitasking might be due to a fundamental limitation in the way we process information, wherein all ‘mental computations’ pass through a centralized processing center in the brain, creating a ‘bottleneck’ in our ability to do multiple things simultaneously.

Petri, Musslick, and their collaborators posit an alternative view: the brain, in an effort to simplify the process of learning new tasks, creates ‘shared’ mental representations of similar tasks, and multitasking failures arise when multiple tasks attempt to use one of these shared mental representations at the same time.

Consider the task of learning to type on a keyboard. “In the beginning, you type using just your index fingers because that's easy to do. But when you type a lot, then you need to be able to move all your fingers at the same time, independently,” says Petri. “When you learn one finger at a time you create a mental representation for each finger. When you try to use the same representation to use two fingers for each hand it fails because you have cognitive inputs going to the same mental representation and being mixed-up and confused.”

Their work suggests that there is a fundamental tradeoff between ease of learning and multitasking capacity that the brain must negotiate. “Sharing representations improves your generalization capacity and also you learn faster,” says Petri. “Of course it's a good thing for you to do that as long as you need to perform the task alone because the representation becomes richer and more powerful and can be shared by multiple tasks. But then comes the time when you have to do two things at the same time, using the same representation, and then you fail.”

Recent work in cognitive psychology has illustrated how these representational conflicts impact the ability of artificial neural systems to perform computations in parallel, but Petri and Musslick wanted to take this analysis a step further. To do so, they and their colleagues developed formal analyses based on the mathematics of graphs — mathematical objects composed of nodes and the connections between them — to understand just how damaging shared representations can be to multitasking ability. Developing these analyses was critical to understanding how shared representations impact multitasking in systems, like the brain, that have a very large number of resources. “Is this really a problem? — because the brain is very, very large,” says Petri.

A crucial insight from their work was recognizing that the maximum number of tasks a system, biological or artificial, can perform simultaneously can be gleaned from a well-studied, although difficult to compute, quantity from graph theory called the maximum independent set (MIS). Imagine a set of islands connected by bridges, where each island represents a task the system must perform and each bridge represents a dependency, and thus a potential conflict, between two tasks. This is precisely the kind of graph Petri and Musslick studied, where each island is called a node and each bridge is called an edge. The maximum independent set is the largest possible subset of islands such that no two islands within the subset are directly connected to each other by a bridge. In other words, it is the largest set of tasks no two of which rely on a shared resource.
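To make the idea concrete, here is a minimal Python sketch (our own illustration, not code from the study; the tasks and conflicts are invented to echo the article's opening examples) that finds the MIS of a small ‘island’ graph by exhaustive search:

```python
from itertools import combinations

# A toy 'islands and bridges' graph: each node is a task, and each edge
# marks a pair of tasks that depend on a shared mental resource.
# (Hypothetical example, chosen to echo the article's opening scenarios.)
nodes = ["eat", "talk", "listen", "breathe", "write", "arithmetic"]
edges = {("write", "arithmetic"), ("talk", "write")}

def conflict(a, b):
    """True if tasks a and b are joined by a bridge (share a resource)."""
    return (a, b) in edges or (b, a) in edges

def maximum_independent_set(nodes):
    """Largest subset of tasks containing no conflicting pair, found by
    exhaustive search over subsets, biggest first. The search is
    exponential in the number of nodes, which is why the MIS is
    'difficult to compute' for large graphs."""
    for k in range(len(nodes), 0, -1):
        for subset in combinations(nodes, k):
            if all(not conflict(a, b) for a, b in combinations(subset, 2)):
                return set(subset)
    return set()

print(maximum_independent_set(nodes))
# {'eat', 'talk', 'listen', 'breathe', 'arithmetic'}: five tasks can run
# at once, but writing conflicts with both talking and arithmetic.
```

With six nodes this search is instant; for large graphs, finding the MIS is an NP-hard problem, which is part of what makes the quantity so difficult to compute.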

The blue nodes comprise the maximum independent set (MIS) of this graph. The number of blue nodes — the cardinality of the MIS — is 9, which is the maximum number of tasks that a system sharing resources according to this graph's edges can perform simultaneously. Credit: Life of Riley - Own work, GFDL, https://commons.wikimedia.org/w/index.php?curid=8321640


Using this insight, Petri and Musslick showed that artificial neural networks trained to perform multiple tasks while sharing their computational resources hit a performance limit at exactly the number of tasks predicted by the MIS of a graph equivalent to the network. This equivalence allowed Petri to forgo the cumbersome task of actually constructing and training artificial neural networks in order to understand these limitations, and instead to study the mathematical properties of their equivalent graphs. Moving away from specific instantiations of neural networks and into the space of the mathematics of graphs allowed Petri and Musslick to study how resource sharing impacts multitasking performance in very large systems, such as the brain, which has access to a vast number of neurons.
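The flavor of this equivalence can be sketched in a few lines (again a hypothetical illustration of the mapping, not the paper's exact construction): describe each task by the set of internal representations it relies on, draw an edge between any two tasks whose sets overlap, and read the parallel-processing limit off the resulting graph's MIS.

```python
from itertools import combinations

# Hypothetical Stroop-like task battery: each task is described by the
# shared internal representations it uses (a stand-in for the structure
# a trained network might develop; not data from the paper).
task_resources = {
    "name_color": {"color_rep"},
    "read_word":  {"word_rep"},
    "say_color":  {"color_rep", "speech_rep"},
    "say_word":   {"word_rep", "speech_rep"},
    "press_key":  {"motor_rep"},
}

# Interference graph: an edge joins any two tasks whose representation
# sets overlap. These edges are the 'bridges' of the island analogy.
edges = {
    (a, b)
    for a, b in combinations(task_resources, 2)
    if task_resources[a] & task_resources[b]
}

def mis_size(tasks, edges):
    """Cardinality of the maximum independent set, by brute force."""
    tasks = list(tasks)
    for k in range(len(tasks), 0, -1):
        for subset in combinations(tasks, k):
            if all((a, b) not in edges and (b, a) not in edges
                   for a, b in combinations(subset, 2)):
                return k
    return 0

print(mis_size(task_resources, edges))
# 3: at most three of these five tasks (for example name_color, read_word,
# and press_key) can run without touching a shared representation.
```

The payoff of the equivalence is that questions about network behavior become questions about graph structure, which can be analyzed for systems far larger than any network one could feasibly train.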

Using their approach, they next asked a more practical question: although the MIS can be informative about the maximum number of tasks that can be performed simultaneously, can we understand how well a system that shares resources performs, on average, when confronted with a more standard battery of tasks? In other words, the MIS can tell us the best a system can do, but how does a resource-sharing system typically perform? For example, while it may be true that you can talk, listen, eat, and breathe at the same time, performing four tasks simultaneously may simply be an outlier, whereas your more ‘standard’ multitasking capacity is two or three tasks. 

Using similar graph-theoretic approaches, Petri and Musslick showed that, unfortunately, the answer is not encouraging. Even for typical graphs that exhibit only a modest level of resource sharing, a resource-sharing system's capacity to multitask quickly saturates at a level far below the theoretical maximum set by the MIS. In other words, even modest representation sharing imposes strong constraints on the number of tasks that can be performed simultaneously. What's worse, this capacity does not improve as more resources (neurons, in the context of the brain) are added to the system. The limit, therefore, is not the number of neurons, but the resource-sharing scheme itself. “What this is telling us is that even in a system that is very large, you don't really get that much extra parallel capacity by adding more neurons. Even though, for a very specific set of tasks, you can do very well, in general you do pretty badly,” says Petri.
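A toy simulation conveys the qualitative point (our own illustration with assumed parameters, not the paper's calculation): sample random interference graphs in which any two tasks conflict with some fixed probability, and count how many mutually compatible tasks a simple greedy pass can schedule.

```python
import random

def greedy_capacity(n_tasks, p_conflict, rng):
    """Build a random interference graph in which each pair of tasks
    conflicts independently with probability p_conflict, then greedily
    collect tasks that conflict with nothing already chosen. Greedy
    selection lower-bounds the true MIS but reflects 'typical' capacity."""
    edges = set()
    for i in range(n_tasks):
        for j in range(i + 1, n_tasks):
            if rng.random() < p_conflict:
                edges.add((i, j))
    chosen = []
    for v in range(n_tasks):
        if all((min(u, v), max(u, v)) not in edges for u in chosen):
            chosen.append(v)
    return len(chosen)

rng = random.Random(0)
for n in (10, 100, 1000):
    trials = [greedy_capacity(n, 0.3, rng) for _ in range(20)]
    print(n, sum(trials) / len(trials))
# Average parallel capacity grows only logarithmically with n: making the
# system 100 times larger adds just a handful of simultaneous tasks.
```

Growing the task set, and with it the pool of resources, a hundredfold buys only a few extra parallel slots in this toy setting, mirroring the saturation the authors report.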

The limit of multitasking capacity (the MIS), shown as boxes for graphs of different sizes (different colors), grows slowly as the number of tasks attempted by a single system grows. However, the multitasking capacity for a ‘typical’ battery of tasks (colored lines) grows even more slowly, and this growth is the same irrespective of the size of the system (different colors). Adapted from Petri, Musslick, et al. 2021.


Petri and Musslick’s results have profound consequences for the design of artificial intelligence systems, which need to strike a balance between the learning benefits of shared representations and the parallel-processing capacity those representations sacrifice, as well as for human cognitive strategies for learning and task performance. “Hopefully, one day, we should be able to do this also on data. So if I do this on people that are in an fMRI machine where I’m asking them to perform multiple tasks, I don’t know what the task structure is, in truth. And I hope to be able to estimate it from the data,” says Petri.

Petri and Musslick’s approach is an exciting springboard to consider the much broader question of the benefits and drawbacks of developing shared representations. Although representation sharing can benefit the speed of learning, it can drastically limit multitasking performance. Under what scenarios might it be preferable to not share task representations, at the cost of slower learning, and vice versa? “There are people that can count and do a grocery list at the same time,” says Petri. “If you train, you can do it. But the point is, is it worth it?”

by Brian DePasquale