Gating through the richness of neural dynamics

A central problem in neuroscience is developing network models that mimic brain function in its full complexity, providing a bridge both to experimental neuroscience, where one can test hypotheses generated by those models, and to machine learning, where the aim is to develop algorithms that perform complex brain-like functions on large-scale data.

Kamesh Krishnamurthy is a C.V. Starr and CPBF fellow at Princeton University. Prior to Princeton, he did graduate work in Physics and Neuroscience at the University of Pennsylvania. He is a theorist with wide-ranging interests in problems at the intersection of neuroscience, biophysics, and machine learning. Along with his collaborators, he recently published a study presenting novel conceptual insights for neural networks. They extended recurrent neural network (RNN) models, which have gained widespread interest in recent years, to perform rich behavior with significant functional implications. In an elegant and elaborate theoretical treatment of state-of-the-art recurrent networks, the team showed that if one imbues classical RNNs with characteristics observed in single neurons, such as gating, or multiplicative interactions, the complexity of the tasks these networks can perform increases tremendously. From a machine-learning perspective, one would ultimately like to design and train neural networks in a principled manner rather than by trial and error or brute force. To be principled, one needs to understand how specific mechanisms influence the overall behavior of these networks. This is the task that Kamesh and his colleagues took upon themselves in their recent paper: to thoroughly understand modern RNNs and to give machine-learning practitioners a recipe for how to initialize their networks, a critical factor for performance.

Sensory gating is one of the mechanisms through which our brains interact with the environment. Because we live in a world with overwhelmingly rich dynamics, it is crucial to gate sensory information, letting through what is needed for survival and filtering out the rest. The same holds for memories: storing every event we are exposed to would lead to strong interference, unintended erasure, or the need for ever larger networks. It is therefore handy to have gating mechanisms in network models that provide these critical functions. Moreover, gating mechanisms have been observed down to the single-neuron level, where shunting provides a way for neurons to gate their inputs. Kamesh and his collaborators have shown that adding a gating mechanism to a neural network provides selective retention and erasure of memories in a context-dependent manner.
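As a rough illustration of what a multiplicative gate does (a minimal sketch in Python/NumPy, not the exact equations of the paper), consider a recurrent update in which a gate z, computed from the current state and input, decides how much of the old state is retained and how much is overwritten by a new candidate state:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gated_rnn_step(h, x, params):
        # One step of a toy gated RNN (illustrative only, not the paper's exact model).
        # The gate z acts multiplicatively: z near 1 retains the old state h,
        # z near 0 erases it and writes a new candidate state driven by the input x.
        Wz, Uz, W, U = params["Wz"], params["Uz"], params["W"], params["U"]
        z = sigmoid(Wz @ h + Uz @ x)             # context-dependent gate
        h_candidate = np.tanh(W @ h + U @ x)     # proposed new state
        return z * h + (1.0 - z) * h_candidate   # selective retention vs. erasure

    # Hypothetical sizes: N hidden units, M inputs, randomly drawn parameters.
    rng = np.random.default_rng(0)
    N, M = 100, 10
    params = {name: rng.standard_normal(shape) / np.sqrt(shape[1])
              for name, shape in [("Wz", (N, N)), ("Uz", (N, M)),
                                  ("W", (N, N)), ("U", (N, M))]}
    h = np.zeros(N)
    x = rng.standard_normal(M)
    h = gated_rnn_step(h, x, params)

With z near 1 the memory held in h is protected; with z near 0 it is erased and replaced. Because z itself depends on the input, this retention or erasure is context dependent.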

Intriguingly, recurrent neural networks with a gating mechanism can also act as robust integrators. Integration is a key concept in neuroscience: many neuronal processes have been shown to be underpinned by integration, for example different types of decision-making behaviors, from simple perceptual to social decision processes. Moreover, integration has been implicated in a variety of memory-type processes such as working memory and short-term memory.
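One simple way a gate can turn a forgetful unit into an integrator (again a toy sketch, not necessarily the mechanism analyzed in the paper) is by multiplicatively shutting off the leak of a leaky unit: when the gate saturates, the unit stops forgetting and simply accumulates its input.

    import numpy as np

    def gated_leaky_integrator(inputs, z, dt=0.01, tau=1.0):
        # Toy scalar unit whose leak is multiplicatively gated (illustrative only):
        #   dh/dt = ( -(1 - z) * h + u(t) ) / tau
        # With z close to 1 the leak vanishes and the unit accumulates (integrates)
        # its input; with z = 0 it is an ordinary leaky unit that forgets over tau.
        h, trace = 0.0, []
        for u in inputs:
            h += dt * (-(1.0 - z) * h + u) / tau
            trace.append(h)
        return np.array(trace)

    drive = np.ones(1000)                               # constant input
    accumulated = gated_leaky_integrator(drive, z=0.99)
    forgotten = gated_leaky_integrator(drive, z=0.0)
    print(accumulated[-1], forgotten[-1])               # the gated unit has built up far more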

Training recurrent neural networks to perform complex tasks akin to what the brain is doing is a rapidly emerging area. To date, training classical recurrent neural networks is seldom done in a principled way, and researchers have little control over the complexity of the tasks the networks can learn. By adding a gating mechanism, the study provides a knob that controls the dimensionality of the network dynamics, and hence a handle on which tasks can be learned, and when. The more dimensions the network has, the more complex the tasks it can be trained on. A similar allocation of resources happens in the brain when neuronal networks adjust their information-processing bandwidth to the task at hand.

Another hurdle faced by the machine-learning community when training recurrent neural networks is how to initialize them. Typically, classical RNNs achieve their best performance when they are initialized near the transition to chaotic dynamics. However, for modern RNNs with gating, the parameter values corresponding to such a transition were unknown, making the choice of initialization ad hoc. For their gated RNN model, Kamesh and his colleagues derived the full phase diagram, so that one can systematically identify the regime in which the network becomes chaotic and initialize it there, making these networks much easier to train.
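For classical (ungated) RNNs this transition is well characterized: with recurrent weights drawn as Gaussians of variance g^2/N, the autonomous dynamics of a tanh network become chaotic once the gain g exceeds roughly 1, so a common recipe is to initialize near that critical gain. The sketch below illustrates this classical recipe only; the corresponding critical surface for the gated model is exactly what the paper's phase diagram supplies and is not reproduced here.

    import numpy as np

    def init_recurrent_weights(n_units, gain=1.0, seed=0):
        # N x N recurrent weights with entries ~ N(0, gain^2 / N). For a classical
        # tanh RNN, gain ~ 1 marks the transition to chaos (gain > 1: chaotic,
        # gain < 1: activity decays). The gated model has its own phase boundary,
        # given by the paper's phase diagram and not reproduced here.
        rng = np.random.default_rng(seed)
        return gain * rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)

    # Initialize just above the classical transition and run the autonomous dynamics
    #   dh/dt = -h + J @ tanh(h)
    N = 500
    J = init_recurrent_weights(N, gain=1.2)
    h = 0.1 * np.random.default_rng(1).standard_normal(N)
    dt = 0.1
    for _ in range(2000):
        h = h + dt * (-h + J @ np.tanh(h))
    print("activity std after transient:", float(np.std(np.tanh(h))))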

“I would like to apply this RNN model to a variety of behavioral tasks and connect it to brain activity. Being at PNI and CPBF, there are a lot of experimental collaborations that I can establish to realize this,” Kamesh said in an interview.

This tight connection between theory and experiment that Kamesh mentioned, the ability to interact closely with experimentalists, test hypotheses, and refine theories, is at the heart of what theory does in neuroscience. We are at the cusp of a revolution in neuroscience, driven by the vast amount of data generated by labs across the world and by the growing interest of theoreticians and machine-learning researchers in making sense of this deluge of data. Approaches like the one developed by Kamesh and his collaborators will contribute significantly to those efforts.

by Ahmed El Hady