PNI Innovator awards to fund new molecular and computational approaches

The Princeton Neuroscience Institute has granted innovator awards to two cutting-edge collaborative teams. Professors Gould and Buschman, together with the director of the PNI viral core, Dr. Huang, will use their award to develop innovative tools to expand the space of questions we can ask about perineuronal nets. Professors Daw and Witten will use their award to support innovative research at the computational level, testing a new comprehensive framework for modeling reward prediction error in the brain.

Cellular: Developing New Tools to Probe Perineuronal Net Function

While most of us know that neuronal function is important for behavior, we may be less aware of the role played by the space between neurons. Social recognition, nonsocial memory, and avoidance behavior are just some of the activities facilitated by the biochemical matrix that surrounds some neurons, called perineuronal nets (PNNs).

So far, almost everything we know about PNNs has been gleaned from methods that effectively destroy them. The most common method for studying PNNs is to infuse the brain with a degradative enzyme that transiently breaks down all PNNs (and the entire extracellular matrix, of which PNNs are a specialized part). However, while their constituent components are similar, intact PNNs differ along multiple dimensions (type and number of side chains, sulfation patterns, relation to electrophysiological and behavioral phenotypes, etc.) – differences that may be important determinants of function.

In their ambitious new line of research, Professors Gould and Buschman and Dr. Huang will create and test new viral tools that selectively increase or decrease the expression of PNN-associated genes in the brains of mice, specifically in the ventral CA1 region of the hippocampus. This will allow them to explore, for the first time, the ways that different molecular constituents of PNNs affect their function.

Computational: Testing Neural Circuit Models to Explain Dopaminergic Heterogeneity

The ventral tegmental area of the brain (VTA) has special neurons that fire in response to reward. Specifically, these neurons release dopamine whenever a reward is surprising. This can be a bad surprise, like when you expect to taste sweet chocolate milk but accidentally sip water (yuck), or a good surprise, like when you expect water but instead get sweet chocolate milk (yum).
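In reinforcement-learning terms, this surprise signal is called a reward prediction error: the difference between the reward received and the reward expected. A minimal sketch of that idea (the function name and numeric values here are purely illustrative, not taken from the researchers' work):

```python
def reward_prediction_error(expected_reward, received_reward):
    """Classic scalar RPE: positive when the outcome beats expectation,
    negative when it falls short, zero when it was fully predicted."""
    return received_reward - expected_reward

# Expecting water (low value) but getting chocolate milk (high value): positive surprise
print(reward_prediction_error(expected_reward=0.2, received_reward=1.0))  # 0.8
# Expecting chocolate milk but getting water: negative surprise
print(reward_prediction_error(expected_reward=1.0, received_reward=0.2))  # -0.8
```

On this account, dopamine neurons fire more than baseline for a positive error, less for a negative one, and stay near baseline when nothing is surprising.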

Models of this neuronal activity have traditionally assumed that the dopamine neurons convey a scalar, global signal. However, different neurons in the VTA respond differently to various aspects of reward tasks (experimental cues, variables, etc.). This led Professors Daw and Witten to develop a new model of VTA dopamine neuron activity (the Feature-Specific Reward Prediction Error model). In their model, a seemingly wide variety of dopamine neuron responses are modeled as a vector decomposition of the single, classic scalar reward prediction error signal. This makes it easier to extend models of dopamine neuron activity to more complicated real-world scenarios with many variables. But how does this new model fit with other findings suggesting that dopamine responses may reflect predictions about the specific stimuli themselves (as opposed to their associated reward)? What about evidence suggesting that different dopamine neurons actually learn different parts of a predicted reward distribution?
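To make the vector-decomposition idea concrete, here is a toy sketch assuming a linear value function over stimulus features. The equal-split credit scheme, function name, and numbers are our own illustration, not the actual Feature-Specific RPE model; the key property it demonstrates is that the per-feature error components sum back to the classic scalar signal:

```python
import numpy as np

def feature_specific_rpes(features, weights, reward):
    """Decompose a scalar RPE into one error component per feature.
    Each feature's component compares its share of the reward against
    its contribution to the value prediction (toy credit scheme)."""
    contributions = features * weights          # per-feature value estimates
    n = len(features)
    return reward / n - contributions           # one RPE component per feature

features = np.array([1.0, 1.0, 0.0])            # which stimulus features are present
weights  = np.array([0.4, 0.1, 0.7])            # learned value weight per feature
reward   = 1.0

vec = feature_specific_rpes(features, weights, reward)
scalar = reward - features @ weights            # classic global scalar RPE
assert np.isclose(vec.sum(), scalar)            # components recombine into the scalar
```

The heterogeneous-looking per-feature components are thus not noise: summed together, they recover the single global prediction error, which is the sense in which diverse neurons could jointly train one common reward prediction.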

Professors Daw and Witten will undertake experiments to clarify these points. First, they will conduct imaging experiments with mice in which the sensory aspects of the environment are held constant while the aspects that lead to reward change. This will dissociate dopamine neuron activity related to specific stimuli or stimulus features from activity related to the reward value of those stimuli. Second, they will use viral methods and 2-photon imaging to record projection-defined subpopulations of dopamine neurons in the VTA, to test whether heterogeneous dopamine signals within a projection can be explained by their feature-specific RPE model, consistent with these signals cooperating in training a common reward prediction.

This work will lay the groundwork for a comprehensive new framework for reward prediction error in the brain.

by Kirsten Ziman