Areas of Research: Human and animal reinforcement learning and decision making
Learning has long been conceptualized as the formation of associations between stimuli, actions and outcomes, which can then guide decision making in the presence of similar stimuli. But how should we define these stimuli (also called states in reinforcement learning theory) in complex, real-world environments? Implementations of reinforcement learning, whether in a world-class backgammon player or in modeling the choices of a rat in a conditioning experiment, typically use specialized, hand-crafted state representations that are uniquely suited to the task at hand. But how do humans and animals craft task representations in naturalistic scenarios?
A main focus of our research is to elucidate the computational, cognitive and neural processes involved in learning task representations from experience. At the theoretical level, we are extending the framework of reinforcement learning so that it can flexibly adapt to—and take advantage of—the structure of the task at hand, making learning more efficient. At the cognitive level, we are bringing processes such as attention and memory, which can serve to organize perceptual inputs and to shape the boundaries of generalization, to bear on trial-and-error learning. At the neural level, we are investigating cortical and subcortical processes that shape the inputs on which model-free learning in the basal ganglia and model-based learning in the frontal cortex operate.
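To make the distinction concrete, the kind of model-free, trial-and-error learning described above can be illustrated with a minimal sketch: tabular Q-learning on a toy corridor task. This is not the lab's model—all names and parameters here (the 5-state corridor, learning rate, discount factor) are illustrative assumptions—but it shows how a hand-crafted state representation (here, simply the agent's discrete position) determines what the learning algorithm can generalize over.

```python
import random

# Illustrative sketch only (not the lab's model): tabular Q-learning on a
# tiny 1-D corridor. The hand-crafted state representation is the agent's
# position, an integer 0..4; reward 1 arrives on reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]                 # step left / step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed hyperparameters

def run_q_learning(n_episodes=500, seed=0):
    rng = random.Random(seed)
    # One value per (state, action) pair: the agent learns nothing beyond
    # this table, so the choice of state representation is everything.
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(n_episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection: trial and error
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s_next = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s_next == N_STATES - 1 else 0.0
            # temporal-difference (prediction-error) update, the quantity
            # classically linked to phasic dopamine signaling
            best_next = max(q[(s_next, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s_next
    return q

q = run_q_learning()
```

After a few hundred episodes, "right" is valued above "left" in every state, reflecting the discounted distance to reward. The hard problem the research above addresses is precisely what this sketch assumes away: where the discrete states come from in the first place.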
- SJ Gershman, A Radulescu, KA Norman & Y Niv (2014) – Statistical computations underlying the dynamics of memory updating – PLoS Computational Biology 10(11) e1003939
- FA Soto, SJ Gershman & Y Niv (2014) – Explaining compound generalization in associative and causal learning through rational principles of dimensional generalization – Psychological Review 121(3):526-558
- RC Wilson, YK Takahashi, G Schoenbaum* & Y Niv* (2014) – Orbitofrontal cortex encodes a cognitive map of task space – Neuron 81(2): 267-279
- SJ Gershman, CJ Jones, KA Norman, M-H Monfils & Y Niv (2013) – Gradual extinction prevents the return of fear: implications for the discovery of state – Frontiers in Behavioral Neuroscience 7:164
- SJ Gershman & Y Niv (2013) – Perceptual estimation obeys Occam’s razor – Frontiers in Psychology 4:623
- E Eldar, JD Cohen & Y Niv (2013) – The effects of neural gain on attention and learning – Nature Neuroscience 16:1146-1153
- Y Niv, J Edlund, P Dayan & JP O’Doherty (2012) – Neural prediction errors reveal a risk-sensitive reinforcement learning process in the human brain – The Journal of Neuroscience 32(2):551-562
- YK Takahashi, MR Roesch, RC Wilson, K Toreson, P O’Donnell, Y Niv* & G Schoenbaum* (2011) – Expectancy-related changes in firing of dopamine neurons depend on orbitofrontal cortex – Nature Neuroscience 14(12):1590-1597