States versus rewards: Dissociable neural prediction error signals underlying model-based and model-free reinforcement learning

Jan Gläscher, Nathaniel Daw, Peter Dayan, John P. O'Doherty

Research output: Contribution to journal › Article

Abstract

Reinforcement learning (RL) uses sequential experience with situations ("states") and outcomes to assess actions. Whereas model-free RL uses this experience directly, in the form of a reward prediction error (RPE), model-based RL uses it indirectly, building a model of the state transition and outcome structure of the environment, and evaluating actions by searching this model. A state prediction error (SPE) plays a central role, reporting discrepancies between the current model and the observed state transitions. Using functional magnetic resonance imaging in humans solving a probabilistic Markov decision task, we found the neural signature of an SPE in the intraparietal sulcus and lateral prefrontal cortex, in addition to the previously well-characterized RPE in the ventral striatum. This finding supports the existence of two unique forms of learning signal in humans, which may form the basis of distinct computational strategies for guiding behavior.
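The abstract contrasts two learning signals: an RPE that updates values directly from experienced rewards, and an SPE that updates a learned state-transition model. The sketch below is a rough illustration of how the two signals are commonly formalized; the variable names, learning rates, and exact update rules are assumptions made for this example, not the paper's fitted computational model.

```python
import numpy as np

# Illustrative sketch (not the authors' exact model) of the two learning signals:
# an RPE that drives model-free value updates, and an SPE that drives updates to
# an estimated transition model T(s, a, s').

n_states, n_actions = 5, 2
alpha_v, alpha_t, gamma = 0.1, 0.2, 0.9   # hypothetical learning rates and discount factor

V = np.zeros(n_states)                                         # model-free state values
T = np.full((n_states, n_actions, n_states), 1.0 / n_states)   # transition-model estimate

def model_free_update(s, r, s_next):
    """Temporal-difference update driven by the reward prediction error (RPE)."""
    rpe = r + gamma * V[s_next] - V[s]    # RPE: obtained outcome vs. current expectation
    V[s] += alpha_v * rpe
    return rpe

def model_based_update(s, a, s_next):
    """Transition-model update driven by the state prediction error (SPE)."""
    spe = 1.0 - T[s, a, s_next]           # SPE: surprise about the observed successor state
    T[s, a] *= (1.0 - alpha_t)            # decay probability mass over all successors
    T[s, a, s_next] += alpha_t            # shift mass toward the state actually observed
    return spe

# Example transition: action 0 in state 0 leads to state 2 and yields reward 1
print(model_free_update(0, 1.0, 2))   # RPE
print(model_based_update(0, 0, 2))    # SPE
```

Note the dissociation this sketch is meant to convey: the RPE depends on the reward received, whereas the SPE depends only on how unexpected the observed successor state is under the current transition model.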

Original language: English (US)
Pages (from-to): 585-595
Number of pages: 11
Journal: Neuron
Volume: 66
Issue number: 4
DOI: 10.1016/j.neuron.2010.04.016
State: Published - May 2010

Fingerprint

  • Reward
  • Learning
  • Parietal Lobe
  • Prefrontal Cortex
  • Magnetic Resonance Imaging
  • Reinforcement (Psychology)

Keywords

  • Sysneuro

ASJC Scopus subject areas

  • Neuroscience (all)

Cite this

States versus rewards: Dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. / Gläscher, Jan; Daw, Nathaniel; Dayan, Peter; O'Doherty, John P.

In: Neuron, Vol. 66, No. 4, 05.2010, p. 585-595.

Gläscher, Jan; Daw, Nathaniel; Dayan, Peter; O'Doherty, John P. / States versus rewards: Dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. In: Neuron. 2010; Vol. 66, No. 4. pp. 585-595.