Abstract
Change detection is a classic paradigm that has been used for decades to argue that working memory can hold no more than a fixed number of items ("item-limit models"). Recent findings force us to consider the alternative view that working memory is limited by the precision in stimulus encoding, with mean precision decreasing with increasing set size ("continuous-resource models"). Most previous studies that used the change detection paradigm have ignored effects of limited encoding precision by using highly discriminable stimuli and only large changes. We conducted two change detection experiments (orientation and color) in which change magnitudes were drawn from a wide range, including small changes. In a rigorous comparison of five models, we found no evidence of an item limit. Instead, human change detection performance was best explained by a continuous-resource model in which encoding precision is variable across items and trials even at a given set size. This model accounts for comparison errors in a principled, probabilistic manner. Our findings sharply challenge the theoretical basis for most neural studies of working memory capacity.
Original language | English (US)
---|---
Article number | e1002927
Journal | PLoS Computational Biology
Volume | 9
Issue number | 2
DOIs | https://doi.org/10.1371/journal.pcbi.1002927
State | Published - Feb 2013
ASJC Scopus subject areas
- Cellular and Molecular Neuroscience
- Ecology
- Molecular Biology
- Genetics
- Ecology, Evolution, Behavior and Systematics
- Modeling and Simulation
- Computational Theory and Mathematics
Cite this
No Evidence for an Item Limit in Change Detection. / Keshvari, Shaiyan; van den Berg, Ronald; Ma, Wei Ji.
In: PLoS Computational Biology, Vol. 9, No. 2, e1002927, 02.2013. Research output: Contribution to journal › Article
TY - JOUR
T1 - No Evidence for an Item Limit in Change Detection
AU - Keshvari, Shaiyan
AU - van den Berg, Ronald
AU - Ma, Wei Ji
PY - 2013/2
Y1 - 2013/2
N2 - Change detection is a classic paradigm that has been used for decades to argue that working memory can hold no more than a fixed number of items ("item-limit models"). Recent findings force us to consider the alternative view that working memory is limited by the precision in stimulus encoding, with mean precision decreasing with increasing set size ("continuous-resource models"). Most previous studies that used the change detection paradigm have ignored effects of limited encoding precision by using highly discriminable stimuli and only large changes. We conducted two change detection experiments (orientation and color) in which change magnitudes were drawn from a wide range, including small changes. In a rigorous comparison of five models, we found no evidence of an item limit. Instead, human change detection performance was best explained by a continuous-resource model in which encoding precision is variable across items and trials even at a given set size. This model accounts for comparison errors in a principled, probabilistic manner. Our findings sharply challenge the theoretical basis for most neural studies of working memory capacity.
UR - http://www.scopus.com/inward/record.url?scp=84874770550&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84874770550&partnerID=8YFLogxK
U2 - 10.1371/journal.pcbi.1002927
DO - 10.1371/journal.pcbi.1002927
M3 - Article
C2 - 23468613
AN - SCOPUS:84874770550
VL - 9
JO - PLoS Computational Biology
JF - PLoS Computational Biology
SN - 1553-734X
IS - 2
M1 - e1002927
ER -