Human-level concept learning through probabilistic program induction

Brenden Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum

Research output: Contribution to journal › Article

Abstract

People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms: for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several "visual Turing tests" probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.
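As a schematic gloss on the Bayesian criterion mentioned in the abstract, one-shot classification can be read as picking the class whose induced program best explains the test image. The decision rule below is a simplified paraphrase in our own notation, not the paper's exact formulation.

% Illustrative one-shot classification rule (our notation, not taken from the paper).
% I^(T) is the test image, I^(c) is the single training example for class c,
% and psi ranges over candidate generative programs induced from I^(c).
\[
  \hat{c} \;=\; \arg\max_{c}\; P\bigl(I^{(T)} \mid I^{(c)}\bigr)
  \;\approx\; \arg\max_{c}\; \sum_{\psi} P\bigl(I^{(T)} \mid \psi\bigr)\, P\bigl(\psi \mid I^{(c)}\bigr)
\]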

Original language: English (US)
Pages (from-to): 1332-1338
Number of pages: 7
Journal: Science
Volume: 350
Issue number: 6266
DOIs: https://doi.org/10.1126/science.aab3050
State: Published - Dec 11 2015

Fingerprint

  • Learning
  • Imagination
  • Aptitude
  • Creativity
  • Machine Learning

ASJC Scopus subject areas

  • General

Cite this

Human-level concept learning through probabilistic program induction. / Lake, Brenden; Salakhutdinov, Ruslan; Tenenbaum, Joshua B.

In: Science, Vol. 350, No. 6266, 11.12.2015, p. 1332-1338.

Research output: Contribution to journal › Article

Lake, B, Salakhutdinov, R & Tenenbaum, JB 2015, 'Human-level concept learning through probabilistic program induction', Science, vol. 350, no. 6266, pp. 1332-1338. https://doi.org/10.1126/science.aab3050
Lake, Brenden; Salakhutdinov, Ruslan; Tenenbaum, Joshua B. / Human-level concept learning through probabilistic program induction. In: Science. 2015; Vol. 350, No. 6266. pp. 1332-1338.
@article{a855d9fc35fa423ab5022fec85de0d6e,
title = "Human-level concept learning through probabilistic program induction",
abstract = "People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms-for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches.We also present several {"}visual Turing tests{"} probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.",
author = "Brenden Lake and Ruslan Salakhutdinov and Tenenbaum, {Joshua B.}",
year = "2015",
month = "12",
day = "11",
doi = "10.1126/science.aab3050",
language = "English (US)",
volume = "350",
pages = "1332--1338",
journal = "Science",
issn = "0036-8075",
publisher = "American Association for the Advancement of Science",
number = "6266",

}
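For convenience, a minimal LaTeX sketch showing one way the BibTeX entry above could be cited; the file name refs.bib and the bibliography style are illustrative choices, not part of the record.

% Assumes the entry above is saved in an illustrative file named refs.bib.
\documentclass{article}
\begin{document}
Handwritten characters can be learned from a single example
by probabilistic program induction \cite{a855d9fc35fa423ab5022fec85de0d6e}.
\bibliographystyle{plain}
\bibliography{refs}
\end{document}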

TY - JOUR

T1 - Human-level concept learning through probabilistic program induction

AU - Lake, Brenden

AU - Salakhutdinov, Ruslan

AU - Tenenbaum, Joshua B.

PY - 2015/12/11

Y1 - 2015/12/11

N2 - People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms: for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several "visual Turing tests" probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.

AB - People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms: for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several "visual Turing tests" probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.

UR - http://www.scopus.com/inward/record.url?scp=84949683101&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84949683101&partnerID=8YFLogxK

U2 - 10.1126/science.aab3050

DO - 10.1126/science.aab3050

M3 - Article

VL - 350

SP - 1332

EP - 1338

JO - Science

JF - Science

SN - 0036-8075

IS - 6266

ER -