End-to-end memory networks

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Robert Fergus

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network [23] but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch [2] to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering [22] and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.
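
The mechanism the abstract describes (soft attention reads over an external memory, repeated for several hops before an answer is produced) can be illustrated with a short sketch. The following is a minimal NumPy illustration, not the authors' code: it assumes bag-of-words sentence encodings and embedding matrices shared across all hops (the paper also studies adjacent and layer-wise weight tying), and names such as memory_hops, A, B, C, and W are chosen here for exposition only.

import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def memory_hops(story_bow, query_bow, A, B, C, W, n_hops=3):
    # story_bow: (n_sentences, vocab) bag-of-words vectors for the memory sentences
    # query_bow: (vocab,)             bag-of-words vector for the question
    # A, C:      (vocab, d)           input / output memory embedding matrices
    # B:         (vocab, d)           question embedding matrix
    # W:         (d, vocab)           final projection onto candidate answers
    u = query_bow @ B            # controller state
    m = story_bow @ A            # input memory representations
    c = story_bow @ C            # output memory representations
    for _ in range(n_hops):
        p = softmax(m @ u)       # soft attention weights over memory slots
        o = p @ c                # response: attention-weighted sum of output memories
        u = u + o                # update the controller state and hop again
    return softmax(u @ W)        # distribution over the answer vocabulary

# Tiny smoke test with random, untrained weights (shapes only).
rng = np.random.default_rng(0)
vocab, d, n_sent = 50, 20, 6
A = rng.normal(0, 0.1, (vocab, d))
B = rng.normal(0, 0.1, (vocab, d))
C = rng.normal(0, 0.1, (vocab, d))
W = rng.normal(0, 0.1, (d, vocab))
story = rng.integers(0, 2, (n_sent, vocab)).astype(float)
query = rng.integers(0, 2, vocab).astype(float)
print(memory_hops(story, query, A, B, C, W).shape)   # (50,)

With n_hops=1 this reduces to a single soft-attention read in the spirit of RNNsearch [2]; the abstract's point is that performing several such reads per output symbol is what yields the improved results.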

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems
Publisher: Neural information processing systems foundation
Pages: 2440-2448
Number of pages: 9
Volume: 2015-January
State: Published - 2015
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: Dec 7, 2015 - Dec 12, 2015

Other

Other: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015
Country: Canada
City: Montreal
Period: 12/7/15 - 12/12/15

Fingerprint

  • Data storage equipment
  • Neural networks

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Sukhbaatar, S., Szlam, A., Weston, J., & Fergus, R. (2015). End-to-end memory networks. In Advances in Neural Information Processing Systems (Vol. 2015-January, pp. 2440-2448). Neural information processing systems foundation.

@inproceedings{6ef97fabdf0a490884faa83519d7d596,
title = "End-to-end memory networks",
abstract = "We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network [23] but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch [2] to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering [22] and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.",
author = "Sainbayar Sukhbaatar and Arthur Szlam and Jason Weston and Robert Fergus",
year = "2015",
language = "English (US)",
volume = "2015-January",
pages = "2440--2448",
booktitle = "Advances in Neural Information Processing Systems",
publisher = "Neural information processing systems foundation",

}

TY - GEN
T1 - End-to-end memory networks
AU - Sukhbaatar, Sainbayar
AU - Szlam, Arthur
AU - Weston, Jason
AU - Fergus, Robert
PY - 2015
Y1 - 2015
N2 - We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network [23] but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch [2] to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering [22] and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.
AB - We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network [23] but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch [2] to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering [22] and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.
UR - http://www.scopus.com/inward/record.url?scp=84965143740&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84965143740&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84965143740
VL - 2015-January
SP - 2440
EP - 2448
BT - Advances in Neural Information Processing Systems
PB - Neural information processing systems foundation
ER -