Show, attend and tell: Neural image caption generation with visual attention

Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
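
As a rough illustration of the deterministic ("soft") attention variant the abstract mentions, the sketch below computes an attention-weighted context vector over a grid of image region features; the parameter names, shapes, and additive scoring function are illustrative assumptions, not the paper's exact formulation. The stochastic ("hard") variant would instead sample a single region from the attention distribution and train through the variational lower bound.

    import numpy as np

    def soft_attention(features, hidden, W_f, W_h, v):
        """Deterministic ('soft') attention: a differentiable weighted
        average of image region features, trainable end to end with
        standard backpropagation. (Illustrative sketch; parameter
        names are assumptions, not the authors' parameterization.)

        features: (L, D) annotation vectors, one per image region
        hidden:   (H,)   decoder hidden state
        W_f, W_h, v: learned projections
        """
        # Score each region against the current hidden state (additive form).
        scores = np.tanh(features @ W_f + hidden @ W_h) @ v   # (L,)
        # Softmax turns scores into weights over regions that sum to 1.
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()
        # Expected context vector: where the model "looks" for the next word.
        context = alpha @ features                            # (D,)
        return context, alpha

    # Toy usage: 196 regions (a 14x14 feature grid) of 512-d features.
    rng = np.random.default_rng(0)
    L, D, H, A = 196, 512, 256, 128
    ctx, alpha = soft_attention(
        rng.normal(size=(L, D)), rng.normal(size=(H,)),
        0.01 * rng.normal(size=(D, A)), 0.01 * rng.normal(size=(H, A)),
        0.01 * rng.normal(size=(A,)),
    )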

Original language: English (US)
Title of host publication: 32nd International Conference on Machine Learning, ICML 2015
Publisher: International Machine Learning Society (IMLS)
Pages: 2048-2057
Number of pages: 10
Volume: 3
ISBN (Print): 9781510810587
Publication status: Published - 2015
Event: 32nd International Conference on Machine Learning, ICML 2015 - Lille, France
Duration: Jul 6 2015 - Jul 11 2015

Other

Other: 32nd International Conference on Machine Learning, ICML 2015
Country: France
City: Lille
Period: 7/6/15 - 7/11/15

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Science Applications

Cite this

Xu, K., Ba, J. L., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., ... Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In 32nd International Conference on Machine Learning, ICML 2015 (Vol. 3, pp. 2048-2057). International Machine Learning Society (IMLS).