Show, attend and tell

Neural image caption generation with visual attention

Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
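
The deterministic ("soft") attention the abstract refers to can be sketched as a softmax-weighted average over spatial CNN features, recomputed at every word of the output sequence. Below is a minimal NumPy sketch of that idea, not the authors' implementation; the function name, the additive scoring form, and all dimensions (196 locations, 512-d annotations, 256-d hidden state) are illustrative assumptions.

# Minimal sketch (illustrative, not the authors' code) of deterministic soft attention:
# the decoder scores every spatial image feature against its hidden state, normalizes
# the scores with a softmax, and conditions the next word on the weighted average.
import numpy as np

def soft_attention(annotations, hidden, W_a, W_h, v):
    """Return attention weights over image locations and the resulting context vector.

    annotations: (L, D) array of L spatial feature vectors from a CNN.
    hidden:      (H,) decoder hidden state from the previous step.
    W_a: (D, K), W_h: (H, K), v: (K,) -- learned projection parameters (assumed shapes).
    """
    scores = np.tanh(annotations @ W_a + hidden @ W_h) @ v   # (L,) unnormalized scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                     # softmax over locations
    context = alpha @ annotations                            # (D,) expected feature vector
    return alpha, context

# Toy usage: a 14x14 grid of 512-d annotation vectors, 256-d hidden state.
rng = np.random.default_rng(0)
L, D, H, K = 196, 512, 256, 128
alpha, ctx = soft_attention(rng.standard_normal((L, D)),
                            rng.standard_normal(H),
                            0.01 * rng.standard_normal((D, K)),
                            0.01 * rng.standard_normal((H, K)),
                            0.01 * rng.standard_normal(K))
print(alpha.shape, ctx.shape)  # (196,) (512,)

Because this weighted average is differentiable, the whole model can be trained with standard backpropagation; the stochastic ("hard") variant instead samples a single location from alpha and is trained by maximizing a variational lower bound.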

Original language: English (US)
Title of host publication: 32nd International Conference on Machine Learning, ICML 2015
Publisher: International Machine Learning Society (IMLS)
Pages: 2048-2057
Number of pages: 10
Volume: 3
ISBN (Print): 9781510810587
State: Published - 2015
Event: 32nd International Conference on Machine Learning, ICML 2015 - Lille, France
Duration: Jul 6 2015 – Jul 11 2015

Other

Other: 32nd International Conference on Machine Learning, ICML 2015
Country: France
City: Lille
Period: 7/6/15 – 7/11/15

Fingerprint

  • Backpropagation
  • Visualization
  • Object detection

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Science Applications

Cite this

Xu, K., Ba, J. L., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., ... Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In 32nd International Conference on Machine Learning, ICML 2015 (Vol. 3, pp. 2048-2057). International Machine Learning Society (IMLS).

Show, attend and tell : Neural image caption generation with visual attention. / Xu, Kelvin; Ba, Jimmy Lei; Kiros, Ryan; Cho, Kyunghyun; Courville, Aaron; Salakhutdinov, Ruslan; Zemel, Richard S.; Bengio, Yoshua.

32nd International Conference on Machine Learning, ICML 2015. Vol. 3 International Machine Learning Society (IMLS), 2015. p. 2048-2057.

Research output: Chapter in Book/Report/Conference proceedingConference contribution

Xu, K, Ba, JL, Kiros, R, Cho, K, Courville, A, Salakhutdinov, R, Zemel, RS & Bengio, Y 2015, Show, attend and tell: Neural image caption generation with visual attention. in 32nd International Conference on Machine Learning, ICML 2015. vol. 3, International Machine Learning Society (IMLS), pp. 2048-2057, 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 7/6/15.
Xu K, Ba JL, Kiros R, Cho K, Courville A, Salakhutdinov R et al. Show, attend and tell: Neural image caption generation with visual attention. In 32nd International Conference on Machine Learning, ICML 2015. Vol. 3. International Machine Learning Society (IMLS). 2015. p. 2048-2057
Xu, Kelvin ; Ba, Jimmy Lei ; Kiros, Ryan ; Cho, Kyunghyun ; Courville, Aaron ; Salakhutdinov, Ruslan ; Zemel, Richard S. ; Bengio, Yoshua. / Show, attend and tell : Neural image caption generation with visual attention. 32nd International Conference on Machine Learning, ICML 2015. Vol. 3 International Machine Learning Society (IMLS), 2015. pp. 2048-2057
@inproceedings{da67e5600dce464c847ee22571319780,
title = "Show, attend and tell: Neural image caption generation with visual attention",
abstract = "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
author = "Kelvin Xu and Ba, {Jimmy Lei} and Ryan Kiros and Kyunghyun Cho and Aaron Courville and Ruslan Salakhutdinov and Zemel, {Richard S.} and Yoshua Bengio",
year = "2015",
language = "English (US)",
isbn = "9781510810587",
volume = "3",
pages = "2048--2057",
booktitle = "32nd International Conference on Machine Learning, ICML 2015",
publisher = "International Machine Learning Society (IMLS)",

}

TY - GEN

T1 - Show, attend and tell

T2 - Neural image caption generation with visual attention

AU - Xu, Kelvin

AU - Ba, Jimmy Lei

AU - Kiros, Ryan

AU - Cho, Kyunghyun

AU - Courville, Aaron

AU - Salakhutdinov, Ruslan

AU - Zemel, Richard S.

AU - Bengio, Yoshua

PY - 2015

Y1 - 2015

N2 - Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.

AB - Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.

UR - http://www.scopus.com/inward/record.url?scp=84970002232&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84970002232&partnerID=8YFLogxK

M3 - Conference contribution

SN - 9781510810587

VL - 3

SP - 2048

EP - 2057

BT - 32nd International Conference on Machine Learning, ICML 2015

PB - International Machine Learning Society (IMLS)

ER -