Describing Multimedia Content Using Attention-Based Encoder-Decoder Networks

Kyunghyun Cho, Aaron Courville, Yoshua Bengio

Research output: Contribution to journal › Article

Abstract

Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding into the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. In this paper we focus on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition. All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks, along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism.
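
As a concrete illustration of the mechanism the abstract describes, below is a minimal NumPy sketch of one soft (additive) attention step in the Bahdanau style that these encoder-decoder systems build on. The function name, dimensions, and toy data are illustrative assumptions, not code from the paper.

import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def additive_attention(s, H, W, U, v):
    # One soft attention step, assuming Bahdanau-style additive scoring:
    #   e_i = v^T tanh(W s + U h_i),  alpha = softmax(e),  c = sum_i alpha_i h_i
    # s : (d_s,)     current decoder state
    # H : (T, d_h)   encoder annotations, one row per input position
    # W : (d_a, d_s) projection of the decoder state
    # U : (d_a, d_h) projection of each annotation
    # v : (d_a,)     scoring vector
    e = np.tanh(s @ W.T + H @ U.T) @ v   # (T,) relevance score per input position
    alpha = softmax(e)                   # attention weights, sum to 1 over positions
    c = alpha @ H                        # context vector: weighted sum of annotations
    return alpha, c

# Toy usage with random parameters (shapes are arbitrary choices).
rng = np.random.default_rng(0)
T, d_h, d_s, d_a = 5, 8, 6, 4
H = rng.standard_normal((T, d_h))
s = rng.standard_normal(d_s)
W = rng.standard_normal((d_a, d_s))
U = rng.standard_normal((d_a, d_h))
v = rng.standard_normal(d_a)
alpha, c = additive_attention(s, H, W, U, v)
print(alpha)  # the "where to look" distribution over the 5 input positions

At each decoding step, the context vector c is combined with the previous output and fed into the gated recurrent decoder that emits the next element of the output sequence, so the model attends to a different part of the input for each output element.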

Original language: English (US)
Article number: 7243334
Pages (from-to): 1875-1886
Number of pages: 12
Journal: IEEE Transactions on Multimedia
Volume: 17
Issue number: 11
DOIs: 10.1109/TMM.2015.2477044
State: Published - Nov 1 2015

Fingerprint

  • Recurrent neural networks
  • Speech recognition
  • Random variables
  • Neural networks
  • Deep neural networks

Keywords

  • Attention mechanism
  • deep learning
  • recurrent neural networks

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Signal Processing
  • Media Technology
  • Computer Science Applications

Cite this

Describing Multimedia Content Using Attention-Based Encoder-Decoder Networks. / Cho, Kyunghyun; Courville, Aaron; Bengio, Yoshua.

In: IEEE Transactions on Multimedia, Vol. 17, No. 11, 7243334, 01.11.2015, p. 1875-1886.

@article{d9a0a446635842c98096ec7b55e22633,
title = "Describing Multimedia Content Using Attention-Based Encoder-Decoder Networks",
abstract = "Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding into the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. In this paper we focus on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition. All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks, along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism.",
keywords = "Attention mechanism, deep learning, recurrent neural networks",
author = "Kyunghyun Cho and Aaron Courville and Yoshua Bengio",
year = "2015",
month = "11",
day = "1",
doi = "10.1109/TMM.2015.2477044",
language = "English (US)",
volume = "17",
pages = "1875--1886",
journal = "IEEE Transactions on Multimedia",
issn = "1520-9210",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "11",
}

TY - JOUR

T1 - Describing Multimedia Content Using Attention-Based Encoder-Decoder Networks

AU - Cho, Kyunghyun

AU - Courville, Aaron

AU - Bengio, Yoshua

PY - 2015/11/1

Y1 - 2015/11/1

N2 - Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding into the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. In this paper we focus on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition. All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks, along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism.

AB - Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding into the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. In this paper we focus on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition. All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks, along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism.

KW - Attention mechanism

KW - deep learning

KW - recurrent neural networks

UR - http://www.scopus.com/inward/record.url?scp=84946763507&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84946763507&partnerID=8YFLogxK

U2 - 10.1109/TMM.2015.2477044

DO - 10.1109/TMM.2015.2477044

M3 - Article

VL - 17

SP - 1875

EP - 1886

JO - IEEE Transactions on Multimedia

JF - IEEE Transactions on Multimedia

SN - 1520-9210

IS - 11

M1 - 7243334

ER -