Embedding word similarity with neural machine translation

Felix Hill, Kyunghyun Cho, Sébastien Jean, Coline Devin, Yoshua Bengio

Research output: Contribution to conference › Paper

Abstract

Neural language models learn word representations, or embeddings, that capture rich linguistic and conceptual information. Here we investigate the embeddings learned by neural machine translation models, a recently-developed class of neural language model. We show that embeddings from translation models outperform those learned by monolingual models at tasks that require knowledge of both conceptual similarity and lexical-syntactic role. We further show that these effects hold when translating from both English to French and English to German, and argue that the desirable properties of translation embeddings should emerge largely independently of the source and target languages. Finally, we apply a new method for training neural translation models with very large vocabularies, and show that this vocabulary expansion algorithm results in minimal degradation of embedding quality. Our embedding spaces can be queried in an online demo and downloaded from our web page. Overall, our analyses indicate that translation-based embeddings should be used in applications that require concepts to be organised according to similarity and/or lexical function, while monolingual embeddings are better suited to modelling (nonspecific) inter-word relatedness.
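The abstract describes querying an embedding space so that concepts are organised by similarity. A minimal, hypothetical sketch of such a query, using cosine similarity to rank nearest neighbours (the words and vector values below are illustrative toy data, not taken from the paper's trained models):

```python
import math

# Toy embedding table standing in for vectors extracted from a trained
# translation model; the words and values are illustrative only.
embeddings = {
    "teacher":   [0.9, 0.1, 0.3],
    "professor": [0.8, 0.2, 0.3],
    "eat":       [0.1, 0.9, 0.7],
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(word, table):
    """Rank the other words in `table` by cosine similarity to `word`."""
    query = table[word]
    return sorted(
        ((w, cosine(query, v)) for w, v in table.items() if w != word),
        key=lambda pair: pair[1],
        reverse=True,
    )
```

With the toy table above, `nearest("teacher", embeddings)` ranks "professor" ahead of "eat", mirroring the kind of similarity judgement the paper evaluates embeddings on.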

Original language: English (US)
State: Published - Jan 1 2015
Event: 3rd International Conference on Learning Representations, ICLR 2015 - San Diego, United States
Duration: May 7 2015 – May 9 2015

Conference

Conference: 3rd International Conference on Learning Representations, ICLR 2015
Country: United States
City: San Diego
Period: 5/7/15 – 5/9/15


ASJC Scopus subject areas

  • Education
  • Linguistics and Language
  • Language and Linguistics
  • Computer Science Applications

Cite this

Hill, F., Cho, K., Jean, S., Devin, C., & Bengio, Y. (2015). Embedding word similarity with neural machine translation. Paper presented at 3rd International Conference on Learning Representations, ICLR 2015, San Diego, United States.
