Abstract
This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several large-scale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results. Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF variants, and deep learning models such as word-based ConvNets and recurrent neural networks.
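As a rough illustration of the approach the abstract describes, the sketch below quantizes raw text into one-hot character frames and passes them through a small stack of temporal (1D) convolutions. The alphabet, input length, layer sizes, and the 4-class output (e.g. a dataset like AG's News) are assumptions chosen for brevity, not the configuration reported in the paper.

```python
# Minimal sketch of a character-level ConvNet for text classification.
# Illustrative only: alphabet, input length, and layer sizes are
# assumptions, not the architecture reported in the paper.
import torch
import torch.nn as nn

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 .,;:!?'\""
MAX_LEN = 256          # assumed fixed input length
NUM_CLASSES = 4        # e.g. a 4-class dataset such as AG's News

def quantize(text: str) -> torch.Tensor:
    """One-hot encode characters into a (len(ALPHABET), MAX_LEN) frame."""
    frame = torch.zeros(len(ALPHABET), MAX_LEN)
    for i, ch in enumerate(text.lower()[:MAX_LEN]):
        j = ALPHABET.find(ch)
        if j >= 0:
            frame[j, i] = 1.0
    return frame

class CharConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Temporal convolutions over the character dimension.
        self.features = nn.Sequential(
            nn.Conv1d(len(ALPHABET), 64, kernel_size=7), nn.ReLU(),
            nn.MaxPool1d(3),
            nn.Conv1d(64, 64, kernel_size=3), nn.ReLU(),
            nn.MaxPool1d(3),
        )
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            n = self.features(torch.zeros(1, len(ALPHABET), MAX_LEN)).numel()
        self.classifier = nn.Linear(n, NUM_CLASSES)

    def forward(self, x):              # x: (batch, alphabet, length)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = CharConvNet()
logits = model(quantize("convnets read text one character at a time").unsqueeze(0))
print(logits.shape)  # torch.Size([1, 4])
```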
Original language | English (US)
---|---
Title of host publication | Advances in Neural Information Processing Systems
Publisher | Neural information processing systems foundation
Pages | 649-657
Number of pages | 9
Volume | 2015-January
State | Published - 2015
Event | 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada; Duration: Dec 7 2015 → Dec 12 2015
Other
Other | 29th Annual Conference on Neural Information Processing Systems, NIPS 2015
---|---
Country | Canada
City | Montreal
Period | 12/7/15 → 12/12/15
Fingerprint
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Signal Processing
Cite this
Character-level convolutional networks for text classification. / Zhang, Xiang; Zhao, Junbo; LeCun, Yann.
Advances in Neural Information Processing Systems. Vol. 2015-January. Neural information processing systems foundation, 2015. p. 649-657.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - Character-level convolutional networks for text classification
AU - Zhang, Xiang
AU - Zhao, Junbo
AU - LeCun, Yann
PY - 2015
Y1 - 2015
N2 - This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several large-scale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results. Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF variants, and deep learning models such as word-based ConvNets and recurrent neural networks.
AB - This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several large-scale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results. Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF variants, and deep learning models such as word-based ConvNets and recurrent neural networks.
UR - http://www.scopus.com/inward/record.url?scp=84965162393&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84965162393&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84965162393
VL - 2015-January
SP - 649
EP - 657
BT - Advances in Neural Information Processing Systems
PB - Neural information processing systems foundation
ER -