Fast convolutional nets with fbfft: A GPU performance evaluation

Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun

Research output: Contribution to conference › Paper

Abstract

We examine the performance profile of Convolutional Neural Network (CNN) training on the current generation of NVIDIA Graphics Processing Units (GPUs). We introduce two new Fast Fourier Transform convolution implementations: one based on NVIDIA’s cuFFT library, and another based on a Facebook-authored FFT implementation, fbfft, that provides significant speedups over cuFFT (over 1.5×) for whole CNNs. Both of these convolution implementations are available as open source, and are faster than NVIDIA’s cuDNN implementation for many common convolutional layers (up to 23.5× for a synthetic kernel configuration). We discuss different performance regimes of convolutions, comparing areas where straightforward time-domain convolutions outperform Fourier frequency-domain convolutions. Details on algorithmic applications of NVIDIA GPU hardware specifics in the implementation of fbfft are also provided.
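The FFT approach the abstract describes rests on the convolution theorem: pointwise multiplication of zero-padded spectra in the frequency domain equals linear convolution in the time domain, turning an O(N·K) operation into O(N log N). A minimal 1-D NumPy sketch of that equivalence (illustrative sizes only; this is not code from the paper or from fbfft):

```python
import numpy as np

# Illustrative sizes only; not taken from the paper.
rng = np.random.default_rng(0)
x = rng.random(16)   # input signal
k = rng.random(5)    # filter

# Time-domain path: direct linear convolution, O(N*K).
direct = np.convolve(x, k)               # length 16 + 5 - 1 = 20

# Frequency-domain path: zero-pad both to the output length,
# multiply spectra, invert. O(N log N), which wins as filters grow.
n = len(x) + len(k) - 1
fft_conv = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)

assert np.allclose(direct, fft_conv)
```

The trade-off the paper measures follows from this: for small kernels the padding and transform overhead dominates, so direct (time-domain) convolution wins; for larger kernels and batched CNN layers the frequency-domain path pulls ahead.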

Original language: English (US)
State: Published - Jan 1 2015
Event: 3rd International Conference on Learning Representations, ICLR 2015 - San Diego, United States
Duration: May 7 2015 – May 9 2015

Conference

Conference: 3rd International Conference on Learning Representations, ICLR 2015
Country: United States
City: San Diego
Period: 5/7/15 – 5/9/15


ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

Cite this

Vasilache, N., Johnson, J., Mathieu, M., Chintala, S., Piantino, S., & LeCun, Y. (2015). Fast convolutional nets with fbfft: A GPU performance evaluation. Paper presented at 3rd International Conference on Learning Representations, ICLR 2015, San Diego, United States.

