On rectified linear units for speech processing
- 1 May 2013
- conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 15 (ISSN 1520-6149), pp. 3517-3521
- https://doi.org/10.1109/icassp.2013.6638312
Abstract
Deep neural networks have recently become the gold standard for acoustic modeling in speech recognition systems. The key computational unit of a deep network is a linear projection followed by a point-wise non-linearity, which is typically a logistic function. In this work, we show that we can improve generalization and make training of deep networks faster and simpler by substituting the logistic units with rectified linear units. These units are linear when their input is positive and zero otherwise. In a supervised setting, we can successfully train very deep nets from random initialization on a large vocabulary speech recognition task, achieving lower word error rates than a logistic network with the same topology. Similarly, in an unsupervised setting, we show how we can learn sparse features that are useful for discriminative tasks. All our experiments are executed in a distributed environment using several hundred machines and several hundred hours of speech data.
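As a concrete illustration of the unit described in the abstract, the following minimal sketch (not from the paper; the layer dimensions and variable names are illustrative assumptions) contrasts a logistic hidden layer with a rectified linear one, each computed as a linear projection followed by a point-wise non-linearity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only: a 440-dimensional acoustic feature vector
# projected to 2048 hidden units (the exact sizes here are assumptions).
x = rng.standard_normal(440)                 # input feature vector
W = rng.standard_normal((2048, 440)) * 0.01  # projection weights
b = np.zeros(2048)                           # biases

z = W @ x + b                                # linear projection

h_logistic = 1.0 / (1.0 + np.exp(-z))        # point-wise logistic non-linearity
h_relu = np.maximum(0.0, z)                  # rectified linear: identity for z > 0, zero otherwise

# Unlike the logistic layer, the rectified layer produces exact zeros,
# i.e. a sparse hidden representation.
print(f"fraction of exactly-zero ReLU activations: {np.mean(h_relu == 0.0):.2f}")
```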