A Way out of the Odyssey: Analyzing and Combining Recent Insights for LSTMs

By Shayne Longpre

LSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of augmentations and modifications to LSTM networks that improve performance on text classification datasets. We observe compounding improvements over traditional LSTMs using Monte Carlo test-time model averaging, average pooling, and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high-quality baseline model.
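To make the three named modifications concrete, here is a minimal PyTorch sketch of a stacked-LSTM text classifier with residual connections between layers, average pooling over time, and Monte Carlo test-time averaging via dropout kept active at inference. This is an illustration under stated assumptions, not the authors' implementation: the class name `ResidualLSTMClassifier`, the helper `mc_predict`, and all layer sizes are hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualLSTMClassifier(nn.Module):
    """Stacked LSTM with residual connections between layers,
    mean pooling over time, and dropout usable for MC averaging."""

    def __init__(self, vocab_size, hidden_dim=300,
                 num_layers=2, num_classes=2, dropout=0.5):
        super().__init__()
        # Residual connections require matching dimensions, so the
        # embedding size and every layer's hidden size are all hidden_dim.
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.layers = nn.ModuleList(
            nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
            for _ in range(num_layers)
        )
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):
        x = self.embed(tokens)          # (batch, time, hidden_dim)
        for lstm in self.layers:
            out, _ = lstm(self.dropout(x))
            x = x + out                 # residual connection around the layer
        pooled = x.mean(dim=1)          # average pooling over time steps
        return self.fc(self.dropout(pooled))


def mc_predict(model, tokens, n_samples=10):
    """Monte Carlo test-time model averaging: keep dropout active at
    inference and average class probabilities over stochastic passes."""
    model.train()  # leaves dropout on; no gradients are computed below
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(tokens), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0)
```

At evaluation time, `mc_predict` would be compared against a single deterministic pass (with `model.eval()`); averaging several dropout-perturbed passes is what the abstract refers to as Monte Carlo test-time model averaging.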

Citation credit

If you reference this paper in published work, please cite:

Shayne Longpre, Sabeek Pradhan, Caiming Xiong, and Richard Socher. 2016. A Way out of the Odyssey: Analyzing and Combining Recent Insights for LSTMs. arXiv preprint arXiv:1611.05104.