From the course: Natural Language Processing (NLP) Fundamentals by Pearson

Recurrent neural networks

Recurrent neural networks. As we've seen so far, feedforward networks essentially provide a straight path from the input to the output. That's what we mean by feedforward: you're always feeding activations forward into the next layer. Recurrent neural networks, on the other hand, close the information-flow loop, allowing us to remember information we've seen before and act accordingly. It's essentially a way of keeping track of where we are within the sequence by remembering the previous output. In the simplest possible case, the recurrent neural network simply takes into account whatever the output was for the previous input. One direct consequence of this is that each output depends implicitly on all previous outputs. RNNs are particularly useful for modeling sequential data such as time series, audio, or streams of text. Input sequences typically generate output sequences, so these are often referred to as sequence-to-sequence models. In many applications, however,…
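The recurrence described above can be sketched in a few lines of NumPy. This is a minimal, hypothetical illustration (the weight names `W_xh`, `W_hh` and the dimensions are assumptions, not from the course): at each step the new hidden state mixes the current input with the previous state, so the final state implicitly depends on every input seen so far.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent update: combine current input with the previous state."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4  # toy sizes, chosen for illustration

W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden (the loop)
b_h = np.zeros(hidden_dim)

sequence = rng.normal(size=(5, input_dim))  # a toy sequence of 5 time steps
h = np.zeros(hidden_dim)                    # initial state: nothing remembered yet
states = []
for x_t in sequence:
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)   # state is carried forward each step
    states.append(h)
```

Unrolling the loop makes the key property visible: `states[4]` is a function of all five inputs, whereas a feedforward layer applied to `sequence[4]` alone would see only that one step.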