Traditional feed-forward neural networks treat each input independently, so they cannot handle sequences where the order of inputs matters.
Examples: sentences, time series, audio signals.
RNNs were introduced to allow memory over time, letting the model use previous information while processing the current input.
An RNN solves the problem of sequential dependency by letting the model capture context from previous elements in the sequence.
It introduces a recurrent connection between time steps that carries hidden state information forward.
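One common way to write this recurrence is the standard Elman-style update (the weight and bias names below are illustrative, not taken from the original text); the unrolled diagram below then shows the same flow over three time steps:

```latex
h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h)
y_t = W_{hy} h_t + b_y
```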
x1 →[RNN]→ h1 → y1
      │ h1
      ▼
x2 →[RNN]→ h2 → y2
      │ h2
      ▼
x3 →[RNN]→ h3 → y3
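The loop below is a minimal NumPy sketch of the diagram above; the dimensions, weight names, and random inputs are assumptions for illustration, not part of the original text.

```python
import numpy as np

# Assumed toy dimensions, for illustration only.
input_size, hidden_size, output_size = 4, 8, 3
rng = np.random.default_rng(0)

# Shared weights, reused at every time step.
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
W_hy = rng.normal(scale=0.1, size=(output_size, hidden_size))
b_h = np.zeros(hidden_size)
b_y = np.zeros(output_size)

xs = [rng.normal(size=input_size) for _ in range(3)]  # x1, x2, x3
h = np.zeros(hidden_size)                             # initial hidden state

for t, x in enumerate(xs, start=1):
    # Recurrent connection: the previous hidden state enters the current step.
    h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    y = W_hy @ h + b_y
    print(f"step {t}: y{t} =", y)
```

The same weights are applied at every step; only the hidden state h changes as it is carried forward.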
A “self-RNN” is essentially the self-loop inside the RNN cell: the recurrent connection that passes the hidden state from one time step to the next.
By feeding its hidden state back into itself, the cell keeps track of context over time.
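To make the self-loop concrete, here is a small sketch of a stateful cell that feeds its own hidden state back into every update; the class and parameter names are assumptions for illustration.

```python
import numpy as np

class RNNCell:
    """Minimal stateful cell: the hidden state is fed back into itself."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        self.W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
        self.W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
        self.b_h = np.zeros(hidden_size)
        self.h = np.zeros(hidden_size)  # the self-loop state

    def step(self, x):
        # The cell's own previous hidden state re-enters the update.
        self.h = np.tanh(self.W_xh @ x + self.W_hh @ self.h + self.b_h)
        return self.h

cell = RNNCell(input_size=4, hidden_size=8)
for x in np.random.default_rng(1).normal(size=(3, 4)):  # x1, x2, x3
    h = cell.step(x)  # context from earlier steps persists in cell.h
```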