1. Recurrent Neural Networks (RNN)

Why RNN Came as an Idea

Traditional neural networks treat each input independently. They cannot handle sequences where the order of inputs matters.

Examples: sentences, time-series, audio signals.

RNNs were introduced to allow memory over time, letting the model use previous information while processing the current input.

What Problem RNN Solves

RNNs solve the problem of sequential dependency, allowing the model to capture context from earlier elements of a sequence when processing later ones.

How RNN Solves the Problem

It introduces a recurrent connection between time steps that carries hidden state information forward.
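The recurrent connection can be sketched as a plain Python loop in which a hidden state is carried from one step to the next. The `combine` function below is a hypothetical stand-in for the learned transformation, not part of the original notes:

```python
def combine(h_prev, x):
    # Placeholder for the learned update, e.g. tanh(W_hh @ h_prev + W_xh @ x + b).
    return 0.5 * h_prev + x

def run_sequence(xs, h0=0.0):
    h = h0
    hidden_states = []
    for x in xs:
        # The recurrent connection: the previous hidden state h
        # is combined with the current input x at every step.
        h = combine(h, x)
        hidden_states.append(h)
    return hidden_states

run_sequence([1.0, 2.0, 3.0])  # → [1.0, 2.5, 4.25]
```

The key point is that `h` is the only thing passed between time steps: it summarizes everything the model has seen so far.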

RNN Equations

h_t = tanh(W_xh · x_t + W_hh · h_{t-1} + b_h)
y_t = W_hy · h_t + b_y

Here x_t is the input, h_t the hidden state, and y_t the output at time step t; W_xh, W_hh, W_hy are weight matrices and b_h, b_y are biases.

RNN Architecture

x1 →[RNN]→ h1 → y1
      ↓
x2 →[RNN]→ h2 → y2
      ↓
x3 →[RNN]→ h3 → y3
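
The unrolled computation in the diagram can be sketched in NumPy, assuming the standard vanilla-RNN update h_t = tanh(W_xh · x_t + W_hh · h_{t-1} + b_h) and output y_t = W_hy · h_t + b_y. All weight names, dimensions, and the random initialization are illustrative, not from the original notes:

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim, output_dim = 4, 3, 2

# Illustrative, randomly initialized parameters.
W_xh = rng.normal(size=(hidden_dim, input_dim)) * 0.1
W_hh = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
W_hy = rng.normal(size=(output_dim, hidden_dim)) * 0.1
b_h = np.zeros(hidden_dim)
b_y = np.zeros(output_dim)

def rnn_forward(xs):
    """Run the cell over a sequence; return hidden states and outputs."""
    h = np.zeros(hidden_dim)                      # h_0
    hs, ys = [], []
    for x in xs:                                  # unrolled over time
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)    # hidden state flows forward
        y = W_hy @ h + b_y
        hs.append(h)
        ys.append(y)
    return hs, ys

xs = [rng.normal(size=input_dim) for _ in range(3)]   # x1, x2, x3
hs, ys = rnn_forward(xs)
```

Note that the same weight matrices are reused at every time step; only the hidden state changes as the sequence is processed.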


2. Self RNN (Vanilla RNN Recurrence)

A “self-RNN” is essentially the self-loop inside the RNN cell that allows the hidden state to pass from one time step to the next.

Problem It Solves

Keeps track of context over time by feeding the hidden state back into itself.
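
The feedback of the hidden state into the cell can be illustrated with a small stateful object. `StatefulCell` and its update rule are hypothetical simplifications, not a trained model:

```python
class StatefulCell:
    def __init__(self):
        self.h = 0.0                # hidden state persists between calls

    def step(self, x):
        # self.h on the right-hand side is the self-loop:
        # the previous state re-enters its own update.
        self.h = 0.5 * self.h + x
        return self.h

cell = StatefulCell()
outs = [cell.step(x) for x in (1.0, 1.0, 1.0)]  # → [1.0, 1.5, 1.75]
```

Even though the input is the same at every step, the output changes, because the cell's own past state feeds back into each update.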

How It Works