HTTP/1.1 (1997) was designed for simple text pages. Today's web is far heavier: a single page can pull in hundreds of images, scripts, and API calls, and HTTP/1.1 is too slow for that load because of "Head-of-Line Blocking". HTTP/2 (2015) fixed the blocking at the application layer. HTTP/3 (2022) fixed it at the transport layer.

The Problem: Head-of-Line Blocking

In HTTP/1.1, a connection handles only one request at a time: the browser cannot ask for `script.js` until the response for `style.css` has finished. If `style.css` is slow, `script.js` waits behind it. Browsers "hacked" around this by opening up to six connections per domain.
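A minimal sketch with Python's standard library shows the one-at-a-time behaviour on a single connection (example.com and the paths are placeholders):

```python
# One HTTP/1.1 connection handles one request/response cycle at a time.
# example.com and the paths are placeholders.
import http.client

conn = http.client.HTTPSConnection("example.com")

conn.request("GET", "/style.css")
css = conn.getresponse()
css.read()        # the response must be fully read before the connection can be reused

conn.request("GET", "/script.js")   # only now can the next request go out
js = conn.getresponse()
js.read()

conn.close()
```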

1. HTTP/2: Multiplexing

HTTP/2 introduced Binary Framing.
Instead of sending the plain-text request "GET /index.html", the client sends binary frames tagged with a Stream ID.
Requests for `style.css` (Stream 1) and `script.js` (Stream 3; client-initiated streams get odd IDs) go out simultaneously over a single TCP connection.
The server sends the response chunks back interleaved. The browser reassembles them by Stream ID.
HPACK: it also compresses Headers. (Why send `User-Agent: Chrome` 100 times? Send it once and reference it afterwards.)
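A minimal sketch of the multiplexing described above, using the third-party `httpx` library (installed with its `http2` extra); example.com and the paths are placeholders:

```python
# Two requests multiplexed over a single HTTP/2 connection.
# Assumes `pip install "httpx[http2]"`; example.com and the paths are placeholders.
import asyncio

import httpx

async def main() -> None:
    async with httpx.AsyncClient(http2=True) as client:
        # Both requests are in flight at once on the same connection,
        # each as its own HTTP/2 stream.
        css, js = await asyncio.gather(
            client.get("https://example.com/style.css"),
            client.get("https://example.com/script.js"),
        )
        print(css.http_version, js.http_version)  # "HTTP/2" when the server negotiates it

asyncio.run(main())
```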

2. The TCP Bottleneck

HTTP/2 solved blocking at the application layer, but not at the TCP layer.
If ONE packet carrying Stream 1 data is lost, TCP pauses the ENTIRE connection (Stream 3 included) until the retransmission arrives. The OS kernel knows nothing about streams; it only guarantees a single, strictly ordered TCP byte stream.
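A toy model (plain Python, no sockets) makes the effect concrete: in-order delivery means one missing segment holds back every later segment, even frames that belong to a different stream.

```python
# Toy model of TCP's strict in-order delivery -- no real networking involved.
# Segment 0 (a frame for stream 1) was lost; segments 1 and 2 have arrived.
arrived = {
    1: "HTTP/2 frame: stream 3 (complete)",
    2: "HTTP/2 frame: stream 1 (part 2)",
}  # segment 0 ("stream 1, part 1") is missing

delivered = []
next_expected = 0
while next_expected in arrived:      # TCP only hands up the next in-order segment
    delivered.append(arrived.pop(next_expected))
    next_expected += 1

print(delivered)  # [] -- stream 3's finished frame sits in the buffer
                  # until segment 0 is retransmitted
```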

3. HTTP/3: Enter QUIC

Google engineers concluded that TCP itself was the bottleneck, so they built a new transport on top of UDP.
QUIC (originally "Quick UDP Internet Connections") moves reliability, ordering, and congestion control from the Kernel (TCP) into User Space (QUIC).
Feature: streams are independent at the transport layer. If a packet carrying data for one stream is lost, data that has already arrived for other streams is delivered immediately. No blocking.
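Extending the same toy model (again, no real networking) to per-stream ordering shows why the loss no longer spreads:

```python
# Toy model of QUIC's per-stream ordering -- no real networking involved.
# Each packet carries (stream label, sequence within that stream, data).
arrived = [
    ("stream B", 0, "complete response"),   # arrived intact
    ("stream A", 1, "part 2"),              # arrived intact
]  # ("stream A", 0, "part 1") was lost in transit

next_expected = {"stream A": 0, "stream B": 0}
delivered = []
for stream, seq, data in arrived:
    if seq == next_expected[stream]:    # ordering is enforced per stream only
        delivered.append((stream, data))
        next_expected[stream] += 1
    # later data for a stream with a gap is buffered; other streams are untouched

print(delivered)  # [('stream B', 'complete response')] -- stream B is not blocked
```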

4. 0-RTT Handshake

QUIC combines the Transport Handshake and the TLS 1.3 Handshake.
If you have visited the site before, you can send encrypted data in the Very First Packet. Zero Round Trip Time.
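A back-of-the-envelope tally of setup round trips before the request can be sent (assuming TLS 1.3 and no TCP Fast Open):

```python
# Rough accounting of connection-setup round trips before the first request.
# Assumes TLS 1.3; TLS 1.2 over TCP would add another round trip.
setup_rtts = {
    "HTTP/1.1 or HTTP/2 (TCP + TLS 1.3)": 1 + 1,  # TCP handshake, then TLS handshake
    "HTTP/3 (QUIC, first visit)": 1,              # transport and TLS combined in one exchange
    "HTTP/3 (QUIC, repeat visit)": 0,             # 0-RTT: the request rides in the first packet
}
for setup, rtts in setup_rtts.items():
    print(f"{setup}: {rtts} round trip(s) before the request is sent")
```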

| Feature | HTTP/1.1 | HTTP/2 | HTTP/3 |
| --- | --- | --- | --- |
| Transport | TCP | TCP | UDP (QUIC) |
| Format | Text | Binary | Binary |
| Multiplexing | No | Yes | Yes, with no transport-level blocking |