HTTP/2

Protocols & Transport Security Notes Jan 6, 2025 JAVASCRIPT

Definition

Imagine you’re at a coffee shop with one barista who can only make drinks one at a time. Even if the shop has all the ingredients for five different drinks ready, you wait for each person’s drink to be finished before the next one starts. Now imagine upgrading to a shop where the barista can work on multiple drinks simultaneously - steaming milk for one while pulling espresso for another. HTTP/2 is this upgrade for the web. It transforms the one-at-a-time nature of HTTP/1.1 into a system where multiple requests and responses can flow simultaneously over a single connection.

HTTP/2, standardized in 2015, was born from Google’s SPDY protocol and represents a fundamental reimagining of how web communication works. Instead of text-based messages like HTTP/1.1, HTTP/2 uses binary framing - breaking messages into small frames that can be interleaved. This enables multiplexing: you can request 100 resources simultaneously, and they all stream back over one connection with their frames interwoven. No more waiting for one slow image to download before the next one starts. HTTP/2 also compresses headers (reducing repeated header overhead by up to 90%), supports server push (sending resources before they’re requested), and maintains priorities so critical resources load first.

The performance gains are substantial. Websites with many resources (images, scripts, CSS) can load significantly faster. API clients making multiple concurrent requests see reduced latency. And all of this happens over a single TCP connection instead of the 6+ connections HTTP/1.1 typically requires. The catch? HTTP/2 is binary, so you can’t just read it in a text editor like HTTP/1.1. But for production performance, HTTP/2 is now the standard - most major websites and cloud platforms support it, and browsers have used it by default for years.

Diagram

sequenceDiagram
    participant C as Client
    participant S as Server

    Note over C,S: Single TCP Connection with Multiplexing

    rect rgb(200, 230, 255)
        C->>S: SETTINGS frame
        S-->>C: SETTINGS ACK

        par Stream 1 - HTML
            C->>S: HEADERS (GET /index.html) [Stream 1]
            S-->>C: HEADERS + DATA [Stream 1]
        and Stream 3 - CSS
            C->>S: HEADERS (GET /style.css) [Stream 3]
            S-->>C: HEADERS + DATA [Stream 3]
        and Stream 5 - JavaScript
            C->>S: HEADERS (GET /app.js) [Stream 5]
            S-->>C: HEADERS + DATA [Stream 5]
        end

        Note over C,S: All streams multiplexed on ONE connection

        rect rgb(255, 230, 200)
            Note over S,C: Server Push (proactive)
            S-->>C: PUSH_PROMISE (font.woff2) [on Stream 1, promises Stream 2]
            S-->>C: HEADERS + DATA (font.woff2) [Stream 2]
        end
    end

    Note over C,S: HPACK compresses repeated headers

Example

Real-World Scenario 1: Modern Website Loading When you visit a news site like nytimes.com, your browser opens a single HTTP/2 connection. Through that one connection, it simultaneously requests the HTML, 50+ images, multiple CSS files, and JavaScript bundles. The server sends responses as they’re ready - small CSS files arrive while large images are still streaming. The page renders progressively instead of waiting for the slowest resource. This is why modern sites feel faster even though they have more content.

Real-World Scenario 2: gRPC API Communication Google’s gRPC framework runs entirely on HTTP/2. When your microservices communicate via gRPC, they’re using HTTP/2’s multiplexing to stream multiple RPC calls over one connection. A service handling thousands of concurrent requests needs just one HTTP/2 connection to each downstream service. This dramatically reduces connection overhead and improves throughput.

Real-World Scenario 3: CDN Edge Delivery Cloudflare, Akamai, and other CDNs use HTTP/2 between their edge servers and visitors. With server push, an edge server could send critical CSS and fonts to your browser before you even asked for them, shaving hundreds of milliseconds off page loads. In practice, major browsers have since removed server-push support, and CDNs now achieve the same effect with 103 Early Hints - but the single multiplexed HTTP/2 connection between edge and visitor remains central to CDN performance.

Real-World Scenario 4: Mobile App API Requests A mobile app making multiple API calls (user profile, notifications, feed content) can send them all simultaneously over one HTTP/2 connection. Instead of waiting for the profile call to complete before requesting notifications, all three requests go out together and responses stream back as they’re ready. Battery life improves too - one sustained connection uses less power than multiple connection setups.

Analogy

The Multi-Lane Highway: HTTP/1.1 is a single-lane road where cars wait behind each other. HTTP/2 is an 8-lane highway where multiple cars (requests) travel simultaneously. Even better - the lanes are virtual and unlimited. A slow-moving truck in one “lane” doesn’t block the sports cars in other lanes. All traffic shares the same physical road (TCP connection) but doesn’t block each other.

The Restaurant with Multiple Kitchen Stations: HTTP/1.1 is a restaurant with one cook handling one order at a time. HTTP/2 is a kitchen with multiple stations working simultaneously - grill, sauté, prep, dessert - all preparing different parts of different orders in parallel. Your appetizer doesn’t wait for the previous table’s dessert to finish.

The Package Delivery with Convoys: Imagine if delivery trucks had to drive single-file on dedicated roads. HTTP/1.1. Now imagine multiple packages of different sizes sharing one truck, sorted by priority, delivered together. HTTP/2. You get your small urgent package first, while your big furniture delivery is still unloading - same truck, same trip, no blocking.

The Orchestra vs. One Musician: HTTP/1.1 is a solo musician playing each part of a symphony sequentially - violin part, then cello, then trumpet. HTTP/2 is an orchestra playing all parts simultaneously, coordinated to create the complete music efficiently. Both deliver the same symphony, but the orchestra finishes much faster and sounds better.

Code Example


// Node.js HTTP/2 server
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem')
});

server.on('stream', (stream, headers) => {
  const path = headers[':path'];

  // Handle API requests
  if (path === '/api/data') {
    stream.respond({
      'content-type': 'application/json',
      ':status': 200
    });
    stream.end(JSON.stringify({
      message: 'HTTP/2 response',
      multiplexed: true
    }));

  // Demonstrate server push (note: major browsers have since removed
  // server-push support; 103 Early Hints is the modern alternative)
  } else if (path === '/index.html') {
    // Push CSS before the client asks for it
    stream.pushStream({ ':path': '/styles.css' }, (err, pushStream) => {
      if (err) return; // push may be disabled or refused by the peer
      pushStream.respond({ 'content-type': 'text/css', ':status': 200 });
      pushStream.end('body { font-family: sans-serif; }');
    });

    // Send the main HTML
    stream.respond({ 'content-type': 'text/html', ':status': 200 });
    stream.end('<html><link rel="stylesheet" href="/styles.css">...</html>');

  // Answer unknown paths so streams don't stay open
  } else {
    stream.respond({ ':status': 404 });
    stream.end();
  }
});

server.listen(443);

// HTTP/2 benefits for APIs:
// 1. Multiplexing - concurrent requests on one connection
// 2. Header compression - less bandwidth for repeated headers
// 3. Stream prioritization - critical requests first
// 4. Server push - proactive resource delivery
// 5. Binary protocol - more efficient parsing

// Client-side: Browsers handle HTTP/2 automatically
// If server supports it, fetch() uses HTTP/2 transparently:
fetch('/api/endpoint');  // May use HTTP/2 if server supports it

Security Notes

CRITICAL: HTTP/2 improves performance but introduces new attack vectors. Browsers only support it over HTTPS with modern TLS (cleartext h2c exists but is limited to non-browser traffic).

Multiplexing Security:

  • Multiplexing enabled: Multiple requests on single connection improve performance
  • Priority-based DoS: Attackers can manipulate stream priority for DoS
  • Request explosion: Rapid stream creation can exhaust resources
  • Resource limits: Implement maximum streams per connection

Header Compression (HPACK):

  • Header compression: HPACK reduces overhead but must be implemented carefully
  • Compression attacks: HPACK (RFC 7541) was designed to resist CRIME/BREACH-style attacks; still, avoid compressing secrets alongside attacker-controlled values
  • Sensitive headers: Minimize sensitive data in headers (use body instead)
  • Padding: HTTP/2 frame padding can blunt length-based traffic analysis

Server Push:

  • Server push: Server can proactively send resources
  • Cache directives: Respect client cache; don’t push cached resources
  • Privacy: Avoid pushing resources that expose user behavior
  • Resource exhaustion: Rate limit pushed resources

Connection Management:

  • Connection reuse: Single connection multiplexes multiple requests
  • Connection coalescing: Requests for multiple origins may share one connection; verify the certificate covers each origin (answer 421 Misdirected Request otherwise)
  • Reset attacks: Rapid stream resets can exhaust server resources (see HTTP/2 Rapid Reset, CVE-2023-44487)
  • GOAWAY frames: Close connections gracefully with GOAWAY frames

TLS Requirements:

  • TLS 1.2 minimum: the HTTP/2 spec mandates TLS 1.2 or newer and blacklists weak cipher suites
  • Forward secrecy: Use ephemeral key exchange (ECDHE, DHE)
  • Cipher suite: Use strong modern cipher suites

Best Practices:

  • Limit streams: Set reasonable maximum concurrent streams
  • Timeout streams: Timeout inactive streams
  • Flow control: Implement proper flow control windows
  • Monitor abuse: Detect and rate limit priority/reset abuse

Standards & RFCs