5xx Server Error Status Codes

Fundamentals Jan 9, 2026 HTTP
http status-codes errors server-errors rest

Definition

5xx Server Error status codes indicate that the server encountered an error or is incapable of fulfilling a valid request. Unlike 4xx errors, 5xx errors are the server’s fault, not the client’s.

The most common 5xx status codes are:

  • 500 Internal Server Error - Generic server error (unhandled exception, crash)
  • 502 Bad Gateway - Upstream server returned invalid response (proxy/gateway issue)
  • 503 Service Unavailable - Server temporarily unavailable (maintenance, overload)
  • 504 Gateway Timeout - Upstream server didn’t respond in time
  • 501 Not Implemented - Server doesn’t support the functionality required to fulfill the request (e.g., an unrecognized request method)
  • 505 HTTP Version Not Supported - Server doesn’t support the HTTP version used in the request

5xx codes are defined in RFC 9110 (which obsoleted RFC 7231). The transient codes (500, 502, 503, 504) indicate that the client may retry the request later, typically with exponential backoff; 501 and 505 mean the request will not succeed as sent, so retrying them is pointless.
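That distinction is worth encoding in clients. A minimal sketch of a retryability check (the names are my own, not from any library):

```python
# Which 5xx codes are generally safe to retry automatically.
# 501 and 505 mean the request can never succeed as sent,
# so they are deliberately excluded.
RETRYABLE_5XX = {500, 502, 503, 504}

def should_retry(status_code: int) -> bool:
    """Return True if a retry with backoff is reasonable for this status."""
    return status_code in RETRYABLE_5XX
```

A client can gate its entire retry loop on this one check instead of testing each status code separately.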

Example

500 Internal Server Error - Unhandled Exception:

GET /api/users/123 HTTP/1.1
Host: api.example.com
Authorization: Bearer YOUR_TOKEN

HTTP/1.1 500 Internal Server Error
Content-Type: application/json

{
  "error": "Internal Server Error",
  "message": "An unexpected error occurred. Please try again later.",
  "requestId": "req-abc-123"
}

502 Bad Gateway - Upstream Failure:

GET /api/users HTTP/1.1
Host: api.example.com

HTTP/1.1 502 Bad Gateway
Content-Type: application/json

{
  "error": "Bad Gateway",
  "message": "The upstream service returned an invalid response",
  "service": "user-service"
}

503 Service Unavailable - Maintenance Mode:

GET /api/users HTTP/1.1
Host: api.example.com

HTTP/1.1 503 Service Unavailable
Retry-After: 3600
Content-Type: application/json

{
  "error": "Service Unavailable",
  "message": "The service is temporarily unavailable for maintenance",
  "retryAfter": 3600,
  "estimatedRestoration": "2026-01-09T12:00:00Z"
}

504 Gateway Timeout - Upstream Timeout:

GET /api/reports/generate HTTP/1.1
Host: api.example.com

HTTP/1.1 504 Gateway Timeout
Content-Type: application/json

{
  "error": "Gateway Timeout",
  "message": "The upstream service did not respond in time",
  "timeout": "30s"
}
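On the server side, the 503 maintenance response above is easiest to keep consistent if the Retry-After header and the body are built from the same value. A framework-agnostic sketch (the function name and return shape are illustrative):

```python
import json

def build_503(retry_after_seconds):
    """Return (status, headers, body) for a maintenance-mode 503."""
    headers = {
        "Retry-After": str(retry_after_seconds),  # tells clients when to retry
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "error": "Service Unavailable",
        "message": "The service is temporarily unavailable for maintenance",
        "retryAfter": retry_after_seconds,  # same value mirrored in the body
    })
    return 503, headers, body
```

Deriving both from one argument avoids the common drift where the header says one delay and the JSON body says another.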

Code Example

JavaScript (Fetch API):

const fetchUser = async (userId, retries = 3, backoff = 1000) => {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      const response = await fetch(`https://api.example.com/users/${userId}`, {
        headers: {
          'Accept': 'application/json',
          'Authorization': 'Bearer YOUR_TOKEN'
        }
      });
      
      // Handle 5xx errors with retry logic
      if (response.status === 500) {
        const error = await response.json();
        console.error('Internal Server Error:', error);
        
        // Retry with exponential backoff
        if (attempt < retries - 1) {
          const delay = backoff * Math.pow(2, attempt);
          console.log(`Retrying in ${delay}ms... (attempt ${attempt + 1}/${retries})`);
          await new Promise(resolve => setTimeout(resolve, delay));
          continue;
        }
        
        throw new Error('Server error: ' + error.message);
      }
      
      if (response.status === 502) {
        console.error('Bad Gateway - upstream service failure');
        
        // Retry
        if (attempt < retries - 1) {
          await new Promise(resolve => setTimeout(resolve, backoff * Math.pow(2, attempt)));
          continue;
        }
        
        throw new Error('Upstream service unavailable');
      }
      
      if (response.status === 503) {
        const retryAfter = response.headers.get('Retry-After');
        console.error('Service Unavailable. Retry after:', retryAfter);
        
        // Wait for Retry-After duration
        // Retry-After may be delay-seconds or an HTTP-date; seconds assumed here
        if (retryAfter && attempt < retries - 1) {
          const delay = parseInt(retryAfter, 10) * 1000;
          console.log(`Waiting ${delay}ms before retry...`);
          await new Promise(resolve => setTimeout(resolve, delay));
          continue;
        }
        
        throw new Error('Service temporarily unavailable');
      }
      
      if (response.status === 504) {
        console.error('Gateway Timeout - upstream service too slow');
        
        // Retry
        if (attempt < retries - 1) {
          await new Promise(resolve => setTimeout(resolve, backoff * Math.pow(2, attempt)));
          continue;
        }
        
        throw new Error('Request timeout');
      }
      
      // Check if response is successful
      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }
      
      return await response.json();
      
    } catch (error) {
      if (attempt === retries - 1) {
        console.error('Max retries reached:', error);
        throw error;
      }
      
      console.error(`Attempt ${attempt + 1} failed:`, error.message);
      // Back off before the next attempt (network errors land here too)
      await new Promise(resolve => setTimeout(resolve, backoff * Math.pow(2, attempt)));
    }
  }
};

// Usage
fetchUser(123)
  .then(user => console.log('User:', user))
  .catch(error => console.error('Failed after retries:', error));

Python (requests library):

import requests
import time

def fetch_user(user_id, retries=3, backoff=1):
    for attempt in range(retries):
        try:
            response = requests.get(
                f'https://api.example.com/users/{user_id}',
                headers={
                    'Accept': 'application/json',
                    'Authorization': 'Bearer YOUR_TOKEN'
                },
                timeout=30
            )
            
            # Handle 5xx errors with retry logic
            if response.status_code == 500:
                error = response.json()
                print(f'Internal Server Error: {error}')
                
                # Retry with exponential backoff
                if attempt < retries - 1:
                    delay = backoff * (2 ** attempt)
                    print(f'Retrying in {delay}s... (attempt {attempt + 1}/{retries})')
                    time.sleep(delay)
                    continue
                
                raise Exception(f"Server error: {error.get('message')}")
            
            if response.status_code == 502:
                print('Bad Gateway - upstream service failure')
                
                # Retry
                if attempt < retries - 1:
                    time.sleep(backoff * (2 ** attempt))
                    continue
                
                raise Exception('Upstream service unavailable')
            
            if response.status_code == 503:
                retry_after = response.headers.get('Retry-After')
                print(f'Service Unavailable. Retry after: {retry_after}')
                
                # Wait for Retry-After duration
                # Retry-After may be delay-seconds or an HTTP-date; seconds assumed here
                if retry_after and attempt < retries - 1:
                    delay = int(retry_after)
                    print(f'Waiting {delay}s before retry...')
                    time.sleep(delay)
                    continue
                
                raise Exception('Service temporarily unavailable')
            
            if response.status_code == 504:
                print('Gateway Timeout - upstream service too slow')
                
                # Retry
                if attempt < retries - 1:
                    time.sleep(backoff * (2 ** attempt))
                    continue
                
                raise Exception('Request timeout')
            
            # Check if response is successful
            response.raise_for_status()
            
            return response.json()
            
        except requests.exceptions.RequestException as e:
            if attempt == retries - 1:
                print(f'Max retries reached: {e}')
                raise
            
            print(f'Attempt {attempt + 1} failed: {e}')
            # Back off before the next attempt (connection errors land here too)
            time.sleep(backoff * (2 ** attempt))

# Usage
try:
    user = fetch_user(123)
    print(f'User: {user}')
except Exception as error:
    print(f'Failed after retries: {error}')

Diagram

graph TB
    subgraph "5xx Server Errors"
        E500[500 Internal Error<br/>Unhandled exception]
        E502[502 Bad Gateway<br/>Upstream invalid]
        E503[503 Unavailable<br/>Maintenance/overload]
        E504[504 Timeout<br/>Upstream slow]
        E501[501 Not Implemented<br/>Method unsupported]
    end
    subgraph "Client Actions"
        A1[Retry with backoff]
        A2[Check upstream status]
        A3[Wait Retry-After]
        A4[Increase timeout]
        A5[Use different method]
    end
    subgraph "Server Actions"
        S1[Fix bug, deploy]
        S2[Fix upstream service]
        S3[Scale up, end maintenance]
        S4[Optimize upstream]
        S5[Implement method]
    end
    E500 --> A1
    E500 --> S1
    E502 --> A2
    E502 --> S2
    E503 --> A3
    E503 --> S3
    E504 --> A4
    E504 --> S4
    E501 --> A5
    E501 --> S5

Analogy

Think of 5xx errors like problems in a restaurant kitchen:

  • 500 Internal Server Error β†’ “The oven broke, we can’t cook your order”
  • 502 Bad Gateway β†’ “Our supplier sent us bad ingredients”
  • 503 Service Unavailable β†’ “We’re closed for renovations, come back in an hour”
  • 504 Gateway Timeout β†’ “The delivery truck is too slow, we gave up waiting”

The problem is with the restaurant (server), not your order (request).

Best Practices

  1. Don’t Expose Stack Traces - Never return internal errors or stack traces in 5xx responses
  2. Include Request IDs - Add unique request IDs for debugging and support
  3. Set Retry-After - Include Retry-After header with 503 to guide retry timing
  4. Log Everything - Log all 5xx errors with full context for debugging
  5. Use Circuit Breakers - Prevent cascading failures in microservices
  6. Implement Retries - Retry 5xx errors with exponential backoff
  7. Monitor 5xx Rates - Alert on elevated 5xx error rates
  8. Graceful Degradation - Return partial data or cached responses when possible
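Practice 5 (circuit breakers) can be sketched with a small state holder: after a run of consecutive failures the circuit "opens" and calls are rejected until a cooldown passes. A simplified sketch (class and parameter names are my own), not a production implementation:

```python
import time

class CircuitBreaker:
    """Open the circuit after `threshold` consecutive failures,
    then reject calls until `reset_timeout` seconds pass."""

    def __init__(self, threshold=5, reset_timeout=30.0):
        self.threshold = threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow(self):
        if self.opened_at is None:
            return True
        # Half-open: let a probe request through once the cooldown elapses
        return time.monotonic() - self.opened_at >= self.reset_timeout

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()
```

The caller checks `allow()` before each upstream request and records the outcome; while the circuit is open, it fails fast instead of piling more load onto a struggling service.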

Common Mistakes

  • Exposing Internal Details - Including stack traces, database errors, or file paths
  • 500 for Everything - Using 500 for all errors instead of specific codes
  • No Retry-After - Not telling clients when to retry during maintenance
  • No Request IDs - Making it impossible to trace specific failed requests
  • Ignoring 5xx Errors - Not logging or monitoring server errors
  • No Retry Logic - Clients not implementing exponential backoff for transient failures
  • Cascading Failures - Not using circuit breakers to prevent downstream failures
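The last two mistakes pair with best practice 8: when an upstream call fails with a 5xx, a stale cached copy is often better than an error page. A rough sketch of that fallback (all names are illustrative):

```python
# Serve the last known-good value when the live fetch fails.
_cache = {}

def get_with_fallback(key, fetch):
    """Call `fetch(key)`; on failure, fall back to the cached copy if any."""
    try:
        value = fetch(key)      # may raise on a 5xx or network error
        _cache[key] = value     # refresh the cache on every success
        return value, "live"
    except Exception:
        if key in _cache:
            return _cache[key], "cached"  # stale but usable
        raise  # no fallback available; surface the error
```

Returning the source ("live" vs "cached") alongside the value lets callers flag degraded data to users instead of silently serving stale content.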

Standards & RFCs