Remote Backend Patterns

Remote backends are essential for team collaboration, but choosing the right backend configuration and implementing proper access patterns can make the difference between smooth operations and constant headaches. Different backends have different strengths, limitations, and operational characteristics that affect how your team works with Terraform.

This part covers advanced backend patterns that work well in production environments, from basic S3 configurations to complex multi-account and multi-region setups.

S3 Backend with DynamoDB Locking

The S3 backend with DynamoDB locking is the most popular choice for AWS-based teams:

terraform {
  backend "s3" {
    bucket         = "company-terraform-state"
    key            = "infrastructure/production/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"
    encrypt        = true
    kms_key_id     = "arn:aws:kms:us-west-2:123456789012:key/12345678-1234-1234-1234-123456789012"
    
    # Validation options (these are the defaults, shown explicitly)
    skip_credentials_validation = false
    skip_metadata_api_check     = false
    skip_region_validation      = false
    force_path_style            = false  # deprecated in newer Terraform releases in favor of use_path_style
  }
}
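
Recent Terraform releases (1.10 and later) also offer S3-native locking through a lock file stored alongside the state object, which removes the DynamoDB dependency entirely. The sketch below assumes that newer option is available in your version; check the documentation for your release before relying on it:

terraform {
  backend "s3" {
    bucket       = "company-terraform-state"
    key          = "infrastructure/production/terraform.tfstate"
    region       = "us-west-2"
    encrypt      = true
    use_lockfile = true  # S3-native locking, Terraform 1.10+
  }
}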

Setting up the backend infrastructure:

# S3 bucket for state storage
resource "aws_s3_bucket" "terraform_state" {
  bucket = "company-terraform-state"
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  
  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.terraform_state.arn
      sse_algorithm     = "aws:kms"
    }
    bucket_key_enabled = true
  }
}

resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# DynamoDB table for state locking
resource "aws_dynamodb_table" "terraform_locks" {
  name           = "terraform-locks"
  billing_mode   = "PAY_PER_REQUEST"
  hash_key       = "LockID"
  
  attribute {
    name = "LockID"
    type = "S"
  }
  
  server_side_encryption {
    enabled     = true
    kms_key_arn = aws_kms_key.terraform_state.arn
  }
  
  point_in_time_recovery {
    enabled = true
  }
  
  tags = {
    Name        = "Terraform State Locks"
    Environment = "shared"
  }
}

# KMS key for encryption
resource "aws_kms_key" "terraform_state" {
  description             = "KMS key for Terraform state encryption"
  deletion_window_in_days = 7
  enable_key_rotation     = true
  
  tags = {
    Name = "terraform-state-key"
  }
}
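
Versioning keeps every state revision forever. An optional lifecycle policy, sketched here as an addition beyond the minimal setup, expires old noncurrent versions while leaving the current state object untouched:

resource "aws_s3_bucket_lifecycle_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    id     = "expire-old-state-versions"
    status = "Enabled"

    filter {}  # apply to every object in the bucket

    noncurrent_version_expiration {
      noncurrent_days = 90  # keep superseded state versions for 90 days
    }
  }
}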

Multi-Environment Backend Strategies

Different approaches work for different organizational structures:

Separate backends per environment:

# environments/dev/backend.hcl
bucket         = "company-terraform-state-dev"
key            = "infrastructure/terraform.tfstate"
region         = "us-west-2"
dynamodb_table = "terraform-locks-dev"
encrypt        = true

# environments/prod/backend.hcl
bucket         = "company-terraform-state-prod"
key            = "infrastructure/terraform.tfstate"
region         = "us-west-2"
dynamodb_table = "terraform-locks-prod"
encrypt        = true

# Initialize with the environment-specific backend
terraform init -backend-config=environments/dev/backend.hcl

# Switching the same working directory to another backend requires -reconfigure (or -migrate-state)
terraform init -reconfigure -backend-config=environments/prod/backend.hcl
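
A small wrapper script keeps the init and plan steps consistent across environments. This is a hypothetical sketch that assumes the backend.hcl layout above plus a per-environment terraform.tfvars file:

#!/bin/bash
# plan-env.sh - hypothetical helper for per-environment plans
set -e

ENV="${1:?usage: plan-env.sh <dev|prod>}"

# -reconfigure re-points this working directory at the selected backend
terraform init -reconfigure -input=false \
  -backend-config="environments/${ENV}/backend.hcl"

terraform plan -input=false \
  -var-file="environments/${ENV}/terraform.tfvars"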

Shared backend with environment-specific keys:

Backend blocks cannot reference input variables, so the environment-specific key has to be supplied at init time through partial configuration:

terraform {
  backend "s3" {
    bucket         = "company-terraform-state"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"
    encrypt        = true
    # key is supplied via -backend-config at init time
  }
}

# Initialize with an environment-specific key
terraform init -backend-config="key=infrastructure/production/terraform.tfstate"

Workspace-based approach:

terraform {
  backend "s3" {
    bucket         = "company-terraform-state"
    key            = "infrastructure/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"
    encrypt        = true
    workspace_key_prefix = "environments"
  }
}
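
With workspace_key_prefix set, each non-default workspace's state lands under environments/<workspace>/infrastructure/terraform.tfstate in the bucket, while the default workspace keeps the plain key. Day-to-day use looks like this:

# Create and switch between workspaces backed by the same configuration
terraform workspace new dev
terraform workspace new prod

terraform workspace select prod
terraform plan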

Cross-Account Backend Access

Multi-account architectures require careful IAM configuration:

# Cross-account role for state access
resource "aws_iam_role" "terraform_state_access" {
  name = "TerraformStateAccess"
  
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          AWS = [
            "arn:aws:iam::111111111111:root",  # Dev account
            "arn:aws:iam::222222222222:root",  # Prod account
          ]
        }
        Condition = {
          StringEquals = {
            "sts:ExternalId" = "terraform-state-access"
          }
        }
      }
    ]
  })
}

resource "aws_iam_role_policy" "terraform_state_access" {
  name = "TerraformStateAccess"
  role = aws_iam_role.terraform_state_access.id
  
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject"
        ]
        Resource = "${aws_s3_bucket.terraform_state.arn}/*"
      },
      {
        Effect = "Allow"
        Action = [
          "s3:ListBucket"
        ]
        Resource = aws_s3_bucket.terraform_state.arn
      },
      {
        Effect = "Allow"
        Action = [
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:DeleteItem"
        ]
        Resource = aws_dynamodb_table.terraform_locks.arn
      }
    ]
  })
}
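
If the state bucket is encrypted with the KMS key from the earlier setup, the cross-account role also needs permission to use that key. A minimal additional policy, layered on the one above as an assumption, might look like this:

resource "aws_iam_role_policy" "terraform_state_kms" {
  name = "TerraformStateKMSAccess"
  role = aws_iam_role.terraform_state_access.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "kms:Decrypt",
          "kms:GenerateDataKey"
        ]
        Resource = aws_kms_key.terraform_state.arn
      }
    ]
  })
}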

Using the cross-account backend:

terraform {
  backend "s3" {
    bucket         = "shared-terraform-state"
    key            = "accounts/dev/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"
    encrypt        = true
    
    role_arn    = "arn:aws:iam::333333333333:role/TerraformStateAccess"
    external_id = "terraform-state-access"
  }
}
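
Terraform 1.6 and later prefer a nested assume_role attribute over the top-level role_arn and external_id arguments. If you are on a recent version, the equivalent configuration is roughly:

terraform {
  backend "s3" {
    bucket         = "shared-terraform-state"
    key            = "accounts/dev/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"
    encrypt        = true

    assume_role = {
      role_arn    = "arn:aws:iam::333333333333:role/TerraformStateAccess"
      external_id = "terraform-state-access"
    }
  }
}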

Azure Storage Backend

For Azure-based infrastructure:

terraform {
  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "terraformstatestorage"
    container_name       = "terraform-state"
    key                  = "infrastructure/terraform.tfstate"
    
    # Use managed identity when running in Azure
    use_msi = true
    
    # Or use service principal
    # subscription_id = "12345678-1234-1234-1234-123456789012"
    # tenant_id       = "12345678-1234-1234-1234-123456789012"
    # client_id       = "12345678-1234-1234-1234-123456789012"
    # client_secret   = "client-secret"
  }
}
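
In practice, service principal credentials are usually kept out of the configuration entirely and supplied through the ARM_* environment variables the azurerm backend reads:

export ARM_SUBSCRIPTION_ID="12345678-1234-1234-1234-123456789012"
export ARM_TENANT_ID="12345678-1234-1234-1234-123456789012"
export ARM_CLIENT_ID="12345678-1234-1234-1234-123456789012"
export ARM_CLIENT_SECRET="client-secret"  # prefer a secret store or CI variable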

Azure backend infrastructure:

resource "azurerm_resource_group" "terraform_state" {
  name     = "terraform-state-rg"
  location = "West US 2"
}

resource "azurerm_storage_account" "terraform_state" {
  name                     = "terraformstatestorage"
  resource_group_name      = azurerm_resource_group.terraform_state.name
  location                 = azurerm_resource_group.terraform_state.location
  account_tier             = "Standard"
  account_replication_type = "GRS"
  
  blob_properties {
    versioning_enabled = true
  }
  
  tags = {
    Environment = "shared"
    Purpose     = "terraform-state"
  }
}

resource "azurerm_storage_container" "terraform_state" {
  name                  = "terraform-state"
  storage_account_name  = azurerm_storage_account.terraform_state.name
  container_access_type = "private"
}
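
When use_msi = true, the managed identity that runs Terraform also needs data-plane access to the container. A sketch of the role assignment, where the principal ID is a hypothetical input variable you would supply:

resource "azurerm_role_assignment" "terraform_state" {
  scope                = azurerm_storage_account.terraform_state.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = var.terraform_identity_principal_id  # assumed input variable
}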

Google Cloud Storage Backend

For GCP-based infrastructure:

terraform {
  backend "gcs" {
    bucket = "company-terraform-state"
    prefix = "infrastructure/production"
    
    # Use service account key
    credentials = "path/to/service-account-key.json"
    
    # Or omit the credentials argument to use Application Default Credentials
  }
}
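
As with the other backends, the credential is often supplied through the environment instead of the configuration:

export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account-key.json"
terraform init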

GCS backend infrastructure:

resource "google_storage_bucket" "terraform_state" {
  name     = "company-terraform-state"
  location = "US"
  
  versioning {
    enabled = true
  }
  
  encryption {
    default_kms_key_name = google_kms_crypto_key.terraform_state.id
  }
  
  # Prune old noncurrent state versions instead of deleting live objects
  lifecycle_rule {
    condition {
      num_newer_versions = 10
      with_state         = "ARCHIVED"
    }
    action {
      type = "Delete"
    }
  }
  
  uniform_bucket_level_access = true
}

resource "google_kms_key_ring" "terraform_state" {
  name     = "terraform-state"
  location = "global"
}

resource "google_kms_crypto_key" "terraform_state" {
  name     = "terraform-state-key"
  key_ring = google_kms_key_ring.terraform_state.id
  
  rotation_period = "7776000s"  # 90 days
}
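
For the CMEK setup above to work, the project's Cloud Storage service agent has to be allowed to use the key; without a binding along these lines, bucket creation fails:

# Allow the Cloud Storage service agent to encrypt and decrypt with the key
data "google_storage_project_service_account" "gcs_account" {}

resource "google_kms_crypto_key_iam_member" "terraform_state" {
  crypto_key_id = google_kms_crypto_key.terraform_state.id
  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
  member        = "serviceAccount:${data.google_storage_project_service_account.gcs_account.email_address}"
}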

Terraform Cloud Backend

For teams using Terraform Cloud or Enterprise:

terraform {
  cloud {
    organization = "company-name"
    
    workspaces {
      name = "production-infrastructure"
    }
  }
}

Selecting multiple workspaces by tag:

terraform {
  cloud {
    organization = "company-name"
    
    workspaces {
      tags = ["infrastructure", "production"]
    }
  }
}
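
With tag-based selection, terraform init prompts for a workspace the first time, and the standard workspace commands switch between the matching Terraform Cloud workspaces afterwards:

terraform init                                      # prompts for a workspace matching the tags
terraform workspace list
terraform workspace select production-infrastructure
terraform plan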

Backend Migration Strategies

Moving between backends requires careful planning:

# 1. Backup current state
terraform state pull > backup-$(date +%Y%m%d-%H%M%S).tfstate

# 2. Update backend configuration
# Edit backend configuration in terraform block

# 3. Initialize new backend
terraform init -migrate-state

# 4. Verify state migration
terraform plan  # Should show no changes

# 5. Test with a small change
terraform apply

Automated migration script:

#!/bin/bash
# migrate-backend.sh

set -e

BACKUP_FILE="state-backup-$(date +%Y%m%d-%H%M%S).tfstate"

echo "Creating state backup..."
terraform state pull > "$BACKUP_FILE"

echo "Migrating to new backend..."
terraform init -migrate-state -input=false

echo "Verifying migration..."
if terraform plan -detailed-exitcode; then
    echo "Migration successful - no changes detected"
else
    echo "WARNING: Migration may have issues - review plan output"
    exit 1
fi

echo "Backup saved as: $BACKUP_FILE"
echo "Migration complete!"

Performance Optimization

Large state files can slow down operations:

State file optimization:

# Remove unused resources from state
terraform state list | grep "old_resource" | xargs terraform state rm

# Refactor resource addresses, for example when moving a resource into a module
terraform state mv aws_instance.web module.web.aws_instance.server

# Use targeted operations
terraform plan -target="module.database"
terraform apply -target="module.database"

Backend performance tuning:

terraform {
  backend "s3" {
    bucket         = "terraform-state"
    key            = "infrastructure/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"
    encrypt        = true
    
    # Skip optional validation checks to speed up init (at the cost of earlier error detection)
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
    max_retries                 = 5
  }
}
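
Beyond the backend settings, a couple of CLI flags help when iterating against a large state; both are general Terraform options rather than backend-specific ones:

# Skip the refresh walk while iterating on configuration changes
terraform plan -refresh=false

# Raise the number of concurrent resource operations (the default is 10)
terraform apply -parallelism=20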

What’s Next

Remote backend configuration provides the foundation for reliable state management, but real-world operations often require moving state between backends, refactoring configurations, and handling complex migration scenarios.

In the next part, we’ll explore state migration and refactoring techniques that let you reorganize your Terraform configurations safely while preserving your infrastructure.