Scope

The core infrastructure bundle does not create or modify your VPC or EKS cluster. It assumes an existing EKS cluster and provisions only the supporting AWS services required prior to application installation:
  • S3 - Cache storage bucket
  • DynamoDB - Cache metadata table
  • SQS - Revalidation queue
  • SSM - Parameter store entries
  • IAM - IRSA role for pod identity
EKS add-ons (Karpenter NodePools, Sysbox runtime) are also included for sandbox workloads.

Security Posture

  • IRSA-based access - IAM permissions use Kubernetes service account identity (IRSA), not static keys
  • Scoped permissions - IAM policies are limited to the specific resources created by this bundle
  • Encryption configurable - Default AWS-managed encryption; customer-managed KMS keys supported (see Encryption Options)
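For reference, an IRSA role trusts the cluster's OIDC identity provider and scopes role assumption to a single Kubernetes service account. A representative trust policy is sketched below; the account ID, OIDC provider ID, namespace, and service-account name are illustrative placeholders, not values this bundle necessarily uses:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:valinor:opennext"
        }
      }
    }
  ]
}
```

The `sub` condition is what limits the role to one service account in one namespace, which is why no static access keys need to be mounted into pods.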

Prerequisites

  • AWS Account with permissions to create S3, DynamoDB, SQS, and IAM resources
  • AWS CLI configured with credentials
  • Karpenter controller v1.1.1+
  • Karpenter CRDs v1 API (not v1alpha5 or v1beta1)
  • EKS cluster v1.31+
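The prerequisites above can be verified before starting. The sketch below prints the checks as a dry run (the Karpenter deployment name and namespace are assumptions and may differ in your cluster); run each command by hand to confirm:

```shell
# Preflight checklist sketch: commands implied by the prerequisites above.
# Printed as a dry run; the Karpenter deploy name/namespace are assumptions.
preflight_checks=(
  "aws sts get-caller-identity"                                                          # AWS credentials configured
  "kubectl version -o json"                                                              # server minor must be 1.31+
  "kubectl get crd nodepools.karpenter.sh -o jsonpath={.spec.versions[*].name}"          # expect v1, not v1alpha5/v1beta1
  "kubectl -n kube-system get deploy karpenter -o jsonpath={.spec.template.spec.containers[0].image}"  # tag must be v1.1.1+
)
printf '%s\n' "${preflight_checks[@]}"
```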

Quick Start (4 Steps)

1. Install Tools

brew bundle
This installs atmos, opentofu, helm, kubectl, and other required tools.
Not using Homebrew? See Manual Installation below.

2. Run Bootstrap

atmos bootstrap
This shows the quick start guide, checks prerequisites, and installs required tool versions from .tool-versions. Skip this if you already have the tools installed.

3. Configure Environment

atmos bootstrap configure
This will:
  • Ask for your EKS cluster name (required)
  • Ask for your namespace (e.g., acme - your organization identifier)
  • Confirm your AWS region and account ID (auto-detected)
  • Derive environment from region (e.g., us-west-2 → usw2)
  • Create the Terraform state backend (S3 bucket + DynamoDB table)
The stack name is built as {namespace}-{environment} (e.g., acme-usw2) and the state bucket as {namespace}-{environment}-tfstate.
What gets configured: the atmos/stacks/valinor/_config.yaml file is updated with your environment settings. This file is imported by the Atmos stack configuration that defines variables and backend settings for all Terraform components.
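The naming scheme can be sketched in shell. The region-to-environment derivation rule below is an assumption inferred from the us-west-2 → usw2 example, not the bundle's actual implementation:

```shell
# Assumed naming logic, mirroring the documented examples:
# environment abbreviates the region; stack is {namespace}-{environment}.
namespace="acme"
region="us-west-2"
# us-west-2 -> usw2: keep the partition prefix, the direction's first letter, and the number
environment=$(echo "$region" | sed -E 's/^([a-z]+)-([a-z])[a-z]*-([0-9]+)$/\1\2\3/')
stack="${namespace}-${environment}"
state_bucket="${stack}-tfstate"
echo "$stack"         # acme-usw2
echo "$state_bucket"  # acme-usw2-tfstate
```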

4. Deploy Infrastructure

atmos bootstrap terraform
Creates AWS resources: S3 buckets, DynamoDB tables, SQS queues, and IAM roles.
Preview first? Run atmos bootstrap terraform --plan

Deploying Individual Components

To deploy a single component instead of all at once:
# Plan a single component
atmos terraform plan s3-bucket/opennext -s acme-dev

# Apply a single component
atmos terraform apply s3-bucket/opennext -s acme-dev
Replace acme-dev with your stack name (namespace-environment).
Available components:
OpenNext (backing services):
  • s3-bucket/opennext - S3 cache bucket
  • dynamodb/opennext - DynamoDB table
  • sqs-queue/opennext - SQS revalidation queue
  • iam-role/opennext - IAM role for pod access
  • ssm-parameters/opennext - SSM parameters
EKS add-ons:
  • eks/karpenter - Karpenter autoscaler
  • eks/karpenter-node-pool - Karpenter node pool configuration
  • eks/sysbox-runtime - Sysbox container runtime
  • eks/sysbox-deployment - Sysbox DaemonSet deployment
See Atmos Terraform Commands for more options.
Application installation is handled by Replicated.
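To preview the full sequence of per-component applies, the loop below prints one command per backing-service component in the order listed above (using the example stack acme-dev); remove the `echo` to actually apply:

```shell
# Dry-run sketch: one apply per OpenNext backing-service component.
stack="acme-dev"
components=(s3-bucket/opennext dynamodb/opennext sqs-queue/opennext iam-role/opennext ssm-parameters/opennext)
for component in "${components[@]}"; do
  echo "atmos terraform apply $component -s $stack"   # remove `echo` to execute
done
```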

Reference

Command Summary

| Command | Description |
| --- | --- |
| atmos bootstrap | Show quick start guide and check prerequisites |
| atmos bootstrap configure | Configure environment + create state backend |
| atmos bootstrap terraform | Deploy AWS infrastructure |
| atmos bootstrap terraform --plan | Preview infrastructure changes |

Configuration

All configuration is managed through Atmos stack files. The atmos bootstrap configure command updates atmos/stacks/valinor/_config.yaml with your environment settings:
vars:
  namespace: acme          # Your organization identifier
  environment: usw2        # Derived from region (us-west-2 → usw2)
  region: us-west-2        # AWS region
  account_id: "123456789012"  # AWS account ID

terraform:
  backend:
    s3:
      bucket: acme-usw2-tfstate  # {namespace}-{environment}-tfstate
To modify configuration manually, edit this file directly. See Atmos Stack Configuration for details.

Understanding Atmos

This bundle uses Atmos for infrastructure orchestration. In practice, it functions as a Helm and Terraform orchestrator.
| Concept | Description | Documentation |
| --- | --- | --- |
| Stacks | Configuration that defines what to deploy | Stacks |
| Components | Reusable Terraform modules | Components |
| Vendoring | Pull components from upstream sources | Vendoring |

Key Files

| File | Purpose |
| --- | --- |
| atmos.yaml | Atmos CLI configuration |
| atmos/stacks/valinor/_config.yaml | Your customizable settings (namespace, region, backend) |
| atmos/stacks/valinor/_defaults.yaml | Stack defaults (imports _config.yaml) |
| atmos/stacks/valinor/opennext.yaml | OpenNext component configuration |
| atmos/stacks/workflows/*.yaml | Deployment workflows |
| atmos/components/terraform/ | Terraform component code |

Manual Installation

If you are not using Homebrew, install atmos, opentofu, helm, kubectl, and the other required tools manually. See .tool-versions for the exact versions.

VPC Requirements

This bundle does not create or modify your VPC. It assumes:
  • An existing AWS account with appropriate permissions
  • An existing EKS cluster already reachable via kubectl
  • Networking already configured (VPC, subnets, security groups)
We deploy backing services (S3, DynamoDB, SQS, IAM) and optional EKS add-ons (Karpenter NodePools, Sysbox) into your existing infrastructure.

Private Subnets Without NAT Gateway

If your EKS worker nodes run in private subnets without a NAT gateway, you must configure VPC endpoints for AWS service access:
| Endpoint Type | Service | Required For |
| --- | --- | --- |
| Gateway | com.amazonaws.<region>.s3 | S3 cache bucket access |
| Gateway | com.amazonaws.<region>.dynamodb | DynamoDB table access |
| Interface | com.amazonaws.<region>.sqs | SQS revalidation queue |
| Interface | com.amazonaws.<region>.ssm | SSM parameter retrieval |
| Interface | com.amazonaws.<region>.sts | IAM role assumption (IRSA) |
| Interface | com.amazonaws.<region>.ecr.api | Container image pulls |
| Interface | com.amazonaws.<region>.ecr.dkr | Container image pulls |
| Interface | com.amazonaws.<region>.logs | CloudWatch logging (if enabled) |
| Interface | com.amazonaws.<region>.eks | EKS API access |
Note: Gateway endpoints (S3, DynamoDB) are free; interface endpoints incur hourly and data-processing charges.
Note: VPC endpoints alone do not enable Replicated image pulls. Either NAT or an internal registry mirror is still required to pull images.
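The endpoint table can be translated into aws CLI calls. The sketch below prints the commands as a dry run; the VPC, subnet, security-group, and route-table IDs are placeholders, so remove the `echo` and substitute real IDs to execute:

```shell
# Dry-run sketch of creating the required VPC endpoints (IDs are placeholders).
region="us-west-2"; vpc="vpc-0abc123"; rtb="rtb-0abc123"
subnets="subnet-0aaa subnet-0bbb"; sg="sg-0abc123"

gateway_services=(s3 dynamodb)                              # free; attach to route tables
interface_services=(sqs ssm sts ecr.api ecr.dkr logs eks)   # hourly + data charges; attach ENIs

for svc in "${gateway_services[@]}"; do
  echo aws ec2 create-vpc-endpoint --vpc-id "$vpc" --vpc-endpoint-type Gateway \
    --service-name "com.amazonaws.$region.$svc" --route-table-ids "$rtb"
done
for svc in "${interface_services[@]}"; do
  echo aws ec2 create-vpc-endpoint --vpc-id "$vpc" --vpc-endpoint-type Interface \
    --service-name "com.amazonaws.$region.$svc" --subnet-ids $subnets --security-group-ids "$sg"
done
```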

Security Group Requirements

Ensure your EKS worker node security groups allow:
  • Outbound HTTPS (443) to VPC endpoints or NAT gateway
  • Inbound from VPC endpoint ENIs (for interface endpoints)
  • EKS cluster security group communication (already configured if EKS is working)
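If the HTTPS egress rule is missing, it can be added with `authorize-security-group-egress`. The group ID and CIDR below are placeholders; the command is printed as a dry run, so remove the `echo` to execute:

```shell
# Sketch: allow worker-node HTTPS egress to the VPC CIDR hosting the endpoints.
node_sg="sg-0abc123"       # placeholder: your worker-node security group
vpc_cidr="10.0.0.0/16"     # placeholder: your VPC CIDR
echo aws ec2 authorize-security-group-egress --group-id "$node_sg" \
  --ip-permissions "IpProtocol=tcp,FromPort=443,ToPort=443,IpRanges=[{CidrIp=$vpc_cidr}]"
```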

Air-Gapped / Restricted Networks

For restricted networks and air-gapped environments, there are two supported ways to pull images:
  1. Direct from Replicated (default) - Kubernetes pulls images directly from registry.replicated.com.
  2. Artifactory pull-through cache - Artifactory proxies Replicated and can fall back to Docker Hub.

Option A: Direct from Replicated (default)

If your cluster can reach the endpoints listed above, follow the normal flow (including atmos bootstrap valinor). If you are fully air-gapped:
  1. Mirror Container Images: Pull required images and push to your internal registry (ECR, Harbor, etc.)
  2. Mirror Helm Charts: Download charts and host in an internal chart repository
  3. Configure Image Overrides: Update values.yaml to point to your internal registry:
# Example: Override image registries in Helm values
opennext:
  image:
    registry: your-internal-registry.example.com
ai-streaming:
  image:
    registry: your-internal-registry.example.com
  4. Replicated Air-Gap Bundle: Contact your Context account representative for air-gap installation bundles
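The mirroring steps can be sketched as a dry run. Here skopeo is an assumed tool choice, the image names are placeholders (the actual image list comes from your Replicated bundle), and the chart filename is the one `helm pull` would produce for version 0.1.3:

```shell
# Illustrative mirroring sketch; remove `echo` and fill in real image names to execute.
src="registry.replicated.com/context-vpc"
dst="your-internal-registry.example.com"
echo skopeo copy "docker://$src/<image>:<tag>" "docker://$dst/<image>:<tag>"   # mirror each image
echo helm pull "oci://$src/stable/valinor" --version 0.1.3                     # fetch the chart
echo helm push valinor-0.1.3.tgz "oci://$dst/charts"                           # host it internally
```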

Option B: Artifactory pull-through cache (Replicated + Docker Hub fallback)

Step 1: Configure Artifactory (customer side)
Create a Remote repository:
  • Type: Docker Registry
  • URL: https://proxy.replicated.com
  • Username: <customer-email>
  • Password: <license-id>
If you need Docker Hub fallback, create a Docker Hub remote and a Virtual repository that searches Replicated first, then Docker Hub. Use the virtual repository endpoint for pulls.
Note: Virtual repositories resolve in their configured order; put the Replicated remote first, then Docker Hub.
Note: Fallback only works when the requested image name exists in the fallback registry. If you keep proxy/valinor/docker.io/... paths, those requests will only resolve via Replicated.
After configuration, images are accessible at:
<artifactory-host>/<repo-key>/proxy/valinor/...
If you use sub-domain access instead of repository path access, the format is:
<repo-key>.<artifactory-host>/proxy/valinor/...
Step 2: Create Artifactory pull secret
kubectl create namespace valinor
kubectl create secret docker-registry artifactory-pull-secret \
  --namespace valinor \
  --docker-server=<artifactory-host> \
  --docker-username=<artifactory-user> \
  --docker-password=<artifactory-token>
If you use sub-domain access, set --docker-server=<repo-key>.<artifactory-host> to match your image host.
Step 3: Create values override file
Create values-artifactory.yaml to override registries and add the pull secret:
# values-artifactory.yaml
# Format: <artifactory-host>/<remote-or-virtual-repo>
# Image paths remain: proxy/valinor/...
# If you use sub-domain access, set registry to: <repo-key>.<artifactory-host>
imagePullSecrets:
  - name: artifactory-pull-secret

opennext:
  image:
    registry: <artifactory-host>/<repo-key>
  imagePullSecrets:
    - name: artifactory-pull-secret
ai-streaming:
  image:
    registry: <artifactory-host>/<repo-key>
  imagePullSecrets:
    - name: artifactory-pull-secret
migrations:
  image:
    registry: <artifactory-host>/<repo-key>
  imagePullSecrets:
    - name: artifactory-pull-secret
redis:
  image:
    registry: <artifactory-host>/<repo-key>
    repository: proxy/valinor/docker.io/valkey/valkey
  imagePullSecrets:
    - name: artifactory-pull-secret
supabase:
  imagePullSecrets:
    - name: artifactory-pull-secret
  db:
    image:
      registry: <artifactory-host>/<repo-key>
      repository: proxy/valinor/docker.io/supabase/postgres
  storage:
    image:
      registry: <artifactory-host>/<repo-key>
      repository: proxy/valinor/docker.io/supabase/storage-api
  kong:
    image:
      registry: <artifactory-host>/<repo-key>
      repository: proxy/valinor/docker.io/library/kong
  rest:
    image:
      registry: <artifactory-host>/<repo-key>
      repository: proxy/valinor/docker.io/postgrest/postgrest
Step 4: Install via Helm
# Login to Replicated registry (for the Helm chart)
helm registry login registry.replicated.com \
  --username <customer-email> \
  --password <license-id>

# Install with Artifactory overrides
helm install valinor oci://registry.replicated.com/context-vpc/stable/valinor \
  --version 0.1.3 \
  --namespace valinor \
  --values values-artifactory.yaml \
  --values values-customer.yaml
Step 5: Verify
atmos verify

Encryption Options

By default, all resources use AWS-managed encryption keys. For organizations requiring customer-managed keys (CMKs), you can configure KMS keys for each resource type.

Default Encryption

| Resource | Default Encryption |
| --- | --- |
| S3 buckets | SSE-S3 (AES-256) |
| DynamoDB tables | AWS-owned key |
| SQS queues | SSE-SQS |
| EBS volumes | AWS-managed key |
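To confirm what a deployed resource actually uses, the checks below print the relevant aws CLI calls as a dry run; the bucket and table names are placeholders for the resources this bundle creates:

```shell
# Sketch: inspect effective encryption settings (names are placeholders).
bucket="<your-cache-bucket>"
table="<your-cache-table>"
echo aws s3api get-bucket-encryption --bucket "$bucket"
echo aws dynamodb describe-table --table-name "$table" --query Table.SSEDescription
```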

Customer-Managed Keys (CMK Mode)

To use your own KMS keys, update the component configurations in your stack file.
S3 Bucket:
components:
  terraform:
    s3-bucket/opennext:
      vars:
        kms_master_key_arn: "arn:aws:kms:us-west-2:123456789:key/your-key-id"
DynamoDB Table:
components:
  terraform:
    dynamodb/opennext:
      vars:
        server_side_encryption_kms_key_arn: "arn:aws:kms:us-west-2:123456789:key/your-key-id"
SQS Queue:
components:
  terraform:
    sqs-queue/opennext:
      vars:
        kms_master_key_id: "arn:aws:kms:us-west-2:123456789:key/your-key-id"
EBS Volumes (Karpenter nodes):
components:
  terraform:
    eks/sysbox-node-pool:
      vars:
        node_pools:
          sysbox:
            block_device_mappings:
              - deviceName: /dev/sda1
                ebs:
                  volumeSize: 300Gi
                  volumeType: gp3
                  encrypted: true
                  kmsKeyId: "arn:aws:kms:us-west-2:123456789:key/your-key-id"
                  deleteOnTermination: true

KMS Key Policy Requirements

Your CMK key policy must allow the following principals:
  • S3: The S3 service principal needs kms:GenerateDataKey* and kms:Decrypt
  • DynamoDB: The DynamoDB service principal needs kms:Encrypt, kms:Decrypt, kms:GenerateDataKey*
  • SQS: The SQS service principal and your IAM role need encrypt/decrypt permissions
  • EC2/EBS: The EC2 service principal and the Karpenter node IAM role need kms:CreateGrant, kms:Decrypt, kms:GenerateDataKey*
Example key policy statement for cross-service access:
{
  "Sid": "AllowServiceAccess",
  "Effect": "Allow",
  "Principal": {
    "Service": [
      "s3.amazonaws.com",
      "dynamodb.amazonaws.com",
      "sqs.amazonaws.com",
      "ec2.amazonaws.com"
    ]
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:GenerateDataKey*",
    "kms:CreateGrant",
    "kms:DescribeKey"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "kms:CallerAccount": "123456789012"
    }
  }
}

Key Rotation

  • AWS-managed keys: Automatically rotated annually
  • Customer-managed keys: Enable automatic rotation in KMS (rotates annually) or manage rotation manually
  • Recommendation: Enable automatic key rotation for all CMKs used with Context
# Enable automatic rotation for a CMK
aws kms enable-key-rotation --key-id your-key-id

Troubleshooting

AWS Authentication

# Verify credentials
aws sts get-caller-identity

# Configure if needed
aws configure
# or
aws sso login

Kubernetes Connection

# Update kubeconfig
aws eks update-kubeconfig --name <cluster> --region <region>

# Verify access
kubectl get nodes

Terraform State Lock

If you see “Error acquiring state lock”:
# Wait for other processes to finish, or force unlock
atmos terraform force-unlock <component> -s <stack> <lock-id>

Helm Issues

# Check release status
helm list -n valinor
helm history valinor -n valinor

# Reset if stuck
helm uninstall valinor -n valinor
atmos bootstrap valinor

Support

For issues with this infrastructure bundle, contact your Context account representative.