Scope
The core infrastructure bundle does not create or modify your VPC or EKS cluster. It assumes an existing EKS cluster and provisions only the supporting AWS services required prior to application installation:
- S3 - Cache storage bucket
- DynamoDB - Cache metadata table
- SQS - Revalidation queue
- SSM - Parameter store entries
- IAM - IRSA role for pod identity
Security Posture
- IRSA-based access - IAM permissions use Kubernetes service account identity (IRSA), not static keys
- Scoped permissions - IAM policies are limited to the specific resources created by this bundle
- Encryption configurable - Default AWS-managed encryption; customer-managed KMS keys supported (see Encryption Options)
Prerequisites
- AWS Account with permissions to create S3, DynamoDB, SQS, and IAM resources
- AWS CLI configured with credentials
- **Karpenter controller v1.1.1+**
- Karpenter CRDs v1 API (not v1alpha5 or v1beta1)
- EKS cluster v1.31+
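A quick preflight check along these lines can confirm the version prerequisites (assumes `kubectl` is already pointed at the target cluster; the CRD name reflects Karpenter's v1 API group):

```shell
# Report the Kubernetes server version (should be v1.31 or later)
kubectl version

# Confirm the Karpenter NodePool CRD serves the v1 API (not v1alpha5/v1beta1)
kubectl get crd nodepools.karpenter.sh \
  -o jsonpath='{.spec.versions[?(@.served==true)].name}'
```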
Quick Start (6 Steps)
1. Install Tools
Not using Homebrew? See Manual Installation below.
2. Run Bootstrap
The bootstrap script installs the tool versions pinned in `.tool-versions`. Skip this if you already have the tools installed.
3. Configure Environment
The configure command will:
- Ask for your EKS cluster name (required)
- Ask for your namespace (e.g., `acme` - your organization identifier)
- Confirm your AWS region and account ID (auto-detected)
- Derive the environment from the region (e.g., `us-west-2` → `usw2`)
- Create the Terraform state backend (S3 bucket + DynamoDB table)
The stack name is formed as `{namespace}-{environment}` (e.g., `acme-usw2`) and the state bucket as `{namespace}-{environment}-tfstate`.
What gets configured: The `atmos/stacks/valinor/_config.yaml` file is updated with your environment settings. This is imported by the Atmos stack configuration that defines variables and backend settings for all Terraform components.
4. Deploy Infrastructure
Preview first? Run `atmos bootstrap terraform --plan`.
Deploying Individual Components
To deploy a single component instead of all at once, target it directly, replacing `acme-dev` with your stack name (`namespace-environment`). Available components:
OpenNext (backing services):
- `s3-bucket/opennext` - S3 cache bucket
- `dynamodb/opennext` - DynamoDB table
- `sqs-queue/opennext` - SQS revalidation queue
- `iam-role/opennext` - IAM role for pod access
- `ssm-parameters/opennext` - SSM parameters
EKS (cluster components):
- `eks/karpenter` - Karpenter autoscaler
- `eks/karpenter-node-pool` - Karpenter node pool configuration
- `eks/sysbox-runtime` - Sysbox container runtime
- `eks/sysbox-deployment` - Sysbox DaemonSet deployment
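For example, deploying just the cache bucket against an illustrative `acme-dev` stack would use the standard Atmos per-component syntax (component and stack names come from the lists above):

```shell
# Deploy one component into one stack: atmos terraform apply <component> -s <stack>
atmos terraform apply s3-bucket/opennext -s acme-dev

# Preview the same change without applying it
atmos terraform plan s3-bucket/opennext -s acme-dev
```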
Reference
Command Summary
| Command | Description |
|---|---|
| `atmos bootstrap` | Show quick start guide and check prerequisites |
| `atmos bootstrap configure` | Configure environment + create state backend |
| `atmos bootstrap terraform` | Deploy AWS infrastructure |
| `atmos bootstrap terraform --plan` | Preview infrastructure changes |
Configuration
All configuration is managed through Atmos stack files. The `atmos bootstrap configure` command updates `atmos/stacks/valinor/_config.yaml` with your environment settings:
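The resulting file might look roughly like this (the keys shown are illustrative; `atmos bootstrap configure` writes the actual structure):

```yaml
# atmos/stacks/valinor/_config.yaml (illustrative keys)
vars:
  namespace: acme        # your organization identifier
  environment: usw2      # derived from the region
  region: us-west-2
terraform:
  backend:
    s3:
      bucket: acme-usw2-tfstate   # {namespace}-{environment}-tfstate
```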
Understanding Atmos
This bundle uses Atmos for infrastructure orchestration. In practice, it functions as a Helm and Terraform orchestrator.
| Concept | Description | Documentation |
|---|---|---|
| Stacks | Configuration that defines what to deploy | Stacks |
| Components | Reusable Terraform modules | Components |
| Vendoring | Pull components from upstream sources | Vendoring |
Key Files
| File | Purpose |
|---|---|
| `atmos.yaml` | Atmos CLI configuration |
| `atmos/stacks/valinor/_config.yaml` | Your customizable settings (namespace, region, backend) |
| `atmos/stacks/valinor/_defaults.yaml` | Stack defaults (imports `_config.yaml`) |
| `atmos/stacks/valinor/opennext.yaml` | OpenNext component configuration |
| `atmos/stacks/workflows/*.yaml` | Deployment workflows |
| `atmos/components/terraform/` | Terraform component code |
Manual Installation
If not using Homebrew, install these tools manually:
See `.tool-versions` for specific versions.
VPC Requirements
This bundle does not create or modify your VPC. It assumes:
- An existing AWS account with appropriate permissions
- An existing EKS cluster already reachable via `kubectl`
- Networking already configured (VPC, subnets, security groups)
Private Subnets Without NAT Gateway
If your EKS worker nodes run in private subnets without a NAT gateway, you must configure VPC endpoints for AWS service access:
| Endpoint Type | Service | Required For |
|---|---|---|
| Gateway | com.amazonaws.<region>.s3 | S3 cache bucket access |
| Gateway | com.amazonaws.<region>.dynamodb | DynamoDB table access |
| Interface | com.amazonaws.<region>.sqs | SQS revalidation queue |
| Interface | com.amazonaws.<region>.ssm | SSM parameter retrieval |
| Interface | com.amazonaws.<region>.sts | IAM role assumption (IRSA) |
| Interface | com.amazonaws.<region>.ecr.api | Container image pulls |
| Interface | com.amazonaws.<region>.ecr.dkr | Container image pulls |
| Interface | com.amazonaws.<region>.logs | CloudWatch logging (if enabled) |
| Interface | com.amazonaws.<region>.eks | EKS API access |
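Creating one gateway and one interface endpoint with the AWS CLI might look like the following (region, VPC, route table, subnet, and security group IDs are placeholders):

```shell
# Gateway endpoint for S3, attached to the private route tables
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-west-2.s3 \
  --route-table-ids rtb-0123456789abcdef0

# Interface endpoint for SQS, placed in the private subnets
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-west-2.sqs \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled
```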
Security Group Requirements
Ensure your EKS worker node security groups allow:
- Outbound HTTPS (443) to VPC endpoints or NAT gateway
- Inbound from VPC endpoint ENIs (for interface endpoints)
- EKS cluster security group communication (already configured if EKS is working)
Air-Gapped / Restricted Networks
For environments without direct internet access, there are two supported ways to pull images:
- Direct from Replicated (default) - Kubernetes pulls images directly from `registry.replicated.com`.
- Artifactory pull-through cache - Artifactory proxies Replicated and can fall back to Docker Hub.
Option A: Direct from Replicated (default)
If your cluster can reach the endpoints listed above, follow the normal flow (including `atmos bootstrap valinor`). If you are fully air-gapped:
- Mirror Container Images: Pull required images and push to your internal registry (ECR, Harbor, etc.)
- Mirror Helm Charts: Download charts and host in an internal chart repository
- Configure Image Overrides: Update `values.yaml` to point to your internal registry
- Replicated Air-Gap Bundle: Contact your Context account representative for air-gap installation bundles
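The image override in step 3 might look roughly like this (the registry host and value keys are illustrative; check the chart's `values.yaml` for the actual structure):

```yaml
# Illustrative values.yaml override pointing at an internal mirror
image:
  registry: registry.internal.example.com
  repository: valinor/app
imagePullSecrets:
  - name: internal-registry-credentials
```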
Option B: Artifactory pull-through cache (Replicated + Docker Hub fallback)
Step 1: Configure Artifactory (customer side)
Create a Remote repository:
- Type: Docker Registry
- URL: `https://proxy.replicated.com`
- Username: `<customer-email>`
- Password: `<license-id>`
Note: Virtual repositories resolve in their configured order. Put the Replicated remote first, then Docker Hub.
Note: Fallback only works when the requested image name exists in the fallback registry. If you keep `proxy/valinor/docker.io/...` paths, those requests will only resolve via Replicated.
After configuration, images are accessible through `<repo-key>.<artifactory-host>`.
Step 2: Create image pull secret
Set `--docker-server=<repo-key>.<artifactory-host>` to match your image host.
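The pull secret can be created with standard `kubectl` (the secret name, namespace, and credential placeholders are illustrative):

```shell
kubectl create secret docker-registry artifactory-pull \
  --namespace <app-namespace> \
  --docker-server=<repo-key>.<artifactory-host> \
  --docker-username=<artifactory-user> \
  --docker-password=<artifactory-token>
```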
Step 3: Create values override file
Create `values-artifactory.yaml` to override registries and add the pull secret:
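A minimal sketch, assuming the chart exposes a registry value and a pull-secret list (the key names are assumptions):

```yaml
# values-artifactory.yaml (illustrative)
image:
  registry: <repo-key>.<artifactory-host>
imagePullSecrets:
  - name: <pull-secret-name>
```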
Encryption Options
By default, all resources use AWS-managed encryption keys. For organizations requiring customer-managed keys (CMKs), you can configure KMS keys for each resource type.
Default Encryption
| Resource | Default Encryption |
|---|---|
| S3 buckets | SSE-S3 (AES-256) |
| DynamoDB tables | AWS-owned key |
| SQS queues | SSE-SQS |
| EBS volumes | AWS-managed key |
Customer-Managed Keys (CMK Mode)
To use your own KMS keys, update the component configurations in your stack file (for example, the S3 bucket component).
KMS Key Policy Requirements
Your CMK key policy must allow the following principals:
- S3: The S3 service principal needs `kms:GenerateDataKey*` and `kms:Decrypt`
- DynamoDB: The DynamoDB service principal needs `kms:Encrypt`, `kms:Decrypt`, `kms:GenerateDataKey*`
- SQS: The SQS service principal and your IAM role need encrypt/decrypt permissions
- EC2/EBS: The EC2 service principal and the Karpenter node IAM role need `kms:CreateGrant`, `kms:Decrypt`, `kms:GenerateDataKey*`
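A key policy statement granting S3 the listed actions could look like this (a sketch only; scope `Resource` and add conditions per your security policy):

```json
{
  "Sid": "AllowS3UseOfKey",
  "Effect": "Allow",
  "Principal": { "Service": "s3.amazonaws.com" },
  "Action": ["kms:GenerateDataKey*", "kms:Decrypt"],
  "Resource": "*"
}
```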
Key Rotation
- AWS-managed keys: Automatically rotated annually
- Customer-managed keys: Enable automatic rotation in KMS (rotates annually) or manage rotation manually
- Recommendation: Enable automatic key rotation for all CMKs used with Context
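Automatic rotation for a CMK can be enabled and verified with the AWS CLI (the key ID is a placeholder):

```shell
# Enable annual automatic rotation for a customer-managed key
aws kms enable-key-rotation --key-id <kms-key-id>

# Confirm rotation is now enabled
aws kms get-key-rotation-status --key-id <kms-key-id>
```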