
Deploy on AWS

This guide uses AWS Aurora as the database, eksctl to deploy the application, and the AWS Load Balancer Controller and External DNS to implement Ingress resources.

Production warning#

Note that these instructions are intended as a guide; your needs will dictate how to deploy Papercups into your environment. Some considerations to take into account when deploying to production:

  • Securing the database's credentials, encryption at rest and in transit, backups and disaster planning, etc.
  • We reuse the same subnets for application-to-database communication as for ALB-to-application communication. You may wish to separate these subnets.
  • We enable database network connectivity from the entire node pool. You may wish to use amazon-vpc-cni-k8s to attach security groups to the pods.
  • You may wish to enable SSL support.
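For the SSL point, one possible approach (a sketch only; the certificate ARN below is a placeholder, and the annotations assume the AWS Load Balancer Controller installed later in this guide) is to terminate TLS at the ALB with an ACM certificate:

```shell
# Sketch: a hypothetical values overlay that terminates TLS at the ALB.
# The certificate ARN is a placeholder; substitute one issued by ACM for your domain.
cat > values-ssl.yaml <<'EOF'
ingress:
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:123456789012:certificate/placeholder
EOF
```

You would then apply the overlay on top of the release created in the Deploying Papercups section below, with helm upgrade papercups-release papercups/papercups -n papercups --reuse-values -f values-ssl.yaml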

Your pre-existing infrastructure will dictate what steps you must perform or skip.

This guide is not meant to dictate architecture; it merely represents one possible path. You might choose Aurora, as described here, or an RDS instance, or the Bitnami PostgreSQL chart. You might choose EKS or ECS, with eksctl or with kops. There are many variables to choose from, and this approach should be treated as the start of a conversation. Please find us on Slack to chat!


Step By Step Installation#

This guide presumes a clean AWS account with no resources.

Database Setup (Aurora)#

  1. Create Security Group for Aurora.
    # Create a VPC security group for Aurora
    aws ec2 create-security-group \
    --description "Allow connections to Aurora" \
    --group-name "papercups-db"
  2. Create an RDS instance for your cluster
    # Create the Aurora instance
    # WARNING: In production, you probably do not want AutoPause enabled
    aws rds create-db-cluster \
    --engine aurora-postgresql \
    --db-cluster-identifier papercups \
    --engine-mode serverless \
    --scaling-configuration MinCapacity=2,MaxCapacity=4,SecondsUntilAutoPause=300,AutoPause=true \
    --master-username papercups \
    --master-user-password changeit \
    --vpc-security-group-ids $(aws ec2 describe-security-groups --group-names "papercups-db" --query 'SecurityGroups[0].GroupId' --output text)
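Aurora Serverless clusters take a short while to become available. Before moving on, you can poll the status and capture the writer endpoint (the same query used later to build DATABASE_URL):

```shell
# Should print "available" once the cluster is ready
aws rds describe-db-clusters \
  --db-cluster-identifier papercups \
  --query 'DBClusters[0].Status' \
  --output text
# The endpoint your DATABASE_URL will point at
aws rds describe-db-clusters \
  --db-cluster-identifier papercups \
  --query 'DBClusters[0].Endpoint' \
  --output text
```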

EKS Setup#

  1. Create the cluster with the subnets from the "default" db subnet group.
    eksctl create cluster \
    --name papercups \
    --version=1.19 \
    --enable-ssm \
    --vpc-public-subnets=$(aws rds describe-db-subnet-groups --db-subnet-group-name "default" --query 'DBSubnetGroups[-1].Subnets[*].SubnetIdentifier' --output text | perl -pe 's/\h/,/g')
  2. Enable the Node Group security group to communicate with the Aurora cluster.
    # Enable Node Group to DB communication
    aws ec2 authorize-security-group-ingress \
    --group-name "papercups-db" \
    --source-group $(eksctl utils describe-stacks --cluster=papercups -o json -v 0 | jq --raw-output '.[].Outputs[] | select(.OutputKey == "SharedNodeSecurityGroup").OutputValue') \
    --protocol tcp \
    --port 5432

Setup the AWS Load Balancer Controller#

  1. Tag the subnets as usable by the ELB.

    aws ec2 create-tags \
    --resources $(aws rds describe-db-subnet-groups --db-subnet-group-name "default" --query 'DBSubnetGroups[-1].Subnets[*].SubnetIdentifier' --output text) \
    --tags Key=kubernetes.io/role/elb,Value=1
  2. Download AWSLoadBalancerControllerIAMPolicy

    # Download iam_policy.json from the aws-load-balancer-controller release you plan to install
    curl -o iam_policy.json
  3. Create AWSLoadBalancerControllerIAMPolicy

    aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json
  4. Create IAM Open ID Connect provider

    eksctl utils associate-iam-oidc-provider --region=us-west-2 --cluster=papercups --approve
  5. Install the AWS Load Balancer Controller

    eksctl create iamserviceaccount \
    --cluster=papercups \
    --namespace=kube-system \
    --name=aws-load-balancer-controller \
    --attach-policy-arn=arn:aws:iam::$(aws sts get-caller-identity --query 'Account' --output text):policy/AWSLoadBalancerControllerIAMPolicy \
    --override-existing-serviceaccounts \
    --approve
  6. Check to see if the controller is installed.

    kubectl get deployment -n kube-system alb-ingress-controller
  7. If step 6 returned a deployment, the deprecated ALB Ingress Controller is already installed; follow the AWS Load Balancer Controller migration documentation before continuing.

  8. Install the TargetGroupBinding CRD

    kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
  9. Add the EKS-charts repository

    helm repo add eks https://aws.github.io/eks-charts
  10. Use Helm to install the AWS Load Balancer Controller to the kube-system namespace

    helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
    --set clusterName=papercups \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-load-balancer-controller \
    -n kube-system
  11. Verify the controller is installed

    kubectl get deployment -n kube-system aws-load-balancer-controller

Setup the External DNS Controller#

If you choose to skip this section, then after Papercups is deployed and its Ingress has been created, add a CNAME or ALIAS record for your domain that points to the load balancer address reported by: kubectl --namespace papercups describe ing papercups
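As a sketch of that manual record (the domain, hosted zone ID, and load balancer hostname below are all hypothetical; substitute the values from your zone and the Ingress output):

```shell
# Sketch: point app.example.com (hypothetical) at the ALB via a CNAME record.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "k8s-papercups-0000.us-west-2.elb.amazonaws.com"}]
      }
    }]
  }'
```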

  1. Create the IAM policy.

    You may wish to fine-tune this policy document to only permit explicit Hosted Zone IDs.

    # Press Ctrl+D after pasting this.
    cat > policy-document.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "route53:ChangeResourceRecordSets"
          ],
          "Resource": [
            "arn:aws:route53:::hostedzone/*"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "route53:ListHostedZones",
            "route53:ListResourceRecordSets"
          ],
          "Resource": [
            "*"
          ]
        }
      ]
    }
    # Create AllowExternalDNSUpdates
    aws iam create-policy \
    --policy-name AllowExternalDNSUpdates \
    --policy-document file://policy-document.json
  2. Create the service account.

    # Create the External DNS Controller service account
    eksctl create iamserviceaccount \
    --cluster=papercups \
    --namespace=kube-system \
    --name=external-dns-controller \
    --attach-policy-arn=arn:aws:iam::$(aws sts get-caller-identity --query 'Account' --output text):policy/AllowExternalDNSUpdates \
    --override-existing-serviceaccounts \
    --approve
  3. If you need to create the hosted zone, do so now, and take note of the command's output. If you are unfamiliar with DNS management, review the Route 53 documentation first.

    # Create the hosted zone
    aws route53 create-hosted-zone --name "" --caller-reference "papercupsexample-com-$(date +%s)"
  4. In case you forget, this is how you recall the HostedZoneID and Name Servers for the zone.

    # This is the Hosted Zone ID.
    aws route53 list-hosted-zones-by-name --output text --dns-name "" --query 'HostedZones[0].Id'
    # These are the Name Servers for the new zone. You'll want to update your Registrar with these.
    aws route53 list-resource-record-sets \
    --output text \
    --hosted-zone-id $(aws route53 list-hosted-zones-by-name --output text --dns-name "" --query 'HostedZones[0].Id') \
    --query "ResourceRecordSets[?Type == 'NS'].[*][0][0][3][*].Value"
  5. Deploy the external-dns-controller

    # Add the bitnami repository
    helm repo add bitnami https://charts.bitnami.com/bitnami
    # Install the External DNS Controller
    # The service account created by eksctl above already carries the IAM role annotation.
    helm upgrade -i external-dns-controller bitnami/external-dns \
    --set provider=aws \
    --set txtPrefix=external-dns-controller@papercups \
    --set domainFilters[0]="" \
    --set serviceAccount.create=false \
    --set serviceAccount.name=external-dns-controller \
    -n kube-system
    # Verify the controller is installed
    kubectl get deployment -n kube-system external-dns-controller

Deploying Papercups#

  1. Deploy the application using helm
    helm repo add papercups
    helm upgrade -i papercups-release papercups/papercups \
    --create-namespace \
    --namespace papercups \
    --set ingress.annotations."kubernetes\.io/ingress\.class"=alb \
    --set ingress.annotations."alb\.ingress\.kubernetes\.io/scheme"=internet-facing \
    --set ingress.annotations."alb\.ingress\.kubernetes\.io/target-type"=instance \
    --set ingress.enabled=true \
    --set ingress.hosts[0].host="" \
    --set ingress.hosts[0].paths[0]=\/ \
    --set secrets.DATABASE_URL="ecto://papercups:changeit@$(aws rds describe-db-clusters --db-cluster-identifier papercups --query 'DBClusters[0].Endpoint' | sed -e 's/\"//g')/papercups" \
    --set service.type="NodePort"
  2. Follow up to check on the status of the deployment.
    kubectl logs -n papercups $(kubectl get po -n papercups | grep Running | egrep -o 'papercups-release[a-zA-Z0-9-]+')
  3. Follow up to check on the status of the ingress controller
    kubectl --namespace papercups describe ing papercups
    # Observe the logs of the aws-load-balancer-controller if you are having trouble
    kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o 'aws-load-balancer-controller[a-zA-Z0-9-]+')
    # Observe the logs of the external-dns-controller if you are having trouble
    kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o 'external-dns-controller[a-zA-Z0-9-]+')
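The DATABASE_URL assembled in step 1 is an Ecto connection URL of the form ecto://USER:PASSWORD@HOST/DATABASE. As a sketch, with a hypothetical cluster endpoint standing in for the describe-db-clusters output:

```shell
# Hypothetical endpoint standing in for the live describe-db-clusters output
ENDPOINT="papercups.cluster-abc123.us-west-2.rds.amazonaws.com"
DATABASE_URL="ecto://papercups:changeit@${ENDPOINT}/papercups"
echo "$DATABASE_URL"
# -> ecto://papercups:changeit@papercups.cluster-abc123.us-west-2.rds.amazonaws.com/papercups
```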

Deleting the deployment#

  1. Delete the EKS cluster
    eksctl delete cluster --name papercups
  2. Delete the RDS instance
    aws rds delete-db-cluster --db-cluster-identifier papercups --skip-final-snapshot
  3. Empty the DNS Hosted Zone
    aws route53 change-resource-record-sets --hosted-zone-id $(aws route53 list-hosted-zones-by-name --output text --dns-name "" --query 'HostedZones[0].Id') --change-batch file://<(aws route53 list-resource-record-sets --output json --hosted-zone-id $(aws route53 list-hosted-zones-by-name --output text --dns-name "" --query 'HostedZones[0].Id') --query '{Changes: ResourceRecordSets[?Name!=``].{Action: `DELETE`, ResourceRecordSet: @}}')
  4. Delete the DNS Hosted Zone
    aws route53 delete-hosted-zone --id $(aws route53 list-hosted-zones-by-name --output text --dns-name "" --query 'HostedZones[0].Id')
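To confirm the teardown completed, a couple of sanity checks (the second command is expected to fail with DBClusterNotFoundFault once deletion finishes):

```shell
# Should list no clusters once deletion finishes
eksctl get cluster
# Expected to fail with DBClusterNotFoundFault after the cluster is gone
aws rds describe-db-clusters --db-cluster-identifier papercups
```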


