3-tier application deployment on AWS using Terraform and Docker

Oyunbileg Davaanyam
13 min read · Dec 16, 2022


In this blog post, we will build a highly available application on top of AWS cloud services. Basic AWS understanding will help you follow this tutorial, but if you have no experience with AWS, you can still start here right away.

To make the infrastructure configuration more illustrative, let’s imagine a scenario where the cloud is very much needed!

Scenario description

We will design an architecture for a vocational training web application, where learners will be able to take courses in the form of pre-recorded videos. This web app is intended to solve the problem of people in rural areas who want to take vocational training courses but cannot due to the over-centralization of training centers in the capital city of Mongolia.

Requirements:

  1. High availability is needed, as there should be no interruption while a learner is watching pre-recorded lessons
  2. The server should always be up so users can access courses anytime
  3. It must be easy to scale up or down, because we do not know the traffic load yet
  4. The app servers that talk to the database should not be accessible to the rest of the world, to ensure security (intellectual property, user data)
  5. There should be no single point of failure (especially when uploading videos from the course provider interface)

Architecture

By the end of this tutorial, we will have deployed the following architecture. All the services and deployments run on AWS to ensure consistency and a smoother experience. The trade-off is that the app becomes heavily dependent on AWS: if AWS fails due to unexpected events, the app fails too. The main technologies used are AWS, Terraform, and Docker.

Rough Architecture on AWS: https://drive.google.com/file/d/1apt_10VxAyF_w-eYLPfnzWcqnIhBZOJp/view?usp=sharing

We will follow the three-tier architecture design. Three-tier architecture organizes applications into three logical and physical computing tiers: the presentation tier (web server), the application tier (app server), and the data tier, where the data is stored and managed.

Our application

Our application consists of two tiers that will be deployed to EC2 instances as Docker containers:

  • presentation tier (web server) — this is where the user interacts with the website
  • application tier (app server) — this is where our business logic lives

In other words, the presentation tier forwards requests from the user to the app server, which in turn runs queries on the RDS instance to fetch the lesson records. Our database will be a relational database with a MySQL engine, much as YouTube uses MySQL to store its video metadata. Further infrastructure details will be discussed in a bit.

Main directory structure

As a placeholder, we can have a simple app that returns “Hello World”. Each tier lives in its own directory, and each has a Dockerfile, shown after the layout sketch below, that installs the dependencies and bundles the app source code.
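A plausible layout, pieced together from the paths used later in this post (the file names inside each app directory are assumptions):

.
├── application-tier/
│   ├── Dockerfile
│   ├── package.json
│   └── src/
├── presentation-tier/
│   ├── Dockerfile
│   ├── package.json
│   └── src/
├── user-data/
│   ├── user-data-presentation-tier.sh
│   └── user-data-application-tier.sh
└── terraform/
    ├── main.tf
    ├── vpc.tf
    ├── ec2.tf
    ├── lb.tf
    ├── autoscaling-groups.tf
    ├── nat-gateways.tf
    ├── eip.tf
    ├── rds.tf
    ├── cloud-front.tf
    ├── variables.tf
    └── outputs.tf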

# Dockerfile

# Select node version and set working directory
FROM node:17-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app

RUN npm install

# Bundle app source
COPY . /usr/src/app

# Expose public port and run npm command
EXPOSE 3000
CMD ["npm", "start"]

Before the infrastructure deployment, we will build images for these applications and push them to separate ECR (Amazon Elastic Container Registry) repositories. For this, make sure you have the AWS CLI installed and configured locally. We also need Docker to be running.

1. Log in to ECR: replace the region and AWS account ID. If you don’t know where to find your account ID, please refer to this page.

aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com

2. Create ECR repositories: replace ECR_APPLICATION_REPO_NAME with your own repository name. This command describes the ECR repository if it already exists; otherwise, it creates a new repository with the specified name.

ECR_APPLICATION_REPO_NAME=app-application-tier
aws ecr describe-repositories --repository-names ${ECR_APPLICATION_REPO_NAME} || aws ecr create-repository --repository-name ${ECR_APPLICATION_REPO_NAME}

Then, we will do the same for the presentation tier.

ECR_PRESENTATION_REPO_NAME=app-presentation-tier
aws ecr describe-repositories --repository-names ${ECR_PRESENTATION_REPO_NAME} || aws ecr create-repository --repository-name ${ECR_PRESENTATION_REPO_NAME}

3. Build and push the images for each tier: this reuses the ECR_APPLICATION_REPO_NAME you specified earlier (jq is used to extract the repository URI from the AWS CLI output).

cd ./application-tier/
ECR_APPLICATION_TIER_REPO=$(aws ecr describe-repositories --repository-names ${ECR_APPLICATION_REPO_NAME} | jq -r '.repositories[0].repositoryUri')
docker build -t app-application-tier .
docker tag app-application-tier:latest $ECR_APPLICATION_TIER_REPO:latest
docker push $ECR_APPLICATION_TIER_REPO:latest

Then, let’s do the same for the presentation tier.

cd ../presentation-tier/
ECR_PRESENTATION_TIER_REPO=$(aws ecr describe-repositories --repository-names ${ECR_PRESENTATION_REPO_NAME} | jq -r '.repositories[0].repositoryUri')
docker build -t app-presentation-tier .
docker tag app-presentation-tier:latest $ECR_PRESENTATION_TIER_REPO:latest
docker push $ECR_PRESENTATION_TIER_REPO:latest

Now that we have built and pushed our app images to ECR, we can start with the infrastructure deployment. For this, we will use Terraform, an open-source infrastructure-as-code tool that makes infrastructure deployment and management easy.

Terraform resources

Terraform is declarative: we define the desired state, and Terraform takes care of provisioning and other infrastructure tasks.

The following resources will be created through Terraform.

  • autoscaling-groups.tf -> creates the autoscaling groups
  • cloud-front.tf -> creates an S3 bucket and defines the CloudFront distribution for caching
  • ec2.tf -> creates the compute instances
  • eip.tf -> creates the Elastic IP addresses that allow redirecting traffic to another instance in the event of a failure
  • lb.tf -> creates the application load balancers, listeners, and target groups
  • main.tf -> declares the providers to use
  • nat-gateways.tf -> creates NAT gateways in the public subnets so that instances in the private subnets can reach the internet
  • outputs.tf -> outputs the deployment URL
  • rds.tf -> creates an RDS database instance with MySQL as the engine
  • variables.tf -> declares the variables used in the different resources
  • vpc.tf -> creates the VPC, subnets, internet gateway, security groups, and route tables

***

Creating the infrastructure

(all the default values for variables are declared in variables.tf)
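The full variables.tf is not reproduced in this post, but a minimal sketch of the variables the following snippets rely on could look like this (all default values here are assumptions, except the main CIDR block and the ECR repository names used earlier):

# variables.tf (sketch)

variable "region" {
  default = "us-east-1" # assumption: pick your own region
}

variable "main_cidr_block" {
  default = "10.0.0.0/16"
}

variable "public_cidr_blocks" {
  type    = list(string)
  default = ["10.0.1.0/24", "10.0.2.0/24"] # assumption: two public subnets
}

variable "private_cidr_blocks" {
  type    = list(string)
  default = ["10.0.101.0/24", "10.0.102.0/24"] # assumption: two private subnets
}

variable "ecr_presentation_tier" {
  default = "app-presentation-tier"
}

variable "ecr_application_tier" {
  default = "app-application-tier"
}

The RDS settings (rds_db_admin, rds_db_password, db_name, allocated_storage, engine_version, instance_class, db_engine) are declared the same way.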

First, we define the cloud provider to use, which is AWS in our case.

# main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.5.0"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = var.region
}

Then, we create a VPC resource named main that is declared with a CIDR block of “10.0.0.0/16”.

# vpc.tf

# Declare the VPC
resource "aws_vpc" "main" {
  cidr_block = var.main_cidr_block

  # NAT gateways are created separately in nat-gateways.tf
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "main"
  }
}

As we have two availability zones, we will have two public and two private subnets in our VPC, as described in the architecture diagram. The web server will be in the public subnets, while the app server will be in the private subnets.

# vpc.tf

# Look up the availability zones in the current region
data "aws_availability_zones" "available" {
  state = "available"
}

# Create public subnets in the first two availability zones
resource "aws_subnet" "public_subnets" {
  count                   = length(var.public_cidr_blocks)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_cidr_blocks[count.index]
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "public_subnet_${count.index + 1}"
  }
}

# Create private subnets in the first two availability zones
resource "aws_subnet" "private_subnets" {
  count                   = length(var.private_cidr_blocks)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.private_cidr_blocks[count.index]
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = false

  tags = {
    Name = "private_subnet_${count.index + 1}"
  }
}

To allow traffic between the load balancers and the public and private instances, we will create security groups for the application and presentation tiers, as well as for their application load balancers. Below, we create the security groups for the presentation tier and its load balancer; the application tier and its load balancer need analogous security groups, sketched after this code.

# vpc.tf

# Create security group for presentation tier
resource "aws_security_group" "presentation_tier" {
  name        = "presentation_tier_connection"
  description = "Allow HTTP requests"
  vpc_id      = aws_vpc.main.id

  ingress {
    description     = "HTTP from the presentation tier load balancer"
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_presentation_tier.id]
  }

  ingress {
    description     = "App port from the presentation tier load balancer"
    from_port       = 3000
    to_port         = 3000
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_presentation_tier.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "presentation_tier_sg"
  }
}

# Create security group for presentation tier load balancer
resource "aws_security_group" "alb_presentation_tier" {
  name        = "alb_presentation_tier_connection"
  description = "Allow HTTP requests"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "HTTP from anywhere"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    description      = "App port from anywhere"
    from_port        = 3000
    to_port          = 3000
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "alb_presentation_tier_sg"
  }
}
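For completeness, here is one way the application-tier security groups could look, mirroring the pattern above (a minimal sketch; rule descriptions and names are assumptions). It also satisfies requirement 4: the app servers are only reachable from their own internal load balancer, which in turn only accepts traffic from the presentation tier.

# vpc.tf (sketch)

# Security group for the application tier: only reachable from its own load balancer
resource "aws_security_group" "application_tier" {
  name        = "application_tier_connection"
  description = "Allow HTTP requests from the application tier load balancer"
  vpc_id      = aws_vpc.main.id

  ingress {
    description     = "App port from the application tier load balancer"
    from_port       = 3000
    to_port         = 3000
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_application_tier.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Security group for the internal application tier load balancer:
# only reachable from the presentation tier instances
resource "aws_security_group" "alb_application_tier" {
  name        = "alb_application_tier_connection"
  description = "Allow HTTP requests from the presentation tier"
  vpc_id      = aws_vpc.main.id

  ingress {
    description     = "HTTP from the presentation tier"
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.presentation_tier.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}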

Now, we create an internet gateway to give the EC2 instances in the public subnets access to the internet. We also need route tables for resources in the public and private subnets. A route table contains a set of routes that determine where network traffic from a subnet or gateway is directed, and route table associations bind a route table to a subnet (or to an internet or virtual private gateway).
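The route tables below reference this gateway as aws_internet_gateway.gw; its definition is essentially a one-liner (the tag is an assumption):

# vpc.tf

# Create the internet gateway for the VPC
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "main_igw"
  }
}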

# vpc.tf

# Create route tables and route table associations
resource "aws_route_table" "public_route" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  tags = {
    Name = "public_route"
  }
}

resource "aws_route_table_association" "public_route_association" {
  count          = length(var.public_cidr_blocks)
  subnet_id      = aws_subnet.public_subnets[count.index].id
  route_table_id = aws_route_table.public_route.id
}

resource "aws_route_table" "private_route" {
  count  = length(aws_subnet.private_subnets)
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.gw[count.index].id
  }

  tags = {
    Name = "private_route_${count.index + 1}"
  }
}

resource "aws_route_table_association" "private-route" {
  count          = length(var.private_cidr_blocks)
  subnet_id      = aws_subnet.private_subnets[count.index].id
  route_table_id = aws_route_table.private_route[count.index].id
}

To give the instances in the private subnets outbound internet access, we define NAT gateways and the Elastic IP addresses they use.

# nat-gateways.tf

# Create NAT gateways, one per public subnet
resource "aws_nat_gateway" "gw" {
  count         = length(aws_subnet.public_subnets)
  allocation_id = aws_eip.nat_ip[count.index].id
  subnet_id     = aws_subnet.public_subnets[count.index].id
  depends_on    = [aws_internet_gateway.gw]

  tags = {
    "Name" = "nat_gw_${count.index + 1}"
  }
}

# eip.tf

# Create Elastic IP addresses for the NAT gateways
resource "aws_eip" "nat_ip" {
  count      = length(aws_subnet.public_subnets)
  vpc        = true # allocate the address in the VPC
  depends_on = [aws_internet_gateway.gw]

  tags = {
    "Name" = "nat_ip_${count.index + 1}"
  }
}

Creating the application load balancers

Application load balancers are configured for both the application and presentation tiers. They help ensure there is no single point of failure (SPOF): each load balancer spreads traffic across instances in multiple AZs, backed by an autoscaling group, and Route 53 can be added on top to distribute access even further. For each load balancer, we define a listener and a target group.

# lb.tf

# Create a load balancer, listener, and target group for the presentation tier
resource "aws_lb" "front_end" {
  name               = "front-end-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_presentation_tier.id]
  subnets            = aws_subnet.public_subnets.*.id

  enable_deletion_protection = false
}

resource "aws_lb_listener" "front_end" {
  load_balancer_arn = aws_lb.front_end.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.front_end.arn
  }
}

resource "aws_lb_target_group" "front_end" {
  name     = "front-end-lb-tg"
  port     = 3000
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

# Create a load balancer, listener, and target group for the application tier
resource "aws_lb" "application_tier" {
  name               = "application-tier-lb"
  internal           = true
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_application_tier.id]
  subnets            = aws_subnet.private_subnets.*.id

  enable_deletion_protection = false
}

resource "aws_lb_listener" "application_tier" {
  load_balancer_arn = aws_lb.application_tier.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.application_tier.arn
  }
}

resource "aws_lb_target_group" "application_tier" {
  name     = "application-tier-lb-tg"
  port     = 3000
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}
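One note on the target groups: as written, they rely on the provider’s default health check settings. If your app exposes a dedicated health endpoint, you can make the check explicit by adding a block like this inside each aws_lb_target_group (the /health path is an assumption about the app):

  health_check {
    path                = "/health" # assumption: the app serves a health endpoint here
    interval            = 30
    healthy_threshold   = 2
    unhealthy_threshold = 3
  }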

Creating compute instances

We create EC2 instances that will run the Docker images we pushed to ECR earlier. To do so, we need an instance role that gives our EC2 instances access to ECR. We define two launch templates, one for the presentation tier and another for the application tier; they are used when launching the instances with the Docker images.

# user-data/user-data-presentation-tier.sh
#!/bin/bash
# Note: ${region}, ${ecr_url}, ${ecr_repo_name}, and ${application_load_balancer}
# are Terraform templatefile placeholders, filled in at plan time.
sudo yum update -y
sudo yum install docker -y
sudo service docker start
sudo systemctl enable docker
sudo usermod -a -G docker ec2-user
aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${ecr_url}
docker run --restart always -e APPLICATION_LOAD_BALANCER=${application_load_balancer} -p 3000:3000 -d ${ecr_url}/${ecr_repo_name}:latest

This script installs and starts Docker, then logs in to ECR so it can pull and run the presentation-tier container. The analogous application-tier script is sketched after the ec2.tf listing below.

We pass the variables into the user-data template (and on into the Docker container) as follows:

user_data = base64encode(templatefile("./../user-data/user-data-presentation-tier.sh", {
  application_load_balancer = aws_lb.application_tier.dns_name,
  ecr_url                   = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${var.region}.amazonaws.com"
  ecr_repo_name             = var.ecr_presentation_tier,
  region                    = var.region
}))

Using the aws_caller_identity data source, we can get the account ID needed to construct the ECR URL. The following is the complete ec2.tf file.

# ec2.tf

# Find the latest Amazon Linux 2 AMI
data "aws_ami" "amazon_linux_2" {
  most_recent = true

  owners = ["amazon"]

  filter {
    name   = "owner-alias"
    values = ["amazon"]
  }

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm*"]
  }
}

# Get the current caller identity
data "aws_caller_identity" "current" {}

# Define an instance profile for ECR access
resource "aws_iam_instance_profile" "ec2_ecr_connection" {
  name = "ec2_ecr_connection"
  role = aws_iam_role.role.name
}

resource "aws_iam_role" "role" {
  name = "allow_ec2_access_ecr"
  path = "/"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# Create an IAM role policy granting ECR access
# (note: ecr:* on all resources is broad; scope this down in production)
resource "aws_iam_role_policy" "access_ecr_policy" {
  name = "allow_ec2_access_ecr"
  role = aws_iam_role.role.id

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ecr:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}

# Create a launch template for the presentation tier
resource "aws_launch_template" "presentation_tier" {
  name = "presentation_tier"

  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      volume_size = 8
    }
  }

  iam_instance_profile {
    name = aws_iam_instance_profile.ec2_ecr_connection.name
  }

  instance_type = "t2.nano"
  image_id      = data.aws_ami.amazon_linux_2.id

  network_interfaces {
    associate_public_ip_address = true
    security_groups             = [aws_security_group.presentation_tier.id]
  }

  user_data = base64encode(templatefile("./../user-data/user-data-presentation-tier.sh", {
    application_load_balancer = aws_lb.application_tier.dns_name,
    ecr_url                   = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${var.region}.amazonaws.com"
    ecr_repo_name             = var.ecr_presentation_tier,
    region                    = var.region
  }))

  depends_on = [
    aws_lb.application_tier
  ]
}

# Create a launch template for the application tier
resource "aws_launch_template" "application_tier" {
  name = "application_tier"

  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      volume_size = 8
    }
  }

  iam_instance_profile {
    name = aws_iam_instance_profile.ec2_ecr_connection.name
  }

  instance_type = "t2.nano"
  image_id      = data.aws_ami.amazon_linux_2.id

  network_interfaces {
    associate_public_ip_address = false
    security_groups             = [aws_security_group.application_tier.id]
  }

  user_data = base64encode(templatefile("./../user-data/user-data-application-tier.sh", {
    # The RDS instance is defined directly in rds.tf,
    # so we reference aws_db_instance.rds rather than a module output
    rds_hostname  = aws_db_instance.rds.address,
    rds_username  = var.rds_db_admin,
    rds_password  = var.rds_db_password,
    rds_port      = 3306,
    rds_db_name   = var.db_name,
    ecr_url       = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${var.region}.amazonaws.com"
    ecr_repo_name = var.ecr_application_tier,
    region        = var.region
  }))

  depends_on = [
    aws_nat_gateway.gw
  ]
}
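The application-tier user-data script is not shown in the post; based on the variables passed to templatefile above, it plausibly mirrors the presentation-tier script, handing the database settings to the container as environment variables (a sketch; the environment variable names are assumptions):

# user-data/user-data-application-tier.sh (sketch)
#!/bin/bash
sudo yum update -y
sudo yum install docker -y
sudo service docker start
sudo systemctl enable docker
sudo usermod -a -G docker ec2-user
aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${ecr_url}
# Pass the RDS connection settings into the container (variable names are assumptions)
docker run --restart always \
  -e RDS_HOSTNAME=${rds_hostname} \
  -e RDS_USERNAME=${rds_username} \
  -e RDS_PASSWORD=${rds_password} \
  -e RDS_PORT=${rds_port} \
  -e RDS_DB_NAME=${rds_db_name} \
  -p 3000:3000 -d ${ecr_url}/${ecr_repo_name}:latest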

Creating autoscaling groups

When provisioning the EC2 instances for the presentation and application tiers, we create two autoscaling groups, one per tier. They allow AWS to automatically scale the number of instances up or down with traffic; a scaling policy sketch follows the code below.

# autoscaling-groups.tf

# Create autoscaling group for presentation tier
resource "aws_autoscaling_group" "presentation_tier" {
  name                      = "ASG-Presentation-Tier"
  max_size                  = 4
  min_size                  = 2
  health_check_grace_period = 300
  health_check_type         = "EC2"
  desired_capacity          = 2
  vpc_zone_identifier       = aws_subnet.public_subnets.*.id

  launch_template {
    id      = aws_launch_template.presentation_tier.id
    version = "$Latest"
  }

  lifecycle {
    ignore_changes = [load_balancers, target_group_arns]
  }

  tag {
    key                 = "Name"
    value               = "presentation_app"
    propagate_at_launch = true
  }
}

# Create autoscaling group for application tier
resource "aws_autoscaling_group" "application_tier" {
  name                      = "ASG-Application-Tier"
  max_size                  = 4
  min_size                  = 2
  health_check_grace_period = 300
  health_check_type         = "EC2"
  desired_capacity          = 2
  vpc_zone_identifier       = aws_subnet.private_subnets.*.id

  launch_template {
    id      = aws_launch_template.application_tier.id
    version = "$Latest"
  }

  lifecycle {
    ignore_changes = [load_balancers, target_group_arns]
  }

  tag {
    key                 = "Name"
    value               = "application_app"
    propagate_at_launch = true
  }
}

# Attach the autoscaling groups to the ALB target groups
resource "aws_autoscaling_attachment" "presentation_tier" {
  autoscaling_group_name = aws_autoscaling_group.presentation_tier.id
  lb_target_group_arn    = aws_lb_target_group.front_end.arn
}

resource "aws_autoscaling_attachment" "application_tier" {
  autoscaling_group_name = aws_autoscaling_group.application_tier.id
  lb_target_group_arn    = aws_lb_target_group.application_tier.arn
}
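As written, the groups keep a fixed desired capacity of 2; to actually scale with load, you would also attach a scaling policy. A minimal sketch using target tracking on average CPU (the 60% target is an assumption; the application tier would get an analogous policy):

# autoscaling-groups.tf (sketch)

# Scale the presentation tier to hold average CPU around 60%
resource "aws_autoscaling_policy" "presentation_tier_cpu" {
  name                   = "presentation-tier-target-cpu"
  autoscaling_group_name = aws_autoscaling_group.presentation_tier.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60.0
  }
}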

Declaring relational database

As we want to keep track of many video lessons, we need a database. Let us define a relational database with a MySQL engine. All the variables in the code below are defined in the variables.tf file.

# rds.tf

# Create a DB instance
resource "aws_db_instance" "rds" {
  allocated_storage = var.allocated_storage
  engine_version    = var.engine_version
  multi_az          = false
  db_name           = var.db_name
  username          = var.rds_db_admin
  password          = var.rds_db_password
  instance_class    = var.instance_class
  engine            = var.db_engine

  # Skip the final snapshot so terraform destroy can remove the instance
  # without requiring a final_snapshot_identifier
  skip_final_snapshot = true
}
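Note that without a DB subnet group, the instance lands in the default VPC’s subnets rather than ours. To keep the database alongside the app tier in the private subnets, you could add something like this (a sketch):

# rds.tf (sketch)

resource "aws_db_subnet_group" "rds" {
  name       = "rds-private-subnets"
  subnet_ids = aws_subnet.private_subnets.*.id
}

# ...and reference it from the instance:
#   db_subnet_group_name = aws_db_subnet_group.rds.name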

Creating a Content Delivery Network (CloudFront)

Fetching videos from the origin every time a user makes a request would be slow, so we need a caching layer. AWS’s content delivery network, CloudFront, provides a globally distributed network of proxy servers that cache content close to users, which improves download speed and reduces latency.

# cloud-front.tf

# Create an S3 bucket to serve as the CloudFront origin
resource "aws_s3_bucket" "bucket" {
  bucket = "mybucket" # S3 bucket names are global, so pick a unique one

  tags = {
    Name = "My bucket"
  }
}

resource "aws_s3_bucket_acl" "bucket_acl" {
  bucket = aws_s3_bucket.bucket.id
  acl    = "private"
}

locals {
  s3_origin_id = "myS3Origin"
}

# Origin access control so CloudFront can read from the private bucket
# (requires AWS provider >= 4.29; loosen the version pin in main.tf if needed)
resource "aws_cloudfront_origin_access_control" "default" {
  name                              = "s3-origin-access-control"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name              = aws_s3_bucket.bucket.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.default.id
    origin_id                = local.s3_origin_id
  }

  enabled         = true
  is_ipv6_enabled = true
  comment         = "Improve latency"

  # The default root object must be an object key in the bucket, e.g. index.html
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "allow-all"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "allow-all"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      # Adjust this whitelist for your audience (e.g. add "MN" for Mongolia)
      locations = ["US", "CA", "GB", "DE"]
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
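With origin access control in place, the bucket also needs a policy that lets CloudFront read objects; one way to write it (a sketch):

# cloud-front.tf (sketch)

resource "aws_s3_bucket_policy" "allow_cloudfront" {
  bucket = aws_s3_bucket.bucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "cloudfront.amazonaws.com" }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.bucket.arn}/*"
      Condition = {
        StringEquals = {
          # Only this distribution may read from the bucket
          "AWS:SourceArn" = aws_cloudfront_distribution.s3_distribution.arn
        }
      }
    }]
  })
}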

Declaring the outputs

Finally, we declare the outputs we want from the Terraform deployment. In our case, we only need the DNS URL of the front-facing load balancer that serves our app on EC2 instances.

# outputs.tf

output "lb_dns_url" {
value = aws_lb.front_end.dns_name
}

Deploying the stack

Before running Terraform, please make sure you have successfully built and pushed the Docker images as demonstrated in the “Our application” section.

Now, navigate to the Terraform folder and run terraform init. When initialization completes successfully, you will see a confirmation like this:

Successful terraform initialization confirmation

Then, run terraform apply and type yes to approve the changes. It might take a while, since we are provisioning quite a few resources. If everything goes as planned, you will get the DNS URL of the front-facing load balancer.
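For reference, the full sequence looks like this (assuming the Terraform files live in a terraform/ folder):

cd terraform/
terraform init
terraform apply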

Destroying the infrastructure

Keep in mind that running these resources will incur charges, so make sure to delete all the resources when you are done.
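Terraform can tear everything down in one command; type yes when prompted, and double-check in the AWS console that nothing is left running:

terraform destroy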

Summary

In this tutorial, we deployed a highly available web application on AWS via Terraform. We provisioned the complete infrastructure: the main VPC with public and private subnets, an internet gateway, NAT gateways, load balancers, autoscaling groups, RDS, and CloudFront.

You can access the complete code for this blog post here.
