How to Create AWS EC2 by Terraform

2023/02/03 · 8 min read

Table of Contents

  1. Introduction
  2. Prerequisites
  3. Configure AWS
  4. Architecture Overview
  5. Project Structure
  6. VPC and Networking
  7. Security Group
  8. Application Load Balancer
  9. HTTPS Certificate and Route 53
  10. EC2 Instance with Bootstrap Script
  11. Variables
  12. Terraform Commands
  13. Estimated Monthly Cost
  14. Conclusion

Introduction

This blog walks through a production-ready AWS setup for hosting a Next.js blog using Terraform. The infrastructure includes a VPC, Application Load Balancer (ALB), HTTPS via ACM, Route 53 DNS, and an EC2 instance bootstrapped with Node.js and PM2 — all managed as code.

Prerequisites

  • Terraform CLI (>= 1.2.0)
  • AWS CLI configured with credentials
  • AWS Account with an IAM user
  • A registered domain name pointed to Route 53
  • VS Code extension: HashiCorp Terraform (for IntelliSense)

Configure AWS

Install and verify AWS CLI

aws --version

Reference: AWS CLI installation guide

Set up AWS credentials

Create an IAM user with the following permissions:

  • AmazonEC2FullAccess
  • AmazonVPCFullAccess
  • ElasticLoadBalancingFullAccess
  • AmazonRoute53FullAccess
  • AWSCertificateManagerFullAccess
  • AmazonS3FullAccess

Then configure your credentials locally — never hardcode them in Terraform files:

aws configure

Terraform reads credentials from ~/.aws/credentials or environment variables:

export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"

Architecture Overview

Internet
   │
   ▼
Route 53 (DNS — technoapple.com → ALB)
   │
   ▼
Application Load Balancer (HTTP:80 → redirect, HTTPS:443 → forward)
   │         └── ACM Certificate (HTTPS)
   ▼
EC2 t3.micro (Next.js app via PM2, port 80)
   │
   └── VPC → Subnet (us-west-1) → Internet Gateway

The EC2 instance runs the Next.js app directly on port 80 via PM2. The ALB handles HTTPS termination and forwards traffic to the instance.

Project Structure

Split the Terraform code into two modules:

setup-vpc-network/     ← VPC, subnets, IGW, ALB, security group, Route 53, ACM cert
setup-ec2/             ← EC2 instance, key pair, bootstrap script

This separation lets you reprovision the EC2 (e.g. for a new year's environment) without touching the network/DNS layer.
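Neither module will initialise without a `terraform`/provider block, which the listings below omit. A minimal sketch — the version pins here are assumptions, so adjust them to whatever you actually test against:

```hcl
# provider.tf — present in both modules
terraform {
  required_version = ">= 1.2.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # assumed pin; use the major version you test with
    }
  }
}

provider "aws" {
  region = "us-west-1" # matches the availability zones used below
}
```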

VPC and Networking

# vpc.tf

resource "aws_vpc" "VPC" {
  cidr_block           = var.aws_ip_cidr_range
  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = { Name = "myVPC" }
}

resource "aws_internet_gateway" "IGW" {
  vpc_id = aws_vpc.VPC.id
  tags   = { Name = "myInternetGateway" }
}

resource "aws_subnet" "mySubnet" {
  cidr_block        = "10.0.0.0/28"
  vpc_id            = aws_vpc.VPC.id
  availability_zone = var.availability_zones["zone1"]
  tags              = { Name = "mySubnet" }
}

resource "aws_subnet" "mySubnet2" {
  cidr_block        = "10.0.0.16/28"
  vpc_id            = aws_vpc.VPC.id
  availability_zone = var.availability_zones["zone2"]
  tags              = { Name = "mySubnet2" }
}

resource "aws_route_table" "myRouteTable" {
  vpc_id = aws_vpc.VPC.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.IGW.id
  }
  tags = { Name = "myTable" }
}

resource "aws_route_table_association" "routeTableAssociate" {
  subnet_id      = aws_subnet.mySubnet.id
  route_table_id = aws_route_table.myRouteTable.id
}

resource "aws_route_table_association" "routeTableAssociate2" {
  subnet_id      = aws_subnet.mySubnet2.id
  route_table_id = aws_route_table.myRouteTable.id
}

An Application Load Balancer requires subnets in at least two Availability Zones, which is why the VPC gets two subnets above.

Security Group

# security_group.tf

resource "aws_security_group" "mySecurityGroup" {
  name        = "mySecurity"
  description = "Security group for web app"
  vpc_id      = aws_vpc.VPC.id

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = { Name = "myWebAppSecurity" }
}

Tip: In production, restrict the SSH cidr_blocks to your office IP instead of 0.0.0.0/0.
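One way to do that is to parameterise the allowed CIDR instead of hardcoding it. The variable name below is illustrative, and 203.0.113.10 is a documentation-range placeholder, not a real address; the `ingress` block replaces the SSH block inside the security group resource:

```hcl
variable "ssh_allowed_cidr" {
  type        = string
  description = "CIDR allowed to SSH, e.g. your office IP as a /32"
  default     = "203.0.113.10/32" # placeholder; replace with your own IP
}

ingress {
  description = "SSH"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = [var.ssh_allowed_cidr]
}
```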

Application Load Balancer

# elb.tf

resource "aws_lb" "myLb" {
  name               = "myLoadBalancer"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.mySecurityGroup.id]
  subnets            = [aws_subnet.mySubnet.id, aws_subnet.mySubnet2.id]
  tags               = { Environment = "prod" }
}

resource "aws_lb_target_group" "myTargetGroup" {
  name        = "my-target-group"
  port        = 80
  protocol    = "HTTP"
  target_type = "instance"
  vpc_id      = aws_vpc.VPC.id

  health_check {
    protocol            = "HTTP"
    path                = "/health.html"
    port                = "80"
    healthy_threshold   = 5
    interval            = 30
    unhealthy_threshold = 2
    timeout             = 5
  }
}

# Redirect HTTP → HTTPS
resource "aws_lb_listener" "lbListenerHttp" {
  load_balancer_arn = aws_lb.myLb.arn
  port              = "80"
  protocol          = "HTTP"
  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

# HTTPS listener
resource "aws_lb_listener" "lbListenerHttps" {
  load_balancer_arn = aws_lb.myLb.arn
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = aws_acm_certificate_validation.myCertificateValidation.certificate_arn
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.myTargetGroup.arn
  }
}
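The target group above probes /health.html, so the app must actually serve that path or the ALB will mark the instance unhealthy. A minimal way to do that — assuming a standard Next.js repo layout, where files under public/ are served at the site root — is a static file:

```shell
# Next.js serves files under public/ at the site root, so a static
# public/health.html answers the ALB health check at /health.html
mkdir -p public
printf 'ok\n' > public/health.html
cat public/health.html   # prints: ok
```

Commit this file to the repo so the bootstrap script's build picks it up.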

HTTPS Certificate and Route 53

# certificate.tf

resource "aws_acm_certificate" "myCertificate" {
  domain_name       = var.domainName
  validation_method = "DNS"
  tags              = { Environment = "Production" }
  lifecycle { create_before_destroy = true }
}

resource "aws_route53_zone" "primary" {
  name = var.domainName
}

# DNS validation records for ACM
resource "aws_route53_record" "my53Record" {
  for_each = {
    for dvo in aws_acm_certificate.myCertificate.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }
  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = aws_route53_zone.primary.zone_id
}

# Point domain apex to ALB
resource "aws_route53_record" "myALBRecord" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = var.domainName
  type    = "A"
  alias {
    name                   = aws_lb.myLb.dns_name
    zone_id                = aws_lb.myLb.zone_id
    evaluate_target_health = true
  }
}

# Point www to ALB
resource "aws_route53_record" "myALBRecordWWW" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = var.domainNameWWW
  type    = "A"
  alias {
    name                   = aws_lb.myLb.dns_name
    zone_id                = aws_lb.myLb.zone_id
    evaluate_target_health = true
  }
}

resource "aws_acm_certificate_validation" "myCertificateValidation" {
  certificate_arn         = aws_acm_certificate.myCertificate.arn
  validation_record_fqdns = [for record in aws_route53_record.my53Record : record.fqdn]
}

Important: After Terraform creates the Route 53 hosted zone, copy the NS records and update your domain registrar (e.g. GoDaddy) to point to Route 53 nameservers. ACM certificate validation will not complete until DNS is delegated.

EC2 Instance with Bootstrap Script

# ec2.tf

resource "tls_private_key" "privatekey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "myKeyPair" {
  key_name   = "ec2key"
  public_key = tls_private_key.privatekey.public_key_openssh
}

resource "local_file" "privatekey" {
  content         = tls_private_key.privatekey.private_key_pem
  filename        = "${path.module}/ec2key.pem"
  file_permission = "0600"
}

resource "aws_instance" "myInstance" {
  ami                         = "ami-051ed863837a0b1b6"  # Amazon Linux 2 us-west-1
  instance_type               = "t3.micro"
  key_name                    = aws_key_pair.myKeyPair.key_name
  subnet_id                   = data.aws_subnet.mySubnet.id
  vpc_security_group_ids      = [data.aws_security_group.mySecurity.id]
  associate_public_ip_address = true
  user_data                   = base64encode(data.template_file.ec2.rendered)

  root_block_device {
    volume_size = 8
    volume_type = "gp3"
  }

  tags = { Name = "myWebApp" }

  lifecycle { create_before_destroy = true }
}

resource "aws_lb_target_group_attachment" "myTargetGroupAttachment" {
  target_id        = aws_instance.myInstance.id
  target_group_arn = data.aws_lb_target_group.myTargetGroup.arn
  port             = 80
}

resource "aws_eip" "myEip" {
  instance = aws_instance.myInstance.id
  tags     = { Name = "myWebAppEIP" }
}
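The data.* references in ec2.tf resolve resources created by the separately applied setup-vpc-network module. Those data blocks are not shown in the original; a hedged sketch of what they might look like, assuming the tag and name values used earlier (the bootstrap script filename is also an assumption):

```hcl
# data.tf — looks up resources created by setup-vpc-network
data "aws_subnet" "mySubnet" {
  filter {
    name   = "tag:Name"
    values = ["mySubnet"]
  }
}

data "aws_security_group" "mySecurity" {
  filter {
    name   = "tag:Name"
    values = ["myWebAppSecurity"]
  }
}

data "aws_lb_target_group" "myTargetGroup" {
  name = "my-target-group"
}

# template_file comes from the (now-deprecated) hashicorp/template provider;
# on current Terraform the built-in templatefile() function is the usual replacement
data "template_file" "ec2" {
  template = file("${path.module}/bootstrap.sh") # filename is an assumption
}
```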

The bootstrap user_data script installs Node.js 18 and PM2, clones the repo, builds the app, and starts it under PM2:

#!/usr/bin/env bash
set -euo pipefail
exec > >(tee /var/log/user-data.log | logger -t user-data -s 2>/dev/console) 2>&1

curl -fsSL https://rpm.nodesource.com/setup_18.x | bash -
yum -y install nodejs git

npm i -g pm2

mkdir -p /app && cd /app
git clone https://github.com/YOUR_ORG/YOUR_REPO.git
cd YOUR_REPO

npm install
npm run build

export NODE_OPTIONS="--max-old-space-size=512"
pm2 start npm --name "myApp" --max-restarts 10 --restart-delay 5000 -- run start
pm2 startup systemd -u root --hp /root
pm2 save

Security note: Never hardcode GitHub credentials in the bootstrap script. Use a GitHub deploy key or a fine-grained PAT stored as a Terraform variable instead.

Variables

# variables.tf

variable "aws_ip_cidr_range" {
  default     = "10.0.0.0/24"
  type        = string
  description = "IP CIDR range for the VPC"
}

variable "availability_zones" {
  type = map(string)
  default = {
    zone1 = "us-west-1c"
    zone2 = "us-west-1b"
  }
}

variable "domainName" {
  type    = string
  default = "yourdomain.com"
}

variable "domainNameWWW" {
  type    = string
  default = "www.yourdomain.com"
}
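You can override these defaults without editing variables.tf by dropping a terraform.tfvars file next to it. The values below are illustrative, reusing the domain from the architecture diagram:

```hcl
# terraform.tfvars
aws_ip_cidr_range = "10.0.0.0/24"
domainName        = "technoapple.com"
domainNameWWW     = "www.technoapple.com"
```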

Terraform Commands

# Initialise providers and modules
terraform init

# Preview changes without applying
terraform plan

# Apply infrastructure
terraform apply

# Destroy all resources
terraform destroy

# Inspect current state
terraform show
terraform state list

# Use workspaces to manage multiple environments
terraform workspace new prd-2025
terraform workspace select prd-2025
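The IAM permission list earlier includes AmazonS3FullAccess, which is typically only needed here for remote state. If you do keep state in S3, the backend block might look like this — the bucket name is hypothetical and must exist before terraform init:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state" # hypothetical bucket; create it first
    key    = "setup-vpc-network/terraform.tfstate"
    region = "us-west-1"
  }
}
```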

Estimated Monthly Cost

All prices are approximate for us-west-1 as of 2024.

Resource               Spec                             Est. monthly cost
---------------------  -------------------------------  -----------------
EC2 t3.micro           1 instance, 730 hrs              ~$8
EBS gp3                8 GB                             ~$0.64
Elastic IP             Associated to running instance   Free*
ALB                    ~1 LCU, low traffic              ~$16–18
Route 53 hosted zone   1 zone                           $0.50
Route 53 queries       ~1M queries/month                ~$0.40
ACM certificate        Public cert                      Free
Data transfer out      ~1 GB/month                      ~$0.09
Total                                                   ~$26–28/month

*Since February 2024, AWS charges ~$0.005/hr (~$3.65/month) for every public IPv4 address, including in-use Elastic IPs, so budget for that if you apply this today.

The ALB is the biggest cost driver at ~$16/month minimum. For a personal blog on a budget, you can skip the ALB and use nginx + certbot directly on the EC2 to terminate HTTPS — this brings the total down to ~$10/month.
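Summing the line items from the table confirms the quoted range (a quick check with awk; the two totals use the ALB's low and high estimates):

```shell
# add up the line items from the cost table (approximate us-west-1 prices)
awk 'BEGIN {
  fixed = 8.00 + 0.64 + 0.50 + 0.40 + 0.09   # EC2 + EBS + hosted zone + queries + transfer
  printf "~$%.2f to ~$%.2f per month\n", fixed + 16.00, fixed + 18.00   # ALB low / high
}'
# prints: ~$25.63 to ~$27.63 per month
```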

Conclusion

With this setup you get a production-ready, HTTPS-enabled blog hosting environment that is fully reproducible as code. The split between setup-vpc-network and setup-ec2 lets you tear down and reprovision just the compute layer each year without touching DNS or certificates. Run terraform workspace new prd-YYYY and terraform apply to spin up a fresh environment in minutes.