Terraform

Module: Terraform Fundamentals and Labs


Introduction: What Is Terraform?

Theory:

  • Terraform is an Infrastructure as Code (IaC) tool by HashiCorp.

  • It lets you define infrastructure in a declarative language: HCL (HashiCorp Configuration Language).

  • Terraform supports multiple providers (AWS, Azure, GCP, VMware, etc.)

Benefits:

  • Versioned infrastructure

  • Repeatable and auditable

  • Easily scalable and manageable

Pre-Requisite Setup and Installation

Requirements:

  • OS: Linux/Mac/Windows

  • Terminal + Text Editor (e.g., VS Code)

  • AWS CLI installed

  • AWS account created

Setup Free-Tier AWS Account

Steps:

  1. Go to https://aws.amazon.com/free

  2. Sign up with email + credit card (no charges if you stay in free tier)

  3. Create an IAM user with:

    • Programmatic access

    • AdministratorAccess policy (for demo only; scope permissions down in real use)

  4. Store your Access Key ID and Secret Key securely

Create Cloud Machine for Terraform Execution

Options:

  • Local: Use your laptop with Terraform installed

  • Remote: Create an EC2 instance and SSH into it:

ssh -i "my-key.pem" ubuntu@<public_ip>
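
SSH requires the private key file to be readable only by its owner; tighten the permissions first:

chmod 400 my-key.pem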

Terraform Installation & Verification

Install (Linux):

sudo apt-get update
sudo apt-get install -y gnupg software-properties-common curl
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt install terraform
Verify:

terraform -v

Start with Terraform Basics

Workflow:

  1. terraform init – Initialize the working directory and download providers

  2. terraform plan – Preview changes

  3. terraform apply – Apply changes

  4. terraform destroy – Destroy resources
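
In practice, the four commands are run from the directory containing your .tf files:

terraform init      # download the configured providers
terraform plan      # preview what will be created, changed, or destroyed
terraform apply     # prompt for confirmation, then make the changes
terraform destroy   # tear down everything tracked in the state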

Terraform Configuration Language

Theory:

  • Files end with .tf

  • Main components:

    • provider: cloud platform

    • resource: what to deploy

    • variable: input

    • output: display result

Example:

provider "aws" {
region = "us-east-1"
}

resource "aws_instance" "web" {
ami = "ami-0c02fb55956c7d316"
instance_type = "t2.micro"
}

AWS Setup for Terraform

Pre-reqs:

  • IAM user with access key

  • Create ~/.aws/credentials:

[default]
aws_access_key_id = YOUR_KEY
aws_secret_access_key = YOUR_SECRET

Or export directly:

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

Create Machine Using Terraform

Steps:

  1. Create a .tf file

  2. Run the following:

terraform init
terraform plan
terraform apply

Provide Credentials in a Separate Centralized File

Best Practice:

  • Use variables.tf:

variable "access_key" {}
variable "secret_key" {}

  • Use terraform.tfvars (keep this file out of version control, e.g., via .gitignore, since it holds secrets):

access_key = "YOUR_KEY"
secret_key = "YOUR_SECRET"

  • Reference them in the provider:

provider "aws" {
  access_key = var.access_key
  secret_key = var.secret_key
  region     = "us-east-1"
}

Create Multiple Instances

 
resource "aws_instance" "web" {
count = 3
ami = "ami-0c02fb55956c7d316"
instance_type = "t2.micro"
}
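
With count, the resource becomes a list: individual instances are addressed by index, and a splat expression collects an attribute across all of them. A small sketch:

output "all_public_ips" {
  value = aws_instance.web[*].public_ip   # splat: IPs of all three instances
}

output "first_instance_id" {
  value = aws_instance.web[0].id          # index into the list
}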

Terraform Variables Detailed Explanation

  • Defined with a variable block

  • Values from CLI, .tfvars, or environment variables

  • Types: string, number, bool, list, map, object
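
One declaration per type, as a quick reference (names and values are illustrative):

variable "region" {
  type    = string
  default = "us-east-1"
}

variable "instance_count" {
  type    = number
  default = 2
}

variable "enable_monitoring" {
  type    = bool
  default = false
}

variable "subnets" {
  type    = list(string)
  default = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "amis" {
  type    = map(string)
  default = { "us-east-1" = "ami-12345678" }
}

variable "server" {
  type    = object({ name = string, port = number })
  default = { name = "web", port = 80 }
}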

Variables in Terraform

Declare:

 
variable "region" {
default = "us-east-1"
}

Use:

provider "aws" {
region = var.region
}

Use of Variables in a Configuration File

Example:

variable "instance_type" {
default = "t2.micro"
}

resource "aws_instance" "web" {
instance_type = var.instance_type
}

LAB – Use of Variables in a Configuration File

  1. Create variables.tf

  2. Add terraform.tfvars

  3. Use terraform apply -var-file="terraform.tfvars"
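
A minimal sketch of the files from steps 1 and 2 (values are placeholders):

# variables.tf
variable "instance_type" {}

# terraform.tfvars
instance_type = "t2.micro"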

LAB – List and Map Variables

variable "instance_types" {
type = list(string)
default = ["t2.micro", "t2.small"]
}

variable "ami_map" {
type = map(string)
default = {
us-east-1 = "ami-12345678"
us-west-2 = "ami-87654321"
}
}

Use:

instance_type = var.instance_types[0]
ami = var.ami_map["us-east-1"]

Terraform Concepts – Building Blocks

Key Concepts:

  • Providers: Interface to cloud (AWS, GCP)

  • Resources: Declare what to create

  • Modules: Reusable .tf blocks

  • State: .tfstate stores resource metadata

  • Outputs: Return values
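
The state file should not be edited by hand; inspect it with the state subcommands instead:

terraform state list                    # addresses of every resource in state
terraform state show aws_instance.web  # full recorded attributes of one resource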

Provision Software with Terraform

Use user_data to install software:

resource "aws_instance" "web" {
  ami           = "ami-0c02fb55956c7d316"
  instance_type = "t2.micro"

  user_data = <<-EOF
    #!/bin/bash
    sudo apt update
    sudo apt install -y nginx
  EOF
}

LAB – Provision Software with Terraform

Deploy EC2 and auto-install Apache:

user_data = <<-EOF
  #!/bin/bash
  yum install -y httpd
  systemctl enable httpd
  systemctl start httpd
EOF

LAB – Data Source in Terraform

Theory: Data sources fetch info without creating resources.

Example:

data "aws_ami" "ubuntu" {
most_recent = true
owners = ["099720109477"]

filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
}

Use it:

ami = data.aws_ami.ubuntu.id

Output Attribute in Terraform

Use Outputs to return useful data:

output "instance_ip" {
  value = aws_instance.web.public_ip
}

LAB – Output Attribute in Terraform

  1. Create outputs.tf

  2. Add:

output "instance_dns" {
value = aws_instance.web.public_dns
}

Run:

terraform output

Remote State in Terraform

Theory:

  • Stores .tfstate remotely (S3, GCS)

  • Enables team collaboration and, with state locking, prevents corruption

Example:

terraform {
  backend "s3" {
    bucket = "my-tf-state-bucket"
    key    = "state/terraform.tfstate"
    region = "us-east-1"
  }
}

LAB – Remote State in Terraform

Steps:

  1. Create the S3 bucket:

     aws s3 mb s3://my-tf-state-bucket

  2. Create a DynamoDB table for state locking (optional but recommended for teams):

     aws dynamodb create-table ...

  3. Configure the backend in main.tf and run:

     terraform init
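
A hedged expansion of the create-table call in step 2 (the table name is a convention; the S3 backend's locking only requires a string hash key named LockID):

aws dynamodb create-table \
  --table-name terraform-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

The backend block then points at it with dynamodb_table = "terraform-locks".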

AWS VPC Introduction

Theory:

  • A VPC (Virtual Private Cloud) is your isolated network in AWS.

  • Includes subnets, route tables, internet gateways, NAT gateways, security groups, etc.

  • Terraform helps you define VPCs as code.

DEMO: AWS VPC & Security Group

Practical (VPC + SG):

resource "aws_vpc" "main_vpc" {
cidr_block = "10.0.0.0/16"
tags = { Name = "main-vpc" }
}

resource "aws_security_group" "web_sg" {
name = "web-sg"
description = "Allow HTTP"
vpc_id = aws_vpc.main_vpc.id

ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}

LAB: Create AWS VPC & VPC Gateway

Add Internet Gateway:

resource "aws_internet_gateway" "gw" {
vpc_id = aws_vpc.main_vpc.id
}

Create Subnet:

resource "aws_subnet" "public_subnet" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1a"
}
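
The subnet only becomes public once a route table sends 0.0.0.0/0 through the gateway and is associated with it; a minimal sketch:

resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.main_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }
}

resource "aws_route_table_association" "public_assoc" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.public_rt.id
}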

Launch EC2 Instance using Custom VPC

resource "aws_instance" "web" {
ami = "ami-0c02fb55956c7d316"
instance_type = "t2.micro"
subnet_id = aws_subnet.public_subnet.id
vpc_security_group_ids = [aws_security_group.web_sg.id]
associate_public_ip_address = true
}

LAB: Launch EC2 Instance using Custom VPC

Full minimal configuration:

  • Create VPC, subnet, IGW, route table

  • Launch EC2 into subnet

  • Add SG to allow SSH & HTTP (see the sketch below)

  • Output public IP
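
A hedged sketch of the last two bullets (resource names are illustrative; the VPC, subnet, gateway, and instance blocks appear in the sections above):

resource "aws_security_group" "web_ssh_sg" {
  name   = "web-ssh-sg"
  vpc_id = aws_vpc.main_vpc.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]  # demo only; restrict to your own IP in practice
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"           # all outbound traffic
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "web_public_ip" {
  value = aws_instance.web.public_ip
}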

Elastic Block Store (EBS) in AWS

Theory:

  • EBS: Persistent block storage for EC2

  • Types: gp2, gp3 (general), io1/io2 (high IOPS), etc.

  • Can attach, detach, snapshot, encrypt

DEMO: Elastic Block Store in AWS

resource "aws_ebs_volume" "example" {
availability_zone = "us-east-1a"
size = 8
type = "gp2"
tags = { Name = "demo-ebs" }
}

LAB: EBS in AWS

Attach EBS to EC2:

resource "aws_volume_attachment" "ebs_attach" {
device_name = "/dev/xvdh"
volume_id = aws_ebs_volume.example.id
instance_id = aws_instance.web.id
}

User Data in AWS

Theory:

  • User Data allows running scripts at instance boot.

  • Used to install software, configure services, etc.

LAB: User Data using Script

user_data = <<-EOF
  #!/bin/bash
  apt update
  apt install -y apache2
  systemctl enable apache2
  systemctl start apache2
EOF

LAB: User Data using Cloud-Init

user_data = <<-EOF
  #cloud-config
  packages:
    - nginx
  runcmd:
    - systemctl enable nginx
    - systemctl start nginx
EOF

AWS RDS Basics

Theory:

  • RDS: Managed relational DB (MySQL, PostgreSQL, etc.)

  • Includes automated backups, failover, monitoring

  • Pricing depends on instance class, storage type, backups, etc.

LAB: Create RDS

resource "aws_db_instance" "default" {
allocated_storage = 20
engine = "mysql"
instance_class = "db.t3.micro"
name = "mydb"
username = "admin"
password = "MySecretPass123"
skip_final_snapshot = true
publicly_accessible = true
vpc_security_group_ids = [aws_security_group.web_sg.id]
db_subnet_group_name = aws_db_subnet_group.my_db_subnets.name
}
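
The lab above references aws_db_subnet_group.my_db_subnets, which is not defined elsewhere in this module; a sketch assuming a second subnet exists, since RDS requires subnets in at least two availability zones:

resource "aws_db_subnet_group" "my_db_subnets" {
  name = "my-db-subnets"
  subnet_ids = [
    aws_subnet.public_subnet.id,
    aws_subnet.public_subnet_b.id,  # hypothetical second subnet in another AZ
  ]
}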

AWS Access and Identity Management (IAM)

Theory:

  • IAM controls AWS access via users, groups, and roles.

  • Supports policies (JSON) to define permissions.

  • Roles used by services or federated users.

LAB: IAM Users and Groups

resource "aws_iam_user" "developer" {
name = "dev-user"
}

resource "aws_iam_group" "devs" {
name = "developers"
}

resource "aws_iam_group_membership" "devs_membership" {
name = "devs"
users = [aws_iam_user.developer.name]
group = aws_iam_group.devs.name
}

LAB: AWS IAM Roles

resource "aws_iam_role" "ec2_role" {
name = "ec2-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
}]
})
}
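
For EC2 to actually use the role, it is wrapped in an instance profile and referenced from the instance (a hedged sketch; resource names are illustrative):

resource "aws_iam_instance_profile" "ec2_profile" {
  name = "ec2-profile"
  role = aws_iam_role.ec2_role.name
}

resource "aws_instance" "app" {
  ami                  = "ami-0c02fb55956c7d316"
  instance_type        = "t2.micro"
  iam_instance_profile = aws_iam_instance_profile.ec2_profile.name
}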

EC2 Instance Autoscaling

Theory:

  • Autoscaling automatically adjusts EC2 count based on demand.

  • Components: Launch Template, ASG, Target Groups, Policies

LAB: EC2 Instance Autoscaling

resource "aws_launch_template" "web_tpl" {
name_prefix = "web-"
image_id = "ami-0c02fb55956c7d316"
instance_type = "t2.micro"
}

resource "aws_autoscaling_group" "web_asg" {
desired_capacity = 2
max_size = 4
min_size = 1
vpc_zone_identifier = [aws_subnet.public_subnet.id]
launch_template {
id = aws_launch_template.web_tpl.id
version = "$Latest"
}
}
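
The policies mentioned in the theory section attach to the ASG; a target-tracking sketch that scales on average CPU (the target value is illustrative):

resource "aws_autoscaling_policy" "cpu_policy" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.web_asg.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60.0  # add/remove instances to hold ~60% average CPU
  }
}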

Load Balancing in AWS

Theory:

  • ELB distributes traffic across multiple EC2s

  • Types: ALB (Layer 7), NLB (Layer 4), CLB (legacy)

LAB: AWS Load Balancing

resource "aws_lb" "web_alb" {
name = "web-alb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.web_sg.id]
subnets = [aws_subnet.public_subnet.id]
}

resource "aws_lb_target_group" "web_tg" {
name = "web-tg"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main_vpc.id
}

resource "aws_lb_listener" "web_listener" {
load_balancer_arn = aws_lb.web_alb.arn
port = 80
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.web_tg.arn
}
}

resource "aws_autoscaling_attachment" "asg_attach" {
autoscaling_group_name = aws_autoscaling_group.web_asg.name
alb_target_group_arn = aws_lb_target_group.web_tg.arn
}

Terraform Modules: Code Reusability

Theory:

  • Modules = reusable sets of Terraform resources.

  • You can call local or remote modules.

  • Best for DRY (Don’t Repeat Yourself) practices in production.

Syntax:

module "my_vpc" {
source = "./modules/vpc"
cidr_block = "10.0.0.0/16"
}

Terraform Module and Application

Theory:

  • Modules allow app-level separation: VPC, EC2, RDS, etc.

  • Useful for microservices or multi-environment setups (dev, prod)

Structure:

main.tf
modules/
  vpc/
    main.tf
    variables.tf
    outputs.tf

LAB: Terraform Source from GitHub

module "vpc" {
source = "git::https://github.com/user/terraform-aws-vpc.git"
cidr_block = "10.0.0.0/16"
}

Best practices:

  • Use version tags or branches.

source = "git::https://github.com/user/terraform-aws-vpc.git?ref=v1.0.0"

LAB: Local Path Module

Structure:

modules/
  ec2/
    main.tf
    variables.tf
    outputs.tf

Use:

module "web_ec2" {
source = "./modules/ec2"
instance_type = "t2.micro"
ami_id = "ami-0c02fb55956c7d316"
}

LAB: AWS VPC Module

module "custom_vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "5.0.0"

name = "my-vpc"
cidr = "10.0.0.0/16"

azs = ["us-east-1a", "us-east-1b"]
public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
enable_nat_gateway = true
}

Conditions, Loops, and Functions


Condition Statement in Terraform

Theory:

  • Terraform uses ternary operators: condition ? true_val : false_val

Example:

resource "aws_instance" "web" {
instance_type = var.environment == "prod" ? "t3.medium" : "t2.micro"
}

LAB: Condition Statements in Terraform

variable "enable_monitoring" {
default = true
}

resource "aws_instance" "web" {
monitoring = var.enable_monitoring ? true : false
}

Terraform Built-in Functions

Theory:

  • Used to transform or extract values.

  • Examples: length(), join(), lookup(), contains(), element()

Example:

output "subnet_count" {
value = length(var.subnets)
}

LAB: Terraform Built-in Functions

variable "ports" {
default = [22, 80, 443]
}

output "first_port" {
value = element(var.ports, 0)
}

output "is_https" {
value = contains(var.ports, 443)
}
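
join() and lookup(), also on the list above, behave like this (the variable and values are illustrative):

variable "env_amis" {
  default = {
    dev  = "ami-12345678"
    prod = "ami-87654321"
  }
}

output "protocols_csv" {
  value = join(", ", ["ssh", "http", "https"])     # "ssh, http, https"
}

output "dev_ami" {
  value = lookup(var.env_amis, "dev", "ami-none")  # third argument is the fallback
}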


Loops in Terraform HCL

Types:

  1. count

  2. for_each

  3. for expressions

Examples:

🔹 count:

resource "aws_instance" "web" {
count = 3
ami = var.ami
instance_type = "t2.micro"
}

🔹 for_each:

resource "aws_s3_bucket" "buckets" {
for_each = toset(["logs", "data", "backup"])
bucket = "my-${each.key}-bucket"
}

🔹 for expressions:

output "uppercase_tags" {
value = [for tag in var.tags : upper(tag)]
}

Terraform Project Structure

Recommended:

terraform-project/
├── main.tf
├── variables.tf
├── outputs.tf
├── terraform.tfvars
├── backend.tf
├── modules/
│   ├── vpc/
│   └── ec2/
└── env/
    ├── dev/
    └── prod/

LAB: Terraform Project Structure

Create a full project:

mkdir -p terraform-lab/modules/vpc
mkdir -p terraform-lab/modules/ec2

Structure your files:

  • main.tf calls module.vpc, module.ec2

  • Modules each have their main.tf, variables.tf, and outputs.tf

  • Create dev/prod environments using different .tfvars (see the example below)
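
Each environment is then applied with its own variable file (the .tfvars file names are assumptions about your layout):

terraform apply -var-file="env/dev/dev.tfvars"
terraform apply -var-file="env/prod/prod.tfvars"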

Packer + Terraform Integration

Packer Introduction and Use

Theory:

  • Packer is a tool to create machine images (e.g., AWS AMIs).

  • Use it to bake images before using them in Terraform.

  • Avoids long boot times caused by installing software at every EC2 launch.

Install Packer

Linux/macOS:

 
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install packer

Packer Demo Template

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0c02fb55956c7d316",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-nginx {{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt update",
      "sudo apt install -y nginx"
    ]
  }]
}

Build the image (packer init applies only to HCL2 templates, so it is not needed for this legacy JSON format):

packer validate template.json
packer build template.json

Use AMI in Terraform:

variable "ami_id" {
default = "ami-xxxxxxxxx"
}

resource "aws_instance" "web" {
ami = var.ami_id
instance_type = "t2.micro"
}