
Introduction
Manually configuring servers and networks is slow, error-prone, and hard to maintain. Modern DevOps teams use Infrastructure as Code (IaC) to automate everything — from virtual machines to databases to entire cloud architectures.
Among IaC tools, Terraform stands out for being cloud-agnostic, powerful, and beginner-friendly. In this comprehensive guide, you’ll go from beginner to pro with Terraform, learning how to define, deploy, and manage production infrastructure with modules, remote state, and CI/CD integration.
What Is Infrastructure as Code?
Infrastructure as Code means writing configuration files that describe your cloud resources instead of creating them manually through consoles or CLI commands. Once written, these files can be versioned, reviewed, tested, and reused — just like application code.
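For example, the short snippet below (a hypothetical S3 bucket, shown only to illustrate the idea) is enough to describe a piece of infrastructure declaratively; the rest of this guide builds up a full configuration step by step.
resource "aws_s3_bucket" "assets" {
  bucket = "my-app-assets-example" # hypothetical bucket name; must be globally unique
  tags = {
    ManagedBy = "Terraform"
  }
}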
Benefits of IaC
- Faster and repeatable deployments — spin up environments in minutes, not hours
- Fewer manual mistakes — no more clicking the wrong button in a web console
- Version control and collaboration — review infrastructure changes like code PRs
- Easier rollback and scaling — revert to previous versions or replicate across regions
- Documentation as code — your infrastructure is self-documenting
- Compliance and auditing — track every change in git history
Why Choose Terraform?
Terraform is developed by HashiCorp and works with almost every major platform through its provider ecosystem: AWS, Azure, GCP, DigitalOcean, Kubernetes, and hundreds of other providers available in the Terraform Registry.
| Feature | Terraform | CloudFormation | Pulumi |
|---|---|---|---|
| Multi-cloud | Yes | AWS only | Yes |
| Language | HCL | JSON/YAML | TypeScript/Python/Go |
| State Management | Built-in | Managed by AWS | Built-in |
| Learning Curve | Low | Medium | Medium |
| Community | Very large | Large | Growing |
Getting Started
Install Terraform
# macOS
brew install terraform
# Ubuntu/Debian
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
# Verify installation
terraform -version
Project Structure
terraform-project/
├── main.tf # Main resource definitions
├── variables.tf # Input variable declarations
├── outputs.tf # Output value declarations
├── providers.tf # Provider configurations
├── terraform.tfvars # Variable values (don't commit secrets!)
├── versions.tf # Terraform and provider version constraints
└── modules/
├── vpc/
├── ec2/
└── rds/
Your First Terraform Configuration
# versions.tf
terraform {
required_version = ">= 1.6.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
# providers.tf
provider "aws" {
region = var.aws_region
default_tags {
tags = {
Environment = var.environment
Project = var.project_name
ManagedBy = "Terraform"
}
}
}
# variables.tf
variable "aws_region" {
description = "AWS region for resources"
type = string
default = "us-east-1"
}
variable "environment" {
description = "Environment name (dev, staging, prod)"
type = string
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "Environment must be dev, staging, or prod."
}
}
variable "project_name" {
description = "Project name for tagging"
type = string
}
variable "instance_type" {
description = "EC2 instance type"
type = string
default = "t3.micro"
}
variable "vpc_cidr" {
description = "CIDR block for VPC"
type = string
default = "10.0.0.0/16"
}
# main.tf
# VPC
resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "${var.project_name}-vpc"
}
}
# Public Subnets
resource "aws_subnet" "public" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(var.vpc_cidr, 8, count.index)
availability_zone = data.aws_availability_zones.available.names[count.index]
map_public_ip_on_launch = true
tags = {
Name = "${var.project_name}-public-${count.index + 1}"
Type = "Public"
}
}
# Private Subnets
resource "aws_subnet" "private" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(var.vpc_cidr, 8, count.index + 10)
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = {
Name = "${var.project_name}-private-${count.index + 1}"
Type = "Private"
}
}
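# Note: with the default 10.0.0.0/16 VPC CIDR, cidrsubnet(var.vpc_cidr, 8, n) yields /24 networks:
# the public subnets get 10.0.0.0/24 and 10.0.1.0/24 (netnums 0 and 1), while the +10 offset
# gives the private subnets 10.0.10.0/24 and 10.0.11.0/24.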
# Internet Gateway
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Name = "${var.project_name}-igw"
}
}
# Route Table for Public Subnets
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
tags = {
Name = "${var.project_name}-public-rt"
}
}
resource "aws_route_table_association" "public" {
count = length(aws_subnet.public)
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}
# Data source for AZs
data "aws_availability_zones" "available" {
state = "available"
}
# Security Group
resource "aws_security_group" "web" {
name = "${var.project_name}-web-sg"
description = "Security group for web servers"
vpc_id = aws_vpc.main.id
ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTPS"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "SSH"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [var.allowed_ssh_cidr]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.project_name}-web-sg"
}
}
# EC2 Instance
resource "aws_instance" "web" {
ami = data.aws_ami.amazon_linux.id
instance_type = var.instance_type
subnet_id = aws_subnet.public[0].id
vpc_security_group_ids = [aws_security_group.web.id]
user_data = <<-EOF
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>Hello from Terraform!</h1>" > /var/www/html/index.html
EOF
tags = {
Name = "${var.project_name}-web"
}
}
# Latest Amazon Linux 2 AMI
data "aws_ami" "amazon_linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-hvm-*-x86_64-gp2"]
}
}
# outputs.tf
output "vpc_id" {
description = "ID of the VPC"
value = aws_vpc.main.id
}
output "public_subnet_ids" {
description = "IDs of public subnets"
value = aws_subnet.public[*].id
}
output "instance_public_ip" {
description = "Public IP of the web instance"
value = aws_instance.web.public_ip
}
output "instance_public_dns" {
description = "Public DNS of the web instance"
value = aws_instance.web.public_dns
}
Terraform Workflow
# Initialize the project (download providers)
terraform init
# Format code
terraform fmt -recursive
# Validate configuration
terraform validate
# Preview changes
terraform plan -out=tfplan
# Apply changes
terraform apply tfplan
# Or apply directly (will prompt for confirmation)
terraform apply -var="environment=dev" -var="project_name=myapp"
# View current state
terraform show
# List resources in state
terraform state list
# Destroy all resources
terraform destroy
Reusable Modules
# modules/ec2/main.tf
variable "instance_name" {
type = string
}
variable "instance_type" {
type = string
default = "t3.micro"
}
variable "subnet_id" {
type = string
}
variable "security_group_ids" {
type = list(string)
}
variable "user_data" {
type = string
default = ""
}
data "aws_ami" "amazon_linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-hvm-*-x86_64-gp2"]
}
}
resource "aws_instance" "this" {
ami = data.aws_ami.amazon_linux.id
instance_type = var.instance_type
subnet_id = var.subnet_id
vpc_security_group_ids = var.security_group_ids
user_data = var.user_data
tags = {
Name = var.instance_name
}
}
output "instance_id" {
value = aws_instance.this.id
}
output "public_ip" {
value = aws_instance.this.public_ip
}
# Using the module in main.tf
module "web_server" {
source = "./modules/ec2"
instance_name = "${var.project_name}-web"
instance_type = "t3.small"
subnet_id = aws_subnet.public[0].id
security_group_ids = [aws_security_group.web.id]
user_data = file("${path.module}/scripts/web-init.sh")
}
module "api_server" {
source = "./modules/ec2"
instance_name = "${var.project_name}-api"
instance_type = "t3.medium"
subnet_id = aws_subnet.private[0].id
security_group_ids = [aws_security_group.api.id] # assumes a separate "api" security group defined alongside the web SG
}
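Values exposed by a module's output blocks are read with module.NAME.OUTPUT. For instance, the root configuration could surface the web server's IP like this (the output name here is arbitrary, reusing the public_ip output defined above):
output "web_server_public_ip" {
  description = "Public IP of the web server module"
  value       = module.web_server.public_ip
}
Note that terraform init has to be re-run after adding a new module block so Terraform can install it.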
Remote State Management
# backend.tf
terraform {
backend "s3" {
bucket = "mycompany-terraform-state"
key = "prod/infrastructure/terraform.tfstate"
region = "us-east-1"
encrypt = true
dynamodb_table = "terraform-state-lock"
}
}
# Create S3 bucket and DynamoDB table for state (run this first with local backend)
resource "aws_s3_bucket" "terraform_state" {
bucket = "mycompany-terraform-state"
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket_versioning" "terraform_state" {
bucket = aws_s3_bucket.terraform_state.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
bucket = aws_s3_bucket.terraform_state.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-state-lock"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
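Once state lives in S3, other Terraform configurations can read its outputs through the terraform_remote_state data source. A minimal sketch, reusing the bucket and key from the backend block above (the "network" label is just an example):
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "mycompany-terraform-state"
    key    = "prod/infrastructure/terraform.tfstate"
    region = "us-east-1"
  }
}

# Example reference, assuming that configuration exports a vpc_id output as ours does:
# vpc_id = data.terraform_remote_state.network.outputs.vpc_id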
Workspaces for Environments
# Create workspaces
terraform workspace new dev
terraform workspace new staging
terraform workspace new prod
# List workspaces
terraform workspace list
# Switch workspace
terraform workspace select prod
# Show current workspace
terraform workspace show
# Using workspace in configuration
locals {
environment = terraform.workspace
instance_type = {
dev = "t3.micro"
staging = "t3.small"
prod = "t3.medium"
}
}
resource "aws_instance" "web" {
instance_type = local.instance_type[local.environment]
tags = {
Environment = local.environment
}
}
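One caveat: terraform.workspace returns "default" unless you have explicitly switched workspaces, so the map lookup above would fail there. A defensive variant (a sketch using the same values) falls back to a small instance type:
locals {
  instance_type = lookup({
    dev     = "t3.micro"
    staging = "t3.small"
    prod    = "t3.medium"
  }, terraform.workspace, "t3.micro") # falls back to t3.micro on the "default" workspace
}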
CI/CD Integration
# .github/workflows/terraform.yml
name: Terraform

on:
  push:
    branches: [main]
    paths:
      - 'terraform/**'
  pull_request:
    branches: [main]
    paths:
      - 'terraform/**'

env:
  TF_VERSION: 1.6.0
  AWS_REGION: us-east-1

jobs:
  terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: terraform
    steps:
      - uses: actions/checkout@v4
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Terraform Format
        run: terraform fmt -check -recursive
      - name: Terraform Init
        run: terraform init
      - name: Terraform Validate
        run: terraform validate
      - name: Terraform Plan
        id: plan
        run: terraform plan -no-color -out=tfplan
        continue-on-error: true
      - name: Update Pull Request
        uses: actions/github-script@v7
        if: github.event_name == 'pull_request'
        with:
          script: |
            const output = `#### Terraform Plan \`${{ steps.plan.outcome }}\`
            <details><summary>Show Plan</summary>

            \`\`\`terraform
            ${{ steps.plan.outputs.stdout }}
            \`\`\`

            </details>`;
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            });
      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -auto-approve tfplan
Common Mistakes to Avoid
1. Committing State Files
# .gitignore
*.tfstate
*.tfstate.*
.terraform/
*.tfvars
!example.tfvars
# Note: do commit .terraform.lock.hcl, since it pins exact provider versions for the whole team
2. Hardcoding Secrets
# BAD: Secrets in code
provider "aws" {
access_key = "AKIAIOSFODNN7EXAMPLE"
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
# GOOD: Use environment variables
# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
# Or use IAM roles for EC2/ECS/Lambda
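For secrets that must be passed into the configuration itself (database passwords, API tokens), declare them as sensitive variables and supply them via TF_VAR_ environment variables or a secrets manager rather than committing them. A minimal sketch, where db_password is just an example name:
variable "db_password" {
  description = "Master password for the database"
  type        = string
  sensitive   = true # redacted from plan and apply output
}

# Supply it without writing it to disk, for example:
# export TF_VAR_db_password="..."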
3. Not Using Version Constraints
# BAD: No version constraints
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
# GOOD: Pin versions
terraform {
required_version = ">= 1.6.0, < 2.0.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0" # Allows 5.x but not 6.0
}
}
}
4. Skipping Plan Before Apply
# BAD: Direct apply
terraform apply -auto-approve
# GOOD: Review plan first
terraform plan -out=tfplan
# Review the plan output
terraform apply tfplan
5. Not Using Prevent Destroy
# Protect critical resources
resource "aws_rds_instance" "production" {
# ... configuration ...
lifecycle {
prevent_destroy = true
}
}
Advanced Patterns
Dynamic Blocks
variable "ingress_rules" {
type = list(object({
port = number
description = string
cidr_blocks = list(string)
}))
}
resource "aws_security_group" "example" {
name = "example"
vpc_id = aws_vpc.main.id
dynamic "ingress" {
for_each = var.ingress_rules
content {
from_port = ingress.value.port
to_port = ingress.value.port
protocol = "tcp"
cidr_blocks = ingress.value.cidr_blocks
description = ingress.value.description
}
}
}
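The rules themselves can then live in terraform.tfvars (or per-environment tfvars files). For example, mirroring the web security group above:
ingress_rules = [
  {
    port        = 80
    description = "HTTP"
    cidr_blocks = ["0.0.0.0/0"]
  },
  {
    port        = 443
    description = "HTTPS"
    cidr_blocks = ["0.0.0.0/0"]
  }
]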
For Each vs Count
# for_each - Better for named resources
variable "instances" {
type = map(object({
instance_type = string
subnet_id = string
}))
}
resource "aws_instance" "servers" {
for_each = var.instances
ami = data.aws_ami.amazon_linux.id
instance_type = each.value.instance_type
subnet_id = each.value.subnet_id
tags = {
Name = each.key
}
}
# Referencing: aws_instance.servers["web"].id
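The instances map could be populated like this (the subnet IDs are placeholders for illustration):
instances = {
  web = {
    instance_type = "t3.small"
    subnet_id     = "subnet-0123456789abcdef0" # placeholder ID
  }
  api = {
    instance_type = "t3.medium"
    subnet_id     = "subnet-0aaaabbbbccccdddd" # placeholder ID
  }
}
Unlike count, adding or removing a key only affects that one instance, whereas removing an element from a count-indexed list shifts the remaining indices and can force unrelated resources to be recreated.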
Best Practices Summary
- Use remote state with locking (S3 + DynamoDB or Terraform Cloud)
- Version control everything except state files and secrets
- Use modules for reusable components
- Pin provider versions to avoid unexpected changes
- Always run plan before apply
- Use workspaces or directories for environment separation
- Tag all resources for cost tracking and organization
- Use lifecycle rules to protect critical resources
- Integrate with CI/CD for automated deployments
Final Thoughts
Terraform is the backbone of modern Infrastructure as Code. It gives developers and DevOps teams the power to build, test, and scale infrastructure safely and consistently across any cloud provider.
Start small — deploy a single resource — then move toward reusable modules, remote state management, and CI/CD automation. The declarative approach means you define what you want, and Terraform figures out how to get there.
To integrate Terraform into your deployment pipeline, check out Continuous Deployment with GitLab CI/CD from Scratch. For deploying containerized applications, see Kubernetes 101: Deploying and Managing Containerised Apps. For official tutorials and advanced examples, explore the Terraform documentation.