
AWS offers over 200 services, and navigating that catalog as a developer is overwhelming. Most tutorials either cover a single service in isolation or try to explain the entire ecosystem at once. Neither approach helps you answer the question that actually matters: which services do you need to build, deploy, and run a production application?
The reality is that most applications use fewer than 15 AWS services. The rest are specialized tools for niche use cases, enterprise compliance, or problems you do not have yet. This guide cuts through the noise and focuses on the AWS services that developers encounter most frequently, organized by the problems they solve. Whether you are deploying your first application or evaluating services for a new project, this is the practical map of AWS for developers.
Compute: Where Your Code Runs
Compute is where most developers start with AWS. You need somewhere to run your application, and AWS gives you several options at different abstraction levels.
EC2 (Elastic Compute Cloud)
EC2 gives you virtual servers. You choose the operating system, instance size (CPU, memory), and region, then SSH in and run whatever you want. EC2 is the most flexible compute option because you control everything — but that flexibility means you also manage everything: OS updates, security patches, scaling, and availability.
Use EC2 when: You need full control over the server environment, you are running software that does not fit into containers or serverless (legacy applications, specific OS requirements), or you need GPU instances for machine learning workloads.
Skip EC2 when: You are building a new application and want to minimize operational overhead. Containers (ECS/Fargate) or serverless (Lambda) handle scaling and patching for you.
Cost awareness: EC2 charges by the hour (or second for Linux instances). A t3.medium instance (2 vCPU, 4 GB RAM) in us-east-1 costs roughly $30/month running 24/7. Reserved instances or savings plans cut costs by 30-60% for predictable workloads.
Lambda
Lambda runs your code without provisioning servers. You upload a function, define a trigger (HTTP request, queue message, schedule), and AWS handles everything else — scaling, patching, and availability. You pay only for the compute time your function actually uses, measured in milliseconds.
```python
import json

def handler(event, context):
    body = json.loads(event['body'])
    order = process_order(body)  # application-specific business logic
    return {
        'statusCode': 201,
        'body': json.dumps({'orderId': order['id']})
    }
```
Use Lambda when: Your workload is event-driven (API requests, queue processing, scheduled tasks), individual requests complete within 15 minutes, and traffic is spiky or unpredictable. Lambda excels at workloads with idle periods because you pay nothing when the function is not running.
Skip Lambda when: Your application needs persistent connections (WebSockets), processes take longer than 15 minutes, or you need predictable sub-10ms latency (cold starts add 100ms-1s depending on runtime and package size). For a deeper look at patterns and gotchas, see serverless Node.js on Lambda.
ECS and Fargate
ECS (Elastic Container Service) runs Docker containers on AWS. Fargate is the serverless option within ECS — you define your container image and resource requirements, and AWS manages the underlying servers. ECS with EC2 launch type gives you more control (and lower cost at scale) but requires managing the EC2 instances yourself.
Use ECS/Fargate when: Your application is containerized and needs to run continuously, you want more control than Lambda provides (longer execution, persistent processes, larger memory), or you are running multiple services that benefit from container orchestration.
Skip ECS/Fargate when: Lambda handles your workload — containers add operational overhead that is unnecessary for simple event-driven functions.
Which Compute Service to Start With
For most new applications, Lambda is the starting point. It minimizes operational overhead and costs nothing during idle periods. If your application outgrows Lambda’s constraints (execution time, cold starts, persistent connections), move to Fargate. Reserve EC2 for workloads that genuinely need full server control.
Storage: Where Your Data Lives
S3 (Simple Storage Service)
S3 is object storage — it stores files (images, videos, backups, logs, static assets) and serves them over HTTP. S3 is virtually unlimited in capacity, extremely durable (99.999999999% durability), and inexpensive for storage ($0.023/GB/month for standard storage).
```python
import boto3

s3 = boto3.client('s3')

# Upload a file
s3.upload_file('report.pdf', 'my-app-bucket', 'reports/2026/april/report.pdf')

# Generate a pre-signed URL for temporary access
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-app-bucket', 'Key': 'reports/2026/april/report.pdf'},
    ExpiresIn=3600  # 1 hour
)
```
Use S3 for: Static file hosting, user uploads, application backups, log archival, and serving static websites. Nearly every AWS application uses S3 in some capacity. For production usage patterns, S3 best practices around lifecycle policies and storage classes directly impact your AWS bill.
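Since lifecycle policies are one of the main levers on the S3 bill, here is a minimal sketch of one. The bucket name, prefix, and timings are illustrative assumptions, not recommendations:

```python
# Illustrative lifecycle rule: move objects under logs/ to Infrequent Access
# after 30 days and delete them after one year.
lifecycle_rules = {
    'Rules': [{
        'ID': 'archive-then-expire-logs',
        'Filter': {'Prefix': 'logs/'},
        'Status': 'Enabled',
        'Transitions': [{'Days': 30, 'StorageClass': 'STANDARD_IA'}],
        'Expiration': {'Days': 365},
    }]
}

# Applying it requires AWS credentials:
# import boto3
# s3 = boto3.client('s3')
# s3.put_bucket_lifecycle_configuration(
#     Bucket='my-app-bucket',
#     LifecycleConfiguration=lifecycle_rules,
# )
```

Logs transitioned to Infrequent Access cost roughly half as much to store, and the expiration rule keeps the bucket from growing without bound.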
EBS (Elastic Block Store)
EBS provides persistent block storage for EC2 instances — think of it as a virtual hard drive attached to a server. EBS volumes persist independently of the EC2 instance, so your data survives instance stops and restarts.
Use EBS when: Your EC2 instance needs persistent disk storage for databases, application state, or file-system-dependent workloads.
You rarely interact with EBS directly unless you manage EC2 instances. If you use Fargate or Lambda, EBS is not part of your stack.
Databases: Managed vs Self-Managed
RDS (Relational Database Service)
RDS provides managed relational databases — PostgreSQL, MySQL, MariaDB, Oracle, and SQL Server. AWS handles backups, patching, replication, and failover. You configure the instance size, storage, and multi-AZ deployment, then connect with a standard database client.
```python
import psycopg2

conn = psycopg2.connect(
    host='my-app-db.abc123.us-east-1.rds.amazonaws.com',
    dbname='myapp',
    user='app_user',
    password='from-secrets-manager',  # fetch at runtime, never hardcode
    port=5432
)
```
Use RDS when: Your application needs a relational database and you do not want to manage backups, patching, or replication yourself. RDS PostgreSQL or MySQL covers the majority of application database needs.
Cost awareness: A db.t3.medium PostgreSQL instance (2 vCPU, 4 GB RAM) costs roughly $50/month. Multi-AZ deployment (recommended for production) doubles the instance cost but provides automatic failover.
DynamoDB
DynamoDB is a managed NoSQL database with single-digit millisecond response times at any scale. It is schemaless (key-value and document model), scales automatically, and charges based on read/write throughput and storage.
Use DynamoDB when: Your access patterns are simple key-value lookups or queries on a known partition key, you need consistent low-latency at scale, or your workload is serverless (DynamoDB integrates seamlessly with Lambda).
Skip DynamoDB when: Your queries require joins, aggregations, or ad-hoc filtering across multiple fields — these patterns fight DynamoDB’s data model. Use RDS or Aurora instead.
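To make the "simple key-value lookups" point concrete, here is a sketch of the access pattern. The table name, partition key, and attributes are hypothetical:

```python
def order_item(order_id, status, total_cents):
    """Build a DynamoDB item in the low-level attribute-value format,
    keyed on a hypothetical 'orderId' partition key."""
    return {
        'orderId': {'S': order_id},
        'status': {'S': status},
        'totalCents': {'N': str(total_cents)},  # numbers are sent as strings
    }

item = order_item('o-1001', 'NEW', 4999)

# With AWS credentials configured, reads and writes are single-key operations:
# import boto3
# dynamodb = boto3.client('dynamodb')
# dynamodb.put_item(TableName='Orders', Item=item)
# resp = dynamodb.get_item(TableName='Orders', Key={'orderId': {'S': 'o-1001'}})
```

If your queries look like these single-key calls, DynamoDB fits; if you find yourself wanting `JOIN` or `GROUP BY`, that is the signal to reach for RDS or Aurora instead.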
Aurora
Aurora is AWS’s cloud-native relational database, compatible with PostgreSQL and MySQL. It provides up to 5x throughput over standard MySQL and 3x over standard PostgreSQL, with automatic storage scaling up to 128 TB and up to 15 read replicas.
Use Aurora when: You need more performance or availability than standard RDS provides, or your database storage needs to scale beyond RDS limits. Aurora Serverless v2 automatically scales capacity based on demand, which works well for variable workloads.
Networking and Content Delivery
API Gateway
API Gateway creates and manages REST and WebSocket APIs. It handles request routing, authentication, throttling, and CORS, and integrates directly with Lambda for fully serverless APIs.
Use API Gateway when: You are building a serverless API with Lambda, you need managed authentication and rate limiting, or you want to expose internal services through a controlled API layer.
CloudFront
CloudFront is a CDN (Content Delivery Network) that caches content at edge locations worldwide. It serves static assets (images, CSS, JavaScript) and dynamic API responses from the location closest to the user, reducing latency.
Use CloudFront when: Your users are geographically distributed, you serve static assets that benefit from edge caching, or you want to reduce load on your origin servers. CloudFront pairs naturally with S3 for static asset delivery.
Route 53
Route 53 is DNS management. You register domains, create DNS records, and configure health checks and routing policies (latency-based, weighted, failover).
Use Route 53 when: You need DNS management for your AWS-hosted applications. Most developers interact with Route 53 once during initial setup and rarely after.
Monitoring and Logging
CloudWatch
CloudWatch collects metrics, logs, and traces from your AWS resources and applications. It is the default monitoring service for everything running on AWS.
```python
import boto3

cloudwatch = boto3.client('cloudwatch')

# Publish a custom metric
cloudwatch.put_metric_data(
    Namespace='MyApp',
    MetricData=[{
        'MetricName': 'OrdersProcessed',
        'Value': 1,
        'Unit': 'Count'
    }]
)
```
Set up CloudWatch alarms and dashboards early in your project — retroactively adding monitoring after an incident is far more stressful than setting it up proactively.
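An alarm on that custom metric can be sketched with `put_metric_alarm`. The alarm name, threshold, and evaluation windows below are illustrative assumptions:

```python
# Alarm when no orders are processed for three consecutive 5-minute windows.
alarm = {
    'AlarmName': 'orders-stalled',              # illustrative name
    'Namespace': 'MyApp',
    'MetricName': 'OrdersProcessed',
    'Statistic': 'Sum',
    'Period': 300,                              # evaluate 5-minute windows
    'EvaluationPeriods': 3,
    'Threshold': 1,
    'ComparisonOperator': 'LessThanThreshold',  # fire when orders stop flowing
    'TreatMissingData': 'breaching',            # no data also counts as stalled
}

# With AWS credentials configured:
# import boto3
# boto3.client('cloudwatch').put_metric_alarm(**alarm)
```

Note `TreatMissingData`: for a liveness-style alarm, an absence of data points usually means the system is down, so treating missing data as breaching is the safer default.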
Security and Access Management
IAM (Identity and Access Management)
IAM controls who can access which AWS resources. Every API call to AWS is authenticated and authorized through IAM. Understanding IAM policies is non-negotiable for any developer working with AWS.
Key concepts:
- Users: Human identities with long-lived credentials (for console access and local development)
- Roles: Temporary credentials assumed by services (Lambda functions, EC2 instances, ECS tasks)
- Policies: JSON documents that define permissions (allow or deny actions on resources)
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-app-bucket/*"
    }
  ]
}
```
The principle of least privilege: Grant only the permissions each service actually needs. A Lambda function that reads from S3 and writes to DynamoDB should have a role that allows exactly those actions on exactly those resources — nothing more.
Secrets Manager
Secrets Manager stores and rotates sensitive data — database passwords, API keys, and encryption keys. Your application retrieves secrets at runtime instead of storing them in environment variables or config files.
Use Secrets Manager when: You have any secrets that should not be hardcoded or committed to version control. The cost ($0.40/secret/month) is negligible compared to the risk of leaked credentials.
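Retrieval at runtime is a single API call plus JSON parsing. A minimal sketch, assuming a hypothetical secret ID `my-app/db` and illustrative field names:

```python
import json

def parse_db_secret(secret_string):
    """Parse the JSON payload Secrets Manager returns for a database secret.
    The 'username' and 'password' field names are illustrative."""
    secret = json.loads(secret_string)
    return secret['username'], secret['password']

# At runtime the payload comes from the service (requires AWS credentials):
# import boto3
# sm = boto3.client('secretsmanager')
# raw = sm.get_secret_value(SecretId='my-app/db')['SecretString']
user, password = parse_db_secret('{"username": "app_user", "password": "s3cret"}')
```

Fetching the secret inside the application, rather than baking it into an environment variable at deploy time, is what makes automatic rotation possible.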
Cognito
Cognito provides user authentication and authorization. It handles sign-up, sign-in, password recovery, MFA, and social login (Google, Facebook, Apple). Cognito integrates with API Gateway to authenticate API requests without custom auth code.
Use Cognito when: You need user authentication for a web or mobile application and do not want to build it from scratch. Cognito’s free tier covers 50,000 monthly active users.
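As a sense of the API surface, sign-up is one call to the `cognito-idp` client. The app client ID and user details below are hypothetical:

```python
def build_sign_up(client_id, email, password):
    """Parameters for the cognito-idp sign_up call; the app client ID
    comes from your Cognito user pool and is hypothetical here."""
    return {
        'ClientId': client_id,
        'Username': email,
        'Password': password,
        'UserAttributes': [{'Name': 'email', 'Value': email}],
    }

params = build_sign_up('example-app-client-id', 'dev@example.com', 'CorrectHorse9!')

# With AWS credentials configured:
# import boto3
# boto3.client('cognito-idp').sign_up(**params)
```

Cognito then handles the confirmation email, password policy enforcement, and token issuance, which is exactly the code you avoid writing yourself.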
Infrastructure as Code
CloudFormation and CDK
CloudFormation defines AWS infrastructure in YAML or JSON templates. CDK (Cloud Development Kit) lets you define the same infrastructure in TypeScript, Python, Java, or Go — then synthesizes it to CloudFormation.
```typescript
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';

// Inside a cdk.Stack subclass constructor:
const fn = new lambda.Function(this, 'OrderHandler', {
  runtime: lambda.Runtime.NODEJS_20_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('lambda'),
});

new apigateway.LambdaRestApi(this, 'OrderApi', {
  handler: fn,
});
```
CDK is the recommended path for developers because it uses real programming languages with type checking, autocomplete, and reusable constructs. For a step-by-step guide, deploying to AWS with CDK covers the full setup-to-deployment workflow. Teams evaluating serverless deployment tools should also consider SST and SAM as CDK alternatives.
Message Queues and Event Processing
SQS (Simple Queue Service)
SQS is a managed message queue for decoupling services. Producers send messages, consumers process them asynchronously. SQS handles scaling, message retention (up to 14 days), and dead-letter queues for failed messages.
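The consumer side follows a standard pattern: long-poll, process, and delete only after successful handling, so a crash mid-batch redelivers rather than loses messages. A minimal sketch, with the queue URL in the usage comment being hypothetical:

```python
import json

def drain_queue(sqs, queue_url, handle):
    """Long-poll an SQS queue, process each message, and delete it only
    after successful handling. Returns the number of messages processed."""
    processed = 0
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,   # batch up to 10 messages per request
            WaitTimeSeconds=20,       # long polling reduces empty responses
        )
        messages = resp.get('Messages', [])
        if not messages:
            return processed
        for msg in messages:
            handle(json.loads(msg['Body']))
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])
            processed += 1

# Usage against a real queue (requires AWS credentials):
# import boto3
# drain_queue(boto3.client('sqs'),
#             'https://sqs.us-east-1.amazonaws.com/123456789012/orders', print)
```

If `handle` raises, the message is never deleted; after its visibility timeout it reappears, and after repeated failures SQS can route it to a dead-letter queue.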
SNS (Simple Notification Service)
SNS is a pub/sub service. Publishers send messages to topics, and subscribers (Lambda functions, SQS queues, email addresses, HTTP endpoints) receive them. SNS is the glue that fans out events to multiple consumers.
EventBridge
EventBridge is a serverless event bus for routing events between AWS services, SaaS applications, and your own applications. It supports content-based filtering (route events based on their content) and scheduling (run events on a cron schedule).
Practical combination: SNS fans out events to multiple SQS queues, each consumed by a different service. EventBridge adds intelligent routing and filtering on top. For most applications, SQS alone handles basic async processing, and you add SNS or EventBridge when you need fan-out or content-based routing.
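Publishing to EventBridge is one `put_events` call per batch of entries. The source, detail-type, and payload fields below are illustrative assumptions:

```python
import json

def order_event(detail, bus_name='default'):
    """Build a put_events entry; the source and detail-type values
    are illustrative, not a fixed convention."""
    return {
        'Source': 'myapp.orders',
        'DetailType': 'OrderCreated',
        'Detail': json.dumps(detail),  # EventBridge expects a JSON string
        'EventBusName': bus_name,
    }

entry = order_event({'orderId': 'o-1001', 'totalCents': 4999})

# With AWS credentials configured:
# import boto3
# boto3.client('events').put_events(Entries=[entry])
```

EventBridge rules then match on `Source`, `DetailType`, or fields inside `Detail`, which is what enables the content-based routing described above.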
The Services You Can Ignore (For Now)
These services exist for specific use cases. Unless you have those use cases, skip them:
| Service | What It Does | When You Need It |
|---|---|---|
| Kinesis | Real-time data streaming | Processing millions of events/second (logs, clickstreams) |
| Redshift | Data warehouse | Running complex analytics queries across terabytes |
| Elastic Beanstalk | PaaS deployment | Want Heroku-like simplicity on AWS (consider Fargate instead) |
| Step Functions | Workflow orchestration | Complex multi-step workflows with branching and retries |
| AppSync | Managed GraphQL | Need managed GraphQL with real-time subscriptions |
| Glue | ETL service | Data pipeline transformations between sources |
| SageMaker | ML platform | Training and deploying machine learning models |
Real-World Scenario: Building a SaaS Application on AWS
A two-person startup builds a project management SaaS. They need user authentication, a REST API, a database, file uploads, and email notifications. Their AWS architecture uses exactly 8 services:
- Cognito handles user sign-up, sign-in, and JWT tokens
- API Gateway exposes REST endpoints with Cognito authorization
- Lambda runs the business logic for each endpoint
- RDS PostgreSQL stores projects, tasks, and user data
- S3 stores file attachments uploaded to tasks
- SES (Simple Email Service) sends notification emails
- CloudWatch collects logs and metrics from Lambda and API Gateway
- CDK defines all infrastructure as TypeScript code
Monthly cost at launch (under 1,000 users): approximately $50-80, dominated by the RDS instance. Lambda, API Gateway, S3, and Cognito stay within free tier limits at this scale. As the application grows to 10,000 users, they upgrade the RDS instance and add CloudFront for static assets, bringing costs to roughly $200-300/month.
The team avoids EC2 entirely, has no servers to patch, and deploys by running cdk deploy. When they need to scale a specific Lambda function, they adjust the reserved concurrency in the CDK stack and redeploy. The entire infrastructure is defined in roughly 200 lines of TypeScript.
When to Use AWS
- You need production-grade infrastructure with high availability and global reach
- Your application requirements map to managed services (databases, queues, auth) that reduce operational burden
- You want to start small and scale incrementally — AWS pricing is consumption-based for most services
- Your team or organization already has AWS accounts and expertise
When NOT to Choose AWS
- Your application is a simple static site or blog — platforms like Vercel, Netlify, or even shared hosting are simpler and cheaper
- Budget is extremely tight and the application is small — a single $5/month VPS on another provider runs a complete application stack
- Your team has no AWS experience and the project deadline is tight — the learning curve for IAM, networking, and service configuration is real
- Vendor lock-in is a primary concern — some AWS services (Lambda, DynamoDB, Cognito) create dependencies that are expensive to migrate away from
Common Mistakes with AWS for Developers
- Starting with EC2 when Lambda or Fargate would eliminate server management entirely — the default should be serverless unless you have a specific reason for servers
- Not setting up billing alerts before experimenting — a misconfigured NAT gateway or accidentally provisioned large instance can generate unexpected charges
- Over-permissioning IAM roles with `*` actions and `*` resources to “make things work” — this creates security vulnerabilities that are painful to fix later
- Running a single RDS instance without Multi-AZ for production workloads — the first database failure will cost more in downtime than the Multi-AZ surcharge
- Not using infrastructure as code from the start — clicking through the console is fine for learning, but production infrastructure should be reproducible and version-controlled
- Ignoring AWS’s free tier limits and being surprised by charges — understand what is free, what is cheap, and what gets expensive at scale
- Choosing services based on AWS marketing rather than actual requirements — not every application needs Kinesis, Step Functions, or a multi-region architecture
Getting Started with AWS for Developers
AWS for developers comes down to a manageable set of core services: Lambda or Fargate for compute, RDS or DynamoDB for databases, S3 for storage, API Gateway for APIs, CloudWatch for monitoring, IAM for security, and CDK for infrastructure as code. Master these first, and add specialized services only when a concrete requirement demands them.
Start with the smallest architecture that works — a Lambda function behind API Gateway talking to an RDS database. Deploy it with CDK so your infrastructure is reproducible from day one. Add services incrementally as your application’s needs evolve, and always set up billing alerts before experimenting with anything new.