
In today’s cloud-native world, deploying a Java backend to production goes far beyond running java -jar. Whether you’re building SaaS apps, APIs, or microservices, knowing how to containerize your Spring Boot app with Docker and deploy it to Kubernetes (K8s) is an essential skill, and understanding the full pipeline from code to cluster is what makes your applications resilient and scalable.
In this comprehensive guide, you’ll learn how to go from a working Spring Boot application to a containerized service running on Kubernetes, including health checks, ConfigMaps, Secrets, resource management, and production-ready configurations.
Why Docker and Kubernetes?
Docker lets you package your app with all its dependencies into a portable image that runs consistently across environments. No more “it works on my machine” problems.
Kubernetes (K8s) orchestrates containers across clusters, handling:
- Horizontal scaling – automatically add or remove replicas based on load
- Self-healing – restart failed containers, replace unresponsive pods
- Load balancing – distribute traffic across healthy instances
- Rolling updates – deploy new versions with zero downtime
- Service discovery – containers find each other automatically
Together, they give you a deployment platform that keeps production applications available and lets them scale with traffic.
Step 1: Prepare Your Spring Boot Application
Before containerizing, ensure your Spring Boot app is production-ready. Start by adding Spring Boot Actuator, whose health endpoints Kubernetes will use for its probes:
<!-- pom.xml -->
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>postgresql</artifactId>
        <scope>runtime</scope>
    </dependency>
</dependencies>
Configure actuator endpoints in application.yml:
# application.yml
server:
  port: 8080
  shutdown: graceful

spring:
  application:
    name: springboot-app
  lifecycle:
    timeout-per-shutdown-phase: 30s
  datasource:
    url: jdbc:postgresql://${DB_HOST:localhost}:${DB_PORT:5432}/${DB_NAME:appdb}
    username: ${DB_USERNAME:postgres}
    password: ${DB_PASSWORD:secret}
    hikari:
      maximum-pool-size: ${DB_POOL_SIZE:10}
      minimum-idle: 2
      idle-timeout: 30000
  jpa:
    hibernate:
      ddl-auto: validate
    open-in-view: false

management:
  endpoints:
    web:
      exposure:
        include: health,info,metrics,prometheus
  endpoint:
    health:
      show-details: when_authorized
      probes:
        enabled: true
  health:
    livenessState:
      enabled: true
    readinessState:
      enabled: true

logging:
  level:
    root: INFO
    com.yourcompany: DEBUG
  pattern:
    console: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
This configuration enables graceful shutdown, Kubernetes health probes, and externalized configuration through environment variables.
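To confirm the probe endpoints before touching Docker or Kubernetes, you can hit them locally. The exact JSON may vary by Spring Boot version, but a healthy app should report UP on both:

# Quick local sanity check (assumes the app is running on port 8080,
# e.g. via ./mvnw spring-boot:run)
curl -s http://localhost:8080/actuator/health/liveness
# {"status":"UP"}
curl -s http://localhost:8080/actuator/health/readiness
# {"status":"UP"}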
Step 2: Create a Production-Ready Dockerfile
A well-crafted Dockerfile is crucial for security and performance. Here’s an optimized multi-stage build:
# Dockerfile
# Stage 1: Build with Maven
FROM eclipse-temurin:17-jdk-alpine AS builder
WORKDIR /build

# Copy only pom.xml first to cache dependencies
COPY pom.xml .
COPY .mvn .mvn
COPY mvnw .

# Download dependencies (cached unless pom.xml changes)
RUN chmod +x mvnw && ./mvnw dependency:go-offline -B

# Copy source code
COPY src src

# Build the application
RUN ./mvnw clean package -DskipTests

# Extract layers for better caching
RUN java -Djarmode=layertools -jar target/*.jar extract --destination extracted

# Stage 2: Production image
FROM eclipse-temurin:17-jre-alpine

# Security: Run as non-root user
RUN addgroup -g 1001 -S appgroup && \
    adduser -u 1001 -S appuser -G appgroup

WORKDIR /app

# Copy layers in order of change frequency
COPY --from=builder /build/extracted/dependencies/ ./
COPY --from=builder /build/extracted/spring-boot-loader/ ./
COPY --from=builder /build/extracted/snapshot-dependencies/ ./
COPY --from=builder /build/extracted/application/ ./

# Set ownership
RUN chown -R appuser:appgroup /app

# Switch to non-root user
USER appuser

# JVM tuning for containers
ENV JAVA_OPTS="-XX:+UseContainerSupport \
    -XX:MaxRAMPercentage=75.0 \
    -XX:+UseG1GC \
    -XX:+UseStringDeduplication \
    -Djava.security.egd=file:/dev/./urandom"

EXPOSE 8080

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD wget --no-verbose --tries=1 --spider http://localhost:8080/actuator/health/liveness || exit 1

ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS org.springframework.boot.loader.launch.JarLauncher"]
This Dockerfile incorporates several best practices:
- Multi-stage build reduces final image size
- Layer extraction improves Docker caching (you can inspect the layers yourself, as shown below)
- Non-root user enhances security
- Container-aware JVM settings optimize memory usage
- Health check provides container-level monitoring
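If you want to see exactly what the layer-extraction step is working with, Spring Boot’s layertools mode can list the layers baked into the fat jar. The output order matches the COPY order in the Dockerfile above:

# Inspect the layers Spring Boot will extract (run after mvn package)
java -Djarmode=layertools -jar target/*.jar list
# dependencies
# spring-boot-loader
# snapshot-dependencies
# application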
Step 3: Build and Test Docker Image Locally
Build and test your containerized application:
# Build the image
docker build -t springboot-app:1.0.0 .

# Check image size (should be under 300MB with the Alpine base)
docker images springboot-app

# Run locally with environment variables
docker run -d \
  --name springboot-test \
  -p 8080:8080 \
  -e DB_HOST=host.docker.internal \
  -e DB_PORT=5432 \
  -e DB_NAME=appdb \
  -e DB_USERNAME=postgres \
  -e DB_PASSWORD=secret \
  springboot-app:1.0.0

# Check logs
docker logs -f springboot-test

# Test health endpoint
curl http://localhost:8080/actuator/health
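While the test container is still running, two more checks are worth doing: confirm the process really runs as the non-root user from the Dockerfile, and review the layer sizes the multi-stage build produced:

# Verify the container runs as the non-root user (expect: appuser)
docker exec springboot-test whoami

# Review layer sizes produced by the multi-stage build
docker history springboot-app:1.0.0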
# Cleanup
docker stop springboot-test && docker rm springboot-test
Step 4: Push to Container Registry
Push your image to a registry accessible by your Kubernetes cluster:
# Docker Hub
docker tag springboot-app:1.0.0 yourusername/springboot-app:1.0.0
docker push yourusername/springboot-app:1.0.0
# GitHub Container Registry
docker tag springboot-app:1.0.0 ghcr.io/yourusername/springboot-app:1.0.0
echo $GITHUB_TOKEN | docker login ghcr.io -u yourusername --password-stdin
docker push ghcr.io/yourusername/springboot-app:1.0.0
# AWS ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
docker tag springboot-app:1.0.0 123456789.dkr.ecr.us-east-1.amazonaws.com/springboot-app:1.0.0
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/springboot-app:1.0.0
Step 5: Create Kubernetes Manifests
Create a complete set of production-ready Kubernetes manifests. Organize them in a k8s/ directory:
k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: springboot-app
  labels:
    app: springboot-app
    environment: production
k8s/configmap.yaml – Non-sensitive configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: springboot-app-config
  namespace: springboot-app
data:
  DB_HOST: "postgres-service"
  DB_PORT: "5432"
  DB_NAME: "appdb"
  DB_POOL_SIZE: "20"
  SPRING_PROFILES_ACTIVE: "production"
  LOGGING_LEVEL_ROOT: "INFO"
k8s/secret.yaml – Sensitive data (use external secrets manager in production)
apiVersion: v1
kind: Secret
metadata:
  name: springboot-app-secrets
  namespace: springboot-app
type: Opaque
stringData:
  DB_USERNAME: "app_user"
  DB_PASSWORD: "super-secure-password-here"
k8s/deployment.yaml – Full production deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springboot-app
  namespace: springboot-app
  labels:
    app: springboot-app
    version: "1.0.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: springboot-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: springboot-app
        version: "1.0.0"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      serviceAccountName: springboot-app
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 1001
      terminationGracePeriodSeconds: 60
      containers:
        - name: springboot-app
          image: yourusername/springboot-app:1.0.0
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          envFrom:
            - configMapRef:
                name: springboot-app-config
            - secretRef:
                name: springboot-app-secrets
          env:
            - name: JAVA_OPTS
              value: "-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -XX:+UseG1GC"
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1024Mi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 30
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 10"]
          volumeMounts:
            - name: tmp
              mountPath: /tmp
      volumes:
        - name: tmp
          emptyDir: {}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: springboot-app
                topologyKey: kubernetes.io/hostname
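Note how the resources block and JAVA_OPTS interact: with MaxRAMPercentage=75.0 and the 1024Mi memory limit above, the JVM heap tops out around 768 MiB, leaving headroom for metaspace, threads, and off-heap buffers. As a rough check, you can ask a fresh JVM inside a running pod what heap ceiling it derives from the container limit:

# Check the heap ceiling the JVM derived from the container limit
kubectl -n springboot-app exec deploy/springboot-app -- \
  java -XX:MaxRAMPercentage=75.0 -XX:+PrintFlagsFinal -version | grep -i maxheapsize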
k8s/service.yaml – Internal service
apiVersion: v1
kind: Service
metadata:
  name: springboot-app-service
  namespace: springboot-app
  labels:
    app: springboot-app
spec:
  type: ClusterIP
  selector:
    app: springboot-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
k8s/ingress.yaml – External access with TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: springboot-app-ingress
  namespace: springboot-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-rpm: "100"  # 100 requests per minute per client IP
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.yourcompany.com
      secretName: springboot-app-tls
  rules:
    - host: api.yourcompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: springboot-app-service
                port:
                  number: 80
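Once DNS points at your ingress controller and cert-manager has issued the certificate (this assumes a letsencrypt-prod ClusterIssuer already exists in your cluster), you can smoke-test the public endpoint:

# Confirm the certificate was issued (requires cert-manager CRDs)
kubectl -n springboot-app get certificate

# Hit the app through the ingress
curl -i https://api.yourcompany.com/actuator/health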
k8s/hpa.yaml – Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: springboot-app-hpa
  namespace: springboot-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: springboot-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
        - type: Pods
          value: 4
          periodSeconds: 15
      selectPolicy: Max
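To watch the HPA react, you can generate load from a throwaway pod (a crude sketch; use a proper load-testing tool for anything serious) and observe the replica count climb:

# Crude load generator hitting the internal service
kubectl -n springboot-app run load-generator --image=busybox --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://springboot-app-service; done"

# Watch the autoscaler respond
kubectl -n springboot-app get hpa springboot-app-hpa -w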
k8s/serviceaccount.yaml – RBAC setup
apiVersion: v1
kind: ServiceAccount
metadata:
  name: springboot-app
  namespace: springboot-app
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: springboot-app-role
  namespace: springboot-app
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: springboot-app-rolebinding
  namespace: springboot-app
subjects:
  - kind: ServiceAccount
    name: springboot-app
    namespace: springboot-app
roleRef:
  kind: Role
  name: springboot-app-role
  apiGroup: rbac.authorization.k8s.io
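You can verify the Role grants exactly what you expect by impersonating the service account (your own credentials need impersonation rights, which cluster admins typically have):

# Should print "yes"
kubectl auth can-i get configmaps \
  --as=system:serviceaccount:springboot-app:springboot-app \
  -n springboot-app

# Should print "no" (the Role does not grant access to secrets)
kubectl auth can-i get secrets \
  --as=system:serviceaccount:springboot-app:springboot-app \
  -n springboot-app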
Step 6: Deploy to Kubernetes Cluster
Deploy your application with proper ordering:
# Start local cluster (for development)
minikube start --cpus=4 --memory=8192 --driver=docker
# Or use kind for a lightweight alternative
kind create cluster --name springboot-dev
# Apply manifests in order
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/serviceaccount.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
kubectl apply -f k8s/hpa.yaml
kubectl apply -f k8s/ingress.yaml
# Or apply all at once
kubectl apply -f k8s/
# Watch deployment progress
kubectl -n springboot-app rollout status deployment/springboot-app
# Check pods
kubectl -n springboot-app get pods -w
# View logs
kubectl -n springboot-app logs -f deployment/springboot-app
# Access locally (for minikube)
minikube service springboot-app-service -n springboot-app
# Or port-forward
kubectl -n springboot-app port-forward svc/springboot-app-service 8080:80
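If the rollout succeeds but traffic misbehaves, check that the Service actually selected your pods; an empty endpoints list usually means a label/selector mismatch or failing readiness probes:

# Should list one address per ready pod
kubectl -n springboot-app get endpoints springboot-app-service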
Step 7: Common Operations
Essential commands for managing your deployment:
# Scale manually
kubectl -n springboot-app scale deployment/springboot-app --replicas=5
# Update to new version (triggers rolling update)
kubectl -n springboot-app set image deployment/springboot-app springboot-app=yourusername/springboot-app:1.1.0
# Rollback to previous version
kubectl -n springboot-app rollout undo deployment/springboot-app
# Check rollout history
kubectl -n springboot-app rollout history deployment/springboot-app
# Restart all pods (force re-pull)
kubectl -n springboot-app rollout restart deployment/springboot-app
# Check HPA status
kubectl -n springboot-app get hpa
# Debug a pod
kubectl -n springboot-app exec -it deployment/springboot-app -- /bin/sh
# View resource usage
kubectl -n springboot-app top pods
# Describe pod for events/errors
kubectl -n springboot-app describe pod <pod-name>
Common Mistakes to Avoid
Learn from these frequent deployment pitfalls:
1. Missing or incorrect health probes
# Wrong: using the liveness probe as a startup check (causes crash loops)
livenessProbe:
  initialDelaySeconds: 5  # Too short for Spring Boot!

# Correct: use a startupProbe for slow-starting apps
startupProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 30  # Allows up to 5 minutes to start
2. No resource limits (causes node instability)
# Wrong: no limits defined
containers:
  - name: app
    image: myapp

# Correct: always set both requests and limits
resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "1024Mi"
    cpu: "1000m"
3. Running as root (security vulnerability)
# Wrong: containers run as root by default
spec:
  containers:
    - name: app

# Correct: explicitly run as non-root
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
4. Using latest tag (unpredictable deployments)
# Wrong: never use latest in production
image: myapp:latest

# Correct: use semantic versioning or a commit SHA
image: myapp:1.2.3
image: myapp:abc123def
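A common convention is tagging images with the short Git SHA at build time, which makes every deployment traceable to a commit:

# Tag images with the current commit for traceable deployments
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t yourusername/springboot-app:$GIT_SHA .
docker push yourusername/springboot-app:$GIT_SHA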
5. No graceful shutdown handling
# Wrong: no preStop hook, so the pod can be killed while the load
# balancer is still routing requests to it
terminationGracePeriodSeconds: 30

# Correct: allow time for connection draining
lifecycle:
  preStop:
    exec:
      command: ["sh", "-c", "sleep 10"]  # Wait for load balancer deregistration
terminationGracePeriodSeconds: 60  # Total shutdown budget
Monitoring and Observability
Set up monitoring with Prometheus and Grafana:
# Install Prometheus stack with Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace
# ServiceMonitor for your app
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: springboot-app-monitor
  namespace: springboot-app
spec:
  selector:
    matchLabels:
      app: springboot-app
  endpoints:
    - port: http
      path: /actuator/prometheus
      interval: 15s
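With the kube-prometheus-stack release named prometheus as above, the Grafana service is typically called prometheus-grafana (chart naming conventions can vary, so check with kubectl -n monitoring get svc). Port-forward it to browse the built-in JVM and Spring dashboards:

# Port-forward Grafana (default login for this chart is admin / prom-operator,
# unless overridden in your Helm values)
kubectl -n monitoring port-forward svc/prometheus-grafana 3000:80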
Related
- Spring Boot + PostgreSQL Example: Complete CRUD Guide
- Spring Boot + WebSocket: Build a Real-Time Notification System
- Docker Compose for Local Development: Orchestrating Multi-Container Setups
Final Thoughts
Docker and Kubernetes are essential tools for deploying scalable, containerized Spring Boot applications. With the manifests and patterns in this guide, you can go from local development to production-grade deployment with scalability, resiliency, and zero-downtime updates.
The key to success is understanding that Kubernetes isn’t just about running containers – it’s about declaring your desired state and letting the platform maintain it. Your health probes tell Kubernetes when your app is truly ready, your resource limits ensure fair scheduling, and your HPA configuration enables automatic scaling.
As next steps, consider implementing Helm charts for templated deployments, GitOps with ArgoCD for automated deployments, and distributed tracing with Jaeger or Zipkin for debugging microservices. For secrets management, look into external solutions like HashiCorp Vault or AWS Secrets Manager integrated with Kubernetes through the External Secrets Operator.