
Introduction
Modern applications run in containers — lightweight, isolated environments that bundle everything an app needs. But when you have hundreds of containers running across multiple servers, managing them manually becomes impossible. That’s where Kubernetes comes in. It’s an open-source platform that automates deployment, scaling, and management of containerised applications. In this post, you’ll learn the basics of Kubernetes — what it is, how it works, and how to deploy your first app.
What Is Kubernetes?
Kubernetes (K8s) is a container orchestration system originally developed at Google and now maintained by the Cloud Native Computing Foundation (CNCF). It helps developers run containers across clusters of machines, providing reliability, scalability, and rolling updates with minimal downtime.
Key Features
- Automatic scaling: Kubernetes adjusts container replicas based on demand.
- Self-healing: If a container fails, Kubernetes automatically replaces it.
- Load balancing: It distributes traffic across running pods evenly.
- Rolling updates: Deploy new versions without downtime.
In short, Kubernetes helps you run applications in production without worrying about individual servers or containers.
Core Components of Kubernetes
To understand how it works, let’s break it down into key parts:
- Pod: The smallest unit in Kubernetes — a single running instance of your app (often one container per pod).
- Node: A physical or virtual machine that runs pods.
- Deployment: Defines how many pod replicas to run and handles updates.
- Service: Exposes your pods to the network and provides stable endpoints.
- Namespace: Logical grouping to isolate workloads within a cluster.
These components work together to make your apps resilient and easy to manage.
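In practice, these objects are usually described declaratively in YAML and applied with kubectl apply. A minimal sketch of a Deployment manifest tying the pieces together (the name and labels here are illustrative, not from the tutorial below):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2             # how many pod copies to keep running
  selector:
    matchLabels:
      app: my-app         # must match the pod template's labels
  template:               # pod template: what each replica runs
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.27
```

Kubernetes continuously compares this desired state with reality and creates or replaces pods until they match.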
Setting Up Kubernetes
You can try Kubernetes locally with Minikube or on the cloud via Google Kubernetes Engine (GKE), Amazon EKS, or Azure AKS.
Install Minikube
brew install minikube
minikube start
The brew command assumes macOS (Linux and Windows installers are available in the Minikube docs); minikube start then creates a local cluster with one node.
Verify Setup
kubectl get nodes
You should see one node listed as “Ready.”
Deploying Your First Application
Let’s deploy a simple NGINX web server.
Step 1: Create a Deployment
kubectl create deployment nginx-deployment --image=nginx
Step 2: Expose It as a Service
kubectl expose deployment nginx-deployment --type=NodePort --port=80
Step 3: Access the App
Run:
minikube service nginx-deployment
Kubernetes launches a pod running NGINX and makes it available on your local network.
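The same result can be expressed declaratively instead of with imperative commands. A sketch of the equivalent NodePort Service (saved to a file and applied with kubectl apply -f service.yaml; the selector assumes the app=nginx-deployment label that kubectl create deployment puts on its pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  selector:
    app: nginx-deployment   # route traffic to pods carrying this label
  ports:
    - port: 80              # port the Service listens on
      targetPort: 80        # port the NGINX container serves on
```

Keeping manifests like this in files makes the setup reproducible and reviewable, which the best practices below build on.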
Managing and Scaling
You can scale your app with a single command:
kubectl scale deployment nginx-deployment --replicas=3
Kubernetes automatically creates additional pods and balances traffic between them.
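Scaling can also be automated rather than done by hand. A minimal sketch of a HorizontalPodAutoscaler that targets CPU utilisation (the name nginx-hpa is illustrative; this requires a metrics source such as the Minikube metrics-server add-on):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:           # which object to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```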
To update the app, set a new image version (pin a specific tag such as nginx:1.27 rather than latest, so rollouts are explicit and reproducible):
kubectl set image deployment/nginx-deployment nginx=nginx:1.27
Kubernetes performs a rolling update, replacing pods gradually to avoid downtime.
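The pace of that rolling update can be tuned in the Deployment spec. A sketch of the relevant fields (values are illustrative):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod above the desired count
      maxUnavailable: 0    # never drop below the desired count mid-update
```

You can watch a rollout progress with kubectl rollout status deployment/nginx-deployment, and revert a bad release with kubectl rollout undo deployment/nginx-deployment.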
Monitoring Your Cluster
Use these commands to track performance and activity:
- kubectl get pods — list all pods.
- kubectl describe pod <pod-name> — inspect a specific pod.
- kubectl logs <pod-name> — view application logs.
You can integrate tools like Prometheus and Grafana for advanced monitoring and alerting.
Best Practices
- Keep configurations in version control using YAML files.
- Use namespaces for separation between environments (dev, staging, prod).
- Apply resource limits (CPU, memory) to avoid overloads.
- Automate deployments using CI/CD tools like GitLab CI/CD or GitHub Actions.
- Regularly back up your cluster state and configuration.
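For the resource-limits point above, requests and limits are set per container in the pod template. A sketch of the relevant fragment (the numbers are illustrative starting points, not recommendations):

```yaml
containers:
  - name: nginx
    image: nginx:1.27
    resources:
      requests:            # guaranteed minimum the scheduler reserves
        cpu: 100m
        memory: 128Mi
      limits:              # hard cap; exceeding the memory limit gets the container killed
        cpu: 500m
        memory: 256Mi
```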
Final Thoughts
Kubernetes simplifies running and scaling containerised applications. Once you master its basics — pods, deployments, and services — you can automate complex workflows and scale effortlessly. It’s the foundation for modern DevOps and cloud-native systems. To extend your setup with monitoring, check out Monitoring & Logging Microservices Using Prometheus and Grafana. For deeper documentation, visit the Kubernetes Official Docs.
