DevOps Day 32 — Deploying Your First Application in Kubernetes
Welcome to Day 32 of the DevOps Series!
In previous sessions, we explored Kubernetes architecture, pods, and deployments. Today’s focus is on deploying your first application in Kubernetes, an important step for anyone transitioning from container-based workflows to container orchestration.
This session introduces the essential tools and concepts needed to run applications inside a Kubernetes cluster.
☸️ Why Kubernetes?
Modern applications require systems that can handle:
- Large-scale traffic
- High availability
- Automatic recovery from failures
- Efficient resource utilization
Kubernetes solves these challenges by providing:
✅ Auto-scaling applications
✅ Auto-healing of failed workloads
✅ Cluster-level resource management
✅ Automated deployment and orchestration
This makes Kubernetes the standard platform for running containerized applications in production.
📦 Pods vs Containers
If you are coming from Docker, the biggest conceptual shift is understanding pods.
🧱 Containers
A container is a packaged application with:
- Code
- Dependencies
- Runtime environment
Containers ensure applications run consistently across environments.
☸️ Pods — The Kubernetes Deployment Unit
In Kubernetes, applications are not deployed directly as containers.
Instead, Kubernetes deploys Pods.
A Pod is a wrapper around one or more containers.
Pods provide:
✅ Shared networking
✅ Shared storage
✅ Communication through localhost
This allows tightly coupled services to work together inside the same pod.
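As a sketch of how shared networking works in practice, the manifest below runs two containers in one pod. The pod name, container names, and the busybox loop are illustrative choices, not part of the original lesson; the key point is that the sidecar reaches nginx via localhost because both containers share the pod's network namespace.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo        # hypothetical pod name for illustration
spec:
  containers:
    - name: web
      image: nginx          # serves HTTP on port 80
    - name: sidecar
      image: busybox
      # Both containers share the pod's network namespace,
      # so the sidecar can call nginx on localhost:80.
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 5; done"]
```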
🛠️ Essential Kubernetes Tools
To run Kubernetes locally and interact with clusters, two tools are essential: kubectl and Minikube.
🔧 kubectl
kubectl is the command-line tool used to interact with Kubernetes clusters.
It allows you to:
- Deploy applications
- Manage pods and services
- Inspect cluster resources
- Debug workloads
Example commands include:
kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name>
These commands help monitor and troubleshoot applications running inside the cluster.
🧪 Minikube — Local Kubernetes Cluster
For learning purposes, running Kubernetes in the cloud can become expensive.
Instead, developers use Minikube, which creates a single-node Kubernetes cluster locally.
Benefits of Minikube:
✅ Safe learning environment
✅ No cloud infrastructure cost
✅ Full Kubernetes functionality for testing
✅ Ideal for beginners
Minikube allows you to experiment with Kubernetes without provisioning real cloud resources.
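Assuming Minikube and kubectl are already installed, a typical local workflow looks like the following. These are standard Minikube commands; the exact driver Minikube picks (Docker, VirtualBox, etc.) depends on your machine.

```shell
# Start a single-node Kubernetes cluster locally
minikube start

# Confirm the cluster is healthy
minikube status

# kubectl is automatically configured to talk to the minikube cluster
kubectl get nodes

# Stop the cluster (or delete it entirely) when you are done
minikube stop
minikube delete
```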
🚀 Deploying Your First Application
The process of deploying an application in Kubernetes usually starts with defining a pod configuration file.
Step 1 — Create a Pod Configuration File
A YAML file defines the application container.
Example: pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: first-app
spec:
  containers:
    - name: web-container
      image: nginx
      ports:
        - containerPort: 80
This file tells Kubernetes:
- Which container image to run
- What the pod name should be
- Which ports should be exposed
Step 2 — Deploy the Pod
Use kubectl to deploy the application.
kubectl create -f pod.yaml
Alternatively, kubectl apply -f pod.yaml achieves the same result and can also update the resource later if the file changes. Kubernetes reads the configuration file and creates the pod automatically.
Step 3 — Verify the Deployment
You can check whether the application is running using:
kubectl get pods
For more details:
kubectl get pods -o wide
This command shows the pod status and the node where it is running.
🔍 Debugging Kubernetes Applications
When working with Kubernetes, debugging tools are extremely important.
Useful commands include:
View Pod Details
kubectl describe pod <pod-name>
This shows:
- Pod events
- Container status
- Deployment issues
View Application Logs
kubectl logs <pod-name>
Logs help identify runtime errors or configuration issues.
Check Running Pods
kubectl get pods
This confirms whether the pod is running successfully.
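Once the pod is Running, you can also reach the nginx container from your own machine with port-forwarding. The local port 8080 below is an arbitrary choice; any free port works.

```shell
# Forward local port 8080 to port 80 inside the first-app pod
kubectl port-forward pod/first-app 8080:80

# In another terminal, request the nginx welcome page
curl http://localhost:8080
```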
⚠️ Limitations of Pods
While pods are useful for learning and simple deployments, they have limitations.
Pods do not provide:
- Auto-healing
- Auto-scaling
- Rolling updates
- High availability
For production systems, Kubernetes uses Deployments, which manage pods automatically.
This will be explored in the next session.
🧠 Key Takeaways — Day 32
Today we learned:
✅ Why Kubernetes is used for container orchestration
✅ Difference between containers and pods
✅ How pods wrap containers in Kubernetes
✅ Installing and using kubectl
✅ Running a local Kubernetes cluster with Minikube
✅ Deploying the first application using a YAML file
✅ Debugging applications using kubectl commands

💡 Up Next — Day 33
We will explore Kubernetes Deployments, which enable:
- Auto-healing
- Auto-scaling
- Rolling updates
- Production-grade application management
Deployments are the foundation of running reliable applications in Kubernetes.
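As a preview of next session's topic, a minimal Deployment manifest looks like the sketch below. The deployment name and label values are illustrative; the template section reuses the same nginx container from today's pod example.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-app-deployment   # hypothetical name for illustration
spec:
  replicas: 3                  # Kubernetes keeps three pod copies running (auto-healing)
  selector:
    matchLabels:
      app: first-app
  template:                    # pod template, managed by the Deployment
    metadata:
      labels:
        app: first-app
    spec:
      containers:
        - name: web-container
          image: nginx
          ports:
            - containerPort: 80
```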