K3d is a lightweight in-Docker Kubernetes solution, ideal for rapid iteration, testing, and learning without heavy local resources or a complicated installation procedure.

🛠️ Install K3d
Follow the detailed Installation Guide to get started.
On Mac:
brew install k3d
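If you are not on a Mac, the upstream install script from the k3d project is an alternative (this pipes a remote script into your shell, so review it first):

```shell
# Install the latest k3d release using the official install script
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# Confirm the binary is on your PATH
k3d version
```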
If you already have a `KUBECONFIG` set, back it up so you can restore it later:
echo $KUBECONFIG
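If your kubeconfig lives at the default location, a simple copy is enough (path assumed; adjust if yours differs):

```shell
# Keep a backup copy of the current kubeconfig for later restoration
cp ~/.kube/config ~/.kube/config.backup
```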
Create a cluster named my-cluster:
k3d cluster create my-cluster -p "8089:80@loadbalancer" --agents 3
Here, `-p "8089:80@loadbalancer"` maps external port `8089` to internal port `80` on the cluster's load balancer, and `--agents 3` creates 3 additional worker nodes.
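Once the command finishes, you can sanity-check the cluster (with `--agents 3` you should see one server node and three agents, all Ready after a short wait):

```shell
# List k3d clusters and show the Kubernetes nodes they expose
k3d cluster list
kubectl get nodes
```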
Once created, the cluster configuration will be saved as ./my-cluster-kubeconfig.yaml.
Update the kubeconfig to point to your new cluster:
export KUBECONFIG=./my-cluster-kubeconfig.yaml
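kubectl should now talk to the new cluster; k3d prefixes its context names with `k3d-`, so you can verify with:

```shell
# The current context should be the freshly created k3d cluster (k3d-my-cluster)
kubectl config current-context
```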
Let's use the following manifest (saved as ./my-api.yml so we can apply it later):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app: api-service
spec:
  ports:
    - port: 5000
      targetPort: 5000
      protocol: TCP
  selector:
    app: api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  selector:
    matchLabels:
      app: api
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: joelmatajunk/my-api
          ports:
            - containerPort: 5000
              name: api-port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: default
spec:
  ingressClassName: traefik
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 5000
```
“What does the above mean?”
To explain this Kubernetes deployment, let's imagine a restaurant setup:
The Chef and Kitchen Staff 🍳 - Deployment (`api-deployment`):
The Deployment is like the kitchen staff and chef in a restaurant.
Each container in the deployment is a chef responsible for cooking a specific dish (in this case, the “API”).
The replicas specify how many chefs are cooking the same dish. Right now, we have only 1 chef (`replicas: 1`), but we can hire more if needed (e.g., setting replicas to 2 or more).
The `joelmatajunk/my-api` image is like a recipe that each chef uses to prepare the dish: this is the Docker image that tells each container (chef) what ingredients and steps to follow.
The Waiter 💁‍♂️ - Service (`api-service`):
The Service is like a waiter who takes orders from customers and delivers them to the chefs in the kitchen.
The Service knows which chefs are responsible for a particular dish (using the selector field app: api) and forwards customer requests to the right chef.
When a customer asks for a dish, the waiter directs the order to any of the chefs cooking that dish.
In the YAML, the Service specifies port 5000. This is like saying, “Hey waiter, when a customer wants to order a dish on port 5000, go to the chef cooking on target port 5000.”
The Front Door or the Hostess 🚪 - Ingress (`api-ingress`):
The Ingress is like the front door or the hostess desk of the restaurant.
When a customer comes to the restaurant and asks to be seated, the hostess (Ingress) checks where to send them. Depending on the order, the hostess knows to forward customers to the correct waiter (Service).
In this case, when a customer comes to the root path (`/`) of the restaurant (i.e., the `/` path in the URL), the hostess sends the customer to the api-service on port 5000.
Here’s a shorter version:
Chef (Container): Cooks the dishes using a specified recipe (Docker image).
Waiter (Service): Takes the orders from customers and passes them to the appropriate chef.
Hostess (Ingress): Directs customers to the right waiter depending on the order.
So, if a customer visits the restaurant by typing the address (e.g., http://restaurant.com/), the hostess sees that they want something from the menu served at the `/` path and directs them to the waiter (`api-service`). The waiter then sends the request to the chef (`api-deployment`), who serves the dish on port 5000.
Enough talking, let's get back to it. Apply your Kubernetes YAML manifest to deploy the service:
kubectl apply -f ./my-api.yml
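Optionally, you can wait for the rollout to finish rather than eyeballing pod states:

```shell
# Block until the Deployment's pods have been rolled out successfully
kubectl rollout status deployment/api-deployment
```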
🔍 Check the Pod status
Monitor your pod status and wait until the pod reaches `Running`:
kubectl get pods
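If you'd rather not poll manually, `kubectl wait` blocks until the pod is Ready:

```shell
# Wait up to 60 seconds for any pod labelled app=api to become Ready
kubectl wait --for=condition=Ready pod -l app=api --timeout=60s
```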
Verify if the deployment works by making an HTTP request:
curl http://localhost:8089/env
You should get a reply with a list of the Pod environment variables.
Delete the cluster to free up resources:
k3d cluster delete my-cluster
(optional) Restore previous KUBECONFIG:
export KUBECONFIG=~/.kube/config
A couple of bonus debugging commands, handy while the cluster is still running. Inspect the Pod details and events:
kubectl describe pod/$(kubectl get pods | awk '{print $1}' | grep api)
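The `$(kubectl get pods | awk '{print $1}' | grep api)` pipeline simply pulls the pod's name (the first column) out of the listing; here is what it does against sample output (the pod name below is made up for illustration):

```shell
# Simulated `kubectl get pods` output: awk keeps the first column of every
# line, grep then filters out the header and keeps only the api pod's name
printf 'NAME                             READY   STATUS    RESTARTS   AGE\napi-deployment-7c9f8d6b5-x2k4q   1/1     Running   0          1m\n' \
  | awk '{print $1}' | grep api
# prints: api-deployment-7c9f8d6b5-x2k4q
```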
Explore the running pod using an interactive shell:
kubectl exec -it $(kubectl get pods | awk '{print $1}' | grep api) -- bash