GitHub Classroom Link: https://classroom.github.com/a/BEW1HoEz
This is another stand-alone lab and does not rely on Jarvis.
In this lab, we’ll be working with Kubernetes and Helm to deploy microservices. If you recall from lecture, Docker Compose is great for local development, but when we want to run applications in production across multiple machines in a manageable way, we need Kubernetes. And while we could define all our Kubernetes YAML files manually, that quickly becomes a mess when you have multiple services and multiple environments. That’s where Helm comes in.
Prerequisites
You’ll need to install the following:
- kubectl (Kubernetes command-line tool)
- Minikube
- Helm (if you have Homebrew: brew install helm)
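kubectl and Minikube are also available through Homebrew if that's your package manager (on other platforms, follow each tool's official install guide). The formula names below are the usual Homebrew ones; kubectl is an alias for the kubernetes-cli formula:
# install all three tools with homebrew (macOS/Linux)
brew install kubectl minikube helm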
Provided Code
We have two simple FastAPI microservices in this repository:
- greeting-service: A simple service that greets people by name
  - GET /greet/{name} → returns {"message": "Hello, {name}!"}
- name-service: A service that picks a random name and gets a greeting for it
  - GET /random → picks a random name, calls greeting-service, returns both
The name-service depends on the greeting-service. This is a simplified version of what you might see in a real microservices architecture - services calling other services. Don’t worry about reading the code or Dockerfiles; knowing these two endpoints is enough.
Getting Started
First, make sure you have the required tools installed (see above):
# Check if minikube is installed
minikube version
# Check if kubectl is installed
kubectl version
# Check if helm is installed
helm version
Once you have everything installed, let’s start up a local Kubernetes cluster:
minikube start
This will create a single-node Kubernetes cluster running on your machine. You can verify it’s running with:
kubectl get nodes
You should see one node called minikube with a STATUS of “Ready”.
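If you'd like a bit more detail than a single Ready node, these read-only commands show where the cluster's API server is listening and the health of minikube's components:
# show the API server address and core cluster services
kubectl cluster-info
# show the status of the minikube node's components
minikube status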
Building and Loading Images
Before we can deploy our services to Kubernetes, we need to build the Docker images and make them available to minikube. Normally, you’d push images to a container registry (like Docker Hub or Google Container Registry), but for local development, we can load images directly into minikube.
# build greeting service
docker build -t greeting-service:latest ./greeting-service
# build name service
docker build -t name-service:latest ./name-service
Now let’s load these images into minikube so Kubernetes can use them (this might take a minute or so):
# load docker images into minikube
minikube image load greeting-service:latest
minikube image load name-service:latest
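To double-check that both images actually made it into minikube's image cache, you can list the images the cluster can see (depending on your minikube version, the names may be shown with a docker.io/library/ prefix):
# list images available inside minikube; both services should appear
minikube image ls | grep -E "greeting-service|name-service"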
Creating Your Helm Chart
We have two services to deploy, and they’re pretty similar - they both run a FastAPI app on port 8080, they both need a Deployment and a Service in Kubernetes. Instead of writing separate Kubernetes manifests for each one, let’s create a generic Helm chart that can deploy either service based on configuration values.
Helm provides a command to scaffold a new chart:
helm create microservice
This creates a directory called microservice/ with a bunch of files:
microservice/
├── Chart.yaml # Metadata about the chart
├── charts/ # Dependencies (we won't use this)
├── templates/ # Kubernetes manifest templates
│ ├── deployment.yaml
│ ├── service.yaml
│ ├── ingress.yaml
│ └── ...
└── values.yaml # Default configuration values
The generated chart includes some templates we don’t need for this lab. Let’s remove them to keep things simple:
rm -r microservice/charts/
rm microservice/templates/ingress.yaml
rm microservice/templates/hpa.yaml
rm microservice/templates/serviceaccount.yaml
Creating Service-Specific Values
If you open microservice/values.yaml, you’ll see a lot of configuration for an nginx deployment. We’re not going to use this default values file directly. Instead, we’ll create a values/ directory where we’ll keep service-specific configuration files. This is a cleaner approach - the chart stays generic, and all our configuration lives in one place. You can delete microservice/values.yaml if you want to keep things tidy (if you do, you may also want to remove microservice/templates/NOTES.txt, since it references values that only exist in that default file).
The whole point of Helm is that we can use the same chart with different values to deploy different services. Instead of modifying the default microservice/values.yaml, we’ll create separate values files for each service.
First, create a values/ directory at the root of the repository:
mkdir values
Now let’s create values files for each of our services. The structure we’ll use can be applied to any microservice - just change the image name, replica count, and environment variables as needed.
Create a file values/greeting.yaml and put the following in it:
# Values for deploying greeting-service
replicaCount: 2

image:
  repository: greeting-service
  tag: latest
  pullPolicy: Never  # Use local images from minikube

service:
  type: ClusterIP
  port: 8080

resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
Create another file values/name.yaml and put the following in it:
# Values for deploying name-service
replicaCount: 1

image:
  repository: name-service
  tag: latest
  pullPolicy: Never  # Use local images from minikube

service:
  type: ClusterIP
  port: 8080

env:
  - name: GREETING_SERVICE_URL
    value: http://greeting-service-microservice:8080

resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
You might already have an idea of how we’ll be using these values, but we’ll come back to these in a bit.
Updating the Deployment Template
Now we need to update the Deployment template to use our values. If you recall, a Deployment in Kubernetes manages a group of identical pods - in our case, the replicas of one of our services. Open microservice/templates/deployment.yaml and replace its contents with:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "microservice.fullname" . }}
  labels:
    {{- include "microservice.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "microservice.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "microservice.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        {{- if .Values.env }}
        env:
        {{- range .Values.env }}
        - name: {{ .name }}
          value: {{ .value | quote }}
        {{- end }}
        {{- end }}
        resources:
          {{- toYaml .Values.resources | nindent 10 }}
Understanding Helm’s Templating Language
This template uses Helm’s templating language (based on Go templates) to inject values and generate Kubernetes YAML dynamically. Let’s break down what’s happening here, because there’s a lot of new syntax!
Accessing Values: .Values and .Chart
When you see {{ .Values.something }}, Helm is pulling data from the values file you provide to it. For example:
- {{ .Values.replicaCount }} → gets the replicaCount field
- {{ .Values.image.repository }} → navigates to image: then repository:
- {{ .Values.service.port }} → gets the port from the service section
Similarly, {{ .Chart.Name }} pulls metadata from the Chart.yaml file (which was generated by helm create). This file contains information like the chart name, version, and description.
The . at the beginning represents the “root context” - it’s how you access all the data Helm makes available to your templates.
The include Keyword
You’ll see lines like:
{{- include "microservice.fullname" . | nindent 4 }}
The include keyword lets you call reusable template snippets that are defined in _helpers.tpl (another file that helm create generated). These helpers generate consistent names and labels across all your resources. For example:
"microservice.fullname"generates a full name for your resources (likegreeting-service-microservice)"microservice.labels"generates a standard set of labels"microservice.selectorLabels"generates labels used for pod selectors
The . after the template name passes the root context to the helper, so it can access .Values, .Chart, etc.
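If you're curious what one of these helpers actually looks like, it's just a named template defined with define ... end in _helpers.tpl. You can print the fullname helper straight from the generated file (the exact contents depend on your Helm version):
# show the fullname helper that helm create generated
grep -A 10 'define "microservice.fullname"' microservice/templates/_helpers.tpl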
The nindent Function: Formatting YAML
YAML is whitespace-sensitive, so indentation matters! The nindent function (short for “newline + indent”) helps format multi-line output correctly.
labels:
  {{- include "microservice.labels" . | nindent 4 }}
This means: “Take the output of the include, add a newline, then indent it 4 spaces.” Without nindent, the labels would be on the wrong indentation level and the YAML would be invalid.
The | is the pipe operator - it passes the output of the left side to the function on the right side, just like Unix pipes!
The toYaml Function: Converting Data
Look at this line:
resources:
  {{- toYaml .Values.resources | nindent 10 }}
The toYaml function takes a complex data structure (like the entire resources: section from values.yaml) and converts it to properly formatted YAML. This is useful when you want to copy an entire block of configuration without manually templating each field.
So if values.yaml has:
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
Then toYaml .Values.resources converts that whole structure to YAML text, and nindent 10 indents it properly.
Conditional Statements: if
The {{- if .Values.env }} statement is a conditional - it only renders the enclosed content if env is defined and non-empty:
{{- if .Values.env }}
env:
{{- range .Values.env }}
- name: {{ .name }}
  value: {{ .value | quote }}
{{- end }}
{{- end }}
If values.yaml has env: [] (an empty list), this entire section won’t be rendered at all. This is useful because we don’t want an empty env: section in our Deployment when there are no environment variables to inject.
The {{- (with the dash) trims whitespace to keep the generated YAML clean.
Looping: range
The {{- range .Values.env }} statement loops over each item in the env list:
{{- range .Values.env }}
- name: {{ .name }}
  value: {{ .value | quote }}
{{- end }}
For each item in the loop, .name and .value refer to the fields of the current item (not the root context anymore!). So if values.yaml has:
env:
  - name: GREETING_SERVICE_URL
    value: http://greeting-service:8080
  - name: DEBUG
    value: "true"
The range loop will generate:
env:
  - name: GREETING_SERVICE_URL
    value: "http://greeting-service:8080"
  - name: DEBUG
    value: "true"
The | quote function ensures the value is properly quoted as a string in YAML.
Pretty powerful, right? This is what makes Helm charts reusable - you can use conditionals and loops to handle different configurations without duplicating templates!
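A nice way to see all of this in action is helm template, which renders the chart locally and prints the resulting Kubernetes YAML without installing anything. Try it with each values file and compare the output - for example, the env: section should only show up in the name-service render:
# render the chart with each values file and inspect the generated YAML
helm template greeting-service ./microservice -f values/greeting.yaml
helm template name-service ./microservice -f values/name.yaml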
Updating the Service Template
The Service template is simpler. Open microservice/templates/service.yaml and replace its contents with (this might be exactly the same as what was generated by default):
apiVersion: v1
kind: Service
metadata:
  name: {{ include "microservice.fullname" . }}
  labels:
    {{- include "microservice.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "microservice.selectorLabels" . | nindent 4 }}
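Before deploying anything, it's worth running Helm's built-in linter to catch templating and YAML mistakes early - it's much faster feedback than a failed install:
# lint the chart against each values file
helm lint ./microservice -f values/greeting.yaml
helm lint ./microservice -f values/name.yaml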
Deploying the Greeting Service
Now for the moment of truth - let’s deploy our first service! Refer back to values/greeting.yaml that we made a bit ago. Notice we’re setting replicaCount: 2, so Kubernetes will create 2 pods running the greeting service (take a look at how this value is used in the Deployment template). This demonstrates how Kubernetes handles scaling automatically. The pullPolicy: Never tells Kubernetes to use local images instead of trying to pull from a registry.
Now let’s install the chart:
helm install greeting-service ./microservice -f values/greeting.yaml
This command tells Helm to:
- Create a release named greeting-service
- Use the chart in the ./microservice directory
- Override default values with those in values/greeting.yaml
You should see output like:
NAME: greeting-service
LAST DEPLOYED: ...
NAMESPACE: default
STATUS: deployed
Let’s verify it’s running:
kubectl get pods
You should see 2 pods with names like greeting-service-microservice-xxxxx with STATUS “Running”. If they’re still “ContainerCreating”, wait a few seconds and run the command again.
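If a pod gets stuck (for example in ContainerCreating, ErrImageNeverPull, or CrashLoopBackOff), these are the standard commands for digging in - substitute one of your actual pod names from kubectl get pods:
# show a pod's events and status details (scheduling, image pulls, restarts)
kubectl describe pod <pod-name>
# show the container's stdout/stderr
kubectl logs <pod-name>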
Also check the service and deployment:
kubectl get services
You should see a service called greeting-service-microservice with TYPE “ClusterIP” and PORT 8080.
kubectl get deployments
You should see a deployment called greeting-service-microservice with 2/2 ready pods.
Testing the Greeting Service
To test our service, we need to access it. Since it’s using ClusterIP (internal-only), we’ll use port-forwarding. We need this because the Kubernetes cluster has its own internal network that’s separate from your local machine’s network - similar to how docker-compose containers have their own bridge network. Port-forwarding creates a tunnel from your local machine into the cluster so you can reach the service.
kubectl port-forward service/greeting-service-microservice 8080:8080
This forwards port 8080 on your local machine to port 8080 of the greeting service. Leave this running and open a new terminal. Now try:
curl http://localhost:8080/greet/Alice
You should get back:
{"message":"Hello, Alice!","service":"greeting-service"}
Try different names! When you’re done testing, press Ctrl+C to stop the port-forward.
Deploying the Name Service
Now let’s deploy the name-service. This service needs to call the greeting-service, so we need to tell it where to find it. Remember that in Kubernetes, services can discover each other by name - just like in docker-compose where you could reference services by their service name (like db or auth-service)!
Take a look at values/name.yaml and notice the GREETING_SERVICE_URL environment variable - it points to greeting-service-microservice, which is the name of the Kubernetes Service we created for greeting-service. Kubernetes has a built-in DNS that allows services to find each other by name, similar to how docker-compose creates a network where services can talk to each other using their service names as hostnames. The difference is that in Kubernetes, we’re using the Service resource name rather than just the container name.
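If you want to see what that Service name actually resolves to, Kubernetes keeps an Endpoints object for each Service listing the pod IPs behind it. Checking it is a quick way to confirm that GREETING_SERVICE_URL really points at the greeting pods:
# list the pod IPs backing the greeting Service (should show two addresses)
kubectl get endpoints greeting-service-microservice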
Now deploy it:
helm install name-service ./microservice -f values/name.yaml
Verify it’s running:
kubectl get pods
You should now see 3 pods total - 2 for greeting-service and 1 for name-service.
Testing the Full System
Let’s test that name-service can call greeting-service. Set up port-forwarding for name-service:
kubectl port-forward service/name-service-microservice 8081:8080
Note we’re using port 8081 locally (since 8080 might still be in use). In a new terminal, try executing this a couple different times:
curl http://localhost:8081/random
You should get back something like:
{"name":"Charlie","greeting":"Hello, Charlie!","service":"name-service"}
The name will be random each time! This means:
- name-service picked a random name
- name-service called greeting-service over the Kubernetes network
- greeting-service returned a greeting
- name-service returned the combined result
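You can also watch this happen from inside the cluster by streaming the name-service logs while you send a few requests (exactly what gets logged depends on the service code, but you should see activity for each /random call):
# stream logs from the name-service pod(s)
kubectl logs deployment/name-service-microservice -f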
Helm Operations
Now that we have services running, let’s practice some common Helm operations.
Viewing Releases
List all Helm releases:
helm list
You should see both greeting-service and name-service.
Checking Release Status
Get detailed status of a release:
helm status greeting-service
Viewing Values
See what values were used for a release:
helm get values greeting-service
This shows only the values you overrode. To see all values, including defaults:
helm get values greeting-service --all
Upgrading a Release
Let’s scale up the greeting-service. Edit values/greeting.yaml and change replicaCount to 3:
replicaCount: 3 # changed from 2
Now upgrade the release:
helm upgrade greeting-service ./microservice -f values/greeting.yaml
Check the pods:
kubectl get pods
You should see a new pod being created (or already created) for a total of 3 greeting service pods.
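If you'd rather wait for the rollout to finish than repeatedly run kubectl get pods, kubectl has a built-in command for exactly that:
# block until the upgraded Deployment has all 3 replicas available
kubectl rollout status deployment/greeting-service-microservice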
Viewing Release History
Helm keeps track of every release:
helm history greeting-service
You should see two revisions - the initial install and the upgrade.
Rolling Back
Made a mistake? You can rollback to a previous revision:
helm rollback greeting-service 1
This rolls back to revision 1 (the initial deployment with 2 replicas). Verify:
kubectl get pods
You should see it scaling back down to 2 greeting-service pods.
Check the history again:
helm history greeting-service
Notice there’s now a revision 3, which is the rollback. Helm tracks everything!
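helm get values also accepts a --revision flag, which is handy for checking what configuration a given revision used - for example, to confirm the rollback really restored the 2-replica values:
# values that revision 2 (the 3-replica upgrade) was installed with
helm get values greeting-service --revision 2
# values in use now, after the rollback (should match revision 1)
helm get values greeting-service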
Cleaning Up
When you’re done, you can uninstall the releases:
helm uninstall greeting-service
helm uninstall name-service
Verify everything is cleaned up:
kubectl get pods
kubectl get deployments
kubectl get services
You should only see the default kubernetes service.
To stop minikube:
minikube stop
What You Learned
In this lab, you:
- Created a generic Helm chart that can deploy multiple microservices
- Used values files to customize deployments without changing the templates
- Deployed services to a Kubernetes cluster running in minikube
- Practiced service discovery and inter-service communication in Kubernetes
- Used Helm operations like install, upgrade, rollback, and uninstall
- Saw how Helm tracks release history for easy rollbacks
This is exactly how you’d deploy microservices in production, just with more services and more complex configurations. The key insight is that Helm lets you reuse charts across services and environments - the same chart can deploy to dev, staging, and prod with different values files.
We’ve provided a QUICK_REFERNCE.md that you can use in the future to recall which Kubernetes, Helm, and minikube commands we used.
Submission
Simply push your changes to your GitHub classroom repository. These changes should be:
- The values/ folder with the values files for the name and greeting services
- The microservice/ folder with the Helm templates we created