Write Helm charts for a Python Flask app

Augustas Berneckas
12 min read · Apr 27, 2021

This post is part of the series Prepare and Deploy python app to Kubernetes

👈 Previous post: Write Kubernetes manifests for python flask app

At the end of this series, we will have a fully working Flask app in Kubernetes. The app we are using can be found here: https://github.com/brnck/k8s-python-demo-app/tree/manifests

Prerequisites

We are using minikube to deploy our application to Kubernetes. If you are not familiar with it, head over to this post to learn more, as we are not going to cover topics such as how to use Minikube here.

Moving forward

Until this moment, we have containerized the application, written Kubernetes manifests, and deployed it to the Minikube cluster. As time goes by, we decide to deploy our application to a production Kubernetes cluster, and maybe to a staging cluster as well. Every cluster (Minikube, staging, production) is a little different in terms of resources and the load it needs to be able to handle. For example, you may want to deploy only one web pod to your Minikube and staging clusters, but it must be scaled to 10 in production. Also, Gunicorn must boot up 10 workers instead of 2.

You have probably noticed that all these changes require modifying the Kubernetes manifests, hence three different versions of the manifests would have to be maintained. That is where templating comes in. You can turn hardcoded values into placeholders and have them replaced with the values you provide when you deploy the application. Some well-known tools are Helm and Kustomize.

We are going to concentrate on helm here.

What is helm?

While Helm positions itself as a package manager (and indeed, Redis, Sentry, and similar environments are usually only one helm install <...> command away), it is also a very neat tool when it comes to deploying your own application across different environments. Not only does Helm template your manifests, it also helps with rollbacks. Bad code accidentally slipped into production? Execute helm rollback and you can calmly investigate the issue without affecting your clients.
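
As a taste of what that looks like, a rollback is just two commands (the release name and revision number below are hypothetical):

# List the revisions Helm has recorded for the release
helm history my-release
# Roll the release back to revision 3
helm rollback my-release 3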

Whenever you are thinking about deploying some third-party service to Kubernetes, head over to the Helm charts hub, where you will most likely find ready-made helm-charts and instructions on how to deploy that service to Kubernetes.

We are not going to deploy third-party helm-charts in this post. Instead, we will write our own. For more information on how to deploy public helm-charts, check the Helm quickstart guide.
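
To give an idea of how simple that usually is, installing something like Redis from the public Bitnami repository typically boils down to a few commands (the release name my-redis is just an example):

# Register the Bitnami chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# Install the Redis chart as a release named "my-redis"
helm install my-redis bitnami/redis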

First things first

Create a directory for helm-charts according to the Charts guide. Note that we are not going to use all the features that Helm provides. Instead, we will create a basic chart that only fulfills our goal: deploying differently configured applications across environments. Start by creating the initial files and folders:

mkdir -p helm-charts/templates
touch helm-charts/Chart.yaml
touch helm-charts/values.yaml

Add the helm-charts directory to the .dockerignore file.
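
The entry itself is just the directory name, which keeps the charts out of the Docker build context:

# .dockerignore
helm-charts/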

The templates directory is where we are going to put the templated manifests. Helm will combine them with the values provided in the values.yaml file and generate valid Kubernetes manifests. Chart.yaml acts as a metadata file: it stores information about the chart, such as its name, version, and so on.

Finally, let’s copy files from k8s-manifests/ to helm-charts/templates/:

cp k8s-manifests/* helm-charts/templates/

At this moment, the helm-charts directory should look like this:

ls helm-charts/*
helm-charts/Chart.yaml helm-charts/values.yaml
helm-charts/templates:
deployment.yaml ingress.yaml job.yaml namespace.yaml service.yaml

Chart.yaml file

According to the documentation, Chart.yaml is required for a chart. It must contain a few required fields:

apiVersion: The chart API version (required)
name: The name of the chart (required)
version: A SemVer 2 version (required)

apiVersion should be v1 or v2 depending on the Helm version. Since we are using Helm 3, it should be v2. As for the name, we will use demo-python-app. Every chart must also have a version number. It might not matter much if you are not going to publish your chart; however, since this will be our first chart version, we can set it to 1.0.0.

This is how Chart.yaml should look after the changes:

apiVersion: v2
name: demo-python-app
version: 1.0.0

Templating the manifests

Helm uses Go templates for templating resource files. If you are not familiar with them, I strongly recommend reading up on them first; otherwise, it might be hard to follow what we will be doing here.
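
To give a minimal taste of the syntax before we dive in, these are the constructs we will rely on the most (illustrative snippets, not yet part of our chart):

{{ .Release.Name }}            # a built-in object: the name of the release
{{ .Values.web.replicas }}     # a value supplied through values.yaml
{{ quote "hello" }}            # a function call: wraps the value in double quotes
{{- range .Values.web.args }}  # loops over a list; the dash trims surrounding whitespace
- {{ . }}                      # "." is the current list element inside the loop
{{- end }}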

As you might remember from the previous guide, namespace.yaml creates a Kubernetes namespace for us. Luckily, Helm can create the namespace itself if we add a flag to the install or upgrade command:

helm install --help | grep namespace
--create-namespace create the release namespace if not present
-n, --namespace string namespace scope for this request
helm upgrade --help | grep namespace
--create-namespace if --install is set, create the release namespace if not present
-n, --namespace string namespace scope for this request

We can safely remove the helm-charts/templates/namespace.yaml file:

rm helm-charts/templates/namespace.yaml

As for the other files: we are not going to template everything we could, as the main goal of this guide is to understand what Helm is and what it does.

Templating deployment.yaml

Let’s start by defining what values we want to be able to change dynamically. I would like to:

  • Use <release_name>-web as the Deployment resource name;
  • Provide the namespace on deploy (passed through the --namespace flag when installing/upgrading the helm-chart);
  • Provide a different number of replicas;
  • Use <release_name>-web as the container name;
  • Provide a different image name and tag;
  • Provide different args;
  • Provide different resources (requests and limits) for the container.

Start by changing hardcoded values to placeholders in the metadata key:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
  namespace: {{ .Release.Namespace }}
<...>

Moving to .spec, let's make replicas configurable:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
  namespace: {{ .Release.Namespace }}
spec:
  replicas: {{ .Values.web.replicas }}

You have probably noticed that .Release was used in the metadata, while for the replicas we used .Values. Both of these are built-in objects that Helm passes into a template. The difference between them is that Release describes the release itself, while Values holds the values passed into the template from the values.yaml file. You can find more info about all the built-in objects here.

This is how the deployment file looks after all the values have been replaced with placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
  namespace: {{ .Release.Namespace }}
spec:
  replicas: {{ .Values.web.replicas }}
  selector:
    matchLabels:
      app: python-demo-app
      role: web
  template:
    metadata:
      labels:
        app: python-demo-app
        role: web
    spec:
      securityContext:
        runAsGroup: 1000
        runAsUser: 1000
      containers:
        - name: {{ .Release.Name }}-web
          image: {{ .Values.web.image }}:{{ .Values.web.tag }}
          args:
            {{- range .Values.web.args }}
            - {{ quote . }}
            {{- end }}
          ports:
            - name: gunicorn
              containerPort: 8000
          resources: {{ toYaml .Values.web.resources | nindent 12 }}
          readinessProbe:
            initialDelaySeconds: 10
            httpGet:
              port: gunicorn
              path: /
          livenessProbe:
            initialDelaySeconds: 10
            exec:
              command:
                - /bin/sh
                - -c
                - 'pidof -x gunicorn'

This thing probably raised your eyebrow:

args:
  {{- range .Values.web.args }}
  - {{ quote . }}
  {{- end }}

It might look scary, but all we are doing here is iterating over the .Values.web.args list and adding its values to the args array. The quote function wraps each value in double quotes.
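
For example, with the args list we will define in values.yaml below, this loop renders to:

args:
  - "--bind"
  - "0.0.0.0"
  - "app:app"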

Another thing that might be new to you is:

resources: {{ toYaml .Values.web.resources | nindent 12 }}

Basically, we are telling Helm to convert the value provided in .Values.web.resources to YAML, and then piping the result into another function, nindent, which adds a newline at the beginning and indents every line by an additional 12 spaces. You will soon see that in action. Let's add values to values.yaml:

web:
  replicas: 2
  image: python-demo-app
  tag: init
  args:
    - '--bind'
    - '0.0.0.0'
    - 'app:app'
  resources:
    requests:
      memory: 128Mi
      cpu: 100m
    limits:
      memory: 256Mi
      cpu: 200m

We can generate the manifest without deploying it to check that everything is okay:

cd helm-charts
helm template -s templates/deployment.yaml --namespace namespace-to-be app-name .
---
# Source: demo-python-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-name-web
  namespace: namespace-to-be
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-demo-app
      role: web
  template:
    metadata:
      labels:
        app: python-demo-app
        role: web
    spec:
      securityContext:
        runAsGroup: 1000
        runAsUser: 1000
      containers:
        - name: app-name-web
          image: python-demo-app:init
          args:
            - "--bind"
            - "0.0.0.0"
            - "app:app"
          ports:
            - name: gunicorn
              containerPort: 8000
          resources:
            limits:
              cpu: 200m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi
          readinessProbe:
            initialDelaySeconds: 10
            httpGet:
              port: gunicorn
              path: /
          livenessProbe:
            initialDelaySeconds: 10
            exec:
              command:
                - /bin/sh
                - -c
                - 'pidof -x gunicorn'

Helm took our values.yaml file by default and generated the deployment manifest with it.
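
Before templating the remaining resources, it is worth knowing that Helm can also sanity-check the chart for us. Running helm lint from inside the helm-charts directory flags malformed templates and missing required Chart.yaml fields; a clean run prints roughly the following:

helm lint .
==> Linting .
1 chart(s) linted, 0 chart(s) failed

With that check in our toolbox, let's create templates for the rest of the resources.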

Templating job.yaml

I am not going to dive deep into this one. As with the deployment, let's define what we want to do here:

  • Use <release_name>-job as the Job resource name;
  • Provide the namespace on deploy (passed through the --namespace flag when installing/upgrading the helm-chart);
  • Use <release_name>-job as the container name;
  • Provide a different image name and tag;
  • Provide a different command;
  • Provide different args;
  • Provide different resources (requests and limits) for the container.

Try to do it yourself, taking the deployment.yaml template as an example. I will share a solution, as it is almost the same:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-job
  namespace: {{ .Release.Namespace }}
spec:
  template:
    metadata:
      labels:
        app: python-demo-app
        role: hello-world-job
    spec:
      restartPolicy: Never
      securityContext:
        runAsGroup: 1000
        runAsUser: 1000
      containers:
        - name: {{ .Release.Name }}-job
          image: {{ .Values.job.image }}:{{ .Values.job.tag }}
          command:
            {{- range .Values.job.command }}
            - {{ quote . }}
            {{- end }}
          args:
            {{- range .Values.job.args }}
            - {{ quote . }}
            {{- end }}
          resources: {{ toYaml .Values.job.resources | nindent 12 }}

After adding job values, our values.yaml file now looks like this:

web:
  replicas: 2
  image: python-demo-app
  tag: init
  args:
    - '--bind'
    - '0.0.0.0'
    - 'app:app'
  resources:
    requests:
      memory: 128Mi
      cpu: 100m
    limits:
      memory: 256Mi
      cpu: 200m
job:
  image: python-demo-app
  tag: init
  command:
    - python3
  args:
    - cli.py
  resources:
    requests:
      memory: 128Mi
      cpu: 100m
    limits:
      memory: 256Mi
      cpu: 200m

Finally, the rendered job manifest looks like this:

helm template -s templates/job.yaml --namespace namespace-to-be app-name .
---
# Source: demo-python-app/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: app-name-job
  namespace: namespace-to-be
spec:
  template:
    metadata:
      labels:
        app: python-demo-app
        role: hello-world-job
    spec:
      restartPolicy: Never
      securityContext:
        runAsGroup: 1000
        runAsUser: 1000
      containers:
        - name: app-name-job
          image: python-demo-app:init
          command:
            - "python3"
          args:
            - "cli.py"
          resources:
            limits:
              cpu: 200m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi

Templating service.yaml

The application has only one service with only one port to forward traffic to, so let's not spend a lot of time here and only change .metadata.name and .metadata.namespace to placeholders:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-web
  namespace: {{ .Release.Namespace }}
spec:
  selector:
    app: python-demo-app
    role: web
  ports:
    - name: http
      port: 80
      targetPort: gunicorn

Check the generated manifest:

helm template -s templates/service.yaml --namespace namespace-to-be app-name .
---
# Source: demo-python-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-name-web
  namespace: namespace-to-be
spec:
  selector:
    app: python-demo-app
    role: web
  ports:
    - name: http
      port: 80
      targetPort: gunicorn

Templating ingress.yaml

As with all the other resources, change .metadata.name and .metadata.namespace to placeholders. We can also support multiple hosts for this application. To do that, multiple rules must be supported, therefore the .spec.rules value must be wrapped in {{- range .Values.ingress.rules }}. Also, the service name should be templated to match the name in the service.yaml template:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-web
  namespace: {{ .Release.Namespace }}
spec:
  rules:
    {{- range .Values.ingress.rules }}
    - host: {{ .host }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: {{ $.Release.Name }}-web
              servicePort: http
    {{- end }}

Let’s add the ingress key to the values.yaml file:

ingress:
  rules:
    - host: python-app.demo.com
    - host: second-host-python-app.demo.com

Test how Helm generates the manifest:

helm template -s templates/ingress.yaml --namespace namespace-to-be app-name .
---
# Source: demo-python-app/templates/ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-name-web
  namespace: namespace-to-be
spec:
  rules:
    - host: python-app.demo.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: app-name-web
              servicePort: http
    - host: second-host-python-app.demo.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: app-name-web
              servicePort: http

All templates are now ready. The final values.yaml looks like this:

web:
  replicas: 2
  image: python-demo-app
  tag: init
  args:
    - '--bind'
    - '0.0.0.0'
    - 'app:app'
  resources:
    requests:
      memory: 128Mi
      cpu: 100m
    limits:
      memory: 256Mi
      cpu: 200m
job:
  image: python-demo-app
  tag: init
  command:
    - python3
  args:
    - cli.py
  resources:
    requests:
      memory: 128Mi
      cpu: 100m
    limits:
      memory: 256Mi
      cpu: 200m
ingress:
  rules:
    - host: python-app.demo.com
    - host: second-host-python-app.demo.com
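
With every template and default value in place, we can also render the whole chart at once by simply dropping the -s flag (run from inside the helm-charts directory, as before):

helm template --namespace namespace-to-be app-name .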

Deploy helm-charts

Start Minikube cluster:

minikube start --driver=virtualbox --kubernetes-version=1.20.5 --addons ingress
<...>
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

We are not using any public or private Docker registry, so let's build the image inside the Minikube virtual machine. Make sure you are in the root directory of your application:

eval $(minikube docker-env)
docker build -t python-demo-app:init .
minikube ssh docker images | grep python-demo-app
python-demo-app init c18672872873 About a minute ago 125MB

Deploy helm-charts to Kubernetes:

helm upgrade --install --create-namespace --namespace python-demo-app demo-app helm-charts/
Release "demo-app" does not exist. Installing it now.
W0426 20:50:05.625960 70845 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0426 20:50:05.721498 70845 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME: demo-app
LAST DEPLOYED: Mon Apr 26 20:50:05 2021
NAMESPACE: python-demo-app
STATUS: deployed
REVISION: 1
TEST SUITE: None

We have already covered why we are using the old apiVersion of the Ingress in the previous guide. As for the command, let's break it down into parts. You have probably noticed that I am using upgrade instead of install, but with the --install flag, which tells Helm to upgrade the release or install it if it does not exist. It is one command to rule them all. --create-namespace, well, creates the namespace if it does not exist. Next, I am passing the namespace name, the release name, and the charts folder. Additionally, we could add -f helm-charts/values.yaml to explicitly tell Helm to use that file for values. Luckily, Helm does that by default. Let's check how the Helm deployment went:

helm list --namespace python-demo-app
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
demo-app python-demo-app 1 2021-04-26 20:50:05.446109 +0300 EEST deployed demo-python-app-1.0.0
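
If you ever need more detail than helm list provides, helm status and helm get can inspect the same release (same release name and namespace as above):

# Show the release status, revision, and notes
helm status demo-app --namespace python-demo-app
# Show the user-supplied values for the release (add --all to include chart defaults)
helm get values demo-app --namespace python-demo-app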

Helm itself deployed successfully. How about all the resources:

kubectl get pods,jobs,service,ingress -n python-demo-app
NAME READY STATUS RESTARTS AGE
pod/demo-app-job-cttcx 0/1 Completed 0 6m58s
pod/demo-app-web-5d64656cd-g8d9d 1/1 Running 0 6m58s
pod/demo-app-web-5d64656cd-s6qg4 1/1 Running 0 6m58s
NAME COMPLETIONS DURATION AGE
job.batch/demo-app-job 1/1 3s 6m58s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/demo-app-web ClusterIP 10.105.213.47 <none> 80/TCP 6m58s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/demo-app-web <none> python-app.demo.com,second-host-python-app.demo.com 80 6m58s

It’s running without any issues. Let’s try to access our app using both hosts:

curl -H "Host: python-app.demo.com" $(minikube ip)/
Hello, World!
curl -H "Host: second-host-python-app.demo.com" $(minikube ip)/
Hello, World!

Deploy another application using the same charts

Imagine we are now deploying our app to the staging cluster. Simulate a different deployment by deploying 4 web replicas, each requesting cpu: 200m and memory: 256Mi. Also, each web server must boot up 6 Gunicorn workers. The application should use staging-app.demo.com as its ingress host. Finally, the application must be deployed to the staging-python-app namespace.

We will start by copying our values.yaml so we can modify it:

cp helm-charts/values.yaml helm-charts/staging.yaml

Let’s clear up some values and modify others according to our requirements. The final staging.yaml file should look like this:

web:
  replicas: 4
  args:
    - '--bind'
    - '0.0.0.0'
    - '--workers'
    - '6'
    - 'app:app'
  resources:
    requests:
      memory: 256Mi
      cpu: 200m
    limits:
      memory: 256Mi
      cpu: 200m
ingress:
  rules:
    - host: staging-app.demo.com

Some values that are present in values.yaml are missing here. Ignore that for now; I will get back to it. Let's deploy the application using the staging.yaml file:

helm upgrade --install --create-namespace \
--namespace staging-python-app \
-f helm-charts/staging.yaml \
staging-python-app \
helm-charts/
Release "staging-python-app" does not exist. Installing it now.
W0426 21:15:06.059332 72411 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0426 21:15:06.255467 72411 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME: staging-python-app
LAST DEPLOYED: Mon Apr 26 21:15:05 2021
NAMESPACE: staging-python-app
STATUS: deployed
REVISION: 1
TEST SUITE: None

Check resources in the namespace:

kubectl get pods,jobs,service,ingress -n staging-python-app
NAME READY STATUS RESTARTS AGE
pod/staging-python-app-job-c9m65 0/1 Completed 0 4m41s
pod/staging-python-app-web-57b85c68b4-9jjml 1/1 Running 0 4m41s
pod/staging-python-app-web-57b85c68b4-fs9pl 1/1 Running 0 4m41s
pod/staging-python-app-web-57b85c68b4-lpgj8 1/1 Running 0 4m41s
pod/staging-python-app-web-57b85c68b4-qr27p 1/1 Running 0 4m41s
NAME COMPLETIONS DURATION AGE
job.batch/staging-python-app-job 1/1 2s 4m41s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/staging-python-app-web ClusterIP 10.106.155.106 <none> 80/TCP 4m41s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/staging-python-app-web <none> staging-app.demo.com 192.168.99.107 80 4m41s

Check ingress access:

curl -H "Host: staging-app.demo.com" $(minikube ip)/
Hello, World!

Check how many workers Gunicorn boots up:

kubectl logs staging-python-app-web-57b85c68b4-9jjml -n staging-python-app | grep Booting | wc -l
6

As you can see, we have successfully used the same manifests to deploy applications with different configurations. However, staging.yaml provides fewer values than values.yaml. That is because values.yaml acts as a default values file: if no value for a specific key is provided in the -f <file>, the default value is used.
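
You can verify that fallback without touching the cluster by rendering the staging variant and checking where the image came from (run from the application root; the two matches are the web and job containers, both filled in from the defaults):

helm template -f helm-charts/staging.yaml staging-python-app helm-charts/ | grep 'image:'
          image: python-demo-app:init
          image: python-demo-app:init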

Clean up

Destroy Minikube cluster:

minikube delete

Also, the k8s-manifests/ directory is no longer needed since we have helm-charts. Let's remove it:

rm -r k8s-manifests

Do not forget to remove the directory from .dockerignore as well.

Conclusion

Congratulations on finishing this guide! If you wish to move forward by yourself, you can:

  • Implement configurable ports in service.yaml, deployment.yaml, ingress.yaml
  • Implement support for multiple jobs
  • Implement option to disable one-off jobs
  • Write unit tests for your templates using helm-unittest (see the sketch below)
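
For that last item, here is a rough sketch of what a test could look like (the file name and assertion follow the helm-unittest plugin's schema, but are only illustrative):

# helm-charts/tests/deployment_test.yaml
suite: deployment template
templates:
  - templates/deployment.yaml
tests:
  - it: renders the replica count from values
    set:
      web.replicas: 3
    asserts:
      - equal:
          path: spec.replicas
          value: 3

After installing the plugin, run the tests with helm unittest helm-charts/.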

A fully finished application can be found here: https://github.com/brnck/k8s-python-demo-app/tree/helm
