Using Helm to deploy and manage Kubernetes manifests is very convenient. But helm upgrade will not recreate pods automatically, so some people add the "--recreate-pods" flag to force pod recreation.

helm upgrade --recreate-pods -i k8s-dashboard stable/k8s-dashboard

The problem is that --recreate-pods deletes the old pods first, which means you will not have zero downtime during the upgrade.

Fortunately, there is a solution. According to this issue report https://github.com/helm/helm/issues/5218 and the article Deploying on Kubernetes #11: Annotations, we can add an annotation, such as a timestamp or a checksum of a ConfigMap/Secret, to spec.template.metadata.annotations in deployment.yaml. Any change to the pod template triggers a rolling update, so an annotation that changes on every release forces new pods to roll out.

kind: Deployment
metadata:
  ...
spec:
  template:
    metadata:
      labels:
        app: k8s-dashboard
      annotations:
        timestamp: "{{ date "20060102150405" .Release.Time }}"
      ...
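A variant of the same trick, described in the Helm documentation's "Automatically Roll Deployments" tip, hashes the rendered ConfigMap template instead of stamping the time. With a checksum, pods only restart when the configuration actually changes, not on every upgrade (the path templates/configmap.yaml is an assumption; adjust it to your chart's layout):

```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # Re-rendered on every upgrade; the pod template only changes
        # (and pods only roll) when the ConfigMap's contents change.
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'
```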

Because the pod template has changed, Kubernetes will perform a rolling update and roll out new pods without downtime.

If you are using Helm v3 or later, use {{ now | date "20060102150405" }} instead, because .Release.Time was removed in Helm v3.
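For Helm v3, the annotation block from the example above would look like this (a minimal sketch; the surrounding Deployment fields are unchanged):

```yaml
spec:
  template:
    metadata:
      annotations:
        # "now" is evaluated at render time, so each upgrade produces a new
        # value and forces a rolling restart of the pods.
        timestamp: "{{ now | date "20060102150405" }}"
```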
