helm upgrade does not re-create pods
Using Helm to deploy and manage Kubernetes manifests is very convenient, but helm upgrade will not recreate pods automatically. A common workaround is to add "--recreate-pods" to force pod recreation:
helm upgrade --recreate-pods -i k8s-dashboard stable/k8s-dashboard
The downside is that --recreate-pods deletes the running pods first, so the upgrade is no longer zero-downtime.
Fortunately there is a better solution. According to this issue report https://github.com/helm/helm/issues/5218 and the article Deploying on Kubernetes #11: Annotations, we can add an annotation, such as a timestamp or a ConfigMap/Secret checksum, to spec.template.metadata.annotations in deployment.yaml.
kind: Deployment
metadata:
  ...
spec:
  template:
    metadata:
      labels:
        app: k8s-dashboard
      annotations:
        timestamp: '{{ date "20060102150405" .Release.Time }}'
      ...
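A more targeted variant, taken from Helm's own chart development tips, hashes the rendered ConfigMap template instead of stamping a timestamp, so pods are rolled only when the configuration actually changes. This is a sketch: the file name configmap.yaml is an assumption, so adjust it to your chart's layout.

```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # Pods restart only when the rendered ConfigMap changes.
        # "configmap.yaml" is assumed; use your chart's actual template file name.
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'
```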
Because the pod template has changed, Kubernetes will perform a rolling update and roll out new pods without downtime.
If you are using Helm v3+, use {{ now | date "20060102150405" }} instead, because .Release.Time was removed in Helm v3.
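For Helm v3, the annotation from the example above would therefore look like this (same layout, only the template expression changes):

```yaml
annotations:
  # "now" returns the current time at render, so every upgrade rolls the pods.
  timestamp: '{{ now | date "20060102150405" }}'
```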