Helm: Kubernetes Package Management

A complete guide to Helm — the package manager for Kubernetes. Charts, values, Go templates, repositories, hooks, tests, secrets management, umbrella charts for microservice fleets, Helmfile, and battle-tested rollback strategies.

By Jose Nobile | Updated 2026-04-23 | 18 min read

Charts, Values, and Templates

A Helm chart is a directory containing Kubernetes manifests templatized with Go templates, a values.yaml file with defaults, and metadata in Chart.yaml. Charts package everything needed to deploy an application: Deployments, Services, ConfigMaps, Ingress, HPA, RBAC — all parameterized through values. Instead of maintaining 15 raw YAML files per microservice, you maintain one chart with configurable values.

The values.yaml file defines every configurable parameter with sensible defaults. At install time, users override specific values with --set key=value or -f custom-values.yaml. This separation of template and configuration is Helm's core power: the same chart deploys to development, staging, and production with different values files for each environment.
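As a sketch of that layering (chart path, file names, and keys are illustrative): later -f files override earlier ones, and --set overrides everything from files.

```shell
# Install with a base values file, an environment overlay, and one inline override
helm install my-service ./my-service \
  -f values.yaml \
  -f values-production.yaml \
  --set image.tag=v2.4.1

# Inspect the final merged values for a deployed release
helm get values my-service --all
```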

Chart structure follows a strict convention: templates/ for Kubernetes manifests, charts/ for sub-chart dependencies, crds/ for Custom Resource Definitions, and Chart.yaml for metadata (name, version, appVersion, dependencies). The _helpers.tpl file inside templates/ defines reusable template snippets (labels, selectors, names) that keep your manifests DRY.

my-service/
  Chart.yaml        # name, version, appVersion, dependencies
  values.yaml       # default configuration
  templates/
    _helpers.tpl     # reusable template snippets
    deployment.yaml  # Deployment manifest
    service.yaml     # Service manifest
    hpa.yaml         # HPA manifest
    ingress.yaml     # Ingress/Gateway rules
    configmap.yaml   # ConfigMap
    NOTES.txt        # Post-install message

Go Template Engine Deep Dive

Helm uses Go's text/template package enhanced with Sprig functions. Templates access values via {{ .Values.key }}, release metadata via {{ .Release.Name }}, and chart metadata via {{ .Chart.Name }}. Control structures include conditionals ({{ if }}), loops ({{ range }}), scoped context ({{ with }}), and pipelines ({{ .Values.name | default "app" | quote }}). The {{ with }} action rebinds the dot (.) to a new scope — {{ with .Values.ingress }}{{ .host }}{{ end }} accesses .Values.ingress.host without repeating the full path, and skips the block entirely if the value is nil.

Named templates defined with {{ define "chart.labels" }} and invoked with {{ include "chart.labels" . }} are the building blocks of maintainable charts. Define standard Kubernetes labels, selector labels, and resource names once in _helpers.tpl and include them everywhere. Use include (not the built-in template action) because it returns a string you can pipe through functions like nindent for proper YAML indentation. The tpl function renders a string as a template at runtime; since tpl expects a string, serialize maps first — {{ tpl (toYaml .Values.annotations) . }} lets users pass Go template expressions inside values files, useful for dynamic annotations or labels that reference other values.

Key functions: toYaml converts a value to YAML format ({{ toYaml .Values.resources | nindent 12 }}), essential for rendering nested maps and lists. include calls a named template and returns its output as a string. tpl evaluates a string as a Go template. Common pitfalls: whitespace control is critical ({{- trims left, -}} trims right), nested values need null checks ({{ if .Values.ingress.enabled }} fails if .Values.ingress is nil — use {{ with .Values.ingress }}{{ if .enabled }}{{ end }}{{ end }} or the dig function), and YAML indentation must be explicit. Always validate output with helm template before deploying.
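A quick local validation loop along these lines catches most of these pitfalls before anything reaches a cluster (chart path and template name are illustrative):

```shell
# Render the whole chart locally, no cluster needed;
# nil-pointer and syntax errors surface here
helm template test ./my-service -f values.yaml

# Render a single manifest and exercise an override
helm template test ./my-service \
  --show-only templates/deployment.yaml \
  --set replicaCount=5

# --debug prints the (partially) rendered output even when rendering fails,
# which makes whitespace and indentation mistakes much easier to spot
helm template test ./my-service --debug
```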

# templates/_helpers.tpl — define reusable named templates
{{- define "app.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

# templates/deployment.yaml — include, with, range, toYaml, tpl
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
  labels:
    {{- include "app.labels" . | nindent 4 }}
  annotations:
    {{- with .Values.annotations }}
    {{- tpl (toYaml .) $ | nindent 4 }}
    {{- end }}
spec:
  replicas: {{ .Values.replicaCount | default 2 }}
  template:
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        {{- with .Values.resources }}
        resources:
          {{- toYaml . | nindent 12 }}
        {{- end }}
        env:
        {{- range .Values.env }}
        - name: {{ .name }}
          value: {{ .value | quote }}
        {{- end }}

Chart Repositories

Chart repositories host packaged charts for distribution. Public repos (Bitnami, Prometheus Community, Jetstack) provide production-ready charts for common software. Private repos (Google Artifact Registry, AWS ECR, ChartMuseum, Harbor) host your organization's internal charts. Helm 3 supports OCI registries natively, letting you push charts to the same registry that hosts your Docker images.

Add a repo with helm repo add bitnami https://charts.bitnami.com/bitnami, then install charts with helm install my-redis bitnami/redis -f values.yaml. For OCI registries, push with helm push chart.tgz oci://gcr.io/my-project/charts and install with helm install my-app oci://gcr.io/my-project/charts/my-app --version 1.2.3.

In production, internal charts are published to Google Artifact Registry as OCI artifacts during CI. Each microservice's chart is versioned independently (chart version tracks the chart structure, appVersion tracks the application version). A shared base chart contains common templates that all service charts inherit as a dependency, ensuring consistency across 20+ services.
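The end-to-end publish-and-consume flow might look like this (registry paths and versions are illustrative; OCI registries typically also need a `helm registry login` first):

```shell
# Classic repository workflow
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/redis --versions

# OCI workflow: package, push, install
helm package ./my-service            # produces my-service-1.2.3.tgz
helm push my-service-1.2.3.tgz oci://gcr.io/my-project/charts
helm install my-service oci://gcr.io/my-project/charts/my-service --version 1.2.3
```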

Hooks and Tests

Helm hooks are special resources that execute at specific points in a release lifecycle: pre-install, post-install, pre-upgrade, post-upgrade, pre-delete, post-delete, pre-rollback, and post-rollback. Annotate a template with helm.sh/hook: pre-upgrade to make it a hook. Common uses: database migrations before upgrade, cache warming after install, and cleanup on delete.

Hook weight (helm.sh/hook-weight) controls execution order when multiple hooks exist at the same lifecycle point. Hook deletion policy (helm.sh/hook-delete-policy) determines whether the hook resource is deleted after execution: before-hook-creation (delete old before creating new), hook-succeeded (delete on success), or hook-failed (delete on failure).

Helm tests are special hooks annotated with helm.sh/hook: test. They run Pods that validate the release is working correctly. Run tests with helm test release-name. A typical test suite includes: connectivity tests (can the Service be reached), dependency tests (can the app connect to its database), and smoke tests (does the health endpoint return 200). In production, every chart includes a connectivity test that runs automatically after each deployment via CI.

# templates/pre-upgrade-migration.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migration
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      containers:
      - name: migrate
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        command: ["node", "dist/migrate.js"]
      restartPolicy: Never
  backoffLimit: 3
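A connectivity test like the one described above could look like the following sketch — the health path, Service name pattern, and port value are assumptions about the chart:

```yaml
# templates/tests/connection-test.yaml — runs via `helm test <release>`
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Release.Name }}-test-connection
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  containers:
  - name: curl
    image: curlimages/curl:8.7.1
    # --fail makes curl exit non-zero on HTTP errors, failing the test
    command: ["curl", "--fail", "--max-time", "10"]
    args: ["http://{{ .Release.Name }}-{{ .Chart.Name }}:{{ .Values.service.port }}/health"]
  restartPolicy: Never
```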

Secrets Management with Helm

Helm does not encrypt secrets. Values files containing sensitive data must never be committed to Git in plain text. The primary solutions are: inject secrets at deploy time via CI/CD variables (helm upgrade --set db.password=$DB_PASSWORD), use helm-secrets plugin with SOPS/age encryption, or use External Secrets Operator to sync secrets from a cloud provider's secret manager into Kubernetes.

The helm-secrets plugin integrates with SOPS (originally a Mozilla project, now maintained under the CNCF) to encrypt values files using AWS KMS, GCP KMS, Azure Key Vault, or age keys. Encrypted files are safe to commit to Git. Decryption happens at deploy time: helm secrets upgrade release ./chart -f secrets.yaml. This gives you Git-tracked secrets with encryption at rest and key rotation through the cloud KMS.
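A sketch of that workflow, assuming GCP KMS (the key path and file names are placeholders):

```shell
# .sops.yaml at the repo root tells SOPS which key encrypts which files
cat > .sops.yaml <<'EOF'
creation_rules:
  - path_regex: .*secrets.*\.yaml$
    gcp_kms: projects/my-project/locations/global/keyRings/helm/cryptoKeys/sops
EOF

sops -e -i secrets.yaml        # encrypt in place; now safe to commit

# Deploy: the plugin decrypts transparently, passes plaintext to Helm in memory
helm secrets upgrade api-service ./chart -f values.yaml -f secrets.yaml
```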

In production, the preferred approach is External Secrets Operator (ESO). Secrets live in Google Secret Manager and ESO syncs them into Kubernetes Secret objects. Helm charts reference the Secret by name but never contain the actual sensitive values. This completely separates secret management from deployment tooling — security teams manage secrets in the cloud console, and developers reference them by name in Helm values.
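A minimal ExternalSecret sketch for that pattern — the store name, secret names, and key are all assumptions about the cluster setup:

```yaml
# Syncs a Google Secret Manager entry into a Kubernetes Secret that the
# Helm chart references by name only; no sensitive value touches Git
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: api-service-db
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: gcp-secret-manager      # assumed store configured by the security team
  target:
    name: api-service-db          # the Secret name the chart's values point at
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: api-service-db-password
```

The chart then only needs something like `existingSecret: api-service-db` in its values.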

Never pass secrets via values.yaml committed to Git. Even with .gitignore, secrets can leak through Git history, CI logs, or Helm release metadata (which stores values in the cluster). Use ESO, helm-secrets, or CI/CD variable injection. In production, zero secrets exist in the Git repository — all flow from Google Secret Manager through External Secrets Operator.

Umbrella Charts for Microservices

An umbrella chart is a parent chart that depends on multiple sub-charts, deploying an entire microservice platform in a single helm install. Each sub-chart is an independent service chart (api, payment, notification, etc.) listed as a dependency in the parent's Chart.yaml. The parent's values.yaml passes configuration down to each sub-chart under its name key.

This approach has trade-offs. Advantages: atomic deploys of the entire platform, consistent versioning across services, simplified CI/CD that deploys everything at once. Disadvantages: a single failure blocks the entire release, independent service teams lose deployment autonomy, and large umbrella charts become unwieldy. The platform started with an umbrella chart but evolved to individual chart releases per service for deployment independence.

The hybrid approach works best: a shared base chart as a library (type: library in Chart.yaml) that all service charts depend on, plus individual releases per service. The base chart provides common templates, labels, and resource patterns. Service charts import the base chart and add service-specific templates. This gives consistency without coupling deployment lifecycles.

# Umbrella Chart.yaml
apiVersion: v2
name: the-platform
version: 1.0.0
dependencies:
  - name: api-service
    version: ~2.1.0
    repository: "oci://gcr.io/myproject/charts"
  - name: payment-service
    version: ~3.0.0
    repository: "oci://gcr.io/myproject/charts"
  - name: notification-service
    version: ~1.5.0
    repository: "oci://gcr.io/myproject/charts"

# Umbrella values.yaml
api-service:
  replicaCount: 3
  image:
    tag: v2.4.1
payment-service:
  replicaCount: 2
  image:
    tag: v3.1.0
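The hybrid pattern described above can be sketched as a library base chart plus a deployable service chart that imports it (names and versions are illustrative):

```yaml
# base-chart/Chart.yaml — shared templates only; cannot be installed directly
apiVersion: v2
name: base-chart
version: 1.0.0
type: library
---
# api-service/Chart.yaml — a deployable chart importing the base
apiVersion: v2
name: api-service
version: 2.1.0        # chart structure version
appVersion: "2.4.1"   # application build version
dependencies:
  - name: base-chart
    version: ~1.0.0
    repository: "oci://gcr.io/myproject/charts"
```

The service chart's templates then render shared definitions with include, e.g. a helper the base chart exposes such as a hypothetical `{{ include "base-chart.labels" . }}`.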

Rollback and Versioning

Every helm upgrade creates a new revision. helm history release-name shows all revisions with timestamps and status. helm rollback release-name 3 reverts to revision 3. Helm stores release state (including rendered manifests and values) as Secrets in the Kubernetes cluster, so rollback is fast and does not require access to the chart repository.

Version two numbers independently: the chart version (tracks template/structure changes) and the appVersion (tracks the application build). Bump the chart version when you modify templates, add new resources, or change default values. Bump the appVersion when you deploy a new application build. This distinction lets you update app versions without changing the chart, and vice versa.

Set --max-history to control how many revisions Helm retains (default is 10). In production, 10 is usually sufficient. More history means more Secrets stored in the cluster and more data for helm diff to compare against. Use the helm-diff plugin (helm diff upgrade) to preview changes before applying them — this catches accidental value changes and template regressions before they hit the cluster.

# View release history
helm history api-service -n production

# Rollback to specific revision
helm rollback api-service 5 -n production

# Preview changes before upgrade
helm diff upgrade api-service ./chart -f prod-values.yaml -n production

# Upgrade with max history
helm upgrade api-service ./chart \
  -f prod-values.yaml \
  --max-history 10 \
  -n production

Helmfile and helm-diff Plugin

Helmfile is a declarative spec for deploying multiple Helm releases. Instead of scripting dozens of helm upgrade commands, you define all releases in a single helmfile.yaml with environments, values layering, and dependency ordering. Run helmfile sync to reconcile every release to the desired state, or helmfile apply to show a diff first and apply only if you confirm. Helmfile brings the GitOps mindset to Helm: your entire cluster state is declared in one file.

Helmfile supports environment-specific values (environments: staging: values: [...]), Go template expressions in the spec itself, and selectors (helmfile -l app=api sync) to target specific releases. It can pull charts from any repository including OCI registries. For large platforms with 20+ services, Helmfile eliminates the shell scripts that typically glue Helm commands together and makes multi-release deployments reproducible.

The helm-diff plugin (helm plugin install https://github.com/databus23/helm-diff) adds a helm diff upgrade command that shows a color-coded diff of what would change before you apply an upgrade. It compares the currently deployed manifests against the new rendered templates and displays additions, removals, and modifications. Helmfile integrates helm-diff natively — helmfile apply runs a diff first and only proceeds if changes are detected. This is the single most important safety net for production Helm operations.

# helmfile.yaml
repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

environments:
  staging:
    values: [environments/staging.yaml]
  production:
    values: [environments/production.yaml]

releases:
  - name: api-service
    chart: ./charts/api-service
    namespace: {{ .Environment.Name }}
    values:
      - values/api-service/common.yaml
      - values/api-service/{{ .Environment.Name }}.yaml

  - name: redis
    chart: bitnami/redis
    version: 18.6.1
    namespace: {{ .Environment.Name }}
    values:
      - values/redis/{{ .Environment.Name }}.yaml

# Preview changes then apply
# helmfile -e production diff
# helmfile -e production apply

Helm in CI/CD Pipelines

In CI/CD pipelines, Helm replaces raw kubectl apply with versioned, parameterized, rollback-capable deployments. The typical pipeline flow: build the Docker image, push to registry, run helm upgrade --install with the new image tag passed via --set image.tag=$CI_COMMIT_SHA. The --install flag makes the command idempotent — it creates the release on first run and upgrades on subsequent runs.

Use --atomic to automatically roll back on failure. If any resource fails to reach ready state within the timeout, Helm reverts the entire release to the previous version. Combine with --timeout 5m to set how long Helm waits for readiness. This is essential in CI/CD: a failed deployment should never leave the cluster in a broken state.

In production, GitLab CI pipelines use Helm for every deployment. The deploy job runs helm upgrade --install --atomic --timeout 5m with environment-specific values files (values-staging.yaml, values-production.yaml). After deployment, the pipeline runs helm test to validate connectivity. If tests fail, the pipeline triggers a rollback and alerts the team via Slack.
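A deploy job along those lines might look like this sketch — the Helm image, chart path, and values file names are assumptions:

```yaml
# .gitlab-ci.yml — idempotent, atomic deploy with post-deploy tests
deploy_production:
  stage: deploy
  image: alpine/helm:3.14.0
  script:
    - helm upgrade --install api-service ./chart
        -f values-production.yaml
        --set image.tag=$CI_COMMIT_SHA
        --atomic --timeout 5m
        -n production
    # Validate connectivity; a non-zero exit fails the job
    - helm test api-service -n production
  environment:
    name: production
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```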

Best Practices

Always run helm template or helm lint in CI before deploying. Template renders locally without cluster access, catching syntax errors and nil pointer panics. Lint validates chart metadata and structure. Combine both: helm lint ./chart && helm template test ./chart -f values.yaml > /dev/null. This single line prevents most deployment failures.

Pin chart dependency versions with exact or tilde ranges (~2.1.0), never use * or caret ranges in production. Always specify --atomic and --timeout in CI deployments. Use namespaced releases and avoid default namespace. Set resource requests and limits in values.yaml defaults so no deployment runs without them. Include NOTES.txt to give users post-install instructions.

Keep values files flat when possible — deeply nested values are harder to override with --set. Use helm diff upgrade before every production upgrade. Store environment-specific values in separate files (values-staging.yaml, values-production.yaml) rather than relying on --set overrides that are hard to audit. Version your charts in a chart repository (OCI preferred) rather than deploying from local directories in production. Document every value in values.yaml with comments explaining what it controls.

Validate Before Deploy

Run helm lint, helm template, and helm diff in CI. Catch errors before they reach the cluster. Use schema validation (values.schema.json) to enforce value types and required fields.
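A values.schema.json placed next to values.yaml is validated automatically by helm lint, install, and upgrade. A minimal sketch enforcing the shape of the values used throughout this guide:

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["image"],
  "properties": {
    "replicaCount": { "type": "integer", "minimum": 1 },
    "image": {
      "type": "object",
      "required": ["repository", "tag"],
      "properties": {
        "repository": { "type": "string" },
        "tag": { "type": "string", "minLength": 1 }
      }
    }
  }
}
```

With this in place, helm install fails fast on a typo like `replicaCount: "three"` instead of producing a broken Deployment.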

Version and Pin Everything

Pin dependency versions. Tag chart releases in OCI registries. Use --atomic in production. Track chart version and appVersion independently. Never deploy unversioned charts to production.

Values Hygiene

Comment every value. Keep nesting shallow. Use separate files per environment. Never commit secrets to values files. Set resource limits as defaults. Use values.schema.json for validation.

Helm 4: What Changed

Helm 4 was released on November 12, 2025. Helm 3 is now in security-fix-only mode until November 2026. This is the most significant Helm release since the Tiller removal in Helm 3, with fundamental changes to how Helm applies resources to the cluster.

Server-Side Apply (SSA) replaces the three-way merge patch strategy. Instead of Helm computing diffs client-side and sending patches, Kubernetes itself handles conflict detection and field ownership. Conflicts now produce explicit errors instead of silent overrides -- if another controller owns a field, Helm will not silently overwrite it. This eliminates an entire class of deployment bugs where Helm and other tools (operators, kubectl edits) would silently fight over the same fields.

kstatus integration replaces the simplistic --wait polling with proper Kubernetes readiness evaluation. Instead of checking if Pods are Running, kstatus understands the actual readiness semantics of each resource type (Deployments, StatefulSets, Jobs, CRDs). This makes --wait reliable for production CI/CD pipelines where the previous implementation would sometimes report success prematurely.

Wasm-based plugins replace the old shell-script plugin system with WebAssembly modules. Three plugin types are supported: CLI plugins (extend Helm commands), getter plugins (fetch charts from custom sources), and post-renderer plugins (transform manifests before apply). Post-renderers must now be a plugin name, not an arbitrary executable path -- this is a breaking change.

OCI digest support allows installing charts by digest (helm install app oci://registry/chart@sha256:abc...) for supply chain security. Multi-doc values allow splitting complex values across multiple YAML files that are deep-merged in order. The new Chart v3 format adds structured metadata while maintaining backward compatibility with v2 charts.

Helm 4.1.x (current stable, April 2026): The current release is Helm v4.1.4, a patch that fixes a crash when empty CRD resources were encountered. The 4.1.x line has focused on stabilizing the Helm 4 foundation with bug fixes and hardening. OCI registry adoption is now mainstream -- Docker Hub, GitHub Container Registry, Amazon ECR, and Azure Container Registry all natively host Helm charts, and OCI is the recommended distribution method over legacy chart repositories. The next releases, Helm 4.2.0 and 3.21.0, are scheduled for May 13, 2026 -- the 4.2.0 release is expected to include the Chart v3 format and content-based caching. Helm 3 remains in security-fix-only mode (Helm 3.20.x) until November 2026.

Migration note: Existing releases default to client-side apply; migration to server-side apply is opt-in via --server-side-apply. CLI flag renaming affects some automation scripts -- review your CI/CD pipelines before upgrading. Test in staging first: SSA conflict detection may surface previously-hidden ownership conflicts between Helm and other controllers.

Real-World: Production Helm Strategy

The platform manages 20+ Kubernetes deployments entirely through Helm charts. The chart strategy evolved from a monolithic umbrella chart to independent per-service charts sharing a common base library, enabling each team to deploy independently while maintaining consistency.

Shared Base Chart

A library chart providing common templates for Deployments, Services, HPA, RBAC, and ConfigMaps. All 20+ service charts depend on it. Template changes propagate to every service on next release.

Per-Service Releases

Each microservice has its own Helm release with independent versioning. Services deploy independently via GitLab CI. Rollback is per-service, not platform-wide. Zero coordination needed between teams.

Atomic Deploys + Tests

Every deployment uses --atomic for automatic rollback on failure. Post-deploy Helm tests validate connectivity and health. Failed tests trigger automated rollback and Slack alerts in under 60 seconds.
