
Week 13 - Templating & Infrastructure as Code

Welcome to the start of phase 4!

Up until now, you have been deploying to Kubernetes using static YAML manifests. While this works well for a single environment, it introduces issues with duplication, configuration drift, and complexity when managing multiple environments (dev, test, prod).

This phase focuses on production readiness: automatic scaling, failure tolerance, observability, and templated deployments.

Introduction

In this lab, we will migrate from static YAML files to a templating tool. This allows us to separate the logic (the deployment structure) from the configuration (the specific values for an environment).

You must choose one of the following tools:

  1. Helm: A package manager for Kubernetes that uses templates and values files.
  2. Kustomize: A native configuration management tool that uses bases and overlays.

Operational requirements

Installing the templating tools

You will need the tools installed locally to develop and lint your templates.

Linux/macOS

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Windows

choco install kubernetes-helm
# OR
winget install Helm.Helm

Kustomize is built into kubectl (version 1.14+). You can check if it is available:

kubectl kustomize --help
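
Similarly, you can verify that the Helm CLI is available:

helm version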

Templating Strategy

You must convert your existing Kubernetes manifests into a templated format.

Complete

Choose one tool (Helm or Kustomize) and migrate your deployment logic.

Helm uses a Chart to package your application. You define templates with placeholders (e.g., {{ .Values.image }}) and supply values via values.yaml.

Example Structure:

This structure should live either in your existing repository or in a new repository dedicated to Helm.

my-chart/
├── Chart.yaml          # Metadata
├── values.yaml         # Default configuration 
└── templates/          # Logic
    ├── deployment.yaml
    └── service.yaml

A great starting point is the Getting Started section of Helm's chart template guide: https://helm.sh/docs/v3/chart_template_guide/getting_started

Working through it will give you a basic understanding of how Helm operates.

Example of Helm chart:

templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
values.yaml
replicaCount: 1
image:
  repository: nginx
  tag: stable
service:
  port: 80

When you run helm template, the templating engine replaces {{ .Values.replicaCount }} with 1, {{ .Values.image.repository }} with nginx, and so on, generating a standard Kubernetes manifest that can be installed directly into the cluster with helm install.
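
For example, assuming the chart directory is called my-chart as above and my-app is a release name you pick, rendering and installing could look like this:

# Render the manifests locally to inspect the generated YAML
helm template my-app ./my-chart

# Install the release, or upgrade it if it already exists
helm upgrade --install my-app ./my-chart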

Kustomize uses a Base (common YAMLs) and Overlays (patches for specific environments).

Example structure:

manifests/
├── base/                   # Logic (Common)
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays/               # Configuration
    └── prod/
        ├── patch.yaml
        └── kustomization.yaml
base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployment.yaml
- service.yaml

The overlays/prod folder references the base and applies environment-specific patches, for example:

overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base
patches:
- path: patch.yaml
overlays/prod/patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3

When you run kubectl kustomize, it merges the overlay into the base. The resulting manifest will have replicas: 3 instead of 1.
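
For example, assuming the manifests/ layout above, you can render and apply the prod overlay like this:

# Render the merged manifests without applying them
kubectl kustomize manifests/overlays/prod

# Build and apply the overlay in one step
kubectl apply -k manifests/overlays/prod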

Safe Deployments (Atomic Rollback)

In production, deployments fail. A bad configuration or a broken image shouldn't leave your workloads in a broken state.

Configure your deployment to be Atomic.

If the deployment fails (e.g., pods crash or readiness probes fail), the system must automatically rollback to the previous working version.

Helm: Use the --atomic flag (older versions) or --rollback-on-failure. Helm then waits for the readiness probes; if they time out, it reverts the release.

Kustomize: Write custom logic in the deploy step that checks the health of the deployment and rolls back on failure. For this lab, Helm makes this requirement significantly easier.
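
A minimal sketch of such custom logic, assuming a Deployment named my-nginx in the default namespace, could look like this:

# Apply the rendered overlay
kubectl apply -k manifests/overlays/prod

# Wait for the rollout to become healthy; roll back and fail the job if it does not
if ! kubectl rollout status deployment/my-nginx --timeout=5m; then
  kubectl rollout undo deployment/my-nginx
  exit 1
fi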

Secret Management

Helm charts and Kustomize overlays are code. Code lives in Git.

Security Rule

  • NEVER write secrets (passwords, API keys) into values.yaml.
  • NEVER write secrets into Kustomize patch files committed to the repo.

Inject secrets via CI/CD.

You should inject secrets only during the pipeline execution using environment variables stored in GitLab CI/CD settings.

Example (Helm):

# --set injects a single value only for this helm invocation;
# $DB_VAR is provided by a GitLab CI/CD variable.
helm upgrade --install my-app ./chart \
  --set secret.password="$DB_VAR" \
  --rollback-on-failure \
  --timeout=5m
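
A rough sketch of the corresponding GitLab CI job (the job name, image, and DB_PASSWORD variable name are illustrative assumptions, not prescribed by the lab):

deploy:
  stage: deploy
  image: alpine/helm:latest   # example image; any image with helm and cluster credentials works
  script:
    # DB_PASSWORD is defined as a masked CI/CD variable in GitLab settings, never in git
    - helm upgrade --install my-app ./chart --set secret.password="$DB_PASSWORD" --rollback-on-failure --timeout=5m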

Development requirements

Statelessness

To prepare for chaos engineering (which starts soon), your application must be able to survive random pod failures. This requires your application to be as stateless as possible.

The Problem with Local State

If your application saves uploaded files (images, logs, etc.) to a local folder inside the container, this will break later in this phase.

When a pod is killed/restarted, its local filesystem is destroyed. If you run 2 replicas, User A might upload a file to Pod 1, but User B (connected to Pod 2) won't see it.

Complete

Externalize your state.

  • Files/Images: Must be stored in an object store (like S3/MinIO) or a ReadWriteMany Longhorn volume.
  • Data: Must be stored in the database.
  • Sessions: Must be stored in a replicated store (Redis/Valkey) or the database.

You can do this by either deploying a replicated/clustered database solution yourself, or using one of the many preinstalled operators in the cluster:

  • CloudNativePG: operator for starting a PostgreSQL cluster. Examples: Basic Example, Full Example. Notes: straightforward HA setup.
  • mariadb-operator: operator for starting a MariaDB cluster. Examples: Basic Provisioning, Full Manifest. Warning: automatic primary failover requires extra configuration.
  • minio-operator: operator for starting a MinIO cluster (S3). Examples: Tiny Tenant (Basic), Full Base Config. Note: the tiny tenant example relies on Kustomize; you must run kubectl kustomize to render the configuration.
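
As an illustration, a minimal CloudNativePG cluster manifest (the name and storage size below are placeholders, not values from the linked examples) looks roughly like this:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db            # placeholder name
spec:
  instances: 3            # three instances for basic HA
  storage:
    size: 1Gi             # placeholder size, adjust to your needs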

If your system so far relies on disk-based storage (files written directly to disk, SQLite), you may get away with ReadWriteMany volumes instead of rewriting large parts of your code, but make sure to test everything.
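
If you go that route, a ReadWriteMany PersistentVolumeClaim backed by Longhorn might look like the sketch below (the storage class name and size are assumptions; check what your cluster actually provides):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-uploads          # placeholder name
spec:
  accessModes:
    - ReadWriteMany             # lets all replicas mount the same volume
  storageClassName: longhorn    # assumed storage class name
  resources:
    requests:
      storage: 2Gi              # placeholder size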

Documentation requirements

What do we expect your documentation to include?

Templating choice

  • Which tool did you choose (Helm or Kustomize) and why?
  • Describe how the templating tool is used with your configuration.

Secret Management

  • Explain your strategy for handling secrets.

Statelessness

  • Confirm that your application is stateless.
  • Where are images/files stored?
  • Where and how does the database run?

Tasks

  • Install Helm or Kustomize locally.
  • Refactor your Kubernetes manifests into a Chart (Helm) or Base/Overlay (Kustomize).
  • Update your .gitlab-ci.yml to deploy using the templating tool.
  • Configure "Atomic" deployment (automated rollback on failure).
  • Ensure strict separation of configuration and logic.
  • Inject secrets securely via the pipeline (no hardcoded secrets in git).
  • Verify your application is stateless (ready for chaos).