Save time and increase reliability with customisable blueprint unit test templates for Helm charts
DevOps engineers and developers, discover how you can reduce testing time and improve consistency across your projects
In our OPA policy-based testing of Helm charts blog post, we explored how to enforce standardised rules and best practices in Kubernetes deployments. While a policy-based testing approach significantly enhances security and compliance in Kubernetes environments, this article shifts the spotlight to unit testing for Helm charts.
Unit testing complements policy-based testing by verifying the functional correctness of individual components within Helm charts. Unlike policy testing, which ensures adherence to predefined rules, unit testing validates that each part of a chart behaves as expected under various conditions.
This blog post introduces a solution designed to speed up testing and boost reliability using standardised, customisable blueprint templates that adapt to any chart.
Leveraging the Helm unit test framework
We leveraged the Helm unit test framework as the base of the test templates. This tool allows developers to write unit tests for Helm charts in YAML, a familiar format within the Kubernetes world. The framework simplifies testing by checking whether the rendered YAML output from a chart matches the expected results. For example, a simple test to check if a deployment exists might look like this:
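The snippet below is a minimal illustration of the framework's YAML syntax; the suite and template names are assumptions for the sake of the example.

```yaml
suite: test deployment
templates:
  - deployment.yaml
tests:
  - it: should create a Deployment
    asserts:
      - isKind:
          of: Deployment
```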
This example test ensures that the chart generates a Kubernetes deployment resource. Let’s explore a more comprehensive example that showcases various testing techniques.
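Below is a sketch of what such a suite could look like for a hypothetical "myapp" chart released as "production". The concrete values (image, port, probe path, resource limits) are illustrative assumptions rather than the exact suite from our repository.

```yaml
suite: test myapp deployment
templates:
  - deployment.yaml
release:
  name: production
tests:
  - it: should render a single Deployment with the expected basics
    asserts:
      - hasDocuments:
          count: 1
      - isKind:
          of: Deployment
      - isAPIVersion:
          of: apps/v1

  - it: should honour overridden replica count and image
    set:
      replicaCount: 3
      image:
        repository: myorg/myapp
        tag: "1.2.3"
    asserts:
      - equal:
          path: spec.replicas
          value: 3
      - equal:
          path: spec.template.spec.containers[0].image
          value: myorg/myapp:1.2.3

  - it: should use the release-prefixed name and common labels
    asserts:
      - equal:
          path: metadata.name
          value: production-myapp
      - equal:
          path: metadata.labels.app
          value: myapp
      - isNotEmpty:
          path: metadata.labels

  - it: should set resource limits and the container port
    asserts:
      - equal:
          path: spec.template.spec.containers[0].resources.limits.cpu
          value: 500m
      - equal:
          path: spec.template.spec.containers[0].resources.limits.memory
          value: 256Mi
      - equal:
          path: spec.template.spec.containers[0].ports[0].containerPort
          value: 8080

  - it: should run with a restrictive security context
    asserts:
      - equal:
          path: spec.template.spec.containers[0].securityContext.runAsNonRoot
          value: true
      - equal:
          path: spec.template.spec.containers[0].securityContext.privileged
          value: false

  - it: should define a liveness probe
    asserts:
      - isNotEmpty:
          path: spec.template.spec.containers[0].livenessProbe
      - equal:
          path: spec.template.spec.containers[0].livenessProbe.httpGet.path
          value: /healthz
```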
This comprehensive test suite demonstrates a wide range of Helm unit testing capabilities:
Basic structure validation:
Checks for the correct resource kind (Deployment), API version (apps/v1) and document count.
Deployment specifications:
Uses the “set” key to override chart values for testing specific scenarios.
Verifies replica count, image details and naming conventions.
Checks for the presence of resource specifications and correct port configuration.
Metadata and labelling:
Ensures the correct naming convention (combining release name and app name) is used.
Verifies the presence and content of labels.
Resource management:
Checks for specific CPU and memory limits, crucial for resource allocation in Kubernetes.
Security configurations:
Validates security context settings, ensuring the container runs as a non-root user and isn't privileged.
Health checks:
Verifies the presence and configuration of a liveness probe.
You can find more information on how to define tests here.
While this test suite is suitable for a specific deployment, it is tightly coupled to the “myapp” application and the “production” release. How can we efficiently reuse this test, or a set of test suites, across various charts without duplicating effort? We could introduce placeholders like {{ CHART_NAME }}, {{ RELEASE_NAME }}, {{ CPU_LIMIT }}, {{ HEALTH_CHECK_PATH }}, {{ REPLICA_COUNT }} and so on, turning this specific test suite into a reusable blueprint that can easily be adapted to different charts and configurations.
Blueprint unit test templates
A “templated” Helm test suite allows a test to be reused with different charts, aligning with the best practices of modularity and reusability. In other words, we consume standardised, pre-written blueprint test suites that can be tailored to any project’s needs. For instance, if we have a test that needs to verify the chart name, release name, deployment count and common metadata, the template includes those checks and can be adjusted per project requirements.
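As an illustration, here is roughly how the earlier deployment suite could look once templated with Jinja2-style placeholders. The exact set of checks in a published blueprint may differ; this sketch only shows the shape of the approach.

```yaml
suite: test {{ CHART_NAME }} deployment
templates:
  - deployment.yaml
release:
  name: {{ RELEASE_NAME }}
tests:
  - it: should render the expected number of deployment documents
    asserts:
      - hasDocuments:
          count: {{ DEPLOYMENT_COUNT }}
      - isKind:
          of: Deployment
      - isAPIVersion:
          of: apps/v1

  - it: should use the release-prefixed name and configured replicas
    asserts:
      - equal:
          path: metadata.name
          value: {{ RELEASE_NAME }}-{{ CHART_NAME }}
      - equal:
          path: spec.replicas
          value: {{ REPLICA_COUNT }}

  - it: should set the CPU limit and liveness probe path
    asserts:
      - equal:
          path: spec.template.spec.containers[0].resources.limits.cpu
          value: {{ CPU_LIMIT }}
      - equal:
          path: spec.template.spec.containers[0].livenessProbe.httpGet.path
          value: {{ HEALTH_CHECK_PATH }}
```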
The above test suite can now serve as a versatile blueprint unit test template, which DevOps practitioners can then fine-tune based on their chart’s specific set-up. This approach enables uniform and streamlined testing across diverse Helm charts, guaranteeing that each deployment fulfils the necessary criteria while reducing the labour required to develop and upkeep test suites.
To further streamline this process and fully leverage the benefits of this method, we wrote a custom Python script that converts the blueprints into customised test suites for each unique chart.
Processing the blueprints
To automate the creation of the final unit tests, we use a Python script that processes the blueprint test suites. This script replaces placeholders in the test suites with actual values from a configuration file. This step transforms our generic blueprints into chart-specific test suites. To demonstrate how to use this solution, let’s walk through a practical scenario using the blueprint deployment and ConfigMap test suites. Reference the sample code in the public repository here to follow along.
Preparing the environment
Before we begin, we need to have the following prerequisites installed:
Helm v3.x or later
Python 3.10 or later
Git
Local Kubernetes cluster: Although not strictly required for testing purposes, setting up a local Kubernetes cluster with tools like minikube is advisable. This enables validation of the Helm charts in a Kubernetes environment.
You can verify installations by running:
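```bash
# Each command should print a version that satisfies the prerequisites above
helm version
python3 --version
git --version
```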
Clone the repository:
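For example (substitute the URL of the sample-code repository linked in this post):

```bash
# <repository-url> is a placeholder for the repository linked above
git clone <repository-url>
cd sample-code
```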
This is the folder structure:
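Approximately, based on the files referenced in this walkthrough (the actual repository may contain additional files):

```
sample-code/
├── mars/                      # demo Helm chart
│   ├── Chart.yaml
│   ├── values.yaml
│   ├── templates/
│   │   ├── deployment.yaml
│   │   └── configmap.yaml
│   └── tests/                 # blueprint test suite templates
│       ├── deployment_test.yml
│       └── configmap_test.yml
├── values.json                # values for the template placeholders
├── renderer.py                # renders the blueprints with Jinja2
└── run_tests.sh               # orchestrates the testing workflow
```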
Mars is the Helm chart we will use for the demo. The test suite templates are located in the mars/tests/ folder. We’ll focus on deployment_test.yml and configmap_test.yml to illustrate how these test suite templates work. Let’s take a look at the configmap_test.yml file:
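The published file may differ in detail; the sketch below only shows the general shape of a templated ConfigMap suite, with an illustrative subset of assertions.

```yaml
suite: test {{ CHART_NAME }} configmap
templates:
  - configmap.yaml
release:
  name: {{ RELEASE_NAME }}
tests:
  - it: should render a single ConfigMap
    asserts:
      - hasDocuments:
          count: 1
      - isKind:
          of: ConfigMap

  - it: should carry the release-prefixed name and non-empty data
    asserts:
      - equal:
          path: metadata.name
          value: {{ RELEASE_NAME }}-{{ CHART_NAME }}
      - isNotEmpty:
          path: data
```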
To customise the tests for a specific chart, we’ll need to modify the sample-code/values.json file. This file provides the actual values for the placeholders in the test templates. Let’s update this file with appropriate values for our Helm chart. You’ll notice that the placeholders are replaced when the tests are rendered.
With the values.json file configured, we’re ready to initiate the testing process. To do this, we’ll invoke the run_tests.sh script, which orchestrates the entire testing workflow as follows:
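```bash
./run_tests.sh mars values.json
```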
This command invokes the script with two arguments: the path to the chart we are testing (sample-code/mars) and the values.json file we just modified.
Behind the scenes, the script sets in motion a series of operations:
It calls renderer.py, a Python script that acts as the engine for our testing framework. The Python script performs several key tasks (a minimal sketch of such a renderer follows this list):
a. It generates a temporary directory named <chart-name>_rendered, which serves as a “sandbox” for our testing.
b. The script then duplicates the contents of the original chart into this new directory.
c. Next, it scans the test folder within mars_rendered for the templated YAML test suites.
d. For each discovered test file, renderer.py applies the Jinja2 templating engine, substituting the placeholders we discussed earlier with actual values from the values.json file.
e. The resulting fully formed test files are then saved back into the mars_rendered/tests directory.
With the tests now prepared, run_tests.sh invokes the Helm unit test command, directing it to evaluate the chart in the “sandbox” directory. As the tests execute, you’ll see real-time feedback in the terminal, highlighting any successes or failures.
Upon completion of the tests, the script makes a decision:
a. If all tests pass successfully, it tidies things up by removing the temporary mars_rendered directory.
b. However, if any tests fail, the script preserves the mars_rendered directory, allowing for inspection of the generated files and diagnosis of any issues.
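As promised above, here is a minimal sketch of what a renderer along these lines could look like. It is an illustration of steps a to e under the assumptions stated in the comments, not the repository’s actual renderer.py.

```python
#!/usr/bin/env python3
"""Illustrative blueprint renderer sketch (file layout and names are assumptions)."""
import json
import shutil
import sys
from pathlib import Path

from jinja2 import Template  # pip install jinja2


def render_tests(chart_dir: str, values_file: str) -> Path:
    chart = Path(chart_dir)
    # a. Create the "<chart-name>_rendered" sandbox directory
    sandbox = chart.parent / f"{chart.name}_rendered"
    if sandbox.exists():
        shutil.rmtree(sandbox)
    # b. Duplicate the original chart into the sandbox
    shutil.copytree(chart, sandbox)
    # Load the placeholder values (CHART_NAME, REPLICA_COUNT, ...)
    values = json.loads(Path(values_file).read_text())
    # c. and d. Render every templated test suite found in the sandbox tests folder
    #    (assumes test suites follow the *_test.yml naming convention)
    for test_file in (sandbox / "tests").glob("*_test.yml"):
        rendered = Template(test_file.read_text()).render(**values)
        # e. Write the fully formed test suite back in place
        test_file.write_text(rendered)
    return sandbox


if __name__ == "__main__":
    # Usage: python3 renderer.py <chart-dir> <values.json>
    print(render_tests(sys.argv[1], sys.argv[2]))
```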
In our run, out of 47 total tests across five test suites, 45 passed and 2 failed, with the entire process completing in about 146 milliseconds. Scrutinising the test suites in the mars_rendered directory allows you to compare the rendered test expectations against the actual chart output, helping pinpoint the exact source of the discrepancy.
To resolve the ConfigMap test failure, trace the issue by examining the rendered ConfigMap in mars_rendered/templates/configmap.yaml and comparing it with the test expectations in mars_rendered/tests/configmap_test.yaml. This investigation will lead you to the env section in the chart's values.yaml file, where you'll need to add the missing JOHN: "DOE" entry under the “plain” subsection to match the test's expectations. This adjustment ensures the ConfigMap test passes by providing all expected environment variables.
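For reference, the relevant excerpt of the chart’s values.yaml would end up looking something like this (surrounding keys omitted; only the structure described above is shown):

```yaml
env:
  plain:
    # existing entries stay as they are; the test additionally expects:
    JOHN: "DOE"
```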
The deployment test failed because it expected two deployment documents but found only one. This could indicate an issue with the chart's deployment configuration or a mismatch between the expected and actual number of deployments. To address it, we'll modify the values.json file to set DEPLOYMENT_COUNT to 1, aligning with the actual number of deployments in our chart. This change will resolve the deployment test failure.
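Only the changed key is shown below; the other keys in values.json stay as they were.

```json
{
  "DEPLOYMENT_COUNT": 1
}
```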
After making these changes, we re-run the test script (./run_tests.sh mars values.json) to verify the fixes. By aligning the test expectations with your chart's configuration, both the deployment and ConfigMap tests should pass successfully.
Conclusion
DevOps teams can attain substantial efficiency gains and enhanced reliability by embracing these blueprint Helm unit test templates. Complementing policy-based testing, they validate that Helm charts generate Kubernetes resources with the expected structures and properties, reinforcing compliance from both a security and an organisational perspective. The templates are easy to adapt to specific chart requirements, significantly reduce testing time through reuse, ensure consistency with standardised yet customisable structures and improve deployment reliability.