When Helm meets Azure DevOps and JFrog

Yves Callaert
5 min read · Nov 6, 2021

And together they began a fantastic story in the CI/CD world. If only …

After spending a couple of hours integrating these technologies, it was clear that there was some missing information on how this can be achieved. So, to alleviate some of the headache for others, I present to you the following solution: Helm in Azure DevOps pipelines, storing the Helm artifacts in an on-site JFrog instance.

The beginning

Before we start, these are the ground rules of the game:

  • Azure Pipelines running on Azure DevOps (not the on-premises version, TFS)
  • JFrog on-prem
  • Pipelines with limited internet access

I put in these requirements because they made the solution a bit more difficult, and they help explain why certain choices were made.

So with these rules in play, let’s get started. First we will construct a Docker image that holds some packages we need in our pipeline. Important to note is that we have already uploaded the Helm binary to our Artifactory; the official release tarball can be downloaded from the Helm project (e.g. https://get.helm.sh/helm-v3.7.0-linux-amd64.tar.gz).
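If you still need to mirror that binary into Artifactory yourself, a minimal sketch with the JFrog CLI could look like this (the target repository artifacts-internal and the $JFROG_TOKEN variable are assumptions):

# Download the official release, then push it to a generic Artifactory repo
curl -O https://get.helm.sh/helm-v3.7.0-linux-amd64.tar.gz
jfrog rt upload helm-v3.7.0-linux-amd64.tar.gz artifacts-internal/ \
  --url https://mypersonalartifactory.com/artifactory --user myuser --password $JFROG_TOKEN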

FROM alpine:3.11

# Base tooling needed by the pipeline steps
RUN echo "===> Adding packages..." && \
    apk --update --no-cache add protobuf zip curl python3-dev py3-pip dos2unix file gettext && \
    rm -rf /var/cache/apk/*

# Resolve Python packages through the internal Artifactory PyPI mirror
ENV PIP_INDEX_URL https://mypersonalartifactory.com/artifactory/api/pypi/python/simple

RUN pip3 install -U pip setuptools wheel pyyaml

# Install the Helm binary that was previously uploaded to Artifactory
RUN echo "===> Installing Helm ..." && \
    BIN="/usr/local/bin" && \
    BINARY_NAME="helm" && \
    curl -O "https://mypersonalartifactory.com/helm-v3.7.0-linux-amd64.tar.gz" && \
    tar -zxvf helm-v3.7.0-linux-amd64.tar.gz && \
    mv linux-amd64/helm "${BIN}/${BINARY_NAME}" && \
    chmod +x "${BIN}/${BINARY_NAME}"

We will build this Docker image and store the result in our JFrog registry as base-helm:1.0.0.
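How you build and push is up to you; as a sketch, the same could be automated with the built-in Docker@2 task, given a Docker registry service connection to Artifactory (the connection name jfrog-docker and the Dockerfile path below are hypothetical):

- task: Docker@2
  displayName: "Build and push the base-helm image"
  inputs:
    command: buildAndPush
    containerRegistry: jfrog-docker # hypothetical service connection to mypersonalartifactory.com
    repository: docker/base-helm
    Dockerfile: src/docker/Dockerfile # hypothetical path to the Dockerfile above
    tags: |
      1.0.0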

The Azure Pipeline

Since I am a strong believer that CI/CD pipelines should be code (so no clicking in the GUI 😏), the pipeline is constructed as a YAML file so that it can live as part of the solution in your repo.

As a first step in our pipeline we define the image, which we built previously. This is the container we will be using on our hosts, in the pool linux-containers. If you are completely new to pools, have a look at the Azure Pool Documentation.

- job: BaffleHelmPrepare
  displayName: baffle helm preparation steps
  pool:
    name: linux-containers
    demands:
      - docker
      - linux-containers
  container: mypersonalartifactory.com/docker/base-helm:1.0.0
  steps:
    - template: install-deps.yml

So we have defined that we want to run our pipeline inside the base-helm image. Next we need to install some dependencies, which we have defined in the template install-deps.yml.

steps:
  - task: KubectlInstaller@0
    displayName: 'Install Kubectl 1.19.9'
    inputs:
      kubectlVersion: 1.19.9
    enabled: true
  - bash: |
      echo "Start download Jfrog cli from artifactory"
      curl -O -u myuser:$MAVEN_JFROG_TOKEN -X GET https://mypersonalartifactory.com/artifactory/artifacts-internal/jfrog
      mkdir -p $(Agent.ToolsDirectory)/_jfrog/current/
      sudo mv jfrog $(Agent.ToolsDirectory)/_jfrog/current/
      sudo chmod -R 755 $(Agent.ToolsDirectory)/_jfrog/current/jfrog
    displayName: Jfrog CLI install
    env:
      MAVEN_JFROG_TOKEN: $(jfrog-pwd-ci)
  - bash: |
      helm repo add helm-local https://mypersonalartifactory.com/artifactory/helm-local
      helm repo update
    failOnStderr: true
    displayName: "Helm Repo Assignment"
    env:
      MAVEN_JFROG_TOKEN: $(jfrog-pwd-ci)
  - bash: |
      curl -O -u myuser:$MAVEN_JFROG_TOKEN -X GET https://mypersonalartifactory.com/artifactory/artifacts-internal/helm-diff-linux.tgz
      mkdir -p /home/<usr>/.local/share/helm/plugins/helm-diff
      tar -zxvf helm-diff-linux.tgz -C /home/<usr>/.local/share/helm/plugins/helm-diff --strip-components=1
    failOnStderr: false
    displayName: "Installation helm Diff"
    env:
      MAVEN_JFROG_TOKEN: $(jfrog-pwd-ci)

So let’s break this down a bit. The first task in this template is easy: install kubectl at a specific version. Next we install the JFrog CLI; however, this is a bit more complicated than just adding it to a bin folder. It needs to be in the exact location used in the script, which was found in an article on the JFrog website. Next we add the Helm repo, which points to our JFrog instance, where our charts are stored. Lastly, we add the plugin “helm diff” as part of our solution.
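To confirm that everything landed where it should, a small sanity-check step could be appended to install-deps.yml; this is purely illustrative:

- bash: |
    # Verify each tool installed above is actually on the PATH and usable
    kubectl version --client
    jfrog --version
    helm version
    helm plugin list # should list the "diff" plugin
  displayName: "Verify tooling"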

To continue in our original pipeline, we have a couple of tasks we want to execute. The full pipeline is listed in the code block, but we will highlight a few tasks.

The “helm template” part has been added as a sanity test. We want to make sure that the supplied values file for the specific environment actually works.

- task: CmdLine@2
  displayName: "Helm: List repo content"
  inputs:
    script: |
      helm search repo demo
- task: CmdLine@2
  displayName: "Helm: Template output"
  inputs:
    script: |
      helm template helm-local/demo -f src/helm/demo-api/values-${{ parameters.environment }}-template.yml
- task: Kubernetes@1
  displayName: "Kubectl login"
  inputs:
    connectionType: 'Kubernetes Service Connection'
    kubernetesServiceEndpoint: 'euwest-aks-dev'
    namespace: 'demo'
    command: 'login'
- task: CmdLine@2
  displayName: "Helm Diff"
  inputs:
    script: |
      helm diff upgrade demo-api-${{ parameters.environment }} helm-local/demo --values src/helm/demo-api/values-${{ parameters.environment }}.yml --allow-unreleased -n demo

The last part is optional, but “helm diff” lets us see the changes between our release and the one currently deployed. This is a sanity check, but it gives the person doing the release an additional reflection point (“Is this really what I want to release?”). In order to run the helm diff you need to be logged in to a cluster, which in our case is an AKS cluster authenticated through a service connection.

The Reflection Point

As previously stated, the helm diff allows us to see the differences, but there is little merit to it if the pipeline just continues. In order to have this reflection point in the pipeline, we added a manual approval step.

- stage: Baffle_manual_validate
  dependsOn: Baffle_helm_prep
  jobs:
    - job: ManualValidation
      displayName: Validate if you want to continue
      pool: server # ManualValidation is an agentless task, so it runs on the server pool
      steps:
        - task: ManualValidation@0
          condition: in('${{ parameters.environment }}', 'tst', 'prod')
          timeoutInMinutes: 1440 # task times out in 1 day
          inputs:
            notifyUsers: |
              myuser@myawesomecompany.com
            instructions: 'Please validate the helm diff.'
            onTimeout: 'reject'

This stage will wait for up to one day for someone to manually approve the task. The goal is not to blindly approve, but to have the person doing the release check the diff output. If it looks good, press the approve button.

Release Step

Finally, we arrive at the part of our pipeline that does the actual release. First we install the dependencies we already used in the first part of the pipeline, then we download the converted Helm values file and deploy it to our Kubernetes cluster.

steps:
  - template: install-deps.yml
  - task: DownloadPipelineArtifact@2
    inputs:
      artifact: helm-template-file
      path: src/helm/demo-api/
  - task: HelmDeploy@0
    displayName: "Deploy helm chart for environment ${{ parameters.environment }}"
    inputs:
      connectionType: 'Kubernetes Service Connection'
      kubernetesServiceConnection: 'euwest-aks-dev'
      namespace: 'demo'
      command: 'upgrade'
      chartName: 'helm-local/demo'
      valueFile: src/helm/demo-api/values-${{ parameters.environment }}.yml
      releaseName: demo-api-${{ parameters.environment }}
      version: ${{ parameters.chart_version }}
      install: true
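
For context, these steps live in a stage that only runs once the manual validation stage has succeeded; a minimal sketch of that wiring, with assumed stage and job names, could look like this:

- stage: Baffle_release
  dependsOn: Baffle_manual_validate
  condition: succeeded('Baffle_manual_validate')
  jobs:
    - job: BaffleHelmDeploy
      displayName: deploy the helm chart
      pool:
        name: linux-containers
      container: mypersonalartifactory.com/docker/base-helm:1.0.0
      steps:
        - template: install-deps.yml
        # ...followed by the DownloadPipelineArtifact@2 and HelmDeploy@0 steps above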

If all is well, you should now have a working pipeline with extended Helm functions and a blueprint for adding additional Helm plugins.
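For example, a plugin such as helm-push could be installed with the same recipe as helm diff: upload its release tarball to Artifactory, then extract it into the plugins directory. The file name helm-push-linux.tgz below is an assumption:

- bash: |
    # Same pattern as the helm-diff step in install-deps.yml
    curl -O -u myuser:$MAVEN_JFROG_TOKEN -X GET https://mypersonalartifactory.com/artifactory/artifacts-internal/helm-push-linux.tgz
    mkdir -p /home/<usr>/.local/share/helm/plugins/helm-push
    tar -zxvf helm-push-linux.tgz -C /home/<usr>/.local/share/helm/plugins/helm-push --strip-components=1
  displayName: "Installation helm Push"
  env:
    MAVEN_JFROG_TOKEN: $(jfrog-pwd-ci)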

Conclusion

This project hopefully saves others a bit of time when incorporating the same tech stack in Azure DevOps. As usual, the entire solution can be found on GitHub, with documentation in the README.md and in the code. I hope you enjoyed this short story and learned a bit along the way.

