Continuous Delivery for Kubernetes Applications

Introduction

This is part 3 of our three-part Kubernetes CI/CD series. In the first part, we learned at a high level about the overall CI/CD strategy. In the second part, we discussed the continuous integration workflow in detail. In this blog, we will go into detail on the Continuous Delivery pipeline for deploying your applications to Kubernetes.

While developing your CI/CD strategy, it is important to consider how you will monitor your application stack. A robust monitoring stack provides deep insight into the application stack and helps identify issues early on. MetricFire specializes in monitoring systems, and you can use our product with minimal configuration to gain in-depth insight into your environments. If you would like to learn more, please book a demo with us, or sign up for the free trial today.

Continuous Delivery is the ability to get changes of all types—including new features, configuration changes, bug fixes and experiments—into production, or into the hands of users, safely and quickly in a sustainable way.

Our goal is to make deployments—whether of a large-scale distributed system, a complex production environment, an embedded system, or an app—predictable, routine affairs that can be performed on demand. 

You’re doing continuous delivery when:

  • Your software is deployable throughout its lifecycle
  • Your team prioritizes keeping the software deployable over working on new features
  • Anybody can get fast, automated feedback on the production readiness of their systems any time somebody makes a change to them
  • You can perform push-button deployments of any version of the software to any environment on demand 

In the previous blog we established that the hand-off from Continuous Integration to Continuous Delivery takes place when the CI pipeline pushes the Docker image to the Docker repository and then pushes the updated Helm chart to a Helm repository or artifact store; a short sketch of that hand-off is shown below. Let’s go further today.
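A hedged example of what the tail end of a CI job might run for that hand-off; the registry, repository URL, chart name, and version below are placeholders, not values from this series:

# Push the application image built during CI (placeholder registry and tag)
docker push registry.example.com/myapp:1.2.3

# Package the chart and upload it to a Helm repository (Artifactory-style HTTP PUT)
helm package myapp/ --version 1.2.3 --app-version 1.2.3
curl -u "$HELM_REPO_USER:$HELM_REPO_TOKEN" \
  -T myapp-1.2.3.tgz \
  "https://artifactory.example.com/artifactory/helm-local/myapp-1.2.3.tgz"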

Some CD Tools

Some widely used CD tools are:

  1. Jenkins
  2. Spinnaker.io
  3. Harness.io
  4. Drone.io
  5. Weave Flux
  6. ArgoCD

You may have noticed that Jenkins can be used as both a Continuous Integration and a Continuous Delivery tool, primarily because of its rich feature set and flexibility.

Spinnaker and Harness follow a more traditional continuous delivery approach where we fetch the deployable artifact, bake it, and deploy it to the desired environment.

Weave Flux and ArgoCD use a GitOps-based approach where both the application source code and Helm charts are part of a Git repository and are continuously synced to a desired environment. We will learn more about the GitOps approach in a later post.

Artifact Management

The most important artifact for the Continuous Delivery pipeline is the Helm chart. It is extremely important that Helm charts are properly versioned and stored. Along with that, we should ensure that our pipeline has access to the artifact store; the artifacts can be fetched over HTTP using appropriate authorization (a short fetch sketch follows the list below). Some options for artifact management are:

  1. JFrog Artifactory: We can use this as both a Docker repository and a Helm repository. Additionally, Helm charts can be packaged as tar.gz files and stored here. Access to JFrog can be managed using a user/token, and Helm packages can be fetched over HTTPS.
  2. GCS Bucket: Google Cloud Storage is great for storing packaged Helm charts. Make sure to enable a bucket lifecycle/archive policy in order to reduce costs. Access to GCS buckets should be managed using service account key files.
  3. S3 Bucket: This is the object storage service provided by AWS. If your continuous delivery pipeline infrastructure runs in AWS, then S3 is a good choice for storing packaged Helm charts. It is highly recommended to use IAM roles to manage access to S3 buckets.
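As a hedged example, if the packaged chart lives in a GCS bucket, the pipeline's fetch step could look like this (the bucket name, key file path, chart name, and version are placeholders):

# Authenticate with the service account key file provisioned to the CD runner
gcloud auth activate-service-account --key-file=/secrets/cd-sa-key.json

# Download the packaged chart from the preconfigured bucket path
gsutil cp gs://my-helm-artifacts/charts/myapp-1.2.3.tgz .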

Deployment Strategies

As we previously learned, deployment strategies can be broadly classified as:

  1. Blue/Green Deployment
  2. Rolling Upgrade
  3. Canary Deployment 

Kubernetes natively supports only two rollout strategies, Rolling Update and Recreate:

With Rolling Update, we set the maximum number of replicas that can be unavailable (maxUnavailable) and the maximum number of extra replicas that can be created (maxSurge) during a new version rollout. For example, imagine the currently running deployment has 4 replicas, the maxUnavailable parameter is set to 1, and the maxSurge parameter is also set to 1. During the rollout, one replica of the currently running version is terminated while, at the same time, a new replica with the new version is created, and so on. This is a zero-downtime deployment method. It is important to understand that this method should only be used when the new and old versions of the application are compatible with each other (i.e., backward compatible). The configuration looks something like this:

replicas: 3  
strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0


In the case of Recreate, all the pods of the existing deployment are terminated first, and then new pods of the new version are created (a minimal configuration is shown after the list below). We should use this strategy when:

  • Your application can withstand a short amount of downtime.
  • Your application does not support having new and old versions of the code running at the same time.
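For completeness, a minimal sketch of the equivalent strategy block (mirroring the RollingUpdate snippet above) looks like this:

replicas: 3
strategy:
    type: Recreate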

Workflow

A sample CD pipeline looks something like the following:

  1. Fetch Artifact
    As soon as there is a hand-off from the CI system to the CD system, the CD system fetches the artifact. This artifact is a Helm package whose location is relayed in the payload delivered by the CI system. We also pre-configure the location of these artifacts in the CD system; that location could be a GCS bucket or JFrog Artifactory. In the previous blog we established a convention for naming and uploading the Helm package to a particular path, so we should ensure that the CD system has the fetch URL preconfigured. A sketch of this step is shown below.
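A minimal sketch of the fetch step, assuming the chart name and version are carried in the trigger payload and the Artifactory base URL is preconfigured (all names, URLs, and tokens here are placeholders):

# These values would normally be parsed from the CI trigger payload
CHART_NAME="myapp"
CHART_VERSION="1.2.3"

# Fetch the packaged chart over HTTPS using a preconfigured token
curl -fSL -H "Authorization: Bearer $HELM_REPO_TOKEN" \
  -o "${CHART_NAME}-${CHART_VERSION}.tgz" \
  "https://artifactory.example.com/artifactory/helm-local/${CHART_NAME}-${CHART_VERSION}.tgz"
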
  2. Bake Artifact
    This is one of the most crucial parts of the deployment pipeline. Helm charts are designed to accept override values during the deployment process. However, before the actual deployment occurs, we can bake the artifact and inspect the exact manifest that will be deployed to the cluster. This can be done using the helm template command. For example:

$ helm template -f myapp/values.yaml myapp/
---
# Source: myapp/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: RELEASE-NAME-myapp
  labels:
    helm.sh/chart: myapp-0.1.0
    app.kubernetes.io/name: myapp
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: myapp/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: RELEASE-NAME-myapp
  labels:
    helm.sh/chart: myapp-0.1.0
    app.kubernetes.io/name: myapp
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: myapp
    app.kubernetes.io/instance: RELEASE-NAME
---
# Source: myapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-myapp
  labels:
    helm.sh/chart: myapp-0.1.0
    app.kubernetes.io/name: myapp
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: myapp
      app.kubernetes.io/instance: RELEASE-NAME
  template:
    metadata:
      labels:
        app.kubernetes.io/name: myapp
        app.kubernetes.io/instance: RELEASE-NAME
    spec:
      serviceAccountName: RELEASE-NAME-myapp
      securityContext:
        {}
      containers:
        - name: myapp
          securityContext:
            {}
          image: "nginx:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}


The manifest above, with all the values overridden, is exactly what will be deployed to the cluster.
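In practice the bake step usually applies environment-specific overrides as well; as a hedged example (the override file and image tag are placeholders, and image.tag follows the default chart scaffold used above):

$ helm template myapp/ -f myapp/values-production.yaml --set image.tag=1.17.0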

  3. Choose Environment
    The decision to deploy to a particular environment is made based on the information relayed by the trigger payload. This payload is generally delivered by our CI system to the CD system. A Travis CI payload looks something like this:
     
{
  "id": 667130320,
  "number": "71",
  "config": {
    "language": "ruby",
    "os": [
      "linux"
    ],
    "dist": "xenial",
    "branches": {
      "only": [
        "master"
      ]
    },
    "jobs": {
      "include": [
        {
          "name": "build site",
          "language": "python",
          "python": "3.5.2",
          "env": [
            {
              "global": "PATH=$HOME/.local/user/bin:$PATH"
            }
          ],
          "cache": {
            "pip": true,
            "directories": [
              "vendor/bundle",
              "node_modules",
              "$TRAVIS_BUILD_DIR/tmp/.htmlproofer"
            ]
          },
          "addons": {
            "apt": {
              "packages": [
                "libxml2-utils"
              ]
            }
          },
          "install": [
            "rvm use 2.6.3 --install",
            "bundle install --deployment",
            "sudo apt-get install libcurl4-openssl-dev"
          ],
          "script": [
            "bundle exec rake test",
            "xmllint --noout _site/feed.build-env-updates.xml"
          ],
          "deploy": [
            {
              "provider": "heroku",
              "strategy": "api",
              "api_key": {
                "secure": "hylw2GIHMvZKOKX3uPSaLEzVrUGEA9mzGEA0s4zK37W9HJCTnvAcmgRCwOkRuC4L7R4Zshdh/CGORNnBBgh1xx5JGYwkdnqtjHuUQmWEXCusrIURu/iEBNSsZZEPK7zBuwqMHj2yRm64JfbTDJsku3xdoA5Z8XJG5AMJGKLFgUQ="
              },
              "app": "docs-travis-ci-com",
              "on": {
                "branch": [
                  "master"
                ]
              },
              "edge": true
            }
          ]
        },
        {
          "name": "update dpl v2 docs",
          "if": "type = cron || commit_message =~ /ci:dpl/",
          "language": "shell",
          "cache": false,
          "install": [
            "rvm use 2.5.3"
          ],
          "script": [
            "git clone https://github.com/travis-ci/dpl.git",
            "cd dpl",
            "gem build dpl.gemspec",
            "gem install dpl-*.gem",
            "cd ..",
            "rm -rf dpl",
            "bin/dpl"
          ],
          "deploy": [
            {
              "pull_request": true,
              "provider": "git_push",
              "token": {
                "secure": "YHuTjIGKpG0A8QJ4kmdLfOW1n+62uLakXv0KjCzWExl22qLSn2frip3j8JsaeMfndsmNZBUfGoONVHvDS+PHnkbRMYf21SjgctpVfHRYZQ3pulexOViEQ6azRgCBWuPO8A+vAyxvjlV4e3UDGnt2x/0X/Tdg9iVf/zzBGjM0YX0="
              },
              "branch": "auto-dpl-v2-update-docs",
              "edge": {
                "branch": "master"
              }
            }
          ]
        }
      ]
    },
    "notifications": {
      "slack": [
        {
          "rooms": [
            {
              "secure": "LPNgf0Ra6Vu6I7XuK7tcnyFWJg+becx1RfAR35feWK81sru8TyuldQIt7uAKMA8tqFTP8j1Af7iz7UDokbCCfDNCX1GxdAWgXs+UKpwhO89nsidHAsCkW2lWSEM0E3xtOJDyNFoauiHxBKGKUsApJTnf39H+EW9tWrqN5W2sZg8="
            }
          ],
          "on_success": "never"
        }
      ],
      "webhooks": [
        {
          "urls": [
            "https://docs.travis-ci.com/update_webhook_payload_doc"
          ]
        }
      ]
    }
  },
  "type": "pull_request",
  "state": "passed",
  "status": 0,
  "result": 0,
  "status_message": "Passed",
  "result_message": "Passed",
  "started_at": "2020-03-26T07:26:53Z",
  "finished_at": "2020-03-26T07:29:42Z",
  "duration": 169,
  "build_url": "https://travis-ci.org/meunice/docs-travis-ci-com/builds/667130320",
  "commit_id": 203599355,
  "commit": "da90df05b6ac67c324c869edad4877770df7454d",
  "base_commit": "a3aca76e9ac40621d0f12b9200b0aa08adbab88d",
  "head_commit": "a8aa97bf51f2a30f48c26bc6133cf52d06f65dfc",
  "branch": "master",
  "message": "added link to bitbucket (#2735)",
  "compare_url": "https://github.com/meunice/docs-travis-ci-com/pull/34",
  "committed_at": "2020-03-26T06:51:01Z",
  "author_name": "Petra",
  "author_email": "52408528+Pezi777@users.noreply.github.com",
  "committer_name": "GitHub",
  "committer_email": "noreply@github.com",
  "pull_request": true,
  "pull_request_number": 34,
  "pull_request_title": "[pull] master from travis-ci:master",
  "tag": null,
  "repository": {
    "id": 26544309,
    "name": "docs-travis-ci-com",
    "owner_name": "meunice",
    "url": null
  },
  "matrix": [
    {
      "id": 667130321,
      "repository_id": 26544309,
      "parent_id": 667130320,
      "number": "71.1",
      "state": "passed",
      "config": {
        "os": "linux",
        "language": "python",
        "dist": "xenial",
        "branches": {
          "only": [
            "master"
          ]
        },
        "name": "build site",
        "python": "3.5.2",
        "env": [

        ],
        "cache": {
          "pip": true,
          "directories": [
            "vendor/bundle",
            "node_modules",
            "$TRAVIS_BUILD_DIR/tmp/.htmlproofer"
          ]
        },
        "addons": {
          "apt": {
            "packages": [
              "libxml2-utils"
            ]
          },
          "deploy": [
            {
              "provider": "heroku",
              "strategy": "api",
              "api_key": {
                "secure": "hylw2GIHMvZKOKX3uPSaLEzVrUGEA9mzGEA0s4zK37W9HJCTnvAcmgRCwOkRuC4L7R4Zshdh/CGORNnBBgh1xx5JGYwkdnqtjHuUQmWEXCusrIURu/iEBNSsZZEPK7zBuwqMHj2yRm64JfbTDJsku3xdoA5Z8XJG5AMJGKLFgUQ="
              },
              "app": "docs-travis-ci-com",
              "on": {
                "branch": [
                  "master"
                ]
              },
              "edge": true
            }
          ]
        },
        "install": [
          "rvm use 2.6.3 --install",
          "bundle install --deployment",
          "sudo apt-get install libcurl4-openssl-dev"
        ],
        "script": [
          "bundle exec rake test",
          "xmllint --noout _site/feed.build-env-updates.xml"
        ]
      },
      "status": 0,
      "result": 0,
      "commit": "da90df05b6ac67c324c869edad4877770df7454d",
      "branch": "master",
      "message": "added link to bitbucket (#2735)",
      "compare_url": "https://github.com/meunice/docs-travis-ci-com/pull/34",
      "started_at": "2020-03-26T07:26:53Z",
      "finished_at": "2020-03-26T07:29:42Z",
      "committed_at": "2020-03-26T06:51:01Z",
      "author_name": "Petra",
      "author_email": "52408528+Pezi777@users.noreply.github.com",
      "committer_name": "GitHub",
      "committer_email": "noreply@github.com",
      "allow_failure": null
    }
  ]
}


The payload contains crucial information which is consumed by our CD system to deploy to different environments. For example, "branch": "master" indicates that the master branch of the source code repo was built. The rule of thumb is that the master branch should always be deployable, and whenever master is built, the artifact is deployed to the production environment. Similarly, if the branch is staging, we deploy to the staging environment. If we trigger CI builds for tags, that information is also relayed by this payload and can be processed by the CD system. A sketch of this branch-to-environment mapping is shown below.
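As a rough sketch (the branch field comes from the payload above; the environment names and the default are placeholders), the environment selection could be as simple as:

# Extract the branch from the trigger payload delivered by the CI system
BRANCH=$(jq -r '.branch' payload.json)

# Map branches to target environments
case "$BRANCH" in
  master)  TARGET_ENV="production" ;;
  staging) TARGET_ENV="staging" ;;
  *)       TARGET_ENV="dev" ;;
esac
echo "Deploying to ${TARGET_ENV}"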
 

  4. Deploy and Notify
    Once the artifact has been fetched and baked, and we have decided which environment to deploy to, the actual deployment is triggered. Depending upon the deployment strategy (as discussed above), new pods are created and old pods are deleted. A sketch of this step is shown below.
    An important aspect of the deployment pipeline is notifications. Using the same payload we discussed above, we can get committer_name and committer_email to deliver notifications. Additionally, every CD tool has integrations with Slack and HipChat which can be used to deliver notifications to a broader audience.
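A minimal sketch of this final step, assuming Helm performs the rollout and a Slack incoming webhook delivers the notification (the release name, chart file, namespace, values file, and webhook URL are placeholders):

# Roll out the baked chart to the chosen environment and wait for pods to become ready
helm upgrade --install myapp ./myapp-1.2.3.tgz \
  --namespace "$TARGET_ENV" \
  -f "values-${TARGET_ENV}.yaml" \
  --wait

# Notify the team, picking up committer details from the trigger payload
COMMITTER=$(jq -r '.committer_name' payload.json)
curl -X POST -H 'Content-Type: application/json' \
  -d "{\"text\": \"Deployed myapp to ${TARGET_ENV} (commit by ${COMMITTER})\"}" \
  "$SLACK_WEBHOOK_URL"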

Conclusion

In this blog, we went over the Continuous Delivery system and its various stages in detail. This concludes our three-part Kubernetes CI/CD series. We tried to go into as much detail as possible and provide production-ready configuration. However, it is important to understand that every environment and application stack is different, so your CI/CD pipeline should be tailored accordingly.

If you need help designing a custom CI/CD pipeline, feel free to reach out to me through LinkedIn. Additionally, MetricFire can help you monitor your applications across various environments and different stages of the CI/CD process. Monitoring is essential for any application stack, and you can get started with MetricFire’s free trial. Robust monitoring and a well-designed CI/CD system will not only help you meet SLAs for your application but also ensure a sound sleep for the operations and development teams. If you would like to learn more, please book a demo with us.
