Deploy stateful workloads on Kubernetes using GitOps with Ondat and FluxCD

25/11/2022

As Kubernetes adoption continues to grow, enterprises are increasingly turning to GitOps to deploy their workloads. Deploying secure, cloud-native stateful applications requires a high level of performance across hybrid and multi-cloud environments. Using the scalable, highly performant storage provided by Ondat in combination with FluxCD, you can shift security left and accelerate software development. Using Weave GitOps Trusted Delivery to deploy Ondat, platform teams can easily provide consistency and reduce the time it takes to deploy Kubernetes clusters.

GitOps lets teams use git as the single source of truth for their Kubernetes manifests, preventing configuration drift, enabling easy rollbacks when issues arise, and empowering any engineer to make changes to production in a controlled, safe manner.

In this tutorial, I am going to use FluxCD to manage Kubernetes workloads and will focus on how we achieve this with that toolset, though the approach adapts easily to other solutions.

For more information on setting up FluxCD and its many capabilities, take a look at the following links:

We follow a repository structure similar to the one suggested in the final link above, so all examples will be based around this; a rough sketch is shown below.
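The layout might look roughly like the following. The `foundation` directory is referred to later in this post; the other names are illustrative rather than prescriptive, so adjust them to match your own conventions:

├── clusters/
│   └── production/   # Flux Kustomizations pointing at the directories below
├── foundation/       # cluster-wide services such as Ondat (Namespace, HelmRepository, HelmRelease)
└── apps/             # the stateful workloads themselves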

FluxCD supports Helm deployments out of the box. This is also the recommended approach for deploying Ondat, so we will be using this method.

Getting started with Ondat

The first step is to sign up to Ondat and create a cluster object in the portal:

Once completed, you will obtain a Helm command to install Ondat on your cluster. This is what we will base the next step upon.

First, we create a namespace, as we like to keep the Helm objects in the same place as Ondat itself:

---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    name: storageos
  name: storageos

As part of our `foundation` directory, we define the following `HelmRepository` object:

---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: ondat
  namespace: storageos
spec:
  interval: 1m
  url: https://ondat.github.io/charts

As well as the following `HelmRelease` object:

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ondat
  namespace: storageos
spec:
  releaseName: ondat
  chart:
    spec:
      chart: ondat
      version: 0.2.1
      sourceRef:
        kind: HelmRepository
        name: ondat
        namespace: storageos
  interval: 5m
  install:
    remediation:
      retries: 3
  valuesFrom:
    - kind: Secret
      name: ondat-secret
      valuesKey: client-secret
      targetPath: ondat-operator.cluster.portalManager.secret
  values:
    ondat-operator:
      cluster:
        portalManager:
          enabled: true
          clientId: clientID
          apiUrl: "https://portal-setup-7dy4neexbq-ew.a.run.app"
          tenantId: tenantID
    etcd-cluster-operator:
      cluster:
        replicas: 3
        storage: 6Gi
        resources:
          requests:
            cpu: 100m
            memory: 300Mi

The values section is where we can define any parameters we may need to change, as listed in the values.yaml file belonging to the Helm chart (https://github.com/ondat/charts/blob/main/charts/umbrella-charts/ondat/values.yaml).

Note that in the example above we have defined the value for `ondat-operator.cluster.portalManager.secret` as a reference to a Kubernetes secret. Kubernetes secrets are a vast topic in themselves, far too large to cover in this blog post, but it is highly recommended to store this information in a secure manner.
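For illustration only, here is a minimal sketch of a Secret that would satisfy the `valuesFrom` reference above; the placeholder value is ours, and in practice you would encrypt the secret (for example with SOPS or Sealed Secrets) rather than commit it to git in plain text:

---
apiVersion: v1
kind: Secret
metadata:
  name: ondat-secret
  namespace: storageos
type: Opaque
stringData:
  # client secret copied from the Helm command shown in the Ondat portal
  client-secret: "<your-client-secret>"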

You will need to copy the values for `tenantID` and `clientID` from the Helm command you see in your portal to ensure the portal manager is able to register itself correctly.

Also note that we have pinned the chart version: given the nature of what Ondat does, upgrades should be performed carefully, so we recommend pinning the version at this time.

Once this is applied, we can see the status of the HelmRelease using `flux get all -n storageos`.

NAME                    READY   MESSAGE                                                                                 REVISION                                                                SUSPENDED
helmrepository/ondat    True    Fetched revision: b2b49a5abead5f29bb7622a460c888d64fc8f351dcd5dac60eebd95e2e0a7122      b2b49a5abead5f29bb7622a460c888d64fc8f351dcd5dac60eebd95e2e0a7122        False

NAME                            READY   MESSAGE                                         REVISION        SUSPENDED
helmchart/storageos-ondat       True    Pulled 'ondat' chart with version '0.2.1'.      0.2.1           False

NAME                    READY   MESSAGE                                 REVISION        SUSPENDED
helmrelease/ondat       True    Release reconciliation succeeded        0.2.1           False

We can also perform an additional check to ensure we’re ready to use Ondat by running `kubectl get storageclass` and checking there is now an entry for `storageos`.

Finally, we need to apply a license to the newly provisioned cluster by returning to the portal and selecting the license we desire.

At this point we are ready to deploy our stateful workloads in a GitOps-controlled manner!
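As a quick illustration (the resource names and storage size here are placeholders of our own, not part of the Ondat setup), a PersistentVolumeClaim committed to the repository for such a workload could look like the following, consuming the `storageos` StorageClass created above:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data            # hypothetical name for illustration
  namespace: default
spec:
  storageClassName: storageos   # StorageClass created by the Ondat install
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi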

 
written by:
Lewis Edginton
A versatile engineer who has worked at a number of UK startups, including Starling Bank, Lewis Edginton has years of experience in cloud, software development, and delivery.
