3 Reasons to Shift Left your Data with Kubernetes and Ondat

30/11/2021
Nic Vermandé

There was a time when improving part of a codebase was only possible when releasing a new version of the so-called monolith. Iterating over short development cycles didn’t exist, and clouds were just known to be the result of a chemical reaction involving water and a few other elements. If you’re old enough, you may also remember that IT was composed of many silos during this era. They undoubtedly still exist, but the boundaries have become blurred. Today, the network team does much more than assigning VLANs and creating ACLs on routers. The same is true for the Virtualization and Storage teams, who deal with more than spinning up VMs, creating LUNs, or deploying new vSAN disk groups for the hyper-converged aficionados. These teams now have to contend with hybrid and often complex environments, be proficient with Infrastructure-as-Code and automation paradigms, and sometimes even contribute to CI/CD efforts.

Talking about virtualization, I remember a detail that struck me during VMworld 2010. That was not only the year I defended and passed my first VCDX (time flies, I know), but also the year I started to wonder what Cloud really meant. Paul Maritz got on stage and talked a lot about Cloud and on-demand services. My problem was that I couldn’t tie these concepts to any of the VMware products, as none had these capabilities. vCloud Director 1.0 had been released that year, but directly consuming vCloud Director as an end user or a developer was very hard, and not the type of “Cloud” experience I was expecting. It was more infrastructure-centric than an end-to-end experience covering application and platform dependencies, but still a good first step. Then I realized that Cloud was not necessarily about technology. It was a direction set to optimize the delivery of IT resources to the people who needed them. In my experience with private Enterprise clouds, nothing has ever been 100% on-demand, and that is still true today. Whether users have to create a ticket or wait for various approvals, the experience is rarely instantaneous. With Public Cloud, it’s different. The longest wait is probably when you try to remember where you last saw that damn credit card… in your pocket or the car?

Fast forward to the 2020s

In today's world, everyone expects services to be:

  • Available on-demand, anywhere around the world, at your fingertips
  • Provisioned and scaled in minutes
  • Accessible through programmatic interfaces
  • Easy to consume

But the story is slightly different if you need to build an application designed to leverage CSP-specific services (notice that I don’t use the term cloud-native, which has a different meaning) or refactor your Enterprise application for the Public Cloud. You need glue for that. A lot of glue. This is where the on-demand part is challenged once more. Deploying API Gateways, using application packaging and delivery tools, managing Service Discovery, and adding extra features such as circuit breakers and mutual TLS may not be straightforward, especially if your boss tells you that the company needs a multi-cloud strategy. So how can we provide a consistent way to deliver on-demand infrastructure and software resources during the application development lifecycle? Shift left is a key component of the answer, but…

What is Shift Left?

Shift left was initially a software development paradigm: the idea is to identify and resolve software issues as early as possible in the development lifecycle. Studies show that bugs discovered late, during integration tests or QA tasks, cost an order of magnitude more to fix than those caught during early development. Shift-left testing is typically achieved by using static code analysis tools, increasing code coverage, deploying more automated tests in CI pipelines, and maintaining a closed loop between developers, DevOps engineers, and QA teams.
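
To make this concrete, here is a minimal sketch of such a pipeline, using GitHub Actions syntax purely as an illustration (the workflow name, the Go toolchain, and the repository layout are my assumptions, not something prescribed by shift left itself):

```yaml
# .github/workflows/shift-left.yaml (hypothetical)
# Run static analysis and unit tests on every push, so that
# defects surface before integration or QA.
name: shift-left-tests
on: [push, pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
        with:
          go-version: "1.17"
      - name: Static analysis
        run: go vet ./...
      - name: Unit tests with coverage
        run: go test -cover ./...
```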

Modern applications are distributed over the network and often rely on micro-services patterns. Compared to monoliths, they are owned by smaller groups of software engineers, and their building blocks are decentralized. Individual teams manage their own data, middleware, and databases, and every micro-service has distinct authentication, observability, resilience, performance, network, and storage requirements. These requirements must be satisfied by deploying resources (ideally) on-demand, as the application needs them. This is why the “shift left” paradigm has now broadened into the infrastructure space.

Another way to detect issues early in the development process is to provide a consistent environment across the various development phases. If developers can’t test the application they’re developing in an environment similar to the target platform, chances are that non-functional requirements can’t be verified during the development phase. Performance, resilience, and security become theoretical goals that can’t be guaranteed until very late in the cycle. Therefore, it is paramount to shift target infrastructure components left to minimize deviation in these non-functional metrics.

In a nutshell, shift left defines the ability to include production infrastructure components in the CI process. Automation, elasticity, and immutable infrastructure are key to realizing this objective. But let’s not forget that security, compliance, authentication, and authorization are also essential components that need to be considered.

Shift Left with Kubernetes

With the advent of Multi-Cloud architectures, repeatable patterns are becoming the cornerstone for building reliable and scalable applications and infrastructures. HashiCorp introduced Terraform as a standardized framework to deploy and manage the lifecycle of resources on any computing platform. Developers and cloud engineers can leverage a single language to manage application and infrastructure resources, whether on-premises or in Public Clouds. In the same way, Tekton and Argo Workflows are Continuous Integration (CI) tools native to Kubernetes that can be used on any underlying platform. They provide a common framework to deploy automated workflows, operating as standardized Kube-native CI processes. The automation is self-contained within the cluster: the cluster can live on any computing platform, but the programmed workflows stay identical. The integration points may vary when the CI needs to interact with cloud-specific APIs, but the foundations for a standard Cloud Operating System have been laid.
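
As a minimal sketch of what “Kube-native CI” looks like (the Task name, container image, and workspace layout are illustrative assumptions), a Tekton Task is just another Kubernetes resource applied to the cluster:

```yaml
# Hypothetical Tekton Task: the CI step is declared as a
# Kubernetes resource and runs as a Pod inside the cluster.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-unit-tests            # illustrative name
spec:
  workspaces:
    - name: source                # source code, e.g. fetched by a git-clone Task
  steps:
    - name: test
      image: golang:1.17          # assumed toolchain image
      workingDir: $(workspaces.source.path)
      script: |
        go test ./...
```

Because the Task is a declarative object, the same definition runs unchanged whether the cluster lives on your laptop, on-premises, or in a Public Cloud.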

Kubernetes already provides most of the application infrastructure dependencies as software plugins. As its primary role is to orchestrate containers, compute resource management is a given. In addition, Kubernetes provides a standard interface to consume the following resources:

  • Network: The Container Network Interface (CNI) allows you to easily connect to the existing network infrastructure and exchange packets while keeping the main configuration in Kubernetes. As an add-on, you can also manage security rules using network policies (a minimal NetworkPolicy sketch follows this list). It essentially turns your physical network into a set of dumb pipes while providing a full-fledged SDN solution on top that takes advantage of the Kubernetes ecosystem around instrumentation and observability.
  • Storage: The Container Storage Interface (CSI) defines primitives that storage vendors can implement to enable persistent and dynamic storage provisioning within Kubernetes. It is a must-have for anyone looking to run Stateful Applications in Kubernetes, as the implementation usually provides additional Enterprise-ready features such as replication and volume failover. As with networking, this effectively enables software-defined storage (SDS) in the cluster.
  • Application Configuration: Many patterns can be used to configure applications running in Kubernetes in a consistent and repeatable way. Options include environment variables, ConfigMaps, Secrets, init containers, etc. They are all first-class citizens in Kubernetes.
  • Service: Service discovery is one of the critical out-of-the-box benefits of Kubernetes. In a micro-services architecture, you want to avoid hardcoded and static IPs to reach upstream services. Kubernetes solves this problem by making every Pod aware of the available services.
  • Security: Containers involve multiple layers of abstraction, and it can be challenging to follow security best practices and keep them constantly up-to-date. Kubernetes is not secure by default, but it provides primitives such as RBAC, admission controllers, webhooks, audit logs, and seccomp profiles to reduce the cluster attack surface and implement additional controls on container images and user permissions.
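
To make the Network item above concrete, here is a minimal, hypothetical NetworkPolicy (all names are illustrative): only Pods labelled app=frontend may reach the API Pods on port 8080, and the rule lives in Kubernetes rather than on a router:

```yaml
# Hypothetical NetworkPolicy: once a policy selects the API Pods,
# only the ingress traffic allowed below is permitted.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api     # illustrative name
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```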

Of course, the beauty of these Kubernetes functions is that all configuration elements are expressed in a declarative format. Various tools in the Kubernetes ecosystem can streamline these resources by building configuration templates and adding specific customization. Kustomize and Helm are good examples of these capabilities, which enable the codification of infrastructure requirements within Kubernetes. Shifting left becomes a matter of integrating these components into the CI pipeline. By adding resource requirements as additional steps in the pipeline, developers can build and test their software very early in the development process and deploy the relevant platform resources in a self-service fashion, without manual intervention.
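
For example, a minimal Kustomize overlay might look like this (the directory layout and the patch file are illustrative assumptions); a CI step can then render and apply it with kubectl apply -k overlays/staging:

```yaml
# overlays/staging/kustomization.yaml (hypothetical layout)
# Reuses the base manifests and applies staging-specific
# customization, such as a name prefix and a replica patch.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                # shared base manifests
namePrefix: staging-
patchesStrategicMerge:
  - replica-patch.yaml        # e.g. overrides spec.replicas
```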

Another benefit of Kubernetes is its ability to run anywhere. You can use Kind or Minikube on your laptop, while your staging and production clusters may be running on-premises or in multiple Public Clouds. This is a critical feature for shifting left continuous testing. However, it is hard to reproduce all the Kubernetes features required by a fully functioning application during development, and smoke tests are only possible once the application has been deployed in the cluster. But if the development environment is already running on Kubernetes, it becomes easier to iterate automated tests during the development phase. Ondat takes this approach one step further by facilitating the integration of stateful services, databases, or message queuing applications in CI pipelines. By leveraging the CSI and other native Kubernetes concepts such as labels, defining persistent data requirements such as encryption or replication becomes trivial. Regression and even integration tests can now be run at any point in the software development lifecycle (SDLC).
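
As a sketch of what this looks like (the label keys follow Ondat’s documented storageos.com conventions, but treat the exact names and the StorageClass as assumptions to verify against your version of the product), persistent data requirements are declared directly on the PersistentVolumeClaim:

```yaml
# Hypothetical PVC: Ondat feature labels request a replica and
# encryption for the volume, in the same declarative manifest
# the CI pipeline already applies.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-db-data                   # illustrative name
  labels:
    storageos.com/replicas: "1"        # keep one synchronous replica
    storageos.com/encryption: "true"   # encrypt data at rest
spec:
  storageClassName: storageos          # assumes the Ondat StorageClass name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```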

The last benefit of shifting left with Kubernetes is the ability to define any infrastructure requirement as versioned data, which helps shift left compliance. Collaboration between teams is more straightforward, configuration history is maintained by default, and compliance enforcement can be driven by policy-as-code. So you may ask yourself: what is the difference between Infrastructure-as-Code with Kubernetes and a tool like Terraform? Well, although Terraform has providers for multiple managed Kubernetes distributions, it is not an application runtime; it is a tool to manage composable infrastructure. Infrastructure-as-Code with Kubernetes means that:

  • The infrastructure AND the application requirements are merged in a single manifest or set of manifests that can be deployed directly in Kubernetes (see the sketch after this list).
  • The infrastructure and the application requirements are stored in a version-controlled repository and managed by the platform engineering team. This is an important property for enabling shift-left paradigms for both the application code and its environment.
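
Here is a minimal sketch of that idea (all names and the container image are illustrative): the application and its storage requirement ship as one set of manifests that Kubernetes deploys together:

```yaml
# Hypothetical example: the Deployment and the volume it needs
# live in the same file and are applied as a single unit.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api                     # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: api
          image: registry.example.com/orders-api:1.0   # assumed image
          volumeMounts:
            - name: data
              mountPath: /var/lib/orders
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: orders-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```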

Now that you have a better understanding of the different moving parts, here are the 3 reasons why you should shift left your data with Kubernetes.

1. Reduce Costs

I’ve previously mentioned that the cost of fixing code issues increases exponentially the later they are found in the SDLC. It is also essential to include part of the backend, namely enterprise databases or middleware components, while shifting left continuous testing. But adding Stateful Applications and data sets into the picture has an impact on operational expenditures. Cloning data sets typically involves creating new persistent cloud disks and volumes, and performance may not be guaranteed. Not to mention that if you need reserved IOPS, the bill will increase dramatically. The good news is that Ondat provides multiple options that can help reduce these costs:

  • You can deploy cloud disks (think Google Persistent Disk, AWS EBS, or Azure Managed Disks) and aggregate their performance and size to match your specific requirements. Ondat creates multiple blob files per block device and writes to them in parallel, increasing scalability and performance.
  • Or you can use the local ephemeral storage of the Kubernetes nodes (e.g., local NVMe drives) for increased performance at a fraction of the cost. Ondat will make sure that all Kubernetes persistent volumes are replicated across nodes in the cluster and spread across availability zones, so that you effectively don’t lose data in case of a node failure… that would be a bummer! (A minimal StorageClass sketch follows this list.)
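
As a hedged sketch of the second option (the class name is illustrative, and the parameter keys follow Ondat’s documented conventions, so verify them against the documentation for your version), a replicated StorageClass lets every volume survive the loss of a node even when it is backed by local drives:

```yaml
# Hypothetical StorageClass: volumes provisioned from it are
# replicated across nodes, so local NVMe can be used without
# risking data loss when a node fails.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ondat-replicated             # illustrative name
provisioner: csi.storageos.com
parameters:
  csi.storage.k8s.io/fstype: ext4
  storageos.com/replicas: "2"        # two synchronous replicas per volume
```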

2. Accelerate Software Delivery

There are 2 dimensions to this aspect. First, detecting and fixing code issues early in the development process is simpler because there are fewer interactions between modules, packages, and libraries, which reduces the “domino” effect. As developers build layers on top of their code, an issue detected late may mean many more lines to change. At scale, this can substantially stretch the release deadline.

Also, shifting the backend databases left in Kubernetes allows developers to broaden the scope of issues that can be detected early without impeding automation capabilities. These issues no longer need to be dealt with later during integration tests, which globally accelerates development iterations. In addition, Ondat provides a consistent way to expose production-grade storage features regardless of the development lifecycle stage, as long as the target testing environment is Kubernetes. It doesn’t matter whether the CI runs locally on the developer’s laptop or in a 100-node Kubernetes cluster in a Datacenter or the Public Cloud: all features are configured in the same way and can be changed on the fly, declaratively or imperatively. Developers can thus run the tests designed for the production environment much earlier, which further accelerates software delivery.

3. Better Code Quality

There is a reason why test coverage is one of the most crucial areas in software development: more tests mean fewer bugs, which increases code quality. By adopting the principles described in this article, you will inevitably increase the number of (automated) tests performed and the quality of the application code. The additional capability that Ondat brings to the picture is “la cerise sur le gâteau” (the cherry on the cake), as we say in French. The ability to mirror your production data (or part of it) as StatefulSets in Kubernetes and to programmatically change persistent volume features on the fly makes new testing patterns possible. These elements contribute to better test coverage and ultimately lead to better code quality.
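
As an illustrative sketch (the names, image, and sizes are my assumptions), a StatefulSet with a volumeClaimTemplate is enough to give each replica of a test database its own persistent volume, whose Ondat features can then be adjusted programmatically by updating labels:

```yaml
# Hypothetical StatefulSet: each replica gets its own PVC from
# the volumeClaimTemplate; a matching headless Service named
# "test-postgres" is assumed to exist.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-postgres                # illustrative name
spec:
  serviceName: test-postgres
  replicas: 1
  selector:
    matchLabels:
      app: test-postgres
  template:
    metadata:
      labels:
        app: test-postgres
    spec:
      containers:
        - name: postgres
          image: postgres:14
          env:
            - name: POSTGRES_PASSWORD
              value: example         # demo only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data                   # Ondat feature labels could be added here
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
```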

To sum it up…

In conclusion, shifting left your data is necessary to reduce application delivery time. It takes shift-left continuous testing to the next level by integrating stateful components in a self-service fashion without compromising automation capabilities. Because Kubernetes enables a declarative model for infrastructure resources, and Ondat extends this capability to the hyper-converged storage layer, developers and DevOps engineers can define repeatable, Kube-native CI/CD pipelines that broaden the scope of automated integration tests. In addition, Ondat optimizes the costs related to cloud disk provisioning, increases overall performance, and provides Enterprise-ready features for your persistent volumes.

You don’t necessarily need to migrate all your databases and Stateful Applications to Kubernetes at once. A common practice is to identify “quick wins” that will let you gradually experience new approaches leading to a measurable outcome. Starting by including infrastructure components in your testing processes is a good idea, as it is low-risk but potentially comes with great benefits.


Quick heads-up: We are actively recruiting talented Kubernetes/Golang engineers. Other positions are also available; check out our job openings here: https://www.ondat.io/company
