Ever since the concept of DevOps began to take hold in the industry around 2008, infrastructure managers have worked to find the best way to bring the requirements of the production environment into the CI/CD pipeline. When applying DevOps principles to infrastructure, the goal has always been to automate as much as possible to roll out rapid updates while maintaining high availability.
But as the ideal production environment changes, so too must our approaches to infrastructure automation. Today, many infrastructure managers support containerized applications in a multicloud environment. This situation poses three key challenges:
- Automating tasks across each service provider’s platform
- Keeping the costs associated with cloud services to a minimum
- Managing stateful workloads, which is hard enough with a single provider, let alone several
Fortunately, these challenges aren’t insurmountable, and with the right approach, infrastructure managers can shift left and automate more. The key is setting up a hyper-converged Kubernetes infrastructure and standardizing on Terraform.
In this blog, we’ll explain what hyper-convergence means for Kubernetes, how you can standardize your infrastructure across multiple cloud providers and how all of this enables infrastructure managers to bring infrastructure into the CI/CD pipeline at their organization.
First: What Is Hyper-Convergence?
In a traditional, non-converged infrastructure, you would have separate hardware components handling your compute, networking and storage. Often, entire IT teams could be dedicated to maintaining each separate set of components, making this data center approach less desirable for modern agile organizations.
Converged infrastructure also uses a hardware-based approach. But rather than IT departments selecting, configuring and connecting individual components like servers, network switches, storage arrays and so on, the vendor provides a single prepackaged unit. This lowers costs and simplifies deployment, but it also reduces flexibility and contributes to vendor lock-in.
In contrast, a hyper-converged infrastructure virtualizes as much of this system as possible. Hyper-converged infrastructure typically includes a hypervisor, software-defined storage and software-defined networking, all running together on the same servers. Since everything is software-defined, the amount of management required drops dramatically: everything can be configured from a single interface, the system can be easily extended through APIs and hardware issues become far less likely.
What About Hyper-Converged Kubernetes?
The concept of hyper-convergence was first developed with virtual machines in mind. So, what about containers?
A hyper-converged Kubernetes infrastructure follows many of the same principles, with two key differences.
The first difference is that hyper-converged Kubernetes infrastructure uses Kubernetes in place of a hypervisor. Rather than manage a set of virtual machines and their resource-intensive operating systems with a hypervisor, you’d have a container orchestrator like Kubernetes managing a set of lightweight containers.
The second is a bit more involved. Because containers and Kubernetes were designed with stateless workloads in mind, you’ll need to find a way to handle persistent storage in a hyper-converged way. Connecting to a separate storage system would, by definition, mean your system is no longer truly hyper-converged; any automation you build will have to account for that separate system, and your availability will depend on how much work it takes to keep your storage up to date.
One option is to consume database services directly from your cloud provider, but this can be prohibitively costly and is challenging to manage when you’re relying on multiple cloud providers.
That’s why infrastructure managers should seek out a Kubernetes-native, software-defined storage solution like Ondat. Ondat pools storage resources across multiple clusters and cloud providers and automatically provisions resources in a cost-optimized way. Because Ondat pools resources, any burst demand would be spread evenly across drives, reducing overall cost. Furthermore, you can add as many disks as you want on-demand, enabling horizontal scalability.
With Amazon’s Elastic Block Store (EBS), for example, your only options are to scale vertically on a general-purpose SSD and hope that your I/O credits cover the spike in demand, or to purchase provisioned IOPS at a significantly higher cost.
Creating a Single, Standard Framework
As a Kubernetes-native solution, Ondat is, like Kubernetes, cloud-agnostic and declarative. That means that Kubernetes and Ondat are straightforward to set up in a single cloud environment, but a multicloud environment requires a few additional steps.
To automate as much of your infrastructure management as possible, you need a single, standardized framework that allows you to speak the same language to each of your cloud provider’s platforms. That’s where Terraform comes in.
Terraform is an open source software tool from HashiCorp that essentially solves the issue of multicloud standardization and also provides limited automation capabilities.
Using HCL, a low-code, declarative configuration language, Terraform enables infrastructure managers to declare their application’s configuration regardless of the cloud provider in question. Because Kubernetes and Ondat are declarative themselves, these tools are highly complementary.
The overall process of deploying Kubernetes and Ondat or a comparable cloud storage solution across multiple clouds is relatively straightforward. Terraform includes prebuilt resources to help you deploy base clusters for each major cloud service provider. Terraform also includes prebuilt resources for Kubernetes that let you deploy Kubernetes-specific objects into your cluster.
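As a rough illustration, a minimal Terraform sketch might declare a managed cluster on one provider and point the Kubernetes provider at it, so a single configuration drives both layers. The cluster name, region and input variables below are illustrative placeholders, not values from this article:

```hcl
# Sketch only: resource names follow the Terraform AWS and Kubernetes
# providers; the cluster name, region and input variables are
# illustrative placeholders.

variable "cluster_role_arn" { type = string }       # IAM role for the control plane
variable "subnet_ids"       { type = list(string) } # subnets for the cluster VPC

provider "aws" {
  region = "eu-west-1"
}

# Declare the managed cluster itself.
resource "aws_eks_cluster" "demo" {
  name     = "demo-cluster"
  role_arn = var.cluster_role_arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

# Point the Kubernetes provider at the new cluster so the same run can
# deploy Kubernetes objects into it.
provider "kubernetes" {
  host                   = aws_eks_cluster.demo.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.demo.certificate_authority[0].data)
}
```

Swapping the `aws_eks_cluster` resource for its Google Cloud or Azure equivalent (`google_container_cluster` or `azurerm_kubernetes_cluster`) is the main change needed to target a different provider; the Kubernetes objects declared downstream stay the same.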
Among these resources will be a few lines relating to the storage orchestrator — Ondat in this case — that enable you to declare your storage requirements and configure your features, such as:
- The number of volume replicas you want to create (between one and five)
- Encryption in transit and at rest
- Intelligent, topology-aware placement of volumes and their replicas, taking availability zones into account
This configuration can then be easily adjusted for your development, testing or production environments as needed.
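To make that concrete, here is a minimal sketch of such a storage declaration as a Terraform-managed StorageClass. The provisioner and parameter keys follow Ondat’s published conventions, but the exact names and supported options should be verified against your Ondat version:

```hcl
# Sketch only: an Ondat-backed StorageClass declared through the
# Terraform Kubernetes provider. Parameter keys follow Ondat's
# documented conventions and should be checked against your version.

resource "kubernetes_storage_class" "ondat_replicated" {
  metadata {
    name = "ondat-replicated"
  }

  storage_provisioner = "csi.storageos.com" # Ondat's CSI driver

  parameters = {
    "storageos.com/replicas"   = "2"    # between one and five replicas per volume
    "storageos.com/encryption" = "true" # encrypt volume data at rest
  }
}
```

A PersistentVolumeClaim that references this class by name then receives a volume with the declared replica count and encryption settings; keeping a separate class per environment makes the development-to-production adjustment a one-line change.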
To Bring Infrastructure into the CI/CD Pipeline, Centralize and Standardize
If your goal is to align your infrastructure with DevOps principles in a multicloud environment, then you need to bring all the different components of your data center under the same roof, so to speak.
Terraform enables you to use a single framework to manage your cloud providers; hyper-convergence enables you to configure all your data center components in the same place; and Kubernetes and Ondat allow you to orchestrate everything. Once you’ve got these elements in the same place and speaking to one another, you’ll be better able to automate infrastructure tasks, and your team can focus on scaling, not firefighting.
If you want to talk about what infrastructure automation and cloud-agnostic storage might look like when put into practice at your organization, why not get in touch with the experts at Ondat?