Identifying a Kube-Native Approach to Data Protection

12/08/2022

Data protection isn’t usually top of mind for developers. Historically, developers had the benefit of working solely on functionality and could leave the less exciting but absolutely crucial aspects of infrastructure management to their team members in production.

But with DevOps principles transforming the industry, it’s no longer possible for developers to leave production concerns to other teams. Thinking about software development in a holistic, end-to-end way also includes the traditionally less compelling aspects of the software development life cycle, things like testing, availability, failover — and data protection.

Let’s dive into data protection for Kubernetes to identify some of the best practices DevOps professionals should follow when thinking about this essential software development issue.

Data Protection for Traditional Environments

When virtual machines (VMs) were all the rage for application deployment, data protection was a relatively straightforward task. To back up a VM, infrastructure managers turned to solutions like IBM’s Tivoli Storage Manager (TSM) to essentially take a snapshot of the VM, its operating system, the applications installed in it and their state. When there was a failure and a backup needed to be restored, it was relatively straightforward to return the right data to the right place, since everything lived inside a monolithic VM.

Then Kubernetes came on the scene. At first, there weren’t any major changes. Kubernetes and containers were primarily reserved for stateless workloads. Any data that did require backing up lived inside traditional relational databases, where it could still be easily backed up with traditional storage manager solutions. Up until this point, the division between development and operations was still firmly in place. Developers didn’t need to worry about storage or backups or anything infrastructure-related — those were problems for people further down the line in production.

Of course, this paradigm is rapidly falling out of fashion in modern enterprises.


What Makes Modern Data Protection Different

Today, DevOps is at the very least the goal, if not the norm, for many organizations.

In practice, that means shifting traditionally infrastructure-related concerns left in the software development life cycle, implementing microservice-based architectures and adopting a cloud native approach to app delivery. And most importantly for the question of data protection, modern DevOps often means managing more of the infrastructure and more stateful workloads within Kubernetes.

Working within Kubernetes as much as possible makes life significantly easier for DevOps professionals in a multitude of ways. Most significantly, they can use Kubernetes as a layer of abstraction that permits them to work mostly the same way and use mostly the same skill sets regardless of their cloud provider, deployment approach or underlying infrastructure.

But this also means that the traditional approach to data protection no longer applies.

It’s no longer a straightforward task to take a snapshot of a system because systems are distributed and decomposed into several hundred microservices. Before, you could capture state at a point in time in a relational database, but now stateful workloads are being distributed among these different microservices, making it challenging to:

  • Identify what data needs backing up and what data doesn’t.
  • Identify where essential data lives.
  • Pull that data into a data protection solution.
  • Consistently and accurately restore data across the microservice architecture.
  • Do all this the same way in every environment.

Data-Protection Solutions Exist

There are open source and commercial solutions, but they still require extension to address all of these challenges.

Velero (previously Heptio Ark) is a notable open source solution that originated at Heptio, the company started by Kubernetes co-creators Craig McLuckie and Joe Beda. On the commercial side, there are solutions like those from Veeam, which acquired the Kubernetes backup company Kasten.

Generally, these solutions deliberately avoid controlling how an application starts and stops, or how to capture a consistent view of what’s going on inside a container. Instead, they let you attach pre- and post-hooks to certain events. A pre-hook might pause your PostgreSQL database; the data-protection solution could then take any of several backup actions: call the Kubernetes Container Storage Interface (CSI), take a snapshot, write to an S3-compatible bucket or use whichever approach suits your application best. Afterward, a post-hook resumes normal operations.
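To make the hook pattern concrete, here is a minimal sketch in Python of how a backup tool might wrap a storage snapshot in pre- and post-hooks that freeze and unfreeze a PostgreSQL pod’s filesystem. The pod name, namespace and data path are illustrative placeholders, not details from any specific product:

```python
# Hypothetical sketch of the pre-hook / snapshot / post-hook sequence a
# Kubernetes backup tool might run. All names here are placeholders.

def build_hook(pod: str, namespace: str, command: list[str]) -> list[str]:
    """Build a `kubectl exec` invocation that runs a hook command inside a pod."""
    return ["kubectl", "exec", "-n", namespace, pod, "--"] + command

# Pre-hook: freeze the filesystem so the snapshot is crash-consistent.
pre_hook = build_hook("postgres-0", "db",
                      ["fsfreeze", "--freeze", "/var/lib/postgresql/data"])

# (The backup tool would take its storage-level snapshot here.)

# Post-hook: unfreeze so the database resumes normal writes.
post_hook = build_hook("postgres-0", "db",
                       ["fsfreeze", "--unfreeze", "/var/lib/postgresql/data"])

print(" ".join(pre_hook))
print(" ".join(post_hook))
```

The key point is that the backup tool never needs application-specific knowledge baked in; the hooks carry that knowledge, and the tool only orders them correctly around the snapshot.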

There are two main approaches to getting a consistent snapshot. The first is a logical backup, used when you need to restore to a specific timestamp; it relies on application-level intelligence, such as plugging into transaction logs. The second is a physical backup, where the CSI plugin creates a snapshot at the storage block level. This can then be streamed to a remote S3-compatible target, as before.
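The physical, block-level approach is driven by a Kubernetes `VolumeSnapshot` object that points at a persistent volume claim. The sketch below builds such a manifest in Python; the PVC and snapshot class names are assumed placeholders, and in practice you would apply the result to the cluster (for example with `kubectl apply`):

```python
# Sketch: construct a snapshot.storage.k8s.io/v1 VolumeSnapshot manifest
# requesting a block-level CSI snapshot of a PVC. Names are placeholders.
import json

def volume_snapshot(name: str, pvc: str, snapshot_class: str) -> dict:
    """Return a VolumeSnapshot manifest targeting the given PVC."""
    return {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": name},
        "spec": {
            "volumeSnapshotClassName": snapshot_class,
            "source": {"persistentVolumeClaimName": pvc},
        },
    }

manifest = volume_snapshot("pg-data-snap", "pg-data", "csi-snapclass")
print(json.dumps(manifest, indent=2))
```

Once the CSI driver fulfills this request, the resulting snapshot is a point-in-time block-level copy that a backup tool can stream to an S3-compatible target.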

DevOps professionals interested in shifting data protection left might spin one of these solutions up on their cluster, become familiar with its operations and discover that it isn’t quite ideal.

Yes, these modern data solutions effectively back up and restore data in a distributed, microservices-based architecture. But they don’t function the same way in every environment. Thus, if you depend on multiple cloud providers, have a hybrid environment or intend to scale to multiple environments in the future, they can be burdensome to work with, requiring teams to spend additional time on management and configuration, and reducing your ability to automate.

Think about Data Protection in a Kube-Native Way

Even though centralizing application infrastructure in Kubernetes forces us to find new ways to deliver best-in-class data protection, many DevOps professionals still choose to use Kubernetes because it makes the development of business applications so much easier. It makes working with multiple environments significantly simpler. What we need is a data-protection approach that works like and with Kubernetes such that it functions identically regardless of the underlying infrastructure.

As more stateful applications are deployed into Kubernetes, the need for Kubernetes-native data-protection solutions increases. Whether the data belongs to a message queue, a database or an AI/ML workload, it needs to be underpinned by a reliable, performant and secure storage layer that is integrated into the Kubernetes ecosystem.

This is where Ondat comes in: a software-defined storage solution, delivered through the Container Storage Interface, that provides block and file persistent volumes to your workloads. With its support for CSI snapshots, it can be used to build a disaster recovery and business continuity strategy for your Kubernetes workloads.

Find out more about Ondat and Kubernetes’s persistent storage here. Or, if you’d rather try Ondat out for yourself, request a demo here.

written by:
Chris Milsted
Chris is a customer success architect at Ondat. He has spent more than 20 years working for large and small companies across technologies including UNIX, Linux, Kubernetes, cloud computing, networking, virtualization and many other open source projects. Chris has a proven track record of helping organizations work out how these technologies can be applied to solve business challenges and deliver value.
