With Kubernetes being the largest container orchestration platform and Amazon being the largest cloud service provider, the development of Amazon’s Elastic Kubernetes Service (EKS) was inevitable.
The more businesses that use Amazon’s cloud services, the more revenue Amazon earns. Companies need to acquire and retain talent for managing container-based cloud applications. Cue EKS, a managed service for streamlining the lifecycle of Kubernetes clusters on Amazon Web Services (AWS). It serves as an on-ramp to AWS, enabling more businesses to join Amazon’s cloud.
AWS EKS manages Kubernetes’s control plane nodes, etcd, and other backend infrastructure for its users, enabling them to focus more on development and revenue generation rather than re-inventing the wheel.
EKS has since developed into an ecosystem of AWS-specific Kubernetes add-ons and enhancements. Beyond simplified deployment, EKS now offers additional capabilities such as:
- An on-premises distribution called EKS Anywhere
- An OpenID Connect (OIDC) provider for Kubernetes role-based access control
- The ability to front Kubernetes applications with AWS load balancers
- A plugin for Kubernetes’s Container Network Interface (CNI) that provides advanced networking capabilities in AWS Virtual Private Cloud
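As an illustration of the OIDC integration, EKS’s IAM Roles for Service Accounts (IRSA) lets a Kubernetes service account assume an IAM role through a single annotation. A minimal sketch, in which the service account name and role ARN are placeholders:

```yaml
# Hypothetical service account granting its pods an IAM role via EKS's OIDC provider (IRSA).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa                 # example name
  namespace: default
  annotations:
    # Placeholder ARN -- substitute a role that trusts the cluster's OIDC provider.
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/example-app-role
```

Pods that run under this service account receive temporary AWS credentials for the annotated role, with no long-lived keys stored in the cluster.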
So what are the best ways to use EKS, what are its advantages and disadvantages, and how can businesses get the most out of this service?
What Are EKS’s Best Use Cases?
While EKS makes deploying Kubernetes simple, it’s not ideal across all scenarios.
Suppose your organization wants to hit the ground running and get a minimum viable product in front of customers as soon as possible. In that case, EKS can be a great choice. If you don’t have the in-house expertise needed to maintain and configure your Kubernetes clusters, then EKS’s managed services will enable you to release an app without overcoming that hurdle. And Amazon is actively contributing to the ecosystem of EKS tools, giving you a good chance of finding the right plugin for common cloud computing needs as you grow.
On the other hand, highly customized environments can struggle to mesh with EKS. Large enterprises or organizations with their own internal infrastructure practices may find it hard to adapt to how Amazon does things with EKS and AWS without significant changes to their processes and best practices.
A Kubernetes deployment built on EKS won’t be at the cutting edge, either. This is in part due to the limitations imposed by the EKS platform, but it’s also because EKS will always be a version or two behind Kubernetes — after all, Amazon can only start updating EKS after Kubernetes releases a new version.
Lastly, the whole reason for EKS’s existence in the first place is to get more businesses up on AWS, so it doesn’t work well with other cloud providers. If your environment is multi-cloud, hybrid, or on-prem, you may struggle to get EKS working. This, however, is changing with EKS Anywhere. Currently, this service is optimized for on-prem deployments. It may expand to support future deployment styles, but again, EKS is all about funneling more customers to Amazon’s cloud service, so EKS will always work best on AWS.
EKS Storage May Be a Thorn in Your Side
Even if you’re using EKS on AWS, there are still specific challenges to overcome. The most significant that we’ve seen is finding a way to achieve performant shared storage across your nodes.
There are three major storage options that Amazon offers EKS users:
- Elastic Block Storage (EBS)
- Elastic File System (EFS)
- Simple Storage Service (S3)
Unfortunately, any application operating at sufficient scale will run into issues if it relies solely on any one of these storage options.
There is one storage system that we have omitted from the list above: in use cases where performance is the number one concern, the EC2 instance store can deliver the best storage speeds possible on AWS. An instance store is available only to its one EC2 instance and cannot be detached or reassigned elsewhere, and it comes with the hefty caveat that users must carefully implement data lifecycle management (e.g., backups and maintaining availability), as instance storage is ephemeral and lost when instances are stopped.
Going through the list, we’ll start with S3 since it has its own specific use cases. Unlike EBS (a block storage system) and EFS (a file storage system), S3 is an object store. It’s great for hosting websites and static content, but because every transaction goes over the network, it doesn’t suit very latency-sensitive use cases. It can be a useful part of your overall storage architecture, but it’s not relevant to our focus on performant shared block storage.
EBS is very performant, but each volume is attached to a single Elastic Compute Cloud (EC2) instance. So, if you want to launch multiple applications and have them easily share data with one another, it’s not the best solution. And if you want to provide high availability and resiliency for your application, you won’t be able to use the same EBS volume across different availability zones, even when the zone your application runs in has an issue. Thus, shared access and cross-zone availability are the biggest issues with EBS.
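For reference, the usual way to consume EBS from EKS is a StorageClass backed by the EBS CSI driver; every volume it provisions is still attached to one node at a time. A sketch, with the class name chosen for illustration:

```yaml
# Illustrative StorageClass using the AWS EBS CSI driver.
# Each provisioned volume attaches to a single node at a time.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3              # example name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3                  # general-purpose SSD volume type
volumeBindingMode: WaitForFirstConsumer  # provision in the consuming pod's availability zone
```

The `WaitForFirstConsumer` binding mode matters here precisely because of the zonal limitation above: it delays provisioning until Kubernetes knows which availability zone the pod landed in.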
EFS provides shared storage functionality, enabling multiple applications to read and write to the same volume using the Network File System (NFS) protocol. So, if you have multiple EC2 instances and you want to share data across them, then EFS is the best Amazon storage solution for you.
Unfortunately, EFS’s performance isn’t very consistent. What’s more, EFS becomes increasingly expensive as your application’s storage needs grow.
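For comparison with the EBS case, EFS volumes are consumed through the EFS CSI driver and can be mounted ReadWriteMany by pods on different nodes. A hedged sketch, in which the class name and file system ID are placeholders:

```yaml
# Illustrative StorageClass for the AWS EFS CSI driver; volumes can be
# mounted ReadWriteMany across nodes over NFS.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-shared                      # example name
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap              # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0    # placeholder -- use your file system's ID
  directoryPerms: "700"
```

Note that, unlike EBS, the backing file system already exists; the driver carves out access points within it rather than creating new block devices.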
How Ondat Addresses These Challenges
For organizations that want the simple deployment process provided by EKS along with scalable, affordable, and performant storage, incorporating Ondat can be a powerful solution.
Ondat pools storage across nodes, enabling apps to dynamically provision volumes that can be mounted anywhere in the cluster. Crucially, Ondat is platform-agnostic: Amazon EKS users can easily incorporate it into their environments and use the same application code and manifests they would on any other platform. Because it pools storage, Ondat’s RWX (ReadWriteMany) volumes let users make EBS function more like a shared storage system. By default, an EBS volume is attached to a single EC2 node; Ondat connects EBS volumes across multiple nodes into a distributed storage system that looks a lot more like EFS, only with far more consistent performance and a significantly lower price tag. Ondat can even be used with the instance store for the absolute maximum performance from AWS storage, though the caveats about instance lifecycles and data backups for ephemeral storage still stand.
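From the application’s point of view, a shared Ondat volume is requested like any other persistent volume claim. A minimal sketch, assuming Ondat’s default StorageClass name (`storageos`), which you should verify against your own installation:

```yaml
# Illustrative PVC requesting a shared (RWX) volume from Ondat's pooled storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data            # example name
spec:
  accessModes:
    - ReadWriteMany            # pods on different nodes may mount it simultaneously
  storageClassName: storageos  # assumed default Ondat StorageClass
  resources:
    requests:
      storage: 10Gi
```

Because the manifest is plain Kubernetes, the same claim works unchanged on any other platform where Ondat is installed.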
Getting Started on EKS with Ondat
EKS is meant to be a fast and reliable way to deploy a Kubernetes cluster on AWS, so we ensured that setting up Ondat with EKS was as streamlined as possible.
Make sure that the CLI utilities used during installation (kubectl and the kubectl-storageos plugin) are in your $PATH.
After putting these prerequisites in place, provisioning your kubeconfig for kubectl, and testing that you can connect to Kubernetes, you can install Ondat on EKS in a four-step process:
- Run preflight checks against the EKS cluster to verify that Ondat’s prerequisites are in place
- Define and export the STORAGEOS_USERNAME and STORAGEOS_PASSWORD environment variables that will be used to manage your Ondat instance
- Set the StorageClass for etcd to use, then install Ondat with a single kubectl-storageos plugin command
- Verify the installation with a few kubectl commands and apply an Ondat license to the cluster
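In a shell session, the steps above might look something like the following sketch. The exported values are examples only, and the kubectl-storageos commands are shown commented out because they need a live cluster; their exact flags are assumptions to confirm against the Ondat documentation:

```shell
# Export the credentials the Ondat installer reads (example values only).
export STORAGEOS_USERNAME="storageos"
export STORAGEOS_PASSWORD="storageos-password"

# The remaining steps require cluster access, so they are shown commented out;
# flags are illustrative -- confirm them in the Ondat documentation.
# kubectl storageos preflight                      # preflight checks
# kubectl storageos install --include-etcd \
#   --etcd-storage-class ebs-sc                    # install Ondat
# kubectl -n storageos get pods                    # verify the installation

echo "Ondat admin user: ${STORAGEOS_USERNAME}"
```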
For more detail on these individual steps, look at our documentation.
And if you want to talk more about how Ondat can extend EKS and its persistent storage, reach out to one of our experts via email or our public Slack channel. We’re always happy to chat.