Weaveworks & eksctl
Provisioning a Kubernetes cluster with Amazon’s Elastic Kubernetes Service (EKS) is already straightforward through the AWS Console UI. However, the Weaveworks team, famous for their Kubernetes networking stack and for pioneering GitOps on Kubernetes, have further increased productivity with their excellent ‘eksctl’ command line tool: an open-source utility, written in Go, that builds on Amazon’s CloudFormation.
In this post we demonstrate how to quickly set up an AWS EKS cluster using eksctl, how to install Ondat for persistent data, and how to configure a MySQL app that uses persistent storage.
1. Initial setup and configuration
With an AWS environment configured via the aws command line tool (‘aws configure’ executed with the respective credentials) and kubectl available in the host path, installing eksctl is as simple as downloading the statically compiled binary from the eksctl GitHub repository.
In my case, using a Mac Mini M1, I downloaded the prebuilt Darwin AMD64 binary (which runs under Rosetta 2 on Apple silicon).
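For reference, a minimal install sketch for Linux/macOS, assuming the standard artifact naming on the eksctl GitHub releases page (adjust the platform suffix for your machine):
# Download the latest eksctl release for this OS (amd64 build shown; use arm64 where appropriate)
curl -sL "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
# Move the binary into the PATH and verify it runs
sudo mv /tmp/eksctl /usr/local/bin/
eksctl version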
Eksctl doesn’t require a configuration file, but using one lets us specify the cluster configuration along with the required node groups –
🔎 Show eksctl Cluster Config - cat cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: StorageOS
  region: eu-north-1
nodeGroups:
  - name: StorageOS-Nodes
    instanceType: t3.medium
    desiredCapacity: 3
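As an aside, recent eksctl releases can validate and print the fully expanded configuration without creating anything; a quick sketch, assuming the --dry-run flag is available in your eksctl version:
# Print the normalised ClusterConfig that would be used, without creating any AWS resources
eksctl create cluster -f cluster.yaml --dry-run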
2. Creating an EKS Cluster using eksctl
With the configuration file in place, building an entire cluster is a single command line operation –
✨ Building EKS Cluster - eksctl create cluster -f cluster.yaml
2021-08-12 11:17:17 [i] eksctl version 0.61.0-rc.1
2021-08-12 11:17:17 [i] using region eu-north-1
2021-08-12 11:17:17 [i] setting availability zones to [eu-north-1b eu-north-1a eu-north-1c]
2021-08-12 11:17:17 [i] subnets for eu-north-1b - public:192.168.0.0/19 private:192.168.96.0/19
2021-08-12 11:17:17 [i] subnets for eu-north-1a - public:192.168.32.0/19 private:192.168.128.0/19
2021-08-12 11:17:17 [i] subnets for eu-north-1c - public:192.168.64.0/19 private:192.168.160.0/19
2021-08-12 11:17:18 [i] nodegroup "StorageOS-Nodes" will use "ami-0e84d3087df4f5ff4" [AmazonLinux2/1.20]
2021-08-12 11:17:18 [i] using Kubernetes version 1.20
2021-08-12 11:17:18 [i] creating EKS cluster "StorageOS" in "eu-north-1" region with un-managed nodes
2021-08-12 11:17:18 [i] 1 nodegroup (StorageOS-Nodes) was included (based on the include/exclude rules)
2021-08-12 11:17:18 [i] will create a CloudFormation stack for cluster itself and 1 nodegroup stack(s)
2021-08-12 11:17:18 [i] will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
2021-08-12 11:17:18 [i] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-north-1 --cluster=StorageOS'
2021-08-12 11:17:18 [i] CloudWatch logging will not be enabled for cluster "StorageOS" in "eu-north-1"
2021-08-12 11:17:18 [i] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-north-1 --cluster=StorageOS'
2021-08-12 11:17:18 [i] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "StorageOS" in "eu-north-1"
2021-08-12 11:17:18 [i] 2 sequential tasks: { create cluster control plane "StorageOS", 3 sequential sub-tasks: { wait for control plane to become ready, 1 task: { create addons }, create nodegroup "StorageOS-Nodes" } }
2021-08-12 11:17:18 [i] building cluster stack "eksctl-StorageOS-cluster"
2021-08-12 11:17:18 [i] deploying stack "eksctl-StorageOS-cluster"
2021-08-12 11:17:48 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:18:19 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:19:19 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:20:19 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:21:20 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:22:20 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:23:20 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:24:20 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:25:21 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:26:21 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:27:21 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:28:22 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:29:22 [i] waiting for CloudFormation stack "eksctl-StorageOS-cluster"
2021-08-12 11:33:25 [i] building nodegroup stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:33:25 [i] --nodes-min=3 was set automatically for nodegroup StorageOS-Nodes
2021-08-12 11:33:25 [i] --nodes-max=3 was set automatically for nodegroup StorageOS-Nodes
2021-08-12 11:33:26 [i] deploying stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:33:26 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:33:42 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:33:59 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:34:19 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:34:36 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:34:56 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:35:16 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:35:35 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:35:52 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:36:10 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:36:27 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:36:43 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:37:01 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:37:17 [i] waiting for CloudFormation stack "eksctl-StorageOS-nodegroup-StorageOS-Nodes"
2021-08-12 11:37:18 [i] waiting for the control plane availability...
2021-08-12 11:37:18 [✓] saved kubeconfig as "/Users/james/.kube/config"
2021-08-12 11:37:18 [i] no tasks
2021-08-12 11:37:18 [✓] all EKS cluster resources for "StorageOS" have been created
2021-08-12 11:37:18 [i] adding identity "arn:aws:iam::499176610670:role/eksctl-StorageOS-nodegroup-Storag-NodeInstanceRole-9E1CZAW4JQBA" to auth ConfigMap
2021-08-12 11:37:18 [i] nodegroup "StorageOS-Nodes" has 0 node(s)
2021-08-12 11:37:18 [i] waiting for at least 3 node(s) to become ready in "StorageOS-Nodes"
2021-08-12 11:37:58 [i] nodegroup "StorageOS-Nodes" has 3 node(s)
2021-08-12 11:37:58 [i] node "ip-192-168-28-127.eu-north-1.compute.internal" is ready
2021-08-12 11:37:58 [i] node "ip-192-168-44-144.eu-north-1.compute.internal" is ready
2021-08-12 11:37:58 [i] node "ip-192-168-71-156.eu-north-1.compute.internal" is ready
2021-08-12 11:40:01 [i] kubectl command should work with "/Users/james/.kube/config", try 'kubectl get nodes'
2021-08-12 11:40:01 [✓] EKS cluster "StorageOS" in "eu-north-1" region is ready
3. Accessing the cluster with kubectl
Weaveworks’ eksctl automatically configures ~/.kube/config as part of the cluster creation process, allowing the user to interact with the cluster directly, without any further configuration –
🔎 Show Kubernetes Nodes - kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-28-127.eu-north-1.compute.internal Ready <none> 4m5s v1.20.4-eks-6b7464
ip-192-168-44-144.eu-north-1.compute.internal Ready <none> 4m7s v1.20.4-eks-6b7464
ip-192-168-71-156.eu-north-1.compute.internal Ready <none> 4m8s v1.20.4-eks-6b7464
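Should the kubeconfig ever need to be regenerated, for example on another workstation, eksctl can rewrite it on demand; a brief sketch:
# Re-write the kubeconfig for the existing cluster into the default location
eksctl utils write-kubeconfig --cluster=StorageOS --region=eu-north-1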
4. Installing Ondat
Ondat is installed using our operator as per the following video tutorial – Installation & Setup Guide
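Once the operator has completed the install, a quick sanity check (assuming, as in this demo, that the Ondat/StorageOS components run in the kube-system namespace) is to confirm the daemonset and CSI helper pods are up:
# List the StorageOS pods; exact names will vary, but all should report Running
kubectl get pods -n kube-system | grep -i storageos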
5. Creating a Kubernetes StorageClass with 2 Replicas and Encryption
For this example, we’re creating a StorageClass that provides three copies of the data: one primary and two replicas. The StorageClass also enables encryption of data at rest.
StorageClasses are a convenient means of delivering tiering, features and multi-tenancy in a Kubernetes environment. For more information see StorageOS StorageClasses – Tiering and Features.
✨ Creating StorageClass topsecret - kubectl apply -f- <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topsecret
  labels:
    app: storageos
provisioner: csi.storageos.com # CSI Driver
allowVolumeExpansion: true
parameters:
  storageos.com/replicas: "2" # 3 copies of Data, 1 Primary, 2 Replicas
  storageos.com/encryption: "true" # Enable encryption
  csi.storage.k8s.io/controller-expand-secret-name: csi-controller-expand-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: csi-controller-publish-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-publish-secret-name: csi-node-publish-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/provisioner-secret-name: csi-provisioner-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
EOF
storageclass.storage.k8s.io/topsecret created
6. Verifying StorageClasses
After configuring our StorageClass we now have three entries. The first, ‘fast’, is the StorageClass configured out of the box by StorageOS, providing highly available data. ‘gp2’ is the default StorageClass provided by EKS, backed by EBS. Lastly, ‘topsecret’ is our newly created StorageClass with both replicas and encryption –
🔍 Checking StorageClasses - kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
fast csi.storageos.com Delete Immediate true 2m6s
gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 22m
topsecret csi.storageos.com Delete Immediate true 17s
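Incidentally, if you’d prefer Ondat volumes to be used by default, Kubernetes lets you move the default marker with the standard is-default-class annotation; a sketch (entirely optional, and not required for this walkthrough):
# Remove the default marker from gp2, then set it on fast
kubectl patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
kubectl patch storageclass fast -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'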
7. Creating a Persistent Volume Claim
With our StorageClass configured, we’re now ready to create a Persistent Volume Claim. When using Ondat there is no requirement to create a Persistent Volume; the claim alone automatically provisions the associated Persistent Volume –
✨ Creating MySQL StorageOS Persistent Volume Claim and Persistent Volume - kubectl apply -f- <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysqlpvc
spec:
  storageClassName: topsecret # 2 replicas + encrypted
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF
persistentvolumeclaim/mysqlpvc created
8. Checking Persistent Volumes and Persistent Volume Claims
After creating the Persistent Volume Claim we can now see both components, the PV and the PVC, with the same UUID referenced in the volume name –
🔍 Checking PV's - kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-3a6ab20c-674b-46eb-bc5c-0199cfdc82e7 2Gi RWO Delete Bound default/mysqlpvc topsecret 8s
🔍 Checking PVC's - kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysqlpvc Bound pvc-3a6ab20c-674b-46eb-bc5c-0199cfdc82e7 2Gi RWO topsecret 10s
9. Encrypted Volume Secrets
Because our StorageClass includes encryption, the newly created PVC includes, within its Annotations section, a reference to the Kubernetes secret used for encryption, ‘storageos-volume-key-17b3d930-976d-45f6-a7d6-8ee59b277f5c’ –
🔍 Checking PVC - kubectl describe pvc/mysqlpvc
Name: mysqlpvc
Namespace: default
StorageClass: topsecret
Status: Bound
Volume: pvc-3a6ab20c-674b-46eb-bc5c-0199cfdc82e7
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
storageos.com/encryption-secret-name: storageos-volume-key-17b3d930-976d-45f6-a7d6-8ee59b277f5c
storageos.com/encryption-secret-namespace: default
storageos.com/storageclass: 9096cab0-d3c8-40cb-b304-8e19b7308780
volume.beta.kubernetes.io/storage-provisioner: csi.storageos.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 31s persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi.storageos.com" or manually created by system administrator
Normal Provisioning 31s csi.storageos.com_storageos-csi-helper-69996ffd88-mfj7h_8664bbe1-ede5-4f34-a0fd-733cc14eea75 External provisioner is provisioning volume for claim "default/mysqlpvc"
Normal ProvisioningSucceeded 30s csi.storageos.com_storageos-csi-helper-69996ffd88-mfj7h_8664bbe1-ede5-4f34-a0fd-733cc14eea75 Successfully provisioned volume pvc-3a6ab20c-674b-46eb-bc5c-0199cfdc82e7
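If you want to confirm the key material exists (without printing it), the referenced secret can be listed directly; a quick sketch using the names from the annotations above:
# The encryption key lives in the namespace given by the annotations (default, in this case)
kubectl get secret storageos-volume-key-17b3d930-976d-45f6-a7d6-8ee59b277f5c -n default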
10. Tainting other schedulable nodes
To demonstrate data persistence and high availability, all of the schedulable nodes are tainted with the exception of the first. When the Kubernetes scheduler receives a request for our workload, the first node (ip-192-168-28-127.eu-north-1.compute.internal) will therefore be the only option for scheduling.
✅ UnTainted ip-192-168-28-127.eu-north-1.compute.internal - kubectl taint nodes ip-192-168-28-127.eu-north-1.compute.internal exclusive=true:NoSchedule- --overwrite
⚠️ Tainted ip-192-168-44-144.eu-north-1.compute.internal - kubectl taint nodes ip-192-168-44-144.eu-north-1.compute.internal exclusive=true:NoSchedule --overwrite
⚠️ Tainted ip-192-168-71-156.eu-north-1.compute.internal - kubectl taint nodes ip-192-168-71-156.eu-north-1.compute.internal exclusive=true:NoSchedule --overwrite
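To double-check which nodes carry the taint before scheduling anything, kubectl’s jsonpath output works well; a small sketch:
# Print each node name alongside its taints (untainted nodes show no taint entry)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'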
11. Create a MySQL Pod with Persistent Data
A MySQL pod is requested, using the standard mysql:5.7 image, with persistent storage mounted at the standard MySQL data location of /var/lib/mysql –
✨ Creating MySQL Pod - kubectl apply -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
      ports:
        - name: mysql
          containerPort: 3306
      volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
          subPath: mysql
  volumes:
    - name: mysql-data
      persistentVolumeClaim:
        claimName: mysqlpvc
EOF
pod/mysql created
12. Check MySQL Running
After a short period, the mysql pod transitions to a Running state on the untainted node, ip-192-168-28-127.eu-north-1.compute.internal –
🔍 Checking Pods - kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql 1/1 Running 0 45s 192.168.16.71 ip-192-168-28-127.eu-north-1.compute.internal <none> <none>
🔍 Checking PV's - kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-3a6ab20c-674b-46eb-bc5c-0199cfdc82e7 2Gi RWO Delete Bound default/mysqlpvc topsecret 2m8s
🔍 Checking PVC's - kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysqlpvc Bound pvc-3a6ab20c-674b-46eb-bc5c-0199cfdc82e7 2Gi RWO topsecret 2m10s
13. Populate MySQL Data
Using kubectl exec, data is populated into the running mysql pod using SQL. As we’re utilising persistent storage, the populated data is written to the Persistent Volume –
🔍 Populating MySQL - kubectl exec -i mysql -- mysql <<< $DATA
CREATE DATABASE shop;
USE shop;
CREATE TABLE FRUIT(
  ID INT PRIMARY KEY NOT NULL,
  INVENTORY VARCHAR(25) NOT NULL,
  QUANTITY INT NOT NULL
);
INSERT INTO FRUIT (ID,INVENTORY,QUANTITY) VALUES
  (1, 'Bananas', 132),
  (2, 'Apples', 165),
  (3, 'Oranges', 219);
SELECT * FROM FRUIT;
ID INVENTORY QUANTITY
1 Bananas 132
2 Apples 165
3 Oranges 219
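For completeness, the $DATA variable above holds the SQL statements; a minimal sketch of how it might be defined in the shell beforehand (the variable name comes from the command above, the heredoc form is our assumption):
# Load the SQL into a shell variable, then pipe it into mysql inside the pod
DATA=$(cat <<'SQL'
CREATE DATABASE shop;
USE shop;
CREATE TABLE FRUIT(ID INT PRIMARY KEY NOT NULL, INVENTORY VARCHAR(25) NOT NULL, QUANTITY INT NOT NULL);
INSERT INTO FRUIT (ID,INVENTORY,QUANTITY) VALUES (1, 'Bananas', 132), (2, 'Apples', 165), (3, 'Oranges', 219);
SELECT * FROM FRUIT;
SQL
)
kubectl exec -i mysql -- mysql <<< "$DATA"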
14. Delete the MySQL pod
Remove the MySQL pod as a precursor to verifying the availability of the data –
❌ Removing pod/mysql - kubectl delete pod/mysql --grace-period=0
pod "mysql" deleted
15. Adjust Node Taints
To verify high availability, the taints are adjusted. The previous node, ip-192-168-28-127.eu-north-1.compute.internal, is tainted, whilst the second node, ip-192-168-44-144.eu-north-1.compute.internal, is untainted, making it the de facto choice the next time the Kubernetes scheduler is called –
⚠️ Tainted ip-192-168-28-127.eu-north-1.compute.internal - kubectl taint nodes ip-192-168-28-127.eu-north-1.compute.internal exclusive=true:NoSchedule --overwrite
✅ UnTainted ip-192-168-44-144.eu-north-1.compute.internal - kubectl taint nodes ip-192-168-44-144.eu-north-1.compute.internal exclusive=true:NoSchedule- --overwrite
⚠️ Tainted ip-192-168-71-156.eu-north-1.compute.internal - kubectl taint nodes ip-192-168-71-156.eu-north-1.compute.internal exclusive=true:NoSchedule --overwrite
16. Recreate the MySQL pod
The MySQL pod is recreated, using the same specification as before –
✨ Creating MySQL Pod - kubectl apply -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
      ports:
        - name: mysql
          containerPort: 3306
      volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
          subPath: mysql
  volumes:
    - name: mysql-data
      persistentVolumeClaim:
        claimName: mysqlpvc
EOF
pod/mysql created
17. Check the MySQL running pod
With our taints adjusted, the mysql pod is now scheduled to the second node, ip-192-168-44-144.eu-north-1.compute.internal –
🔍 Checking Pods - kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql 1/1 Running 0 48s 192.168.34.48 ip-192-168-44-144.eu-north-1.compute.internal <none> <none>
🔍 Checking PV's - kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-3a6ab20c-674b-46eb-bc5c-0199cfdc82e7 2Gi RWO Delete Bound default/mysqlpvc topsecret 4m22s
🔍 Checking PVC's - kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysqlpvc Bound pvc-3a6ab20c-674b-46eb-bc5c-0199cfdc82e7 2Gi RWO topsecret 4m23s
18. Check availability of Data
Despite the pod being both destroyed and rescheduled to another node, the data is accessible as expected –
🔍 Checking MySQL Data - kubectl exec mysql -- /bin/sh -c "mysql -e 'RESET QUERY CACHE; SELECT * FROM shop.FRUIT'"
ID INVENTORY QUANTITY
1 Bananas 132
2 Apples 165
3 Oranges 219
We hope that you have found this demonstration of both eksctl and Ondat useful. More information on eksctl can be found at eksctl, and should you wish to try Ondat, please get in touch and we’ll connect you with one of our experts to discuss requirements and assist with setup.
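Finally, when you’re done experimenting, the whole environment can be torn down with a single command, since eksctl tracks everything via CloudFormation; a sketch, using the same config file as before:
# Delete the nodegroup and cluster stacks created earlier
eksctl delete cluster -f cluster.yaml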