We use the command eksctl (version 0.27.0 or later) to create an EKS/Fargate cluster.
We create a single node group mr3-master, which is intended for those Pods that should always be running, such as HiveServer2, Metastore, and DAGAppMaster Pods.
For running ContainerWorker Pods, we create a Fargate profile mr3-worker.
In order to avoid the data transfer cost between multiple Availability Zones (AZs),
we restrict the EKS/Fargate cluster to a single AZ.
Creating an EKS/Fargate cluster
In our example, we allocate a single on-demand instance to the mr3-master node group.
Here is a sample specification (kubernetes/fargate/cluster.yaml) to be used as input to the command eksctl.
We restrict the EKS/Fargate cluster to a single AZ, ap-northeast-1a.
- If the Docker image does not contain a MySQL connector jar file, use the preBootstrapCommands field in the specification of the mr3-master node group to automatically download such a jar file. If the preBootstrapCommands field cannot be set for some reason, the user should create a PersistentVolume and manually copy a MySQL connector jar file to it (see Copying a MySQL connector jar file to EFS in Creating a PersistentVolume using EFS).
- If the Docker image already contains a MySQL connector jar file, we do not need the preBootstrapCommands field.
$ vi kubernetes/fargate/cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: hive-mr3
  region: ap-northeast-1
vpc:
  nat:
    gateway: Single
availabilityZones: ["ap-northeast-1a", "ap-northeast-1c", "ap-northeast-1d"]
nodeGroups:
  - name: mr3-master
    availabilityZones: ["ap-northeast-1a"]
    instanceType: m5.xlarge
    labels: { roles: masters }
    ssh:
      allow: true
    desiredCapacity: 1
    preBootstrapCommands:
      - "wget http://your.server.address/mysql-connector-java-8.0.12.jar"
      - "mkdir -p /home/ec2-user/lib"
      - "mv mysql-connector-java-8.0.12.jar /home/ec2-user/lib"
Execute the command eksctl to create the EKS/Fargate cluster hive-mr3.
$ eksctl create cluster -f kubernetes/fargate/cluster.yaml
[ℹ] eksctl version 0.27.0
[ℹ] using region ap-northeast-1
[ℹ] subnets for ap-northeast-1a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for ap-northeast-1c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for ap-northeast-1d - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "mr3-master" will use "ami-08c36bc5fca5ac740" [AmazonLinux2/1.17]
...
[ℹ] using Kubernetes version 1.17
...
[✔] EKS cluster "hive-mr3" in "ap-northeast-1" region is ready
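Once the cluster is ready, eksctl updates kubeconfig by default, and the user can check that the node in the mr3-master node group has joined the cluster.
$ kubectl get nodes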
Every node instance in the mr3-master node group starts with mysql-connector-java-8.0.12.jar in the directory /home/ec2-user/lib.
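To verify that the preBootstrapCommands have been executed, the user may log in to the node instance over SSH (enabled by the ssh.allow field above) and list the directory. The node address below is a placeholder.
$ ssh ec2-user@<public IP of the node> ls /home/ec2-user/lib
mysql-connector-java-8.0.12.jar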
In order to make a MySQL connector available in a Metastore Pod,
we extend kubernetes/yaml/metastore.yaml to create a hostPath volume and mount it under the directory /opt/mr3-run/host-lib.
spec:
  template:
    spec:
      containers:
        volumeMounts:
        - name: host-lib-volume
          mountPath: /opt/mr3-run/host-lib
      volumes:
      - name: host-lib-volume
        hostPath:
          path: /home/ec2-user/lib
          type: Directory
Since the directory /opt/mr3-run/host-lib is included in the classpath (see kubernetes/hive/hive/hive-setup.sh),
the MySQL connector jar file is accessible to Metastore.
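As a sanity check, the user may list the mounted directory inside the Metastore Pod (assuming the namespace hivemr3 and using a placeholder for the Pod name).
$ kubectl exec -n hivemr3 <metastore pod name> -- ls /opt/mr3-run/host-lib
mysql-connector-java-8.0.12.jar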
Creating a Fargate profile
In order to place all ContainerWorker Pods in the same AZ where the EKS/Fargate cluster runs, we need its private subnet ID.
The user can retrieve the CIDR range of the private subnet from the output of eksctl in the previous step.
[ℹ] subnets for ap-northeast-1a - public:192.168.0.0/19 private:192.168.96.0/19
Open the AWS console and find the matching private subnet ID (subnet-034ffd4fb1bcd9220 in our example).
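Alternatively, the user can retrieve the subnet ID with the AWS CLI by filtering on the CIDR range printed by eksctl. The command below is a sketch using the CIDR from our example output.
$ aws ec2 describe-subnets --region ap-northeast-1 --filters "Name=cidr-block,Values=192.168.96.0/19" --query "Subnets[].SubnetId" --output text
subnet-034ffd4fb1bcd9220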
Then update the field fargateProfiles/subnets in kubernetes/fargate/profile.yaml.
$ vi kubernetes/fargate/profile.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: hive-mr3
  region: ap-northeast-1
fargateProfiles:
  - name: mr3-worker
    selectors:
      - namespace: hivemr3
        labels:
          mr3-pod-role: container-role
    subnets:
      - subnet-034ffd4fb1bcd9220
Execute the command eksctl to create a Fargate profile mr3-worker.
$ eksctl create fargateprofile -f kubernetes/fargate/profile.yaml
...
[ℹ] creating Fargate profile "mr3-worker" on EKS cluster "hive-mr3"
[ℹ] created Fargate profile "mr3-worker" on EKS cluster "hive-mr3"
$ eksctl get fargateprofile --cluster hive-mr3
NAME SELECTOR_NAMESPACE SELECTOR_LABELS POD_EXECUTION_ROLE_ARN SUBNETS TAGS
mr3-worker hivemr3 mr3-pod-role=container-role arn:aws:iam::111111111111:role/eksctl-hive-mr3-cluster-FargatePodExecutionRole-4SQFPLYU9DOE subnet-034ffd4fb1bcd9220 <none>
Now every ContainerWorker Pod is created as a Fargate Pod.
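To confirm that ContainerWorker Pods run on Fargate, the user can check the NODE column of the Pods in the hivemr3 namespace; Fargate Pods are typically scheduled on virtual nodes whose names start with fargate-ip-.
$ kubectl get pods -n hivemr3 -o wide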
Creating a PersistentVolume using EFS
If the user decides to create a PersistentVolume using EFS (instead of using S3), use the Amazon EFS CSI driver as explained in Creating a PersistentVolume using EFS.
Deleting the EKS/Fargate cluster
If the user has created a PersistentVolume using EFS,
delete the PersistentVolumeClaim, the PersistentVolume, and the storage class efs-sc.
$ kubectl delete -f kubernetes/efs-csi/workdir-pvc.yaml
$ kubectl delete -f kubernetes/efs-csi/workdir-pv.yaml
$ kubectl delete -f kubernetes/efs-csi/sc.yaml
storageclass.storage.k8s.io "efs-sc" deleted
Open the AWS console and delete the security group that allows inbound NFS traffic for EFS mount points.
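If preferred, the same security group can be deleted with the AWS CLI instead of the console, where the group ID below is a placeholder to be replaced with the actual ID.
$ aws ec2 delete-security-group --group-id <security group ID>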
In order to delete the EKS/Fargate cluster, use kubernetes/fargate/cluster.yaml.
Do not delete the Fargate profile manually
(e.g., by executing eksctl delete fargateprofile --cluster hive-mr3 mr3-worker)
because it is automatically deleted along with the EKS/Fargate cluster.
$ eksctl delete cluster -f kubernetes/fargate/cluster.yaml
[ℹ] eksctl version 0.27.0
[ℹ] using region ap-northeast-1
[ℹ] deleting EKS cluster "hive-mr3"
[ℹ] deleting Fargate profile "mr3-worker"
...
[✔] all cluster resources were deleted