We use the command eksctl (version 0.28.0 or later) to create a Fargate-only cluster.
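The installed version of eksctl can be checked as follows (the output format may vary across releases):
$ eksctl version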
We create two Fargate profiles:

- mr3-master: for those Pods that should always be running, such as HiveServer2, Metastore, and DAGAppMaster Pods
- mr3-worker: for ContainerWorker Pods

In order to avoid the data transfer cost between multiple Availability Zones (AZs), we restrict both Fargate profiles to a single AZ.
Creating a Fargate-only cluster
Before creating a Fargate-only cluster, we assume that the user has created a Docker image that contains a MySQL connector jar file. (We do not recommend using a pre-built Docker image available on DockerHub.)
Execute the command eksctl to create a Fargate-only cluster hive-mr3.
$ eksctl create cluster --name hive-mr3 --version 1.17 --fargate
[ℹ] eksctl version 0.28.1
[ℹ] using region ap-northeast-1
[ℹ] setting availability zones to [ap-northeast-1d ap-northeast-1a ap-northeast-1c]
[ℹ] subnets for ap-northeast-1d - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for ap-northeast-1a - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for ap-northeast-1c - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] using Kubernetes version 1.17
...
[✔] EKS cluster "hive-mr3" in "ap-northeast-1" region is ready
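eksctl also updates the kubeconfig file (~/.kube/config by default), so kubectl should be able to reach the new cluster right away. As a quick sanity check (output omitted), one may run:
$ kubectl get svc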
Creating Fargate profiles
In order to place all Pods in the same AZ, we need the ID of the private subnet in the chosen AZ. The user can retrieve the CIDR range of the private subnet from the output of eksctl in the previous step.
[ℹ] subnets for ap-northeast-1a - public:192.168.32.0/19 private:192.168.128.0/19
Open the AWS console and find the matching private subnet ID (subnet-0b0ffc61c791591d5 in our example).
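Alternatively, the subnet ID can be retrieved with the AWS CLI by filtering on the CIDR range of the private subnet (a sketch; adjust the CIDR to match your output, and add a vpc-id filter if several VPCs use the same CIDR range):
$ aws ec2 describe-subnets --filters "Name=cidr-block,Values=192.168.128.0/19" \
    --query "Subnets[0].SubnetId" --output text
subnet-0b0ffc61c791591d5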
Then update the field fargateProfiles/subnets in kubernetes/fargate/all-profile.yaml.
$ vi kubernetes/fargate/all-profile.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: hive-mr3
  region: ap-northeast-1
fargateProfiles:
- name: mr3-master
  selectors:
  - namespace: hivemr3
    labels:
      mr3-pod-role: master-role
  subnets:
  - subnet-0b0ffc61c791591d5
- name: mr3-worker
  selectors:
  - namespace: hivemr3
    labels:
      mr3-pod-role: container-role
  subnets:
  - subnet-0b0ffc61c791591d5
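For reference, a Fargate profile schedules only those Pods whose namespace and labels match one of its selectors. The following is a minimal, hypothetical Pod manifest (not part of the Hive on MR3 distribution) that the mr3-master profile would match:
apiVersion: v1
kind: Pod
metadata:
  name: example-master-pod      # hypothetical name, for illustration only
  namespace: hivemr3            # matches the selector namespace
  labels:
    mr3-pod-role: master-role   # matches the selector labels
spec:
  containers:
  - name: example
    image: busybox              # placeholder image
    command: ["sleep", "3600"]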
Execute the command eksctl to create Fargate profiles mr3-master and mr3-worker.
$ eksctl create fargateprofile -f kubernetes/fargate/all-profile.yaml
...
[ℹ] creating Fargate profile "mr3-master" on EKS cluster "hive-mr3"
[ℹ] created Fargate profile "mr3-master" on EKS cluster "hive-mr3"
[ℹ] creating Fargate profile "mr3-worker" on EKS cluster "hive-mr3"
[ℹ] created Fargate profile "mr3-worker" on EKS cluster "hive-mr3"
$ eksctl get fargateprofile --cluster hive-mr3
NAME SELECTOR_NAMESPACE SELECTOR_LABELS POD_EXECUTION_ROLE_ARN SUBNETS TAGS
...
mr3-master hivemr3 mr3-pod-role=master-role arn:aws:iam::625137646557:role/eksctl-hive-mr3-cluster-FargatePodExecutionRole-OOUU31W5IRY4 subnet-0b0ffc61c791591d5 <none>
mr3-worker hivemr3 mr3-pod-role=container-role arn:aws:iam::625137646557:role/eksctl-hive-mr3-cluster-FargatePodExecutionRole-OOUU31W5IRY4 subnet-0b0ffc61c791591d5 <none>
Now HiveServer2, Metastore, and DAGAppMaster Pods run as Fargate Pods with profile mr3-master, and ContainerWorker Pods run as Fargate Pods with profile mr3-worker.
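Once Hive on MR3 is running, this can be verified by listing the Pods in the namespace hivemr3 together with the nodes they run on; Fargate Pods are placed on virtual nodes whose names typically start with fargate-ip- (node names will differ in your environment):
$ kubectl get pods -n hivemr3 -o wide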
Creating a PersistentVolume using EFS
If the user decides to create a PersistentVolume using EFS (instead of using S3), use the Amazon EFS CSI driver as explained in Creating a PersistentVolume using EFS.
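For reference, when the standard Amazon EFS CSI driver is used, the storage class efs-sc defined in kubernetes/efs-csi/sc.yaml typically looks like the following (a minimal sketch; see the page above for the full instructions):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com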
Deleting the EKS/Fargate cluster
If the user has created a PersistentVolume using EFS, delete the PersistentVolumeClaim, the PersistentVolume, and the storage class efs-sc.
$ kubectl delete -f kubernetes/efs-csi/workdir-pvc.yaml
$ kubectl delete -f kubernetes/efs-csi/workdir-pv.yaml
$ kubectl delete -f kubernetes/efs-csi/sc.yaml
storageclass.storage.k8s.io "efs-sc" deleted
Open the AWS console and delete the security group for allowing inbound NFS traffic for EFS mount points.
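Alternatively, the security group can be removed with the AWS CLI if its ID is known (sg-0123456789abcdef0 below is a hypothetical placeholder):
$ aws ec2 delete-security-group --group-id sg-0123456789abcdef0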
In order to delete the Fargate-only cluster, execute the command eksctl. Do not delete the Fargate profiles manually (e.g., by executing eksctl delete fargateprofile --cluster hive-mr3 mr3-master) because they are automatically deleted along with the Fargate-only cluster.
$ eksctl delete cluster --name hive-mr3
[ℹ] eksctl version 0.28.1
[ℹ] using region ap-northeast-1
[ℹ] deleting EKS cluster "hive-mr3"
[ℹ] deleting Fargate profile "fp-default"
[ℹ] deleted Fargate profile "fp-default"
[ℹ] deleting Fargate profile "mr3-master"
[ℹ] deleted Fargate profile "mr3-master"
[ℹ] deleting Fargate profile "mr3-worker"
[ℹ] deleted Fargate profile "mr3-worker"
[ℹ] deleted 3 Fargate profile(s)
...
[✔] all cluster resources were deleted