In order to terminate Hive on MR3, the user should first delete the DAGAppMaster Pod (by deleting its Deployment) and then delete the Helm chart, not the other way around. This is because deleting the Helm chart revokes the ServiceAccount object that DAGAppMaster uses to delete ContainerWorker Pods. Hence, if the user deletes the Helm chart first, all remaining Pods must be deleted manually.
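If the Helm chart has already been deleted first, the leftover ContainerWorker Pods can be removed by hand. The following commands are a minimal sketch; the Pod names shown are hypothetical and depend on the installation.

# List the Pods that remain in the namespace after deleting the Helm chart.
$ kubectl get pods -n hivemr3

# Delete the leftover ContainerWorker Pods by name (names here are illustrative).
$ kubectl delete pod -n hivemr3 mr3worker-1234-1 mr3worker-1234-2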
Delete the Deployment for DAGAppMaster, which in turn deletes all ContainerWorker Pods automatically.
$ kubectl get deployment -n hivemr3
NAME                  DESIRED   CURRENT   READY   AGE
hivemr3-hiveserver2   1         1         1       5m37s
mr3master-3292-0      1         1         1       5m6s

$ kubectl -n hivemr3 delete deployment mr3master-3292-0
deployment "mr3master-3292-0" deleted
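Before moving on, one can optionally check that the ContainerWorker Pods are terminating along with the DAGAppMaster Pod (this check is not part of the original steps):

# All mr3master-* and mr3worker-* Pods should be gone or in Terminating state.
$ kubectl get pods -n hivemr3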
Deleting Helm chart
Delete the Helm chart.
$ helm delete gaudy-ladybird
release "gaudy-ladybird" deleted
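To confirm that the release has been deleted, one can list the current releases; gaudy-ladybird is the release name generated in this example and will differ in other installations.

# The deleted release should no longer appear among the deployed releases.
$ helm list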
Deleting ConfigMaps and Services
After deleting the Helm chart, the user will find that the following objects belonging to the namespace hivemr3 are still alive:
- two ConfigMaps
- two Services, one for DAGAppMaster and one for ContainerWorkers, e.g.,
$ kubectl get configmaps -n hivemr3
NAME                       DATA   AGE
mr3conf-configmap-master   1      21m
mr3conf-configmap-worker   1      21m

$ kubectl get svc -n hivemr3
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service-master-3292-0   ClusterIP   10.105.239.26   <none>        80/TCP    16m
service-worker          ClusterIP   None            <none>        <none>    16m
These ConfigMaps and Services are not deleted by the command helm delete because they are created not by Helm but by HiveServer2 and DAGAppMaster. These objects should not be deleted when HiveServer2 or DAGAppMaster terminates because another instance of HiveServer2 or DAGAppMaster may start later. Hence the user should delete these ConfigMaps and Services manually.
$ kubectl delete configmap -n hivemr3 mr3conf-configmap-master mr3conf-configmap-worker
configmap "mr3conf-configmap-master" deleted
configmap "mr3conf-configmap-worker" deleted

$ kubectl delete svc -n hivemr3 service-master-3292-0 service-worker
service "service-master-3292-0" deleted
service "service-worker" deleted
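As a final check (an addition to the steps above), the user can verify that no Hive on MR3 resources remain in the namespace hivemr3:

# Only unrelated objects, if any, should remain in the namespace.
$ kubectl get pods,deployments,svc,configmaps -n hivemr3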