This page shows how to use Helm to operate Hive on MR3 on Kubernetes with multiple nodes. By following these instructions, the user will learn:
- how to start Metastore
- how to run Hive on MR3
This scenario has the following prerequisites:
- A running Kubernetes cluster is available, and the user can execute the command `helm` to use Helm.
- A database server for Metastore is running.
- Either HDFS or S3 (or S3-compatible storage) is available for storing the warehouse. For using S3, access credentials are required.
- The user can either create a PersistentVolume or store transient data on HDFS or S3. The PersistentVolume should be writable by 1) the user with UID 1000, and 2) the user `nobody` (corresponding to the root user) if Ranger is to be used for authorization.
- Every worker node has an identical set of local directories for storing intermediate data (to be mapped to hostPath volumes). These directories should be writable by the user with UID 1000 because all containers run as a non-root user with UID 1000.
- The user can run Beeline to connect to HiveServer2 running at a given address.
We use a pre-built Docker image available at DockerHub, so the user does not have to build a new Docker image. In our example, we use a MySQL server for Metastore, but Postgres and MS SQL servers also work.
This scenario should take less than 30 minutes to complete, not including the time for downloading a pre-built Docker image. We use Helm 2.16.9.
To ask questions, please visit the MR3 Google Group or join the MR3 Slack.
Overview
Hive on MR3 uses the following components: Metastore, HiveServer2, MR3 DAGAppMaster, and MR3 ContainerWorkers.
- The user can connect to a public HiveServer2 (via JDBC/ODBC) which is exposed to the outside of the Kubernetes cluster.
- The user can connect directly to MR3-UI, Grafana, and Ranger.
- Internally we run a Timeline Server to collect history logs from MR3 DAGAppMaster, and a Prometheus server to collect metrics from MR3 DAGAppMaster.
Hive on MR3 requires three types of storage:
- Data source such as HDFS or S3
- PersistentVolume (or HDFS/S3) for storing transient data
- hostPath volumes for storing intermediate data in ContainerWorker Pods
The hostPath volumes are created and mounted by MR3 at runtime for each ContainerWorker Pod. These hostPath volumes hold intermediate data to be shuffled (by shuffle handlers) between ContainerWorker Pods. In order to be able to create the same set of hostPath volumes for every ContainerWorker Pod, an identical set of local directories should be ready in all worker nodes (where ContainerWorker Pods are created).
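The prerequisite above can be sketched as a small script. This is a hedged illustration, not part of the MR3 release: the function name is ours, and the demo run uses harmless paths under /tmp, whereas on a real worker node it would be run as root on the actual directories (e.g., /data1/k8s).

```shell
# Hedged sketch: create an identical set of local directories on a worker node
# and make them writable by UID 1000, which all containers run as.
prepare_hostpath_dirs() {
  for d in "$@"; do
    mkdir -p "$d"
    # chown requires root on a real worker node; ignore failure in this demo
    chown 1000:1000 "$d" 2>/dev/null || true
    chmod 755 "$d"
  done
}

# On each worker node (as root), use the actual directories, e.g.:
#   prepare_hostpath_dirs /data1/k8s /data2/k8s /data3/k8s
# Harmless demo run under /tmp:
prepare_hostpath_dirs /tmp/k8s-demo1 /tmp/k8s-demo2
```

The same list of directories must later appear in the configuration key `mr3.k8s.pod.worker.hostpaths`.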
Installation
Download an MR3 release containing the executable scripts.
$ git clone https://github.com/mr3project/mr3-run-k8s.git
$ cd mr3-run-k8s/kubernetes/
$ git checkout release-1.11-hive3
Linking configuration files
We will reuse the configuration files in `conf/` (and the keys in `key/` if Kerberos is used for authentication).
Create symbolic links.
$ mkdir -p key
$ ln -s $(pwd)/conf/ helm/hive/conf
$ ln -s $(pwd)/key/ helm/hive/key
Now any change to the configuration files in `conf/` is honored when running Hive on MR3.
For configuring Hive on MR3, we create a new file `values-hive.yaml`, which is a collection of values that override those in `helm/hive/values.yaml`.
Configuring Metastore Database
Open `values-hive.yaml` and set the following fields for Metastore.
- Set `databaseHost` to the IP address of the MySQL server for Metastore. In our example, the MySQL server is running at the IP address 192.168.10.1.
- Set `databaseName` to the database name for Metastore in the MySQL server.
- Set `warehouseDir` to the HDFS directory or the S3 bucket storing the warehouse (e.g., `hdfs://hdfs.server:8020/hive/warehouse` or `s3a://hivemr3/hive/warehouse`). Note that for S3, we should use the prefix `s3a`, not `s3`.
- Set `dbType` to the database type for Metastore, which is used as an argument to `schematool` (`mysql` for MySQL, `postgres` for Postgres, `mssql` for MS SQL).
$ vi values-hive.yaml
metastore:
databaseHost: "192.168.10.1"
databaseName: hive3mr3
warehouseDir: s3a://hivemr3/warehouse
dbType: mysql
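Before installing the chart, it may help to sanity-check that the database server is reachable from the client machine. The snippet below is our own illustration; it assumes MySQL's default port 3306 and uses bash's /dev/tcp for the connectivity check.

```shell
# Hedged sketch: check TCP connectivity to the Metastore database server.
DB_HOST="${DB_HOST:-192.168.10.1}"   # metastore.databaseHost in values-hive.yaml
DB_PORT="${DB_PORT:-3306}"           # default MySQL port (an assumption)
if timeout 5 bash -c "exec 3<>/dev/tcp/${DB_HOST}/${DB_PORT}" 2>/dev/null; then
  echo "database reachable at ${DB_HOST}:${DB_PORT}"
else
  echo "cannot reach ${DB_HOST}:${DB_PORT}" >&2
fi
```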
Setting host aliases (optional)
To use host names (instead of IP addresses) when configuring Hive on MR3, the user should update:
- the `hostAliases` field in `values-hive.yaml`
- the configuration key `mr3.k8s.host.aliases` in `conf/mr3-site.xml`
For example, if `metastore/databaseHost` or `metastore/warehouseDir` in `values-hive.yaml` uses a host name `orange0`/`orange1` with IP address 192.168.10.100/192.168.10.1, update as follows:
$ vi values-hive.yaml
hostAliases:
- ip: "192.168.10.100"
hostnames:
- "orange0"
- ip: "192.168.10.1"
hostnames:
- "orange1"
$ vi conf/mr3-site.xml
<property>
<name>mr3.k8s.host.aliases</name>
<value>orange0=192.168.10.100,orange1=192.168.10.1</value>
</property>
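The value of `mr3.k8s.host.aliases` is a comma-separated list of host=IP pairs. Purely as an illustration, the snippet below expands such a value into /etc/hosts-style lines to show the mapping it encodes:

```shell
# Illustrative only: expand mr3.k8s.host.aliases into /etc/hosts-style lines.
ALIASES="orange0=192.168.10.100,orange1=192.168.10.1"
echo "$ALIASES" | tr ',' '\n' | while IFS='=' read -r host ip; do
  printf '%s\t%s\n' "$ip" "$host"
done
```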
Directory for storing transient data
The user can use either a PersistentVolume or HDFS/S3 to store transient data. For using a PersistentVolume, follow the instructions in 1. Creating and mounting PersistentVolume. For using HDFS/S3, follow the instructions in 2. Using HDFS/S3.
For running Hive on MR3 in production, using a PersistentVolume is recommended.
1. Creating and mounting PersistentVolume
The `workDir` fields in `values-hive.yaml` specify how to create a PersistentVolume. In our example, we use an NFS volume by setting `workDir.isNfs` to true. The NFS server runs at 192.168.10.1, uses the directory `/home/nfs/hivemr3`, and allows up to 100GB of disk space. We also specify the size of the storage to be used by Hive on MR3 in `workDir.volumeClaimSize`. The PersistentVolume should be writable by the user with UID 1000.
$ vi values-hive.yaml
workDir:
create: true
isNfs: true
nfs:
server: "192.168.10.1"
path: "/home/nfs/hivemr3"
volumeSize: 100Gi
volumeClaimSize: 10Gi
To use a different type of PersistentVolume, the user should set the field `workDir.isNfs` to false and set the field `workDir.volumeStr` to a string to be used in `helm/hive/templates/workdir-pv.yaml`.
2. Using HDFS/S3
As we do not use a PersistentVolume, set the field `workDir.create` to false in `values-hive.yaml`.
$ vi values-hive.yaml
workDir:
create: false
Set the configuration keys `hive.exec.scratchdir` and `hive.query.results.cache.directory` in `conf/hive-site.xml` to point to the directory on HDFS or S3 for storing transient data. Note that for S3, we should use the prefix `s3a`, not `s3`.
$ vi conf/hive-site.xml
<property>
<name>hive.exec.scratchdir</name>
<value>s3a://hivemr3/workdir</value>
</property>
<property>
<name>hive.query.results.cache.directory</name>
<value>s3a://hivemr3/workdir/_resultscache_</value>
</property>
Configuring S3 (optional)
For accessing S3-compatible storage, additional configuration keys should be set in `conf/core-site.xml`. Open `conf/core-site.xml` and set the configuration keys for S3. The configuration key `fs.s3a.endpoint` should point to the storage server.
$ vi conf/core-site.xml
<property>
<name>fs.s3a.aws.credentials.provider</name>
<value>com.amazonaws.auth.EnvironmentVariableCredentialsProvider</value>
</property>
<property>
<name>fs.s3a.endpoint</name>
<value>http://orange0:9000</value>
</property>
<property>
<name>fs.s3a.path.style.access</name>
<value>true</value>
</property>
The user may need to change the parameters for accessing S3 to avoid `SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool`. For more details, see Troubleshooting.
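As an illustration (the values below are ours, not prescriptive), enlarging the S3A connection pool and thread pool in `conf/core-site.xml` is a common way to mitigate this exception:

```xml
<property>
  <name>fs.s3a.connection.maximum</name>
  <value>2000</value>
</property>
<property>
  <name>fs.s3a.threads.max</name>
  <value>100</value>
</property>
```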
When using the class `EnvironmentVariableCredentialsProvider` to read AWS credentials, add the two environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `helm/hive/env-secret.sh`.
$ vi helm/hive/env-secret.sh
export AWS_ACCESS_KEY_ID=_your_aws_access_key_id_
export AWS_SECRET_ACCESS_KEY=_your_aws_secret_secret_key_
Note that `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are already appended to the values of the configuration keys `mr3.am.launch.env` and `mr3.container.launch.env` in `conf/mr3-site.xml`.
For security reasons, the user should NOT write the AWS access key ID and secret access key in `conf/mr3-site.xml`.
$ vi conf/mr3-site.xml
<property>
<name>mr3.am.launch.env</name>
<value>LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/,HADOOP_CREDSTORE_PASSWORD,AWS_ACCESS_KEY_ID,AWS_SECRET_ACCESS_KEY,USE_JAVA_17</value>
</property>
<property>
<name>mr3.container.launch.env</name>
<value>LD_LIBRARY_PATH=/opt/mr3-run/hadoop/apache-hadoop/lib/native,HADOOP_CREDSTORE_PASSWORD,AWS_ACCESS_KEY_ID,AWS_SECRET_ACCESS_KEY,USE_JAVA_17</value>
</property>
Public IP address for HiveServer2
The field `hive.externalIp` in `values-hive.yaml` specifies the IP address for exposing HiveServer2 to the outside of the Kubernetes cluster. The user should specify a public IP address for HiveServer2 so that clients can connect to it from outside the Kubernetes cluster. By default, HiveServer2 uses port 9852 for Thrift transport and port 10001 for HTTP transport. In our example, we expose HiveServer2 at 192.168.10.1.
$ vi values-hive.yaml
hive:
port: 9852
httpport: 10001
externalIp: 192.168.10.1
A valid host name for the IP address is necessary for secure communication using SSL.
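For example, a client outside the Kubernetes cluster would connect with a JDBC URL built from `hive.externalIp` and the Thrift port. The snippet below is a sketch; the user name `hive` passed to Beeline is an assumption.

```shell
# Build the JDBC URL for the public HiveServer2 endpoint (Thrift transport).
HS2_HOST="192.168.10.1"   # hive.externalIp in values-hive.yaml
HS2_PORT="9852"           # Thrift transport port
JDBC_URL="jdbc:hive2://${HS2_HOST}:${HS2_PORT}/"
echo "${JDBC_URL}"
# On a machine with Beeline installed (user name is an assumption):
#   beeline -u "${JDBC_URL}" -n hive
```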
Resources for Metastore and HiveServer2 Pods
To adjust resources for the Metastore and HiveServer2 Pods, set the following fields in `values-hive.yaml`: `metastore.resources`, `metastore.heapSize`, `hive.resources`, `hive.heapSize`. `metastore.heapSize` and `hive.heapSize` specify the Java heap size (in MB).
$ vi values-hive.yaml
metastore:
resources:
requests:
cpu: 2
memory: 16Gi
limits:
cpu: 2
memory: 16Gi
heapSize: 16384
hive:
resources:
requests:
cpu: 2
memory: 16Gi
limits:
cpu: 2
memory: 16Gi
heapSize: 16384
Resources for DAGAppMaster Pod
By default, we allocate 16GB of memory and 4 CPUs to a DAGAppMaster Pod.
To adjust resources, update `conf/mr3-site.xml`.
$ vi conf/mr3-site.xml
<property>
<name>mr3.am.resource.memory.mb</name>
<value>16384</value>
</property>
<property>
<name>mr3.am.resource.cpu.cores</name>
<value>4</value>
</property>
Resources for mappers, reducers, and ContainerWorker Pods
Change the resources allocated to each mapper, reducer, and ContainerWorker by updating `conf/hive-site.xml`. In particular, the configuration keys `hive.mr3.all-in-one.containergroup.memory.mb` and `hive.mr3.all-in-one.containergroup.vcores` should be adjusted so that a ContainerWorker can fit in a worker node. In our example, we allocate 4GB of memory to each mapper/reducer and 16GB of memory to each ContainerWorker Pod.
$ vi conf/hive-site.xml
<property>
<name>hive.mr3.map.task.memory.mb</name>
<value>4096</value>
</property>
<property>
<name>hive.mr3.map.task.vcores</name>
<value>1</value>
</property>
<property>
<name>hive.mr3.reduce.task.memory.mb</name>
<value>4096</value>
</property>
<property>
<name>hive.mr3.reduce.task.vcores</name>
<value>1</value>
</property>
<property>
<name>hive.mr3.all-in-one.containergroup.memory.mb</name>
<value>16384</value>
</property>
<property>
<name>hive.mr3.all-in-one.containergroup.vcores</name>
<value>4</value>
</property>
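With the settings above, the number of mappers or reducers that can run concurrently in one ContainerWorker follows from simple division; in our example both the memory bound and the vcore bound give 4. A sketch of the arithmetic:

```shell
# Concurrent tasks per ContainerWorker = min(memory bound, vcore bound).
CONTAINER_MB=16384    # hive.mr3.all-in-one.containergroup.memory.mb
TASK_MB=4096          # hive.mr3.map.task.memory.mb (reducers use the same value)
CONTAINER_VCORES=4    # hive.mr3.all-in-one.containergroup.vcores
TASK_VCORES=1         # hive.mr3.map.task.vcores
MEM_BOUND=$(( CONTAINER_MB / TASK_MB ))
CPU_BOUND=$(( CONTAINER_VCORES / TASK_VCORES ))
if [ "$MEM_BOUND" -lt "$CPU_BOUND" ]; then TASKS=$MEM_BOUND; else TASKS=$CPU_BOUND; fi
echo "concurrent tasks per ContainerWorker: ${TASKS}"
```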
For more details, see Performance Tuning.
Mounting hostPath volumes
As a prerequisite, every worker node where ContainerWorker Pods may run should have an identical set of local directories for storing intermediate data. These directories are mapped to hostPath volumes in each ContainerWorker Pod. Set the configuration key `mr3.k8s.pod.worker.hostpaths` in `conf/mr3-site.xml` to the list of local directories.
$ vi conf/mr3-site.xml
<property>
<name>mr3.k8s.pod.worker.hostpaths</name>
<value>/data1/k8s,/data2/k8s,/data3/k8s</value>
</property>
Docker image
We use a pre-built Docker image (e.g., `mr3project/hive3:1.11` for Hive 3 on MR3 and `mr3project/hive4:4.0.1` for Hive 4 on MR3). If the user wants to use a different Docker image, set the `docker` fields in `values-hive.yaml`.
$ vi values-hive.yaml
docker:
image: mr3project/hive3:1.11
containerWorkerImage: mr3project/hive3:1.11
Configuring Metastore
Open `conf/hive-site.xml` and update the configurations for Metastore as necessary. Below we list some of the configuration keys that the user should check.
The two configuration keys `javax.jdo.option.ConnectionUserName` and `javax.jdo.option.ConnectionPassword` should match the user name and password of the MySQL server for Metastore. For simplicity, we disable security on the Metastore side.
$ vi conf/hive-site.xml
<property>
<name>hive.metastore.db.type</name>
<value>MYSQL</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>passwd</value>
</property>
<property>
<name>hive.metastore.pre.event.listeners</name>
<value></value>
</property>
<property>
<name>metastore.pre.event.listeners</name>
<value></value>
</property>
For testing purposes, we do not need compaction initiator and cleaner threads in Metastore.
$ vi conf/hive-site.xml
<property>
<name>hive.compactor.initiator.on</name>
<value>false</value>
</property>
By default, Metastore does not initialize the schema. To initialize the schema when starting Metastore, set the field `metastore.initSchema` to true in `values-hive.yaml`:
$ vi values-hive.yaml
metastore:
initSchema: true
Update `helm/hive/templates/metastore.yaml` to remove `nodeAffinity`, as we do not use node affinity rules.
$ vi helm/hive/templates/metastore.yaml
affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: roles
# operator: In
# values:
# - "masters"
Configuring HiveServer2
Check the configuration for authentication and authorization:
$ vi conf/hive-site.xml
<property>
<name>hive.security.authenticator.manager</name>
<value>org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator</value>
</property>
<property>
<name>hive.security.authorization.manager</name>
<value>org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory</value>
</property>
Running Metastore and HiveServer2
Before running Metastore and HiveServer2, the user should make sure that no ConfigMaps and Services exist in the namespace `hivemr3`. For example, the user may see ConfigMaps and Services left over from a previous run.
$ kubectl get configmaps -n hivemr3
NAME DATA AGE
mr3conf-configmap-master 1 16m
mr3conf-configmap-worker 1 16m
$ kubectl get svc -n hivemr3
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-master-1237-0 ClusterIP 10.105.238.21 <none> 80/TCP 11m
service-worker ClusterIP None <none> <none> 11m
In such a case, manually delete these ConfigMaps and Services.
$ kubectl delete configmap -n hivemr3 mr3conf-configmap-master mr3conf-configmap-worker
$ kubectl delete svc -n hivemr3 service-master-1237-0 service-worker
Install the Helm chart for Hive on MR3 with `values-hive.yaml`. We use `hivemr3` for the namespace. Metastore automatically downloads a MySQL connector from https://cdn.mysql.com/Downloads/Connector-J/mysql-connector-java-8.0.28.tar.gz.
$ helm install --namespace hivemr3 helm/hive -f values-hive.yaml
2022/07/31 00:21:51 found symbolic link in path: /home/gitlab-runner/mr3-run-k8s/kubernetes/helm/hive/conf resolves to /home/gitlab-runner/mr3-run-k8s/kubernetes/conf
2022/07/31 00:21:51 found symbolic link in path: /home/gitlab-runner/mr3-run-k8s/kubernetes/helm/hive/key resolves to /home/gitlab-runner/mr3-run-k8s/kubernetes/key
NAME: jaundiced-fox
LAST DEPLOYED: Sun Jul 31 00:21:51 2022
NAMESPACE: hivemr3
STATUS: DEPLOYED
...
==> v1/ConfigMap
NAME DATA AGE
client-am-config 4 1s
env-configmap 1 1s
hivemr3-conf-configmap 15 1s
...
Check if all ConfigMaps are non-empty. If the `DATA` column for `hivemr3-conf-configmap` is 0, try to remove unnecessary files in the directory `conf` or `helm/hive/conf`. This usually happens when a temporary file (e.g., `.hive-site.xml.swp`) is kept at the time of installing the Helm chart.
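A quick way to spot such leftover files before installing the chart is a find over the configuration directory. This snippet is our own illustration; `CONF_DIR` defaults to the current directory so it is safe to run anywhere, but on a real setup it should point at `conf/` or `helm/hive/conf/`.

```shell
# List hidden editor temp files (e.g. Vim swap files) that would end up in the ConfigMap.
CONF_DIR="${CONF_DIR:-.}"
find "${CONF_DIR}" -name '.*.swp' -o -name '*~' -o -name '.#*'
```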
Helm mounts the following files inside the Metastore and HiveServer2 Pods:
- `helm/hive/env.sh`
- `conf/*`
In this way, the user can completely specify the behavior of Metastore and HiveServer2 as well as DAGAppMaster and ContainerWorkers.
HiveServer2 creates a DAGAppMaster Pod. Note, however, that no ContainerWorker Pods are created until queries are submitted. Depending on the configuration for the readiness probe, HiveServer2 may restart once before running normally. In our example, HiveServer2 becomes ready in 109 seconds.
$ kubectl get pods -n hivemr3
NAME READY STATUS RESTARTS AGE
hivemr3-hiveserver2-c6bc4b6f4-kjc2k 1/1 Running 0 109s
hivemr3-metastore-0 1/1 Running 0 109s
mr3master-1627-0-59564dfb95-97kwc 1/1 Running 0 87s
The user can check the log of the DAGAppMaster Pod to make sure that it has started properly.
$ kubectl logs -f -n hivemr3 mr3master-1627-0-59564dfb95-97kwc
...
2022-07-30T15:24:05,825 INFO [DAGAppMaster-1-13] HeartbeatHandler$: Timeout check in HeartbeatHandler:Container
2022-07-30T15:24:05,905 INFO [K8sContainerLauncher-3-1] K8sContainerLauncher: Resynchronizing Pod states for appattempt_83201627_0000_000000: 0
Running Beeline
The user may use any client program to connect to HiveServer2. In our example, we run Beeline inside the HiveServer2 Pod.
$ kubectl exec -n hivemr3 -it hivemr3-hiveserver2-c6bc4b6f4-kjc2k -- /bin/bash
hive@hivemr3-hiveserver2-c6bc4b6f4-kjc2k:/opt/mr3-run/hive$ export USER=hive
hive@hivemr3-hiveserver2-c6bc4b6f4-kjc2k:/opt/mr3-run/hive$ /opt/mr3-run/hive/run-beeline.sh
...
Connecting to jdbc:hive2://hivemr3-hiveserver2-c6bc4b6f4-kjc2k:9852/;;;
Connected to: Apache Hive (version 3.1.3)
Driver: Hive JDBC (version 3.1.3)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.3 by Apache Hive
0: jdbc:hive2://hivemr3-hiveserver2-c6bc4b6f4>
After running a few queries, ContainerWorker Pods are created.
$ kubectl get pods -n hivemr3
NAME READY STATUS RESTARTS AGE
hivemr3-hiveserver2-c6bc4b6f4-kjc2k 1/1 Running 0 10m
hivemr3-metastore-0 1/1 Running 0 10m
mr3master-1627-0-59564dfb95-97kwc 1/1 Running 0 9m51s
mr3worker-ed71-1 1/1 Running 0 88s
mr3worker-ed71-2 1/1 Running 0 56s
mr3worker-ed71-3 1/1 Running 0 48s
mr3worker-ed71-4 1/1 Running 0 40s
mr3worker-ed71-5 1/1 Running 0 40s
mr3worker-ed71-6 1/1 Running 0 40s
mr3worker-ed71-7 1/1 Running 0 40s
mr3worker-ed71-8 1/1 Running 0 40s
mr3worker-ed71-9 1/1 Running 0 26s
Stopping Hive on MR3
To terminate Hive on MR3, the user should first delete the DAGAppMaster Pod and then delete the Helm chart, not the other way around. This is because deleting the Helm chart revokes the ServiceAccount object that DAGAppMaster uses to delete ContainerWorker Pods. Hence, if the user deletes the Helm chart first, all remaining Pods must be deleted manually.
Delete the Deployment for DAGAppMaster, which in turn deletes all ContainerWorker Pods automatically.
$ kubectl get deployment -n hivemr3
NAME READY UP-TO-DATE AVAILABLE AGE
hivemr3-hiveserver2 1/1 1 1 10m
mr3master-1627-0 1/1 1 1 10m
$ kubectl -n hivemr3 delete deployment mr3master-1627-0
deployment.apps "mr3master-1627-0" deleted
Delete the Helm chart.
$ helm delete jaundiced-fox
release "jaundiced-fox" deleted
As the last step, the user will find that the following objects in the namespace `hivemr3` are still alive:
- two ConfigMaps `mr3conf-configmap-master` and `mr3conf-configmap-worker`
- the Service for DAGAppMaster, e.g., `service-master-1627-0`
- the Service `service-worker`
$ kubectl get configmaps -n hivemr3
NAME DATA AGE
mr3conf-configmap-master 1 5m41s
mr3conf-configmap-worker 1 5m36s
$ kubectl get svc -n hivemr3
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-master-1627-0 ClusterIP 10.96.143.123 <none> 80/TCP,9890/TCP 11m
service-worker ClusterIP None <none> <none> 11m
These ConfigMaps and Services are not deleted by the command `helm delete` because they are created not by Helm but by HiveServer2 and DAGAppMaster. Hence the user should delete these ConfigMaps and Services manually.
$ kubectl delete configmap -n hivemr3 mr3conf-configmap-master mr3conf-configmap-worker
configmap "mr3conf-configmap-master" deleted
configmap "mr3conf-configmap-worker" deleted
$ kubectl delete svc -n hivemr3 service-master-1627-0 service-worker
service "service-master-1627-0" deleted
service "service-worker" deleted