In the previous approach, we created a PersistentVolume for storing transient data such as the results of running queries. With access to Amazon S3, the user can dispense with PersistentVolumes for Metastore and HiveServer2. In order to use S3, the user should skip or adjust those steps in the previous approach that deal with the PersistentVolume workdir-pv and the PersistentVolumeClaim workdir-pvc. For PersistentVolumes for Timeline Server and Apache Ranger, see Using HDFS instead of PersistentVolumes.


By default, MR3 DAGAppMaster checks the ownership and permissions of its staging directory (which is specified by a configuration key in mr3-site.xml and set automatically by HiveServer2) for security purposes. Since S3 is an object store that only simulates directories without maintaining ownership and permissions, the user should set this configuration key to false so as to skip the check on the staging directory.
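As a sketch, the setting in conf/mr3-site.xml might look as follows. The key name below is an assumption based on typical MR3 configuration; check your mr3-site.xml for the exact name.

```xml
<!-- conf/mr3-site.xml: skip the ownership/permission check on the staging directory.
     The key name is an assumption; verify it against your mr3-site.xml. -->
<property>
  <name>mr3.am.staging.dir.check.ownership.permission</name>
  <value>false</value>
</property>
```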



Set the configuration key hive.exec.scratchdir in hive-site.xml to point to the S3 bucket for the scratch directory of HiveServer2 (under which a staging directory for MR3 DAGAppMaster is created). Do not update the configuration key hive.downloaded.resources.dir, because it should point to a directory on the local file system.
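For example, the setting in conf/hive-site.xml might look as follows, where the bucket name and path are placeholders:

```xml
<!-- conf/hive-site.xml: scratch directory on S3 (bucket name and path are placeholders) -->
<property>
  <name>hive.exec.scratchdir</name>
  <value>s3a://hivemr3-scratch-dir/workdir</value>
</property>
```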


If the query results cache is enabled by setting the configuration key hive.query.results.cache.enabled to true, the configuration key for the cache directory should also point to an S3 bucket. Otherwise the query results cache is never used.
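A sketch of the corresponding settings in conf/hive-site.xml, where the bucket name and path are placeholders:

```xml
<!-- conf/hive-site.xml: query results cache on S3 (bucket name and path are placeholders) -->
<property>
  <name>hive.query.results.cache.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.query.results.cache.directory</name>
  <value>s3a://hivemr3-results-cache/workdir</value>
</property>
```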

Remove PersistentVolume workdir-pv and PersistentVolumeClaim workdir-pvc

Open kubernetes/ and set the following two environment variables to empty values.
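A minimal sketch of the change, assuming the file is kubernetes/env.sh and that the two variables are named as below (both names are assumptions based on typical MR3 scripts; adjust to match your file):

```shell
# kubernetes/env.sh (path assumed): set the two PersistentVolume-related
# environment variables to empty values so that no PersistentVolumeClaim is used.
# The variable names are assumptions; check your env.sh for the exact names.
WORK_DIR_PERSISTENT_VOLUME_CLAIM=
WORK_DIR_PERSISTENT_VOLUME_CLAIM_MOUNT_DIR=
```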


Open kubernetes/yaml/metastore.yaml and comment out the following lines:

# - name: work-dir-volume
#   mountPath: /opt/mr3-run/work-dir/

# - name: work-dir-volume
#   persistentVolumeClaim:
#     claimName: workdir-pvc

Open kubernetes/yaml/hive.yaml and comment out the following lines:

# - name: work-dir-volume
#   mountPath: /opt/mr3-run/work-dir

# - name: work-dir-volume
#   persistentVolumeClaim:
#     claimName: workdir-pvc

Now the user can run Hive on MR3 on Kubernetes without using PersistentVolumes. If, however, the Docker image does not contain a MySQL connector jar file, the user should use a hostPath volume to mount such a jar file in the directory /opt/mr3-run/host-lib inside the Metastore Pod. See Downloading a MySQL connector in Creating an EKS cluster for an example.
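As a sketch, the hostPath volume in kubernetes/yaml/metastore.yaml might be declared along the following lines, where the volume name and the host directory /home/ec2-user/lib are placeholders:

```yaml
# Fragment for kubernetes/yaml/metastore.yaml (volume name and host path are placeholders)
# Under the container spec:
    volumeMounts:
    - name: host-lib-volume
      mountPath: /opt/mr3-run/host-lib
# Under the Pod spec:
  volumes:
  - name: host-lib-volume
    hostPath:
      path: /home/ec2-user/lib    # host directory containing the MySQL connector jar
      type: Directory
```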

The user can also use user-defined functions (UDFs) by uploading jar files to S3, as in the following Beeline session.

Connecting to jdbc:hive2://;;
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 3.1.2)
Beeline version 3.1.2 by Apache Hive
0: jdbc:hive2://> use tpcds_partitioned_10_orc_s3a;
0: jdbc:hive2://> add jar s3a://hivemr3-warehouse-dir/temp1.jar;
INFO  : Added [/opt/mr3-run/work-dir/04e3e74b-6c98-4ef6-8706-d8466fb6223c_resources/temp1.jar] to class path
INFO  : Added resources: [s3a://hivemr3-warehouse-dir/temp1.jar]
No rows affected (0.132 seconds)
0: jdbc:hive2://> create temporary function foo as 'test.simple.SimpleClass1';
0: jdbc:hive2://> select foo(s_zip) from store limit 5;
+---------------------------+
|            _c0            |
+---------------------------+
| simple1-0713114453 53604  |
| simple1-0713114453 51904  |
| simple1-0713114453 31904  |
| simple1-0713114453 33604  |
| simple1-0713114453 59231  |
+---------------------------+
5 rows selected (1.253 seconds)
0: jdbc:hive2://> select foo(cc_city) from call_center, store where foo(cc_city) = foo(s_city) limit 10;
10 rows selected (5.517 seconds)
0: jdbc:hive2://> 

Note that directories and files created under the scratch directory survive HiveServer2, i.e., they are not deleted when HiveServer2 terminates. Hence the user should manually clean the scratch directory on S3 if necessary.