Support DAG scheduling schemes (specified by mr3.dag.queue.scheme).
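As an illustration, a scheme could be selected in mr3-site.xml roughly as follows (the scheme name "common" is an assumption, not necessarily the default or the only valid value):
  <property>
    <name>mr3.dag.queue.scheme</name>
    <value>common</value>
  </property>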
Optimize DAGAppMaster by freeing the memory used for messages sent to Tasks when fault tolerance is disabled (with mr3.am.task.max.failed.attempts set to 1).
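For example, fault tolerance would be disabled in mr3-site.xml with a setting along these lines (the value 1 comes from the item above):
  <property>
    <name>mr3.am.task.max.failed.attempts</name>
    <value>1</value>
  </property>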
Fix a minor memory leak in DaemonTask; the leak also prevented MR3 from running more than 2^30 DAGs when the shuffle handler is used.
Improve the chance of assigning TaskAttempts to ContainerWorkers that match location hints.
TaskScheduler can use location hints produced by ONE_TO_ONE edges.
TaskScheduler can use location hints from HDFS when assigning TaskAttempts to ContainerWorker Pods on Kubernetes (with mr3.convert.container.address.host.name).
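A minimal sketch of enabling this in mr3-site.xml, assuming the key takes a boolean value:
  <property>
    <name>mr3.convert.container.address.host.name</name>
    <value>true</value>
  </property>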
Introduce mr3.k8s.pod.cpu.cores.max.multiplier to specify the multiplier for the limit of CPU cores.
Introduce mr3.k8s.pod.memory.max.multiplier to specify the multiplier for the limit of memory.
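A sketch of setting both multipliers in mr3-site.xml (the values 2.0 are illustrative; presumably the limit is obtained by multiplying the requested amount by this value):
  <property>
    <name>mr3.k8s.pod.cpu.cores.max.multiplier</name>
    <value>2.0</value>
  </property>
  <property>
    <name>mr3.k8s.pod.memory.max.multiplier</name>
    <value>2.0</value>
  </property>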
Introduce mr3.k8s.pod.worker.security.context.sysctls to configure kernel parameters of ContainerWorker Pods using init containers.
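A hedged sketch, assuming the key takes a comma-separated list of sysctl assignments (the particular kernel parameter and its value are illustrative only):
  <property>
    <name>mr3.k8s.pod.worker.security.context.sysctls</name>
    <value>net.core.somaxconn=16384</value>
  </property>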
Support speculative execution of TaskAttempts (with mr3.am.task.concurrent.run.threshold.percent).
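For illustration, a setting like the following would trigger a concurrent TaskAttempt once a given percentage of TaskAttempts has completed (the value 95 and the exact semantics of the threshold are assumptions):
  <property>
    <name>mr3.am.task.concurrent.run.threshold.percent</name>
    <value>95</value>
  </property>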
A ContainerWorker can run multiple shuffle handlers, each with a different port. The configuration key mr3.use.daemon.shufflehandler now specifies the number of shuffle handlers in each ContainerWorker.
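For example, to run two shuffle handlers (each on its own port) in every ContainerWorker:
  <property>
    <name>mr3.use.daemon.shufflehandler</name>
    <value>2</value>
  </property>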
With speculative execution and the use of multiple shuffle handlers in a single ContainerWorker, fetch delays rarely occur.
A ContainerWorker Pod can run shuffle handlers in a separate container (with mr3.k8s.shuffle.process.ports).
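A sketch, assuming the key takes a comma-separated list of ports to be used by the separate shuffle handler container (the port numbers and the value format are assumptions):
  <property>
    <name>mr3.k8s.shuffle.process.ports</name>
    <value>15500,15510</value>
  </property>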
On Kubernetes, DAGAppMaster runs under a ReplicationController instead of a bare Pod, which makes recovery from failures much faster.
On Kubernetes, the ConfigMaps mr3conf-configmap-master and mr3conf-configmap-worker are not deleted when MR3 terminates, so the user should delete them manually.
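For example, the leftover ConfigMaps can be removed with kubectl (the namespace hivemr3 is an assumption; use the namespace in which MR3 ran):
  kubectl delete configmap mr3conf-configmap-master mr3conf-configmap-worker -n hivemr3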
Java 8u251/8u252 can be used on Kubernetes 1.17 and later.