How YARN Changed Hadoop Job Scheduling
Preemption allows some jobs to take the place of others that are currently running, usually based on the priority level of the job itself. The preempted job may be killed outright, suspended or simply requeued. Each of these options comes with benefits and disadvantages. Preemption in general tends to cause internal political challenges, but none so much as preemption by killing. Scheduling submitted high-priority work to be the next job run when resources become available balances the needs of high-priority work without the disruption a kill-style preemption model can cause. A further alternative is to requeue preempted jobs automatically instead of killing them. The best approach to preemption is intimately related to the workload profile.
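The requeue-style policy described above can be sketched in a few lines. This is an illustrative model, not any particular workload manager's implementation; the job fields and function name are assumptions for the example.

```python
# Hypothetical sketch of a requeue-style preemption decision: free just
# enough slots for incoming high-priority work by requeueing (not killing)
# the lowest-priority running jobs.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int      # higher number = more important
    running: bool

def select_preemption_victims(running, incoming, needed_slots):
    """Pick the lowest-priority running jobs to requeue until enough
    slots are freed for the incoming higher-priority job."""
    victims = []
    # Consider lowest-priority jobs first; requeue only those with
    # strictly lower priority than the incoming work.
    for job in sorted(running, key=lambda j: j.priority):
        if len(victims) == needed_slots:
            break
        if job.priority < incoming.priority:
            victims.append(job)
    return victims
```

A kill-style policy would differ only in what is done to the victims; the selection logic stays the same, which is why the disruption trade-off is a policy choice rather than an algorithmic one.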
Exclusive Job Placement
Exclusively placing jobs onto a node is an important job placement policy. Exclusively placing a job on a node means that no other job can be placed on that node once the first job is assigned to it. Exclusive placement is important when users want to ensure that there is absolutely no contention for resources with other jobs on the selected nodes. Users might request this type of placement when rendering video or graphics, where memory is the rate-limiting factor in total wall time.
Exclusive placement can be enabled on most systems by sizing the job resource request to encompass an entire node. To do this, submitting users have to know specific hardware details of nodes in the cluster, and this approach also assumes node homogeneity. In many cases, users have no knowledge of the exact configuration of nodes, or there may be some level of heterogeneity across nodes in the cluster. A resource manager whose job submission language includes a client-side flag for exclusive placement is therefore highly desirable.
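Two widely used workload managers expose exactly this kind of client-side flag. The job script name below is a placeholder; consult each product's documentation for version-specific syntax:

```
# Slurm: request a whole node so no other jobs share it
sbatch --exclusive render_job.sh

# PBS Professional: equivalent exclusive placement request
qsub -l place=excl render_job.sh
```

Either form spares the user from knowing node hardware details, since the scheduler, not the submitter, decides which node satisfies the exclusive request.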
Advanced cluster users eventually find that creating their own algorithm for custom job placement becomes necessary. In practice, these algorithms tend to be closely guarded and bound to some proprietary process specific to the owner's vertical line of business. An example of a custom algorithm might be assigning specific jobs an immediate high priority based upon an organizational goal or specific project.
SLA- or QoS-Based Policies
It is often difficult to guarantee that a job will complete within a required window. Most workload management systems have a direct or indirect way to configure scheduling policy such that jobs are guaranteed to finish within given time constraints. Alternatively, there may be ways to define custom qualities used to build scheduling policy.
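At its core, a deadline-oriented policy has to answer one question before dispatching: will the job finish inside its window if started now? A minimal sketch, assuming the scheduler has a runtime estimate for the job (the field names here are illustrative):

```python
# Illustrative SLA-style admission check: dispatch a job now only if its
# estimated runtime fits inside the required completion window.
from datetime import datetime, timedelta

def meets_deadline(estimated_runtime: timedelta,
                   deadline: datetime,
                   now: datetime) -> bool:
    """Return True if starting the job now should finish it on time."""
    return now + estimated_runtime <= deadline
```

Real systems layer reservations and backfill on top of this check, but the feasibility test is the same idea.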
Round-robin placement will take jobs from each queue in a specific order, usually within a single scheduling cycle. The queues are ordered by priority in most systems, but the exact behavior can be tricky depending upon the additional options used (for example, strict ordering in PBS Professional).
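The basic mechanism, one job taken from each non-empty queue per cycle, visiting queues in priority order, can be sketched as follows (an illustration, not any product's scheduler):

```python
# Minimal sketch of round-robin dispatch across priority-ordered queues:
# one job is taken from each non-empty queue per scheduling cycle.
from collections import deque

def scheduling_cycle(queues):
    """queues: list of deques, already ordered by queue priority.
    Returns the jobs dispatched in this cycle."""
    dispatched = []
    for queue in queues:          # visit queues in priority order
        if queue:
            dispatched.append(queue.popleft())
    return dispatched
```

Options like strict ordering change this loop's behavior, for example by refusing to skip past a blocked high-priority queue, which is where the tricky interactions mentioned above come from.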
HPC Workload Manager Resource Types
Workload managers use resource request languages to help the scheduler place work on nodes. Many job placement scenarios include the specification of static or built-in resources as well as the ability to use custom-style resources defined using a script. Resource types tend to reflect programming primitives like boolean, numerical and string as well as properties like static and dynamic to reflect the nature of the values. Some of these resource types are assigned specifically to hosts while others have to do with shared resources in a cluster like software licenses or allocation management (cluster use credits or chargebacks). These resources are all important in a multitenant distributed computing environment.
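The combinations described above (value type, static versus dynamic, host-bound versus shared) can be modeled compactly. The names below are illustrative, not any workload manager's actual API:

```python
# Sketch of how a workload manager might model resource types: a value of
# boolean/numeric/string type, a static value or one probed dynamically
# (e.g. by a script), and a scope of per-host or cluster-wide (licenses,
# allocation credits).
from dataclasses import dataclass
from typing import Callable, Optional, Union

Value = Union[bool, float, str]

@dataclass
class Resource:
    name: str
    value: Value
    static: bool = True        # static vs dynamic value
    host_bound: bool = True    # per-host vs shared across the cluster
    probe: Optional[Callable[[], Value]] = None

    def current_value(self) -> Value:
        # Dynamic resources are re-read via their probe on each query.
        if not self.static and self.probe is not None:
            return self.probe()
        return self.value

gpu = Resource("has_gpu", True)                            # static boolean, per host
licenses = Resource("sim_licenses", 8.0, host_bound=False) # shared numeric resource
```

A scheduler matching jobs to nodes would consult host-bound resources per node and decrement shared ones cluster-wide, which is exactly why the distinction matters in a multitenant environment.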
Hadoop Scheduling Policy
Hadoop currently makes use mainly of CPU and memory. There are additional selection criteria one can apply when specifying container requests. The Application Master can specify a hostname, rack placement information and priority. Over time, Hadoop will benefit from a more mature resource specification design similar to HPC. One such use case would be a boolean host resource to specify placement of containers onto nodes with specific hardware (for example, a GPU). Even though very robust placement of containers can be accomplished in the Java code of the Application Master, resource requests probably need to be made more generic and available at a higher level (that is, during submission time via a common client). YARN distinguishes what it calls static resources, defined by the submitting client, from dynamic resources, defined at runtime by the Application Master.
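The request fields the text lists (memory, CPU, hostname, rack, priority) plus the hypothetical boolean host resource can be pictured as follows. This is a simplified model for illustration, not the actual YARN ResourceRequest API:

```python
# Simplified model of a container request: the fields YARN supports today
# (memory, vcores, hostname, rack, priority) plus a hypothetical boolean
# host-attribute extension of the HPC style discussed in the text.
from dataclasses import dataclass, field

@dataclass
class ContainerRequest:
    memory_mb: int
    vcores: int
    hostname: str = "*"        # "*" meaning any host
    rack: str = "*"            # "*" meaning any rack
    priority: int = 0
    # Hypothetical extension: boolean host attributes, e.g. {"gpu": True}
    host_attrs: dict = field(default_factory=dict)

def satisfies(request: ContainerRequest, node_attrs: dict) -> bool:
    """Would a node with these attributes meet the boolean constraints?"""
    return all(node_attrs.get(k) == v for k, v in request.host_attrs.items())
```

Exposing such attributes at submission time via a common client, rather than only inside Application Master code, is the higher-level generic request the paragraph argues for.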
There are two built-in scheduling policies for Hadoop (excluding FIFO) at this time, but scheduling, like most things in Hadoop, is pluggable. Setting yarn.resourcemanager.scheduler.class to the desired class in the yarn-site.xml configuration file alters the specific scheduling type used. Custom scheduling policy classes can be defined here as well.
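For example, a yarn-site.xml entry selecting a scheduler looks like this (the class value shown is the Capacity Scheduler; exact class names and defaults can vary by Hadoop distribution, so check yours):

```
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```

A custom policy is enabled the same way, by pointing this property at the custom class on the ResourceManager's classpath.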
Scheduling policy for a Hadoop cluster is easy to access via a Web browser. Simply navigate to http://ResourceManager:port/cluster/scheduler using the Resource Manager hostname or IP and the correct port for the distribution of Hadoop being used.
Figure 2. The scheduler page of the Resource Manager Web interface showing queue configuration and data on running applications.
FIFO
FIFO is the standard first-in, first-out method one might expect as a default scheduling option. It operates by accepting jobs and dispatching them in the order received.
Adam Diaz is a longtime Linux geek and fan of distributed/parallel systems. Adam cut his teeth working for companies like Platform Computing, Altair Engineering and a handful of startups.