

TaskSchedulerImpl is the default TaskScheduler that uses a SchedulerBackend to schedule tasks (for execution on a cluster manager).

When a Spark application starts (and so an instance of [SparkContext] is created), TaskSchedulerImpl, together with a [SchedulerBackend] and [DAGScheduler], is created and soon started.

== TaskSchedulerImpl and Other Services

TaskSchedulerImpl <<resourceOffers, generates tasks based on executor resource offers>>.

TaskSchedulerImpl can <<getRackForHost, track racks per host and port>> (which however is [only used with Hadoop YARN cluster manager]).

Using the [spark.scheduler.mode] configuration property you can select the [scheduling policy].
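
For example (a minimal, self-contained sketch; the master and application name are illustrative only):

[source, scala]
----
import org.apache.spark.{SparkConf, SparkContext}

// Select the FAIR scheduling policy before the SparkContext
// (and so TaskSchedulerImpl) is created. FIFO is the default.
val conf = new SparkConf()
  .setMaster("local[*]")
  .setAppName("fair-scheduling-demo")
  .set("spark.scheduler.mode", "FAIR")
val sc = new SparkContext(conf)
----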

TaskSchedulerImpl <<submitTasks, submits tasks>> using [SchedulableBuilders].

[[CPUS_PER_TASK]] TaskSchedulerImpl uses the [spark.task.cpus] configuration property as the number of CPU cores to assign per task (e.g. when <<resourceOfferSingleTaskSet, finding tasks for executor resource offers>>).

== [[creating-instance]] Creating Instance

TaskSchedulerImpl takes the following to be created:

  • [[sc]] [SparkContext]
  • <<maxTaskFailures, Acceptable number of task failures>>
  • [[isLocal]] isLocal flag for local and cluster run modes (default: false)

TaskSchedulerImpl initializes the <<internal-properties, internal properties>>.

TaskSchedulerImpl sets [schedulingMode] to the value of the [spark.scheduler.mode] configuration property.

NOTE: schedulingMode is part of the [TaskScheduler] contract.

Failure to set schedulingMode results in a SparkException:

Unrecognized spark.scheduler.mode: [schedulingModeConf]
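
The mode resolution boils down to looking the name up in the SchedulingMode enumeration, roughly as follows (a paraphrased sketch, not the exact Spark source):

[source, scala]
----
import org.apache.spark.SparkException
import org.apache.spark.scheduler.SchedulingMode

// SchedulingMode is a Scala Enumeration, so withName throws
// NoSuchElementException for names other than FIFO, FAIR (and NONE).
def resolveSchedulingMode(schedulingModeConf: String): SchedulingMode.SchedulingMode =
  try {
    SchedulingMode.withName(schedulingModeConf.toUpperCase)
  } catch {
    case _: NoSuchElementException =>
      throw new SparkException(s"Unrecognized spark.scheduler.mode: $schedulingModeConf")
  }
----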

Ultimately, TaskSchedulerImpl creates a [TaskResultGetter].

== [[backend]] SchedulerBackend

TaskSchedulerImpl is assigned a [SchedulerBackend] when requested to <<initialize, initialize>>.

The lifecycle of the SchedulerBackend is tightly coupled to the lifecycle of the owning TaskSchedulerImpl:

  • When TaskSchedulerImpl is <<start, started>>, so is the [SchedulerBackend]

  • When TaskSchedulerImpl is <<stop, stopped>>, so is the [SchedulerBackend]

TaskSchedulerImpl <<waitBackendReady, waits until the SchedulerBackend is ready>> before requesting it for the following:

  • [Reviving resource offers] when requested to <<submitTasks, submitTasks>>, <<statusUpdate, statusUpdate>>, <<handleFailedTask, handleFailedTask>>, <<checkSpeculatableTasks, checkSpeculatableTasks>>, and <<executorLost, executorLost>>

  • [Killing tasks] when requested to killTaskAttempt and <<cancelTasks, cancelTasks>>

  • [Default parallelism], application ID and application attempt ID when requested for the <<defaultParallelism, default level of parallelism>>, <<applicationId, applicationId>> and <<applicationAttemptId, applicationAttemptId>>, respectively

== [[applicationId]] Unique Identifier of Spark Application

[source, scala]

applicationId(): String

NOTE: applicationId is part of the [TaskScheduler] contract.

applicationId simply requests the <<backend, SchedulerBackend>> for the [applicationId].
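
Paraphrased, it is a one-line delegation:

[source, scala]
----
// Straight delegation to the SchedulerBackend.
override def applicationId(): String = backend.applicationId()
----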

== [[nodeBlacklist]] nodeBlacklist Method


== [[cleanupTaskState]] cleanupTaskState Method


== [[newTaskId]] newTaskId Method


== [[getExecutorsAliveOnHost]] getExecutorsAliveOnHost Method


== [[isExecutorAlive]] isExecutorAlive Method


== [[hasExecutorsAliveOnHost]] hasExecutorsAliveOnHost Method


== [[hasHostAliveOnRack]] hasHostAliveOnRack Method


== [[executorLost]] executorLost Method


== [[mapOutputTracker]] mapOutputTracker


== [[starvationTimer]] starvationTimer


== [[executorHeartbeatReceived]] executorHeartbeatReceived Method

[source, scala]

executorHeartbeatReceived(
  execId: String,
  accumUpdates: Array[(Long, Seq[AccumulatorV2[_, _]])],
  blockManagerId: BlockManagerId): Boolean

executorHeartbeatReceived gathers the partial accumulator updates per task (for the tasks registered in the <<taskIdToTaskSetManager, taskIdToTaskSetManager>> registry) and forwards them (with the input execId and blockManagerId) to the [DAGScheduler], returning whether the driver knows about the executor's BlockManager (false requests the executor to re-register it).

executorHeartbeatReceived is part of the [TaskScheduler] contract.

== [[cancelTasks]] Cancelling All Tasks of Stage -- cancelTasks Method

[source, scala]

cancelTasks(stageId: Int, interruptThread: Boolean): Unit

NOTE: cancelTasks is part of the [TaskScheduler contract].

cancelTasks cancels all tasks submitted for execution in the stage stageId.

NOTE: cancelTasks is used exclusively when DAGScheduler [cancels a stage].

== [[handleSuccessfulTask]] handleSuccessfulTask Method

[source, scala]

handleSuccessfulTask(
  taskSetManager: TaskSetManager,
  tid: Long,
  taskResult: DirectTaskResult[_]): Unit

handleSuccessfulTask simply [forwards the call to the input taskSetManager] (passing tid and taskResult).

NOTE: handleSuccessfulTask is called when [TaskResultGetter has managed to deserialize the task result of a task that finished successfully].

== [[handleTaskGettingResult]] handleTaskGettingResult Method

[source, scala]

handleTaskGettingResult(taskSetManager: TaskSetManager, tid: Long): Unit

handleTaskGettingResult simply [forwards the call to the taskSetManager].

NOTE: handleTaskGettingResult is used to inform that [TaskResultGetter enqueues a successful task with IndirectTaskResult task result (and so is about to fetch a remote block from a BlockManager)].

== [[applicationAttemptId]] applicationAttemptId Method

[source, scala]

applicationAttemptId(): Option[String]


== [[getRackForHost]] Tracking Racks per Hosts and Ports -- getRackForHost Method

[source, scala]

getRackForHost(value: String): Option[String]

getRackForHost gives the rack of a given host (and port). By default, it assumes that racks are unknown (i.e. the method returns None).
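
A paraphrased sketch of the default behavior:

[source, scala]
----
// By default racks are unknown; cluster-manager-specific schedulers
// (e.g. YarnScheduler) override this to consult the cluster topology.
def getRackForHost(value: String): Option[String] = None
----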

NOTE: It is overridden by the YARN-specific TaskScheduler [YarnScheduler].

getRackForHost is currently used in the following places:

  • <<resourceOffers, resourceOffers>> to track hosts per rack (using the <<hostsByRack, hostsByRack>> registry) while processing resource offers.

  • <<removeExecutor, removeExecutor>> to...FIXME

  • [TaskSetManager.addPendingTask], [TaskSetManager.dequeueTask], and [TaskSetManager.dequeueSpeculativeTask]

== [[initialize]] Initializing -- initialize Method

[source, scala]

initialize(backend: SchedulerBackend): Unit

initialize initializes TaskSchedulerImpl.

.TaskSchedulerImpl initialization
image::TaskSchedulerImpl-initialize.png[align="center"]

initialize saves the input [SchedulerBackend].

initialize then sets the <<rootPool, root Pool>> as an empty-named [Pool] (passing in the <<schedulingMode, schedulingMode>>, and initMinShare and initWeight as 0).

NOTE: <<schedulingMode, schedulingMode>> is defined when <<creating-instance, TaskSchedulerImpl is created>>.

NOTE: schedulingMode and rootPool are a part of the [TaskScheduler Contract].

initialize sets the <<schedulableBuilder, SchedulableBuilder>> (based on the <<schedulingMode, schedulingMode>>):

  •[FIFOSchedulableBuilder] for FIFO scheduling mode
  •[FairSchedulableBuilder] for FAIR scheduling mode

initialize [requests the SchedulableBuilder to build pools].

CAUTION: FIXME Why are rootPool and schedulableBuilder created only now? What do they need that it is not available when TaskSchedulerImpl is created?

NOTE: initialize is called while [SparkContext is created and creates SchedulerBackend and TaskScheduler].
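
A condensed paraphrase of initialize (field declarations and configuration access simplified; the types are Spark-internal):

[source, scala]
----
// Paraphrased from the Spark sources.
def initialize(backend: SchedulerBackend): Unit = {
  this.backend = backend
  // An empty-named root Pool with initMinShare and initWeight as 0.
  rootPool = new Pool("", schedulingMode, 0, 0)
  schedulableBuilder = schedulingMode match {
    case SchedulingMode.FIFO => new FIFOSchedulableBuilder(rootPool)
    case SchedulingMode.FAIR => new FairSchedulableBuilder(rootPool, conf)
    case _ => throw new IllegalArgumentException(
      s"Unsupported scheduling mode: $schedulingMode")
  }
  schedulableBuilder.buildPools()
}
----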

== [[start]] Starting TaskSchedulerImpl

[source, scala]

start(): Unit

start starts the [scheduler backend].

.Starting TaskSchedulerImpl in Spark Standalone
image::taskschedulerimpl-start-standalone.png[align="center"]

start also starts the <<speculationScheduler, task-scheduler-speculation executor service>>.

== [[statusUpdate]] Handling Task Status Update -- statusUpdate Method

[source, scala]

statusUpdate(
  tid: Long,
  state: TaskState,
  serializedData: ByteBuffer): Unit

statusUpdate finds the TaskSetManager for the input tid task (in the <<taskIdToTaskSetManager, taskIdToTaskSetManager>> registry).

When state is LOST, statusUpdate...FIXME

NOTE: TaskState.LOST is only used by the deprecated Mesos fine-grained scheduling mode.

When state is one of the [finished states], i.e. FINISHED, FAILED, KILLED or LOST, statusUpdate <<cleanupTaskState, cleans up the internal state>> for the input tid.

statusUpdate [requests the TaskSetManager to unregister tid from running tasks].

statusUpdate requests the [TaskResultGetter] to [schedule an asynchronous task to deserialize the task result (and notify TaskSchedulerImpl back)] for tid in FINISHED state and [schedule an asynchronous task to deserialize TaskFailedReason (and notify TaskSchedulerImpl back)] for tid in the other finished states (i.e. FAILED, KILLED, LOST).

If a task is in LOST state, statusUpdate [notifies DAGScheduler that the executor was lost] (with SlaveLost and the reason Task [tid] was lost, so marking the executor as lost as well) and [requests SchedulerBackend to revive offers].

In case the TaskSetManager for tid could not be found (in the <<taskIdToTaskSetManager, taskIdToTaskSetManager>> registry), you should see the following ERROR message in the logs:

Ignoring update with state [state] for TID [tid] because its task set is gone (this is likely the result of receiving duplicate task finished status updates)

Any exception is caught and reported as ERROR message in the logs:

Exception in statusUpdate
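
Putting the above together, a heavily condensed paraphrase of statusUpdate (synchronization and the LOST-executor handling omitted; TaskState is a Spark-internal enumeration):

[source, scala]
----
import java.nio.ByteBuffer
import org.apache.spark.TaskState

// Condensed paraphrase of statusUpdate.
def statusUpdate(tid: Long, state: TaskState.TaskState, serializedData: ByteBuffer): Unit =
  taskIdToTaskSetManager.get(tid) match {
    case Some(taskSet) if TaskState.isFinished(state) =>
      cleanupTaskState(tid)
      taskSet.removeRunningTask(tid)
      if (state == TaskState.FINISHED) {
        taskResultGetter.enqueueSuccessfulTask(taskSet, tid, serializedData)
      } else {
        taskResultGetter.enqueueFailedTask(taskSet, tid, state, serializedData)
      }
    case None =>
      logError(s"Ignoring update with state $state for TID $tid because its " +
        "task set is gone (this is likely the result of receiving duplicate " +
        "task finished status updates)")
    case _ => // not a finished state: nothing to do here
  }
----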

CAUTION: FIXME image with scheduler backends calling TaskSchedulerImpl.statusUpdate.

statusUpdate is used when:

  • CoarseGrainedSchedulerBackend (via its DriverEndpoint) is requested to handle a StatusUpdate message

  • LocalEndpoint is requested to handle a StatusUpdate message

  • MesosFineGrainedSchedulerBackend is requested to handle a task status update

== [[speculationScheduler]][[task-scheduler-speculation]] task-scheduler-speculation Scheduled Executor Service -- speculationScheduler Internal Attribute

speculationScheduler is a [java.util.concurrent.ScheduledExecutorService] with the name task-scheduler-speculation for <<checkSpeculatableTasks, checking for speculatable tasks>>.

When TaskSchedulerImpl is <<start, started>> (in non-local run mode) with [spark.speculation] enabled, speculationScheduler is used to schedule <<checkSpeculatableTasks, checkSpeculatableTasks>> to execute periodically every [spark.speculation.interval] after the initial spark.speculation.interval passes.

speculationScheduler is shut down when TaskSchedulerImpl is <<stop, stopped>>.
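
The underlying pattern is a plain single-threaded scheduled executor service. A self-contained sketch (the names and the interval value are illustrative, not Spark's internals):

[source, scala]
----
import java.util.concurrent.{Executors, TimeUnit}

object SpeculationSchedulerDemo extends App {
  // Single-threaded scheduler, like Spark's "task-scheduler-speculation".
  val speculationScheduler = Executors.newSingleThreadScheduledExecutor()
  val intervalMs = 100L // cf. spark.speculation.interval (default: 100ms)

  def checkSpeculatableTasks(): Unit = println("checking for speculatable tasks")

  // First run after the initial interval, then repeatedly every interval.
  speculationScheduler.scheduleWithFixedDelay(
    () => checkSpeculatableTasks(), intervalMs, intervalMs, TimeUnit.MILLISECONDS)

  Thread.sleep(500)
  speculationScheduler.shutdownNow() // done when TaskSchedulerImpl stops
}
----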

== [[checkSpeculatableTasks]] Checking for Speculatable Tasks

[source, scala]

checkSpeculatableTasks(): Unit

checkSpeculatableTasks requests rootPool to check for speculatable tasks (if they ran for more than 100 ms) and, if there are any, requests the [SchedulerBackend to revive offers].

NOTE: checkSpeculatableTasks is executed periodically as part of the <<speculationScheduler, task-scheduler-speculation executor service>>.

== [[maxTaskFailures]] Acceptable Number of Task Failures

TaskSchedulerImpl can be given the acceptable number of task failures when created or defaults to the [spark.task.maxFailures] configuration property.

The number of task failures is used when <<submitTasks, submitting tasks>> (to create a [TaskSetManager]).

== [[removeExecutor]] Cleaning up After Removing Executor -- removeExecutor Internal Method

[source, scala]

removeExecutor(executorId: String, reason: ExecutorLossReason): Unit

removeExecutor removes the executorId executor from the following <<internal-properties, internal registries>>: <<executorIdToRunningTaskIds, executorIdToRunningTaskIds>>, executorIdToHost, executorsByHost, and hostsByRack. If the affected hosts and racks are the last entries in executorsByHost and hostsByRack, respectively, they are removed from the registries.

Unless reason is LossReasonPending, the executor is removed from the executorIdToHost registry and [TaskSetManagers get notified].

NOTE: The internal removeExecutor is called as part of <<statusUpdate, statusUpdate>> and [executorLost].

== [[postStartHook]] Handling Nearly-Completed SparkContext Initialization -- postStartHook Callback

[source, scala]

postStartHook(): Unit

NOTE: postStartHook is part of the [TaskScheduler Contract] to notify a [task scheduler] that the SparkContext (and hence the Spark application itself) is about to finish initialization.

postStartHook simply <<waitBackendReady, waits until the SchedulerBackend is ready>>.

== [[stop]] Stopping TaskSchedulerImpl -- stop Method

[source, scala]

stop(): Unit

stop() stops all the internal services, i.e. the <<speculationScheduler, task-scheduler-speculation executor service>>, [SchedulerBackend], [TaskResultGetter], and the <<starvationTimer, starvationTimer>> timer.

== [[defaultParallelism]] Finding Default Level of Parallelism -- defaultParallelism Method

[source, scala]

defaultParallelism(): Int

NOTE: defaultParallelism is part of the [TaskScheduler contract] as a hint for sizing jobs.

defaultParallelism simply requests the <<backend, SchedulerBackend>> for the [default level of parallelism].

NOTE: Default level of parallelism is a hint for sizing jobs that SparkContext [uses to create RDDs with the right number of partitions when not specified explicitly].

== [[submitTasks]] Submitting Tasks (of TaskSet) for Execution -- submitTasks Method

[source, scala]

submitTasks(taskSet: TaskSet): Unit

NOTE: submitTasks is part of the [TaskScheduler Contract] to submit the tasks (of the given [TaskSet]) for execution.

In essence, submitTasks registers a new [TaskSetManager] (for the given [TaskSet]) and requests the <<backend, SchedulerBackend>> to [handle resource allocation offers (from the scheduling system)].

.TaskSchedulerImpl.submitTasks
image::taskschedulerImpl-submitTasks.png[align="center"]

Internally, submitTasks first prints out the following INFO message to the logs:

Adding task set [id] with [length] tasks

submitTasks then <<createTaskSetManager, creates a TaskSetManager>> (for the given [TaskSet] and the <<maxTaskFailures, acceptable number of task failures>>).

submitTasks registers (adds) the TaskSetManager per [stage] and [stage attempt] IDs (of the [TaskSet]) in the <<taskSetsByStageIdAndAttempt, taskSetsByStageIdAndAttempt>> internal registry.

NOTE: The <<taskSetsByStageIdAndAttempt, taskSetsByStageIdAndAttempt>> internal registry tracks the [TaskSetManagers] (that represent [TaskSets]) per stage and stage attempts. In other words, there could be many TaskSetManagers for a single stage, each representing a unique stage attempt.

NOTE: Not only could a task be retried (cf. the <<maxTaskFailures, acceptable number of task failures>>), but also a single stage.

submitTasks makes sure that there is exactly one active TaskSetManager for the stage across all the stage attempts. Otherwise, submitTasks throws an IllegalStateException:

more than one active taskSet for stage [stage]: [TaskSet ids]

NOTE: TaskSetManager is considered active when it is not a zombie.

submitTasks requests the <<schedulableBuilder, SchedulableBuilder>> to [add the TaskSetManager to the schedulable pool].

NOTE: The [schedulable pool] can be a single flat linked queue (in [FIFO scheduling mode]) or a hierarchy of pools of Schedulables (in [FAIR scheduling mode]).

submitTasks schedules a <<submitTasks-starvationTimer, starvation task>> to make sure that the requested resources (i.e. CPU and memory) are assigned to the Spark application for a non-local environment (the very first time the Spark application is started per the <<hasLaunchedTask, hasLaunchedTask>> flag).

NOTE: The very first time (the <<hasLaunchedTask, hasLaunchedTask>> flag is false) in cluster mode only (i.e. isLocal of the TaskSchedulerImpl is false), starvationTimer is scheduled to execute after [spark.starvation.timeout] to ensure that the requested resources, i.e. CPUs and memory, were assigned by a cluster manager.

NOTE: The starvation task is re-executed every [spark.starvation.timeout] until the <<hasLaunchedTask, hasLaunchedTask>> internal flag is true.

In the end, submitTasks requests the <<backend, SchedulerBackend>> to [reviveOffers].
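
A condensed paraphrase of the whole flow (synchronization, the starvation timer and logging simplified):

[source, scala]
----
import scala.collection.mutable.HashMap

// Paraphrased from the Spark sources.
def submitTasks(taskSet: TaskSet): Unit = {
  logInfo(s"Adding task set ${taskSet.id} with ${taskSet.tasks.length} tasks")
  val manager = createTaskSetManager(taskSet, maxTaskFailures)
  val stageTaskSets = taskSetsByStageIdAndAttempt
    .getOrElseUpdate(taskSet.stageId, new HashMap[Int, TaskSetManager])
  stageTaskSets(taskSet.stageAttemptId) = manager
  // At most one active (non-zombie) TaskSetManager per stage.
  val conflictingTaskSet = stageTaskSets.exists { case (_, ts) =>
    ts.taskSet != taskSet && !ts.isZombie
  }
  if (conflictingTaskSet) {
    throw new IllegalStateException(
      s"more than one active taskSet for stage ${taskSet.stageId}:" +
      s" ${stageTaskSets.map(_._2.taskSet.id).mkString(",")}")
  }
  schedulableBuilder.addTaskSetManager(manager, manager.taskSet.properties)
  // (in cluster mode, schedule the starvation timer here the very first time)
  backend.reviveOffers()
}
----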

TIP: Use dag-scheduler-event-loop thread to step through the code in a debugger.

=== [[submitTasks-starvationTimer]] Scheduling Starvation Task

Every time the starvation timer thread is executed and hasLaunchedTask flag is false, the following WARN message is printed out to the logs:

WARN Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Otherwise, when the hasLaunchedTask flag is true the timer thread cancels itself.
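
A self-contained sketch of the timer pattern (the flag and the timeout value are illustrative):

[source, scala]
----
import java.util.{Timer, TimerTask}

object StarvationTimerDemo extends App {
  @volatile var hasLaunchedTask = false
  val starvationTimeoutMs = 15000L // cf. spark.starvation.timeout (default: 15s)
  val starvationTimer = new Timer("starvation-timer", true)

  starvationTimer.scheduleAtFixedRate(new TimerTask {
    override def run(): Unit =
      if (!hasLaunchedTask) {
        println("WARN Initial job has not accepted any resources; " +
          "check your cluster UI to ensure that workers are registered " +
          "and have sufficient resources")
      } else {
        this.cancel() // a task has launched: stop warning
      }
  }, starvationTimeoutMs, starvationTimeoutMs)
}
----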

== [[createTaskSetManager]] Creating TaskSetManager -- createTaskSetManager Method

[source, scala]

createTaskSetManager(taskSet: TaskSet, maxTaskFailures: Int): TaskSetManager

createTaskSetManager [creates a TaskSetManager] (passing on the reference to TaskSchedulerImpl, the input taskSet and maxTaskFailures, and the optional BlacklistTracker).
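
Paraphrased, createTaskSetManager is a simple factory method (TaskSetManager and BlacklistTracker are Spark-internal types):

[source, scala]
----
// Paraphrased from the Spark sources.
def createTaskSetManager(taskSet: TaskSet, maxTaskFailures: Int): TaskSetManager =
  new TaskSetManager(this, taskSet, maxTaskFailures, blacklistTrackerOpt)
----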

NOTE: createTaskSetManager uses the optional [BlacklistTracker] that is specified when <<creating-instance, TaskSchedulerImpl is created>>.

NOTE: createTaskSetManager is used exclusively when TaskSchedulerImpl <<submitTasks, submits tasks (of a TaskSet) for execution>>.

== [[handleFailedTask]] Notifying TaskSetManager that Task Failed -- handleFailedTask Method

[source, scala]

handleFailedTask(
  taskSetManager: TaskSetManager,
  tid: Long,
  taskState: TaskState,
  reason: TaskFailedReason): Unit

handleFailedTask [notifies taskSetManager that tid task has failed] and, only when [taskSetManager is not in zombie state] and tid is not in KILLED state, [requests SchedulerBackend to revive offers].

NOTE: handleFailedTask is called when [TaskResultGetter deserializes a TaskFailedReason] for a failed task.

== [[taskSetFinished]] taskSetFinished Method

[source, scala]

taskSetFinished(manager: TaskSetManager): Unit

taskSetFinished looks all [TaskSet]s up by the stage id (in the <<taskSetsByStageIdAndAttempt, taskSetsByStageIdAndAttempt>> registry) and removes the stage attempt from them, possibly removing the entire stage record from the taskSetsByStageIdAndAttempt registry completely (if there are no other attempts registered).

.TaskSchedulerImpl.taskSetFinished is called when all tasks are finished
image::taskschedulerimpl-tasksetmanager-tasksetfinished.png[align="center"]

NOTE: A TaskSetManager manages a TaskSet for a stage.

taskSetFinished then [removes manager from the parent's schedulable pool].

You should see the following INFO message in the logs:

Removed TaskSet [id], whose tasks have all completed, from pool [name]

NOTE: taskSetFinished method is called when [TaskSetManager has received the results of all the tasks in a TaskSet].
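
A condensed paraphrase (synchronization omitted):

[source, scala]
----
// Paraphrased from the Spark sources.
def taskSetFinished(manager: TaskSetManager): Unit = {
  taskSetsByStageIdAndAttempt.get(manager.taskSet.stageId).foreach { taskSetsForStage =>
    taskSetsForStage -= manager.taskSet.stageAttemptId
    if (taskSetsForStage.isEmpty) {
      taskSetsByStageIdAndAttempt -= manager.taskSet.stageId
    }
  }
  manager.parent.removeSchedulable(manager)
  logInfo(s"Removed TaskSet ${manager.taskSet.id}, whose tasks have all " +
    s"completed, from pool ${manager.parent.name}")
}
----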

== [[executorAdded]] Notifying DAGScheduler About New Executor -- executorAdded Method

[source, scala]

executorAdded(execId: String, host: String)

executorAdded just [notifies DAGScheduler that an executor was added].

CAUTION: FIXME Image with a call from TaskSchedulerImpl to DAGScheduler, please.

NOTE: executorAdded uses the <<dagScheduler, DAGScheduler>> that is assigned when the [DAGScheduler] is created.

== [[waitBackendReady]] Waiting Until SchedulerBackend is Ready -- waitBackendReady Internal Method

[source, scala]

waitBackendReady(): Unit

waitBackendReady waits until the <<backend, SchedulerBackend>> is [ready]. If it is, waitBackendReady returns immediately. Otherwise, waitBackendReady keeps checking every 100 milliseconds (hardcoded) until the SchedulerBackend is ready or the [SparkContext] is stopped.

NOTE: A SchedulerBackend is [ready] by default.

If the SparkContext happens to be stopped while waiting, waitBackendReady throws an IllegalStateException:

Spark context stopped while waiting for backend
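
A paraphrase of the waiting loop:

[source, scala]
----
// Paraphrased from the Spark sources.
private def waitBackendReady(): Unit = {
  if (backend.isReady) {
    return
  }
  while (!backend.isReady) {
    // Bail out early if the SparkContext is shutting down.
    if (sc.stopped.get) {
      throw new IllegalStateException("Spark context stopped while waiting for backend")
    }
    synchronized {
      this.wait(100)
    }
  }
}
----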

NOTE: waitBackendReady is used exclusively when TaskSchedulerImpl is requested to <<postStartHook, handle a nearly-completed SparkContext initialization (postStartHook)>>.

== [[resourceOffers]] Creating TaskDescriptions For Available Executor Resource Offers

[source, scala]

resourceOffers(offers: Seq[WorkerOffer]): Seq[Seq[TaskDescription]]

resourceOffers takes the resource offers (as <<WorkerOffer, WorkerOffers>>) and generates a collection of tasks (as TaskDescription) to launch (given the resources available).

NOTE: A <<WorkerOffer, WorkerOffer>> represents a resource offer with CPU cores free to use on an executor.

.Processing Executor Resource Offers
image::taskscheduler-resourceOffers.png[align="center"]

Internally, resourceOffers first updates the <<hostToExecutors, hostToExecutors>> and <<executorIdToRunningTaskIds, executorIdToRunningTaskIds>> lookup tables to record new hosts and executors (given the input offers).

For new executors (not in <<executorIdToRunningTaskIds, executorIdToRunningTaskIds>>) resourceOffers <<executorAdded, notifies DAGScheduler that an executor was added>>.

NOTE: TaskSchedulerImpl uses resourceOffers to track active executors.

CAUTION: FIXME a picture with executorAdded call from TaskSchedulerImpl to DAGScheduler.

resourceOffers requests BlacklistTracker to applyBlacklistTimeout and filters out offers on blacklisted nodes and executors.

NOTE: resourceOffers uses the optional [BlacklistTracker] that was given when <<creating-instance, TaskSchedulerImpl was created>>.

CAUTION: FIXME Expand on blacklisting

resourceOffers then randomly shuffles offers (to evenly distribute tasks across executors and avoid over-utilizing some executors) and initializes the local data structures tasks and availableCpus (as shown in the figure below).

.Internal Structures of resourceOffers with 5 WorkerOffers (with 4, 2, 0, 3, 2 free cores)
image::TaskSchedulerImpl-resourceOffers-internal-structures.png[align="center"]

resourceOffers [takes TaskSets in scheduling order] from the [top-level Schedulable Pool].

.TaskSchedulerImpl Requesting TaskSets (as TaskSetManagers) from Root Pool
image::TaskSchedulerImpl-resourceOffers-rootPool-getSortedTaskSetQueue.png[align="center"]


rootPool is configured when <<initialize, TaskSchedulerImpl is initialized>>.

rootPool is part of the [TaskScheduler Contract] and exclusively managed by [SchedulableBuilders], i.e. [FIFOSchedulableBuilder] and [FairSchedulableBuilder] (that [manage registering TaskSetManagers with the root pool]). [TaskSetManager] manages execution of the tasks in a single [TaskSet] that represents a single [Stage].

For every TaskSetManager (in scheduling order), you should see the following DEBUG message in the logs:

parentName: [name], name: [name], runningTasks: [count]

Only if a new executor was added, resourceOffers [notifies every TaskSetManager about the change] (to recompute locality preferences).

resourceOffers then takes every TaskSetManager (in scheduling order) and offers them each node in increasing order of locality levels (per [TaskSetManager's valid locality levels]).

NOTE: A TaskSetManager [computes locality levels of the tasks] it manages.

For every TaskSetManager and the TaskSetManager's valid locality level, resourceOffers tries to <<resourceOfferSingleTaskSet, find tasks to schedule on executors>> as long as the TaskSetManager manages to launch a task (given the locality level).

If resourceOffers did not manage to offer resources to a TaskSetManager so it could launch any task, resourceOffers [requests the TaskSetManager to abort the TaskSet if completely blacklisted].

When resourceOffers managed to launch a task, the internal <> flag gets enabled (that effectively means what the name says "there were executors and I managed to launch a task").
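
The heart of resourceOffers is a nested loop over the sorted TaskSets and their locality levels (a condensed paraphrase):

[source, scala]
----
import scala.collection.mutable.ArrayBuffer
import scala.util.Random

// Condensed paraphrase of the core loop of resourceOffers.
val shuffledOffers = Random.shuffle(offers)
val tasks = shuffledOffers.map(o => new ArrayBuffer[TaskDescription](o.cores))
val availableCpus = shuffledOffers.map(_.cores).toArray
for (taskSet <- rootPool.getSortedTaskSetQueue) {
  for (maxLocality <- taskSet.myLocalityLevels) {
    var launchedTaskAtCurrentMaxLocality = false
    do {
      // Keep offering at this locality level while tasks get launched.
      launchedTaskAtCurrentMaxLocality = resourceOfferSingleTaskSet(
        taskSet, maxLocality, shuffledOffers, availableCpus, tasks)
    } while (launchedTaskAtCurrentMaxLocality)
  }
}
----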

resourceOffers is used when:

  • CoarseGrainedSchedulerBackend (via its DriverEndpoint) makes executor resource offers

  • LocalEndpoint revives resource offers

  • MesosFineGrainedSchedulerBackend handles resource offers from the Mesos master

== [[resourceOfferSingleTaskSet]] Finding Tasks from TaskSetManager to Schedule on Executors -- resourceOfferSingleTaskSet Internal Method

[source, scala]

resourceOfferSingleTaskSet(
  taskSet: TaskSetManager,
  maxLocality: TaskLocality,
  shuffledOffers: Seq[WorkerOffer],
  availableCpus: Array[Int],
  tasks: Seq[ArrayBuffer[TaskDescription]]): Boolean

resourceOfferSingleTaskSet takes every WorkerOffer (from the input shuffledOffers) and (only if the number of available CPU cores, per the input availableCpus, is at least [spark.task.cpus]) [requests the TaskSetManager (the input taskSet) to find a Task to execute (given the resource offer)] (for the executor, the host, and the input maxLocality).

resourceOfferSingleTaskSet adds the task to the input tasks collection.

resourceOfferSingleTaskSet records the task id and TaskSetManager in the following registries:

  • <<taskIdToTaskSetManager, taskIdToTaskSetManager>>
  • <<taskIdToExecutorId, taskIdToExecutorId>>
  • <<executorIdToRunningTaskIds, executorIdToRunningTaskIds>>

resourceOfferSingleTaskSet decreases [spark.task.cpus] from the input availableCpus (for the WorkerOffer).

NOTE: resourceOfferSingleTaskSet makes sure that the number of available CPU cores (in the input availableCpus per WorkerOffer) is at least 0.

If there is a TaskNotSerializableException, you should see the following ERROR in the logs:

ERROR Resource offer failed, task set [name] was not serializable

resourceOfferSingleTaskSet returns whether a task was launched or not.
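
A condensed paraphrase of the loop described above (TaskNotSerializableException handling omitted):

[source, scala]
----
// Paraphrased from the Spark sources.
var launchedTask = false
for (i <- shuffledOffers.indices) {
  val execId = shuffledOffers(i).executorId
  val host = shuffledOffers(i).host
  if (availableCpus(i) >= CPUS_PER_TASK) {
    for (task <- taskSet.resourceOffer(execId, host, maxLocality)) {
      tasks(i) += task
      taskIdToTaskSetManager(task.taskId) = taskSet
      taskIdToExecutorId(task.taskId) = execId
      executorIdToRunningTaskIds(execId).add(task.taskId)
      availableCpus(i) -= CPUS_PER_TASK
      assert(availableCpus(i) >= 0)
      launchedTask = true
    }
  }
}
launchedTask
----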

NOTE: resourceOfferSingleTaskSet is used when TaskSchedulerImpl <<resourceOffers, creates TaskDescriptions for available executor resource offers (with CPU cores)>>.

== [[TaskLocality]] TaskLocality -- Task Locality Preference

TaskLocality represents a task locality preference and can be one of the following (from the most localized to the widest):

  • PROCESS_LOCAL
  • NODE_LOCAL
  • NO_PREF
  • RACK_LOCAL
  • ANY
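
The enumeration itself (paraphrased from org.apache.spark.scheduler.TaskLocality):

[source, scala]
----
object TaskLocality extends Enumeration {
  // From the most localized to the widest.
  val PROCESS_LOCAL, NODE_LOCAL, NO_PREF, RACK_LOCAL, ANY = Value
}
----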


== [[WorkerOffer]] WorkerOffer -- Free CPU Cores on Executor

[source, scala]

WorkerOffer(executorId: String, host: String, cores: Int)

WorkerOffer represents a resource offer with free CPU cores available on an executorId executor on a host.
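
For illustration only (WorkerOffer is a Spark-internal case class), the offers from the figure above could be modelled as:

[source, scala]
----
// Five offers, as in the resourceOffers figure: 4, 2, 0, 3, 2 free cores.
val offers = Seq(
  WorkerOffer("exec-1", "host-1", 4),
  WorkerOffer("exec-2", "host-2", 2),
  WorkerOffer("exec-3", "host-2", 0),
  WorkerOffer("exec-4", "host-3", 3),
  WorkerOffer("exec-5", "host-4", 2))
----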

== [[workerRemoved]] workerRemoved Method

[source, scala]

workerRemoved(
  workerId: String,
  host: String,
  message: String): Unit

workerRemoved prints out the following INFO message to the logs:

Handle removed worker [workerId]: [message]

workerRemoved then requests the <<dagScheduler, DAGScheduler>> to [handle it].

workerRemoved is part of the [TaskScheduler] abstraction.

== [[maybeInitBarrierCoordinator]] maybeInitBarrierCoordinator Method


[source, scala]

maybeInitBarrierCoordinator(): Unit

maybeInitBarrierCoordinator creates a BarrierCoordinator and registers it as the barrierSync RPC endpoint (unless already done).

maybeInitBarrierCoordinator is used when TaskSchedulerImpl is requested to <<resourceOffers, handle resource offers>>.

== [[logging]] Logging

Enable ALL logging level for org.apache.spark.scheduler.TaskSchedulerImpl logger to see what happens inside.

Add the following line to conf/log4j.properties:

log4j.logger.org.apache.spark.scheduler.TaskSchedulerImpl=ALL

Refer to [Logging].

== [[internal-properties]] Internal Properties

[cols="30m,70",options="header",width="100%"] |=== | Name | Description

| dagScheduler a| [[dagScheduler]] [DAGScheduler]

Used when...FIXME

| executorIdToHost a| [[executorIdToHost]] Lookup table of hosts per executor.

Used when...FIXME

| executorIdToRunningTaskIds a| [[executorIdToRunningTaskIds]] Lookup table of running tasks per executor.

Used when...FIXME

| executorIdToTaskCount a| [[executorIdToTaskCount]] Lookup table of the number of running tasks by executor id.

| executorsByHost a| [[executorsByHost]] Collection of [executors] per host

| hasLaunchedTask a| [[hasLaunchedTask]] Flag...FIXME

Used when...FIXME

| hostToExecutors a| [[hostToExecutors]] Lookup table of executors per hosts in a cluster.

Used when...FIXME

| hostsByRack a| [[hostsByRack]] Lookup table of hosts per rack.

Used when...FIXME

| nextTaskId a| [[nextTaskId]] The next [task] id counting from 0.

Used when TaskSchedulerImpl...

| rootPool a| [[rootPool]] [Schedulable pool]

Used when TaskSchedulerImpl...

| schedulableBuilder a| [[schedulableBuilder]] [SchedulableBuilder]

Created when TaskSchedulerImpl is requested to <<initialize, initialize>> and can be one of two available builders:

  • [FIFOSchedulableBuilder] when scheduling policy is FIFO (which is the default scheduling policy)

  • [FairSchedulableBuilder] for FAIR scheduling policy

NOTE: Use the [spark.scheduler.mode] configuration property to select the scheduling policy.

| schedulingMode a| [[schedulingMode]] [SchedulingMode]

Used when TaskSchedulerImpl...

| taskSetsByStageIdAndAttempt a| [[taskSetsByStageIdAndAttempt]] Lookup table of [TaskSet]s by stage and attempt ids.

| taskIdToExecutorId a| [[taskIdToExecutorId]] Lookup table of executor ids by task id.

| taskIdToTaskSetManager a| [[taskIdToTaskSetManager]] Registry of active [TaskSetManagers] per task id.


|===