TaskSchedulerImpl is a TaskScheduler that uses a SchedulerBackend to schedule tasks (for execution on a cluster manager).
When a Spark application starts (and so an instance of SparkContext is created), TaskSchedulerImpl is created together with a SchedulerBackend and a DAGScheduler, and they are all soon started.
TaskSchedulerImpl generates tasks based on executor resource offers.
TaskSchedulerImpl can track racks per host and port (which, however, is only used with the Hadoop YARN cluster manager).
Using the spark.scheduler.mode configuration property you can select the scheduling policy.
TaskSchedulerImpl submits tasks using SchedulableBuilders.
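For example (a minimal sketch using the public SparkConf API), the scheduling policy can be selected before the SparkContext (and so the TaskSchedulerImpl) is created:

```scala
// Select the FAIR scheduling policy (FIFO is the default).
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("scheduler-mode-demo")
  .setMaster("local[*]")
  .set("spark.scheduler.mode", "FAIR")

val sc = new SparkContext(conf)
```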
TaskSchedulerImpl takes the following to be created:
- Maximum Number of Task Failures
While being created,
TaskSchedulerImpl sets schedulingMode to the value of spark.scheduler.mode configuration property.
schedulingMode is part of the TaskScheduler abstraction.
TaskSchedulerImpl throws a SparkException for an unrecognized scheduling mode:
Unrecognized spark.scheduler.mode: [schedulingModeConf]
In the end,
TaskSchedulerImpl creates a TaskResultGetter.
TaskSchedulerImpl is created when:

- SparkContext is requested for a TaskScheduler (for local and spark master URLs)
- KubernetesClusterManager and MesosClusterManager are requested for a TaskScheduler
Maximum Number of Task Failures¶
TaskSchedulerImpl can be given the maximum number of task failures when created or default to spark.task.maxFailures configuration property.
The number of task failures is used when submitting tasks (to create a TaskSetManager).
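For example (using the public SparkConf API; 4 is the default value of spark.task.maxFailures):

```scala
// Tolerate up to 8 failures of any single task before giving up on the job.
import org.apache.spark.SparkConf

val conf = new SparkConf().set("spark.task.maxFailures", "8")
```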
TaskSchedulerImpl uses the spark.task.cpus configuration property as the number of CPU cores that every task requires (see resourceOfferSingleTaskSet below).
SchedulerBackend¶

TaskSchedulerImpl is given a SchedulerBackend when requested to initialize.

The lifecycle of the SchedulerBackend is tightly coupled to the lifecycle of the TaskSchedulerImpl: it is started and stopped along with it.

TaskSchedulerImpl waits until the SchedulerBackend is ready before requesting it for the following:

- Reviving resource offers when requested to submitTasks, statusUpdate, handleFailedTask, checkSpeculatableTasks, and executorLost
- Killing tasks when requested to killTaskAttempt and killAllTaskAttempts
- Default parallelism, applicationId and applicationAttemptId when requested for the defaultParallelism, applicationId and applicationAttemptId, respectively
Unique Identifier of Spark Application¶
applicationId is part of the TaskScheduler abstraction.
applicationId simply requests the SchedulerBackend for the applicationId.
executorHeartbeatReceived¶

```scala
executorHeartbeatReceived(
  execId: String,
  accumUpdates: Array[(Long, Seq[AccumulatorV2[_, _]])],
  blockManagerId: BlockManagerId): Boolean
```
executorHeartbeatReceived is part of the TaskScheduler abstraction.
Cancelling All Tasks of Stage¶
```scala
cancelTasks(
  stageId: Int,
  interruptThread: Boolean): Unit
```
cancelTasks is part of the TaskScheduler abstraction.
cancelTasks cancels all tasks submitted for execution in the given stage.

cancelTasks is used when:

- DAGScheduler is requested to failJobAndIndependentStages
handleSuccessfulTask¶

```scala
handleSuccessfulTask(
  taskSetManager: TaskSetManager,
  tid: Long,
  taskResult: DirectTaskResult[_]): Unit
```

handleSuccessfulTask requests the given TaskSetManager to handleSuccessfulTask (with the given tid and taskResult).

handleSuccessfulTask is used when:

- TaskResultGetter is requested to enqueueSuccessfulTask
handleTaskGettingResult¶

```scala
handleTaskGettingResult(
  taskSetManager: TaskSetManager,
  tid: Long): Unit
```

handleTaskGettingResult requests the given TaskSetManager to handleTaskGettingResult.

handleTaskGettingResult is used when:

- TaskResultGetter is requested to enqueueSuccessfulTask
Tracking Racks per Hosts and Ports¶
```scala
getRackForHost(value: String): Option[String]
```

getRackForHost is a method to know about the racks per hosts and ports. By default, it assumes that racks are unknown (i.e. the method returns None).

getRackForHost is currently used in two places:

- resourceOffers to track hosts per rack (using the hostsByRack registry) while processing resource offers
- TaskSetManager.addPendingTask, TaskSetManager.dequeueTask, and TaskSetManager.dequeueSpeculativeTask
Initializing TaskSchedulerImpl¶

```scala
initialize(
  backend: SchedulerBackend): Unit
```

initialize initializes the TaskSchedulerImpl with the given SchedulerBackend.

initialize saves the given SchedulerBackend.

initialize then sets the root Pool (an empty-named Pool for the configured schedulingMode).

initialize sets the SchedulableBuilder (based on the scheduling mode):

- FIFOSchedulableBuilder for FIFO scheduling mode
- FairSchedulableBuilder for FAIR scheduling mode

In the end, initialize requests the SchedulableBuilder to build pools.

CAUTION: FIXME Why are rootPool and schedulableBuilder created only now? What do they need that is not available when TaskSchedulerImpl is created?

initialize is called while SparkContext is created (and creates the SchedulerBackend and the TaskScheduler).
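A self-contained sketch of the builder selection described above; the classes below are simplified stand-ins for Spark's own, not its exact signatures:

```scala
// Pick the SchedulableBuilder for the configured scheduling mode.
object SchedulingMode extends Enumeration { val FIFO, FAIR = Value }

trait SchedulableBuilder { def buildPools(): Unit }
class FIFOSchedulableBuilder extends SchedulableBuilder {
  def buildPools(): Unit = () // FIFO needs no extra pools
}
class FairSchedulableBuilder extends SchedulableBuilder {
  def buildPools(): Unit = println("building FAIR pools (e.g. from an allocations file)")
}

def chooseBuilder(mode: SchedulingMode.Value): SchedulableBuilder = mode match {
  case SchedulingMode.FIFO => new FIFOSchedulableBuilder
  case SchedulingMode.FAIR => new FairSchedulableBuilder
  case other => throw new IllegalArgumentException(s"Unsupported scheduling mode: $other")
}

chooseBuilder(SchedulingMode.FAIR).buildPools()
```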
Starting TaskSchedulerImpl¶

start starts the SchedulerBackend and the task-scheduler-speculation executor service.
Handling Task Status Update¶
```scala
statusUpdate(
  tid: Long,
  state: TaskState,
  serializedData: ByteBuffer): Unit
```
statusUpdate finds the TaskSetManager for the input tid task (in the taskIdToTaskSetManager registry).

NOTE: TaskState.LOST is only used by the deprecated Mesos fine-grained scheduling mode.

When state is one of the finished states (i.e. FINISHED, FAILED, KILLED, LOST), statusUpdate requests the TaskSetManager to unregister tid from running tasks.

statusUpdate requests the TaskResultGetter to schedule an asynchronous task to deserialize the task result (and notify TaskSchedulerImpl back) for tid in FINISHED state, and to deserialize the TaskFailedReason (and notify TaskSchedulerImpl back) for tid in the other finished states (i.e. FAILED, KILLED, LOST).

If a task is in LOST state, statusUpdate notifies the DAGScheduler that the executor was lost (with SlaveLost and the reason Task [tid] was lost, so marking the executor as lost as well.) and requests the SchedulerBackend to revive offers.

In case the tid could not be found (in the taskIdToTaskSetManager registry), you should see the following ERROR message in the logs:

Ignoring update with state [state] for TID [tid] because its task set is gone (this is likely the result of receiving duplicate task finished status updates)

Any exception is caught and reported as an ERROR message in the logs:

Exception in statusUpdate
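A self-contained sketch of the finished-state branching described above (stand-in types; the real code also updates the internal registries):

```scala
// Route a finished task either to result deserialization or failure handling.
object TaskState extends Enumeration {
  val LAUNCHED, RUNNING, FINISHED, FAILED, KILLED, LOST = Value
  def isFinished(state: Value): Boolean =
    Set(FINISHED, FAILED, KILLED, LOST).contains(state)
}

def handle(tid: Long, state: TaskState.Value): Unit =
  if (TaskState.isFinished(state)) {
    // tid would be unregistered from running tasks here
    if (state == TaskState.FINISHED)
      println(s"enqueue successful task $tid (deserialize the result)")
    else
      println(s"enqueue failed task $tid (deserialize its TaskFailedReason)")
  }

handle(42L, TaskState.FINISHED)
```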
CAUTION: FIXME image with scheduler backends calling
statusUpdate is used when:

- DriverEndpoint (of CoarseGrainedSchedulerBackend) is requested to handle a StatusUpdate message
- LocalEndpoint is requested to handle a StatusUpdate message
task-scheduler-speculation Scheduled Executor Service¶
speculationScheduler is a java.util.concurrent.ScheduledExecutorService with the name task-scheduler-speculation for Speculative Execution of Tasks.
When TaskSchedulerImpl is requested to start (in a non-local run mode) with spark.speculation enabled, speculationScheduler is used to schedule checkSpeculatableTasks to execute periodically every spark.speculation.interval.
speculationScheduler is shut down when TaskSchedulerImpl is requested to stop.
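A self-contained sketch of such a scheduled executor service (the interval stands in for spark.speculation.interval):

```scala
import java.util.concurrent.{Executors, ThreadFactory, TimeUnit}

// A named, daemon, single-threaded scheduler running a periodic speculation check.
val factory = new ThreadFactory {
  def newThread(r: Runnable): Thread = {
    val t = new Thread(r, "task-scheduler-speculation")
    t.setDaemon(true)
    t
  }
}
val speculationScheduler = Executors.newSingleThreadScheduledExecutor(factory)

val intervalMs = 100L
speculationScheduler.scheduleWithFixedDelay(
  new Runnable { def run(): Unit = println("checkSpeculatableTasks") },
  intervalMs, intervalMs, TimeUnit.MILLISECONDS)

Thread.sleep(350)
speculationScheduler.shutdownNow() // as done when TaskSchedulerImpl stops
```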
Checking for Speculatable Tasks¶
checkSpeculatableTasks requests the rootPool to check for speculatable tasks (if they ran for more than 100 ms) and, if there are any, requests the SchedulerBackend to revive offers.

checkSpeculatableTasks is executed periodically as part of Speculative Execution of Tasks.
Cleaning up After Removing Executor¶
```scala
removeExecutor(
  executorId: String,
  reason: ExecutorLossReason): Unit
```
removeExecutor removes the executorId executor from the following internal registries: executorIdToRunningTaskIds, hostToExecutors, and hostsByRack. If the affected hosts and racks are the last entries in hostToExecutors and hostsByRack, appropriately, they are removed from the registries.

Unless the reason is LossReasonPending, the executor is removed from the executorIdToHost registry and the TaskSetManagers get notified (executorLost).

NOTE: removeExecutor is called as part of executorLost.
Handling Nearly-Completed SparkContext Initialization¶
postStartHook is part of the TaskScheduler abstraction.
postStartHook waits until a scheduler backend is ready.
Waiting Until SchedulerBackend is Ready¶
waitBackendReady waits until the SchedulerBackend is ready. If it is, waitBackendReady returns immediately. Otherwise, waitBackendReady keeps checking every 100 milliseconds (hardcoded) whether the SchedulerBackend is ready or the SparkContext has been stopped.

NOTE: A SchedulerBackend is ready by default.

If the SparkContext happens to be stopped while waiting, waitBackendReady throws an IllegalStateException:

Spark context stopped while waiting for backend
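A self-contained sketch of the waiting loop (the predicates stand in for the SchedulerBackend and SparkContext state):

```scala
// Poll readiness every 100 ms; bail out if the application is stopped meanwhile.
def waitBackendReady(isReady: () => Boolean, isStopped: () => Boolean): Unit =
  while (!isReady()) {
    if (isStopped())
      throw new IllegalStateException("Spark context stopped while waiting for backend")
    Thread.sleep(100) // hardcoded, as described above
  }

waitBackendReady(() => true, () => false) // a ready backend returns immediately
```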
Stopping TaskSchedulerImpl¶

stop stops all the internal services: the task-scheduler-speculation executor service, the SchedulerBackend, the TaskResultGetter, and the starvationTimer timer.
Default Level of Parallelism¶
defaultParallelism is part of the TaskScheduler abstraction.
defaultParallelism requests the SchedulerBackend for the default level of parallelism.
Default level of parallelism is a hint for sizing jobs that
SparkContext uses to create RDDs with the right number of partitions unless specified explicitly.
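For example, using the public RDD API:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setMaster("local[4]").setAppName("default-parallelism-demo"))

// With no explicit number of partitions, parallelize falls back on
// the scheduler's default parallelism (4 here, one per local core).
val rdd = sc.parallelize(1 to 100)
assert(rdd.getNumPartitions == sc.defaultParallelism)
```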
Submitting Tasks (of TaskSet) for Execution¶
```scala
submitTasks(
  taskSet: TaskSet): Unit
```
submitTasks is part of the TaskScheduler abstraction.
submitTasks registers a new TaskSetManager (for the given TaskSet) and requests the SchedulerBackend to handle resource allocation offers (from the scheduling system).
submitTasks prints out the following INFO message to the logs:
Adding task set [id] with [length] tasks
submitTasks then creates a TaskSetManager (for the given TaskSet and the maximum number of task failures).

submitTasks registers (adds) the TaskSetManager per stage and stage attempt IDs (of the TaskSet) in the taskSetsByStageIdAndAttempt registry. There can be many TaskSetManagers for a single stage, each representing a unique stage attempt.

NOTE: Not only could a task be retried (cf. the maximum number of task failures), but also a single stage.

submitTasks makes sure that there is exactly one active TaskSetManager (with a different TaskSet) across all the managers for the stage. Otherwise, submitTasks throws an IllegalStateException (a minimal sketch of this check follows):

more than one active taskSet for stage [stage]: [TaskSet ids]

NOTE: A TaskSetManager is considered active when it is not a zombie.
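A self-contained sketch of that sanity check (stand-in TaskSetManager; names follow the text above):

```scala
// At most one non-zombie TaskSetManager may exist per stage.
case class StubTaskSetManager(stageAttemptId: Int, isZombie: Boolean)

def assertSingleActive(stageId: Int, attempts: Seq[StubTaskSetManager]): Unit = {
  val active = attempts.filterNot(_.isZombie)
  if (active.size > 1)
    throw new IllegalStateException(
      s"more than one active taskSet for stage $stageId: " +
        active.map(_.stageAttemptId).mkString(","))
}

// One zombie attempt plus one active attempt is fine:
assertSingleActive(0,
  Seq(StubTaskSetManager(0, isZombie = true), StubTaskSetManager(1, isZombie = false)))
```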
submitTasks requests the SchedulableBuilder to add the TaskSetManager to the schedulable pool.

NOTE: The schedulable pool can be a single flat linked queue (in FIFO scheduling mode) or a hierarchy of pools of Schedulables (in FAIR scheduling mode).

The very first time (while the hasReceivedTask flag is false), in cluster mode only (i.e. when isLocal of the TaskSchedulerImpl is false), starvationTimer is scheduled to execute after spark.starvation.timeout to ensure that the requested resources, i.e. CPUs and memory, were assigned by a cluster manager.

NOTE: After every spark.starvation.timeout, the starvation timer thread checks the hasLaunchedTask flag (see Scheduling Starvation Task below).
In the end, submitTasks requests the SchedulerBackend to revive offers.

TIP: Use the dag-scheduler-event-loop thread to step through the code in a debugger.
Scheduling Starvation Task¶
Every time the starvation timer thread is executed and the hasLaunchedTask flag is false, the following WARN message is printed out to the logs:

Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Otherwise, when the hasLaunchedTask flag is true, the timer thread cancels itself.
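A self-contained sketch of the starvation timer (the timeout stands in for spark.starvation.timeout):

```scala
import java.util.{Timer, TimerTask}

var hasLaunchedTask = false
val starvationTimer = new Timer("task-starvation-timer", true) // daemon timer
val timeoutMs = 15000L

starvationTimer.scheduleAtFixedRate(new TimerTask {
  def run(): Unit =
    if (!hasLaunchedTask)
      println("Initial job has not accepted any resources; check your cluster UI " +
        "to ensure that workers are registered and have sufficient resources")
    else cancel() // a task has launched: the timer thread cancels itself
}, timeoutMs, timeoutMs)
```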
Creating TaskSetManager¶

```scala
createTaskSetManager(
  taskSet: TaskSet,
  maxTaskFailures: Int): TaskSetManager
```

createTaskSetManager creates a TaskSetManager (with this TaskSchedulerImpl, the given TaskSet and the given maxTaskFailures).

createTaskSetManager is used when:

- TaskSchedulerImpl is requested to submit a TaskSet
Notifying TaskSetManager that Task Failed¶
```scala
handleFailedTask(
  taskSetManager: TaskSetManager,
  tid: Long,
  taskState: TaskState,
  reason: TaskFailedReason): Unit
```

handleFailedTask notifies the given TaskSetManager that the tid task has failed and, only when the TaskSetManager is not in zombie state and tid is not in KILLED state, requests the SchedulerBackend to revive offers.

handleFailedTask is called when TaskResultGetter deserializes a TaskFailedReason for a failed task.
Unregistering TaskSetManager¶

```scala
taskSetFinished(
  manager: TaskSetManager): Unit
```

taskSetFinished looks all TaskSets up by the stage id (in the taskSetsByStageIdAndAttempt registry) and removes the stage from the registry completely if there are no other attempts registered.

taskSetFinished then removes the manager from the parent's schedulable pool.
You should see the following INFO message in the logs:
Removed TaskSet [id], whose tasks have all completed, from pool [name]
taskSetFinished is used when:

- TaskSetManager is requested to maybeFinishTaskSet
Notifying DAGScheduler About New Executor¶
```scala
executorAdded(
  execId: String,
  host: String)
```

executorAdded just notifies the DAGScheduler that an executor was added.

executorAdded uses the internal reference to the DAGScheduler.
Creating TaskDescriptions For Available Executor Resource Offers¶
```scala
resourceOffers(
  offers: Seq[WorkerOffer]): Seq[Seq[TaskDescription]]
```
resourceOffers takes the resource offers (as WorkerOffers) and generates a collection of tasks (as TaskDescriptions) to launch, given the available resources.

resourceOffers first updates the hostToExecutors and executorIdToHost lookup tables to record new hosts and executors (given the input offers).

For new executors (not in executorIdToRunningTaskIds) resourceOffers notifies the DAGScheduler that an executor was added.

NOTE: TaskSchedulerImpl uses executorIdToRunningTaskIds to track active executors.

CAUTION: FIXME a picture with the executorAdded call from TaskSchedulerImpl to DAGScheduler.

resourceOffers requests the BlacklistTracker to applyBlacklistTimeout and filters out offers on blacklisted nodes and executors.

NOTE: resourceOffers uses the optional BlacklistTracker that was given when TaskSchedulerImpl was created.

CAUTION: FIXME Expand on blacklisting

resourceOffers then randomly shuffles offers (to evenly distribute tasks across executors and avoid over-utilizing some executors) and initializes the local data structures tasks and availableCpus (as shown in the sketch below).
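A self-contained sketch of that set-up (stand-in WorkerOffer; the task type is simplified to String):

```scala
// Shuffle the offers and size the per-offer bookkeeping structures.
import scala.collection.mutable.ArrayBuffer
import scala.util.Random

case class WorkerOffer(executorId: String, host: String, cores: Int)

val offers = Seq(WorkerOffer("1", "host1", 8), WorkerOffer("2", "host2", 4))
val shuffledOffers = Random.shuffle(offers)

val tasks = shuffledOffers.map(o => new ArrayBuffer[String](o.cores)) // task slots per offer
val availableCpus = shuffledOffers.map(_.cores).toArray
```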
resourceOffers takes TaskSets in scheduling order from the top-level (root) Schedulable Pool.

rootPool is configured when TaskSchedulerImpl is initialized.

rootPool is part of the TaskScheduler contract and is exclusively managed by SchedulableBuilders, i.e. FIFOSchedulableBuilder and FairSchedulableBuilder (that manage registering TaskSetManagers with the root pool).

NOTE: A TaskSetManager manages execution of the tasks in a single TaskSet that represents a single Stage.

For every TaskSetManager (in scheduling order), you should see the following DEBUG message in the logs:

parentName: [name], name: [name], runningTasks: [count]
Only if a new executor was added, resourceOffers notifies every TaskSetManager about the change (to recompute locality preferences).
resourceOffers then takes every TaskSetManager (in scheduling order) and offers them each node in increasing order of locality levels (per the TaskSetManager's valid locality levels).

NOTE: A TaskSetManager computes locality levels of the tasks it manages.

For every TaskSetManager and the TaskSetManager's valid locality level, resourceOffers tries to find tasks to schedule (on executors) as long as the TaskSetManager manages to launch a task (given the locality level).
If resourceOffers did not manage to offer resources to a TaskSetManager so it could launch any task, resourceOffers requests the TaskSetManager to abort the TaskSet if completely blacklisted.

When resourceOffers managed to launch a task, the internal hasLaunchedTask flag becomes true.
resourceOffers is used when:

- DriverEndpoint (RPC endpoint) is requested to make executor resource offers
- LocalEndpoint is requested to revive resource offers
Finding Tasks from TaskSetManager to Schedule on Executors¶
```scala
resourceOfferSingleTaskSet(
  taskSet: TaskSetManager,
  maxLocality: TaskLocality,
  shuffledOffers: Seq[WorkerOffer],
  availableCpus: Array[Int],
  tasks: Seq[ArrayBuffer[TaskDescription]]): Boolean
```
resourceOfferSingleTaskSet takes every WorkerOffer (from the input shuffledOffers) and, only if the number of available CPU cores (per the input availableCpus) is at least spark.task.cpus, requests the TaskSetManager (the input taskSet) to find a Task to execute given the resource offer (an executor, a host, and the input maxLocality).

resourceOfferSingleTaskSet adds the task to the input tasks collection.

resourceOfferSingleTaskSet records the task id and the TaskSetManager in the following registries: taskIdToTaskSetManager, taskIdToExecutorId, and executorIdToRunningTaskIds.

resourceOfferSingleTaskSet decreases spark.task.cpus from the input availableCpus (for the WorkerOffer).

NOTE: resourceOfferSingleTaskSet makes sure that the number of available CPU cores (in the input availableCpus per WorkerOffer) never goes below 0.
If there is a TaskNotSerializableException, you should see the following ERROR message in the logs:

Resource offer failed, task set [name] was not serializable
resourceOfferSingleTaskSet returns whether a task was launched or not.
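A self-contained sketch of the offer loop described above (stand-in types; CPUS_PER_TASK stands in for spark.task.cpus):

```scala
// For every offer with enough free cores, ask the task set for a task
// and charge CPUS_PER_TASK for each launch.
case class Offer(executorId: String, host: String)

val CPUS_PER_TASK = 1
def resourceOfferSingleTaskSet(
    offers: Seq[Offer],
    availableCpus: Array[Int],
    offerTask: (String, String) => Option[String]): Boolean = {
  var launchedTask = false
  for (i <- offers.indices if availableCpus(i) >= CPUS_PER_TASK) {
    for (task <- offerTask(offers(i).executorId, offers(i).host)) {
      availableCpus(i) -= CPUS_PER_TASK
      assert(availableCpus(i) >= 0) // never go below zero free cores
      launchedTask = true
      println(s"launching $task on executor ${offers(i).executorId}")
    }
  }
  launchedTask
}

resourceOfferSingleTaskSet(
  Seq(Offer("1", "host1")), Array(2), (_, _) => Some("task 0.0"))
```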
resourceOfferSingleTaskSet is used when:

- TaskSchedulerImpl is requested to resourceOffers
Task Locality Preference¶
TaskLocality represents a task locality preference and can be one of the following (from the most localized to the widest):

1. PROCESS_LOCAL
2. NODE_LOCAL
3. NO_PREF
4. RACK_LOCAL
5. ANY
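A sketch of an Enumeration-style TaskLocality (as in Spark, declaration order gives the comparison order):

```scala
// Locality levels compare from the most localized to the widest.
object TaskLocality extends Enumeration {
  val PROCESS_LOCAL, NODE_LOCAL, NO_PREF, RACK_LOCAL, ANY = Value
}

assert(TaskLocality.PROCESS_LOCAL < TaskLocality.ANY)
```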
WorkerOffer — Free CPU Cores on Executor¶
```scala
WorkerOffer(
  executorId: String,
  host: String,
  cores: Int)
```

WorkerOffer represents a resource offer with free CPU cores available on an executorId executor on a host.
workerRemoved¶

```scala
workerRemoved(
  workerId: String,
  host: String,
  message: String): Unit
```
workerRemoved is part of the TaskScheduler abstraction.
workerRemoved prints out the following INFO message to the logs:
Handle removed worker [workerId]: [message]
In the end, workerRemoved requests the DAGScheduler to workerRemoved.
Logging¶

Enable ALL logging level for the org.apache.spark.scheduler.TaskSchedulerImpl logger to see what happens inside.

Add the following line to your log4j configuration (see the example below).

Refer to Logging.
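For example, assuming the log4j 1.x properties format that older Spark versions read from conf/log4j.properties:

```properties
log4j.logger.org.apache.spark.scheduler.TaskSchedulerImpl=ALL
```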
Internal Properties¶

| Name | Description |
| --- | --- |
| dagScheduler | DAGScheduler |
| executorIdToHost | Lookup table of hosts per executor |
| executorIdToRunningTaskIds | Lookup table of running tasks per executor |
| executorIdToTaskCount | Lookup table of the number of running tasks per executor |
| executorsByHost | Collection of executors per host |
| hasLaunchedTask | Flag...FIXME |
| hostToExecutors | Lookup table of executors per host in a cluster |
| hostsByRack | Lookup table of hosts per rack |
| nextTaskId | The next task id (counting from 0). Used when TaskSchedulerImpl... |
| rootPool | Schedulable pool. Used when TaskSchedulerImpl... |
| schedulableBuilder | SchedulableBuilder, created when TaskSchedulerImpl is requested to initialize: FIFOSchedulableBuilder for the FIFO scheduling policy (the default) and FairSchedulableBuilder for the FAIR scheduling policy. Use the spark.scheduler.mode configuration property to select the scheduling policy. |
| schedulingMode | SchedulingMode. Used when TaskSchedulerImpl... |
| taskSetsByStageIdAndAttempt | Lookup table of TaskSets by stage and attempt IDs |
| taskIdToExecutorId | Lookup table of executors by task id |
| taskIdToTaskSetManager | Registry of active TaskSetManagers per task id |