Spark Configuration Properties

Properties


spark.blockManager.port

Port for block managers to listen on when a more specific setting is not provided (e.g. spark.driver.blockManager.port for the driver).

Default: 0

In Spark on Kubernetes the default port is 7079

spark.default.parallelism

Number of partitions to use for HashPartitioner when the number of partitions is not given explicitly

spark.default.parallelism corresponds to the default parallelism of a scheduler backend:

  • Local mode: the number of cores on the local machine

  • Coarse-grained cluster modes: the total number of cores on all executor nodes, or 2, whichever is larger
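
A minimal sketch (names and values are illustrative, not from the source) of setting the property explicitly and reading the effective value back:

    import org.apache.spark.{SparkConf, SparkContext}

    // Illustrative: ask for 200 partitions wherever the default parallelism
    // applies (e.g. reduceByKey without an explicit numPartitions).
    val conf = new SparkConf()
      .setAppName("default-parallelism-demo")   // hypothetical application name
      .setMaster("local[4]")
      .set("spark.default.parallelism", "200")

    val sc = new SparkContext(conf)
    println(sc.defaultParallelism)              // 200 once set explicitly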

spark.diskStore.subDirectories

Default: 64

spark.driver.blockManager.port

Port the block manager on the driver listens on

spark.driver.maxResultSize

Maximum total size of the serialized results of all tasks in a TaskSet

Default: 1g

Used when:

spark.executor.extraClassPath

User-defined class path for executors, i.e. URLs representing user-defined class path entries that are added to an executor’s class path. URLs are separated by system-dependent path separator, i.e. : on Unix-like systems and ; on Microsoft Windows.

Default: (empty)

Used when:

spark.executor.cores

Number of cores of an Executor

spark.executor.extraJavaOptions

Extra Java options of an Executor

Used when Spark on YARN’s ExecutorRunnable is requested to prepare the command to launch CoarseGrainedExecutorBackend in a YARN container

spark.executor.extraLibraryPath

Extra library paths separated by system-dependent path separator, i.e. : on Unix/MacOS systems and ; on Microsoft Windows

Used when Spark on YARN’s ExecutorRunnable is requested to prepare the command to launch CoarseGrainedExecutorBackend in a YARN container

spark.executor.uri

Equivalent to SPARK_EXECUTOR_URI

spark.executor.logs.rolling.time.interval

spark.executor.logs.rolling.strategy

spark.executor.logs.rolling.maxRetainedFiles

spark.executor.logs.rolling.maxSize

spark.executor.id

spark.executor.heartbeatInterval

Interval between heartbeats (with metrics of active tasks) that an Executor sends to the driver

Default: 10s

spark.executor.heartbeat.maxFailures

Number of times an Executor will try to send heartbeats to the driver before it gives up and exits (with exit code 56).

Default: 60
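
For example, with the defaults above (spark.executor.heartbeatInterval = 10s and spark.executor.heartbeat.maxFailures = 60), an executor keeps retrying for roughly 60 × 10s = 600s, i.e. about 10 minutes of failed heartbeats, before it gives up and exits with exit code 56.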

spark.executor.instances

Number of Executors in use

Default: 0

spark.storage.unrollMemoryThreshold

Initial per-task memory size needed to store a block in memory.

Default: 1024 * 1024

Used when MemoryStore is requested to putIteratorAsValues and putIteratorAsBytes

spark.task.maxDirectResultSize

Default: 1048576B

spark.executor.userClassPathFirst

Flag to control whether to load classes in user jars before those in Spark jars

Default: false

spark.executor.memory

Amount of memory to use for an Executor

Default: 1g

Equivalent to SPARK_EXECUTOR_MEMORY environment variable.
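
A minimal sketch (sizes are illustrative) of setting the related executor-sizing properties together; in cluster deployments these are usually passed with spark-submit --conf, but they can equally be set on a SparkConf before the SparkContext is created:

    import org.apache.spark.SparkConf

    // Illustrative sizing: 10 executors, each with 4 cores and 4g of heap.
    val conf = new SparkConf()
      .set("spark.executor.instances", "10")
      .set("spark.executor.cores", "4")
      .set("spark.executor.memory", "4g")   // same as SPARK_EXECUTOR_MEMORY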

spark.executor.port

spark.launcher.port

spark.launcher.secret

spark.locality.wait

Base scheduling delay for locality-aware delay scheduling (PROCESS_LOCAL, NODE_LOCAL, and RACK_LOCAL TaskLocalities), used when a locality-specific setting is not set.

Default: 3s

spark.locality.wait.node

Scheduling delay for NODE_LOCAL TaskLocality

Default: The value of spark.locality.wait

spark.locality.wait.process

Scheduling delay for PROCESS_LOCAL TaskLocality

Default: The value of spark.locality.wait

spark.locality.wait.rack

Scheduling delay for RACK_LOCAL TaskLocality

Default: The value of spark.locality.wait
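
A minimal sketch (delays are illustrative) of tuning the locality waits together: spark.locality.wait sets the base delay, and the per-level properties override it for their TaskLocality only:

    import org.apache.spark.SparkConf

    // Illustrative: be more patient for process-local slots,
    // give up on rack locality faster. Per-level properties that are
    // not set fall back to spark.locality.wait.
    val conf = new SparkConf()
      .set("spark.locality.wait", "3s")
      .set("spark.locality.wait.process", "5s")
      .set("spark.locality.wait.node", "3s")
      .set("spark.locality.wait.rack", "1s")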

spark.logging.exceptionPrintInterval

How frequently to reprint duplicate exceptions in full (in millis).

Default: 10000

spark.master

Master URL to connect a Spark application to

spark.scheduler.allocation.file

Path to the configuration file of FairSchedulableBuilder

Default: fairscheduler.xml (on a Spark application’s class path)

spark.scheduler.executorTaskBlacklistTime

How long to wait before a task can be re-launched on the executor where it previously failed. This prevents repeated task failures caused by executor failures.

Default: 0L

spark.scheduler.mode

Scheduling Mode of the TaskSchedulerImpl, i.e. case-insensitive name of the scheduling mode that TaskSchedulerImpl uses to choose between the available SchedulableBuilders for task scheduling (of tasks of jobs submitted for execution to the same SparkContext)

Default: FIFO

Supported values:

  • FAIR for fair sharing (of cluster resources)

  • FIFO (default) for queueing jobs one after another

Task scheduling is an algorithm that is used to assign cluster resources (CPU cores and memory) to tasks (that are part of jobs with one or more stages). Fair sharing allows tasks of different jobs (all submitted to the same SparkContext) to execute at the same time. In FIFO scheduling mode jobs are queued and executed one after another, with the job at the head of the queue getting cluster resources first (regardless of how many resources the job really uses), which could lead to inefficient utilization of cluster resources and a longer execution of the Spark application overall.

Scheduling mode is particularly useful in multi-tenant environments in which a single SparkContext could be shared across different users (to make cluster resource utilization more efficient).

Use the web UI to find the current scheduling mode (e.g. the Environment tab under Spark Properties, or the Jobs tab's Scheduling Mode).
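
A minimal sketch of enabling fair scheduling and routing jobs of a thread to a pool (the pool name is hypothetical; pools themselves are defined in the file configured with spark.scheduler.allocation.file):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("fair-scheduling-demo")    // hypothetical application name
      .setMaster("local[4]")
      .set("spark.scheduler.mode", "FAIR")   // case-insensitive: FAIR or FIFO

    val sc = new SparkContext(conf)

    // Jobs submitted from this thread run in the (hypothetical) "reports" pool;
    // clearing the local property returns subsequent jobs to the default pool.
    sc.setLocalProperty("spark.scheduler.pool", "reports")
    sc.parallelize(1 to 100).count()
    sc.setLocalProperty("spark.scheduler.pool", null)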

spark.starvation.timeout

Threshold above which Spark warns a user that an initial TaskSet may be starved

Default: 15s

spark.storage.exceptionOnPinLeak

spark.task.cpus

The number of CPU cores to allocate for (schedule) a single task

Default: 1

Used when:

spark.task.maxFailures

The number of individual task failures before giving up on the entire TaskSet and the job afterwards

Default: 4

spark.unsafe.exceptionOnMemoryLeak

spark.memory.offHeap.size

spark.memory.offHeap.size is the absolute amount of memory in bytes which can be used for off-heap allocation. This setting has no impact on heap memory usage, so if your executors' total memory consumption must fit within some hard limit then be sure to shrink your JVM heap size accordingly.

Default: 0

Must be set to a positive value when spark.memory.offHeap.enabled is enabled (true).

Must not be negative

spark.memory.storageFraction

spark.memory.storageFraction controls the fraction of the unified memory region (set aside by spark.memory.fraction) that is reserved for storage and immune to eviction.

Default: 0.5

spark.memory.fraction

spark.memory.fraction is the fraction of JVM heap space used for execution and storage.

Default: 0.6
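
A back-of-the-envelope sketch (heap size is illustrative; the 300 MB term is the reserved system memory UnifiedMemoryManager subtracts before applying the fractions) of how the two settings interact:

    // Assumed: a 4g executor heap and the defaults
    // spark.memory.fraction = 0.6, spark.memory.storageFraction = 0.5.
    val heap          = 4L * 1024 * 1024 * 1024           // 4g JVM heap
    val reserved      = 300L * 1024 * 1024                 // reserved system memory
    val unifiedMemory = ((heap - reserved) * 0.6).toLong   // execution + storage
    val storageRegion = (unifiedMemory * 0.5).toLong       // eviction-protected storage
    // unifiedMemory is roughly 2.2g and storageRegion roughly 1.1g;
    // storage may still borrow unused execution memory beyond this at runtime.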

spark.memory.useLegacyMode

spark.memory.useLegacyMode controls the type of MemoryManager to use. When enabled (true), Spark uses the legacy StaticMemoryManager; otherwise (false) the UnifiedMemoryManager.

Default: false

spark.memory.offHeap.enabled

spark.memory.offHeap.enabled controls whether Spark will attempt to use off-heap memory for certain operations (true) or not (false).

Default: false

Determines whether Tungsten memory will be allocated on the JVM heap or off-heap (using sun.misc.Unsafe).

If enabled, spark.memory.offHeap.size has to be greater than 0.

Used when MemoryManager is requested for tungstenMemoryMode.
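
A minimal sketch of enabling off-heap Tungsten memory; the size is illustrative and, per the notes above, must be positive once the flag is enabled:

    import org.apache.spark.SparkConf

    // Illustrative: 2g of off-heap memory. Shrink the JVM heap accordingly
    // if the executor's total memory must stay within a hard limit.
    val conf = new SparkConf()
      .set("spark.memory.offHeap.enabled", "true")
      .set("spark.memory.offHeap.size", "2g")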

spark.shuffle.file.buffer

Size of the in-memory buffer for each shuffle file output stream, in KiB unless otherwise specified. These buffers reduce the number of disk seeks and system calls made in creating intermediate shuffle files.

Default: 32k

Must be greater than 0 and less than or equal to 2097151 ((Integer.MAX_VALUE - 15) / 1024)

spark.shuffle.spill.batchSize

Size of object batches when reading or writing from serializers.

Default: 10000

spark.shuffle.spill.initialMemoryThreshold

Initial threshold for the size of an in-memory collection

Default: 5 * 1024 * 1024

Used by Spillable

spark.shuffle.spill.numElementsForceSpillThreshold

(internal) The maximum number of elements in memory before forcing the shuffle sorter to spill. Claimed to be used for testing only

Default: Integer.MAX_VALUE

With the default value the sorter is never forced to spill until other limits are reached, e.g. the maximum page size limit for the pointer array in the sorter.

Used when:

  • ShuffleExternalSorter is created

  • Spillable is requested to maybeSpill

spark.shuffle.manager

Specifies the fully-qualified class name or the alias of the ShuffleManager in a Spark application

Default: sort

The supported aliases:

  • sort

  • tungsten-sort

Used when SparkEnv object is requested to create a "base" SparkEnv for a driver or an executor

spark.shuffle.mapOutput.dispatcher.numThreads

Default: 8

spark.shuffle.mapOutput.minSizeForBroadcast

Size threshold of the serialized shuffle map output statuses above which MapOutputTrackerMaster uses a broadcast variable to send them to executors

Default: 512k

Must be below spark.rpc.message.maxSize (to prevent sending an RPC message that is too large)

spark.rpc.message.maxSize

Maximum allowed message size for RPC communication (in MB unless specified)

Default: 128

Generally only applies to map output size (serialized) information sent between executors and the driver.

Increase this if you are running jobs with many thousands of map and reduce tasks and see messages about the RPC message size.

spark.shuffle.minNumPartitionsToHighlyCompress

(internal) Minimum number of partitions (threshold) for which the MapStatus object creates a HighlyCompressedMapStatus (instead of a CompressedMapStatus) when requested for one (for ShuffleWriters).

Default: 2000

Must be a positive integer (above 0)

spark.shuffle.reduceLocality.enabled

Enables locality preferences for reduce tasks

Default: true

When enabled (true), MapOutputTrackerMaster will compute the preferred hosts on which to run a given map output partition in a given shuffle, i.e. the nodes that the most outputs for that partition are on.

spark.shuffle.sort.bypassMergeThreshold

Maximum number of reduce partitions below which SortShuffleManager avoids merge-sorting data when there is no map-side aggregation

Default: 200

spark.shuffle.sort.initialBufferSize

Initial buffer size for sorting

Default: 4096

Used exclusively when UnsafeShuffleWriter is requested to open (and creates a ShuffleExternalSorter)

spark.shuffle.sync

Controls whether DiskBlockObjectWriter should force outstanding writes to disk while committing a single atomic block, i.e. all operating system buffers should synchronize with the disk to ensure that all changes to a file are in fact recorded in the storage.

Default: false

Used when BlockManager is requested for a DiskBlockObjectWriter

spark.shuffle.unsafe.file.output.buffer

Buffer size of the buffered output stream used by the unsafe shuffle writer when writing each partition to the output file. In KiB unless otherwise specified.

Default: 32k

Must be greater than 0 and less than or equal to 2097151 ((Integer.MAX_VALUE - 15) / 1024)

spark.scheduler.revive.interval

Time (in ms) between resource offer revives

Default: 1s

spark.scheduler.minRegisteredResourcesRatio

Minimum ratio of (registered resources / total expected resources) before submitting tasks

Default: 0

spark.scheduler.maxRegisteredResourcesWaitingTime

Maximum time to wait for sufficient resources to register before task scheduling begins

Default: 30s

spark.file.transferTo

When enabled (true), copying data between two Java FileInputStreams uses Java FileChannels (Java NIO) to improve copy performance.

Default: true

spark.shuffle.service.enabled

Controls whether to use the External Shuffle Service

Default: false

When enabled (true), the driver registers itself with the shuffle service.
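
A minimal sketch of enabling the External Shuffle Service (the service itself must already be running on the cluster nodes, e.g. as the YARN auxiliary service); pairing it with dynamic allocation is a common, though here only assumed, use case:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.shuffle.service.enabled", "true")
      .set("spark.shuffle.service.port", "7337")        // the default, shown for clarity
      .set("spark.dynamicAllocation.enabled", "true")   // common companion setting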

spark.shuffle.service.port

Default: 7337

spark.shuffle.compress

Controls whether to compress shuffle output when stored

Default: true

spark.shuffle.unsafe.fastMergeEnabled

Enables fast merge strategy for UnsafeShuffleWriter to merge spill files.

Default: true

spark.rdd.compress

Controls whether to compress RDD partitions when stored serialized.

Default: false

spark.shuffle.spill.compress

Controls whether to compress shuffle output temporarily spilled to disk.

Default: true

spark.block.failures.beforeLocationRefresh

Default: 5

spark.io.encryption.enabled

Controls whether to use IO encryption

Default: false

spark.closure.serializer

Default: org.apache.spark.serializer.JavaSerializer

spark.serializer

Default: org.apache.spark.serializer.JavaSerializer

spark.io.compression.codec

The default CompressionCodec

Default: lz4
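
A minimal sketch of switching the codec; lz4, lzf, snappy and zstd are the commonly used aliases, and the per-codec block or buffer sizes are the properties listed next:

    import org.apache.spark.SparkConf

    // Illustrative: use Zstandard for IO compression; the alias resolves
    // to ZStdCompressionCodec and keeps the default 32k buffer.
    val conf = new SparkConf()
      .set("spark.io.compression.codec", "zstd")
      .set("spark.io.compression.zstd.bufferSize", "32k")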

spark.io.compression.lz4.blockSize

The block size of the LZ4CompressionCodec

Default: 32k

spark.io.compression.snappy.blockSize

The block size of the SnappyCompressionCodec

Default: 32k

spark.io.compression.zstd.bufferSize

The buffer size of the BufferedOutputStream of the ZStdCompressionCodec

Default: 32k

The buffer is used to avoid the overhead of excessive JNI calls while compressing or uncompressing small amount of data

spark.io.compression.zstd.level

The compression level of the ZStdCompressionCodec

Default: 1

The default level is the fastest of all with reasonably high compression ratio

spark.buffer.size

Default: 65536

spark.cleaner.referenceTracking.cleanCheckpoints

Enables cleaning checkpoint files when a checkpointed reference is out of scope

Default: false

spark.cleaner.periodicGC.interval

Controls how often to trigger a garbage collection

Default: 30min

spark.cleaner.referenceTracking

Controls whether to enable ContextCleaner

Default: true

spark.cleaner.referenceTracking.blocking

Controls whether the cleaning thread should block on cleanup tasks (other than shuffle, which is controlled by spark.cleaner.referenceTracking.blocking.shuffle)

Default: true

spark.cleaner.referenceTracking.blocking.shuffle

Controls whether the cleaning thread should block on shuffle cleanup tasks.

Default: false

spark.broadcast.blockSize

The size of a block (in kB unless the unit is specified)

Default: 4m

spark.broadcast.compress

Controls broadcast compression

Default: true

spark.app.id

Unique identifier of a Spark application that Spark uses to uniquely identify metric sources.

Set when SparkContext is created (right after TaskScheduler is started that actually gives the identifier).

spark.app.name

Application Name

Default: (undefined)

spark.rpc.lookupTimeout

Timeout for RPC remote endpoint lookups (the default endpoint lookup timeout)

Default: 120s

spark.rpc.numRetries

Number of attempts to send a message to and receive a response from a remote endpoint.

Default: 3

spark.rpc.retry.wait

Time to wait between retries.

Default: 3s

spark.rpc.askTimeout

Timeout for RPC ask calls

Default: 120s

spark.network.timeout

Default network timeout for RPC remote endpoint lookup; used as the fallback for spark.rpc.askTimeout and spark.rpc.lookupTimeout when they are not set.

Default: 120s
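
As an illustration only (values are arbitrary), the shared network timeout and the RPC retry settings can be raised together on congested clusters; spark.rpc.askTimeout and spark.rpc.lookupTimeout fall back to spark.network.timeout when they are not set:

    import org.apache.spark.SparkConf

    // Illustrative: a more forgiving network configuration.
    val conf = new SparkConf()
      .set("spark.network.timeout", "300s")
      .set("spark.rpc.numRetries", "5")
      .set("spark.rpc.retry.wait", "5s")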