FilePartition
maxSplitBytes
```scala
maxSplitBytes(
  sparkSession: SparkSession,
  selectedPartitions: Seq[PartitionDirectory]): Long
```
maxSplitBytes can be adjusted based on the following configuration properties (see the sketch after the list):
- spark.sql.files.maxPartitionBytes
- spark.sql.files.openCostInBytes
- spark.sql.files.minPartitionNum (default: Default Parallelism of Leaf Nodes)
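For illustration only, these properties can be set when a SparkSession is built; the values below are arbitrary examples, not recommendations:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative values only (the property names are the ones listed above)
val spark = SparkSession.builder()
  .appName("maxSplitBytes-demo")
  .master("local[*]")
  .config("spark.sql.files.maxPartitionBytes", "128MB") // the default
  .config("spark.sql.files.openCostInBytes", "4MB")     // the default
  .config("spark.sql.files.minPartitionNum", "8")
  .getOrCreate()
```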
maxSplitBytes calculates the total size of all the files (in the given PartitionDirectories) with spark.sql.files.openCostInBytes overhead added to the size of every file.
PartitionDirectory is a collection of FileStatuses (Apache Hadoop) along with partition values (if there are any).
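A minimal sketch of that total-size step, assuming selectedFiles stands in for the FileStatuses gathered from the given PartitionDirectories (the helper name and parameters are illustrative, not the actual FilePartition code):

```scala
import org.apache.hadoop.fs.FileStatus

// Total size of all files, with the open cost added to the size of every file.
// `selectedFiles` and `openCostInBytes` are assumed inputs for this sketch.
def totalBytesWithOpenCost(selectedFiles: Seq[FileStatus], openCostInBytes: Long): Long =
  selectedFiles.map(_.getLen + openCostInBytes).sum
```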
maxSplitBytes calculates how many bytes to allow per partition (bytesPerCore), which is the total size of all the files divided by the spark.sql.files.minPartitionNum configuration property.
In the end, maxSplitBytes is spark.sql.files.maxPartitionBytes unless the maximum of spark.sql.files.openCostInBytes and bytesPerCore is even smaller.
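Put together, a sketch of the calculation described above, with the three configuration values passed in explicitly (the function and parameter names are illustrative, not the actual FilePartition code):

```scala
// A sketch of the maxSplitBytes calculation; names are illustrative.
def maxSplitBytesSketch(
    maxPartitionBytes: Long, // spark.sql.files.maxPartitionBytes
    openCostInBytes: Long,   // spark.sql.files.openCostInBytes
    minPartitionNum: Int,    // spark.sql.files.minPartitionNum
    fileSizes: Seq[Long]): Long = {
  // Total size of all files with the open cost added to every file
  val totalBytes = fileSizes.map(_ + openCostInBytes).sum
  // How many bytes to allow per partition
  val bytesPerCore = totalBytes / minPartitionNum
  // maxPartitionBytes, unless max(openCostInBytes, bytesPerCore) is smaller
  Math.min(maxPartitionBytes, Math.max(openCostInBytes, bytesPerCore))
}
```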
maxSplitBytes is used when:
- FileSourceScanExec physical operator is requested to create an RDD for scanning (and creates a FileScanRDD)
- FileScan is requested for partitions