BaseDynamicPartitionDataWriter¶
BaseDynamicPartitionDataWriter is an extension of the FileFormatDataWriter abstraction for dynamic partition writers.
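For orientation, dynamic partitioning means the partition directories are derived from the data being written (one directory per distinct combination of partition values). A minimal user-level sketch that exercises this code path (assuming a running SparkSession named spark; the output path and column names are arbitrary):

```scala
// Dynamic partition write: partition values come from the data itself (the grp column).
spark.range(0, 100)
  .selectExpr("id", "id % 3 as grp")
  .write
  .partitionBy("grp")                      // one directory per distinct grp value
  .mode("overwrite")
  .parquet("/tmp/dynamic-partition-demo")  // produces grp=0/, grp=1/, grp=2/ subdirectories
```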
Implementations¶
- DynamicPartitionDataConcurrentWriter
- DynamicPartitionDataSingleWriter
Creating Instance¶
BaseDynamicPartitionDataWriter takes the following to be created:

- WriteJobDescription
- TaskAttemptContext (Apache Hadoop)
- FileCommitProtocol (Spark Core)
- SQLMetrics
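Put together, the constructor lines up roughly as shown below (a sketch based on reading the Apache Spark source; parameter names and the exact shape of the SQLMetrics argument may differ between Spark versions):

```scala
// Types involved (for reference):
//   WriteJobDescription, FileFormatDataWriter -> org.apache.spark.sql.execution.datasources
//   TaskAttemptContext                        -> org.apache.hadoop.mapreduce
//   FileCommitProtocol                        -> org.apache.spark.internal.io
//   SQLMetric                                 -> org.apache.spark.sql.execution.metric
abstract class BaseDynamicPartitionDataWriter(
    description: WriteJobDescription,
    taskAttemptContext: TaskAttemptContext,
    committer: FileCommitProtocol,
    customMetrics: Map[String, SQLMetric])
  extends FileFormatDataWriter(description, taskAttemptContext, committer, customMetrics)
```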
Abstract Class
BaseDynamicPartitionDataWriter is an abstract class and cannot be created directly. It is created indirectly for the concrete BaseDynamicPartitionDataWriters.
renewCurrentWriter¶
renewCurrentWriter(
  partitionValues: Option[InternalRow],
  bucketId: Option[Int],
  closeCurrentWriter: Boolean): Unit
renewCurrentWriter...FIXME
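In broad strokes, renewCurrentWriter starts a new output file for the given partition values (and optional bucket ID). Below is a condensed, non-verbatim paraphrase of the Apache Spark source; bucket handling and the exact file naming are simplified, and the fields it touches (recordsInFile, updatedPartitions, currentWriter, committer, description, statsTrackers) belong to the enclosing writer:

```scala
import org.apache.spark.sql.catalyst.InternalRow

protected def renewCurrentWriter(
    partitionValues: Option[InternalRow],
    bucketId: Option[Int],
    closeCurrentWriter: Boolean): Unit = {
  recordsInFile = 0
  if (closeCurrentWriter) {
    releaseCurrentWriter()                      // close the file currently being written
  }
  // Partition directory (e.g. "year=2024/month=01") derived from the partition values
  val partDir = partitionValues.map(getPartitionPath(_))
  partDir.foreach(updatedPartitions.add)        // record new partitions for the commit protocol
  // Ask the FileCommitProtocol for a new task-local file in that directory...
  val ext = description.outputWriterFactory.getFileExtension(taskAttemptContext)
  val currentPath = committer.newTaskTempFile(taskAttemptContext, partDir, ext)
  // ...and open a fresh OutputWriter for it
  currentWriter = description.outputWriterFactory.newInstance(
    path = currentPath,
    dataSchema = description.dataColumns.toStructType,
    context = taskAttemptContext)
  statsTrackers.foreach(_.newFile(currentPath)) // notify write-stats trackers of the new file
}
```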
renewCurrentWriter is used when:

- BaseDynamicPartitionDataWriter is requested to renewCurrentWriterIfTooManyRecords
- DynamicPartitionDataSingleWriter is requested to write a record
- DynamicPartitionDataConcurrentWriter is requested to setupCurrentWriterUsingMap
getPartitionPath¶
getPartitionPath: InternalRow => String
Lazy Value
getPartitionPath is a Scala lazy value to guarantee that the code to initialize it is executed once only (when accessed for the first time) and the computed value never changes afterwards.
Learn more in the Scala Language Specification.
getPartitionPath...FIXME
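Given the InternalRow => String type, getPartitionPath evaluates partitionPathExpression (below) against a row of partition values and returns the resulting path string. A non-verbatim sketch, assuming the enclosing class's description and partitionPathExpression:

```scala
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.UnsafeProjection

// Sketch only: project the partition-values row through partitionPathExpression
// and read out the produced path string, e.g. "year=2024/month=01".
protected lazy val getPartitionPath: InternalRow => String = {
  val proj = UnsafeProjection.create(Seq(partitionPathExpression), description.partitionColumns)
  row => proj(row).getString(0)
}
```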
partitionPathExpression¶
partitionPathExpression: Expression
Lazy Value
partitionPathExpression is a Scala lazy value to guarantee that the code to initialize it is executed once only (when accessed for the first time) and the computed value never changes afterwards.
Learn more in the Scala Language Specification.
partitionPathExpression...FIXME
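The expression's job is to turn the partition columns into a Hive-style directory path such as year=2024/month=01: one name=value fragment per partition column, separated by /, with values rendered as (escaped) strings. A simplified, non-verbatim sketch; the real code escapes values through ExternalCatalogUtils.getPartitionPathString, which is omitted here:

```scala
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.catalyst.expressions.{Cast, Concat, Expression, Literal}
import org.apache.spark.sql.types.StringType

// Simplified sketch: "<column name>=<string value>" fragments joined with "/"
private lazy val partitionPathExpression: Expression = Concat(
  description.partitionColumns.zipWithIndex.flatMap { case (c, i) =>
    val fragment = Concat(Seq(
      Literal(s"${c.name}="),
      Cast(c, StringType, Option(description.timeZoneId))))
    if (i == 0) Seq(fragment) else Seq(Literal(Path.SEPARATOR), fragment)
  })
```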