DeltaConfig¶
DeltaConfig (of type T) represents a named configuration property of a delta table with values of type T.
Creating Instance¶
DeltaConfig takes the following to be created (see the sketch after the list):

- Configuration Key
- Default Value
- Conversion function (from the text representation of the DeltaConfig to the T type, i.e. String => T)
- Validation function (that guards against incorrect values, i.e. T => Boolean)
- Help message
- (optional) Minimum version of protocol supported (default: undefined)
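For orientation only, the constructor arguments above can be pictured as a plain case class. This is a minimal sketch; the class shape and field names are assumptions, not the actual Delta Lake sources.

import org.apache.spark.sql.delta.actions.Protocol

// Minimal sketch of the shape of DeltaConfig (field names are assumptions).
case class DeltaConfig[T](
  key: String,                                      // Configuration Key
  defaultValue: String,                             // Default Value (in text form)
  fromString: String => T,                          // Conversion function
  validationFunction: T => Boolean,                 // Validation function
  helpMessage: String,                              // Help message
  minimumProtocolVersion: Option[Protocol] = None)  // Minimum version of protocol supported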
DeltaConfig is created when:

- DeltaConfigs utility is used to build a DeltaConfig
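Purely for illustration (building on the sketched case class above), registering a property in a DeltaConfigs-like utility could look as follows. The buildConfig helper and the sample values are assumptions, not the verbatim sources.

// Hypothetical helper standing in for whatever DeltaConfigs uses internally.
def buildConfig[T](key: String, default: String, fromString: String => T,
    validate: T => Boolean, help: String): DeltaConfig[T] =
  DeltaConfig(key, default, fromString, validate, help)

// Illustrative registration of a checkpoint-interval-like property.
val CHECKPOINT_INTERVAL = buildConfig[Int](
  "checkpointInterval",              // Configuration Key
  "10",                              // Default Value
  _.toInt,                           // Conversion function (String => Int)
  _ > 0,                             // Validation function
  "needs to be a positive integer.") // Help message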
Reading Configuration Property From Metadata¶
fromMetaData(
  metadata: Metadata): T
fromMetaData looks up the key in the configuration of the given Metadata. If the key is not found, fromMetaData falls back to the default value.

In the end, fromMetaData converts the text representation to the T type using the fromString conversion function.
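In terms of the constructor arguments above, the behaviour boils down to a lookup with a fallback followed by the conversion. The following is a conceptual sketch of the method body, not the verbatim sources; metadata.configuration is the Map[String, String] of table properties.

import org.apache.spark.sql.delta.actions.Metadata

// Conceptual sketch: look the key up in the table configuration, fall back to
// the default value, then convert the text value to T with fromString.
def fromMetaData(metadata: Metadata): T =
  fromString(metadata.configuration.getOrElse(key, defaultValue))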
fromMetaData is used when:

- Checkpoints utility is used to build a Checkpoint
- DeltaErrors utility is used to logFileNotFoundException
- DeltaLog is requested for checkpointInterval and deletedFileRetentionDuration table properties, and to assert a table is not read-only
- MetadataCleanup is requested for the enableExpiredLogCleanup and the deltaRetentionMillis
- OptimisticTransactionImpl is requested to commit
- Snapshot is requested for the numIndexedCols
Demo¶
import org.apache.spark.sql.delta.{DeltaConfig, DeltaConfigs}
scala> :type DeltaConfigs.TOMBSTONE_RETENTION
org.apache.spark.sql.delta.DeltaConfig[org.apache.spark.unsafe.types.CalendarInterval]
import org.apache.spark.sql.delta.DeltaLog
val path = "/tmp/delta/t1"
val t1 = DeltaLog.forTable(spark, path)
val metadata = t1.snapshot.metadata
val retention = DeltaConfigs.TOMBSTONE_RETENTION.fromMetaData(metadata)
scala> :type retention
org.apache.spark.unsafe.types.CalendarInterval
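As a possible follow-up to the demo (a sketch, assuming the Delta table at /tmp/delta/t1 exists), the underlying delta.deletedFileRetentionDuration table property can be changed and then read back through the same DeltaConfig:

// Sketch: change the table property backing TOMBSTONE_RETENTION and read it back.
spark.sql(s"ALTER TABLE delta.`$path` SET TBLPROPERTIES ('delta.deletedFileRetentionDuration' = 'interval 2 weeks')")

val updatedRetention = DeltaConfigs.TOMBSTONE_RETENTION.fromMetaData(t1.update().metadata)

scala> :type updatedRetention
org.apache.spark.unsafe.types.CalendarInterval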