DeltaConfig¶
DeltaConfig[T] represents a named configuration property of a delta table with values of type T.
Creating Instance¶
DeltaConfig takes the following to be created:
- Configuration Key
- Default Value
- Conversion function (from a text representation of the DeltaConfig to the T type, i.e. String => T)
- Validation function (that guards against incorrect values, i.e. T => Boolean)
- Help message
- (optional) Minimum version of protocol supported (default: undefined)
DeltaConfig is created when:
- DeltaConfigs utility is used to build a DeltaConfig
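For illustration, the following is a minimal sketch of an Int-valued table property built from the constructor parameters listed above. The property name, default value and validation are made up for the example, and the sketch assumes the case-class constructor accepts the arguments in the order listed (key, default value, conversion function, validation function, help message).

import org.apache.spark.sql.delta.DeltaConfig

// Hypothetical Int-valued table property (not part of Delta Lake)
val sampleProperty: DeltaConfig[Int] = DeltaConfig[Int](
  "sampleIndexedCols",                        // configuration key
  "32",                                       // default value (text representation)
  _.toInt,                                    // conversion function (String => Int)
  _ >= -1,                                    // validation function (Int => Boolean)
  "needs to be larger than or equal to -1.")  // help message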
Reading Configuration Property From Metadata¶
fromMetaData(
metadata: Metadata): T
fromMetaData looks up the key in the configuration of the given Metadata and falls back to the default value when the key is not found.
In the end, fromMetaData converts the text representation to the target type T using the fromString conversion function.
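A minimal standalone sketch of that lookup-then-convert behaviour (assuming the table configuration is exposed as a Map[String, String], as with the configuration of the Metadata action); the function name is made up for the example.

// Not the actual Delta Lake code; a sketch of the behaviour described above
def readProperty[T](
    configuration: Map[String, String],  // Metadata.configuration
    key: String,
    defaultValue: String,
    fromString: String => T): T = {
  // Look the key up in the table configuration or fall back to the default value
  val text = configuration.getOrElse(key, defaultValue)
  // Convert the text representation to the target type T
  fromString(text)
}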
fromMetaData is used when:
- Checkpoints utility is used to buildCheckpoint
- DeltaErrors utility is used to logFileNotFoundException
- DeltaLog is requested for checkpointInterval and deletedFileRetentionDuration table properties, and to assert a table is not read-only
- MetadataCleanup is requested for the enableExpiredLogCleanup and the deltaRetentionMillis
- OptimisticTransactionImpl is requested to commit
- Snapshot is requested for the numIndexedCols
Demo¶
import org.apache.spark.sql.delta.{DeltaConfig, DeltaConfigs}
scala> :type DeltaConfigs.TOMBSTONE_RETENTION
org.apache.spark.sql.delta.DeltaConfig[org.apache.spark.unsafe.types.CalendarInterval]
import org.apache.spark.sql.delta.DeltaLog
val path = "/tmp/delta/t1"
val t1 = DeltaLog.forTable(spark, path)
val metadata = t1.snapshot.metadata
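// Read the tombstone (deleted file) retention table property, falling back to its default when the table does not set it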
val retention = DeltaConfigs.TOMBSTONE_RETENTION.fromMetaData(metadata)
scala> :type retention
org.apache.spark.unsafe.types.CalendarInterval
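As a follow-up, the returned CalendarInterval can be inspected field by field (assuming Spark 3.x, where CalendarInterval exposes months, days and microseconds):

scala> retention.months
scala> retention.days
scala> retention.microseconds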