TransactionalWrite — Optimistic Transactional Writers

NOTE: OptimisticTransaction is the default and only known TransactionalWrite in Delta Lake (indirectly as a concrete OptimisticTransactionImpl).

Table 1. TransactionalWrite Contract (Abstract Methods Only)
Method Description

deltaLog

deltaLog: DeltaLog

DeltaLog of the delta table that this transaction is changing

Used when…FIXME

metadata

metadata: Metadata

Metadata of the delta table that this transaction is changing

Used when…FIXME

protocol

protocol: Protocol

Protocol of the delta table that this transaction is changing

Used when…FIXME

snapshot

snapshot: Snapshot

Snapshot of the delta table that this transaction reads from (the version of the table at the time the transaction started)
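The contract above can be sketched as a Scala trait. The stub types below (DeltaLogStub, MetadataStub, ProtocolStub, SnapshotStub) are hypothetical stand-ins for the real Delta Lake types, used only to keep the sketch self-contained:

```scala
// Hypothetical stand-ins for the real DeltaLog, Metadata, Protocol and Snapshot types
case class DeltaLogStub(dataPath: String)
case class MetadataStub(partitionColumns: Seq[String])
case class ProtocolStub(minReaderVersion: Int, minWriterVersion: Int)
case class SnapshotStub(version: Long)

// Sketch of the TransactionalWrite contract (abstract methods only)
trait TransactionalWriteSketch {
  def deltaLog: DeltaLogStub   // DeltaLog of the delta table being changed
  def metadata: MetadataStub   // Metadata of the delta table being changed
  def protocol: ProtocolStub   // Protocol of the delta table being changed
  def snapshot: SnapshotStub   // Snapshot the transaction reads from
}

// A concrete transaction (like OptimisticTransaction) provides all four
class DemoTxn extends TransactionalWriteSketch {
  val deltaLog = DeltaLogStub("/tmp/delta/table")
  val metadata = MetadataStub(Seq("date"))
  val protocol = ProtocolStub(1, 2)
  val snapshot = SnapshotStub(0L)
}
```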

Writing Data Out (Result Of Structured Query) — writeFiles Method

writeFiles(
  data: Dataset[_]): Seq[AddFile]
writeFiles(
  data: Dataset[_],
  writeOptions: Option[DeltaOptions]): Seq[AddFile]
writeFiles(
  data: Dataset[_],
  isOptimize: Boolean): Seq[AddFile]
writeFiles(
  data: Dataset[_],
  writeOptions: Option[DeltaOptions],
  isOptimize: Boolean): Seq[AddFile]
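The three simpler overloads can be thought of as delegating to the most general variant with defaults filled in. A minimal sketch of that pattern, with hypothetical stand-ins (Seq[String] for Dataset, Map[String, String] for DeltaOptions, AddFileStub for AddFile):

```scala
// Hypothetical stand-in for AddFile
case class AddFileStub(path: String)

class WriterSketch {
  // Most general variant; the other overloads delegate to it
  def writeFiles(
      data: Seq[String],
      writeOptions: Option[Map[String, String]],
      isOptimize: Boolean): Seq[AddFileStub] =
    data.map(AddFileStub)

  def writeFiles(data: Seq[String]): Seq[AddFileStub] =
    writeFiles(data, None, isOptimize = false)

  def writeFiles(
      data: Seq[String],
      writeOptions: Option[Map[String, String]]): Seq[AddFileStub] =
    writeFiles(data, writeOptions, isOptimize = false)

  def writeFiles(data: Seq[String], isOptimize: Boolean): Seq[AddFileStub] =
    writeFiles(data, None, isOptimize)
}
```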

writeFiles creates a DeltaInvariantCheckerExec and a DelayedCommitProtocol to write out files to the data path (of the DeltaLog).

writeFiles uses Spark SQL’s FileFormatWriter utility to write out the result of a structured query.

Read up on FileFormatWriter in The Internals of Spark SQL online book.

writeFiles is executed within SQLExecution.withNewExecutionId.

writeFiles can be monitored in the web UI or with a SQLAppStatusListener (using the SparkListenerSQLExecutionStart and SparkListenerSQLExecutionEnd events).

In the end, writeFiles returns the addedStatuses of the DelayedCommitProtocol committer.

Internally, writeFiles turns the hasWritten flag on (true).

After writeFiles, no metadata updates in the transaction are permitted.

writeFiles normalizes the given data dataset (based on the partitionColumns of the Metadata).

writeFiles gets the partitioning columns (getPartitioningColumns) based on the partitionSchema of the Metadata.

writeFiles gets the invariants from the schema of the Metadata.

writeFiles requests a new Execution ID (that is used to track all Spark jobs of FileFormatWriter.write in Spark SQL) with a physical query plan of a new DeltaInvariantCheckerExec unary physical operator (with the executed plan of the normalized query execution as the child operator).
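Put together, the steps above can be sketched end to end. Everything below is a hypothetical, heavily simplified stand-in (Seq[String] for the dataset, CommitterStub for DelayedCommitProtocol, trivial placeholders for normalization, partitioning and invariants); it mirrors only the order of operations, not the real implementation:

```scala
import scala.collection.mutable.ArrayBuffer

case class AddedFileStub(path: String)

// Stand-in for DelayedCommitProtocol: collects the statuses of added files
class CommitterStub {
  val addedStatuses = ArrayBuffer.empty[AddedFileStub]
  def write(rows: Seq[String]): Unit =
    rows.indices.foreach(i => addedStatuses += AddedFileStub(s"part-$i"))
}

class WriteFilesSketch(partitionColumns: Seq[String]) {
  var hasWritten = false

  def writeFiles(data: Seq[String]): Seq[AddedFileStub] = {
    hasWritten = true                    // 1. no metadata updates allowed from now on
    val normalized = data.map(_.trim)    // 2. normalizeData (stand-in)
    val partCols = partitionColumns      // 3. getPartitioningColumns (stand-in)
    val invariants = Seq.empty[String]   // 4. invariants from the schema (stand-in)
    val committer = new CommitterStub    // 5. getCommitter
    committer.write(normalized)          // 6. FileFormatWriter.write (stand-in)
    committer.addedStatuses.toSeq        // 7. return the committer's addedStatuses
  }
}
```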

writeFiles is used when…FIXME

Creating Committer — getCommitter Method

getCommitter(
  outputPath: Path): DelayedCommitProtocol

getCommitter creates a new DelayedCommitProtocol with the delta job ID and the given outputPath (and no random prefix).

getCommitter is used exclusively when TransactionalWrite is requested to write data out.
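A minimal sketch of what getCommitter does, with a hypothetical stand-in for DelayedCommitProtocol (the real class takes a job ID, an output path and an optional random-prefix length):

```scala
// Hypothetical stand-in for DelayedCommitProtocol
case class DelayedCommitProtocolStub(
    jobId: String,
    outputPath: String,
    randomPrefixLength: Option[Int])

class CommitterFactorySketch {
  // Mirrors getCommitter: the "delta" job ID, the given path, no random prefix
  def getCommitter(outputPath: String): DelayedCommitProtocolStub =
    DelayedCommitProtocolStub("delta", outputPath, randomPrefixLength = None)
}
```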

makeOutputNullable Method

makeOutputNullable(output: Seq[Attribute]): Seq[Attribute]

makeOutputNullable…FIXME

makeOutputNullable is used when…FIXME

normalizeData Method

normalizeData(
  data: Dataset[_],
  partitionCols: Seq[String]): (QueryExecution, Seq[Attribute])

normalizeData…FIXME

normalizeData is used when…FIXME

getPartitioningColumns Method

getPartitioningColumns(
  partitionSchema: StructType,
  output: Seq[Attribute],
  colsDropped: Boolean): Seq[Attribute]

getPartitioningColumns…FIXME

getPartitioningColumns is used when…FIXME

hasWritten Flag

hasWritten: Boolean = false

TransactionalWrite uses the hasWritten internal registry to prevent OptimisticTransactionImpl from updating metadata after having written out any files.

hasWritten is initially turned off (false). It can be turned on (true) when TransactionalWrite is requested to write files out.
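The guard the flag enables can be sketched as follows. This is a hypothetical stand-in: the real OptimisticTransactionImpl performs its own check and throws a Delta-specific exception, not the plain require shown here:

```scala
class MetadataGuardSketch {
  protected var hasWritten: Boolean = false

  def updateMetadata(newPartitionColumns: Seq[String]): Unit = {
    // Reject metadata updates once any files have been written out
    require(!hasWritten,
      "Cannot update the metadata in a transaction that has already written data")
  }

  def writeFiles(data: Seq[String]): Seq[String] = {
    hasWritten = true // turned on as the first step of writing files out
    data.map(r => s"part-$r")
  }
}
```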