As was aptly said: "Delta is a storage format while Spark is an execution engine...to separate storage from compute."
Delta Lake uses OptimisticTransaction for transactional writes. A commit is successful when the transaction can write the actions to a delta file (in the transaction log). If the delta file for the commit version already exists, the transaction is retried.
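A minimal sketch of that commit cycle using the internal API (class and method signatures follow recent Delta Lake sources and may differ across versions; the table path and file metadata are hypothetical):

```scala
import org.apache.spark.sql.delta.{DeltaLog, DeltaOperations}
import org.apache.spark.sql.delta.actions.AddFile

val deltaLog = DeltaLog.forTable(spark, "/tmp/delta/events")
val txn = deltaLog.startTransaction()

// An action to commit, e.g. a data file added by a write (hypothetical metadata)
val add = AddFile(
  path = "part-00000-c000.snappy.parquet",
  partitionValues = Map.empty,
  size = 1024L,
  modificationTime = System.currentTimeMillis(),
  dataChange = true)

// commit writes the actions as a new delta file in _delta_log;
// if that version was taken by a concurrent writer, the transaction
// checks for logical conflicts and retries with the next version
val version = txn.commit(Seq(add), DeltaOperations.ManualUpdate)
```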
Structured queries can write (transactionally) to a delta table using the following interfaces (see the sketch after this list):

* WriteIntoDelta command for batch queries (Spark SQL)
* DeltaSink for streaming queries (Spark Structured Streaming)
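For reference, this is how the two write paths are triggered from user code (`df` and `streamingDf` are assumed to be a batch and a streaming DataFrame, respectively; paths are hypothetical):

```scala
// Batch: DataFrameWriter with the delta format runs the WriteIntoDelta command
df.write.format("delta").mode("append").save("/tmp/delta/events")

// Streaming: DataStreamWriter with the delta format writes through DeltaSink
streamingDf.writeStream
  .format("delta")
  .option("checkpointLocation", "/tmp/delta/events/_checkpoints")
  .start("/tmp/delta/events")
```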
More importantly, multiple queries can write to the same delta table concurrently.
Delta Lake provides the DeltaTable API to programmatically access Delta tables.
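For example (the path, column name, and predicate are hypothetical):

```scala
import io.delta.tables.DeltaTable

// Load a delta table by path (DeltaTable.forName works for catalog tables)
val deltaTable = DeltaTable.forPath(spark, "/tmp/delta/events")

// Programmatic access: commit history and a conditional delete
deltaTable.history(10).show()
deltaTable.delete("eventDate < '2020-01-01'")
```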
Delta Lake supports batch and streaming queries (Spark SQL and Structured Streaming, respectively) using delta format.
To fine-tune queries over data in Delta Lake, use options.
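For example, versionAsOf (a read option for time travel) and mergeSchema (a write option for schema evolution); paths are hypothetical:

```scala
// Read an older version of the table (time travel)
val v0 = spark.read.format("delta")
  .option("versionAsOf", 0)
  .load("/tmp/delta/events")

// Allow the write to evolve the table schema
df.write.format("delta")
  .option("mergeSchema", "true")
  .mode("append")
  .save("/tmp/delta/events")
```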
Delta Lake supports reading and writing in batch queries.
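A minimal batch round trip (paths are hypothetical):

```scala
// Batch read
val events = spark.read.format("delta").load("/tmp/delta/events")

// Batch write
events.write.format("delta").mode("overwrite").save("/tmp/delta/events_copy")
```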
Delta Lake supports reading and writing in streaming queries.
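A minimal streaming round trip, with a delta table as both source and sink (paths are hypothetical):

```scala
// Streaming read (delta table as a source)
val stream = spark.readStream.format("delta").load("/tmp/delta/events")

// Streaming write (delta table as a sink; checkpointing is required)
stream.writeStream
  .format("delta")
  .option("checkpointLocation", "/tmp/delta/out/_checkpoints")
  .start("/tmp/delta/out")
```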
## Delta Tables in Logical Query Plans
Put simply, delta tables are HadoopFsRelation with TahoeFileIndex in logical query plans.
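This can be observed in the query execution of any delta-backed Dataset (path hypothetical):

```scala
val df = spark.read.format("delta").load("/tmp/delta/events")

// The optimized plan should show a HadoopFsRelation whose file index
// is a TahoeFileIndex (rather than a plain in-memory file listing)
println(df.queryExecution.optimizedPlan.numberedTreeString)
```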
## Concurrent Blind Append Transactions
Blind append transactions only add files to a table, without reading it first. They are marked in the commit info to distinguish them from read-modify-append transactions (deletes, merges, or updates), and they are assumed not to conflict with one another, which allows concurrent writes to the same delta table.
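The flag is part of the commit info, so it can be inspected in the table history (path hypothetical):

```scala
import io.delta.tables.DeltaTable

// isBlindAppend is recorded per commit in the table history
DeltaTable.forPath(spark, "/tmp/delta/events")
  .history()
  .select("version", "operation", "isBlindAppend")
  .show()
```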
Delta Lake supports Generated Columns.
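A sketch using the DeltaTableBuilder API (table and column names are hypothetical):

```scala
import io.delta.tables.DeltaTable

DeltaTable.create(spark)
  .tableName("events")
  .addColumn("eventTime", "TIMESTAMP")
  .addColumn(
    DeltaTable.columnBuilder("eventDate")
      .dataType("DATE")
      .generatedAlwaysAs("CAST(eventTime AS DATE)") // the generation expression
      .build())
  .execute()
```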
Delta Lake introduces table constraints to ensure data quality and integrity (during writes).
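CHECK constraints are managed with SQL; a sketch (table, constraint, and column names are hypothetical):

```scala
// Add a CHECK constraint; subsequent writes violating it will fail
spark.sql("ALTER TABLE events ADD CONSTRAINT validDate CHECK (eventDate >= '2020-01-01')")

// Remove it again
spark.sql("ALTER TABLE events DROP CONSTRAINT validDate")
```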