
Tungsten Execution Backend (Project Tungsten)

The goal of Project Tungsten is to improve Spark execution by optimizing Spark jobs for CPU and memory efficiency (as opposed to network and disk I/O, which are considered fast enough).

Tungsten focuses on the hardware architecture of the platform Spark runs on (including but not limited to JVM, LLVM, GPU, NVRAM) by offering the following optimization features:

  1. Off-Heap Memory Management using binary in-memory data representation aka Tungsten row format and managing memory explicitly

  2. Cache Locality which is about cache-aware computations with cache-aware layout for high cache hit rates

  3. Whole-Stage Code Generation (aka CodeGen)

Project Tungsten uses the sun.misc.Unsafe API for direct memory access, bypassing the JVM object model and thereby avoiding garbage collection.
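As an illustration of the kind of direct memory access involved, the following is a minimal sketch (the object and method names are made up for this example, not Spark APIs). It obtains the sun.misc.Unsafe singleton via reflection, since its constructor is private, and writes a value to manually managed off-heap memory:

```scala
import sun.misc.Unsafe

// Illustrative sketch only: direct off-heap memory access via sun.misc.Unsafe.
object UnsafeDemo {
  // The Unsafe constructor is private; grab the singleton via reflection.
  private val unsafe: Unsafe = {
    val field = classOf[Unsafe].getDeclaredField("theUnsafe")
    field.setAccessible(true)
    field.get(null).asInstanceOf[Unsafe]
  }

  def run(): Long = {
    val address = unsafe.allocateMemory(8) // off-heap: invisible to the GC
    try {
      unsafe.putLong(address, 42L)         // write a raw 8-byte value
      unsafe.getLong(address)              // read it back from the same address
    } finally {
      unsafe.freeMemory(address)           // no GC here -- memory is freed manually
    }
  }

  def main(args: Array[String]): Unit = println(run())
}
```

Note that memory allocated this way must be freed explicitly; the garbage collector neither tracks nor reclaims it.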

Figure: RDD vs DataFrame Size in Memory (web UI)

Off-Heap Memory Management

Project Tungsten aims at substantially reducing the usage of JVM objects (and therefore JVM garbage collection) by introducing its own off-heap binary memory management. Instead of working with Java objects, Tungsten uses sun.misc.Unsafe to manipulate raw memory.

Tungsten uses a compact binary storage format called UnsafeRow for data representation, which further reduces the memory footprint.

Since Datasets have a known schema, Tungsten can lay out the objects on its own in a more compact and efficient way. That brings benefits similar to using extensions written in low-level, hardware-aware languages like C or assembler.

Operations can also work directly on the already-serialized data, which further reduces or completely avoids serialization between the JVM object representation and Spark's internal binary format.
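To make the idea of a schema-aware binary row concrete, here is a simplified, toy sketch of an UnsafeRow-like layout (the class name and API are invented for illustration): one 8-byte null-bitmap word followed by one 8-byte slot per field. The real UnsafeRow additionally has a variable-length region for strings and other non-fixed-width data; this sketch handles fixed-width fields only.

```scala
import java.nio.{ByteBuffer, ByteOrder}

// Toy sketch of an UnsafeRow-like binary row: 8-byte null bitmap word,
// then one 8-byte slot per field. Fixed-width fields only.
final class BinaryRow(numFields: Int) {
  private val buf = ByteBuffer
    .allocate(8 + 8 * numFields)            // bitmap word + fixed-width slots
    .order(ByteOrder.LITTLE_ENDIAN)

  private def slot(i: Int): Int = 8 + 8 * i // byte offset of field i

  def setNull(i: Int): Unit =
    buf.putLong(0, buf.getLong(0) | (1L << i))

  def isNullAt(i: Int): Boolean =
    (buf.getLong(0) & (1L << i)) != 0

  def setLong(i: Int, value: Long): Unit = buf.putLong(slot(i), value)
  def getLong(i: Int): Long = buf.getLong(slot(i))

  def sizeInBytes: Int = buf.capacity       // the whole row is one contiguous buffer
}
```

Because the whole row lives in one contiguous buffer, it can be copied, hashed, compared, or spilled to disk as raw bytes without materializing any per-field Java objects.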

Cache Locality

Tungsten uses algorithms and cache-aware data structures that exploit the physical machine caches at different levels - L1, L2, L3.
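One cache-aware technique of this kind, used in Tungsten's sort, is to keep an 8-byte prefix of each sort key next to the record reference in one flat array, so that most comparisons read sequential memory instead of chasing pointers to full records. The following is a sketch of that idea under simplifying assumptions (ASCII keys, a made-up PrefixSort object, and a plain array index standing in for a record pointer):

```scala
// Sketch of a prefix-based, cache-friendly sort: compare inline 8-byte
// key prefixes first; fall back to the full key only on prefix ties.
object PrefixSort {
  // Pack the first 8 characters of an ASCII string into an
  // unsigned-comparable Long (illustrative helper, not a Spark API).
  def stringPrefix(s: String): Long = {
    var p = 0L
    var i = 0
    while (i < 8) {
      val b = if (i < s.length) s.charAt(i).toLong & 0xffL else 0L
      p = (p << 8) | b
      i += 1
    }
    p
  }

  def sortByPrefix(records: Array[String]): Array[String] = {
    // (prefix, index) pairs: the only data most comparisons need to touch
    val entries = records.indices.map(i => (stringPrefix(records(i)), i)).toArray
    val sorted = entries.sortWith { case ((p1, i1), (p2, i2)) =>
      if (p1 != p2) java.lang.Long.compareUnsigned(p1, p2) < 0
      else records(i1) < records(i2) // full-key comparison only on ties
    }
    sorted.map { case (_, i) => records(i) }
  }
}
```

Since the prefix array is small and contiguous, it largely fits in L1/L2 cache, which is exactly the kind of cache-hit-rate win the section above describes.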

Whole-Stage Java Code Generation

Tungsten generates Java source code at query compilation time and compiles it to JVM bytecode that accesses Tungsten-managed memory structures directly, which gives very fast access. It uses the Janino compiler, a super-small, super-fast Java compiler.
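To illustrate what whole-stage code generation produces, here is a toy sketch: instead of interpreting a query plan operator by operator, the idea is to emit a single fused Java loop as source text, which Janino then compiles to bytecode. The class and method names in the generated string are made up for illustration and do not match Spark's actual generated code.

```scala
// Toy sketch: emitting fused filter+project Java source, as a stand-in for
// what whole-stage code generation hands to the Janino compiler.
object CodegenSketch {
  def generate(filterPredicate: String, projectExpr: String): String =
    s"""public class GeneratedIterator {
       |  public int processBatch(long[] input, long[] output) {
       |    int out = 0;
       |    for (int i = 0; i < input.length; i++) {
       |      long value = input[i];
       |      if (!($filterPredicate)) continue; // filter, inlined into the loop
       |      output[out++] = $projectExpr;      // project, inlined into the loop
       |    }
       |    return out;
       |  }
       |}""".stripMargin
}
```

For example, generate("value > 10", "value * 2") yields one tight loop with the filter and projection inlined, so at runtime there are no virtual calls or row-object allocations between operators.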

Note

The code generation was tracked under SPARK-8159 Improve expression function coverage (Spark 1.5).

Further Reading and Watching