NettyBlockTransferService takes the following to be created:

- SparkConf
- SecurityManager
- Bind address
- Host name
- Port
- Number of CPU cores
- Driver RpcEndpointRef
NettyBlockTransferService is created when:

- SparkEnv utility is used to create a SparkEnv (for the driver and executors) and creates a BlockManager
init(blockDataManager: BlockDataManager): Unit
init is part of the BlockTransferService abstraction.
init creates a NettyBlockRpcServer (with the given BlockDataManager) and a TransportContext, and uses them to create the TransportClientFactory and the TransportServer.

In the end, init prints out the following INFO message to the logs:
Server created on [hostName]:[port]
fetchBlocks(host: String, port: Int, execId: String, blockIds: Array[String], listener: BlockFetchingListener, tempFileManager: DownloadFileManager): Unit
fetchBlocks prints out the following TRACE message to the logs:
Fetch blocks from [host]:[port] (executor id [execId])
fetchBlocks creates a BlockTransferStarter and requests it to createAndStart (with the blockIds and the given BlockFetchingListener), possibly wrapped in a retrying fetcher when the maximum number of IO retries (spark.shuffle.io.maxRetries configuration property) is positive.
In case of any Exception, fetchBlocks prints out the following ERROR message to the logs and notifies the given BlockFetchingListener (onBlockFetchFailure for every block ID):
Exception while beginning fetchBlocks
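That failure notification can be sketched as follows (a simplified model, not Spark's actual classes; the real BlockFetchingListener has more callbacks than the one shown):

```scala
// Simplified sketch: on an error while starting a fetch, every requested
// block ID is reported as failed to the listener.
trait BlockFetchingListener {
  def onBlockFetchFailure(blockId: String, cause: Throwable): Unit
}

def notifyFetchFailure(
    blockIds: Array[String],
    listener: BlockFetchingListener,
    error: Throwable): Unit =
  blockIds.foreach(listener.onBlockFetchFailure(_, error))
```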
fetchBlocks is part of the BlockStoreClient abstraction.
In case of an IOException, createAndStart requests the driver RpcEndpointRef to send an IsExecutorAlive message synchronously (with the given execId).
If the driver RpcEndpointRef replies false (the remote executor is dead), createAndStart throws an ExecutorDeadException:
The relative remote executor(Id: [execId]), which maintains the block data to fetch is dead.
Otherwise, createAndStart (re)throws the original IOException.
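The liveness check can be sketched as follows (a simplified model; askIsExecutorAlive stands in for the synchronous IsExecutorAlive round trip to the driver, and startFetch for creating the TransportClient and starting the fetch):

```scala
import java.io.IOException

// Sketch only (not Spark's actual code): an IOException while starting the
// fetch triggers a liveness check with the driver; a dead executor turns
// the error into ExecutorDeadException, otherwise the original IOException
// is rethrown.
final class ExecutorDeadException(message: String) extends Exception(message)

def createAndStart(
    execId: String,
    askIsExecutorAlive: String => Boolean)(startFetch: => Unit): Unit =
  try startFetch
  catch {
    case e: IOException =>
      if (!askIsExecutorAlive(execId))
        throw new ExecutorDeadException(
          s"The relative remote executor(Id: $execId), " +
            "which maintains the block data to fetch is dead.")
      else throw e
  }
```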
uploadBlock(hostname: String, port: Int, execId: String, blockId: BlockId, blockData: ManagedBuffer, level: StorageLevel, classTag: ClassTag[_]): Future[Unit]
uploadBlock is part of the BlockTransferService abstraction.
uploadBlock creates a TransportClient (with the given hostname and port).
uploadBlock serializes the given StorageLevel and ClassTag (using a JavaSerializer) as the block metadata.
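Bundling two Java-serializable values into one metadata blob can be sketched with plain Java object serialization (illustrative only; this is not Spark's JavaSerializer API, and the helper names are hypothetical):

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// Sketch: serialize a sequence of values into a single byte array, in the
// spirit of uploadBlock packing the StorageLevel and ClassTag together.
def serializeMetadata(values: Seq[AnyRef]): Array[Byte] = {
  val bytes = new ByteArrayOutputStream()
  val out = new ObjectOutputStream(bytes)
  values.foreach(out.writeObject)
  out.close()
  bytes.toByteArray
}

// Read the same number of values back from the blob.
def deserializeMetadata(blob: Array[Byte], count: Int): Seq[AnyRef] = {
  val in = new ObjectInputStream(new ByteArrayInputStream(blob))
  try Seq.fill(count)(in.readObject()) finally in.close()
}
```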
uploadBlock uses a stream to transfer blocks when one of the following holds:

- The size of the block data (ManagedBuffer) is above the spark.network.maxRemoteBlockSizeFetchToMem configuration property
- The given BlockId is a shuffle block
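The decision above amounts to a small predicate. In this sketch the parameter names are illustrative rather than Spark's API: blockSize stands for the ManagedBuffer size, maxRemoteBlockSizeFetchToMem for the configuration property, and isShuffleBlock for the BlockId check:

```scala
// Sketch of the upload-as-stream decision: stream when the block is too
// large to hold in memory remotely, or when it is a shuffle block.
def uploadAsStream(
    blockSize: Long,
    maxRemoteBlockSizeFetchToMem: Long,
    isShuffleBlock: Boolean): Boolean =
  blockSize > maxRemoteBlockSizeFetchToMem || isShuffleBlock
```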
For a stream transfer, uploadBlock requests the TransportClient to upload the block data as a stream. Otherwise, uploadBlock requests the TransportClient to send an UploadBlock message (with the serialized metadata and the block bytes).
UploadBlock message is processed by NettyBlockRpcServer.
When the upload succeeds,
uploadBlock prints out the following TRACE message to the logs:
Successfully uploaded block [blockId] [as stream]
When the upload fails,
uploadBlock prints out the following ERROR message to the logs:
Error while uploading block [blockId] [as stream]
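Both messages share the optional "as stream" suffix (the bracketed part above). How the outcome could be rendered can be sketched as follows (illustrative helper, not Spark's code):

```scala
// Sketch: render uploadBlock's TRACE/ERROR log message for a given outcome.
// The " as stream" suffix only appears for stream transfers.
def uploadLogMessage(blockId: String, asStream: Boolean, succeeded: Boolean): String = {
  val suffix = if (asStream) " as stream" else ""
  if (succeeded) s"Successfully uploaded block $blockId$suffix"
  else s"Error while uploading block $blockId$suffix"
}
```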
Enable ALL logging level for org.apache.spark.network.netty.NettyBlockTransferService logger to see what happens inside.

Add the following line to conf/log4j.properties:

log4j.logger.org.apache.spark.network.netty.NettyBlockTransferService=ALL

Refer to Logging.