Standalone Worker (aka standalone slave) is a logical node in a Spark Standalone cluster.
You can have one or many standalone workers in a standalone cluster. They can be started and stopped using the management scripts in `sbin`.
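As a usage sketch, a worker can be started and stopped with the `sbin` launch scripts that ship with Spark (`start-slave.sh`/`stop-slave.sh`; the master URL below is an assumption, adjust it to your cluster):

```shell
# Assumes SPARK_HOME points at your Spark installation and a master
# is listening at spark://localhost:7077 (both are assumptions).
$SPARK_HOME/sbin/start-slave.sh spark://localhost:7077

# Stops the worker started by start-slave.sh on this machine.
$SPARK_HOME/sbin/stop-slave.sh
```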
Worker is created when startRpcEnvAndEndpoint is executed (which happens when main launches a standalone worker).
Working directory of the executors that the Worker launches (defaults to the work subdirectory of the Spark installation directory).
receive: PartialFunction[Any, Unit]
handleRegisterResponse(msg: RegisterWorkerResponse): Unit
main(argStrings: Array[String]): Unit
startRpcEnvAndEndpoint(
  host: String,
  port: Int,
  webUiPort: Int,
  cores: Int,
  memory: Int,
  masterUrls: Array[String],
  workDir: String,
  workerNumber: Option[Int] = None,
  conf: SparkConf = new SparkConf): RpcEnv
startRpcEnvAndEndpoint creates a RpcEnv for the input host and port (with the system name sparkWorker, suffixed with the optional workerNumber).
startRpcEnvAndEndpoint creates a Worker RPC endpoint (for the RPC environment and the input webUiPort, cores, memory, master URLs, and workDir) and registers it with the RpcEnv under the name Worker.
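The two steps above can be sketched as follows (a simplified, hedged sketch: the exact constructor argument list of Worker is abbreviated here, but RpcEnv.create, SecurityManager, and setupEndpoint are the actual Spark internals APIs):

```scala
// Sketch of startRpcEnvAndEndpoint, using the parameters from the
// signature above. Names not shown in this document (e.g. the exact
// Worker constructor arguments) are assumptions.
val systemName = "sparkWorker" + workerNumber.map(_.toString).getOrElse("")
val securityMgr = new SecurityManager(conf)

// Step 1: create the RpcEnv for the input host and port
val rpcEnv = RpcEnv.create(systemName, host, port, conf, securityMgr)

// Step 2: create the Worker RPC endpoint and register it as "Worker"
rpcEnv.setupEndpoint("Worker",
  new Worker(rpcEnv, webUiPort, cores, memory,
    masterUrls.map(RpcAddress.fromSparkURL), "Worker",
    workDir, conf, securityMgr))

rpcEnv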
Worker takes the following when created:
Worker initializes the internal registries and counters.
In the end, Worker is started as an RPC endpoint; when the RPC environment starts it, its onStart lifecycle hook creates the working directory, starts the web UI, and begins registering with the masters.
createWorkDir creates the workDir directory (including any necessary but nonexistent parent directories). If the directory cannot be created, Worker logs an error and exits.
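A minimal sketch of that behavior, assuming workDir is backed by a java.io.File (the SPARK_HOME fallback to the current directory is an assumption for illustration):

```scala
import java.io.File

// mkdirs() creates the directory along with any missing parent
// directories, which is the "including any necessary but nonexistent
// parent directories" behavior described above.
val workDir = new File(sys.env.getOrElse("SPARK_HOME", "."), "work")
if (!workDir.exists() && !workDir.mkdirs()) {
  // Mirrors the failure path: log/report the error and give up.
  sys.error(s"Failed to create work directory $workDir")
}
```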