When a FetchFailedException is reported, TaskRunner catches it and notifies the ExecutorBackend (with the TaskState.FAILED task state).
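The catch-and-report pattern can be sketched with stand-in classes (these are simplified, hypothetical stand-ins for Spark's TaskRunner, ExecutorBackend and FetchFailedException, not the actual Spark classes):

```scala
// Toy model of how a task runner catches a fetch failure and
// reports the FAILED task state to its backend instead of crashing.

object TaskState extends Enumeration {
  val RUNNING, FAILED, FINISHED = Value
}

// Hypothetical stand-in for Spark's FetchFailedException
class FetchFailedException(message: String, cause: Throwable = null)
  extends Exception(message, cause)

// Hypothetical stand-in for ExecutorBackend: records the last status update
class RecordingBackend {
  var lastState: Option[TaskState.Value] = None
  def statusUpdate(taskId: Long, state: TaskState.Value): Unit =
    lastState = Some(state)
}

class TaskRunner(backend: RecordingBackend, taskId: Long) {
  def run(task: () => Unit): Unit =
    try {
      task()
      backend.statusUpdate(taskId, TaskState.FINISHED)
    } catch {
      // A fetch failure is not rethrown; it is reported as a FAILED task
      case _: FetchFailedException =>
        backend.statusUpdate(taskId, TaskState.FAILED)
    }
}
```

A task that throws the exception leaves the backend with a FAILED status update rather than an unhandled error:

```scala
val backend = new RecordingBackend
new TaskRunner(backend, 1L).run(() => throw new FetchFailedException("shuffle fetch failed"))
// backend.lastState is Some(TaskState.FAILED)
```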
FetchFailedException takes the following to be created:

- BlockManagerId (of the BlockManager the shuffle block could not be fetched from)
- Shuffle ID
- Map ID
- Reduce ID
- Error message
- Error cause (optional)

FetchFailedException is created when:

- ShuffleBlockFetcherIterator is requested to throw a FetchFailedException (for a shuffle block that could not be fetched or was corrupted)
- An OutOfMemoryError could be thrown (aka OOMed) or some other unhandled exception could crash the executor
- The cluster manager that manages the workers with the executors of your Spark application (e.g. Kubernetes, Hadoop YARN) enforces the container memory limits and eventually decides to kill the executor due to excessive memory usage
The usual solution is to tune the memory settings of your Spark application (e.g. spark.executor.memory and spark.executor.memoryOverhead).
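As a minimal sketch, such tuning could look like the following (the values are hypothetical placeholders, not recommendations; choose them based on your workload and the container limits your cluster manager enforces):

```scala
import org.apache.spark.SparkConf

// Hypothetical example values: raise executor memory and the off-heap
// overhead so the container limit is not exceeded under memory pressure.
val conf = new SparkConf()
  .set("spark.executor.memory", "8g")
  .set("spark.executor.memoryOverhead", "2g")
```

The same settings can be passed as `--conf` options to `spark-submit` instead of being set programmatically.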