

= HadoopWriteConfigUtil

HadoopWriteConfigUtil[K, V] is an abstraction of writer configurers for writing out key-value RDDs to Hadoop-supported storage.

HadoopWriteConfigUtil is used by the SparkHadoopWriter utility when requested to write the partitions of a key-value RDD (for the rdd:PairRDDFunctions.md#saveAsNewAPIHadoopDataset[saveAsNewAPIHadoopDataset] and rdd:PairRDDFunctions.md#saveAsHadoopDataset[saveAsHadoopDataset] actions).
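For context, a sketch of how such a write is typically triggered from user code. The application name, output path, and sample data below are illustrative assumptions, not part of the contract; writing with the new Hadoop MapReduce API as shown here ends up going through a HadoopWriteConfigUtil under the covers.

[source, scala]
----
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.Text
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.output.{FileOutputFormat, TextOutputFormat}
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("hadoop-dataset-demo").setMaster("local[*]"))

// Job is a convenient builder for the underlying Hadoop Configuration
val job = Job.getInstance(new Configuration())
job.setOutputKeyClass(classOf[Text])
job.setOutputValueClass(classOf[Text])
job.setOutputFormatClass(classOf[TextOutputFormat[Text, Text]])
FileOutputFormat.setOutputPath(job, new Path("/tmp/demo-output"))

// saveAsNewAPIHadoopDataset runs a Spark job that writes every partition
// out through the configured OutputFormat
sc.parallelize(Seq("a" -> "1", "b" -> "2"))
  .map { case (k, v) => (new Text(k), new Text(v)) }
  .saveAsNewAPIHadoopDataset(job.getConfiguration)
----

Note that this requires a Spark and Hadoop runtime on the classpath; it is a sketch of the calling side, not of HadoopWriteConfigUtil itself.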

[[contract]] .HadoopWriteConfigUtil Contract [cols="30m,70",options="header",width="100%"] |=== | Method | Description

| assertConf a| [[assertConf]]

[source, scala]
----
assertConf(
  jobContext: JobContext,
  conf: SparkConf): Unit
----


| closeWriter a| [[closeWriter]]

[source, scala]
----
closeWriter(
  taskContext: TaskAttemptContext): Unit
----


| createCommitter a| [[createCommitter]]

[source, scala]
----
createCommitter(
  jobId: Int): HadoopMapReduceCommitProtocol
----


| createJobContext a| [[createJobContext]]

[source, scala]
----
createJobContext(
  jobTrackerId: String,
  jobId: Int): JobContext
----


| createTaskAttemptContext a| [[createTaskAttemptContext]]

[source, scala]
----
createTaskAttemptContext(
  jobTrackerId: String,
  jobId: Int,
  splitId: Int,
  taskAttemptId: Int): TaskAttemptContext
----


Creates a Hadoop https://hadoop.apache.org/docs/r2.7.3/api/org/apache/hadoop/mapreduce/TaskAttemptContext.html[TaskAttemptContext]

| initOutputFormat a| [[initOutputFormat]]

[source, scala]
----
initOutputFormat(
  jobContext: JobContext): Unit
----


| initWriter a| [[initWriter]]

[source, scala]
----
initWriter(
  taskContext: TaskAttemptContext,
  splitId: Int): Unit
----


| write a| [[write]]

[source, scala]
----
write(
  pair: (K, V)): Unit
----


Writes out the key-value pair

Used when SparkHadoopWriter is requested to execute a write task (while writing out a key-value RDD)

|===
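The contract above is driven by SparkHadoopWriter in a fixed order: a job context is created and the output format initialized once per job, then for every partition a task-attempt context is created, a writer opened, every pair written, and the writer closed. The following self-contained Scala sketch mirrors that call order; all types and the `InMemoryConfigUtil`/`writeDataset` names are simplified stand-ins invented for illustration, not Spark's actual classes (which use Hadoop's JobContext/TaskAttemptContext and handle committers, failures, and commit protocols).

[source, scala]
----
// Stand-in context types (Spark uses Hadoop's JobContext and TaskAttemptContext)
case class JobContext(jobTrackerId: String, jobId: Int)
case class TaskAttemptContext(
  jobTrackerId: String, jobId: Int, splitId: Int, taskAttemptId: Int)

// A simplified mirror of the HadoopWriteConfigUtil contract
trait WriteConfigUtil[K, V] {
  def createJobContext(jobTrackerId: String, jobId: Int): JobContext
  def initOutputFormat(jobContext: JobContext): Unit
  def createTaskAttemptContext(
    jobTrackerId: String, jobId: Int,
    splitId: Int, taskAttemptId: Int): TaskAttemptContext
  def initWriter(taskContext: TaskAttemptContext, splitId: Int): Unit
  def write(pair: (K, V)): Unit
  def closeWriter(taskContext: TaskAttemptContext): Unit
}

// Toy implementation that collects written pairs into a buffer
class InMemoryConfigUtil[K, V] extends WriteConfigUtil[K, V] {
  val written = scala.collection.mutable.ArrayBuffer.empty[(K, V)]
  def createJobContext(jobTrackerId: String, jobId: Int) =
    JobContext(jobTrackerId, jobId)
  def initOutputFormat(jobContext: JobContext): Unit = ()
  def createTaskAttemptContext(
      jobTrackerId: String, jobId: Int, splitId: Int, taskAttemptId: Int) =
    TaskAttemptContext(jobTrackerId, jobId, splitId, taskAttemptId)
  def initWriter(taskContext: TaskAttemptContext, splitId: Int): Unit = ()
  def write(pair: (K, V)): Unit = written += pair
  def closeWriter(taskContext: TaskAttemptContext): Unit = ()
}

// The call order the writer follows, heavily simplified: job-level setup once,
// then per-partition open / write-all-pairs / close
def writeDataset[K, V](
    config: WriteConfigUtil[K, V], partitions: Seq[Seq[(K, V)]]): Unit = {
  val jobContext = config.createJobContext("jobtracker", 0)
  config.initOutputFormat(jobContext)
  partitions.zipWithIndex.foreach { case (partition, splitId) =>
    val taskContext = config.createTaskAttemptContext("jobtracker", 0, splitId, 0)
    config.initWriter(taskContext, splitId)
    partition.foreach(config.write)
    config.closeWriter(taskContext)
  }
}
----

Running `writeDataset(new InMemoryConfigUtil[String, Int], Seq(Seq("a" -> 1), Seq("b" -> 2)))` exercises one "task" per partition, which is the shape of the real protocol.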

[[implementations]] .HadoopWriteConfigUtils [cols="30,70",options="header",width="100%"] |=== | HadoopWriteConfigUtil | Description

| HadoopMapReduceWriteConfigUtil | [[HadoopMapReduceWriteConfigUtil]] Writer configurer for the new Hadoop MapReduce API (org.apache.hadoop.mapreduce), used for saveAsNewAPIHadoopDataset

| HadoopMapRedWriteConfigUtil | [[HadoopMapRedWriteConfigUtil]] Writer configurer for the old Hadoop MapReduce API (org.apache.hadoop.mapred), used for saveAsHadoopDataset

|===


Last update: 2020-10-06