saveAsHiveFile sets Hadoop configuration properties when a compressed file output format is used (based on the hive.exec.compress.output configuration property).
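The compression branch can be sketched as follows. This is an illustrative sketch only: a mutable `Map` stands in for Hadoop's `Configuration`, and the hypothetical `configureCompression` helper switches on Hadoop's standard MapReduce output-compression properties when `hive.exec.compress.output` is enabled.

```scala
import scala.collection.mutable

// Sketch only: a mutable Map stands in for Hadoop's Configuration.
// When hive.exec.compress.output is enabled, the standard MapReduce
// output-compression properties are turned on.
def configureCompression(hadoopConf: mutable.Map[String, String]): Unit = {
  val compressed =
    hadoopConf.getOrElse("hive.exec.compress.output", "false").toBoolean
  if (compressed) {
    hadoopConf("mapreduce.output.fileoutputformat.compress") = "true"
    hadoopConf("mapreduce.output.fileoutputformat.compress.type") = "BLOCK"
  }
}
```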
saveAsHiveFile uses the FileCommitProtocol utility to instantiate a committer for the input outputLocation based on the spark.sql.sources.commitProtocolClass configuration property.
saveAsHiveFile then uses the FileFormatWriter utility to write out the result of executing the input physical operator (with a HiveFileFormat for the input FileSinkDesc, the new FileCommitProtocol committer, and the input arguments).
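The committer's role in the write path can be sketched with a simplified, hypothetical stand-in for the FileCommitProtocol contract: the driver sets up the job, tasks write to temporary files, and a successful job commit promotes the staged files to final output. The trait, class, and method signatures below are deliberately simplified and are not Spark's real API.

```scala
// Hypothetical, simplified stand-in for Spark's FileCommitProtocol lifecycle.
trait Committer {
  def setupJob(): Unit
  def newTaskTempFile(taskId: Int): String
  def commitJob(): Seq[String]
}

class InMemoryCommitter(outputLocation: String) extends Committer {
  private val staged = scala.collection.mutable.ArrayBuffer[String]()

  def setupJob(): Unit = staged.clear()

  // Each task writes to a staging location, not the final output path.
  def newTaskTempFile(taskId: Int): String = {
    val path = s"$outputLocation/_temporary/task-$taskId"
    staged += path
    path
  }

  // On job commit, staged task files become the final output files.
  def commitJob(): Seq[String] =
    staged.map(_.replace("/_temporary/task-", "/part-")).toSeq
}
```

The point of the indirection is that a failed or speculative task leaves only staged files behind; output becomes visible atomically at commitJob.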
getExternalTmpPath first determines the Hive version in use: it requests the input ../SparkSession.md[SparkSession] for the ../SharedState.md#externalCatalog[ExternalCatalog] (which is expected to be a HiveExternalCatalog), asks it for the underlying HiveClient, and requests the client's HiveClient.md#version[Hive version].
getExternalTmpPath divides the supported Hive versions into the old versions that use the index.md#hive.exec.scratchdir[hive.exec.scratchdir] directory (0.12.0 to 1.0.0) and the new versions that use the index.md#hive.exec.stagingdir[hive.exec.stagingdir] directory (1.1.0 to 2.3.3).
getExternalTmpPath uses oldVersionExternalTempPath for the old Hive versions and newVersionExternalTempPath for the new Hive versions.
getExternalTmpPath throws an IllegalStateException for an unsupported Hive version:
Unsupported hive version: [hiveVersion]
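The version split and the unsupported-version error can be sketched as follows. The helper name tempDirProperty is made up for illustration, and the version strings are shortened forms of the ranges quoted above.

```scala
// Sketch (hypothetical helper): pick the temporary-directory property
// for a given Hive version, per the old/new split described above.
def tempDirProperty(hiveVersion: String): String = {
  // 0.12.0 to 1.0.0: temporary files go under hive.exec.scratchdir
  val oldVersions = Set("0.12", "0.13", "0.14", "1.0")
  // 1.1.0 to 2.3.3: temporary files go under hive.exec.stagingdir
  val newVersions = Set("1.1", "1.2", "2.0", "2.1", "2.2", "2.3")
  if (oldVersions(hiveVersion)) "hive.exec.scratchdir"
  else if (newVersions(hiveVersion)) "hive.exec.stagingdir"
  else throw new IllegalStateException(s"Unsupported hive version: $hiveVersion")
}
```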
NOTE: getExternalTmpPath is used when InsertIntoHiveDirCommand.md[InsertIntoHiveDirCommand] and InsertIntoHiveTable.md[InsertIntoHiveTable] logical commands are executed.