

Creating Instance

Log takes the following to be created:

  • Directory
  • LogConfig
  • LogSegments
  • logStartOffset
  • recoveryPoint
  • LogOffsetMetadata
  • Scheduler
  • BrokerTopicStats
  • Time
  • producerIdExpirationCheckIntervalMs
  • TopicPartition
  • Optional LeaderEpochFileCache
  • ProducerStateManager
  • LogDirFailureChannel
  • Optional Topic ID
  • keepPartitionMetadataFile

Log is created using the apply utility.

Creating Log

  apply(
    dir: File,
    config: LogConfig,
    logStartOffset: Long,
    recoveryPoint: Long,
    scheduler: Scheduler,
    brokerTopicStats: BrokerTopicStats,
    time: Time = Time.SYSTEM,
    maxProducerIdExpirationMs: Int,
    producerIdExpirationCheckIntervalMs: Int,
    logDirFailureChannel: LogDirFailureChannel,
    lastShutdownClean: Boolean = true,
    topicId: Option[Uuid],
    keepPartitionMetadataFile: Boolean): Log


apply is used when:

  • LogManager is requested to loadLog and getOrCreateLog
  • KafkaMetadataLog is requested to apply
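As a rough, self-contained sketch (not Kafka's actual code: Log is reduced to a stub and the many apply parameters are elided), the getOrCreateLog pattern above amounts to caching one Log per partition, created through an apply-style factory:

```scala
import scala.collection.mutable

final case class TopicPartition(topic: String, partition: Int)

// Stub standing in for kafka.log.Log; the real Log.apply takes many more
// arguments (LogConfig, Scheduler, BrokerTopicStats, ...).
final case class Log(dir: String, logStartOffset: Long)

object Log {
  // apply-style factory mirroring how Log.apply builds an instance
  def apply(dir: String): Log = Log(dir, logStartOffset = 0L)
}

// LogManager-like cache: loadLog/getOrCreateLog create a Log only once
// per partition and reuse it afterwards.
class LogManagerSketch {
  private val logs = mutable.Map.empty[TopicPartition, Log]

  def getOrCreateLog(tp: TopicPartition): Log =
    logs.getOrElseUpdate(tp, Log(s"/tmp/kafka-logs/${tp.topic}-${tp.partition}"))
}
```

The directory naming here (topic-partition under a log dir) follows Kafka's on-disk layout, but the cache itself is only a model of the create-once semantics.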

Reading Messages

  read(
    startOffset: Long,
    maxLength: Int,
    isolation: FetchIsolation,
    minOneMessage: Boolean): FetchDataInfo

read prints out the following TRACE message to the logs:

  Reading maximum [maxLength] bytes at offset [startOffset] from log with total length [size] bytes


read finds the LogSegment that contains the given startOffset and requests it to read messages (up to maxLength bytes).
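A simplified, self-contained model of that delegation (not Kafka's implementation; Record and Segment are toy types, and the byte-based maxLength is replaced with a record count): read locates the segment with the greatest base offset at or below startOffset and hands the read over to it.

```scala
import scala.collection.immutable.TreeMap

final case class Record(offset: Long, payload: String)

// A segment knows its base offset and holds records sorted by offset.
final case class Segment(baseOffset: Long, records: Vector[Record]) {
  // Return at most maxCount records at or after startOffset.
  def read(startOffset: Long, maxCount: Int): Vector[Record] =
    records.dropWhile(_.offset < startOffset).take(maxCount)
}

// Log.read-style lookup: pick the segment whose baseOffset is the greatest
// one <= startOffset, then delegate the actual read to that segment.
final class LogSketch(segments: TreeMap[Long, Segment]) {
  def read(startOffset: Long, maxCount: Int): Vector[Record] =
    segments.rangeTo(startOffset).lastOption match {
      case Some((_, segment)) => segment.read(startOffset, maxCount)
      case None               => Vector.empty
    }
}
```

Keying segments by base offset in a sorted map is what makes the "find the segment for startOffset" step a cheap floor lookup.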


read is used when:

  • Partition is requested to readRecords
  • GroupMetadataManager is requested to doLoadGroupsAndOffsets
  • TransactionStateManager is requested to loadTransactionMetadata
  • Log is requested to convertToOffsetMetadataOrThrow
  • KafkaMetadataLog is requested to read


Enable ALL logging level for the kafka.log.Log logger to see what happens inside.

Add the following line to the broker's log4j configuration (e.g., config/log4j.properties):

  log4j.logger.kafka.log.Log=ALL

Refer to Logging.
