# Log

## Creating Instance
Log takes the following to be created:

- Directory (File)
- LogConfig
- LogSegments
- logStartOffset
- recoveryPoint
- Next Offset Metadata (LogOffsetMetadata)
- Scheduler
- BrokerTopicStats
- Time
- producerIdExpirationCheckIntervalMs
- TopicPartition
- Optional LeaderEpochFileCache
- ProducerStateManager
- LogDirFailureChannel
- Optional Topic ID (Uuid)
- keepPartitionMetadataFile flag
Log is created using the apply utility.
## Creating Log
```scala
apply(
  dir: File,
  config: LogConfig,
  logStartOffset: Long,
  recoveryPoint: Long,
  scheduler: Scheduler,
  brokerTopicStats: BrokerTopicStats,
  time: Time = Time.SYSTEM,
  maxProducerIdExpirationMs: Int,
  producerIdExpirationCheckIntervalMs: Int,
  logDirFailureChannel: LogDirFailureChannel,
  lastShutdownClean: Boolean = true,
  topicId: Option[Uuid],
  keepPartitionMetadataFile: Boolean): Log
```
apply...FIXME
apply is used when:
- LogManager is requested to loadLog and getOrCreateLog
- KafkaMetadataLog is requested to apply
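The factory-method shape of apply can be sketched as follows. This is a hypothetical, simplified stand-in (SimpleLog, its fields, and the validation are illustrative, not Kafka's actual code): the companion object's apply validates arguments and wires collaborators before invoking an otherwise inaccessible constructor.

```scala
import java.io.File

// Hypothetical stand-in for kafka.log.LogConfig (illustrative only).
final case class LogConfig(segmentBytes: Int)

// A class with a private constructor, mirroring how Log instances
// are only created via the companion object's apply factory.
class SimpleLog private (
    val dir: File,
    val config: LogConfig,
    val logStartOffset: Long)

object SimpleLog {
  // Like Log.apply: validate arguments, then call the constructor.
  def apply(
      dir: File,
      config: LogConfig,
      logStartOffset: Long = 0L): SimpleLog = {
    require(logStartOffset >= 0, s"logStartOffset must not be negative: $logStartOffset")
    new SimpleLog(dir, config, logStartOffset)
  }
}

val log = SimpleLog(new File("/tmp/my-topic-0"), LogConfig(1024 * 1024))
```

The private constructor forces every creation path through the factory, which is what lets apply guarantee its invariants.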
## Reading Messages
```scala
read(
  startOffset: Long,
  maxLength: Int,
  isolation: FetchIsolation,
  minOneMessage: Boolean): FetchDataInfo
```
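The maxLength and minOneMessage parameters interact: a read stops before exceeding maxLength, but minOneMessage guarantees that at least the first record is returned even when it alone is larger than the limit. A hedged sketch of that contract (selectRecords and the byte sizes are illustrative, not Kafka's API):

```scala
// Illustrative sketch of the maxLength / minOneMessage contract.
// `sizes` are record sizes in bytes; the result is the prefix of records
// that fits within maxLength, except that minOneMessage forces at least one.
def selectRecords(sizes: Seq[Int], maxLength: Int, minOneMessage: Boolean): Seq[Int] = {
  var used = 0
  val within = sizes.takeWhile { s =>
    val fits = used + s <= maxLength
    if (fits) used += s
    fits
  }
  if (within.isEmpty && minOneMessage) sizes.take(1) else within
}
```

Without such a guarantee, a single record larger than the fetch size would stall consumers forever, which is why the flag exists.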
read prints out the following TRACE message to the logs:
```text
Reading maximum [maxLength] bytes at offset [startOffset] from log with total length [size] bytes
```
read...FIXME
read requests the LogSegment to read messages.
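Conceptually, the segment to read from is the one with the greatest base offset at or below startOffset, which is a floor lookup in a map sorted by base offset. The sketch below (Segment, SegmentIndex) is illustrative, not Kafka's actual data structures:

```scala
import java.util.TreeMap

// Illustrative: a log is a sequence of segments keyed by base offset.
final case class Segment(baseOffset: Long, lastOffset: Long)

class SegmentIndex {
  private val segments = new TreeMap[Long, Segment]()

  def add(segment: Segment): Unit = segments.put(segment.baseOffset, segment)

  // The segment that may contain startOffset is the one with the greatest
  // base offset <= startOffset (a floor lookup in the sorted map).
  def segmentFor(startOffset: Long): Option[Segment] =
    Option(segments.floorEntry(startOffset)).map(_.getValue)
}

val index = new SegmentIndex
index.add(Segment(baseOffset = 0L, lastOffset = 99L))
index.add(Segment(baseOffset = 100L, lastOffset = 199L))
```

Keying segments by base offset keeps the lookup logarithmic regardless of how many segments the log has rolled.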
read...FIXME
read is used when:
- Partition is requested to readRecords
- GroupMetadataManager is requested to doLoadGroupsAndOffsets
- TransactionStateManager is requested to loadTransactionMetadata
- Log is requested to convertToOffsetMetadataOrThrow
- KafkaMetadataLog is requested to read
## Logging
Enable the ALL logging level for the kafka.log.Log logger to see what happens inside.
Add the following line to log4j.properties:
```text
log4j.logger.kafka.log.Log=ALL
```
Refer to Logging.