Consumer that dumps messages to standard out.
Performance test for the full zookeeper consumer
This class records the average end to end latency for a single message to travel through Kafka
A utility that retrieves the offsets of broker partitions in ZK and prints them to an output file in the following format:
/consumers/group1/offsets/topic1/1-0:286894308 /consumers/group1/offsets/topic1/2-0:284803985
This utility expects 3 arguments:
To print debug messages, add the following line to log4j.properties: log4j.logger.kafka.tools.ExportZkOffsets$=DEBUG (for Eclipse debugging, copy log4j.properties to the binary directory in "core", such as core/bin)
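The exported offset format shown above packs the consumer group, topic, broker-partition pair, and offset into a single ZK-path-plus-offset line. A minimal sketch of a parser for that format (a hypothetical helper, not part of the tool itself):

```python
# Parse one exported offset line of the form
# /consumers/<group>/offsets/<topic>/<broker>-<partition>:<offset>
def parse_offset_line(line):
    path, offset = line.rsplit(":", 1)
    parts = path.split("/")
    # parts: ['', 'consumers', group, 'offsets', topic, 'broker-partition']
    group, topic, broker_partition = parts[2], parts[4], parts[5]
    broker, partition = broker_partition.split("-")
    return {
        "group": group,
        "topic": topic,
        "broker": int(broker),
        "partition": int(partition),
        "offset": int(offset),
    }

print(parse_offset_line("/consumers/group1/offsets/topic1/1-0:286894308"))
```

The same format is what ImportZkOffsets (below) expects back as its input file.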
A utility that updates the offsets of broker partitions in ZK.
This utility expects 2 input files as arguments:
/consumers/group1/offsets/topic1/3-0:285038193 /consumers/group1/offsets/topic1/1-0:286894308
To print debug messages, add the following line to log4j.properties: log4j.logger.kafka.tools.ImportZkOffsets$=DEBUG (for Eclipse debugging, copy log4j.properties to the binary directory in "core", such as core/bin)
The mirror maker has the following architecture:
- There are N mirror maker threads, all sharing one ZookeeperConsumerConnector, and each owning a Kafka stream.
- All the mirror maker threads share one producer.
- Each mirror maker thread periodically flushes the producer and then commits all offsets.
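The flush-then-commit ordering above is what prevents data loss: every mirrored message must be durable in the producer before its source offset is committed. A minimal sketch of that ordering, using hypothetical stand-ins for the real connector, stream, and producer:

```python
import threading

# Stand-in for the shared producer (hence the lock): buffers sends,
# makes them durable on flush. Not the real Kafka producer API.
class FakeProducer:
    def __init__(self):
        self.buffered, self.flushed = [], []
        self.lock = threading.Lock()

    def send(self, msg):
        with self.lock:
            self.buffered.append(msg)

    def flush(self):
        with self.lock:
            self.flushed.extend(self.buffered)
            self.buffered.clear()

# Stand-in for the shared ZookeeperConsumerConnector.
class FakeConnector:
    def __init__(self):
        self.committed = False

    def commit_offsets(self):
        self.committed = True

def mirror_batch(stream, producer, connector):
    for msg in stream:        # each mirror maker thread owns one stream
        producer.send(msg)
    producer.flush()          # flush first, so every mirrored message
    connector.commit_offsets()  # is durable before offsets are committed

producer, connector = FakeProducer(), FakeConnector()
mirror_batch(["m1", "m2"], producer, connector)
```

Committing before flushing would risk losing any message still buffered in the producer if the mirror maker crashed between the two steps.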
For mirror maker, the following settings are set by default to make sure there is no data loss:
For verifying the consistency among replicas.
The consistency verification is up to the high watermark. The tool reports the max lag between the verified offset and the high watermark among all partitions.
If a broker goes down, the verification of the partitions on that broker is delayed until the broker is up again.
Caveats:
1. The tool needs all brokers to be up at startup time.
2. The tool doesn't handle out-of-range offsets.
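The max-lag report described above amounts to a simple per-partition computation: the lag is the distance between the high watermark and the last verified offset, and the tool reports the maximum across partitions. In sketch form (the data below is made up):

```python
# partitions maps partition id -> (verified_offset, high_watermark)
def max_verification_lag(partitions):
    return max(hw - verified for verified, hw in partitions.values())

lag = max_verification_lag({
    0: (100, 120),   # lag 20
    1: (200, 205),   # lag 5
    2: (50, 50),     # lag 0, fully verified up to the high watermark
})
print(lag)  # 20
```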
Performance test for the simple consumer
Command line program to dump out messages to standard out using the simple consumer
A utility that merges the state change logs (possibly obtained from different brokers and over multiple days).
This utility expects at least one of the following two arguments:
1. A list of state change log files
2. A regex to specify state change log file names
This utility optionally also accepts the following arguments:
1. The topic whose state change logs should be merged
2. A list of partitions whose state change logs should be merged (can be specified only when the topic argument is explicitly specified)
3. Start time from when the logs should be merged
4. End time until when the logs should be merged
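Since each input log file is already in timestamp order, the merge reduces to an N-way merge keyed on the timestamp, with the optional start/end times applied as a filter. A hypothetical sketch, using (timestamp, line) tuples in place of the real log-line parsing:

```python
import heapq

# files: a list of iterables, each yielding (timestamp, line) tuples
# already sorted by timestamp, as state change log files are.
def merge_logs(files, start=None, end=None):
    merged = heapq.merge(*files, key=lambda entry: entry[0])
    for ts, line in merged:
        if start is not None and ts < start:
            continue
        if end is not None and ts > end:
            continue
        yield line

# Made-up logs from two brokers covering the same time span.
broker1 = [(1, "b1: became leader"), (5, "b1: became follower")]
broker2 = [(2, "b2: became follower"), (4, "b2: became leader")]
print(list(merge_logs([broker1, broker2], start=1, end=4)))
```

`heapq.merge` streams the inputs rather than loading them, which matters when the logs span multiple days and brokers.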
A utility that updates the offset of every broker partition to the offset of earliest or latest log segment file, in ZK.
ZooKeeper 3.4.6 broke being able to pass commands on the command line. See ZOOKEEPER-1897. This class is a hack to restore this facility.
Load test for the producer
This class will be replaced by org.apache.kafka.tools.ProducerPerformance after the old producer client is removed
This class records the average end to end latency for a single message to travel through Kafka
broker_list = location of the bootstrap broker for both the producer and the consumer
num_messages = # messages to send
producer_acks = see ProducerConfig.ACKS_DOC
message_size_bytes = size of each message in bytes
e.g. [localhost:9092 test 10000 1 20]
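The measurement itself is straightforward: for each message, record the send time, wait until the consumer sees it come back, and average the per-message round trips. A sketch of that loop, with an in-process queue standing in for the real Kafka producer and consumer:

```python
import time

# produce/consume are hypothetical callables standing in for the
# Kafka clients; consume() is assumed to block until the message
# sent by produce() arrives.
def measure_latency(num_messages, produce, consume):
    total_ns = 0
    for _ in range(num_messages):
        begin = time.perf_counter_ns()
        produce(b"x")
        consume()
        total_ns += time.perf_counter_ns() - begin
    return total_ns / num_messages / 1e6  # average latency in ms

queue = []
avg_ms = measure_latency(100, queue.append, queue.pop)
print(f"avg latency: {avg_ms:.3f} ms")
```

With real clients the dominant costs are the producer ack setting (producer_acks) and the network round trip, which is why those are the knobs the tool exposes.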