Consumer that dumps messages out to standard out.
Performance test for the full zookeeper consumer
A utility that updates the offset of broker partitions in ZK.
This utility expects 2 input files as arguments:
/consumers/group1/offsets/topic1/3-0:285038193
/consumers/group1/offsets/topic1/1-0:286894308
To print debug messages, add the following line to log4j.properties: log4j.logger.kafka.tools.ImportZkOffsets$=DEBUG (for Eclipse debugging, copy log4j.properties to the binary directory in "core", such as core/bin)
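A minimal sketch (not part of the Kafka tool itself) of how one line of the offsets file above might be parsed, assuming the ZK path and the offset are separated by the last ':':

```python
def parse_offset_line(line):
    """Split '/consumers/<group>/offsets/<topic>/<partition>:<offset>'
    into a (zk_path, offset) pair. Illustrative helper, not the tool's API."""
    path, _, offset = line.strip().rpartition(":")
    return path, int(offset)

print(parse_offset_line("/consumers/group1/offsets/topic1/3-0:285038193"))
```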
Load test for the producer
For verifying the consistency among replicas.
The consistency verification is up to the high watermark. The tool reports the max lag between the verified offset and the high watermark among all partitions.
If a broker goes down, the verification of the partitions on that broker is delayed until the broker is up again.
Caveats: 1. The tool needs all brokers to be up at startup time. 2. The tool doesn't handle out-of-range offsets.
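A hypothetical sketch of the lag computation described above: for each partition the lag is the high watermark minus the verified offset, and the tool reports the maximum across all partitions. Names and data layout here are invented for illustration.

```python
def max_lag(partitions):
    """partitions: dict mapping partition name -> (verified_offset, high_watermark).
    Returns the largest gap between verification progress and the high watermark."""
    return max(hw - verified for verified, hw in partitions.values())

lags = {
    "topic1-0": (100, 150),  # lag 50
    "topic1-1": (200, 210),  # lag 10
}
print(max_lag(lags))  # -> 50
```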
Performance test for the simple consumer
Command line program to dump out messages to standard out using the simple consumer
A utility that merges the state change logs (possibly obtained from different brokers and over multiple days).
This utility expects at least one of the following two arguments:
1. A list of state change log files
2. A regex to specify state change log file names
This utility optionally also accepts the following arguments:
1. The topic whose state change logs should be merged
2. A list of partitions whose state change logs should be merged (can be specified only when the topic argument is explicitly specified)
3. Start time from when the logs should be merged
4. End time until when the logs should be merged
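The merge described above can be sketched as combining several already time-sorted log streams into one, keeping only entries inside an optional start/end window. This is an illustrative Python sketch with an invented record layout, not the tool's implementation:

```python
import heapq

def merge_logs(logs, start=0, end=float("inf")):
    """logs: iterables of (timestamp, line) tuples, each already sorted by
    timestamp. Yields a single merged stream restricted to [start, end]."""
    for ts, line in heapq.merge(*logs):
        if start <= ts <= end:
            yield ts, line

a = [(1, "a1"), (4, "a4")]
b = [(2, "b2"), (3, "b3"), (5, "b5")]
print(list(merge_logs([a, b], start=2, end=4)))  # -> [(2, 'b2'), (3, 'b3'), (4, 'a4')]
```

`heapq.merge` does the k-way merge lazily, so large log files could be streamed rather than loaded whole.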
This is a torture test that runs against an existing broker. Here is how it works:
It produces a series of specially formatted messages to one or more partitions. Each message it produces is also logged to a text file. The messages have a limited set of keys, so there is duplication in the key space.
The broker will clean its log as the test runs.
When the specified number of messages have been produced, we create a consumer, consume all the messages in the topic, and write them out to another text file.
Using a stable Unix sort, we sort both the producer log (what was sent) and the consumer log (what was retrieved) by message key. Then we compare the final message in both logs for each key. If the final messages differ for any key, we print an error and exit with code 1; otherwise we print the size reduction and exit with code 0.
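The verification step can be sketched in memory rather than with a Unix sort: collect the final value seen for each key on both sides and compare. This is an equivalent illustrative sketch with invented names, not the test's actual sort-based implementation:

```python
def last_per_key(records):
    """records: list of (key, value) in production/consumption order.
    Returns the final value observed for each key, mirroring log
    compaction's 'keep the latest message per key' guarantee."""
    final = {}
    for key, value in records:  # later entries overwrite earlier ones
        final[key] = value
    return final

produced = [("k1", "v1"), ("k2", "v2"), ("k1", "v3")]
consumed = [("k2", "v2"), ("k1", "v3")]  # compacted: only the latest per key
print(last_per_key(produced) == last_per_key(consumed))  # -> True
```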
A utility that updates the offset of every broker partition in ZK to the offset of the earliest or latest log segment file.
ZooKeeper 3.4.6 broke the ability to pass commands on the command line. See ZOOKEEPER-1897. This class is a hack to restore that facility.
A utility that retrieves the offset of broker partitions in ZK and prints them to an output file in the following format:
/consumers/group1/offsets/topic1/1-0:286894308
/consumers/group1/offsets/topic1/2-0:284803985
This utility expects 3 arguments:
To print debug messages, add the following line to log4j.properties: log4j.logger.kafka.tools.ExportZkOffsets$=DEBUG (for Eclipse debugging, copy log4j.properties to the binary directory in "core", such as core/bin)