Abstract class for fetching data for multiple partitions from the same broker.
This class saves the broker's metadata to a file
The ClientIdConfigHandler will process clientId config changes in ZK. The callback provides the clientId and the full set of properties read from ZK. This implementation reports the overrides to the respective ClientQuotaManager objects
Helper class that records per-client metrics. It is also responsible for maintaining quota usage statistics for all clients.
Configuration settings for quota management
The default bytes per second quota allocated to any client
The number of samples to retain in memory
The time span of each sample
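The three quota settings above describe a sampled, sliding-window rate measurement: traffic is recorded into fixed-size time windows, a bounded number of which are retained, and the observed rate is compared against the per-client quota. A minimal sketch of that idea, with hypothetical names (SampledRate, record, rate are ours, not Kafka's actual API):

```python
import time

class SampledRate:
    """Sliding-window byte-rate tracker: retains `num_samples` windows of
    `sample_window_sec` each, mirroring the three quota settings above.
    Illustrative only; not Kafka's implementation."""

    def __init__(self, num_samples=11, sample_window_sec=1.0):
        self.num_samples = num_samples
        self.window = sample_window_sec
        self.samples = {}  # window index -> bytes recorded in that window

    def record(self, num_bytes, now=None):
        now = time.time() if now is None else now
        idx = int(now // self.window)
        self.samples[idx] = self.samples.get(idx, 0) + num_bytes
        # Drop samples that have fallen out of the retained window.
        oldest = idx - self.num_samples + 1
        self.samples = {i: b for i, b in self.samples.items() if i >= oldest}

    def rate(self, now=None):
        """Bytes per second averaged over the retained samples."""
        now = time.time() if now is None else now
        oldest = int(now // self.window) - self.num_samples + 1
        total = sum(b for i, b in self.samples.items() if i >= oldest)
        return total / (self.num_samples * self.window)

quota = 1024.0  # hypothetical default bytes-per-second quota for a client
tracker = SampledRate(num_samples=2, sample_window_sec=1.0)
tracker.record(4096, now=100.0)
over_quota = tracker.rate(now=100.0) > quota  # 4096 B over 2 s = 2048 B/s
```

Averaging over several small windows rather than one large one is what makes short bursts visible while still smoothing the measured rate.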
The ConfigHandler is used to process config change notifications received by the DynamicConfigManager
A delayed fetch operation that can be created by the replica manager and watched in the fetch operation purgatory
An operation whose processing needs to be delayed for at most the given delayMs. For example, a delayed produce operation could be waiting for a specified number of acks, or a delayed fetch operation could be waiting for a given number of bytes to accumulate.
The logic upon completing a delayed operation is defined in onComplete() and will be called exactly once. Once an operation is completed, isCompleted() will return true. onComplete() can be triggered by either forceComplete(), which forces calling onComplete() after delayMs if the operation is not yet completed, or tryComplete(), which first checks whether the operation can be completed now, and if so calls forceComplete().
A subclass of DelayedOperation needs to provide an implementation of both onComplete() and tryComplete().
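The contract above (onComplete() runs exactly once, reached either through tryComplete() when the criterion is met or through forceComplete() on expiration) can be sketched as follows. All names here are illustrative, not Kafka's actual classes:

```python
import threading

class DelayedOperation:
    """Sketch of the delayed-operation contract described above."""

    def __init__(self, delay_ms):
        self.delay_ms = delay_ms
        self._lock = threading.Lock()
        self._completed = False

    def is_completed(self):
        return self._completed

    def force_complete(self):
        # Atomically move to completed; only the first caller runs
        # on_complete(), so it executes exactly once.
        with self._lock:
            if self._completed:
                return False
            self._completed = True
        self.on_complete()
        return True

    def try_complete(self):
        # Check whether the operation can be completed now; if so,
        # force completion.
        return self.can_complete() and self.force_complete()

    def can_complete(self):   # completion criterion (subclass hook)
        raise NotImplementedError

    def on_complete(self):    # completion logic (subclass hook)
        raise NotImplementedError


class DelayedAcks(DelayedOperation):
    """Completes once `required` acks arrive, like the delayed produce
    example in the text (these names are ours, not Kafka's)."""

    def __init__(self, required, delay_ms=1000):
        super().__init__(delay_ms)
        self.required = required
        self.acks = 0
        self.completions = 0

    def ack(self):
        self.acks += 1
        self.try_complete()

    def can_complete(self):
        return self.acks >= self.required

    def on_complete(self):
        self.completions += 1

op = DelayedAcks(required=2)
op.ack()                       # one ack: criterion not met yet
first = op.is_completed()
op.ack()                       # second ack: try_complete() fires on_complete()
op.force_complete()            # simulated expiration: now a no-op
```

The lock around the completed flag is what guarantees the exactly-once property when the criterion path and the expiration path race.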
Keys used for delayed operation metrics recording
A helper purgatory class for bookkeeping delayed operations with a timeout, and expiring timed out operations.
A delayed produce operation that can be created by the replica manager and watched in the produce operation purgatory
This class initiates and carries out config changes for all entities defined in ConfigType.
It works as follows.
Config is stored under the path /config/entityType/entityName, e.g. /config/topics/<topic_name> and /config/clients/<clientId>. This znode stores the overrides for this entity (but no defaults) in properties format.
To avoid watching all topics for changes, we instead have a notification path, /config/changes, on which the DynamicConfigManager has a child watch.
To update a config we first update the config properties. Then we create a new sequential znode under the change path which contains the name of the entityType and entityName that was updated, say /config/changes/config_change_13321. The sequential znode contains data in this format: {"version" : 1, "entityType":"topic/client", "entityName" : "topic_name/client_id"}. This is just a notification; the actual config change is stored only once under the /config/entityType/entityName path.
This will fire a watcher on all brokers. This watcher works as follows. It reads all the config change notifications and keeps track of the highest config change suffix number it has applied previously. For any previously applied change it finds, it checks whether the notification is older than a static expiration time (say 10 mins) and, if so, deletes it. For any new change it reads the new configuration, combines it with the defaults, and updates the existing config.
Note that the config is always read from the config path in ZK; the notification is just a trigger to do so. So if a broker is down and misses a change, that is fine: when it restarts it will load the full config anyway. Note also that if there are two consecutive config changes, it is possible that only the last one will be applied (since by the time the broker reads the config, both changes may have been made). In this case the broker would needlessly refresh the config twice, but that is harmless.
On restart the config manager re-processes all notifications. This will usually be wasted work, but avoids any race conditions on startup where a change might be missed between the initial config load and registering for change notifications.
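The ZK layout described above can be illustrated with a small sketch. The helper names are ours, but the paths and the notification payload follow the text (with "topics" standing in for the entityType):

```python
import json

def entity_config_path(entity_type, entity_name):
    # Overrides for the entity live here, e.g. /config/topics/<topic_name>.
    return "/config/%s/%s" % (entity_type, entity_name)

def change_notification(entity_type, entity_name):
    # Payload of the sequential znode under /config/changes; it only
    # names the entity -- the actual config lives at the path above.
    return json.dumps({"version": 1,
                       "entityType": entity_type,
                       "entityName": entity_name})

path = entity_config_path("topics", "my-topic")
payload = change_notification("topics", "my-topic")
```

Keeping the notification free of config data is what makes missed notifications safe: a broker that reconnects simply re-reads the authoritative config path.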
The fetch metadata maintained by the delayed fetch operation
Logic to handle the various Kafka requests
This class registers the broker in zookeeper to allow other brokers and consumers to detect failures. It uses an ephemeral znode with the path: /brokers/[0...N] --> advertisedHost:advertisedPort
Right now our definition of health is fairly naive. If we register in zk we are healthy, otherwise we are dead.
A thread that answers kafka requests.
Represents the lifecycle of a single Kafka broker. Handles all functionality required to start up and shut down a single Kafka node.
This trait defines a leader elector. If the existing leader is dead, this class will handle automatic re-election, and if it succeeds, it invokes the leader state change callback
This class saves out a map of topic/partition=>offsets to a file
Case class to keep a partition's offset and its state (active, inactive)
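A minimal sketch of such a topic/partition => offset checkpoint file. The line format and helper names are illustrative assumptions, not Kafka's actual on-disk format:

```python
import os
import tempfile

def write_checkpoint(path, offsets):
    """Persist a {(topic, partition): offset} map, one entry per line.
    Write to a temp file first, then rename over the target, so a crash
    mid-write never leaves a truncated checkpoint."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        for (topic, partition), offset in sorted(offsets.items()):
            f.write("%s %d %d\n" % (topic, partition, offset))
    os.replace(tmp, path)

def read_checkpoint(path):
    """Rebuild the offset map from a checkpoint file."""
    offsets = {}
    with open(path) as f:
        for line in f:
            topic, partition, offset = line.split()
            offsets[(topic, int(partition))] = int(offset)
    return offsets

path = os.path.join(tempfile.mkdtemp(), "offset-checkpoint")
write_checkpoint(path, {("events", 0): 42, ("events", 1): 7})
restored = read_checkpoint(path)
```

The write-then-rename step is the design point worth noting: readers only ever observe a complete checkpoint.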
The produce metadata maintained by the delayed produce operation
The TopicConfigHandler will process topic config changes in ZK. The callback provides the topic name and the full set of properties read from ZK
This class handles ZooKeeper-based leader election using an ephemeral path. The election module does not handle session expiration; instead it assumes the caller will handle it, most likely by trying to re-elect. If the existing leader is dead, this class will handle automatic re-election, and if it succeeds, it invokes the leader state change callback
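The election pattern described above can be sketched with an in-memory stand-in for ZooKeeper's ephemeral znodes. Everything here (InMemoryZk, Elector) is an illustrative assumption and ignores real session and watch handling:

```python
class InMemoryZk:
    """Stand-in for ZooKeeper's ephemeral znodes; purely illustrative."""
    def __init__(self):
        self.nodes = {}
    def create_ephemeral(self, path, data):
        if path in self.nodes:
            return False  # path taken: another broker is already leader
        self.nodes[path] = data
        return True
    def delete(self, path):
        # Simulates the node vanishing when the leader's session ends.
        self.nodes.pop(path, None)

class Elector:
    """Whoever creates the ephemeral path becomes leader and runs the
    state-change callback; on leader failure the others re-elect."""
    def __init__(self, zk, path, broker_id, on_becoming_leader):
        self.zk, self.path, self.broker_id = zk, path, broker_id
        self.on_becoming_leader = on_becoming_leader
        self.is_leader = False
    def elect(self):
        self.is_leader = self.zk.create_ephemeral(self.path, self.broker_id)
        if self.is_leader:
            self.on_becoming_leader()
        return self.is_leader

zk = InMemoryZk()
became_leader = []
a = Elector(zk, "/controller", 0, lambda: became_leader.append(0))
b = Elector(zk, "/controller", 1, lambda: became_leader.append(1))
a.elect()                  # broker 0 wins the election
b.elect()                  # path exists -> broker 1 stays a follower
zk.delete("/controller")   # leader dies: ephemeral node goes away
b.elect()                  # re-election: broker 1 now wins
```

The ephemeral node is the whole mechanism: leadership is tied to the session that created the node, so a dead leader automatically frees the path for re-election.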
Represents all the entities that can be configured via ZK
Broker states are the possible states that a Kafka broker can be in. A broker should be in only one state at a time. The expected state transitions among the defined states are:
                 +-----------+
                 |Not Running|
                 +-----+-----+
                       |
                       v
                 +-----+-----+
                 |Starting   +--+
                 +-----+-----+  | +----+------------+
                       |        +>+RecoveringFrom   |
                       v          |UncleanShutdown  |
+----------+     +-----+-----+    +-------+---------+
|RunningAs |     |RunningAs  |            |
|Controller+<--->+Broker     +<-----------+
+----------+     +-----+-----+
     |                 |
     |                 v
     |           +-----+------------+
     |---------> |PendingControlled |
                 |Shutdown          |
                 +-----+------------+
                       |
                       v
                 +-----+----------+
                 |BrokerShutting  |
                 |Down            |
                 +-----+----------+
                       |
                       v
                 +-----+-----+
                 |Not Running|
                 +-----------+
Custom states are also allowed for cases where there are custom Kafka states for different scenarios.
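The diagram above amounts to a small state machine. A sketch with a transition check, where the transition table is our reading of the diagram rather than Kafka's actual code:

```python
# Legal moves between states, transcribed from the diagram above
# (an illustrative reading, not Kafka's implementation).
TRANSITIONS = {
    "NotRunning": {"Starting"},
    "Starting": {"RecoveringFromUncleanShutdown", "RunningAsBroker"},
    "RecoveringFromUncleanShutdown": {"RunningAsBroker"},
    "RunningAsBroker": {"RunningAsController", "PendingControlledShutdown"},
    "RunningAsController": {"RunningAsBroker", "PendingControlledShutdown"},
    "PendingControlledShutdown": {"BrokerShuttingDown"},
    "BrokerShuttingDown": {"NotRunning"},
}

class BrokerState:
    """Holds exactly one state at a time and rejects illegal moves."""

    def __init__(self):
        self.current = "NotRunning"

    def move_to(self, new_state):
        if new_state not in TRANSITIONS.get(self.current, set()):
            raise ValueError("illegal transition %s -> %s"
                             % (self.current, new_state))
        self.current = new_state

# Walk a full clean-shutdown lifecycle from the diagram.
state = BrokerState()
for s in ("Starting", "RunningAsBroker", "PendingControlledShutdown",
          "BrokerShuttingDown", "NotRunning"):
    state.move_to(s)
```

Supporting custom states would then just mean extending the transition table rather than changing the checking logic.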