org.apache.hadoop.hbase.master.balancer
Class SimpleLoadBalancer

java.lang.Object
  extended by org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer
      extended by org.apache.hadoop.hbase.master.balancer.SimpleLoadBalancer
All Implemented Interfaces:
org.apache.hadoop.conf.Configurable, LoadBalancer, Stoppable

@InterfaceAudience.LimitedPrivate(value="Configuration")
public class SimpleLoadBalancer
extends BaseLoadBalancer

Makes decisions about the placement and movement of Regions across RegionServers.

Cluster-wide load balancing will occur only when there are no regions in transition, and it runs at a fixed period of time using balanceCluster(Map).
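
The balancing period is governed by the hbase.balancer.period property (300000 ms, i.e. five minutes, by default). A minimal sketch of overriding it, assuming a standard HBaseConfiguration (the chosen value is illustrative only):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class BalancerPeriodSketch {
    public static void main(String[] args) {
      // hbase.balancer.period controls how often the Master invokes the
      // balancer; 300000 ms (5 minutes) is the shipped default.
      Configuration conf = HBaseConfiguration.create();
      conf.setInt("hbase.balancer.period", 60000); // hypothetical: 1 minute
      System.out.println(conf.getInt("hbase.balancer.period", -1));
    }
  }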

Inline region placement with BaseLoadBalancer.immediateAssignment(java.util.List, java.util.List) can be used when the Master needs to handle closed regions for which it currently has no destination set. This can happen during master failover.
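
As a usage sketch, immediateAssignment pairs each unplaced region with a live server. The table names, hostnames, and start codes below are hypothetical, and real callers are the Master's assignment paths rather than user code:

  import java.util.Arrays;
  import java.util.List;
  import java.util.Map;

  import org.apache.hadoop.hbase.HRegionInfo;
  import org.apache.hadoop.hbase.ServerName;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.master.balancer.SimpleLoadBalancer;

  public class ImmediateAssignmentSketch {
    public static void main(String[] args) {
      SimpleLoadBalancer balancer = new SimpleLoadBalancer();

      // Closed regions the Master has no destination for (e.g. after failover).
      List<HRegionInfo> regions = Arrays.asList(
          new HRegionInfo(TableName.valueOf("t1")),
          new HRegionInfo(TableName.valueOf("t2")));

      // Currently live RegionServers.
      List<ServerName> servers = Arrays.asList(
          ServerName.valueOf("rs1.example.com", 16020, 1L),
          ServerName.valueOf("rs2.example.com", 16020, 2L));

      // Every region receives a destination immediately.
      Map<HRegionInfo, ServerName> assignment =
          balancer.immediateAssignment(regions, servers);
      for (Map.Entry<HRegionInfo, ServerName> e : assignment.entrySet()) {
        System.out.println(e.getKey().getRegionNameAsString() + " -> " + e.getValue());
      }
    }
  }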

On cluster startup, bulk assignment can be used to determine locations for all Regions in a cluster.

This class produces plans for the AssignmentManager to execute.
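
A minimal driver sketch, assuming the balancer has been set up by the Master (configuration and master services are omitted, and an empty cluster map is used purely for illustration):

  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  import org.apache.hadoop.hbase.HRegionInfo;
  import org.apache.hadoop.hbase.ServerName;
  import org.apache.hadoop.hbase.master.RegionPlan;
  import org.apache.hadoop.hbase.master.balancer.SimpleLoadBalancer;

  public class BalanceClusterSketch {
    public static void main(String[] args) {
      SimpleLoadBalancer balancer = new SimpleLoadBalancer();

      // Each live RegionServer mapped to the regions it currently hosts.
      // A real Master builds this from cluster state.
      Map<ServerName, List<HRegionInfo>> clusterMap =
          new HashMap<ServerName, List<HRegionInfo>>();

      // null means the cluster is already balanced.
      List<RegionPlan> plans = balancer.balanceCluster(clusterMap);
      if (plans != null) {
        for (RegionPlan plan : plans) {
          // The AssignmentManager would execute each move; we only print it.
          System.out.println(plan.getRegionName() + ": "
              + plan.getSource() + " -> " + plan.getDestination());
        }
      }
    }
  }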


Nested Class Summary
 
Nested classes/interfaces inherited from class org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer
BaseLoadBalancer.Cluster
 
Field Summary
 
Fields inherited from class org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer
metricsBalancer, services, slop
 
Constructor Summary
SimpleLoadBalancer()
 
Method Summary
 List<RegionPlan> balanceCluster(Map<ServerName,List<HRegionInfo>> clusterMap)
          Generate a global load balancing plan according to the specified map of server information to the most loaded regions of each server.
 
Methods inherited from class org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer
getConf, immediateAssignment, initialize, isStopped, needsBalance, randomAssignment, regionOffline, regionOnline, retainAssignment, roundRobinAssignment, setClusterStatus, setConf, setMasterServices, setSlop, stop
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

SimpleLoadBalancer

public SimpleLoadBalancer()
Method Detail

balanceCluster

public List<RegionPlan> balanceCluster(Map<ServerName,List<HRegionInfo>> clusterMap)
Generate a global load balancing plan according to the specified map of server information to the most loaded regions of each server. The load balancing invariant is that all servers are within 1 region of the average number of regions per server. If the average is an integer, all servers will be balanced to the average. Otherwise, all servers will have either floor(average) or ceiling(average) regions.

In HBASE-3609, regionsToMove was modeled using Guava's MinMaxPriorityQueue so that regions can be fetched from both ends of the queue. At the beginning, we check whether an empty region server was just discovered by the Master. If so, we alternately choose new / old regions from the head / tail of regionsToMove, respectively. This alternation avoids clustering young regions on the newly discovered region server. Otherwise, we choose new regions from the head of regionsToMove.

Another improvement from HBASE-3609 is that regions from regionsToMove are assigned to underloaded servers in round-robin fashion. Previously one underloaded server would be filled before we moved on to the next underloaded server, leading to clustering of young regions.

Finally, we randomly shuffle underloaded servers so that they receive offloaded regions relatively evenly across calls to balanceCluster(). The algorithm is currently implemented as follows (a worked example of the MIN/MAX arithmetic appears after the list):
  1. Determine the two valid numbers of regions each server should have, MIN=floor(average) and MAX=ceiling(average).
  2. Iterate down the most loaded servers, shedding regions from each so each server hosts exactly MAX regions. Stop once you reach a server that already has <= MAX regions.

    Order the regions to move from most recent to least.

  3. Iterate down the least loaded servers, assigning regions so each server has exactly MIN regions. Stop once you reach a server that already has >= MIN regions. Regions being assigned to underloaded servers are those that were shed in the previous step. It is possible that there were not enough regions shed to fill each underloaded server to MIN; if so, we end up with a number of regions still required to do so, neededRegions. It is also possible that we were able to fill each underloaded server but ended up with regions that were shed from overloaded servers and still have no assignment. If neither of these conditions holds (no regions needed to fill the underloaded servers, no regions leftover from overloaded servers), we are done and return. Otherwise we handle these cases below.
  4. If neededRegions is non-zero (we still have underloaded servers), we iterate the most loaded servers again, shedding a single region from each (this brings them from having MAX regions down to having MIN regions).
  5. We now definitely have more regions that need assignment, either from the previous step or from the original shedding from overloaded servers. Iterate the least loaded servers filling each to MIN.
  6. If we still have more regions that need assignment, again iterate the least loaded servers, this time assigning one region to each (filling them to MAX) until we run out.
  7. All servers will now either host MIN or MAX regions. In addition, any server hosting >= MAX regions is guaranteed to end up with MAX regions at the end of the balancing. This ensures the minimal number of regions possible are moved.
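
To make the MIN/MAX arithmetic above concrete, here is a toy walk-through on a hypothetical four-server cluster (plain Java, not HBase code; the region counts are made up):

  import java.util.Arrays;

  public class BalanceArithmeticSketch {
    public static void main(String[] args) {
      int[] load = {9, 7, 4, 3};                 // regions per server
      int total = Arrays.stream(load).sum();     // 23
      double avg = (double) total / load.length; // 5.75
      int min = (int) Math.floor(avg);           // MIN = 5
      int max = (int) Math.ceil(avg);            // MAX = 6
      System.out.println("MIN=" + min + " MAX=" + max);
      // Step 2: servers above MAX shed down to MAX: 9->6, 7->6  => 4 regions shed
      // Step 3: servers below MIN fill up to MIN:   4->5, 3->5  => 3 regions used
      // Step 6: the 1 leftover region tops up a least loaded server to MAX
      // Final distribution: {6, 6, 6, 5} -- every server hosts MIN or MAX regions.
    }
  }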
TODO: We can at most reassign the number of regions away from a particular server to be how many they report as most loaded. Should we just keep all assignments in memory? Any objections? Does this mean we need HeapSize on HMaster? Or just careful monitoring? (current thinking is we will hold all assignments in memory)

Parameters:
clusterMap - Map of regionservers and their load/region information to a list of their most loaded regions
Returns:
a list of regions to be moved, including source and destination, or null if cluster is already balanced

