Packages that use CompactionThroughputController

Package | Description
---|---
org.apache.hadoop.hbase.regionserver |
org.apache.hadoop.hbase.regionserver.compactions |

Uses of CompactionThroughputController in org.apache.hadoop.hbase.regionserver
Methods in org.apache.hadoop.hbase.regionserver that return CompactionThroughputController

Modifier and Type | Method and Description
---|---
CompactionThroughputController | CompactSplitThread.getCompactionThroughputController()
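A minimal sketch of reading the controller that the region server's compaction machinery is currently using; the CompactSplitThread instance is assumed to be supplied by the surrounding region server code rather than constructed here.

```java
// Sketch only: obtain the throughput controller currently in use by the
// region server's compaction/split thread pool. The CompactSplitThread is
// assumed to come from the enclosing region server.
import org.apache.hadoop.hbase.regionserver.CompactSplitThread;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionThroughputController;

public class ActiveControllerSketch {
  static CompactionThroughputController activeController(CompactSplitThread compactSplit) {
    return compactSplit.getCompactionThroughputController();
  }
}
```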
Methods in org.apache.hadoop.hbase.regionserver with parameters of type CompactionThroughputController

Modifier and Type | Method and Description
---|---
List<StoreFile> | Store.compact(CompactionContext compaction, CompactionThroughputController throughputController)
List<StoreFile> | HStore.compact(CompactionContext compaction, CompactionThroughputController throughputController): Compact the StoreFiles.
boolean | HRegion.compact(CompactionContext compaction, Store store, CompactionThroughputController throughputController): Called by the compaction thread, and after a region is opened, to compact the HStores if necessary.
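The compact methods above take an already selected CompactionContext plus a throughput controller. A minimal sketch, assuming the context has been selected elsewhere by the region server's compaction machinery (the store and context parameters are placeholders, not part of this page):

```java
// Sketch: compact one store under an explicit throughput controller.
// NoLimitCompactionThroughputController imposes no limit; a pressure-aware
// controller could be passed instead.
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionThroughputController;
import org.apache.hadoop.hbase.regionserver.compactions.NoLimitCompactionThroughputController;

public class CompactWithControllerSketch {
  static List<StoreFile> compactStore(HStore store, CompactionContext context)
      throws IOException {
    CompactionThroughputController controller = new NoLimitCompactionThroughputController();
    // Returns the new store files produced by the compaction.
    return store.compact(context, controller);
  }
}
```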

Uses of CompactionThroughputController in org.apache.hadoop.hbase.regionserver.compactions
Classes in org.apache.hadoop.hbase.regionserver.compactions that implement CompactionThroughputController

Modifier and Type | Class and Description
---|---
class | NoLimitCompactionThroughputController: A dummy CompactionThroughputController that does nothing.
class | PressureAwareCompactionThroughputController: A throughput controller that uses the following scheme to limit compaction throughput: if the compaction pressure is greater than 1.0, no limit is applied; in off-peak hours, a fixed limit given by "hbase.hstore.compaction.throughput.offpeak" is used; in normal hours, the maximum throughput is tuned between "hbase.hstore.compaction.throughput.lower.bound" and "hbase.hstore.compaction.throughput.higher.bound" using the formula "lower + (higher - lower) * compactionPressure", where compactionPressure is in the range [0.0, 1.0].
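A short sketch of how the pressure-aware limit described above works out numerically; the byte-per-second values set here are illustrative assumptions, not defaults taken from this page.

```java
// Sketch: the pressure-aware throughput formula from the description above,
// evaluated against illustrative (assumed) bounds.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class PressureAwareLimitSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Bounds between which the max throughput is tuned in normal hours
    // (values below are assumptions for illustration only).
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 10L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 20L * 1024 * 1024);
    // Fixed limit applied during off-peak hours.
    conf.setLong("hbase.hstore.compaction.throughput.offpeak", 50L * 1024 * 1024);

    long lower = conf.getLong("hbase.hstore.compaction.throughput.lower.bound", 0L);
    long higher = conf.getLong("hbase.hstore.compaction.throughput.higher.bound", 0L);
    double compactionPressure = 0.5; // in [0.0, 1.0]; above 1.0 there is no limit
    double maxThroughput = lower + (higher - lower) * compactionPressure;
    System.out.println("max compaction throughput (bytes/sec): " + maxThroughput);
  }
}
```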
Methods in org.apache.hadoop.hbase.regionserver.compactions that return CompactionThroughputController

Modifier and Type | Method and Description
---|---
static CompactionThroughputController | CompactionThroughputControllerFactory.create(RegionServerServices server, org.apache.hadoop.conf.Configuration conf)
Methods in org.apache.hadoop.hbase.regionserver.compactions that return types with arguments of type CompactionThroughputController

Modifier and Type | Method and Description
---|---
static Class<? extends CompactionThroughputController> | CompactionThroughputControllerFactory.getThroughputControllerClass(org.apache.hadoop.conf.Configuration conf)
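A minimal sketch of using the two factory methods above together. The configuration key used to select the implementation is assumed here to be "hbase.regionserver.throughput.controller", and the RegionServerServices instance is assumed to be provided by the enclosing region server.

```java
// Sketch: select and instantiate a throughput controller via the factory.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.regionserver.RegionServerServices;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionThroughputController;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionThroughputControllerFactory;
import org.apache.hadoop.hbase.regionserver.compactions.PressureAwareCompactionThroughputController;

public class ThroughputControllerFactorySketch {
  static CompactionThroughputController build(RegionServerServices server) {
    Configuration conf = HBaseConfiguration.create();
    // Ask for the pressure-aware implementation (config key name is an assumption).
    conf.setClass("hbase.regionserver.throughput.controller",
        PressureAwareCompactionThroughputController.class,
        CompactionThroughputController.class);
    // Inspect which class the factory would instantiate, then create it.
    Class<? extends CompactionThroughputController> clazz =
        CompactionThroughputControllerFactory.getThroughputControllerClass(conf);
    System.out.println("controller class: " + clazz.getName());
    return CompactionThroughputControllerFactory.create(server, conf);
  }
}
```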
Methods in org.apache.hadoop.hbase.regionserver.compactions with parameters of type CompactionThroughputController

Modifier and Type | Method and Description
---|---
List<org.apache.hadoop.fs.Path> | DefaultCompactor.compact(CompactionRequest request, CompactionThroughputController throughputController): Do a minor/major compaction on an explicit set of storefiles from a Store.
List<org.apache.hadoop.fs.Path> | StripeCompactor.compact(CompactionRequest request, int targetCount, long targetSize, byte[] left, byte[] right, byte[] majorRangeFromRow, byte[] majorRangeToRow, CompactionThroughputController throughputController)
List<org.apache.hadoop.fs.Path> | StripeCompactor.compact(CompactionRequest request, List<byte[]> targetBoundaries, byte[] majorRangeFromRow, byte[] majorRangeToRow, CompactionThroughputController throughputController)
abstract List<org.apache.hadoop.fs.Path> | CompactionContext.compact(CompactionThroughputController throughputController): Runs the compaction based on the current selection.
abstract List<org.apache.hadoop.fs.Path> | StripeCompactionPolicy.StripeCompactionRequest.execute(StripeCompactor compactor, CompactionThroughputController throughputController): Executes the request against the compactor (essentially, just calls the correct overload of the compact method) to simulate more dynamic dispatch.
protected boolean | Compactor.performCompaction(InternalScanner scanner, Compactor.CellSink writer, long smallestReadPoint, CompactionThroughputController throughputController): Performs the compaction.
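All of these entry points ultimately run a compaction under the supplied controller. A minimal sketch, assuming the CompactionContext has already gone through file selection:

```java
// Sketch: execute a previously selected compaction, letting the controller
// throttle how quickly the new store files are written.
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionThroughputController;

public class RunCompactionSketch {
  static List<Path> run(CompactionContext context,
      CompactionThroughputController controller) throws IOException {
    return context.compact(controller);
  }
}
```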