org.apache.hadoop.hbase.regionserver.compactions
Class DefaultCompactor

java.lang.Object
  extended by org.apache.hadoop.hbase.regionserver.compactions.Compactor<StoreFile.Writer>
      extended by org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor

@InterfaceAudience.Private
public class DefaultCompactor
extends Compactor<StoreFile.Writer>

Compacts a passed set of files. Create an instance and then call compact(CompactionRequest, CompactionThroughputController, User).
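A minimal usage sketch of the pattern described above. This is illustrative only: the class is @InterfaceAudience.Private, and the conf, store, request, throughputController, and user values are assumed to be supplied by the surrounding region-server context.

```java
// Illustrative sketch only; conf, store, request, throughputController,
// and user come from the surrounding region-server context.
DefaultCompactor compactor = new DefaultCompactor(conf, store);
List<org.apache.hadoop.fs.Path> newFiles =
    compactor.compact(request, throughputController, user);
```

On success, the returned paths are the newly written store files that replace the compacted inputs.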


Nested Class Summary
 
Nested classes/interfaces inherited from class org.apache.hadoop.hbase.regionserver.compactions.Compactor
Compactor.CellSink, Compactor.CellSinkFactory<S>, Compactor.FileDetails, Compactor.InternalScannerFactory
 
Field Summary
 
Fields inherited from class org.apache.hadoop.hbase.regionserver.compactions.Compactor
compactionCompression, conf, defaultScannerFactory, progress, store
 
Constructor Summary
DefaultCompactor(org.apache.hadoop.conf.Configuration conf, Store store)
           
 
Method Summary
protected  void abortWriter(StoreFile.Writer writer)
           
protected  List<org.apache.hadoop.fs.Path> commitWriter(StoreFile.Writer writer, Compactor.FileDetails fd, CompactionRequest request)
           
 List<org.apache.hadoop.fs.Path> compact(CompactionRequest request, CompactionThroughputController throughputController, User user)
          Do a minor/major compaction on an explicit set of storefiles from a Store.
 List<org.apache.hadoop.fs.Path> compactForTesting(Collection<StoreFile> filesToCompact, boolean isMajor)
          Compact a list of files for testing.
 
Methods inherited from class org.apache.hadoop.hbase.regionserver.compactions.Compactor
compact, createFileScanners, createScanner, createScanner, createTmpWriter, getFileDetails, getProgress, getSmallestReadPoint, performCompaction, postCreateCoprocScanner, preCreateCoprocScanner, preCreateCoprocScanner
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

DefaultCompactor

public DefaultCompactor(org.apache.hadoop.conf.Configuration conf,
                        Store store)
Method Detail

compact

public List<org.apache.hadoop.fs.Path> compact(CompactionRequest request,
                                               CompactionThroughputController throughputController,
                                               User user)
                                        throws IOException
Do a minor/major compaction on an explicit set of storefiles from a Store.

Throws:
IOException

compactForTesting

public List<org.apache.hadoop.fs.Path> compactForTesting(Collection<StoreFile> filesToCompact,
                                                         boolean isMajor)
                                                  throws IOException
Compact a list of files for testing. Creates a fake CompactionRequest to pass to compact(CompactionRequest, CompactionThroughputController, User).

Parameters:
filesToCompact - the files to compact. These are used as the compactionSelection for the generated CompactionRequest.
isMajor - true to major compact (prune all deletes, max versions, etc.)
Returns:
Product of the compaction, or an empty list if all cells expired or were deleted and nothing made it through the compaction.
Throws:
IOException
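A hedged sketch of how a test might invoke this method; the compactor and filesToCompact values are assumptions here (e.g. files obtained from the Store under test via store.getStorefiles()):

```java
// Test-only sketch; filesToCompact is assumed to come from the Store
// under test. Passing true requests a major compaction.
List<org.apache.hadoop.fs.Path> compacted =
    compactor.compactForTesting(filesToCompact, true);
```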

commitWriter

protected List<org.apache.hadoop.fs.Path> commitWriter(StoreFile.Writer writer,
                                                       Compactor.FileDetails fd,
                                                       CompactionRequest request)
                                                throws IOException
Specified by:
commitWriter in class Compactor<StoreFile.Writer>
Throws:
IOException

abortWriter

protected void abortWriter(StoreFile.Writer writer)
                    throws IOException
Specified by:
abortWriter in class Compactor<StoreFile.Writer>
Throws:
IOException
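The commitWriter/abortWriter pair expresses a commit-on-success, abort-on-failure contract: the base Compactor finalizes the writer's output when compaction completes, and discards the partial output when it fails. A self-contained sketch of that contract, using hypothetical stand-ins (SinkWriter, ListWriter) rather than the actual HBase types:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for StoreFile.Writer and the compaction driver;
// the names below are illustrative, not the real HBase API.
class WriterLifecycleSketch {
    interface SinkWriter {
        void append(String cell) throws IOException;
        List<String> commit() throws IOException; // like commitWriter: publish output
        void abort() throws IOException;          // like abortWriter: discard partial output
    }

    static class ListWriter implements SinkWriter {
        private final List<String> cells = new ArrayList<>();
        private boolean aborted = false;
        public void append(String cell) { cells.add(cell); }
        public List<String> commit() { return new ArrayList<>(cells); }
        public void abort() { aborted = true; cells.clear(); }
        public boolean isAborted() { return aborted; }
    }

    // Mirrors the Compactor contract: commit the writer on success,
    // abort it if the compaction loop throws.
    static List<String> compact(SinkWriter writer, List<String> input, boolean failMidway)
            throws IOException {
        boolean finished = false;
        try {
            for (String cell : input) {
                if (failMidway) {
                    throw new IOException("simulated compaction failure");
                }
                writer.append(cell);
            }
            finished = true;
        } finally {
            if (!finished) {
                writer.abort(); // abortWriter path: clean up the partial file
            }
        }
        return writer.commit(); // commitWriter path: return the result paths
    }
}
```

On failure the exception still propagates after abort() runs, so commit() is never reached; this is the same shape as the protected hooks above.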


Copyright © 2007–2016 The Apache Software Foundation. All rights reserved.