org.apache.hadoop.hbase.regionserver.compactions
Class Compactor

java.lang.Object
  extended by org.apache.hadoop.hbase.regionserver.compactions.Compactor
Direct Known Subclasses:
DefaultCompactor

@InterfaceAudience.Private
public abstract class Compactor
extends Object

A compactor is a compaction algorithm associated with a given policy. The base class also contains reusable parts for implementing compactors (what is common and what isn't is still evolving).
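
Concretely, a subclass supplies compact(CompactionRequest) and stitches together the protected helpers listed below. The following is only a minimal sketch of that flow, not the shipped DefaultCompactor: the Compactor(Configuration, Store) constructor, CompactionRequest.getFiles()/isMajor(), the FileDetails.earliestPutTs field, the ScanType constants, and StoreFile.Writer acting as the Compactor.CellSink are all assumed from context rather than from this page, and createTmpWriter(...) is a hypothetical stand-in for however the subclass obtains its writer.

import java.io.IOException;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.ScanType;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.StoreFileScanner;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest;
import org.apache.hadoop.hbase.regionserver.compactions.Compactor;

public class SketchCompactor extends Compactor {

  public SketchCompactor(Configuration conf, Store store) {
    super(conf, store);  // assumed constructor, mirroring the protected conf/store fields
  }

  @Override
  public List<Path> compact(CompactionRequest request) throws IOException {
    Collection<StoreFile> files = request.getFiles();
    boolean major = request.isMajor();

    // Gather metadata about the inputs; put timestamps only matter when
    // deletes can actually be dropped (major compaction).
    Compactor.FileDetails fd = getFileDetails(files, major);
    List<StoreFileScanner> scanners = createFileScanners(files);
    long smallestReadPoint = setSmallestReadPoint();
    ScanType scanType =
        major ? ScanType.COMPACT_DROP_DELETES : ScanType.COMPACT_RETAIN_DELETES;

    // Let coprocessors replace the scanner before and after it is built.
    InternalScanner scanner =
        preCreateCoprocScanner(request, scanType, fd.earliestPutTs, scanners);
    if (scanner == null) {
      scanner = createScanner(store, scanners, scanType, smallestReadPoint,
          fd.earliestPutTs);
    }
    scanner = postCreateCoprocScanner(request, scanType, scanner);

    StoreFile.Writer writer = createTmpWriter(fd);  // hypothetical helper
    try {
      // Drain the scanner into the writer (StoreFile.Writer is assumed to
      // act as the Compactor.CellSink passed to performCompaction).
      performCompaction(scanner, writer, smallestReadPoint);
      writer.close();
      return Collections.singletonList(writer.getPath());
    } catch (IOException e) {
      abortWriter(writer);  // drop the half-written tmp file
      throw e;
    } finally {
      if (scanner != null) {
        scanner.close();
      }
    }
  }

  // Placeholder; a real subclass would ask the Store for a writer in its tmp
  // directory, sized from the file details and using compactionCompression.
  private StoreFile.Writer createTmpWriter(Compactor.FileDetails fd) throws IOException {
    throw new UnsupportedOperationException("sketch only");
  }
}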


Nested Class Summary
static interface Compactor.CellSink
          TODO: Replace this with CellOutputStream when StoreFile.Writer uses cells.
protected static class Compactor.FileDetails
          The sole reason this class exists is that Java has no ref/out/pointer parameters.
 
Field Summary
protected  Compression.Algorithm compactionCompression
           
protected  org.apache.hadoop.conf.Configuration conf
           
protected  CompactionProgress progress
           
protected  Store store
           
 
Method Summary
protected  void abortWriter(StoreFile.Writer writer)
           
abstract  List<org.apache.hadoop.fs.Path> compact(CompactionRequest request)
          Do a minor/major compaction on an explicit set of storefiles from a Store.
 List<org.apache.hadoop.fs.Path> compactForTesting(Collection<StoreFile> filesToCompact, boolean isMajor)
          Compact a list of files for testing.
protected  List<StoreFileScanner> createFileScanners(Collection<StoreFile> filesToCompact)
           
protected  InternalScanner createScanner(Store store, List<StoreFileScanner> scanners, ScanType scanType, long smallestReadPoint, long earliestPutTs)
           
protected  Compactor.FileDetails getFileDetails(Collection<StoreFile> filesToCompact, boolean calculatePutTs)
           
 CompactionProgress getProgress()
           
protected  boolean performCompaction(InternalScanner scanner, Compactor.CellSink writer, long smallestReadPoint)
           
protected  InternalScanner postCreateCoprocScanner(CompactionRequest request, ScanType scanType, InternalScanner scanner)
           
protected  InternalScanner preCreateCoprocScanner(CompactionRequest request, ScanType scanType, long earliestPutTs, List<StoreFileScanner> scanners)
           
protected  long setSmallestReadPoint()
           
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

progress

protected CompactionProgress progress

conf

protected org.apache.hadoop.conf.Configuration conf

store

protected Store store

compactionCompression

protected Compression.Algorithm compactionCompression

Method Detail

compact

public abstract List<org.apache.hadoop.fs.Path> compact(CompactionRequest request)
                                                 throws IOException
Do a minor/major compaction on an explicit set of storefiles from a Store.

Parameters:
request - the requested compaction
Returns:
The files produced by the compaction, or an empty list if all cells were expired or deleted and nothing made it through the compaction.
Throws:
IOException
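
For illustration only, a hypothetical caller would treat the result like this (compactor and request are assumed to come from the store's compaction machinery):

List<org.apache.hadoop.fs.Path> newFiles = compactor.compact(request);
if (newFiles.isEmpty()) {
  // Nothing survived: every cell was expired or deleted, so there is no
  // product file to move into place; the inputs can simply be archived.
}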

compactForTesting

public List<org.apache.hadoop.fs.Path> compactForTesting(Collection<StoreFile> filesToCompact,
                                                         boolean isMajor)
                                                  throws IOException
Compact a list of files for testing. Creates a fake CompactionRequest to pass to compact(CompactionRequest).

Parameters:
filesToCompact - the files to compact. These are used as the compactionSelection for the generated CompactionRequest.
isMajor - true to major compact (prune all deletes, enforce max versions, etc.)
Returns:
The files produced by the compaction, or an empty list if all cells were expired or deleted and nothing made it through the compaction.
Throws:
IOException
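
A sketch of test usage, assuming Store.getStorefiles() (accessor name assumed) hands back the store's current files and a JUnit-style assertion is available:

// Test-only sketch: major-compact whatever the store currently holds.
Collection<StoreFile> files = store.getStorefiles();  // assumed accessor
List<Path> products = compactor.compactForTesting(files, true /* isMajor */);
assertFalse("expected at least one surviving cell", products.isEmpty());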

getProgress

public CompactionProgress getProgress()
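
Compactions are long-running, so a monitoring thread can poll this. The sketch below assumes CompactionProgress exposes getProgressPct() and that the field is null until a compaction has started:

CompactionProgress progress = compactor.getProgress();
if (progress != null) {
  // getProgressPct() is assumed to return the fraction of KVs written so far.
  System.out.printf("compaction %.1f%% complete%n", progress.getProgressPct() * 100);
}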

getFileDetails

protected Compactor.FileDetails getFileDetails(Collection<StoreFile> filesToCompact,
                                               boolean calculatePutTs)
                                        throws IOException
Throws:
IOException

createFileScanners

protected List<StoreFileScanner> createFileScanners(Collection<StoreFile> filesToCompact)
                                             throws IOException
Throws:
IOException

setSmallestReadPoint

protected long setSmallestReadPoint()

preCreateCoprocScanner

protected InternalScanner preCreateCoprocScanner(CompactionRequest request,
                                                 ScanType scanType,
                                                 long earliestPutTs,
                                                 List<StoreFileScanner> scanners)
                                          throws IOException
Throws:
IOException

postCreateCoprocScanner

protected InternalScanner postCreateCoprocScanner(CompactionRequest request,
                                                  ScanType scanType,
                                                  InternalScanner scanner)
                                           throws IOException
Throws:
IOException

performCompaction

protected boolean performCompaction(InternalScanner scanner,
                                    Compactor.CellSink writer,
                                    long smallestReadPoint)
                             throws IOException
Throws:
IOException

abortWriter

protected void abortWriter(StoreFile.Writer writer)
                    throws IOException
Throws:
IOException

createScanner

protected InternalScanner createScanner(Store store,
                                        List<StoreFileScanner> scanners,
                                        ScanType scanType,
                                        long smallestReadPoint,
                                        long earliestPutTs)
                                 throws IOException
Parameters:
scanners - Store file scanners.
scanType - Scan type.
smallestReadPoint - Smallest MVCC read point.
earliestPutTs - Earliest put across all files.
Returns:
A compaction scanner.
Throws:
IOException
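
Subclasses may override this hook to decorate the scanner that feeds performCompaction. A minimal sketch, where FilteringScannerWrapper is a hypothetical InternalScanner decorator:

@Override
protected InternalScanner createScanner(Store store, List<StoreFileScanner> scanners,
    ScanType scanType, long smallestReadPoint, long earliestPutTs) throws IOException {
  InternalScanner delegate =
      super.createScanner(store, scanners, scanType, smallestReadPoint, earliestPutTs);
  // Wrap the default compaction scanner, e.g. to apply extra filtering.
  return new FilteringScannerWrapper(delegate);
}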

