org.apache.hadoop.hbase.regionserver
Class HStore

java.lang.Object
  extended by org.apache.hadoop.hbase.regionserver.HStore
All Implemented Interfaces:
HeapSize, Store, StoreConfigInformation

@InterfaceAudience.Private
public class HStore
extends Object
implements Store

A Store holds a column family in a Region. It's a memstore and a set of zero or more StoreFiles, which stretch backwards over time.

There's no reason to consider append-logging at this level; all logging and locking are handled at the HRegion level. Store just provides services to manage sets of StoreFiles. One of the most important of those services is compaction, where files are aggregated once they pass a configurable threshold.

The only thing having to do with logs that Store needs to deal with is the reconstructionLog. This is a segment of an HRegion's log that might NOT be present upon startup. If the param is NULL, there's nothing to do. If the param is non-NULL, we need to process the log to reconstruct a TreeMap that might not have been written to disk before the process died.

It's assumed that after this constructor returns, the reconstructionLog file will be deleted (by whoever has instantiated the Store).

Locking and transactions are handled at a higher level. This API should not be called directly but by an HRegion manager.
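
To make the division of labor concrete, here is a minimal sketch of how region-server-internal code might obtain and inspect a Store. This is illustrative only: HStore is @InterfaceAudience.Private, the "cf" family name is a placeholder, and the HRegion reference is assumed to come from surrounding region-server code.

import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical helper inside region-server code; not part of this API.
static void describeStore(HRegion region) {
  Store store = region.getStore(Bytes.toBytes("cf")); // "cf" is a placeholder family
  System.out.println("store=" + store.getColumnFamilyName()
      + " memstore=" + store.getMemStoreSize() + "B"
      + " storefiles=" + store.getStorefilesCount()
      + " needsCompaction=" + store.needsCompaction());
}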


Field Summary
static String BLOCKING_STOREFILES_KEY
           
static String COMPACTCHECKER_INTERVAL_MULTIPLIER_KEY
           
static long DEEP_OVERHEAD
           
static int DEFAULT_BLOCKING_STOREFILE_COUNT
           
static int DEFAULT_COMPACTCHECKER_INTERVAL_MULTIPLIER
           
static long FIXED_OVERHEAD
           
protected  MemStore memstore
           
 
Fields inherited from interface org.apache.hadoop.hbase.regionserver.Store
NO_PRIORITY, PRIORITY_USER
 
Constructor Summary
protected HStore(HRegion region, HColumnDescriptor family, org.apache.hadoop.conf.Configuration confParam)
          Constructor
 
Method Summary
 long add(KeyValue kv)
          Adds a value to the memstore
 void addChangedReaderObserver(ChangedReadersObserver o)
           
 boolean areWritesEnabled()
           
 void assertBulkLoadHFileOk(org.apache.hadoop.fs.Path srcPath)
          This throws a WrongRegionException if the HFile does not fit in this region, or an InvalidHFileException if the HFile is not valid.
 void bulkLoadHFile(String srcPathStr, long seqNum)
          This method should only be called from HRegion.
 void cancelRequestedCompaction(CompactionContext compaction)
           
 boolean canSplit()
           
 com.google.common.collect.ImmutableCollection<StoreFile> close()
          Close all the readers. We don't need to worry about subsequent requests because the HRegion holds a write lock that will prevent any more reads or writes.
 List<StoreFile> compact(CompactionContext compaction, CompactionThroughputController throughputController)
          Compact the StoreFiles.
 List<StoreFile> compact(CompactionContext compaction, CompactionThroughputController throughputController, User user)
           
 void compactRecentForTestingAssumingDefaultPolicy(int N)
          This method tries to compact N recent files for testing.
protected  void completeCompaction(Collection<StoreFile> compactedFiles)
           
 void completeCompactionMarker(WALProtos.CompactionDescriptor compaction)
          Call to complete a compaction.
 org.apache.hadoop.hbase.regionserver.StoreFlushContext createFlushContext(long cacheFlushId)
           
 StoreFile.Writer createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTag)
           
 StoreFile.Writer createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTag, boolean shouldDropBehind)
           
protected  long delete(KeyValue kv)
          Adds a delete marker to the memstore
 void deleteChangedReaderObserver(ChangedReadersObserver o)
           
protected  List<org.apache.hadoop.fs.Path> flushCache(long logCacheFlushId, SortedSet<KeyValue> snapshot, TimeRangeTracker snapshotTimeRangeTracker, AtomicLong flushedSize, MonitoredTask status)
          Write out current snapshot.
 long getAvgStoreFileAge()
           
 long getBlockingFileCount()
          The number of files required before flushes for this store will be blocked.
static int getBytesPerChecksum(org.apache.hadoop.conf.Configuration conf)
          Returns the configured bytesPerChecksum value.
 CacheConfig getCacheConfig()
          Used for tests.
static ChecksumType getChecksumType(org.apache.hadoop.conf.Configuration conf)
          Returns the configured checksum algorithm.
static int getCloseCheckInterval()
           
 String getColumnFamilyName()
           
 long getCompactedCellsCount()
           
 long getCompactedCellsSize()
           
 long getCompactionCheckMultiplier()
           
 double getCompactionPressure()
          This value can represent the degree of emergency of compaction for this store.
 CompactionProgress getCompactionProgress()
          getter for CompactionProgress object
 int getCompactPriority()
           
 KeyValue.KVComparator getComparator()
           
 RegionCoprocessorHost getCoprocessorHost()
           
 HFileDataBlockEncoder getDataBlockEncoder()
           
 HColumnDescriptor getFamily()
           
 org.apache.hadoop.fs.FileSystem getFileSystem()
           
 long getFlushableSize()
           
 long getFlushedCellsCount()
           
 long getFlushedCellsSize()
           
 HRegion getHRegion()
           
 long getLastCompactSize()
           
 long getMajorCompactedCellsCount()
           
 long getMajorCompactedCellsSize()
           
 long getMaxMemstoreTS()
           
 long getMaxStoreFileAge()
           
 long getMemstoreFlushSize()
           
 long getMemStoreSize()
           
 long getMinStoreFileAge()
           
 long getNumHFiles()
           
 long getNumReferenceFiles()
           
 HRegionFileSystem getRegionFileSystem()
           
 HRegionInfo getRegionInfo()
           
 KeyValue getRowKeyAtOrBefore(byte[] row)
          Find the key that matches row exactly, or the one that immediately precedes it.
 ScanInfo getScanInfo()
           
 KeyValueScanner getScanner(Scan scan, NavigableSet<byte[]> targetCols, long readPt)
          Return a scanner for both the memstore and the HStore files.
 List<KeyValueScanner> getScanners(boolean cacheBlocks, boolean isGet, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow, long readPt)
          Get all scanners with no filtering based on TTL (that happens further down the line).
 long getSize()
           
 long getSmallestReadPoint()
           
 byte[] getSplitPoint()
          Determines the split point for the Store, if it should be split.
 Collection<StoreFile> getStorefiles()
           
 int getStorefilesCount()
           
 long getStorefilesIndexSize()
           
 long getStorefilesSize()
           
 long getStoreFileTtl()
           
static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path tabledir, HRegionInfo hri, byte[] family)
          Deprecated. 
static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path tabledir, String encodedName, byte[] family)
          Deprecated. 
 long getStoreSizeUncompressed()
           
 TableName getTableName()
           
 long getTotalStaticBloomSize()
          Returns the total byte size of all Bloom filter bit arrays.
 long getTotalStaticIndexSize()
          Returns the total size of all index blocks in the data block indexes, including the root level, intermediate levels, and the leaf level for multi-level indexes, or just the root level for single-level indexes.
 boolean hasReferences()
           
 boolean hasTooManyStoreFiles()
           
 long heapSize()
           
 boolean isMajorCompaction()
           
 boolean needsCompaction()
          See if there are too many store files in this store
 CompactionContext requestCompaction()
           
 CompactionContext requestCompaction(int priority, CompactionRequest baseRequest)
           
 CompactionContext requestCompaction(int priority, CompactionRequest baseRequest, User user)
           
 void rollback(KeyValue kv)
          Removes a kv from the memstore.
 boolean throttleCompaction(long compactionSize)
           
 long timeOfOldestEdit()
          When was the last edit done in the memstore
 String toString()
           
 void triggerMajorCompaction()
           
 long updateColumnValue(byte[] row, byte[] f, byte[] qualifier, long newValue)
          Used in tests.
 long upsert(Iterable<Cell> cells, long readpoint)
          Adds or replaces the specified KeyValues.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
 

Field Detail

COMPACTCHECKER_INTERVAL_MULTIPLIER_KEY

public static final String COMPACTCHECKER_INTERVAL_MULTIPLIER_KEY
See Also:
Constant Field Values

BLOCKING_STOREFILES_KEY

public static final String BLOCKING_STOREFILES_KEY
See Also:
Constant Field Values

DEFAULT_COMPACTCHECKER_INTERVAL_MULTIPLIER

public static final int DEFAULT_COMPACTCHECKER_INTERVAL_MULTIPLIER
See Also:
Constant Field Values

DEFAULT_BLOCKING_STOREFILE_COUNT

public static final int DEFAULT_BLOCKING_STOREFILE_COUNT
See Also:
Constant Field Values

memstore

protected final MemStore memstore

FIXED_OVERHEAD

public static final long FIXED_OVERHEAD

DEEP_OVERHEAD

public static final long DEEP_OVERHEAD
Constructor Detail

HStore

protected HStore(HRegion region,
                 HColumnDescriptor family,
                 org.apache.hadoop.conf.Configuration confParam)
          throws IOException
Constructor

Parameters:
region -
family - HColumnDescriptor for this column
confParam - configuration object. Can be null.
Throws:
IOException
Method Detail

getColumnFamilyName

public String getColumnFamilyName()
Specified by:
getColumnFamilyName in interface Store

getTableName

public TableName getTableName()
Specified by:
getTableName in interface Store

getFileSystem

public org.apache.hadoop.fs.FileSystem getFileSystem()
Specified by:
getFileSystem in interface Store

getRegionFileSystem

public HRegionFileSystem getRegionFileSystem()

getStoreFileTtl

public long getStoreFileTtl()
Specified by:
getStoreFileTtl in interface StoreConfigInformation
Returns:
The cf-specific time-to-live for store files.

getMemstoreFlushSize

public long getMemstoreFlushSize()
Specified by:
getMemstoreFlushSize in interface StoreConfigInformation
Returns:
The memstore flush size for the region that this store works with.

getFlushableSize

public long getFlushableSize()
Specified by:
getFlushableSize in interface Store
Returns:
The amount of memory we could flush from this memstore; usually this is equal to Store.getMemStoreSize() unless we are carrying snapshots, in which case it will be the size of the outstanding snapshots.

getCompactionCheckMultiplier

public long getCompactionCheckMultiplier()
Specified by:
getCompactionCheckMultiplier in interface StoreConfigInformation
Returns:
The cf-specific compaction check frequency multiplier. The need for compaction (outside of normal checks during flush, open, etc.) will be ascertained every multiplier * HConstants.THREAD_WAKE_FREQUENCY milliseconds.
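
For example, assuming the shipped defaults of 1000 for the multiplier and 10 * 1000 ms for HConstants.THREAD_WAKE_FREQUENCY (verify both against your version), the periodic compaction check runs roughly every 1000 * 10 s, i.e. about every 2.8 hours.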

getBlockingFileCount

public long getBlockingFileCount()
Description copied from interface: StoreConfigInformation
The number of files required before flushes for this store will be blocked.

Specified by:
getBlockingFileCount in interface StoreConfigInformation

getBytesPerChecksum

public static int getBytesPerChecksum(org.apache.hadoop.conf.Configuration conf)
Returns the configured bytesPerChecksum value.

Parameters:
conf - The configuration
Returns:
The bytesPerChecksum that is set in the configuration

getChecksumType

public static ChecksumType getChecksumType(org.apache.hadoop.conf.Configuration conf)
Returns the configured checksum algorithm.

Parameters:
conf - The configuration
Returns:
The checksum algorithm that is set in the configuration
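
A short sketch of reading these two settings together; it assumes an hbase-site.xml on the classpath, and the defaults noted in the comments are only what this generation of HBase commonly shipped:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.util.ChecksumType;

Configuration conf = HBaseConfiguration.create();
int bytesPerChecksum = HStore.getBytesPerChecksum(conf); // commonly 16384 by default
ChecksumType type = HStore.getChecksumType(conf);        // commonly CRC32 in this era
System.out.println("checksum " + type + " every " + bytesPerChecksum + " bytes");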

getCloseCheckInterval

public static int getCloseCheckInterval()
Returns:
how many bytes to write between status checks

getFamily

public HColumnDescriptor getFamily()
Specified by:
getFamily in interface Store

getMaxMemstoreTS

public long getMaxMemstoreTS()
Specified by:
getMaxMemstoreTS in interface Store
Returns:
The maximum memstoreTS in all store files.

getStoreHomedir

@Deprecated
public static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path tabledir,
                                                                   HRegionInfo hri,
                                                                   byte[] family)
Deprecated. 

Parameters:
tabledir - Path to where the table is being stored
hri - HRegionInfo for the region.
family - HColumnDescriptor describing the column family
Returns:
Path to family/Store home directory.

getStoreHomedir

@Deprecated
public static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path tabledir,
                                                                   String encodedName,
                                                                   byte[] family)
Deprecated. 

Parameters:
tabledir - Path to where the table is being stored
encodedName - Encoded region name.
family - HColumnDescriptor describing the column family
Returns:
Path to family/Store home directory.

getDataBlockEncoder

public HFileDataBlockEncoder getDataBlockEncoder()
Specified by:
getDataBlockEncoder in interface Store
Returns:
the data block encoder

add

public long add(KeyValue kv)
Description copied from interface: Store
Adds a value to the memstore

Specified by:
add in interface Store
Returns:
memstore size delta

timeOfOldestEdit

public long timeOfOldestEdit()
Description copied from interface: Store
When was the last edit done in the memstore

Specified by:
timeOfOldestEdit in interface Store

delete

protected long delete(KeyValue kv)
Adds a delete marker to the memstore

Parameters:
kv -
Returns:
memstore size delta

rollback

public void rollback(KeyValue kv)
Description copied from interface: Store
Removes a kv from the memstore. The KeyValue is removed only if its key & memstoreTS match the key & memstoreTS value of the kv parameter.

Specified by:
rollback in interface Store
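
Taken together, add and rollback support the HRegion write path: the memstore is updated first, and rolled back if the matching WAL append fails. A hedged sketch, assuming store is a Store reference held by region-server code; the row/family/qualifier values and the appendToWal helper are invented for illustration:

import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

KeyValue kv = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
    Bytes.toBytes("q"), Bytes.toBytes("value"));
long delta = store.add(kv);   // memstore size delta, used for accounting
try {
  appendToWal(kv);            // hypothetical WAL append, not part of this API
} catch (IOException e) {
  store.rollback(kv);         // undo the memstore edit if logging failed
  throw e;
}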

getStorefiles

public Collection<StoreFile> getStorefiles()
Specified by:
getStorefiles in interface Store
Returns:
All store files.

assertBulkLoadHFileOk

public void assertBulkLoadHFileOk(org.apache.hadoop.fs.Path srcPath)
                           throws IOException
Description copied from interface: Store
This throws a WrongRegionException if the HFile does not fit in this region, or an InvalidHFileException if the HFile is not valid.

Specified by:
assertBulkLoadHFileOk in interface Store
Throws:
IOException

bulkLoadHFile

public void bulkLoadHFile(String srcPathStr,
                          long seqNum)
                   throws IOException
Description copied from interface: Store
This method should only be called from HRegion. It is assumed that the ranges of values in the HFile fit within the store's assigned region. (assertBulkLoadHFileOk checks this)

Specified by:
bulkLoadHFile in interface Store
Parameters:
seqNum - sequence Id associated with the HFile
Throws:
IOException
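
A sketch of the intended call order, validating the HFile before loading it; the staging path and sequence id below are placeholders, and the call is assumed to come from HRegion-level bulk-load code:

import org.apache.hadoop.fs.Path;

String src = "/staging/cf/hfile-0001";       // hypothetical staged HFile
store.assertBulkLoadHFileOk(new Path(src));  // WrongRegionException / InvalidHFileException on mismatch
store.bulkLoadHFile(src, 42L);               // 42L is a placeholder sequence id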

close

public com.google.common.collect.ImmutableCollection<StoreFile> close()
                                                               throws IOException
Description copied from interface: Store
Close all the readers. We don't need to worry about subsequent requests because the HRegion holds a write lock that will prevent any more reads or writes.

Specified by:
close in interface Store
Returns:
the StoreFiles that were previously being used.
Throws:
IOException - on failure

flushCache

protected List<org.apache.hadoop.fs.Path> flushCache(long logCacheFlushId,
                                                     SortedSet<KeyValue> snapshot,
                                                     TimeRangeTracker snapshotTimeRangeTracker,
                                                     AtomicLong flushedSize,
                                                     MonitoredTask status)
                                              throws IOException
Write out current snapshot. Presumes snapshot() has been called previously.

Parameters:
logCacheFlushId - flush sequence number
snapshot -
snapshotTimeRangeTracker -
flushedSize - The number of bytes flushed
status -
Returns:
The path name of the tmp file to which the store was flushed
Throws:
IOException

createWriterInTmp

public StoreFile.Writer createWriterInTmp(long maxKeyCount,
                                          Compression.Algorithm compression,
                                          boolean isCompaction,
                                          boolean includeMVCCReadpoint,
                                          boolean includesTag)
                                   throws IOException
Specified by:
createWriterInTmp in interface Store
Parameters:
compression - Compression algorithm to use
isCompaction - whether we are creating a new file in a compaction
includeMVCCReadpoint - whether we should write out the MVCC readpoint
Returns:
Writer for a new StoreFile in the tmp dir.
Throws:
IOException

createWriterInTmp

public StoreFile.Writer createWriterInTmp(long maxKeyCount,
                                          Compression.Algorithm compression,
                                          boolean isCompaction,
                                          boolean includeMVCCReadpoint,
                                          boolean includesTag,
                                          boolean shouldDropBehind)
                                   throws IOException
Specified by:
createWriterInTmp in interface Store
Parameters:
compression - Compression algorithm to use
isCompaction - whether we are creating a new file in a compaction
includeMVCCReadpoint - whether we should write out the MVCC readpoint
shouldDropBehind - should the writer drop caches behind writes
Returns:
Writer for a new StoreFile in the tmp dir.
Throws:
IOException

getScanners

public List<KeyValueScanner> getScanners(boolean cacheBlocks,
                                         boolean isGet,
                                         boolean usePread,
                                         boolean isCompaction,
                                         ScanQueryMatcher matcher,
                                         byte[] startRow,
                                         byte[] stopRow,
                                         long readPt)
                                  throws IOException
Get all scanners with no filtering based on TTL (that happens further down the line).

Specified by:
getScanners in interface Store
Returns:
all scanners for this store
Throws:
IOException

addChangedReaderObserver

public void addChangedReaderObserver(ChangedReadersObserver o)
Specified by:
addChangedReaderObserver in interface Store

deleteChangedReaderObserver

public void deleteChangedReaderObserver(ChangedReadersObserver o)
Specified by:
deleteChangedReaderObserver in interface Store

compact

public List<StoreFile> compact(CompactionContext compaction,
                               CompactionThroughputController throughputController)
                        throws IOException
Compact the StoreFiles. This method may take some time, so the calling thread must be able to block for long periods.

During this time, the Store can work as usual, getting values from StoreFiles and writing new StoreFiles from the memstore. Existing StoreFiles are not destroyed until the new compacted StoreFile is completely written-out to disk.

The compactLock prevents multiple simultaneous compactions. The structureLock prevents us from interfering with other write operations.

We don't want to hold the structureLock for the whole time, as a compact() can be lengthy and we want to allow cache-flushes during this period.

Compaction events should be idempotent, since there is no IO fencing for the region directory in HDFS: a region server might still try to complete a compaction after it has lost the region. That is why the following steps are carefully ordered for a compaction:
1. Compaction writes new files under the region/.tmp directory (the compaction output).
2. Compaction atomically moves the temporary files under the region directory.
3. Compaction appends a WAL edit containing the compaction input and output files, and forces a sync on the WAL.
4. Compaction deletes the input files from the region directory.
Failure conditions are handled as follows:
- If the RS fails before 2, the compaction won't complete. Even if the RS lives on and finishes the compaction later, it will only write the new data file to the region directory. Since we already have this data, this is idempotent, but we will have a redundant copy of the data.
- If the RS fails between 2 and 3, the region will have a redundant copy of the data. The RS that failed won't be able to finish sync() on the WAL because of lease recovery on the WAL.
- If the RS fails after 3, the region server that next opens the region will pick up the compaction marker from the WAL and replay it by removing the compaction input files. The failed RS can also attempt to delete those files, but the operation will be idempotent.
See HBASE-2231 for details.

Specified by:
compact in interface Store
Parameters:
compaction - compaction details obtained from requestCompaction()
Returns:
Storefile we compacted into or null if we failed or opted out early.
Throws:
IOException
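
A sketch of the request/compact/cancel lifecycle described above, assuming store is a Store held by region-server code and that NoLimitCompactionThroughputController from the compactions package is available in this version; shouldRunNow is a hypothetical scheduling decision:

import java.util.List;
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
import org.apache.hadoop.hbase.regionserver.compactions.NoLimitCompactionThroughputController;

CompactionContext ctx = store.requestCompaction(); // may be null: nothing to compact
if (ctx != null) {
  if (shouldRunNow()) {                            // hypothetical scheduling decision
    // Blocks for the duration of the compaction; reads and flushes proceed meanwhile.
    List<StoreFile> outputs =
        store.compact(ctx, new NoLimitCompactionThroughputController());
  } else {
    store.cancelRequestedCompaction(ctx);          // release the selected files unrun
  }
}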

compact

public List<StoreFile> compact(CompactionContext compaction,
                               CompactionThroughputController throughputController,
                               User user)
                        throws IOException
Specified by:
compact in interface Store
Throws:
IOException

completeCompactionMarker

public void completeCompactionMarker(WALProtos.CompactionDescriptor compaction)
                              throws IOException
Call to complete a compaction. It's for the case where we find in the WAL a compaction that was not finished. We could find one when recovering a WAL after a region server crash. See HBASE-2231.

Specified by:
completeCompactionMarker in interface Store
Parameters:
compaction -
Throws:
IOException

compactRecentForTestingAssumingDefaultPolicy

public void compactRecentForTestingAssumingDefaultPolicy(int N)
                                                  throws IOException
This method tries to compact N recent files for testing. Note that because compacting "recent" files only makes sense for some policies, e.g. the default one, it assumes the default policy is used. It does not consult the policy; instead it builds the compaction candidate list itself.

Parameters:
N - Number of files.
Throws:
IOException

hasReferences

public boolean hasReferences()
Specified by:
hasReferences in interface Store
Returns:
true if the store has any underlying reference files to older HFiles

getCompactionProgress

public CompactionProgress getCompactionProgress()
Description copied from interface: Store
getter for CompactionProgress object

Specified by:
getCompactionProgress in interface Store
Returns:
CompactionProgress object; can be null

isMajorCompaction

public boolean isMajorCompaction()
                          throws IOException
Specified by:
isMajorCompaction in interface Store
Returns:
true if we should run a major compaction.
Throws:
IOException

requestCompaction

public CompactionContext requestCompaction()
                                    throws IOException
Specified by:
requestCompaction in interface Store
Throws:
IOException

requestCompaction

public CompactionContext requestCompaction(int priority,
                                           CompactionRequest baseRequest)
                                    throws IOException
Specified by:
requestCompaction in interface Store
Throws:
IOException

requestCompaction

public CompactionContext requestCompaction(int priority,
                                           CompactionRequest baseRequest,
                                           User user)
                                    throws IOException
Specified by:
requestCompaction in interface Store
Throws:
IOException

cancelRequestedCompaction

public void cancelRequestedCompaction(CompactionContext compaction)
Specified by:
cancelRequestedCompaction in interface Store

completeCompaction

protected void completeCompaction(Collection<StoreFile> compactedFiles)
                           throws IOException
Throws:
IOException

getRowKeyAtOrBefore

public KeyValue getRowKeyAtOrBefore(byte[] row)
                             throws IOException
Description copied from interface: Store
Find the key that matches row exactly, or the one that immediately precedes it. WARNING: Only use this method on a table where writes occur with strictly increasing timestamps. This method assumes this pattern of writes in order to make it reasonably performant. Also, our search depends on the axiom that deletes are for cells in the container that follows, whether a memstore snapshot or a storefile, not for the current container: i.e. we'll see deletes before we come across the cells we are to delete. The presumption is that the memstore#kvset is processed before memstore#snapshot and so on.

Specified by:
getRowKeyAtOrBefore in interface Store
Parameters:
row - The row key of the targeted row.
Returns:
Found keyvalue or null if none found.
Throws:
IOException
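
A usage sketch; the target row key is a placeholder, and per the warning above this is only sound for tables written with strictly increasing timestamps:

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

KeyValue closest = store.getRowKeyAtOrBefore(Bytes.toBytes("row-0042"));
if (closest == null) {
  System.out.println("no row at or before the target");
} else {
  System.out.println("closest row: " + Bytes.toString(closest.getRow()));
}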

canSplit

public boolean canSplit()
Specified by:
canSplit in interface Store

getSplitPoint

public byte[] getSplitPoint()
Description copied from interface: Store
Determines the split point for the Store, if it should be split.

Specified by:
getSplitPoint in interface Store
Returns:
byte[] if store should be split, null otherwise.

getLastCompactSize

public long getLastCompactSize()
Specified by:
getLastCompactSize in interface Store
Returns:
aggregate size of all HStores used in the last compaction

getSize

public long getSize()
Specified by:
getSize in interface Store
Returns:
aggregate size of HStore

triggerMajorCompaction

public void triggerMajorCompaction()
Specified by:
triggerMajorCompaction in interface Store

getScanner

public KeyValueScanner getScanner(Scan scan,
                                  NavigableSet<byte[]> targetCols,
                                  long readPt)
                           throws IOException
Description copied from interface: Store
Return a scanner for both the memstore and the HStore files. Assumes we are not in a compaction.

Specified by:
getScanner in interface Store
Parameters:
scan - Scan to apply when scanning the stores
targetCols - columns to scan
Returns:
a scanner over the current key values
Throws:
IOException - on failure
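
A sketch of opening and draining a store-level scanner, assuming store is a Store held by region-server code. The qualifier and row bounds are placeholders, and using getSmallestReadPoint() as the read point is an illustrative simplification; real callers pass the region's MVCC read point.

import java.util.NavigableSet;
import java.util.TreeSet;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.regionserver.KeyValueScanner;
import org.apache.hadoop.hbase.util.Bytes;

NavigableSet<byte[]> cols = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
cols.add(Bytes.toBytes("q"));                    // placeholder qualifier
Scan scan = new Scan(Bytes.toBytes("row-a"), Bytes.toBytes("row-z"));
KeyValueScanner scanner =
    store.getScanner(scan, cols, store.getSmallestReadPoint());
try {
  for (Cell c; (c = scanner.next()) != null; ) { // memstore + HFiles, merged
    System.out.println(c);
  }
} finally {
  scanner.close();
}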

toString

public String toString()
Overrides:
toString in class Object

getStorefilesCount

public int getStorefilesCount()
Specified by:
getStorefilesCount in interface Store
Returns:
Count of store files

getMaxStoreFileAge

public long getMaxStoreFileAge()
Specified by:
getMaxStoreFileAge in interface Store
Returns:
Max age of store files in this store

getMinStoreFileAge

public long getMinStoreFileAge()
Specified by:
getMinStoreFileAge in interface Store
Returns:
Min age of store files in this store

getAvgStoreFileAge

public long getAvgStoreFileAge()
Specified by:
getAvgStoreFileAge in interface Store
Returns:
Average age of store files in this store, 0 if no store files

getNumReferenceFiles

public long getNumReferenceFiles()
Specified by:
getNumReferenceFiles in interface Store
Returns:
Number of reference files in this store

getNumHFiles

public long getNumHFiles()
Specified by:
getNumHFiles in interface Store
Returns:
Number of HFiles in this store

getStoreSizeUncompressed

public long getStoreSizeUncompressed()
Specified by:
getStoreSizeUncompressed in interface Store
Returns:
The size of the store files, in bytes, uncompressed.

getStorefilesSize

public long getStorefilesSize()
Specified by:
getStorefilesSize in interface Store
Returns:
The size of the store files, in bytes.

getStorefilesIndexSize

public long getStorefilesIndexSize()
Specified by:
getStorefilesIndexSize in interface Store
Returns:
The size of the store file indexes, in bytes.

getTotalStaticIndexSize

public long getTotalStaticIndexSize()
Description copied from interface: Store
Returns the total size of all index blocks in the data block indexes, including the root level, intermediate levels, and the leaf level for multi-level indexes, or just the root level for single-level indexes.

Specified by:
getTotalStaticIndexSize in interface Store
Returns:
the total size of block indexes in the store

getTotalStaticBloomSize

public long getTotalStaticBloomSize()
Description copied from interface: Store
Returns the total byte size of all Bloom filter bit arrays. For compound Bloom filters even the Bloom blocks currently not loaded into the block cache are counted.

Specified by:
getTotalStaticBloomSize in interface Store
Returns:
the total size of all Bloom filters in the store

getMemStoreSize

public long getMemStoreSize()
Specified by:
getMemStoreSize in interface Store
Returns:
The size of this store's memstore, in bytes

getCompactPriority

public int getCompactPriority()
Specified by:
getCompactPriority in interface Store

throttleCompaction

public boolean throttleCompaction(long compactionSize)
Specified by:
throttleCompaction in interface Store

getHRegion

public HRegion getHRegion()

getCoprocessorHost

public RegionCoprocessorHost getCoprocessorHost()
Specified by:
getCoprocessorHost in interface Store

getRegionInfo

public HRegionInfo getRegionInfo()
Specified by:
getRegionInfo in interface Store
Returns:
the parent region info hosting this store

areWritesEnabled

public boolean areWritesEnabled()
Specified by:
areWritesEnabled in interface Store

getSmallestReadPoint

public long getSmallestReadPoint()
Specified by:
getSmallestReadPoint in interface Store
Returns:
The smallest mvcc readPoint across all the scanners in this region. Writes older than this readPoint are included in every read operation.

updateColumnValue

public long updateColumnValue(byte[] row,
                              byte[] f,
                              byte[] qualifier,
                              long newValue)
                       throws IOException
Used in tests. TODO: Remove. Updates the value for the given row/family/qualifier. This function will always be seen as atomic by other readers because it only puts a single KV to the memstore. Thus no read/write control is necessary.

Parameters:
row - row to update
f - family to update
qualifier - qualifier to update
newValue - the new value to set into memstore
Returns:
memstore size delta
Throws:
IOException

upsert

public long upsert(Iterable<Cell> cells,
                   long readpoint)
            throws IOException
Description copied from interface: Store
Adds or replaces the specified KeyValues.

For each KeyValue specified, if a cell with the same row, family, and qualifier exists in MemStore, it will be replaced. Otherwise, it will just be inserted to MemStore.

This operation is atomic on each KeyValue (row/family/qualifier) but not necessarily atomic across all of them.

Specified by:
upsert in interface Store
Parameters:
readpoint - readpoint below which we can safely remove duplicate KVs
Returns:
memstore size delta
Throws:
IOException
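
A sketch of upserting a single cell, with invented names; passing getSmallestReadPoint() follows the parameter's description of the readpoint below which duplicate KVs may be removed:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

List<Cell> cells = new ArrayList<Cell>();
cells.add(new KeyValue(Bytes.toBytes("counter-row"), Bytes.toBytes("cf"),
    Bytes.toBytes("q"), Bytes.toBytes(1L)));  // replaces any existing matching cell
long delta = store.upsert(cells, store.getSmallestReadPoint());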

createFlushContext

public org.apache.hadoop.hbase.regionserver.StoreFlushContext createFlushContext(long cacheFlushId)
Specified by:
createFlushContext in interface Store

needsCompaction

public boolean needsCompaction()
Description copied from interface: Store
See if there are too many store files in this store

Specified by:
needsCompaction in interface Store
Returns:
true if number of store files is greater than the number defined in minFilesToCompact

getCacheConfig

public CacheConfig getCacheConfig()
Description copied from interface: Store
Used for tests.

Specified by:
getCacheConfig in interface Store
Returns:
cache configuration for this Store.

heapSize

public long heapSize()
Specified by:
heapSize in interface HeapSize
Returns:
Approximate 'exclusive deep size' of implementing object. Includes count of payload and hosting object sizings.

getComparator

public KeyValue.KVComparator getComparator()
Specified by:
getComparator in interface Store

getScanInfo

public ScanInfo getScanInfo()
Specified by:
getScanInfo in interface Store

hasTooManyStoreFiles

public boolean hasTooManyStoreFiles()
Specified by:
hasTooManyStoreFiles in interface Store
Returns:
Whether this store has too many store files.

getFlushedCellsCount

public long getFlushedCellsCount()
Specified by:
getFlushedCellsCount in interface Store
Returns:
The number of cells flushed to disk

getFlushedCellsSize

public long getFlushedCellsSize()
Specified by:
getFlushedCellsSize in interface Store
Returns:
The total size of data flushed to disk, in bytes

getCompactedCellsCount

public long getCompactedCellsCount()
Specified by:
getCompactedCellsCount in interface Store
Returns:
The number of cells processed during minor compactions

getCompactedCellsSize

public long getCompactedCellsSize()
Specified by:
getCompactedCellsSize in interface Store
Returns:
The total amount of data processed during minor compactions, in bytes

getMajorCompactedCellsCount

public long getMajorCompactedCellsCount()
Specified by:
getMajorCompactedCellsCount in interface Store
Returns:
The number of cells processed during major compactions

getMajorCompactedCellsSize

public long getMajorCompactedCellsSize()
Specified by:
getMajorCompactedCellsSize in interface Store
Returns:
The total amount of data processed during major compactions, in bytes

getCompactionPressure

public double getCompactionPressure()
Description copied from interface: Store
This value can represent the degree of emergency of compaction for this store. It should be greater than or equal to 0.0; any value greater than 1.0 means we have too many store files.

For striped stores, this value should be calculated from the files in each stripe separately, and the maximum returned.

It is similar to Store.getCompactPriority() except that it is more suitable to use in a linear formula.

Specified by:
getCompactionPressure in interface Store
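
Because the value is meant for linear formulas, a throughput limiter can interpolate between bounds with it. A hedged sketch of that idea, assuming store is a Store held by region-server code; the bound values are arbitrary and this is not the exact logic of any shipped controller:

long lowerBound = 10L * 1024 * 1024;   // 10 MB/s floor, arbitrary
long upperBound = 100L * 1024 * 1024;  // 100 MB/s ceiling, arbitrary
double pressure = store.getCompactionPressure();
// pressure >= 1.0 means too many store files: compact as fast as allowed.
long maxThroughput = pressure >= 1.0
    ? upperBound
    : (long) (lowerBound + (upperBound - lowerBound) * pressure);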


Copyright © 2007–2016 The Apache Software Foundation. All rights reserved.