@InterfaceAudience.Private
@InterfaceStability.Evolving
public interface Store

Interface for objects that hold a column family in a Region. It is a memstore and a set of zero or more StoreFiles, which stretch backwards over time.
Field Summary

| Type | Field |
|---|---|
| static int | NO_PRIORITY |
| static int | PRIORITY_USER |
Method Summary

| Return type | Method and description |
|---|---|
| long | add(KeyValue kv) — Adds a value to the memstore. |
| void | addChangedReaderObserver(ChangedReadersObserver o) |
| boolean | areWritesEnabled() |
| void | assertBulkLoadHFileOk(org.apache.hadoop.fs.Path srcPath) — Throws a WrongRegionException if the HFile does not fit in this region, or an InvalidHFileException if the HFile is not valid. |
| void | bulkLoadHFile(String srcPathStr, long sequenceId) — This method should only be called from HRegion. |
| void | cancelRequestedCompaction(CompactionContext compaction) |
| boolean | canSplit() |
| Collection&lt;StoreFile&gt; | close() — Closes all the readers. We don't need to worry about subsequent requests because the HRegion holds a write lock that will prevent any more reads or writes. |
| List&lt;StoreFile&gt; | compact(CompactionContext compaction) |
| StoreFile.Writer | createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint) |
| void | deleteChangedReaderObserver(ChangedReadersObserver o) |
| CacheConfig | getCacheConfig() — Used for tests. |
| String | getColumnFamilyName() |
| CompactionProgress | getCompactionProgress() — Getter for the CompactionProgress object. |
| int | getCompactPriority() |
| KeyValue.KVComparator | getComparator() |
| RegionCoprocessorHost | getCoprocessorHost() |
| HFileDataBlockEncoder | getDataBlockEncoder() |
| HColumnDescriptor | getFamily() |
| org.apache.hadoop.fs.FileSystem | getFileSystem() |
| long | getLastCompactSize() |
| long | getMaxMemstoreTS() |
| long | getMemStoreSize() |
| HRegionInfo | getRegionInfo() |
| KeyValue | getRowKeyAtOrBefore(byte[] row) — Finds the key that matches row exactly, or the one that immediately precedes it. |
| ScanInfo | getScanInfo() |
| KeyValueScanner | getScanner(Scan scan, NavigableSet&lt;byte[]&gt; targetCols) — Returns a scanner for both the memstore and the HStore files. |
| List&lt;KeyValueScanner&gt; | getScanners(boolean cacheBlocks, boolean isGet, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow) — Gets all scanners with no filtering based on TTL (that happens further down the line). |
| long | getSize() |
| long | getSmallestReadPoint() |
| byte[] | getSplitPoint() — Determines if the Store should be split, returning the split point. |
| Collection&lt;StoreFile&gt; | getStorefiles() |
| int | getStorefilesCount() |
| long | getStorefilesIndexSize() |
| long | getStorefilesSize() |
| org.apache.hadoop.hbase.regionserver.StoreFlusher | getStoreFlusher(long cacheFlushId) |
| long | getStoreSizeUncompressed() |
| String | getTableName() |
| long | getTotalStaticBloomSize() — Returns the total byte size of all Bloom filter bit arrays. |
| long | getTotalStaticIndexSize() — Returns the total size of all index blocks in the data block indexes, including the root level, intermediate levels, and the leaf level for multi-level indexes, or just the root level for single-level indexes. |
| boolean | hasReferences() |
| boolean | hasTooManyStoreFiles() |
| boolean | isMajorCompaction() |
| boolean | needsCompaction() — Checks whether there are too many store files in this store. |
| CompactionContext | requestCompaction() |
| CompactionContext | requestCompaction(int priority, CompactionRequest baseRequest) |
| void | rollback(KeyValue kv) — Removes a kv from the memstore. |
| boolean | throttleCompaction(long compactionSize) |
| void | triggerMajorCompaction() |
| long | upsert(Iterable&lt;? extends Cell&gt; cells, long readpoint) — Adds or replaces the specified KeyValues. |
Methods inherited from interface org.apache.hadoop.hbase.io.HeapSize

heapSize

Methods inherited from interface org.apache.hadoop.hbase.regionserver.StoreConfigInformation

getMemstoreFlushSize, getStoreFileTtl
Field Detail

static final int PRIORITY_USER

static final int NO_PRIORITY

Method Detail
KeyValue.KVComparator getComparator()
Collection<StoreFile> getStorefiles()
Collection<StoreFile> close() throws IOException

Closes all the readers. We don't need to worry about subsequent requests because the HRegion holds a write lock that will prevent any more reads or writes.

Returns: the StoreFiles that were previously being used.
Throws: IOException - on failure

KeyValueScanner getScanner(Scan scan, NavigableSet<byte[]> targetCols) throws IOException

Returns a scanner for both the memstore and the HStore files.

Parameters:
scan - Scan to apply when scanning the stores
targetCols - columns to scan
Throws: IOException - on failure

List<KeyValueScanner> getScanners(boolean cacheBlocks, boolean isGet, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow) throws IOException

Gets all scanners with no filtering based on TTL (that happens further down the line).

Parameters:
cacheBlocks -
isGet -
isCompaction -
matcher -
startRow -
stopRow -
Throws: IOException
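A Store scanner presents the memstore and every on-disk store file as one key-ordered stream. As a rough illustration of that merge (plain Java, not HBase code; the class and method names below are invented for the sketch), a key-ordered merge over several individually sorted sources can be done with a priority queue:

```java
import java.util.*;

// Toy illustration of merging several sorted "scanners" (the memstore and
// each store file) into a single key-ordered stream, as a Store scanner does.
public class MergeScanners {
    // Merge any number of individually sorted key lists into one sorted list.
    public static List<String> merge(List<List<String>> sources) {
        // Heap entries: {sourceIndex, positionInSource}, ordered by current key.
        PriorityQueue<int[]> heap = new PriorityQueue<>(
            Comparator.comparing(e -> sources.get(e[0]).get(e[1])));
        for (int i = 0; i < sources.size(); i++) {
            if (!sources.get(i).isEmpty()) heap.add(new int[]{i, 0});
        }
        List<String> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            out.add(sources.get(top[0]).get(top[1]));
            if (top[1] + 1 < sources.get(top[0]).size()) {
                heap.add(new int[]{top[0], top[1] + 1}); // advance that source
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> memstore = Arrays.asList("row2", "row5");
        List<String> storefile = Arrays.asList("row1", "row3", "row4");
        System.out.println(merge(Arrays.asList(memstore, storefile)));
        // prints [row1, row2, row3, row4, row5]
    }
}
```

The real scanner additionally applies the query matcher and read point; this sketch shows only the ordering guarantee.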
ScanInfo getScanInfo()
long upsert(Iterable<? extends Cell> cells, long readpoint) throws IOException

Adds or replaces the specified KeyValues. For each KeyValue specified, if a cell with the same row, family, and qualifier exists in MemStore, it will be replaced. Otherwise, it will just be inserted into MemStore.

This operation is atomic on each KeyValue (row/family/qualifier) but not necessarily atomic across all of them.

Parameters:
cells -
readpoint - readpoint below which we can safely remove duplicate KVs
Throws: IOException
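The replace-or-insert rule above can be modeled with an ordinary map keyed by the cell coordinates. This is a toy sketch, not HBase code; the string-keyed map and the returned cell-count delta are inventions for illustration:

```java
import java.util.*;

// Toy model of upsert(): for each incoming cell, replace any existing cell
// with the same (row, family, qualifier) coordinates, else insert it.
// Keys here are "row/family/qualifier" strings; returns the net change in
// cell count (the real method returns a size delta in bytes).
public class UpsertSketch {
    public static int upsert(Map<String, String> memstore, Map<String, String> cells) {
        int before = memstore.size();
        for (Map.Entry<String, String> e : cells.entrySet()) {
            // put() replaces on key collision -- the "replace, else insert" rule.
            memstore.put(e.getKey(), e.getValue());
        }
        return memstore.size() - before;
    }

    public static void main(String[] args) {
        Map<String, String> memstore = new TreeMap<>();
        memstore.put("row1/cf/q1", "old");
        Map<String, String> batch = new TreeMap<>();
        batch.put("row1/cf/q1", "new");   // same coordinates: replaced
        batch.put("row2/cf/q1", "fresh"); // new coordinates: inserted
        int added = upsert(memstore, batch);
        System.out.println(added + " " + memstore.get("row1/cf/q1"));
        // prints "1 new"
    }
}
```

Note that, as the contract states, each put is atomic per coordinate but the loop as a whole is not; a reader could observe the first replacement before the second insert.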
long add(KeyValue kv)

Adds a value to the memstore.

Parameters:
kv -

void rollback(KeyValue kv)

Removes a kv from the memstore.

Parameters:
kv -

KeyValue getRowKeyAtOrBefore(byte[] row) throws IOException

Finds the key that matches row exactly, or the one that immediately precedes it.

Parameters:
row - The row key of the targeted row.
Throws: IOException
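The at-or-before lookup is a floor search over sorted row keys: return the exact match if present, otherwise the nearest predecessor. In plain Java (a sketch, not HBase code), java.util.TreeMap offers the same semantics directly:

```java
import java.util.TreeMap;

// Toy model of getRowKeyAtOrBefore(): return the greatest stored row key
// that is <= the target, or null when every stored key is greater.
public class FloorLookup {
    public static String rowKeyAtOrBefore(TreeMap<String, String> rows, String target) {
        return rows.floorKey(target); // exact match wins, else nearest predecessor
    }

    public static void main(String[] args) {
        TreeMap<String, String> rows = new TreeMap<>();
        rows.put("row10", "a");
        rows.put("row20", "b");
        rows.put("row30", "c");
        System.out.println(rowKeyAtOrBefore(rows, "row20")); // prints row20
        System.out.println(rowKeyAtOrBefore(rows, "row25")); // prints row20
        System.out.println(rowKeyAtOrBefore(rows, "row05")); // prints null
    }
}
```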
org.apache.hadoop.fs.FileSystem getFileSystem()
StoreFile.Writer createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint) throws IOException

Throws: IOException
boolean throttleCompaction(long compactionSize)
CompactionProgress getCompactionProgress()
CompactionContext requestCompaction() throws IOException

Throws: IOException

CompactionContext requestCompaction(int priority, CompactionRequest baseRequest) throws IOException

Throws: IOException

void cancelRequestedCompaction(CompactionContext compaction)

List<StoreFile> compact(CompactionContext compaction) throws IOException

Throws: IOException

boolean isMajorCompaction() throws IOException

Throws: IOException
void triggerMajorCompaction()
boolean needsCompaction()
int getCompactPriority()
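needsCompaction and getCompactPriority both derive from the store file count: a store needs compaction once its file count passes a configured minimum, and its priority grows more urgent as the count approaches the blocking limit. The sketch below models that relationship in plain Java; the constants and class name are invented for illustration, not HBase defaults:

```java
// Toy sketch of the compaction-need checks: a store "needs compaction" when
// its file count exceeds a configured minimum, and its compact priority
// shrinks as the count approaches the blocking limit. The constants are
// assumptions for this sketch, not HBase configuration defaults.
public class CompactionCheck {
    static final int MIN_FILES_TO_COMPACT = 3;   // assumed
    static final int BLOCKING_FILE_COUNT = 10;   // assumed

    public static boolean needsCompaction(int storefileCount) {
        return storefileCount > MIN_FILES_TO_COMPACT;
    }

    public static int getCompactPriority(int storefileCount) {
        // Fewer "slots" left before the blocking limit -> lower, more urgent value.
        return BLOCKING_FILE_COUNT - storefileCount;
    }

    public static void main(String[] args) {
        System.out.println(needsCompaction(2) + " " + getCompactPriority(2)); // prints "false 8"
        System.out.println(needsCompaction(9) + " " + getCompactPriority(9)); // prints "true 1"
    }
}
```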
org.apache.hadoop.hbase.regionserver.StoreFlusher getStoreFlusher(long cacheFlushId)
boolean canSplit()
byte[] getSplitPoint()
void assertBulkLoadHFileOk(org.apache.hadoop.fs.Path srcPath) throws IOException

This throws a WrongRegionException if the HFile does not fit in this region, or an InvalidHFileException if the HFile is not valid.

Throws: IOException

void bulkLoadHFile(String srcPathStr, long sequenceId) throws IOException

This method should only be called from HRegion.

Parameters:
srcPathStr -
sequenceId - sequence id associated with the HFile
Throws: IOException
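The "does this HFile fit in the region?" condition behind assertBulkLoadHFileOk is a key-range containment test: the file's first and last row keys must both fall inside the region's half-open [startKey, endKey) range. A toy version in plain Java (names and string keys are invented for the sketch; HBase compares byte[] keys):

```java
// Toy model of the containment check performed before a bulk load: the
// HFile's first and last row keys must both lie inside the region's
// [startKey, endKey) range. Names here are invented for illustration.
public class BulkLoadCheck {
    public static boolean fitsInRegion(String regionStart, String regionEnd,
                                       String fileFirstKey, String fileLastKey) {
        return fileFirstKey.compareTo(regionStart) >= 0
            && fileLastKey.compareTo(regionEnd) < 0;
    }

    public static void main(String[] args) {
        System.out.println(fitsInRegion("row10", "row50", "row20", "row40")); // prints true
        System.out.println(fitsInRegion("row10", "row50", "row20", "row60")); // prints false
    }
}
```

When this test fails, the real method raises WrongRegionException rather than returning false.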
boolean hasReferences()
long getMemStoreSize()
HColumnDescriptor getFamily()
long getMaxMemstoreTS()
HFileDataBlockEncoder getDataBlockEncoder()
long getLastCompactSize()
long getSize()
int getStorefilesCount()
long getStoreSizeUncompressed()
long getStorefilesSize()
long getStorefilesIndexSize()
long getTotalStaticIndexSize()
long getTotalStaticBloomSize()
CacheConfig getCacheConfig()
HRegionInfo getRegionInfo()
RegionCoprocessorHost getCoprocessorHost()
boolean areWritesEnabled()
long getSmallestReadPoint()
String getColumnFamilyName()
String getTableName()
void addChangedReaderObserver(ChangedReadersObserver o)
void deleteChangedReaderObserver(ChangedReadersObserver o)
boolean hasTooManyStoreFiles()