java.lang.Object
  org.apache.hadoop.hbase.regionserver.metrics.SchemaConfigured
    org.apache.hadoop.hbase.regionserver.Store

public class Store
extends SchemaConfigured
implements HeapSize
A Store holds a column family in a Region. It has a memstore and a set of zero or more StoreFiles, which stretch backwards over time.
There's no reason to consider append-logging at this level; all logging and locking is handled at the HRegion level. Store just provides services to manage sets of StoreFiles. One of the most important of those services is compaction, where files are aggregated once they pass a configurable threshold.
The only thing having to do with logs that Store needs to deal with is the reconstructionLog. This is a segment of an HRegion's log that might NOT be present upon startup. If the param is NULL, there's nothing to do. If the param is non-NULL, we need to process the log to reconstruct a TreeMap that might not have been written to disk before the process died.
It's assumed that after this constructor returns, the reconstructionLog file will be deleted (by whoever has instantiated the Store).
Locking and transactions are handled at a higher level. This API should not be called directly but by an HRegion manager.
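The description above can be sketched as a toy model: a sorted in-memory buffer (the memstore) plus a list of immutable flushed snapshots (the store files), where flushing rolls the memstore into a new "file" and reads consult newest data first. This is purely illustrative, assuming simplified String keys and values; it is not the HBase API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Toy model of a Store: a sorted memstore plus immutable "store files".
// Names and structure are simplified stand-ins, not the real classes.
public class ToyStore {
    private TreeMap<String, String> memstore = new TreeMap<>();
    private final List<TreeMap<String, String>> storeFiles = new ArrayList<>();

    public void add(String key, String value) {
        memstore.put(key, value);
    }

    // A flush snapshots the memstore into an immutable "file" and starts
    // a fresh, empty memstore -- the shape of what flushCache does.
    public void flush() {
        if (memstore.isEmpty()) return;
        storeFiles.add(memstore);
        memstore = new TreeMap<>();
    }

    // Reads consult the memstore first, then store files newest to oldest,
    // which is the order a store scanner effectively merges data in.
    public String get(String key) {
        String v = memstore.get(key);
        if (v != null) return v;
        for (int i = storeFiles.size() - 1; i >= 0; i--) {
            v = storeFiles.get(i).get(key);
            if (v != null) return v;
        }
        return null;
    }

    public int storeFileCount() { return storeFiles.size(); }

    public static void main(String[] args) {
        ToyStore s = new ToyStore();
        s.add("row1/cf:a", "1");
        s.flush();
        s.add("row1/cf:a", "2"); // newer memstore value shadows the flushed one
        System.out.println(s.get("row1/cf:a")); // prints 2
        System.out.println(s.storeFileCount()); // prints 1
    }
}
```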
Nested Class Summary

static class Store.ScanInfo
    Immutable information for scans over a store.
Field Summary

static String BLOCKING_STOREFILES_KEY
static long DEEP_OVERHEAD
static int DEFAULT_BLOCKING_STOREFILE_COUNT
static long FIXED_OVERHEAD
protected MemStore memstore
static int NO_PRIORITY
static int PRIORITY_USER

Fields inherited from class org.apache.hadoop.hbase.regionserver.metrics.SchemaConfigured
SCHEMA_CONFIGURED_UNALIGNED_HEAP_SIZE
Constructor Summary

protected Store(org.apache.hadoop.fs.Path basedir, HRegion region, HColumnDescriptor family, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration confParam)
    Constructor
Method Summary

protected long add(KeyValue kv)
    Adds a value to the memstore.
boolean canSplit()
void compactRecentForTesting(int N)
    Compact the most recent N files.
StoreFile.Writer createWriterInTmp(int maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint)
protected long delete(KeyValue kv)
    Adds a delete marker to the memstore.
void finishRequest(CompactionRequest cr)
protected org.apache.hadoop.fs.Path flushCache(long logCacheFlushId, SortedSet<KeyValue> snapshot, TimeRangeTracker snapshotTimeRangeTracker, AtomicLong flushedSize, MonitoredTask status)
    Write out current snapshot.
static int getBytesPerChecksum(org.apache.hadoop.conf.Configuration conf)
    Returns the configured bytesPerChecksum value.
CacheConfig getCacheConfig()
    Used for tests.
static ChecksumType getChecksumType(org.apache.hadoop.conf.Configuration conf)
    Returns the configured checksum algorithm.
CompactionProgress getCompactionProgress()
    Getter for the CompactionProgress object.
int getCompactPriority()
int getCompactPriority(int priority)
KeyValue.KVComparator getComparator()
HFileDataBlockEncoder getDataBlockEncoder()
HColumnDescriptor getFamily()
HRegion getHRegion()
long getLastCompactSize()
static long getLowestTimestamp(List<StoreFile> candidates)
long getMaxMemstoreTS()
int getNumberOfStoreFiles()
Store.ScanInfo getScanInfo()
KeyValueScanner getScanner(Scan scan, NavigableSet<byte[]> targetCols)
    Return a scanner for both the memstore and the HStore files.
protected List<KeyValueScanner> getScanners(boolean cacheBlocks, boolean isGet, boolean isCompaction, ScanQueryMatcher matcher)
    Get all scanners with no filtering based on TTL (that happens further down the line).
long getSize()
byte[] getSplitPoint()
    Determines if Store should be split.
List<StoreFile> getStorefiles()
org.apache.hadoop.hbase.regionserver.StoreFlusher getStoreFlusher(long cacheFlushId)
static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path parentRegionDirectory, byte[] family)
static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path tabledir, String encodedName, byte[] family)
static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path tabledir, String encodedName, String family)
boolean hasTooManyStoreFiles()
long heapSize()
boolean needsCompaction()
    See if there are too many store files in this store.
CompactionRequest requestCompaction()
CompactionRequest requestCompaction(int priority, CompactionRequest request)
protected void rollback(KeyValue kv)
    Removes a kv from the memstore.
com.google.common.collect.ImmutableList<StoreFile> sortAndClone(List<StoreFile> storeFiles)
String toString()
void triggerMajorCompaction()
long updateColumnValue(byte[] row, byte[] f, byte[] qualifier, long newValue)
    Increments the value for the given row/family/qualifier.
long upsert(List<KeyValue> kvs)
    Adds or replaces the specified KeyValues.

Methods inherited from class org.apache.hadoop.hbase.regionserver.metrics.SchemaConfigured
createUnknown, getColumnFamilyName, getSchemaMetrics, getTableName, isSchemaConfigured, passSchemaMetricsTo, resetSchemaMetricsConf, schemaConfAsJSON, schemaConfigurationChanged

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Field Detail
public static final String BLOCKING_STOREFILES_KEY
public static final int DEFAULT_BLOCKING_STOREFILE_COUNT
protected final MemStore memstore
public static final int PRIORITY_USER
public static final int NO_PRIORITY
public static final long FIXED_OVERHEAD
public static final long DEEP_OVERHEAD
Constructor Detail
protected Store(org.apache.hadoop.fs.Path basedir, HRegion region, HColumnDescriptor family, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration confParam) throws IOException
Parameters:
basedir - qualified path under which the region directory lives; generally the table subdirectory
region -
family - HColumnDescriptor for this column
fs - file system object
confParam - configuration object
Throws:
IOException
Method Detail
public static int getBytesPerChecksum(org.apache.hadoop.conf.Configuration conf)
Parameters:
conf - The configuration
public static ChecksumType getChecksumType(org.apache.hadoop.conf.Configuration conf)
Parameters:
conf - The configuration
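The pattern behind getBytesPerChecksum and getChecksumType is a configuration lookup with a fallback default. The sketch below models it with java.util.Properties instead of Hadoop's Configuration; the key name and default value are hypothetical stand-ins, not the actual HBase constants.

```java
import java.util.Properties;

// Illustrative config-getter pattern: read a value from configuration,
// falling back to a default when the key is unset. Key and default are
// hypothetical, not the real HBase constants.
public class ChecksumConfig {
    static final String BYTES_PER_CHECKSUM_KEY = "hbase.store.bytes.per.checksum"; // hypothetical key
    static final int DEFAULT_BYTES_PER_CHECKSUM = 512; // hypothetical default

    public static int getBytesPerChecksum(Properties conf) {
        String v = conf.getProperty(BYTES_PER_CHECKSUM_KEY);
        return v == null ? DEFAULT_BYTES_PER_CHECKSUM : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(getBytesPerChecksum(conf)); // prints 512 (default when unset)
        conf.setProperty(BYTES_PER_CHECKSUM_KEY, "1024");
        System.out.println(getBytesPerChecksum(conf)); // prints 1024 (configured value)
    }
}
```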
public HColumnDescriptor getFamily()
public long getMaxMemstoreTS()
public static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path tabledir, String encodedName, byte[] family)
Parameters:
tabledir -
encodedName - Encoded region name.
family -
public static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path tabledir, String encodedName, String family)
Parameters:
tabledir -
encodedName - Encoded region name.
family -
public static org.apache.hadoop.fs.Path getStoreHomedir(org.apache.hadoop.fs.Path parentRegionDirectory, byte[] family)
Parameters:
parentRegionDirectory - directory for the parent region
family - family name of this store
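The getStoreHomedir overloads resolve a store's home directory as the column-family subdirectory under the (encoded) region directory, which itself lives under the table directory. A sketch of that layout, built with java.nio.file purely for illustration (the real methods return org.apache.hadoop.fs.Path):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative path layout only: tabledir / encodedRegionName / family.
public class StoreHomedir {
    public static Path getStoreHomedir(Path tabledir, String encodedName, String family) {
        return tabledir.resolve(encodedName).resolve(family);
    }

    public static void main(String[] args) {
        // "e4b1f9" stands in for a real encoded region name.
        Path p = getStoreHomedir(Paths.get("/hbase/mytable"), "e4b1f9", "cf");
        System.out.println(p); // e.g. /hbase/mytable/e4b1f9/cf on Unix
    }
}
```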
public HFileDataBlockEncoder getDataBlockEncoder()
protected long add(KeyValue kv)
Parameters:
kv -
protected long delete(KeyValue kv)
Parameters:
kv -
protected void rollback(KeyValue kv)
Parameters:
kv -

public List<StoreFile> getStorefiles()
protected org.apache.hadoop.fs.Path flushCache(long logCacheFlushId, SortedSet<KeyValue> snapshot, TimeRangeTracker snapshotTimeRangeTracker, AtomicLong flushedSize, MonitoredTask status) throws IOException
Write out the current snapshot. Presumes that snapshot() has been called previously.
Parameters:
logCacheFlushId - flush sequence number
snapshot -
snapshotTimeRangeTracker -
flushedSize - The number of bytes flushed
status -
Throws:
IOException
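The flushCache contract can be sketched as: the caller hands in an already-taken, sorted snapshot plus an AtomicLong, and the flush writes the snapshot out and records how many bytes it flushed. The file format and size accounting below are deliberately simplified (plain text lines); the real method writes an HFile through a StoreFile.Writer.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.SortedSet;
import java.util.TreeSet;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative flush sketch: write a sorted snapshot to a new file and
// report the number of bytes flushed via the caller-supplied counter.
public class FlushSketch {
    public static Path flushCache(long flushId, SortedSet<String> snapshot,
                                  AtomicLong flushedSize, Path dir) throws IOException {
        Path out = dir.resolve("flush-" + flushId);
        long bytes = 0;
        StringBuilder sb = new StringBuilder();
        for (String kv : snapshot) {       // snapshot is already sorted,
            sb.append(kv).append('\n');    // so the output stays ordered
            bytes += kv.length() + 1;
        }
        Files.write(out, sb.toString().getBytes());
        flushedSize.addAndGet(bytes);      // report bytes flushed to the caller
        return out;
    }

    public static void main(String[] args) throws IOException {
        SortedSet<String> snap = new TreeSet<>();
        snap.add("row1/cf:a=1");
        snap.add("row2/cf:a=2");
        AtomicLong size = new AtomicLong();
        Path dir = Files.createTempDirectory("flush");
        Path f = flushCache(1L, snap, size, dir);
        System.out.println(Files.readAllLines(f).size()); // prints 2
    }
}
```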
public StoreFile.Writer createWriterInTmp(int maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint) throws IOException
IOException
protected List<KeyValueScanner> getScanners(boolean cacheBlocks, boolean isGet, boolean isCompaction, ScanQueryMatcher matcher) throws IOException
IOException
public void compactRecentForTesting(int N) throws IOException
IOException
public static long getLowestTimestamp(List<StoreFile> candidates) throws IOException
IOException
public CompactionProgress getCompactionProgress()
public CompactionRequest requestCompaction() throws IOException
IOException
public CompactionRequest requestCompaction(int priority, CompactionRequest request) throws IOException
IOException
public void finishRequest(CompactionRequest cr)
public com.google.common.collect.ImmutableList<StoreFile> sortAndClone(List<StoreFile> storeFiles)
public int getNumberOfStoreFiles()
public boolean canSplit()
public byte[] getSplitPoint()
public long getLastCompactSize()
public long getSize()
public void triggerMajorCompaction()
public KeyValueScanner getScanner(Scan scan, NavigableSet<byte[]> targetCols) throws IOException
IOException
public String toString()
Overrides:
toString in class Object
public int getCompactPriority()
public int getCompactPriority(int priority)
Parameters:
priority -
public HRegion getHRegion()
public long updateColumnValue(byte[] row, byte[] f, byte[] qualifier, long newValue) throws IOException
Parameters:
row -
f -
qualifier -
newValue - the new value to set into memstore
Throws:
IOException
public long upsert(List<KeyValue> kvs) throws IOException
For each KeyValue specified, if a cell with the same row, family, and qualifier exists in MemStore, it will be replaced. Otherwise, it will just be inserted to MemStore.
This operation is atomic on each KeyValue (row/family/qualifier) but not necessarily atomic across all of them.
Parameters:
kvs -
Throws:
IOException
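The replace-or-insert semantics described above can be modeled with a map keyed by row/family/qualifier: an upsert for an existing key replaces the old value rather than adding a second version. This toy uses a plain TreeMap with String keys, not the real MemStore, and returns the count of newly inserted keys as a simplified stand-in for the real method's size delta.

```java
import java.util.List;
import java.util.TreeMap;

// Toy model of upsert: one cell per row/family/qualifier key; upserting
// an existing key replaces the value instead of adding a new version.
public class UpsertSketch {
    private final TreeMap<String, Long> cells = new TreeMap<>();

    // Returns how many keys were newly inserted (a simplified stand-in
    // for the real method's memstore size delta).
    public long upsert(List<String> keys, long value) {
        long inserted = 0;
        for (String key : keys) {
            if (cells.put(key, value) == null) inserted++;
        }
        return inserted;
    }

    public Long get(String key) { return cells.get(key); }

    public static void main(String[] args) {
        UpsertSketch m = new UpsertSketch();
        System.out.println(m.upsert(List.of("r1/cf:q1", "r1/cf:q2"), 1L)); // prints 2 (both new)
        System.out.println(m.upsert(List.of("r1/cf:q1"), 9L));             // prints 0 (replaced)
        System.out.println(m.get("r1/cf:q1"));                             // prints 9
    }
}
```

Note that, as the contract above states, each key is handled atomically on its own but the loop gives no atomicity across the whole list.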
public org.apache.hadoop.hbase.regionserver.StoreFlusher getStoreFlusher(long cacheFlushId)
public boolean needsCompaction()
public CacheConfig getCacheConfig()
public long heapSize()
Specified by:
heapSize in interface HeapSize
Overrides:
heapSize in class SchemaConfigured
public KeyValue.KVComparator getComparator()
public Store.ScanInfo getScanInfo()
public boolean hasTooManyStoreFiles()