Uses of KeyValue in org.apache.hadoop.hbase |
---|
Fields in org.apache.hadoop.hbase declared as KeyValue | |
---|---|
static KeyValue |
KeyValue.LOWESTKEY
Lowest possible key. |
Methods in org.apache.hadoop.hbase that return KeyValue | |
---|---|
KeyValue |
KeyValue.clone()
Clones a KeyValue. |
static KeyValue |
KeyValueUtil.copyToNewKeyValue(Cell cell)
Copy the key only. |
static KeyValue |
KeyValue.create(DataInput in)
|
static KeyValue |
KeyValue.create(int length,
DataInput in)
Create a KeyValue by reading length bytes from in. |
static KeyValue |
KeyValueTestUtil.create(String row,
String family,
String qualifier,
long timestamp,
KeyValue.Type type,
String value)
|
static KeyValue |
KeyValueTestUtil.create(String row,
String family,
String qualifier,
long timestamp,
String value)
|
static KeyValue |
KeyValue.createFirstDeleteFamilyOnRow(byte[] row,
byte[] family)
Create a Delete Family KeyValue for the specified row and family that would be smaller than all other possible Delete Family KeyValues that have the same row and family. |
static KeyValue |
KeyValueUtil.createFirstKeyInIncrementedRow(Cell in)
Increment the row bytes and clear the other fields. |
static KeyValue |
KeyValueUtil.createFirstKeyInNextRow(Cell in)
Append a single byte 0x00 to the end of the input row key. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] row)
Create a KeyValue that is smaller than all other possible KeyValues for the given row. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] row,
byte[] family,
byte[] qualifier)
Create a KeyValue for the specified row, family and qualifier that would be smaller than all other possible KeyValues that have the same row,family,qualifier. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] buffer,
byte[] row,
byte[] family,
byte[] qualifier)
Create a KeyValue for the specified row, family and qualifier that would be smaller than all other possible KeyValues that have the same row, family, qualifier. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] row,
byte[] f,
byte[] q,
long ts)
|
static KeyValue |
KeyValue.createFirstOnRow(byte[] buffer,
int boffset,
byte[] row,
int roffset,
int rlength,
byte[] family,
int foffset,
int flength,
byte[] qualifier,
int qoffset,
int qlength)
Create a KeyValue for the specified row, family and qualifier that would be smaller than all other possible KeyValues that have the same row, family, qualifier. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] row,
int roffset,
int rlength,
byte[] family,
int foffset,
int flength,
byte[] qualifier,
int qoffset,
int qlength)
Create a KeyValue for the specified row, family and qualifier that would be smaller than all other possible KeyValues that have the same row, family, qualifier. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] row,
int roffset,
short rlength)
Create a KeyValue that is smaller than all other possible KeyValues for the given row. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] row,
long ts)
Creates a KeyValue that is smaller than all other KeyValues that are older than the passed timestamp. |
KeyValue |
KeyValue.createFirstOnRowColTS(long ts)
Creates the first KV with the row/family/qualifier of this KV and the given timestamp. |
KeyValue |
KeyValue.createKeyOnly(boolean lenAsVal)
Creates a new KeyValue that only contains the key portion (the value is set to be null). |
static KeyValue |
KeyValue.createKeyValueFromKey(byte[] b)
|
static KeyValue |
KeyValue.createKeyValueFromKey(byte[] b,
int o,
int l)
|
static KeyValue |
KeyValue.createKeyValueFromKey(ByteBuffer bb)
|
static KeyValue |
KeyValue.createLastOnRow(byte[] row)
Creates a KeyValue that is last on the specified row id. |
static KeyValue |
KeyValue.createLastOnRow(byte[] row,
int roffset,
int rlength,
byte[] family,
int foffset,
int flength,
byte[] qualifier,
int qoffset,
int qlength)
Create a KeyValue for the specified row, family and qualifier that would be larger than or equal to all other possible KeyValues that have the same row, family, qualifier. |
KeyValue |
KeyValue.createLastOnRowCol()
Similar to createLastOnRow(byte[], int, int, byte[], int, int,
byte[], int, int) but creates the last key on the row/column of this KV
(the value part of the returned KV is always empty). |
static KeyValue |
KeyValueUtil.ensureKeyValue(Cell cell)
|
static KeyValue |
KeyValue.iscreate(InputStream in)
Create a KeyValue reading from the raw InputStream. |
static KeyValue |
KeyValueUtil.nextShallowCopy(ByteBuffer bb,
boolean includesMvccVersion)
Creates a new KeyValue object positioned in the supplied ByteBuffer and sets the ByteBuffer's position to the start of the next KeyValue. |
static KeyValue |
KeyValueUtil.previousKey(KeyValue in)
Decrement the timestamp. |
KeyValue |
KeyValue.shallowCopy()
Creates a shallow copy of this KeyValue, reusing the data byte buffer. |
Methods in org.apache.hadoop.hbase that return types with arguments of type KeyValue | |
---|---|
static List<KeyValue> |
KeyValueTestUtil.rewindThenToList(ByteBuffer bb,
boolean includesMemstoreTS)
|
Methods in org.apache.hadoop.hbase with parameters of type KeyValue | |
---|---|
static void |
KeyValueUtil.appendToByteBuffer(ByteBuffer bb,
KeyValue kv,
boolean includeMvccVersion)
|
int |
KeyValue.RowComparator.compare(KeyValue left,
KeyValue right)
|
int |
KeyValue.KVComparator.compareColumns(KeyValue left,
byte[] right,
int roffset,
int rlength,
int rfamilyoffset)
|
int |
KeyValue.KVComparator.compareRows(KeyValue left,
byte[] row)
|
int |
KeyValue.KVComparator.compareRows(KeyValue left,
KeyValue right)
|
int |
KeyValue.KVComparator.compareRows(KeyValue left,
short lrowlength,
KeyValue right,
short rrowlength)
|
int |
KeyValue.KVComparator.compareTimestamps(KeyValue left,
KeyValue right)
|
protected static String |
KeyValueTestUtil.getFamilyString(KeyValue kv)
|
protected static String |
KeyValueTestUtil.getQualifierString(KeyValue kv)
|
protected static String |
KeyValueTestUtil.getRowString(KeyValue kv)
|
protected static String |
KeyValueTestUtil.getTimestampString(KeyValue kv)
|
protected static String |
KeyValueTestUtil.getTypeString(KeyValue kv)
|
protected static String |
KeyValueTestUtil.getValueString(KeyValue kv)
|
static int |
KeyValueUtil.lengthWithMvccVersion(KeyValue kv,
boolean includeMvccVersion)
|
boolean |
KeyValue.matchingFamily(KeyValue other)
|
boolean |
KeyValue.matchingQualifier(KeyValue other)
|
boolean |
KeyValue.matchingRow(KeyValue other)
|
boolean |
KeyValue.KVComparator.matchingRowColumn(KeyValue left,
KeyValue right)
Compares the row and column of two KeyValues for equality. |
boolean |
KeyValue.KVComparator.matchingRows(KeyValue left,
byte[] right)
|
boolean |
KeyValue.KVComparator.matchingRows(KeyValue left,
KeyValue right)
Compares the row of two KeyValues for equality. |
boolean |
KeyValue.KVComparator.matchingRows(KeyValue left,
short lrowlength,
KeyValue right,
short rrowlength)
|
boolean |
KeyValue.KVComparator.matchingRowsGreaterTimestamp(KeyValue left,
KeyValue right)
Compares the row and timestamp of two keys. Was called matchesWithoutColumn in HStoreKey. |
static long |
KeyValue.oswrite(KeyValue kv,
OutputStream out)
Write out a KeyValue in the manner in which we used to when KeyValue was a Writable, but without requiring a DataOutput; a plain OutputStream suffices.
Named oswrite so it does not clash with write(KeyValue, DataOutput). |
static KeyValue |
KeyValueUtil.previousKey(KeyValue in)
Decrement the timestamp. |
protected static String |
KeyValueTestUtil.toStringWithPadding(KeyValue kv,
int maxRowLength,
int maxFamilyLength,
int maxQualifierLength,
int maxTimestampLength,
boolean includeMeta)
|
static long |
KeyValue.write(KeyValue kv,
DataOutput out)
Write out a KeyValue in the manner in which we used to when KeyValue was a Writable. |
Method parameters in org.apache.hadoop.hbase with type arguments of type KeyValue | |
---|---|
static boolean |
KeyValueTestUtil.containsIgnoreMvccVersion(Collection<KeyValue> kvCollection1,
Collection<KeyValue> kvCollection2)
Checks whether KeyValues from kvCollection2 are contained in kvCollection1. |
static ByteBuffer |
KeyValueTestUtil.toByteBufferAndRewind(Iterable<? extends KeyValue> kvs,
boolean includeMemstoreTS)
|
static String |
KeyValueTestUtil.toStringWithPadding(Collection<? extends KeyValue> kvs,
boolean includeMeta)
toString |
static int |
KeyValueUtil.totalLengthWithMvccVersion(Iterable<? extends KeyValue> kvs,
boolean includeMvccVersion)
|
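The factory and comparator methods above are the core KeyValue building blocks. A minimal sketch of how they are typically combined, using hypothetical row, family, and qualifier names (not taken from this API page):

```java
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class KeyValueSketch {
  public static void main(String[] args) {
    byte[] row = Bytes.toBytes("row-1");    // hypothetical row key
    byte[] family = Bytes.toBytes("cf");    // hypothetical column family
    byte[] qualifier = Bytes.toBytes("q");  // hypothetical qualifier

    // A fully populated KeyValue for row-1/cf:q at the current time.
    KeyValue kv = new KeyValue(row, family, qualifier,
        System.currentTimeMillis(), Bytes.toBytes("value"));

    // A key that sorts before every other KeyValue on the same row.
    KeyValue firstOnRow = KeyValue.createFirstOnRow(row);

    // KVComparator orders KeyValues the way HBase stores them, so the
    // first-on-row key must not sort after the real cell.
    int cmp = KeyValue.COMPARATOR.compare(firstOnRow, kv);
    System.out.println("firstOnRow sorts first: " + (cmp <= 0));
    System.out.println("same row: " + kv.matchingRow(firstOnRow));
  }
}
```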
Uses of KeyValue in org.apache.hadoop.hbase.client |
---|
Methods in org.apache.hadoop.hbase.client that return KeyValue | |
---|---|
KeyValue |
Result.getColumnLatest(byte[] family,
byte[] qualifier)
The KeyValue for the most recent timestamp for a given column. |
KeyValue |
Result.getColumnLatest(byte[] family,
int foffset,
int flength,
byte[] qualifier,
int qoffset,
int qlength)
The KeyValue for the most recent timestamp for a given column. |
KeyValue[] |
Result.raw()
Return the array of KeyValues backing this Result instance. |
Methods in org.apache.hadoop.hbase.client that return types with arguments of type KeyValue | |
---|---|
List<KeyValue> |
Put.get(byte[] family,
byte[] qualifier)
Returns a list of all KeyValue objects with matching column family and qualifier. |
List<KeyValue> |
Result.getColumn(byte[] family,
byte[] qualifier)
Return the KeyValues for the specific column. |
Map<byte[],List<KeyValue>> |
Mutation.getFamilyMap()
Deprecated. |
List<KeyValue> |
Result.list()
Create a sorted list of the KeyValue's in this result. |
Methods in org.apache.hadoop.hbase.client with parameters of type KeyValue | |
---|---|
Put |
Put.add(KeyValue kv)
Add the specified KeyValue to this Put operation. |
Delete |
Delete.addDeleteMarker(KeyValue kv)
Advanced use only. |
protected int |
Result.binarySearch(KeyValue[] kvs,
byte[] family,
byte[] qualifier)
|
protected int |
Result.binarySearch(KeyValue[] kvs,
byte[] family,
int foffset,
int flength,
byte[] qualifier,
int qoffset,
int qlength)
Searches for the latest value for the specified column. |
Constructors in org.apache.hadoop.hbase.client with parameters of type KeyValue | |
---|---|
Result(KeyValue[] kvs)
Instantiate a Result with the specified array of KeyValues. |
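A hedged sketch of the client-side usage implied by the tables above: attaching a pre-built KeyValue to a Put and reading the newest cell back out of a Result. The Result is assumed to come from a Get or Scan elsewhere, and the column names are hypothetical.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ClientKeyValueSketch {
  static final byte[] FAMILY = Bytes.toBytes("cf");    // hypothetical family
  static final byte[] QUALIFIER = Bytes.toBytes("q");  // hypothetical qualifier

  /** Build a Put that carries an explicit KeyValue rather than a bare value. */
  static Put buildPut(byte[] row, byte[] value) throws IOException {
    KeyValue kv = new KeyValue(row, FAMILY, QUALIFIER,
        System.currentTimeMillis(), value);
    return new Put(row).add(kv);  // Put.add(KeyValue) appends the cell as-is
  }

  /** Read the newest value for FAMILY:QUALIFIER out of a Result. */
  static byte[] latestValue(Result result) {
    KeyValue latest = result.getColumnLatest(FAMILY, QUALIFIER);
    return latest == null ? null : latest.getValue();
  }

  /** Print every backing KeyValue of a Result in sorted order. */
  static void dump(Result result) {
    for (KeyValue kv : result.raw()) {
      System.out.println(kv);
    }
  }
}
```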
Uses of KeyValue in org.apache.hadoop.hbase.client.coprocessor |
---|
Methods in org.apache.hadoop.hbase.client.coprocessor with parameters of type KeyValue | |
---|---|
Long |
LongColumnInterpreter.getValue(byte[] colFamily,
byte[] colQualifier,
KeyValue kv)
|
BigDecimal |
BigDecimalColumnInterpreter.getValue(byte[] colFamily,
byte[] colQualifier,
KeyValue kv)
|
Uses of KeyValue in org.apache.hadoop.hbase.codec.prefixtree |
---|
Methods in org.apache.hadoop.hbase.codec.prefixtree that return KeyValue | |
---|---|
KeyValue |
PrefixTreeSeeker.getKeyValue()
Currently must do a deep copy into a new array. |
Uses of KeyValue in org.apache.hadoop.hbase.coprocessor |
---|
Methods in org.apache.hadoop.hbase.coprocessor with parameters of type KeyValue | |
---|---|
abstract T |
ColumnInterpreter.getValue(byte[] colFamily,
byte[] colQualifier,
KeyValue kv)
|
Method parameters in org.apache.hadoop.hbase.coprocessor with type arguments of type KeyValue | |
---|---|
void |
RegionObserver.postGet(ObserverContext<RegionCoprocessorEnvironment> c,
Get get,
List<KeyValue> result)
Called after the client performs a Get |
void |
BaseRegionObserver.postGet(ObserverContext<RegionCoprocessorEnvironment> e,
Get get,
List<KeyValue> results)
|
void |
RegionObserver.preGet(ObserverContext<RegionCoprocessorEnvironment> c,
Get get,
List<KeyValue> result)
Called before the client performs a Get |
void |
BaseRegionObserver.preGet(ObserverContext<RegionCoprocessorEnvironment> e,
Get get,
List<KeyValue> results)
|
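For orientation, a hypothetical RegionObserver built on BaseRegionObserver that uses the List<KeyValue> Get hooks above; it only logs what a Get returned and is not a complete coprocessor.

```java
import java.util.List;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.util.Bytes;

/** Hypothetical observer: counts the KeyValues handed back by each Get. */
public class GetCountingObserver extends BaseRegionObserver {
  @Override
  public void postGet(ObserverContext<RegionCoprocessorEnvironment> e,
      Get get, List<KeyValue> results) {
    // results already holds the KeyValues the region will return;
    // an observer may inspect or rewrite the list here.
    System.out.println("Get for row " + Bytes.toStringBinary(get.getRow())
        + " returned " + results.size() + " KeyValue(s)");
  }
}
```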
Uses of KeyValue in org.apache.hadoop.hbase.filter |
---|
Methods in org.apache.hadoop.hbase.filter that return KeyValue | |
---|---|
KeyValue |
MultipleColumnPrefixFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
FuzzyRowFilter.getNextKeyHint(KeyValue currentKV)
|
KeyValue |
FilterWrapper.getNextKeyHint(KeyValue currentKV)
|
KeyValue |
FilterList.getNextKeyHint(KeyValue currentKV)
|
KeyValue |
FilterBase.getNextKeyHint(KeyValue currentKV)
Filters that are not sure which key to seek to next can inherit this implementation, which by default returns a null KeyValue. |
abstract KeyValue |
Filter.getNextKeyHint(KeyValue currentKV)
If the filter returns the match code SEEK_NEXT_USING_HINT, then it should also tell which is the next key it must seek to. |
KeyValue |
ColumnRangeFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
ColumnPrefixFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
ColumnPaginationFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
WhileMatchFilter.transform(KeyValue v)
|
KeyValue |
SkipFilter.transform(KeyValue v)
|
KeyValue |
KeyOnlyFilter.transform(KeyValue kv)
|
KeyValue |
FilterWrapper.transform(KeyValue v)
|
KeyValue |
FilterList.transform(KeyValue v)
|
KeyValue |
FilterBase.transform(KeyValue v)
By default no transformation takes place |
abstract KeyValue |
Filter.transform(KeyValue v)
Give the filter a chance to transform the passed KeyValue. |
Methods in org.apache.hadoop.hbase.filter with parameters of type KeyValue | |
---|---|
Filter.ReturnCode |
WhileMatchFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
ValueFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
TimestampsFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
SkipFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
SingleColumnValueFilter.filterKeyValue(KeyValue keyValue)
|
Filter.ReturnCode |
RowFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
RandomRowFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
QualifierFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
MultipleColumnPrefixFilter.filterKeyValue(KeyValue kv)
|
Filter.ReturnCode |
FuzzyRowFilter.filterKeyValue(KeyValue kv)
|
Filter.ReturnCode |
FirstKeyValueMatchingQualifiersFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
FirstKeyOnlyFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
FilterWrapper.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
FilterList.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
FilterBase.filterKeyValue(KeyValue ignored)
Filters that don't filter by KeyValue can inherit this implementation, which includes all KeyValues. |
abstract Filter.ReturnCode |
Filter.filterKeyValue(KeyValue v)
A way to filter based on the column family, column qualifier and/or the column value. |
Filter.ReturnCode |
FamilyFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
DependentColumnFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
ColumnRangeFilter.filterKeyValue(KeyValue kv)
|
Filter.ReturnCode |
ColumnPrefixFilter.filterKeyValue(KeyValue kv)
|
Filter.ReturnCode |
ColumnPaginationFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
ColumnCountGetFilter.filterKeyValue(KeyValue v)
|
KeyValue |
MultipleColumnPrefixFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
FuzzyRowFilter.getNextKeyHint(KeyValue currentKV)
|
KeyValue |
FilterWrapper.getNextKeyHint(KeyValue currentKV)
|
KeyValue |
FilterList.getNextKeyHint(KeyValue currentKV)
|
KeyValue |
FilterBase.getNextKeyHint(KeyValue currentKV)
Filters that are not sure which key to seek to next can inherit this implementation, which by default returns a null KeyValue. |
abstract KeyValue |
Filter.getNextKeyHint(KeyValue currentKV)
If the filter returns the match code SEEK_NEXT_USING_HINT, then it should also tell which is the next key it must seek to. |
KeyValue |
ColumnRangeFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
ColumnPrefixFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
ColumnPaginationFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
WhileMatchFilter.transform(KeyValue v)
|
KeyValue |
SkipFilter.transform(KeyValue v)
|
KeyValue |
KeyOnlyFilter.transform(KeyValue kv)
|
KeyValue |
FilterWrapper.transform(KeyValue v)
|
KeyValue |
FilterList.transform(KeyValue v)
|
KeyValue |
FilterBase.transform(KeyValue v)
By default no transformation takes place |
abstract KeyValue |
Filter.transform(KeyValue v)
Give the filter a chance to transform the passed KeyValue. |
Method parameters in org.apache.hadoop.hbase.filter with type arguments of type KeyValue | |
---|---|
void |
SingleColumnValueExcludeFilter.filterRow(List<KeyValue> kvs)
|
void |
FilterWrapper.filterRow(List<KeyValue> kvs)
|
void |
FilterList.filterRow(List<KeyValue> kvs)
|
void |
FilterBase.filterRow(List<KeyValue> ignored)
Filters that never filter by modifying the returned List of KeyValues can inherit this implementation that does nothing. |
abstract void |
Filter.filterRow(List<KeyValue> kvs)
Chance to alter the list of keyvalues to be submitted. |
void |
DependentColumnFilter.filterRow(List<KeyValue> kvs)
|
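The Filter methods above (filterKeyValue deciding per cell, with FilterBase supplying defaults for getNextKeyHint and transform) compose as in this rough sketch of a hypothetical filter that drops empty-valued cells. Real filters also need protobuf serialization (toByteArray/parseFrom) before they can be shipped to a region server; that is omitted here.

```java
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterBase;

/** Hypothetical filter: skips cells whose value is empty, keeps the rest. */
public class NonEmptyValueFilter extends FilterBase {
  @Override
  public Filter.ReturnCode filterKeyValue(KeyValue kv) {
    // SKIP advances the scanner past this cell; INCLUDE emits it.
    return kv.getValueLength() == 0
        ? Filter.ReturnCode.SKIP
        : Filter.ReturnCode.INCLUDE;
  }
}
```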
Uses of KeyValue in org.apache.hadoop.hbase.io.encoding |
---|
Methods in org.apache.hadoop.hbase.io.encoding that return KeyValue | |
---|---|
KeyValue |
DataBlockEncoder.EncodedSeeker.getKeyValue()
|
Methods in org.apache.hadoop.hbase.io.encoding that return types with arguments of type KeyValue | |
---|---|
Iterator<KeyValue> |
EncodedDataBlock.getIterator(int headerSize)
Provides access to compressed value. |
Uses of KeyValue in org.apache.hadoop.hbase.io.hfile |
---|
Methods in org.apache.hadoop.hbase.io.hfile that return KeyValue | |
---|---|
KeyValue |
HFileScanner.getKeyValue()
|
KeyValue |
HFileReaderV2.ScannerV2.getKeyValue()
|
KeyValue |
HFileReaderV2.EncodedScannerV2.getKeyValue()
|
Methods in org.apache.hadoop.hbase.io.hfile with parameters of type KeyValue | |
---|---|
void |
HFileWriterV2.append(KeyValue kv)
Add key/value to file. |
void |
HFile.Writer.append(KeyValue kv)
|
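A small sketch of the HFileScanner usage implied above, assuming an already-opened HFileScanner (obtaining one from an HFile reader is omitted):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;

public class HFileDumpSketch {
  /** Walk every KeyValue reachable from the supplied scanner. */
  static void dump(HFileScanner scanner) throws IOException {
    if (!scanner.seekTo()) {
      return;  // the underlying file is empty
    }
    do {
      KeyValue kv = scanner.getKeyValue();
      System.out.println(kv);
    } while (scanner.next());
  }
}
```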
Uses of KeyValue in org.apache.hadoop.hbase.mapreduce |
---|
Methods in org.apache.hadoop.hbase.mapreduce that return KeyValue | |
---|---|
KeyValue |
KeyValueSerialization.KeyValueDeserializer.deserialize(KeyValue ignore)
|
Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type KeyValue | |
---|---|
org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,KeyValue> |
HFileOutputFormat.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
|
Methods in org.apache.hadoop.hbase.mapreduce with parameters of type KeyValue | |
---|---|
KeyValue |
KeyValueSerialization.KeyValueDeserializer.deserialize(KeyValue ignore)
|
void |
KeyValueSerialization.KeyValueSerializer.serialize(KeyValue kv)
|
Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type KeyValue | |
---|---|
KeyValueSerialization.KeyValueDeserializer |
KeyValueSerialization.getDeserializer(Class<KeyValue> t)
|
KeyValueSerialization.KeyValueSerializer |
KeyValueSerialization.getSerializer(Class<KeyValue> c)
|
protected void |
KeyValueSortReducer.reduce(ImmutableBytesWritable row,
Iterable<KeyValue> kvs,
org.apache.hadoop.mapreduce.Reducer.Context context)
|
Uses of KeyValue in org.apache.hadoop.hbase.regionserver |
---|
Fields in org.apache.hadoop.hbase.regionserver declared as KeyValue | |
---|---|
protected KeyValue |
StoreScanner.lastTop
|
Methods in org.apache.hadoop.hbase.regionserver that return KeyValue | |
---|---|
KeyValue |
ScanQueryMatcher.getKeyForNextColumn(KeyValue kv)
|
KeyValue |
ScanQueryMatcher.getKeyForNextRow(KeyValue kv)
|
KeyValue |
ScanQueryMatcher.getNextKeyHint(KeyValue kv)
|
KeyValue |
Store.getRowKeyAtOrBefore(byte[] row)
Find the key that matches row exactly, or the one that immediately precedes it. |
KeyValue |
HStore.getRowKeyAtOrBefore(byte[] row)
|
KeyValue |
ScanQueryMatcher.getStartKey()
|
KeyValue |
StoreScanner.next()
|
KeyValue |
StoreFileScanner.next()
|
KeyValue |
MemStore.MemStoreScanner.next()
|
KeyValue |
KeyValueScanner.next()
Return the next KeyValue in this scanner, iterating the scanner. |
KeyValue |
KeyValueHeap.next()
|
KeyValue |
StoreScanner.peek()
|
KeyValue |
StoreFileScanner.peek()
|
KeyValue |
MemStore.MemStoreScanner.peek()
|
KeyValue |
KeyValueScanner.peek()
Look at the next KeyValue in this scanner, but do not iterate the scanner. |
KeyValue |
KeyValueHeap.peek()
|
Methods in org.apache.hadoop.hbase.regionserver with parameters of type KeyValue | |
---|---|
long |
Store.add(KeyValue kv)
Adds a value to the memstore |
long |
HStore.add(KeyValue kv)
|
void |
StoreFile.Writer.append(KeyValue kv)
|
protected long |
HStore.delete(KeyValue kv)
Adds a value to the memstore |
static boolean |
NonLazyKeyValueScanner.doRealSeek(KeyValueScanner scanner,
KeyValue kv,
boolean forward)
|
Iterator<StoreFile> |
StoreFileManager.getCandidateFilesForRowKeyBefore(KeyValue targetKey)
Gets initial, full list of candidate store files to check for row-key-before. |
KeyValue |
ScanQueryMatcher.getKeyForNextColumn(KeyValue kv)
|
KeyValue |
ScanQueryMatcher.getKeyForNextRow(KeyValue kv)
|
KeyValue |
ScanQueryMatcher.getNextKeyHint(KeyValue kv)
|
void |
TimeRangeTracker.includeTimestamp(KeyValue kv)
Update the current TimestampRange to include the timestamp from the KeyValue. If the key is of type DeleteColumn or DeleteFamily, it includes the entire time range from 0 to the timestamp of the key. |
ScanQueryMatcher.MatchCode |
ScanQueryMatcher.match(KeyValue kv)
Determines if the caller should do one of several things: - seek/skip to the next row (MatchCode.SEEK_NEXT_ROW) - seek/skip to the next column (MatchCode.SEEK_NEXT_COL) - include the current KeyValue (MatchCode.INCLUDE) - ignore the current KeyValue (MatchCode.SKIP) - go to the next row (MatchCode.DONE) |
boolean |
ScanQueryMatcher.moreRowsMayExistAfter(KeyValue kv)
|
boolean |
StoreFileScanner.requestSeek(KeyValue kv,
boolean forward,
boolean useBloom)
Pretend we have done a seek but don't do it yet, if possible. |
boolean |
NonLazyKeyValueScanner.requestSeek(KeyValue kv,
boolean forward,
boolean useBloom)
|
boolean |
KeyValueScanner.requestSeek(KeyValue kv,
boolean forward,
boolean useBloom)
Similar to KeyValueScanner.seek(org.apache.hadoop.hbase.KeyValue) (or KeyValueScanner.reseek(org.apache.hadoop.hbase.KeyValue) if forward is true) but only
does a seek operation after checking that it is really necessary for the
row/column combination specified by the kv parameter. |
boolean |
KeyValueHeap.requestSeek(KeyValue key,
boolean forward,
boolean useBloom)
Similar to KeyValueScanner.seek(org.apache.hadoop.hbase.KeyValue) (or KeyValueScanner.reseek(org.apache.hadoop.hbase.KeyValue) if forward is true) but only
does a seek operation after checking that it is really necessary for the
row/column combination specified by the kv parameter. |
boolean |
StoreScanner.reseek(KeyValue kv)
|
boolean |
StoreFileScanner.reseek(KeyValue key)
|
boolean |
MemStore.MemStoreScanner.reseek(KeyValue key)
Move forward on the sub-lists set previously by seek. |
boolean |
KeyValueScanner.reseek(KeyValue key)
Reseek the scanner at or after the specified KeyValue. |
boolean |
KeyValueHeap.reseek(KeyValue seekKey)
This function is identical to the KeyValueHeap.seek(KeyValue) function except
that scanner.seek(seekKey) is changed to scanner.reseek(seekKey). |
protected void |
StoreScanner.resetScannerStack(KeyValue lastTopKey)
|
protected boolean |
HRegion.restoreEdit(Store s,
KeyValue kv)
Used by tests |
void |
Store.rollback(KeyValue kv)
Removes a kv from the memstore. |
void |
HStore.rollback(KeyValue kv)
|
boolean |
StoreScanner.seek(KeyValue key)
|
boolean |
StoreFileScanner.seek(KeyValue key)
|
boolean |
MemStore.MemStoreScanner.seek(KeyValue key)
Set the scanner at the seek key. |
boolean |
KeyValueScanner.seek(KeyValue key)
Seek the scanner at or after the specified KeyValue. |
boolean |
KeyValueHeap.seek(KeyValue seekKey)
Seeks all scanners at or below the specified seek key. |
static boolean |
StoreFileScanner.seekAtOrAfter(HFileScanner s,
KeyValue k)
|
void |
StoreFile.Writer.trackTimestamps(KeyValue kv)
Record the earliest Put timestamp. |
Iterator<StoreFile> |
StoreFileManager.updateCandidateFilesForRowKeyBefore(Iterator<StoreFile> candidateFiles,
KeyValue targetKey,
KeyValue candidate)
Updates the candidate list for finding row key before. |
Method parameters in org.apache.hadoop.hbase.regionserver with type arguments of type KeyValue | |
---|---|
protected List<org.apache.hadoop.fs.Path> |
HStore.flushCache(long logCacheFlushId,
SortedSet<KeyValue> snapshot,
TimeRangeTracker snapshotTimeRangeTracker,
AtomicLong flushedSize,
MonitoredTask status)
Write out current snapshot. |
List<org.apache.hadoop.fs.Path> |
DefaultStoreFlusher.flushSnapshot(SortedSet<KeyValue> snapshot,
long cacheFlushId,
TimeRangeTracker snapshotTimeRangeTracker,
AtomicLong flushedSize,
MonitoredTask status)
|
boolean |
StoreScanner.next(List<KeyValue> outResult)
|
boolean |
KeyValueHeap.next(List<KeyValue> result)
Gets the next row of keys from the top-most scanner. |
boolean |
InternalScanner.next(List<KeyValue> results)
Grab the next row's worth of values. |
boolean |
StoreScanner.next(List<KeyValue> outResult,
int limit)
Get the next row of values from this Store. |
boolean |
KeyValueHeap.next(List<KeyValue> result,
int limit)
Gets the next row of keys from the top-most scanner. |
boolean |
InternalScanner.next(List<KeyValue> result,
int limit)
Grab the next row's worth of values with a limit on the number of values to return. |
boolean |
RegionScanner.nextRaw(List<KeyValue> result)
Grab the next row's worth of values with the default limit on the number of values to return. |
boolean |
RegionScanner.nextRaw(List<KeyValue> result,
int limit)
Grab the next row's worth of values with a limit on the number of values to return. |
void |
RegionCoprocessorHost.postGet(Get get,
List<KeyValue> results)
|
boolean |
RegionCoprocessorHost.preGet(Get get,
List<KeyValue> results)
|
void |
RowProcessor.process(long now,
HRegion region,
List<KeyValue> mutations,
WALEdit walEdit)
HRegion handles the locks and MVCC and invokes this method properly. |
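To tie the region-server scanner methods above together, a hedged sketch that seeks a KeyValueScanner to a caller-supplied start key and drains it; error handling and closing the scanner are left out.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.regionserver.KeyValueScanner;

public class ScannerDrainSketch {
  /** Seek to startKey, then print KeyValues until the scanner is exhausted. */
  static void drain(KeyValueScanner scanner, KeyValue startKey) throws IOException {
    if (!scanner.seek(startKey)) {
      return;  // nothing at or after startKey
    }
    // peek() shows the upcoming KeyValue without advancing; next() consumes it.
    while (scanner.peek() != null) {
      System.out.println(scanner.next());
    }
  }
}
```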
Uses of KeyValue in org.apache.hadoop.hbase.regionserver.compactions |
---|
Methods in org.apache.hadoop.hbase.regionserver.compactions with parameters of type KeyValue | |
---|---|
void |
Compactor.CellSink.append(KeyValue kv)
|
Uses of KeyValue in org.apache.hadoop.hbase.regionserver.handler |
---|
Constructors in org.apache.hadoop.hbase.regionserver.handler with parameters of type KeyValue | |
---|---|
ParallelSeekHandler(KeyValueScanner scanner,
KeyValue keyValue,
long readPoint,
CountDownLatch latch)
|
Uses of KeyValue in org.apache.hadoop.hbase.regionserver.wal |
---|
Methods in org.apache.hadoop.hbase.regionserver.wal that return types with arguments of type KeyValue | |
---|---|
List<KeyValue> |
WALEdit.getKeyValues()
|
Methods in org.apache.hadoop.hbase.regionserver.wal with parameters of type KeyValue | |
---|---|
WALEdit |
WALEdit.add(KeyValue kv)
|
static WALProtos.CompactionDescriptor |
WALEdit.getCompaction(KeyValue kv)
Deserializes and returns a CompactionDescriptor if the KeyValue contains one. |
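A tiny sketch of the WALEdit methods above, bundling KeyValues into an edit and reading them back; the row and column names are hypothetical.

```java
import java.util.List;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

public class WalEditSketch {
  public static void main(String[] args) {
    KeyValue kv = new KeyValue(Bytes.toBytes("row-1"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), System.currentTimeMillis(), Bytes.toBytes("v"));

    // WALEdit.add(KeyValue) accumulates the cells one log entry carries.
    WALEdit edit = new WALEdit();
    edit.add(kv);

    // getKeyValues() hands back the accumulated cells in insertion order.
    List<KeyValue> kvs = edit.getKeyValues();
    System.out.println("edit carries " + kvs.size() + " KeyValue(s)");
  }
}
```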
Uses of KeyValue in org.apache.hadoop.hbase.rest |
---|
Methods in org.apache.hadoop.hbase.rest that return KeyValue | |
---|---|
KeyValue |
ScannerResultGenerator.next()
|
KeyValue |
RowResultGenerator.next()
|
Methods in org.apache.hadoop.hbase.rest with parameters of type KeyValue | |
---|---|
void |
ScannerResultGenerator.putBack(KeyValue kv)
|
void |
RowResultGenerator.putBack(KeyValue kv)
|
abstract void |
ResultGenerator.putBack(KeyValue kv)
|
Uses of KeyValue in org.apache.hadoop.hbase.rest.model |
---|
Constructors in org.apache.hadoop.hbase.rest.model with parameters of type KeyValue | |
---|---|
CellModel(KeyValue kv)
Constructor from KeyValue |
Uses of KeyValue in org.apache.hadoop.hbase.security.access |
---|
Methods in org.apache.hadoop.hbase.security.access with parameters of type KeyValue | |
---|---|
boolean |
TableAuthManager.authorize(User user,
TableName table,
KeyValue kv,
Permission.Action action)
|
boolean |
TablePermission.implies(TableName table,
KeyValue kv,
Permission.Action action)
Checks if this permission grants access to perform the given action on the given table and key value. |
Method parameters in org.apache.hadoop.hbase.security.access with type arguments of type KeyValue | |
---|---|
void |
AccessController.preGet(ObserverContext<RegionCoprocessorEnvironment> c,
Get get,
List<KeyValue> result)
|
Uses of KeyValue in org.apache.hadoop.hbase.thrift |
---|
Methods in org.apache.hadoop.hbase.thrift with parameters of type KeyValue | |
---|---|
static List<org.apache.hadoop.hbase.thrift.generated.TCell> |
ThriftUtilities.cellFromHBase(KeyValue in)
This utility method creates a list of Thrift TCell "structs" based on an HBase Cell object. |
static List<org.apache.hadoop.hbase.thrift.generated.TCell> |
ThriftUtilities.cellFromHBase(KeyValue[] in)
This utility method creates a list of Thrift TCell "structs" based on an HBase Cell array. |
Uses of KeyValue in org.apache.hadoop.hbase.util |
---|
Methods in org.apache.hadoop.hbase.util that return KeyValue | |
---|---|
KeyValue |
CollectionBackedScanner.next()
|
KeyValue |
CollectionBackedScanner.peek()
|
Methods in org.apache.hadoop.hbase.util with parameters of type KeyValue | |
---|---|
boolean |
CollectionBackedScanner.reseek(KeyValue seekKv)
|
boolean |
CollectionBackedScanner.seek(KeyValue seekKv)
|
Constructors in org.apache.hadoop.hbase.util with parameters of type KeyValue | |
---|---|
CollectionBackedScanner(KeyValue.KVComparator comparator,
KeyValue... array)
|
Constructor parameters in org.apache.hadoop.hbase.util with type arguments of type KeyValue | |
---|---|
CollectionBackedScanner(List<KeyValue> list)
|
|
CollectionBackedScanner(List<KeyValue> list,
KeyValue.KVComparator comparator)
|
|
CollectionBackedScanner(SortedSet<KeyValue> set)
|
|
CollectionBackedScanner(SortedSet<KeyValue> set,
KeyValue.KVComparator comparator)
|
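Finally, a short sketch of the CollectionBackedScanner constructors and scanner methods above, wrapping a sorted in-memory list of KeyValues (most useful in tests):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.CollectionBackedScanner;

public class CollectionScannerSketch {
  public static void main(String[] args) throws IOException {
    byte[] cf = Bytes.toBytes("cf");
    byte[] q = Bytes.toBytes("q");
    List<KeyValue> kvs = Arrays.asList(
        new KeyValue(Bytes.toBytes("a"), cf, q, 1L, Bytes.toBytes("1")),
        new KeyValue(Bytes.toBytes("b"), cf, q, 1L, Bytes.toBytes("2")));

    // The list-backed scanner exposes the usual peek()/next()/seek() API.
    CollectionBackedScanner scanner = new CollectionBackedScanner(kvs);
    scanner.seek(KeyValue.createFirstOnRow(Bytes.toBytes("b")));
    for (KeyValue kv = scanner.next(); kv != null; kv = scanner.next()) {
      System.out.println(kv);  // prints only the "b" cell
    }
  }
}
```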
Uses of KeyValue in org.apache.hadoop.hbase.util.test |
---|
Methods in org.apache.hadoop.hbase.util.test that return types with arguments of type KeyValue | |
---|---|
List<KeyValue> |
RedundantKVGenerator.generateTestKeyValues(int howMany)
Generate test data useful to test encoders. |
Method parameters in org.apache.hadoop.hbase.util.test with type arguments of type KeyValue | |
---|---|
static ByteBuffer |
RedundantKVGenerator.convertKvToByteBuffer(List<KeyValue> keyValues,
boolean includesMemstoreTS)
Convert a list of KeyValues to a byte buffer. |