Packages that use KeyValue | |
---|---|
org.apache.hadoop.hbase | |
org.apache.hadoop.hbase.client | Provides HBase Client |
org.apache.hadoop.hbase.client.coprocessor | Provides client classes for invoking Coprocessor RPC protocols |
org.apache.hadoop.hbase.codec | |
org.apache.hadoop.hbase.coprocessor | |
org.apache.hadoop.hbase.filter | Provides row-level filters applied to HRegion scan results during calls to ResultScanner.next(). |
org.apache.hadoop.hbase.io.encoding | |
org.apache.hadoop.hbase.io.hfile | Provides the hbase data+index+metadata file. |
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.regionserver | |
org.apache.hadoop.hbase.regionserver.wal | |
org.apache.hadoop.hbase.rest | HBase REST |
org.apache.hadoop.hbase.rest.model | |
org.apache.hadoop.hbase.security.access | |
org.apache.hadoop.hbase.thrift | Provides an HBase Thrift service. |
org.apache.hadoop.hbase.util | |
Uses of KeyValue in org.apache.hadoop.hbase |
---|
Fields in org.apache.hadoop.hbase declared as KeyValue | |
---|---|
static KeyValue |
KeyValue.LOWESTKEY
Lowest possible key. |
Methods in org.apache.hadoop.hbase that return KeyValue | |
---|---|
KeyValue |
KeyValue.clone()
Clones a KeyValue. |
static KeyValue |
KeyValue.createFirstDeleteFamilyOnRow(byte[] row,
byte[] family)
Create a Delete Family KeyValue for the specified row and family that would be smaller than all other possible Delete Family KeyValues that have the same row and family. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] row)
Create a KeyValue that is smaller than all other possible KeyValues for the given row. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] row,
byte[] family,
byte[] qualifier)
Create a KeyValue for the specified row, family and qualifier that would be smaller than all other possible KeyValues that have the same row, family, qualifier. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] row,
byte[] f,
byte[] q,
long ts)
|
static KeyValue |
KeyValue.createFirstOnRow(byte[] row,
int roffset,
int rlength,
byte[] family,
int foffset,
int flength,
byte[] qualifier,
int qoffset,
int qlength)
Create a KeyValue for the specified row, family and qualifier that would be smaller than all other possible KeyValues that have the same row, family, qualifier. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] row,
int roffset,
short rlength)
Create a KeyValue that is smaller than all other possible KeyValues for the given row. |
static KeyValue |
KeyValue.createFirstOnRow(byte[] row,
long ts)
Creates a KeyValue that is smaller than all other KeyValues that are older than the passed timestamp. |
KeyValue |
KeyValue.createFirstOnRowColTS(long ts)
Creates the first KV with the row/family/qualifier of this KV and the given timestamp. |
KeyValue |
KeyValue.createKeyOnly(boolean lenAsVal)
Creates a new KeyValue that only contains the key portion (the value is set to be null). |
static KeyValue |
KeyValue.createKeyValueFromKey(byte[] b)
|
static KeyValue |
KeyValue.createKeyValueFromKey(byte[] b,
int o,
int l)
|
static KeyValue |
KeyValue.createKeyValueFromKey(ByteBuffer bb)
|
static KeyValue |
KeyValue.createLastOnRow(byte[] row)
Creates a KeyValue that is last on the specified row id. |
static KeyValue |
KeyValue.createLastOnRow(byte[] row,
int roffset,
int rlength,
byte[] family,
int foffset,
int flength,
byte[] qualifier,
int qoffset,
int qlength)
Create a KeyValue for the specified row, family and qualifier that would be larger than or equal to all other possible KeyValues that have the same row, family, qualifier. |
KeyValue |
KeyValue.createLastOnRowCol()
Similar to createLastOnRow(byte[], int, int, byte[], int, int,
byte[], int, int) but creates the last key on the row/column of this KV
(the value part of the returned KV is always empty). |
KeyValue |
KeyValue.deepCopy()
Creates a deep copy of this KeyValue, re-allocating the buffer. |
KeyValue |
KeyValue.shallowCopy()
Creates a shallow copy of this KeyValue, reusing the data byte buffer. |
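
The factory methods above are typically used to build "fake" boundary keys for seeking rather than real cells. A minimal sketch (assuming an HBase 0.94-era classpath; the class name FakeKeyExample is illustrative):

```java
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class FakeKeyExample {
  public static void main(String[] args) {
    byte[] row = Bytes.toBytes("row-0001");
    byte[] family = Bytes.toBytes("cf");
    byte[] qualifier = Bytes.toBytes("q1");

    // Sorts before every real cell on this row.
    KeyValue firstOnRow = KeyValue.createFirstOnRow(row);

    // Sorts before every cell with the same row/family/qualifier.
    KeyValue firstOnCol = KeyValue.createFirstOnRow(row, family, qualifier);

    // Key-only copy: the value is dropped, only the key bytes remain.
    KeyValue keyOnly = firstOnCol.createKeyOnly(false);

    System.out.println(firstOnRow);
    System.out.println(keyOnly);
  }
}
```
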
Methods in org.apache.hadoop.hbase with parameters of type KeyValue | |
---|---|
int |
KeyValue.KVComparator.compare(KeyValue left,
KeyValue right)
|
int |
KeyValue.RowComparator.compare(KeyValue left,
KeyValue right)
|
int |
KeyValue.KVComparator.compareColumns(KeyValue left,
byte[] right,
int roffset,
int rlength,
int rfamilyoffset)
|
int |
KeyValue.KVComparator.compareRows(KeyValue left,
byte[] row)
|
int |
KeyValue.KVComparator.compareRows(KeyValue left,
KeyValue right)
|
int |
KeyValue.KVComparator.compareRows(KeyValue left,
short lrowlength,
KeyValue right,
short rrowlength)
|
int |
KeyValue.KVComparator.compareTimestamps(KeyValue left,
KeyValue right)
|
boolean |
KeyValue.matchingFamily(KeyValue other)
|
boolean |
KeyValue.matchingQualifier(KeyValue other)
|
boolean |
KeyValue.matchingRow(KeyValue other)
|
boolean |
KeyValue.KVComparator.matchingRowColumn(KeyValue left,
KeyValue right)
Compares the row and column of two KeyValues for equality |
boolean |
KeyValue.KVComparator.matchingRows(KeyValue left,
byte[] right)
|
boolean |
KeyValue.KVComparator.matchingRows(KeyValue left,
KeyValue right)
Compares the row of two KeyValues for equality |
boolean |
KeyValue.KVComparator.matchingRows(KeyValue left,
short lrowlength,
KeyValue right,
short rrowlength)
|
boolean |
KeyValue.KVComparator.matchingRowsGreaterTimestamp(KeyValue left,
KeyValue right)
Compares the row and timestamp of two keys. Was called matchesWithoutColumn in HStoreKey. |
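
A short sketch of how the comparator methods above order and compare cells (this assumes the public no-arg KeyValue.KVComparator constructor; ComparatorExample is an illustrative name):

```java
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class ComparatorExample {
  public static void main(String[] args) {
    KeyValue a = KeyValue.createFirstOnRow(Bytes.toBytes("apple"));
    KeyValue b = KeyValue.createFirstOnRow(Bytes.toBytes("banana"));

    KeyValue.KVComparator cmp = new KeyValue.KVComparator();

    System.out.println(cmp.compare(a, b) < 0);      // true: "apple" sorts before "banana"
    System.out.println(cmp.compareRows(a, b) < 0);  // true: row comparison only
    System.out.println(cmp.matchingRows(a, b));     // false: different rows
  }
}
```
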
Uses of KeyValue in org.apache.hadoop.hbase.client |
---|
Fields in org.apache.hadoop.hbase.client with type parameters of type KeyValue | |
---|---|
protected Map<byte[],List<KeyValue>> |
Mutation.familyMap
|
Methods in org.apache.hadoop.hbase.client that return KeyValue | |
---|---|
KeyValue |
Result.getColumnLatest(byte[] family,
byte[] qualifier)
The KeyValue for the most recent timestamp for a given column. |
KeyValue[] |
Result.raw()
Return the array of KeyValues backing this Result instance. |
Methods in org.apache.hadoop.hbase.client that return types with arguments of type KeyValue | |
---|---|
List<KeyValue> |
Put.get(byte[] family,
byte[] qualifier)
Returns a list of all KeyValue objects with matching column family and qualifier. |
List<KeyValue> |
Result.getColumn(byte[] family,
byte[] qualifier)
Return the KeyValues for the specific column. |
Map<byte[],List<KeyValue>> |
Mutation.getFamilyMap()
Method for retrieving the put's familyMap |
List<KeyValue> |
Result.list()
Create a sorted list of the KeyValues in this result. |
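
The Result accessors above can be combined as in the following sketch (the dump helper and column names are illustrative; the Result would normally come from HTable.get):

```java
import java.util.List;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ResultExample {
  // 'result' would normally come from HTable.get(new Get(row)).
  static void dump(Result result) {
    byte[] family = Bytes.toBytes("cf");
    byte[] qualifier = Bytes.toBytes("q1");

    // Newest cell for one column, or null if the column is absent.
    KeyValue latest = result.getColumnLatest(family, qualifier);

    // Every stored version of that column, newest first.
    List<KeyValue> versions = result.getColumn(family, qualifier);

    // The raw, sorted backing array for the whole row.
    for (KeyValue kv : result.raw()) {
      System.out.println(Bytes.toString(kv.getQualifier())
          + " @ " + kv.getTimestamp()
          + " = " + Bytes.toString(kv.getValue()));
    }
    System.out.println(latest + ", " + versions.size() + " versions");
  }
}
```
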
Methods in org.apache.hadoop.hbase.client with parameters of type KeyValue | |
---|---|
Append |
Append.add(KeyValue kv)
Add the specified KeyValue to this operation. |
Put |
Put.add(KeyValue kv)
Add the specified KeyValue to this Put operation. |
Delete |
Delete.addDeleteMarker(KeyValue kv)
Advanced use only. |
protected int |
Result.binarySearch(KeyValue[] kvs,
byte[] family,
byte[] qualifier)
|
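
A sketch of adding a pre-built KeyValue to a Put (it assumes the KeyValue(row, family, qualifier, timestamp, value) constructor; PutKeyValueExample is an illustrative name):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PutKeyValueExample {
  public static void main(String[] args) throws IOException {
    byte[] row = Bytes.toBytes("row-0001");
    byte[] family = Bytes.toBytes("cf");
    byte[] qualifier = Bytes.toBytes("q1");

    // Build the cell explicitly instead of using Put.add(family, qualifier, value).
    KeyValue kv = new KeyValue(row, family, qualifier,
        System.currentTimeMillis(), Bytes.toBytes("v1"));

    Put put = new Put(row);
    put.add(kv);  // throws IOException if kv's row differs from the Put's row

    System.out.println(put.get(family, qualifier).size());  // 1
  }
}
```
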
Method parameters in org.apache.hadoop.hbase.client with type arguments of type KeyValue | |
---|---|
void |
Mutation.setFamilyMap(Map<byte[],List<KeyValue>> map)
Method for setting the put's familyMap |
Constructors in org.apache.hadoop.hbase.client with parameters of type KeyValue | |
---|---|
Result(KeyValue[] kvs)
Instantiate a Result with the specified array of KeyValues. |
Constructor parameters in org.apache.hadoop.hbase.client with type arguments of type KeyValue | |
---|---|
Result(List<KeyValue> kvs)
Instantiate a Result with the specified List of KeyValues. |
Uses of KeyValue in org.apache.hadoop.hbase.client.coprocessor |
---|
Methods in org.apache.hadoop.hbase.client.coprocessor with parameters of type KeyValue | |
---|---|
Long |
LongColumnInterpreter.getValue(byte[] colFamily,
byte[] colQualifier,
KeyValue kv)
|
BigDecimal |
BigDecimalColumnInterpreter.getValue(byte[] family,
byte[] qualifier,
KeyValue kv)
|
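
An interpreter's getValue method simply decodes a cell's value into the interpreter's type, as in this sketch (assuming the four-argument KeyValue(row, family, qualifier, value) constructor; names are illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class InterpreterExample {
  public static void main(String[] args) throws IOException {
    byte[] row = Bytes.toBytes("r1");
    byte[] family = Bytes.toBytes("cf");
    byte[] qualifier = Bytes.toBytes("amount");

    // A cell whose value holds an 8-byte encoded long.
    KeyValue kv = new KeyValue(row, family, qualifier, Bytes.toBytes(42L));

    // Decode the value the same way the AggregationClient does server-side.
    LongColumnInterpreter interpreter = new LongColumnInterpreter();
    Long value = interpreter.getValue(family, qualifier, kv);
    System.out.println(value);  // 42
  }
}
```
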
Uses of KeyValue in org.apache.hadoop.hbase.codec |
---|
Fields in org.apache.hadoop.hbase.codec declared as KeyValue | |
---|---|
protected KeyValue |
BaseDecoder.current
|
Methods in org.apache.hadoop.hbase.codec that return KeyValue | |
---|---|
KeyValue |
Decoder.current()
|
KeyValue |
BaseDecoder.current()
|
protected KeyValue |
KeyValueCodec.KeyValueDecoder.parseCell()
|
protected abstract KeyValue |
BaseDecoder.parseCell()
|
Methods in org.apache.hadoop.hbase.codec with parameters of type KeyValue | |
---|---|
abstract void |
BaseEncoder.write(KeyValue cell)
|
void |
Encoder.write(KeyValue cell)
Implementation must copy the entire state of the cell. |
void |
KeyValueCodec.KeyValueEncoder.write(KeyValue kv)
|
Uses of KeyValue in org.apache.hadoop.hbase.coprocessor |
---|
Methods in org.apache.hadoop.hbase.coprocessor with parameters of type KeyValue | |
---|---|
T |
ColumnInterpreter.getValue(byte[] colFamily,
byte[] colQualifier,
KeyValue kv)
|
Method parameters in org.apache.hadoop.hbase.coprocessor with type arguments of type KeyValue | |
---|---|
void |
BaseRegionObserver.postGet(ObserverContext<RegionCoprocessorEnvironment> e,
Get get,
List<KeyValue> results)
|
void |
RegionObserver.postGet(ObserverContext<RegionCoprocessorEnvironment> c,
Get get,
List<KeyValue> result)
Called after the client performs a Get |
void |
BaseRegionObserver.preGet(ObserverContext<RegionCoprocessorEnvironment> e,
Get get,
List<KeyValue> results)
|
void |
RegionObserver.preGet(ObserverContext<RegionCoprocessorEnvironment> c,
Get get,
List<KeyValue> result)
Called before the client performs a Get |
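
A minimal observer sketch using the hooks above (GetAuditObserver and the meta:audited column are hypothetical; a real observer would be registered on the table or region server):

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.util.Bytes;

public class GetAuditObserver extends BaseRegionObserver {
  @Override
  public void postGet(ObserverContext<RegionCoprocessorEnvironment> e,
      Get get, List<KeyValue> results) throws IOException {
    // 'results' is the row exactly as it will be returned to the client;
    // observers may inspect or modify it in place.
    results.add(new KeyValue(get.getRow(), Bytes.toBytes("meta"),
        Bytes.toBytes("audited"), Bytes.toBytes(true)));

    // Keep the list sorted in KeyValue order after modifying it.
    Collections.sort(results, KeyValue.COMPARATOR);
  }
}
```
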
Uses of KeyValue in org.apache.hadoop.hbase.filter |
---|
Methods in org.apache.hadoop.hbase.filter that return KeyValue | |
---|---|
KeyValue |
MultipleColumnPrefixFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
FilterList.getNextKeyHint(KeyValue currentKV)
|
KeyValue |
FilterBase.getNextKeyHint(KeyValue currentKV)
Filters that are unsure which key to seek to next can inherit this implementation, which by default returns a null KeyValue. |
KeyValue |
FuzzyRowFilter.getNextKeyHint(KeyValue currentKV)
|
KeyValue |
Filter.getNextKeyHint(KeyValue currentKV)
If the filter returns the match code SEEK_NEXT_USING_HINT, then it should also tell which is the next key it must seek to. |
KeyValue |
ColumnRangeFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
ColumnPrefixFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
SkipFilter.transform(KeyValue v)
|
KeyValue |
FilterList.transform(KeyValue v)
|
KeyValue |
FilterBase.transform(KeyValue v)
By default no transformation takes place |
KeyValue |
Filter.transform(KeyValue v)
Give the filter a chance to transform the passed KeyValue. |
KeyValue |
WhileMatchFilter.transform(KeyValue v)
|
KeyValue |
KeyOnlyFilter.transform(KeyValue kv)
|
Methods in org.apache.hadoop.hbase.filter with parameters of type KeyValue | |
---|---|
Filter.ReturnCode |
MultipleColumnPrefixFilter.filterKeyValue(KeyValue kv)
|
Filter.ReturnCode |
SingleColumnValueFilter.filterKeyValue(KeyValue keyValue)
|
Filter.ReturnCode |
RandomRowFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
RowFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
ColumnCountGetFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
TimestampsFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
ColumnPaginationFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
SkipFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
FilterList.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
DependentColumnFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
InclusiveStopFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
FilterBase.filterKeyValue(KeyValue ignored)
Filters that don't filter by KeyValue can inherit this implementation, which includes all KeyValues. |
Filter.ReturnCode |
FamilyFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
FuzzyRowFilter.filterKeyValue(KeyValue kv)
|
Filter.ReturnCode |
FirstKeyOnlyFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
Filter.filterKeyValue(KeyValue v)
A way to filter based on the column family, column qualifier and/or the column value. |
Filter.ReturnCode |
QualifierFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
PrefixFilter.filterKeyValue(KeyValue ignored)
|
Filter.ReturnCode |
ColumnRangeFilter.filterKeyValue(KeyValue kv)
|
Filter.ReturnCode |
WhileMatchFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
ValueFilter.filterKeyValue(KeyValue v)
|
Filter.ReturnCode |
ColumnPrefixFilter.filterKeyValue(KeyValue kv)
|
KeyValue |
MultipleColumnPrefixFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
FilterList.getNextKeyHint(KeyValue currentKV)
|
KeyValue |
FilterBase.getNextKeyHint(KeyValue currentKV)
Filters that are unsure which key to seek to next can inherit this implementation, which by default returns a null KeyValue. |
KeyValue |
FuzzyRowFilter.getNextKeyHint(KeyValue currentKV)
|
KeyValue |
Filter.getNextKeyHint(KeyValue currentKV)
If the filter returns the match code SEEK_NEXT_USING_HINT, then it should also tell which is the next key it must seek to. |
KeyValue |
ColumnRangeFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
ColumnPrefixFilter.getNextKeyHint(KeyValue kv)
|
KeyValue |
SkipFilter.transform(KeyValue v)
|
KeyValue |
FilterList.transform(KeyValue v)
|
KeyValue |
FilterBase.transform(KeyValue v)
By default no transformation takes place |
KeyValue |
Filter.transform(KeyValue v)
Give the filter a chance to transform the passed KeyValue. |
KeyValue |
WhileMatchFilter.transform(KeyValue v)
|
KeyValue |
KeyOnlyFilter.transform(KeyValue kv)
|
Method parameters in org.apache.hadoop.hbase.filter with type arguments of type KeyValue | |
---|---|
void |
FilterList.filterRow(List<KeyValue> kvs)
|
void |
DependentColumnFilter.filterRow(List<KeyValue> kvs)
|
void |
FilterBase.filterRow(List<KeyValue> ignored)
Filters that never filter by modifying the returned List of KeyValues can inherit this implementation that does nothing. |
void |
Filter.filterRow(List<KeyValue> kvs)
Chance to alter the list of keyvalues to be submitted. |
void |
SingleColumnValueExcludeFilter.filterRow(List<KeyValue> kvs)
|
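
To make the filter contract concrete, here is a sketch of a custom filter built on FilterBase (ValuePrefixFilter is hypothetical; note that in this API generation a filter must also implement the Writable methods so it can be shipped to the region servers):

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;
import org.apache.hadoop.hbase.util.Bytes;

/** Keeps only cells whose value starts with a prefix, and strips the values. */
public class ValuePrefixFilter extends FilterBase {
  private byte[] prefix;

  public ValuePrefixFilter() {
    // no-arg constructor required for deserialization
  }

  public ValuePrefixFilter(byte[] prefix) {
    this.prefix = prefix;
  }

  @Override
  public ReturnCode filterKeyValue(KeyValue kv) {
    // Decide cell by cell: include matching values, skip the rest.
    return Bytes.startsWith(kv.getValue(), prefix)
        ? ReturnCode.INCLUDE : ReturnCode.SKIP;
  }

  @Override
  public KeyValue transform(KeyValue kv) {
    // Return a key-only copy so the client never sees the value bytes.
    return kv.createKeyOnly(false);
  }

  @Override
  public void write(DataOutput out) throws IOException {
    Bytes.writeByteArray(out, prefix);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    prefix = Bytes.readByteArray(in);
  }
}
```
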
Uses of KeyValue in org.apache.hadoop.hbase.io.encoding |
---|
Methods in org.apache.hadoop.hbase.io.encoding that return KeyValue | |
---|---|
KeyValue |
DataBlockEncoder.EncodedSeeker.getKeyValue()
|
Methods in org.apache.hadoop.hbase.io.encoding that return types with arguments of type KeyValue | |
---|---|
Iterator<KeyValue> |
EncodedDataBlock.getIterator()
Provides access to the compressed value. |
Methods in org.apache.hadoop.hbase.io.encoding with parameters of type KeyValue | |
---|---|
void |
EncodedDataBlock.addKv(KeyValue kv)
Add KeyValue and compress it. |
Uses of KeyValue in org.apache.hadoop.hbase.io.hfile |
---|
Methods in org.apache.hadoop.hbase.io.hfile that return KeyValue | |
---|---|
KeyValue |
HFileReaderV2.ScannerV2.getKeyValue()
|
KeyValue |
HFileReaderV2.EncodedScannerV2.getKeyValue()
|
KeyValue |
HFileScanner.getKeyValue()
|
KeyValue |
HFileReaderV1.ScannerV1.getKeyValue()
|
Methods in org.apache.hadoop.hbase.io.hfile with parameters of type KeyValue | |
---|---|
void |
HFile.Writer.append(KeyValue kv)
|
void |
HFileWriterV2.append(KeyValue kv)
Add key/value to file. |
void |
HFileWriterV1.append(KeyValue kv)
Add key/value to file. |
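
A sketch of iterating the cells of an existing HFile through HFileScanner.getKeyValue() (it assumes the HFile.createReader / Reader.loadFileInfo API of this generation; the file path is supplied on the command line):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;

public class HFileDumpExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path(args[0]);  // path to an existing HFile

    HFile.Reader reader = HFile.createReader(fs, path, new CacheConfig(conf));
    reader.loadFileInfo();
    HFileScanner scanner = reader.getScanner(false, false);  // no block cache, no pread
    if (scanner.seekTo()) {                                  // position at the first cell
      do {
        KeyValue kv = scanner.getKeyValue();                 // materialize the current cell
        System.out.println(kv);
      } while (scanner.next());
    }
    reader.close();
  }
}
```
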
Uses of KeyValue in org.apache.hadoop.hbase.mapreduce |
---|
Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type KeyValue | |
---|---|
org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,KeyValue> |
HFileOutputFormat.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
|
Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type KeyValue | |
---|---|
protected void |
KeyValueSortReducer.reduce(ImmutableBytesWritable row,
Iterable<KeyValue> kvs,
org.apache.hadoop.mapreduce.Reducer.Context context)
|
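
The RecordWriter and reducer above are normally wired together by HFileOutputFormat.configureIncrementalLoad, as in this job sketch (the table name my_table, the input format, and the tab-separated input layout are all illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadJobSketch {

  /** Turns each "row<TAB>value" input line into one cell in cf:q. */
  public static class LineToKeyValueMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, KeyValue> {
    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
        throws IOException, InterruptedException {
      String[] parts = line.toString().split("\t", 2);
      byte[] row = Bytes.toBytes(parts[0]);
      KeyValue kv = new KeyValue(row, Bytes.toBytes("cf"),
          Bytes.toBytes("q"), Bytes.toBytes(parts[1]));
      ctx.write(new ImmutableBytesWritable(row), kv);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "bulk-load-prepare");
    job.setJarByClass(BulkLoadJobSketch.class);
    job.setMapperClass(LineToKeyValueMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(KeyValue.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // Configures HFileOutputFormat, total-order partitioning and the
    // KeyValueSortReducer so the generated HFiles line up with the regions.
    HTable table = new HTable(conf, "my_table");
    HFileOutputFormat.configureIncrementalLoad(job, table);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```
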
Uses of KeyValue in org.apache.hadoop.hbase.regionserver |
---|
Methods in org.apache.hadoop.hbase.regionserver that return KeyValue | |
---|---|
KeyValue |
KeyValueSkipListSet.ceiling(KeyValue e)
|
KeyValue |
KeyValueSkipListSet.first()
|
KeyValue |
KeyValueSkipListSet.floor(KeyValue e)
|
KeyValue |
KeyValueSkipListSet.get(KeyValue kv)
|
KeyValue |
ScanQueryMatcher.getKeyForNextColumn(KeyValue kv)
|
KeyValue |
ScanQueryMatcher.getKeyForNextRow(KeyValue kv)
|
KeyValue |
ScanQueryMatcher.getNextKeyHint(KeyValue kv)
|
KeyValue |
ScanQueryMatcher.getStartKey()
|
KeyValue |
KeyValueSkipListSet.higher(KeyValue e)
|
KeyValue |
KeyValueSkipListSet.last()
|
KeyValue |
KeyValueSkipListSet.lower(KeyValue e)
|
KeyValue |
KeyValueScanner.next()
Return the next KeyValue in this scanner, iterating the scanner |
KeyValue |
StoreScanner.next()
|
KeyValue |
StoreFileScanner.next()
|
KeyValue |
MemStore.MemStoreScanner.next()
|
KeyValue |
KeyValueHeap.next()
|
KeyValue |
KeyValueScanner.peek()
Look at the next KeyValue in this scanner, but do not iterate scanner. |
KeyValue |
StoreScanner.peek()
|
KeyValue |
StoreFileScanner.peek()
|
KeyValue |
MemStore.MemStoreScanner.peek()
|
KeyValue |
KeyValueHeap.peek()
|
KeyValue |
KeyValueSkipListSet.pollFirst()
|
KeyValue |
KeyValueSkipListSet.pollLast()
|
Methods in org.apache.hadoop.hbase.regionserver that return types with arguments of type KeyValue | |
---|---|
Comparator<? super KeyValue> |
KeyValueSkipListSet.comparator()
|
Iterator<KeyValue> |
KeyValueSkipListSet.descendingIterator()
|
NavigableSet<KeyValue> |
KeyValueSkipListSet.descendingSet()
|
SortedSet<KeyValue> |
KeyValueSkipListSet.headSet(KeyValue toElement)
|
NavigableSet<KeyValue> |
KeyValueSkipListSet.headSet(KeyValue toElement,
boolean inclusive)
|
Iterator<KeyValue> |
KeyValueSkipListSet.iterator()
|
NavigableSet<KeyValue> |
KeyValueSkipListSet.subSet(KeyValue fromElement,
boolean fromInclusive,
KeyValue toElement,
boolean toInclusive)
|
SortedSet<KeyValue> |
KeyValueSkipListSet.subSet(KeyValue fromElement,
KeyValue toElement)
|
SortedSet<KeyValue> |
KeyValueSkipListSet.tailSet(KeyValue fromElement)
|
NavigableSet<KeyValue> |
KeyValueSkipListSet.tailSet(KeyValue fromElement,
boolean inclusive)
|
Methods in org.apache.hadoop.hbase.regionserver with parameters of type KeyValue | |
---|---|
protected long |
Store.add(KeyValue kv)
Adds a value to the memstore |
boolean |
KeyValueSkipListSet.add(KeyValue e)
|
void |
StoreFile.Writer.append(KeyValue kv)
|
KeyValue |
KeyValueSkipListSet.ceiling(KeyValue e)
|
protected long |
Store.delete(KeyValue kv)
Adds a delete KeyValue to the memstore |
static boolean |
NonLazyKeyValueScanner.doRealSeek(KeyValueScanner scanner,
KeyValue kv,
boolean forward)
|
KeyValue |
KeyValueSkipListSet.floor(KeyValue e)
|
KeyValue |
KeyValueSkipListSet.get(KeyValue kv)
|
KeyValue |
ScanQueryMatcher.getKeyForNextColumn(KeyValue kv)
|
KeyValue |
ScanQueryMatcher.getKeyForNextRow(KeyValue kv)
|
KeyValue |
ScanQueryMatcher.getNextKeyHint(KeyValue kv)
|
SortedSet<KeyValue> |
KeyValueSkipListSet.headSet(KeyValue toElement)
|
NavigableSet<KeyValue> |
KeyValueSkipListSet.headSet(KeyValue toElement,
boolean inclusive)
|
KeyValue |
KeyValueSkipListSet.higher(KeyValue e)
|
void |
TimeRangeTracker.includeTimestamp(KeyValue kv)
Update the current TimestampRange to include the timestamp from the KeyValue. If the Key is of type DeleteColumn or DeleteFamily, it includes the entire time range from 0 to the timestamp of the key. |
KeyValue |
KeyValueSkipListSet.lower(KeyValue e)
|
ScanQueryMatcher.MatchCode |
ScanQueryMatcher.match(KeyValue kv)
Determines if the caller should do one of several things: seek/skip to the next row (MatchCode.SEEK_NEXT_ROW); seek/skip to the next column (MatchCode.SEEK_NEXT_COL); include the current KeyValue (MatchCode.INCLUDE); ignore the current KeyValue (MatchCode.SKIP); or go to the next row (MatchCode.DONE). |
boolean |
ScanQueryMatcher.moreRowsMayExistAfter(KeyValue kv)
|
boolean |
NonLazyKeyValueScanner.requestSeek(KeyValue kv,
boolean forward,
boolean useBloom)
|
boolean |
KeyValueScanner.requestSeek(KeyValue kv,
boolean forward,
boolean useBloom)
Similar to KeyValueScanner.seek(org.apache.hadoop.hbase.KeyValue) (or KeyValueScanner.reseek(org.apache.hadoop.hbase.KeyValue) if forward is true) but only
does a seek operation after checking that it is really necessary for the
row/column combination specified by the kv parameter. |
boolean |
StoreFileScanner.requestSeek(KeyValue kv,
boolean forward,
boolean useBloom)
Pretend we have done a seek but don't do it yet, if possible. |
boolean |
KeyValueHeap.requestSeek(KeyValue key,
boolean forward,
boolean useBloom)
Similar to KeyValueScanner.seek(org.apache.hadoop.hbase.KeyValue) (or KeyValueScanner.reseek(org.apache.hadoop.hbase.KeyValue) if forward is true) but only
does a seek operation after checking that it is really necessary for the
row/column combination specified by the kv parameter. |
boolean |
KeyValueScanner.reseek(KeyValue key)
Reseek the scanner at or after the specified KeyValue. |
boolean |
StoreScanner.reseek(KeyValue kv)
|
boolean |
StoreFileScanner.reseek(KeyValue key)
|
boolean |
MemStore.MemStoreScanner.reseek(KeyValue key)
Move forward on the sub-lists set previously by seek. |
boolean |
KeyValueHeap.reseek(KeyValue seekKey)
This function is identical to the KeyValueHeap.seek(KeyValue) function except
that scanner.seek(seekKey) is changed to scanner.reseek(seekKey). |
protected boolean |
HRegion.restoreEdit(Store s,
KeyValue kv)
Used by tests |
protected void |
Store.rollback(KeyValue kv)
Removes a kv from the memstore. |
boolean |
KeyValueScanner.seek(KeyValue key)
Seek the scanner at or after the specified KeyValue. |
boolean |
StoreScanner.seek(KeyValue key)
|
boolean |
StoreFileScanner.seek(KeyValue key)
|
boolean |
MemStore.MemStoreScanner.seek(KeyValue key)
Set the scanner at the seek key. |
boolean |
KeyValueHeap.seek(KeyValue seekKey)
Seeks all scanners at or below the specified seek key. |
static boolean |
StoreFileScanner.seekAtOrAfter(HFileScanner s,
KeyValue k)
|
NavigableSet<KeyValue> |
KeyValueSkipListSet.subSet(KeyValue fromElement,
boolean fromInclusive,
KeyValue toElement,
boolean toInclusive)
|
SortedSet<KeyValue> |
KeyValueSkipListSet.subSet(KeyValue fromElement,
KeyValue toElement)
|
SortedSet<KeyValue> |
KeyValueSkipListSet.tailSet(KeyValue fromElement)
|
NavigableSet<KeyValue> |
KeyValueSkipListSet.tailSet(KeyValue fromElement,
boolean inclusive)
|
void |
StoreFile.Writer.trackTimestamps(KeyValue kv)
Record the earliest Put timestamp. |
Method parameters in org.apache.hadoop.hbase.regionserver with type arguments of type KeyValue | |
---|---|
boolean |
KeyValueSkipListSet.addAll(Collection<? extends KeyValue> c)
|
protected org.apache.hadoop.fs.Path |
Store.flushCache(long logCacheFlushId,
SortedSet<KeyValue> snapshot,
TimeRangeTracker snapshotTimeRangeTracker,
AtomicLong flushedSize,
MonitoredTask status)
Write out current snapshot. |
boolean |
StoreScanner.next(List<KeyValue> outResult)
|
boolean |
InternalScanner.next(List<KeyValue> results)
Grab the next row's worth of values. |
boolean |
KeyValueHeap.next(List<KeyValue> result)
Gets the next row of keys from the top-most scanner. |
boolean |
StoreScanner.next(List<KeyValue> outResult,
int limit)
Get the next row of values from this Store. |
boolean |
InternalScanner.next(List<KeyValue> result,
int limit)
Grab the next row's worth of values with a limit on the number of values to return. |
boolean |
KeyValueHeap.next(List<KeyValue> result,
int limit)
Gets the next row of keys from the top-most scanner. |
boolean |
StoreScanner.next(List<KeyValue> outResult,
int limit,
String metric)
Get the next row of values from this Store. |
boolean |
InternalScanner.next(List<KeyValue> result,
int limit,
String metric)
Grab the next row's worth of values with a limit on the number of values to return. |
boolean |
KeyValueHeap.next(List<KeyValue> result,
int limit,
String metric)
Gets the next row of keys from the top-most scanner. |
boolean |
StoreScanner.next(List<KeyValue> outResult,
String metric)
|
boolean |
InternalScanner.next(List<KeyValue> results,
String metric)
Grab the next row's worth of values. |
boolean |
KeyValueHeap.next(List<KeyValue> result,
String metric)
|
boolean |
RegionScanner.nextRaw(List<KeyValue> result,
int limit,
String metric)
Grab the next row's worth of values with a limit on the number of values to return. |
boolean |
RegionScanner.nextRaw(List<KeyValue> result,
String metric)
Grab the next row's worth of values with the default limit on the number of values to return. |
void |
RegionCoprocessorHost.postGet(Get get,
List<KeyValue> results)
|
boolean |
RegionCoprocessorHost.preGet(Get get,
List<KeyValue> results)
|
long |
Store.upsert(List<KeyValue> kvs)
Adds or replaces the specified KeyValues. |
long |
MemStore.upsert(List<KeyValue> kvs)
Update or insert the specified KeyValues. |
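
The seek/reseek/peek/next contract described above can be exercised against any KeyValueScanner; the sketch below uses org.apache.hadoop.hbase.util.CollectionBackedScanner (listed further down) as a simple in-memory implementation, with illustrative row and column names:

```java
import java.util.Arrays;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.CollectionBackedScanner;

public class ScannerContractExample {
  public static void main(String[] args) throws Exception {
    KeyValue a = new KeyValue(Bytes.toBytes("a"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), Bytes.toBytes("1"));
    KeyValue b = new KeyValue(Bytes.toBytes("b"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), Bytes.toBytes("2"));

    // An in-memory, already-sorted collection exposed as a KeyValueScanner.
    CollectionBackedScanner scanner =
        new CollectionBackedScanner(Arrays.asList(a, b));

    // seek() positions at or after the given key.
    scanner.seek(KeyValue.createFirstOnRow(Bytes.toBytes("b")));
    System.out.println(scanner.peek());   // the "b" cell; peek() does not advance
    System.out.println(scanner.next());   // returns the "b" cell and advances
    System.out.println(scanner.peek());   // null: the scanner is exhausted
  }
}
```
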
Uses of KeyValue in org.apache.hadoop.hbase.regionserver.wal |
---|
Methods in org.apache.hadoop.hbase.regionserver.wal that return types with arguments of type KeyValue | |
---|---|
List<KeyValue> |
WALEdit.getKeyValues()
|
Methods in org.apache.hadoop.hbase.regionserver.wal with parameters of type KeyValue | |
---|---|
void |
WALEdit.add(KeyValue kv)
|
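
A small sketch of how cells are batched into a WALEdit before being appended to the write-ahead log (the row, family and value are illustrative):

```java
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

public class WalEditExample {
  public static void main(String[] args) {
    // One WALEdit collects all the cells of a single transaction.
    WALEdit edit = new WALEdit();
    edit.add(new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), Bytes.toBytes("v")));

    System.out.println(edit.getKeyValues().size());  // 1
  }
}
```
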
Uses of KeyValue in org.apache.hadoop.hbase.rest |
---|
Methods in org.apache.hadoop.hbase.rest that return KeyValue | |
---|---|
KeyValue |
RowResultGenerator.next()
|
KeyValue |
ScannerResultGenerator.next()
|
Methods in org.apache.hadoop.hbase.rest with parameters of type KeyValue | |
---|---|
abstract void |
ResultGenerator.putBack(KeyValue kv)
|
void |
RowResultGenerator.putBack(KeyValue kv)
|
void |
ScannerResultGenerator.putBack(KeyValue kv)
|
Uses of KeyValue in org.apache.hadoop.hbase.rest.model |
---|
Constructors in org.apache.hadoop.hbase.rest.model with parameters of type KeyValue | |
---|---|
CellModel(KeyValue kv)
Constructor from KeyValue |
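
A sketch of wrapping a KeyValue in the REST cell representation (the column names are illustrative, and the getColumn/getValue accessors are assumed from the CellModel bean):

```java
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.rest.model.CellModel;
import org.apache.hadoop.hbase.util.Bytes;

public class CellModelExample {
  public static void main(String[] args) {
    KeyValue kv = new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), Bytes.toBytes("v"));

    // The model carries column ("cf:q"), timestamp and value, ready for the
    // REST XML/JSON/protobuf serializers.
    CellModel cell = new CellModel(kv);
    System.out.println(Bytes.toString(cell.getColumn()) + " = "
        + Bytes.toString(cell.getValue()));
  }
}
```
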
Uses of KeyValue in org.apache.hadoop.hbase.security.access |
---|
Methods in org.apache.hadoop.hbase.security.access with parameters of type KeyValue | |
---|---|
boolean |
TableAuthManager.authorize(User user,
byte[] table,
KeyValue kv,
Permission.Action action)
|
boolean |
TablePermission.implies(byte[] table,
KeyValue kv,
Permission.Action action)
Checks if this permission grants access to perform the given action on the given table and key value. |
Method parameters in org.apache.hadoop.hbase.security.access with type arguments of type KeyValue | |
---|---|
void |
AccessController.preGet(ObserverContext<RegionCoprocessorEnvironment> c,
Get get,
List<KeyValue> result)
|
Uses of KeyValue in org.apache.hadoop.hbase.thrift |
---|
Methods in org.apache.hadoop.hbase.thrift with parameters of type KeyValue | |
---|---|
static List<TCell> |
ThriftUtilities.cellFromHBase(KeyValue in)
This utility method creates a list of Thrift TCell "struct" based on an HBase Cell object. |
static List<TCell> |
ThriftUtilities.cellFromHBase(KeyValue[] in)
This utility method creates a list of Thrift TCell "struct" based on an HBase Cell array. |
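
A sketch of converting a server-side KeyValue into its Thrift representation (TCell comes from the generated org.apache.hadoop.hbase.thrift.generated package; the cell contents are illustrative):

```java
import java.util.List;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.thrift.ThriftUtilities;
import org.apache.hadoop.hbase.thrift.generated.TCell;
import org.apache.hadoop.hbase.util.Bytes;

public class ThriftCellExample {
  public static void main(String[] args) {
    KeyValue kv = new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), Bytes.toBytes("v"));

    // One KeyValue becomes a single-element list of Thrift TCell structs.
    List<TCell> cells = ThriftUtilities.cellFromHBase(kv);
    System.out.println(cells.size() + " cell(s), timestamp "
        + cells.get(0).timestamp);
  }
}
```
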
Uses of KeyValue in org.apache.hadoop.hbase.util |
---|
Methods in org.apache.hadoop.hbase.util that return KeyValue | |
---|---|
KeyValue |
CollectionBackedScanner.next()
|
KeyValue |
CollectionBackedScanner.peek()
|
Methods in org.apache.hadoop.hbase.util with parameters of type KeyValue | |
---|---|
boolean |
CollectionBackedScanner.reseek(KeyValue seekKv)
|
boolean |
CollectionBackedScanner.seek(KeyValue seekKv)
|
Constructors in org.apache.hadoop.hbase.util with parameters of type KeyValue | |
---|---|
CollectionBackedScanner(KeyValue.KVComparator comparator,
KeyValue... array)
|
Constructor parameters in org.apache.hadoop.hbase.util with type arguments of type KeyValue | |
---|---|
CollectionBackedScanner(List<KeyValue> list)
|
|
CollectionBackedScanner(List<KeyValue> list,
KeyValue.KVComparator comparator)
|
|
CollectionBackedScanner(SortedSet<KeyValue> set)
|
|
CollectionBackedScanner(SortedSet<KeyValue> set,
KeyValue.KVComparator comparator)
|
|