Packages that use Result | |
---|---|
org.apache.hadoop.hbase.avro | Provides an HBase Avro service. |
org.apache.hadoop.hbase.catalog | |
org.apache.hadoop.hbase.client | Provides HBase Client. |
org.apache.hadoop.hbase.ipc | Tools to help define network clients and servers. |
org.apache.hadoop.hbase.mapred | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mapreduce.replication | |
org.apache.hadoop.hbase.master.handler | |
org.apache.hadoop.hbase.regionserver | |
org.apache.hadoop.hbase.rest.client | |
org.apache.hadoop.hbase.thrift | Provides an HBase Thrift service. |
Uses of Result in org.apache.hadoop.hbase.avro |
---|
Methods in org.apache.hadoop.hbase.avro with parameters of type Result | |
---|---|
static org.apache.avro.generic.GenericArray<AResult> |
AvroUtil.resultsToAResults(Result[] results)
|
static AResult |
AvroUtil.resultToAResult(Result result)
|
Uses of Result in org.apache.hadoop.hbase.catalog |
---|
Methods in org.apache.hadoop.hbase.catalog that return types with arguments of type Result | |
---|---|
static List<Result> |
MetaReader.fullScanOfResults(CatalogTracker catalogTracker)
Performs a full scan of .META.. |
static NavigableMap<HRegionInfo,Result> |
MetaReader.getServerUserRegions(CatalogTracker catalogTracker,
HServerInfo hsi)
|
Methods in org.apache.hadoop.hbase.catalog with parameters of type Result | |
---|---|
static Pair<HRegionInfo,HServerAddress> |
MetaReader.metaRowToRegionPair(Result data)
|
static Pair<HRegionInfo,HServerInfo> |
MetaReader.metaRowToRegionPairWithInfo(Result data)
|
boolean |
MetaReader.Visitor.visit(Result r)
Visit the catalog table row. |
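Example usage (a hedged fragment: the started CatalogTracker instance ct is assumed to be available from the surrounding code):

    // Scan the catalog table and decode each row into a region/server pair.
    List<Result> metaRows = MetaReader.fullScanOfResults(ct);
    for (Result row : metaRows) {
      Pair<HRegionInfo, HServerAddress> pair = MetaReader.metaRowToRegionPair(row);
      if (pair != null) {
        System.out.println(pair.getFirst().getRegionNameAsString()
            + " is on " + pair.getSecond());
      }
    }
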
Uses of Result in org.apache.hadoop.hbase.client |
---|
Methods in org.apache.hadoop.hbase.client that return Result | |
---|---|
Result[] |
ScannerCallable.call()
|
Result |
HTable.get(Get get)
|
Result |
HTableInterface.get(Get get)
Extracts certain cells from a given row. |
Result[] |
HTable.get(List<Get> gets)
|
Result[] |
HTableInterface.get(List<Get> gets)
Extracts certain cells from the given rows, in batch. |
Result |
Action.getResult()
|
Result |
HTable.getRowOrBefore(byte[] row,
byte[] family)
|
Result |
HTableInterface.getRowOrBefore(byte[] row,
byte[] family)
Return the row that matches row exactly, or the one that immediately precedes it. |
Result |
HTable.increment(Increment increment)
|
Result |
HTableInterface.increment(Increment increment)
Increments one or more columns within a single row. |
Result |
HTable.ClientScanner.next()
|
Result |
ResultScanner.next()
Grab the next row's worth of values. |
Result[] |
HTable.ClientScanner.next(int nbRows)
Get nbRows rows. |
Result[] |
ResultScanner.next(int nbRows)
|
static Result[] |
Result.readArray(DataInput in)
|
Methods in org.apache.hadoop.hbase.client that return types with arguments of type Result | |
---|---|
Iterator<Result> |
HTable.ClientScanner.iterator()
|
Methods in org.apache.hadoop.hbase.client with parameters of type Result | |
---|---|
static void |
Result.compareResults(Result res1,
Result res2)
Does a deep comparison of two Results, down to the byte arrays. |
static long |
Result.getWriteArraySize(Result[] results)
|
boolean |
MetaScanner.MetaScannerVisitor.processRow(Result rowResult)
Visitor method that accepts each row of the meta table as a Result. |
void |
Action.setResult(Result result)
|
static void |
Result.writeArray(DataOutput out,
Result[] results)
|
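Example usage (a minimal, self-contained sketch; the table name "mytable", column family "cf", qualifier "q", and row key "row1" are illustrative assumptions, not part of the API):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ResultReadExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");

        // Single-row read: HTable.get(Get) returns one Result.
        Result result = table.get(new Get(Bytes.toBytes("row1")));
        byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
        System.out.println(value == null ? "no cell" : Bytes.toString(value));

        // Scan: iterating a ResultScanner yields one Result per row.
        ResultScanner scanner = table.getScanner(new Scan());
        try {
          for (Result row : scanner) {
            System.out.println(Bytes.toString(row.getRow()));
          }
        } finally {
          scanner.close();
        }
        table.close();
      }
    }
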
Uses of Result in org.apache.hadoop.hbase.ipc |
---|
Methods in org.apache.hadoop.hbase.ipc that return Result | |
---|---|
Result |
HRegionInterface.get(byte[] regionName,
Get get)
Perform Get operation. |
Result |
HRegionInterface.getClosestRowBefore(byte[] regionName,
byte[] row,
byte[] family)
Return all the data for the row that matches row exactly, or the one that immediately precedes it. |
Result |
HRegionInterface.increment(byte[] regionName,
Increment increment)
Increments one or more column values in a row. |
Result |
HRegionInterface.next(long scannerId)
Get the next set of values. |
Result[] |
HRegionInterface.next(long scannerId,
int numberOfRows)
Get the next set of values. |
Uses of Result in org.apache.hadoop.hbase.mapred |
---|
Methods in org.apache.hadoop.hbase.mapred that return Result | |
---|---|
Result |
TableRecordReader.createValue()
|
Result |
TableRecordReaderImpl.createValue()
|
Methods in org.apache.hadoop.hbase.mapred that return types with arguments of type Result | |
---|---|
org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> |
TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split,
org.apache.hadoop.mapred.JobConf job,
org.apache.hadoop.mapred.Reporter reporter)
Deprecated. Builds a TableRecordReader. |
Methods in org.apache.hadoop.hbase.mapred with parameters of type Result | |
---|---|
protected byte[][] |
GroupingTableMap.extractKeyValues(Result r)
Deprecated. Extract column values from the current record. |
void |
GroupingTableMap.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output,
org.apache.hadoop.mapred.Reporter reporter)
Deprecated. Extract the grouping columns from value to construct a new key. |
void |
IdentityTableMap.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output,
org.apache.hadoop.mapred.Reporter reporter)
Deprecated. Pass the key, value to reduce. |
boolean |
TableRecordReader.next(ImmutableBytesWritable key,
Result value)
|
boolean |
TableRecordReaderImpl.next(ImmutableBytesWritable key,
Result value)
|
Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type Result | |
---|---|
void |
GroupingTableMap.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output,
org.apache.hadoop.mapred.Reporter reporter)
Deprecated. Extract the grouping columns from value to construct a new key. |
void |
IdentityTableMap.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output,
org.apache.hadoop.mapred.Reporter reporter)
Deprecated. Pass the key, value to reduce. |
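Example usage (a hedged sketch of the deprecated mapred API; the class name RowPassThroughMap is illustrative and simply mirrors IdentityTableMap above):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapred.TableMap;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class RowPassThroughMap extends MapReduceBase
        implements TableMap<ImmutableBytesWritable, Result> {

      // Emit each scanned row unchanged, like IdentityTableMap.
      public void map(ImmutableBytesWritable key, Result value,
          OutputCollector<ImmutableBytesWritable, Result> output,
          Reporter reporter) throws IOException {
        output.collect(key, value);
      }
    }
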
Uses of Result in org.apache.hadoop.hbase.mapreduce |
---|
Methods in org.apache.hadoop.hbase.mapreduce that return Result | |
---|---|
Result |
TableRecordReader.getCurrentValue()
Returns the current value. |
Result |
TableRecordReaderImpl.getCurrentValue()
Returns the current value. |
Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type Result | |
---|---|
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> |
TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split,
org.apache.hadoop.mapreduce.TaskAttemptContext context)
Builds a TableRecordReader. |
Methods in org.apache.hadoop.hbase.mapreduce with parameters of type Result | |
---|---|
protected byte[][] |
GroupingTableMapper.extractKeyValues(Result r)
Extract column values from the current record. |
void |
GroupingTableMapper.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapreduce.Mapper.Context context)
Extract the grouping columns from value to construct a new key. |
void |
IdentityTableMapper.map(ImmutableBytesWritable key,
Result value,
org.apache.hadoop.mapreduce.Mapper.Context context)
Pass the key, value to reduce. |
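Example usage (a hedged sketch; the class name CellValueMapper, family "cf", and qualifier "q" are illustrative assumptions):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.Text;

    public class CellValueMapper extends TableMapper<Text, Text> {

      // For each scanned row, emit (row key, cell value) when the cell is present.
      @Override
      public void map(ImmutableBytesWritable row, Result value, Context context)
          throws IOException, InterruptedException {
        byte[] cell = value.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
        if (cell != null) {
          context.write(new Text(Bytes.toString(row.get())),
              new Text(Bytes.toString(cell)));
        }
      }
    }
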
Uses of Result in org.apache.hadoop.hbase.mapreduce.replication |
---|
Methods in org.apache.hadoop.hbase.mapreduce.replication with parameters of type Result | |
---|---|
void |
VerifyReplication.Verifier.map(ImmutableBytesWritable row,
Result value,
org.apache.hadoop.mapreduce.Mapper.Context context)
Map method that compares every scanned row with the equivalent from a distant cluster. |
Uses of Result in org.apache.hadoop.hbase.master.handler |
---|
Methods in org.apache.hadoop.hbase.master.handler with parameters of type Result | |
---|---|
static boolean |
ServerShutdownHandler.processDeadRegion(HRegionInfo hri,
Result result,
AssignmentManager assignmentManager,
CatalogTracker catalogTracker)
Process a dead region from a dead region server. |
Uses of Result in org.apache.hadoop.hbase.regionserver |
---|
Methods in org.apache.hadoop.hbase.regionserver that return Result | |
---|---|
Result |
HRegionServer.get(byte[] regionName,
Get get)
Perform Get operation. |
Result |
HRegion.get(Get get,
Integer lockid)
|
Result |
HRegion.getClosestRowBefore(byte[] row,
byte[] family)
Return all the data for the row that matches row exactly, or the one that immediately precedes it. |
Result |
HRegionServer.getClosestRowBefore(byte[] regionName,
byte[] row,
byte[] family)
|
Result |
HRegionServer.increment(byte[] regionName,
Increment increment)
|
Result |
HRegion.increment(Increment increment,
Integer lockid,
boolean writeToWAL)
Perform one or more increment operations on a row. |
Result |
HRegionServer.next(long scannerId)
|
Result[] |
HRegionServer.next(long scannerId,
int nbRows)
|
Uses of Result in org.apache.hadoop.hbase.rest.client |
---|
Methods in org.apache.hadoop.hbase.rest.client that return Result | |
---|---|
protected Result[] |
RemoteHTable.buildResultFromModel(CellSetModel model)
|
Result |
RemoteHTable.get(Get get)
|
Result[] |
RemoteHTable.get(List<Get> gets)
|
Result |
RemoteHTable.getRowOrBefore(byte[] row,
byte[] family)
|
Result |
RemoteHTable.increment(Increment increment)
|
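Example usage (a hedged fragment: the REST gateway host "resthost", port 8080, table "mytable", and the column coordinates are deployment assumptions, not part of the API):

    // Talk to an HBase REST (Stargate) gateway instead of the region servers.
    Cluster cluster = new Cluster();
    cluster.add("resthost", 8080);
    RemoteHTable table = new RemoteHTable(new Client(cluster), "mytable");

    Result result = table.get(new Get(Bytes.toBytes("row1")));
    byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
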
Uses of Result in org.apache.hadoop.hbase.thrift |
---|
Methods in org.apache.hadoop.hbase.thrift with parameters of type Result | |
---|---|
static List<TRowResult> |
ThriftUtilities.rowResultFromHBase(Result in)
|
static List<TRowResult> |
ThriftUtilities.rowResultFromHBase(Result[] in)
This utility method creates a list of Thrift TRowResult "structs" based on HBase Result objects. |
|
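Example usage (a hedged fragment: the Result value result is assumed to come from a prior client-side get or scan):

    // Reshape a client Result into the Thrift-generated TRowResult structs.
    List<TRowResult> rows = ThriftUtilities.rowResultFromHBase(result);
    System.out.println(rows.size() + " row(s) converted for the Thrift gateway");
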